Craig Sullivan

UX Research and A/B Testing Need Each Other

I'm a UX guy by background and heartily endorse a range of qual and quant methods to all my clients, and use them in my own work.  Unlike some of my fellow 'growth' or 'CRO' practitioners, I'm a big fan of rapid, iterative testing of products to clear them of defects, barriers, worries and comprehension problems.

There's a nice throwaway statement though that UX people like me can make, along the lines of "You only need a few people, say 5-8, to find a huge number of product defects".  And this is true when usability testing - you do find a huge commonality across test subjects that helps you fix things.

However, I'll cite a good example here where this breaks down.  Sometimes you need data at scale.  If you test eight people with a postcode lookup form, they might all pass perfectly.  However, what you are testing is users’ ability to fill out and execute the postcode search.  The test can't possibly be good enough to check that the street addresses returned are of a high quality.  You simply can't extrapolate a few lookups to the entire population as the data is not representative of what you'll see in 'the wild'.  

The method, sample size and 'confidence' you have in the insights depends on the question you are trying to answer and the methods you use.  If you want to know if people can use the postcode lookup, you not only have to get the design right (copy, UI, error messages, form layout, labels etc.) - you also have to inspect the data running live and continually on the site.
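This scale problem can be put on a quick back-of-envelope footing.  A minimal sketch in Python - the 2% defect rate is an invented figure, purely for illustration:

```python
# Assumption: a small fraction of the address records behind the lookup are
# bad.  A usability test's handful of lookups will usually miss this entirely.
defect_rate = 0.02   # invented: 1 in 50 returned addresses is malformed
lookups = 8          # one lookup per usability-test participant

p_all_pass = (1 - defect_rate) ** lookups
print(f"Chance all {lookups} test lookups look fine: {p_all_pass:.0%}")
# → about 85% - the test 'passes' even though the live data problem is real
```

So roughly five times out of six, eight participants would sail through the lookup and the data quality issue would only surface in the wild.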

With a skilled moderator, you can both observe and explore the defects or barriers in a product, in order to understand how to fix or correct them.  You don't need large numbers of people to find commonality in what they stumble over.  

In the postcode example, let's say there are two problems - firstly, validating the UI works for as large a reach as possible and secondly, validating it works for the data and variation that real live operation will throw at it.  

Different kinds of questions require different methods and sample sizes.  I may be utilising tests at scale with millions of visitors, to work out the best approach to sell, describe or present a product or service.  For this, I will be looking for rigorous method, decent sample sizes, full business cycles and separation of the distribution curves.  
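One concrete way to read 'separation of the distribution curves' is a two-proportion z-test - not necessarily the exact method used here, and the conversion figures below are invented for illustration:

```python
from math import sqrt

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: how many standard errors apart are the rates?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# A 3.0% vs 3.3% conversion difference needs real traffic volume to separate:
print(round(z_score(300, 10_000, 330, 10_000), 2))       # ~1.21: inside the noise
print(round(z_score(3_000, 100_000, 3_300, 100_000), 2)) # ~3.84: clearly separated
```

At 10,000 visitors per arm the difference is indistinguishable from noise; at 100,000 per arm the same relative lift clears the usual 1.96 threshold comfortably.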

However, in order to get ideas for those tests, I will also be running usability tests to drive ideas, insight and defects to fix.  
For user testing, I will often start seeing patterns with as few as 3 people - but I'm biased (as we all are) and need more than one data point or more people to be convinced that something is rock-solid truth.  I also don't make the mistake of assuming that I know how to shift behaviour, even when I think I have decent expertise in spotting problems.  The two are not the same!

My personal feeling is you should be running a range of qual and quant methods to drive the core of the work to continually improve a product.  Where the problem comes in is when it comes to designing and changing the product after feedback or insight.

The process, people and method used to then 'fix' or 'change' the product is simply a hypothesis.  It is an informed guess.  Most people changing their websites are guessing and when we do research, we can make more informed guesses.

However, what if the approach is wrong or fails at scale, when exposed to the live environment or variety of inputs that people give?

The promo code system might work in the test but the live version fails with mixed case entries from users.  The postcode lookup might reject yours if it doesn't have a space, yet you never saw this in the usability test.
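Both failures are input-normalisation problems that live traffic exercises but a scripted usability test rarely does.  A hedged sketch - the function names and the UK-style postcode handling are my own illustration, not any particular site's code:

```python
def normalise_promo(code):
    # live users type 'summer10', 'SUMMER10 ', ' Summer10' - compare case-folded
    return code.strip().upper()

def normalise_postcode(pc):
    # accept 'SW1A1AA' and 'sw1a 1aa' alike: strip spaces, uppercase, then
    # reinsert the single space before the final three characters
    compact = pc.replace(' ', '').upper()
    return compact[:-3] + ' ' + compact[-3:]

print(normalise_promo(' summer10 '))   # → SUMMER10
print(normalise_postcode('sw1a1aa'))   # → SW1A 1AA
```

The point isn't these two lines of string handling - it's that neither defect shows up until real, messy input arrives at scale.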

Even our best product guesses, underpinned by years of seeing usability tests, melt when exposed to large scale user behaviour.  What we mustn't do is assume that our lens, our interpretation of the next iteration of a product, is anything more than a hypothesis, which remains awaiting an answer.  To find the answer requires multiple techniques - usability research, analytics, monitoring (e.g. session replay, realtime reporting) and most importantly, the ability to run tests to gain confidence.  

And there is the rub - I see too little UX work done where there is ‘measurement’ of the impact on the customer, their feelings about the product or indeed, the data to confirm the desired behavioural changes.  The redesign, rework or overhaul of the product fails to change the desired metrics because either the faults were not understood or the changes were the wrong guesses and made something worse.  

Usability testing may give you great insights but it doesn't tell you what to do or how to measure any hypothesis or test you implement.  In some cases, the fix may be blindingly obvious but where it is not, you are into the realms of 'guessing' what might work.

The team can only have true confidence if there is a feedback loop between their changes, the resulting observable effect (at scale, as well as small tests) and what they learned from trying things.  It's not the testing that inspires confidence - it's when the team is learning, applying those insights and then reaping positive feedback from their work.  

Nothing is more fun to do or better to see - and it's a people and process thing, not a testing thing.

WTF do Wheeled suitcases have to do with Design?

“I hate those effing wheelie trolleys – they’re the spawn of the devil,” a friend recently spat.  I then got a rundown of how much of a social menace these little lucifers of luggage presented for us regular travellers.  And they’re not alone in this rich seam of haters gonna hate – but there’s a serious story behind this that came to me today.  

I got stuck in a crazy duel with another wheelie suitcase for the racing line into Piccadilly Station in Manchester.  At full tilt, there was only going to be one outcome – yes, unbelievable - two little wheelie suitcases humping each other in the station entrance.  I wasn’t sure whether I was angry or laughing my ass off – both I guess.

And then I thought of the two sides here to the argument.  Those who hate them, those who love them (and those who are on both sides).  So if you regularly use one, I’d like you to try and think about why people get angry with their ‘drivers’.  And if you regularly curse the day they were invented on this planet, I’d like you to consider the flip side too.

Let’s cover the wheelie hate stuff first.  More fun this bit of course (I can’t help myself):

(1) Some of you forget that you’re dragging this long thing behind you, like a small crazy dog on wheels.  It’s like a car trailer – you can’t ignore it being attached to your body as it will smack into things as you corner.  Oblivious and about as spatially aware as a house brick, the wheelie trails carnage: clipped heels, cursing and sometimes even little jumps over the payload happen regularly.

(2) If you make sudden manoeuvres, you’re forcing all the pedestrian traffic around you to adapt – like a school of grumpy fish, the shoal adapts but not without subvocally bitching about it.  You see none of this as you stride blissfully ahead.

(3) If you cross another traffic stream, it’s like dragging a dead killer whale across a busy street on a chain – you part everything but with a huge freaking interruption to the traffic flow.

(4) If the case is low profile, people may miss it in busy foot traffic and trip over it even more.  

(5) Most people buy the big versions of these cases – so they eat much more space in smaller planes – making space harder to find in overheads.

(6) Havoc is caused on escalators due to the mode change between wheeling/pulling to standing.  Expect pirouettes and more.

These are the main things I have observed.  The mosh pit of terror has to be the London Underground ticket hall at London Bridge Station – the designers were dumb enough to cause a terrible problem.  The different entrances and exits lead to at least four traffic streams bisecting each other, at different speeds and points in the hall.  As there’s virtually no overt or even subtly designed flow – it’s freakin open season in there.

Suitcases, strollers, packages, work bags, umbrellas, people, boxes, children are whizzed in a blender by the lack of design and flow control.  People are bumping into each other all the time and constantly forming new and different traffic streams that jostle for space.  Just watch it sometime – fascinatingly dumb.  And into this mix comes tada! – the wheelie suitcase.  

As hilarious as it is being an observer, I’ve been on the end of the terror too.  It makes you angry - it’s like someone ‘driving’ irresponsibly.  But one question - why then do THEY have a look on their face like it’s your fault?  Ever wondered why?

So let’s cover the other side of the story.  I wonder how many of you own a wheelie suitcase as well as dislike people who use them (or drive them badly).  How many of you see both sides but never reconcile them?

At a conference, I asked an audience of senior marketing execs a question.  I got them to raise their hand if they really hated it when they called a phone number on a website and the on-hold message said “Please visit our website at...”.  

Most of them thought about how annoying that is – after all you’ve just come from the effing website haven’t you?  Why would you be calling if the website didn’t have what you needed eh?

I then asked the audience to drop their hands ONLY if they knew 100% that their web phone number did NOT have this type of cruddy message.  Very few hands dropped – it’s a case of hate it for myself but sod you, millions of customers.  Meh.

So the flipside of the story is this.

(1) Wheelie suitcases are really hard to manoeuvre.  A lot of the designs have really long handles, and if you’re unfortunate enough to be shorter in stature, the case sits lower to the ground - making it harder to see - and extends further back behind you.

(2) If you’ve got a heavy case, they’re like dragging a rock around.  Really tough to move unless you apply a lot of torque with your poor tired hands.

(3) It’s pretty hard to remember the bit sticking out behind you.  And since the brain is focused on processing the front view, the carnage behind you (or being able to avoid causing it) is kinda hard to manage.

(4) Your hand feels it’s pulling something but doesn’t always mentally extend that into the full length of the wheelie.  Depending on how you hold it, these things go waaay back.  It’s actually hard to know where it is spatially by recalling your movement history.  The problem is the suitcase is never following you – it’s on a curve based on your movements and a completely different path.

(5) And the most compelling reason of all?  Compared to how luggage used to be (remember lugging a suitcase around by hand?) – these are a joy to use, especially for those without the hand strength or stamina.  People love wheelie suitcases, and owners who remember the old luggage designs swear this is the best design invention for luggage, ever.

So what does this examination tell us?  Well, that it’s a bummer to drive one of these, never mind have a gymnastics lesson with one as a pedestrian.  And that both sides are probably right – maybe we should rethink this design to save the misery it causes both.  

The downsides seem less pronounced for the owners out on the road – compared to the hapless victims – but it isn’t a one-sided story.  There are similar things in our behaviour in cars – our expectations of what we want are different from what we wish (charitably or not) would happen with other drivers.  Our standards for their behaviour are different from how we behave ourselves.  And we all have this bias when we design websites too – an ability to take our own thinking or viewpoint or personal experience and use it in place of others. 

Designing products takes a type of empathy that involves you taking the other side, even fighting in several corners, inhabiting alternate realities and trying to think like other people.  

It isn’t copying and you can’t just mind-read people – it involves taking contradictory viewpoints, challenging your own opinions and ways of doing things.  It also reminds me a lot of software destruction testing – finding every possible way to break a product, destroy it, make it fail.  Except this is a different killing field – that of cherished notions.

I’m not recommending this as some new technique or in place of proper user research (I’m a big fan of rapid, iterative UCD, by the way).  I’m just saying that taking sides and trying to understand all viewpoints can help at every stage in the design process.  

With well loved and instrumented analytics data, user research and iterative design – it’s easier to do without ego, opinion and cherished notions.  The process you follow frees you to shrug off the stuff you think and simply try to imagine.

And so next time you design a new bit of website, drive your car or carve up London Bridge with your wheelie case – flip your side of the argument for one of the others.

*Truth, Lies and Damned Analytics*

I've now spent a week looking at four businesses - a B2B lead gen company, a travel operator, an e-commerce site and a B2C furniture supplier.    

As a hands on kinda guy, I get to see a lot of analytics setups and it takes you out of your comfort zone every time.  Why? 

 Because most of them are FUBARed* in a way that makes each one uniquely untruthful about that business and the way it REALLY works and flows online.  It's like being shot from a large cannon into a thorned hedge every week - I never quite get over the variety, purity and depth of pain that I experience - the same but slightly different every time.

And so I have been exposed to these setups and forced to dig and model and construct the real flows and numbers.  To go back to pretty much the raw data - making manual funnels, performing my own segmentation and constructing flow and leak models.  Spending lots of time looking at the page flows and trying to figure out how it all works.

But isn't that what the analytics package is for?  ARF!  I have not found one setup this year (and I count that at nearly 20 now) that isn't broken in some horrendous way.  And by examining the flows, page constructions and technical workings - you can eventually deduce the truth and make a proper model.

Some examples this week:

• A key step count in a funnel inflated by a second flow joining.

• New buyers and existing customers logging in sent to the identical page, mixing the two flows.

• A microsite where there were more people using it to log in than engaging with the content.

• A charity site where recruitment content so dominated traffic that it obscured the truth.

• A set of pages fragmented across many URL constructions, making them hard to clump together in analytics, grossly undercounting their contribution.
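The last of those - URL fragmentation - can often be repaired before reporting by clumping variants into one logical page group.  A minimal sketch; the path patterns are invented for illustration:

```python
import re

def page_group(path):
    """Map the many URL spellings of one logical page to a single group name."""
    path = path.lower().split('?')[0].rstrip('/')   # drop query strings, trailing /
    if re.match(r'^/products?/\d+', path):          # /product/123, /products/456/...
        return 'product-detail'
    if path in ('', '/index', '/home', '/default.aspx'):
        return 'homepage'
    return path

hits = ['/Product/123', '/products/456/?utm_source=x', '/product/123/']
print({page_group(h) for h in hits})   # → {'product-detail'}
```

Three differently-spelled URLs collapse into one group, so the page's contribution is counted once instead of being scattered across the reports.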

This is the sometimes miserable and sometimes fun part of my work - decoding the truth from all the inputs I'm getting.  Let me give an example of how the inputs can skew your thinking:

• I've watched quite a few people doing expert reviews of sites and sometimes I've seen the analytics data before they did this.  I can see that they are heavily biased by the way the site looks on the page.  If you were to ask an expert reviewer to describe the flow, key areas and content on a site - they'd give you a visually led answer, based entirely around their exploration, opinion, biases or just the interesting stuff they checked out.  On most sites, the reviewer will start with the homepage, even if 90% of the traffic lands elsewhere.  It's not the truth.

• And if you just watch the usability test, depending on how the scenario is scripted, you could arrive at a second and different view of where the key flows and areas of a site were.

• And then there is the business layer.  To some it's invisible but I'm always thinking about the business outcomes of human behaviour on sites - it's the most important one of the lot.  So what happened then?  Did they leave the funnel and call us or what?  Did they come to our store?  How much did they spend?  Working the money or business outcomes you want is another dimension to this data - what do you WANT them to do and how much MORE do you want them to do it.  

• And then the data - if you accept the reports and data being pumped at you, therein lies the biggest flaw.  Not checking how it works.  We get seduced by what we think the analytics data are telling us - even when the collection method is flawed or broken.

And I've found all of these ways to be useless on their own for optimising sites - it's about how you integrate them holistically.  About the micro AND the macro.  The tiny interactions at field level in a form all the way to the lifetime value or improving your Net Promoter Score.

But it's all built on nothing if you get the wrong data into the mix.  And that's where we come back to the analytics and the truth.  

So unfortunately, most of the time I find myself having to construct models manually.  Sometimes because I can't segment Google Analytics funnels but mainly because the analytics configuration is recording site traffic - but not in a way that lets you readily interpret it.

The fundamental problem is that there is (a) the way the developers, CMS and platform have worked to record pages and stuff that happens and (b) the way the analytics setup marries with this.  The two groups of people with a stake here - the developers and the entire business reporting layer of the company - don't have fireside chats together about this stuff often enough.  Most marketing execs would be surprised to know how much developers care about instrumentation, if only you gave them time to fix it instead of working on something less useful.
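A manual funnel of the kind described can be built straight from raw pageview events when the packaged funnel reports can't be trusted.  A minimal sketch - the visitor IDs, paths and step order are invented for illustration:

```python
from collections import Counter

STEPS = ['/basket', '/checkout', '/payment', '/confirmation']

def manual_funnel(events, steps):
    """events: (visitor_id, path) pairs in time order.  Returns visitors per step."""
    reached = {}  # visitor -> highest step index reached so far
    for visitor, path in events:
        if path in steps:
            idx = steps.index(path)
            # only credit a step if the visitor already reached the previous one
            if idx == 0 or reached.get(visitor, -1) >= idx - 1:
                reached[visitor] = max(reached.get(visitor, -1), idx)
    counts = Counter(reached.values())
    # anyone who reached step i also passed through every earlier step
    return [sum(c for step, c in counts.items() if step >= i)
            for i in range(len(steps))]

events = [('a', '/basket'), ('a', '/checkout'),
          ('b', '/basket'),
          ('c', '/basket'), ('c', '/checkout'), ('c', '/payment'), ('c', '/confirmation')]
print(manual_funnel(events, STEPS))   # → [3, 2, 1, 1]
```

From those per-step counts, the leak between any two steps falls straight out - which is exactly the flow-and-leak model the broken packaged reports fail to show.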

When there is a mismatch between what the data should be recording and what the site construction allows, it has to be fixed.  All the reports, filters, goal outcomes, funnels and dashboards you can use will never ever fix the fact that the underlying data is wrong.  

And when your data isn't matching the business flows or where customers flow - the behaviours and outcomes - then you begin to lose faith in the analytics.  I've noticed that people just start trusting something less when it diverges hugely from other data points.  They should fix it or someone else should - but this isn't always what happens.  If your car speedo looked busted, would you just sort of ignore it?

So how do you fix this?  By auditing and fixing the instrumentation.  A couple of days is usually enough to find 90% of the dumb mismatch between site and analytics setup.  And yes, sometimes developers will need to fix stuff but generally, most of what you find is easy and quick to fix.  Investing time here will spread layers of greater truth and clarity through all your reporting and insight generation.  

My plea to everyone is to spend money on instrumenting, fixing and improving their analytics continually.  Towards measuring more and better stuff, with greater fidelity and also usability for everyone in the reporting chain.  

It all starts with the instrumentation budget though.  You probably spend less than 1% of your time or money on this area and you'll be delighted if you follow my advice and spend LOTS more.

It will also make me less miserable when for the 900th time, I will conclude that I have to manually construct a layered model for a site, simply because I can't see the real flow.  And when that's happening, it means everyone in your company isn't seeing the real flow either.  I don't want to teach everyone how to do manual funnels or layered models - it's very time consuming - I'd just like to help people fix this distortion of reality in your basic reporting!

I'm still writing up the methodology I've been using to review sites - using a whole range of GA reports, tools and review methods.  Every site, business and customer audience - they're all wonderfully and uniquely different.  However, there are common aspects to how people choose to interact - to enter a site, progress to different conceptual levels or reach one or more goals.  I'd like to explain this conceptual model more - as it offers a nice simple way of seeing where the pinch points are - where the money is.

Armed with the flow model, you can then construct a split testing or improvement plan that maximises conversion holistically - across the entire model.  Not starting on the flashy page you think it's on - but where the money is actually being won or lost.  Knowing where the battlefield actually bloody is can be critical to winning the war, please note!

So a perfect example.  

5 weeks had been spent arguing over the new landing page.  It had been kicked around the brand team, marketing, UX and engineering like some unwanted guest.  Immense amounts of time, large meetings and endless redesigns sucked at productivity like a massive black hole.  It still wasn't ready so oh boy - it sure better be good when it arrives eh?

So here's a hypothesis - what if this nice high traffic landing page doubled the conversion rate?  What if these guys really did get, amazingly, a HUGE 100% lift?

Well, the answer is less than 20 checkouts a month.  Yes, 20.  The plain fact was that the problem was with the targeting and traffic - and the elasticity you get playing with the landing page is NOT ENOUGH to shift the business layer.  

All that effort pushing the wrong lever.  As Duke Nukem says about truth: "Come get some".

That now concludes my series of "101 Whines about broken analytics and other miscellany".

If you need your site going over, get in touch.  I know a very small bunch of people who are really good at this and will give you the exact changes to make - the fixes to apply, that will get you started on the road to a better truth.



Elite Camp Estonia, Analytics Hell and Redemption

What a week.  

This time last week, I had arrived in Tallinn, Estonia, for a conference called EliitLaager - which sounds brilliant in English.  Like some sort of brewery conference in the mountains with suits, powerpoints and a painful hangover - but it wasn't like that at all.  Apart from the hangovers <grins ferociously>.

It's been going for a few years now and the attendees are an excellent mix of marketers, techies, analysts, growth hackers, investors, startup folks and curious people.  It's nice to see a wide age range and mixture of backgrounds.  And the companies involved serve markets that I've been involved with, but also many countries that I've never worked with.  Very interesting to hear about new projects, brands, startups and their different challenges working across western and eastern European markets through to Russia, China and the USA.  

It's also interesting to see many key decision makers avoid the fate of similar western startup failures - by having customers and data at the bloody heart of everything.  I'm never a fan of copying models - you're always one step behind - but taking, understanding and adapting things to make them better?  This is a nice way of thinking that I heard a lot when I was in Estonia.  

I met some fearsomely smart people - if you're building a new tech business for local markets or for the world - then this is a pretty good place to start. 

I'm always interested in economic differences between the places that I visit and that starts with the cost of things.  I like looking in estate agents, recruitment agencies, corner shops and figuring out the cost of everything - and where odd differences stick out. And I love trying new food -  a pathetically great excuse to eat out.

And my rough conclusion is that:
(a) Salary versus life quality is very high in Estonia, at least for tech and web stuff,
(b) Talent is very good and a whole raft of young people (early twenties) is coming through, and
(c) They're completely clued up about working across timezones, countries and languages.  C'mon - where do you think Skype came from eh?

See this for info on the taxation structure in Estonia, which is broad, low and involves different handling for capital gains.  Seems like very reasonable employee flat rate taxes and low costs for unemployment and employee pension contributions.  What a riveting read this was:

And in many places in Eastern Europe, conferences, communities and groups of tech people are finding that there's a local market for their skills.  More importantly, there's an international market too.

For the xenophobic amongst you, this doesn't mean hordes of imaginary Estonian people in your head coming to visit you in England.  You've been reading the Daily Mail too much.  

They're actually more than happy to keep a high quality of life and lower costs by staying where they are!  There is also a sense of building something local - investing in their own future.  I actually quite fancy working there for a while myself, it's so darned nice.

But here is an interesting point - costs begin to factor in here.  An example of an economic shift might be if South Africa manages to get improved telecoms infrastructure sorted.  If that happened, it might well be the next call centre growth area, moving business from India.  With a ready population of English speakers and a European timezone, it should be a natural for locating stuff there.

And for Estonia, I think it's an ace place to meet a growing band of people creating new services and companies.  It's also going to be a bunch of people who can afford to work for less and live better than you too. 

I've been working with virtual teams around the globe for a few years and  I'd certainly try Estonia if I wanted to hire people to help on a project.   

Be aware of macroeconomic trends too - the tools and software we're using these days means it doesn't matter where the team are.  And if you can earn good day rates for remote work and stay and build your own thing, in your own country - isn't that the best of both worlds?  Yes - according to a few people I talked to.  

Memory of Soviet occupation and its shadow are never far away.  There is a hotel in Tallinn that hid a secret - an entire floor dedicated to the KGB, now restyled as a museum.  I missed getting on the tour and am gutted - so this is on my list for the next trip.  

Ironic comments about the past were occasionally muttered sotto voce by those old enough to remember that time.  Estonia lost independence in 1940 and finally regained it in 1991, after the breakup of the USSR, joining the EU in 2004.  I spent some time reading about over half a century of occupation (by Germany and Russia) and the destruction and horror it caused - pretty sobering stuff.  

The one thing that stuck in my mind was the organised demolition of graveyards, gravestones and also monuments from the first world war -  many of them being blown up and sometimes re-used by the Soviets. A chilling  quote from the orders given out:

Võrumaa Committee, Tamm, No. 101/s to 1st secretary Nikolai Karotamm.  06.04.1945. ERAF Archives 

"In order to carry out demolition works, 15 Party activists and 275 persons from the Destruction Battalion must be mobilised. 15 workers are needed for the execution of each demolition and 10 people are needed for protection....  In order to carry out demolition works, 225 kg of TNT, 150 metres of rope/fuse and 100 primers are needed, since there is no demolition material on the spot. 11 lorries, which are available but which lack petrol, are needed for carrying the ruins away."

In the UK, our geography as an island may have spared us from many of the deprivations that befell our continental European cousins during the 20th century wars.  As much as our parents and grandparents sacrificed for our freedoms today, we were at least spared occupation and loss of liberty.

But back to Elite Camp:

There's a good vibe in Estonia and the Camp got excellent feedback from the participants.  Having a less formal setup where everyone stayed in small cabins nearby was pretty different and much nicer than a hotel - right near the beach and miles away from town.

In terms of the sessions, the formats were pretty interactive and the venue would shame most places in the UK for having great interior design, light and space.  Props to Peep and the team for having a different and better kind of event, in a high quality venue.

Most people are excited about the future, particularly for tech, analytics, startups, marketing and small agencies - it's a zeal for the freedom and the sheer possibilities of tomorrow that I admire so much.  

Tallinn is a wonderful city to stay in if you can visit - the old town was fascinating and you can experience a dizzying range of architecture in a short walk.  The food was superb.  Away from the usual tourist haunts you can sample fine fresh ingredients, top of the line cooking and excellent service.  It's much better value than London and the quality of fish and variety of cuisines adds to the mix.

So - analytics hell:

I don't know if you've experienced this but I began to question the very data I was seeing for the last four months.  The only explanations were that our market had been impacted or that the analytics config or collection had been completely borked in the past.  

I'd looked at it with a few people and we had various theories - none of which solidly checked out.  It was all about a huge flip in traffic patterns which happened around the time of launching a mobile site.

It was a head scratcher and I'll write more about it later.  I was determined to work out what it was and although I can't prove it, I have very strong evidence for the reason after much help.  

I was actually feeling pretty good about the week at that point. Then the site broke, we found another bug in our call tracking analytics and another two analytics holes surfaced.

I was ready to implode.  Disappear into a puff of data denial.  The numbers not making sense for 3 weeks of digging finally came to a head.  

I took a break and looked at a charity site instead for a day.  Sometimes this is the best way to put something in perspective but I'm lucky to have this variety.

I recently read an article about someone who deliberately designed their work projects to be more exciting!  How?  Well they always carved a few niches in the product design that were not part of their core work.  So you might be redesigning the overall funnel but on the side, you've picked two small or gnarly little problems to keep on ice.

When things get tough, turn to some puzzles you can fix and volunteer for or include these in your work.  At least you get to have the pleasure of fixing something rather than worrying about HOW to fix something else.  

When you return to the core task later, your heart and spirits may be lifted a little!  Having 10-15% of your time AWAY from your main project can be a psychologically useful tool to keep you fresh on your main gig.

And redemption:

Asking other people is usually a good idea and several did look at my analytics hell, some deeply and completely, so that the answers surfaced.  Not only that, they looked at the data differently, tried other tools and segmented it across more dimensions.  Evidence was crystallised.  

People sometimes think based on my slide decks that I can solve all these optimisation projects single handedly.  What they don't realise is I get stuck or need a fresh viewpoint just like anyone.  Most of what I've learned has come from other people and most of what I continue to learn comes from looking at smarter people than me figure stuff out.

Having the humility to understand how limited one's knowledge is - in the face of the sheer variety and difference of millions of customers - may come to you from usability testing, split testing or just trying harder to inhabit your customers' shoes.  

Without some humility, your overconfidence means you think you know more than you actually do.  Most of us, if asked to design a web page, would have a high degree of confidence about the improvements we're making.  Testing shows me that this isn't reliable - even for the best optimisers or split testing geniuses out there.

One of my biggest insights from the last 3 years is that there is no expert here.  There isn't a CRO genius out there who can magically fix or know the problems on your site - it all depends on your customers, team, tools and mentality of how you approach the task.  The secret sauce is not in the parts but in the connection between the parts.  Like an Orchestra.

Think of the world's best Orchestra as a CRO programme you've seen at another company.  Could you just buy all the instruments - every Stradivarius in town - and simply copy everything they own?  

An Orchestra simply wouldn't work without the right quality of musicians to use the tools (instruments), would it?  So you need the musicians, the instruments and the music, and you're done?  No - you still need the orchestration.  

The conductor.  Their job isn't to be the best musician or tool user - their role is to support and weave the constituent parts into a cohesive musical whole.

So when you're building your optimisation plan, think about the outcome - the music you're playing.  Look at the parts and also at the whole machine.  And be humble - acknowledge your lack of knowledge and then act to fill this gap and improve the orchestration as well as every group of instruments.

And if you're acting at your limits or know you're getting stuck?  Phone a friend.  Always ask someone else to look over your shoulder, share a problem, pore over something together. You know who you are.  Thank you.

Hacking the Company - Lean UX in Action

I've had quite a few experiences recently worth sharing - and all of these are centred around the concept of using the website as a way to hack your business model, pivot in a new direction or otherwise 'hypothesise' about how things could be.

When I worked at Belron (who own Carglass, Autoglass and Safelite around the world) - we often did usability testing with hypotheses.

In France, for example, we took our core service proposition on our web channels and thought hard about the usability test we planned.  Since we were going to be changing the entire process for handling customer bookings, how about we model that - instead of the status quo?

So - before the business had even begun putting this in place, we mocked the website process up to reflect the future reality.  We then got the feedback.  The important thing is that customers were feeding back on a service we hadn't yet invested in building.

This allowed us to adjust the new process to more closely match customer emotional needs, before we'd actually kicked the project off.  This is where the web scores - it's like having a retail store where you employ staff who just test stuff out.  The good stuff gets exported to the rest of your retail empire.

You can create interfaces, products, pages, promotions - that don't exist, that cost little to produce and yet give you warning of the f***ups you're about to make.  Hypothesise the future, with customer feedback NOW.

Another good example is a couple of freemium SaaS vendors I've worked with.  Both were looking closely at how much stuff they gave away for free.

I explained the situation like this - it's a multi-step conveyor belt: inbound marketing to landing, to engagement, to conversion and then the next step.  That means Lifetime Value and other metrics, but especially the conversion of FREE members to PAID members, and movement up the hierarchy of product or feature mix that drives revenue.  A conveyor belt of conversion steps, outcomes and insights.
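To make the conveyor belt concrete, here's a minimal sketch in Python.  The step names and counts are invented purely for illustration - they're not figures from either vendor:

```python
# A toy funnel: each step is (name, count of users reaching it).
# All numbers here are invented assumptions, just to show the idea.
funnel = [
    ("visits", 100_000),
    ("engaged on landing page", 42_000),
    ("signed up (free)", 6_300),
    ("converted to paid", 510),
]

# Step-to-step conversion rates - the "conveyor belt" view.
for (name, count), (next_name, next_count) in zip(funnel, funnel[1:]):
    print(f"{name} -> {next_name}: {next_count / count:.1%}")

# End-to-end rate from first visit to paying customer.
overall = funnel[-1][1] / funnel[0][1]
print(f"visit -> paid overall: {overall:.2%}")
```

Looking at each ratio separately shows which section of the belt needs work; the free-to-paid step is the one the feature mix directly moves.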

Aside from asking them to work the whole chain, I helped design two feature comparison tables.

You know - the ones where you see the ticks and crosses against the feature list.  A visual reminder of what you get for nothing, and what the potential implied value of paid means to you.

A very interesting page to test of course.  But how about we totally mess with their entire business model?  WHAT?  How are we going to do that - without loads of work?

Well, let's assume you have 7 core features, 5 of which are available in the free product.  What happens if you design an A/B split test where you vary the mixture of free and paid-for features?  Mix up what people see on the feature table and gain insight from the results?

What I see from tests like these are interesting insights - the first thing they tell you is that your mix may not be optimal.  Either you may be giving away too much for free, or you are under-converting because you don't make free compelling enough.

It's possible from this kind of testing to arrive at a smaller 'free' feature set that converts MORE people to both paid and free, whilst holding back more stuff to convert later.

And this is the best bit - if you've found your sweet spot, you can hold back more stuff to use as an upsell to paid or premium features.
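One way to read such a test is sketched below.  Everything here is an assumption for illustration - the variant definitions, the visitor and conversion counts, and the choice of a stdlib-only two-proportion z-test are mine, not anything from the original experiments:

```python
import math

# Invented example counts - substitute your own experiment data.
# Variant A: 5 of 7 features free (status quo).
# Variant B: 3 of 7 features free (leaner free tier, stronger upsell).
a_visitors, a_paid = 10_000, 820
b_visitors, b_paid = 10_000, 905

p_a = a_paid / a_visitors
p_b = b_paid / b_visitors

# Pooled two-proportion z-test: is the difference bigger than noise?
p_pool = (a_paid + b_paid) / (a_visitors + b_visitors)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / a_visitors + 1 / b_visitors))
z = (p_b - p_a) / se

# Two-sided p-value via the normal CDF (math.erf, no SciPy needed).
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p_value:.3f}")
```

With these toy numbers the leaner free tier wins at the usual p < 0.05 threshold - exactly the 'giving away too much' insight.  With real traffic the answer could just as easily go the other way, which is the point of running the test.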

If you test this kind of stuff, it might be confusing for customers as the experiment won't match the actual service (yet).  One way round this is to get to the end of the conversion process and say "Congratulations - we've given you the full deal.  Just for today, we're giving away the full feature set to everyone".

So - do be careful about Business Hypothesis Testing - especially if you're trying it on a live website.  However, remember that useful point - you can model the future and dekink the product design, before it's even built, deployed or arrived.

There is nothing new about this stuff - in the UX world we prototype all the time.  However, this isn't about prototyping the interface - it's about getting feedback on your business model, service design or experience - when imagined differently.  And that can be a game changer.

I love these kinds of tests because they deliver huge leaps in the product, but in a preventative way - rather than a reactive, looking-in-the-wing-mirror kind of way.  Welcome to the new Lean.


