We developed ‘mass user testing’ in response to the real-world needs of commercial clients and to address the deficiencies inherent in the most widely used traditional usability testing methods (we have actually been doing this for about four years but formalised it last year).
The key to mass user testing is getting large numbers of people through testing rapidly and cost-effectively. This is achieved by recruiting people ‘off the street’ with the lure of some cash (or another incentive – we are quite creative in this regard) for about 15 minutes of their time.
Traditional versus mass user testing:
Traditional testing methods have test sessions that last about an hour (sometimes more), and typically 5 or 6 pre-recruited (and therefore expensive) people are tested per day.
Performed well, this approach delivers good qualitative results – for instance, it will identify usability issues in a website process and provide recommendations for improvements – but it fails to deliver valid metrics (unless costs go sky high), and much of each session delivers little value for money (the main value often comes in the first 15 minutes or so, or clients may want to focus only on discrete design changes, such as homepage or landing page modifications).
Mass user testing by comparison delivers:
- testing with 5 or 6 times as many people (30 per day) for equivalent cost – sessions are shorter and participants are recruited ‘off the street’ (and therefore cheaper)
- valid behavioural observations – real-life website visits often last less than 15 minutes
- valid performance metrics – e.g. statistically robust time to complete a task, number of errors made, etc.; only with enough participants can these be robust and built into a performance measurement framework
- valid eyetracking metrics – e.g. statistically robust figures for how many people looked at a feature (e.g. an advert or call to action), how long on average it was looked at, how many times it was looked at, etc.
- valid eyetracking visualisations – e.g. hotspot figures, average visual journeys through a page, etc. – these are used to drive optimisation above and beyond usability improvements (particularly effective when used with other large-sample techniques such as multivariate testing)
- the ideal testing environment for A/B testing of designs – e.g. 30 people see design A and a different 30 see design B – this eliminates order biases while still providing sufficient numbers to generate valid comparisons of results (see the sketch after this list)
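To illustrate why around 30 participants per design makes such comparisons workable, here is a minimal sketch in Python using entirely hypothetical task-completion times. It computes a mean with an approximate 95% confidence interval for each design and a Welch's t-test between the two groups; this is just one common way such metrics might be analysed, not a description of Bunnyfoot's own tooling.

```python
# Minimal sketch with hypothetical data: comparing task-completion times
# for 30 participants who saw design A against 30 who saw design B.
# Standard library only; a real analysis might use scipy instead.
import math
import statistics

# Hypothetical task-completion times in seconds (n = 30 per design).
times_a = [52, 48, 61, 55, 47, 66, 58, 50, 49, 62,
           57, 53, 45, 60, 51, 54, 59, 63, 46, 56,
           50, 55, 52, 48, 64, 58, 53, 49, 61, 57]
times_b = [44, 41, 50, 39, 47, 52, 45, 43, 48, 40,
           46, 42, 49, 38, 51, 44, 47, 43, 45, 50,
           41, 46, 48, 42, 39, 53, 44, 45, 47, 43]

def summarise(times):
    """Mean and an approximate 95% confidence interval (normal approximation)."""
    n = len(times)
    mean = statistics.mean(times)
    sem = statistics.stdev(times) / math.sqrt(n)   # standard error of the mean
    return mean, (mean - 1.96 * sem, mean + 1.96 * sem)

def welch_t(a, b):
    """Welch's t statistic and a normal-approximation two-sided p-value."""
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    se = math.sqrt(var_a / len(a) + var_b / len(b))
    t = (statistics.mean(a) - statistics.mean(b)) / se
    p = math.erfc(abs(t) / math.sqrt(2))           # two-sided, normal approx.
    return t, p

for name, data in (("Design A", times_a), ("Design B", times_b)):
    mean, ci = summarise(data)
    print(f"{name}: mean {mean:.1f}s, 95% CI {ci[0]:.1f}-{ci[1]:.1f}s")

t, p = welch_t(times_a, times_b)
print(f"A vs B: t = {t:.2f}, p = {p:.4f}")
```

With 30 participants per group the normal approximation used here is reasonable; the smaller samples of traditional testing would leave the intervals too wide for any meaningful comparison.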
A specialist test facility with specialised software and process
Bunnyfoot recently opened a new office specifically to deliver mass user testing. Reading, in the Thames Valley, was chosen for its demographics, which are representative of most clients’ needs (e.g. high A, B, C1 and C2 representation without a London bias), and for the high footfall available for recruiting test participants ‘off the street’.
The testing was developed originally to meet the demands of Bunnyfoot’s clients for quantitative results and insights from eyetracking testing of print adverts, packaging, e-mail and direct mail. This worked so well that it was opened out to Bunnyfoot’s usability clients too, where it has proved highly effective at supporting rapid development cycles as part of a user-centred design process. Qualitative and quantitative results are returned within a day, and the continual measurement allows design effectiveness to be tracked throughout the process and/or competing designs to be judged objectively against each other.
An innovative ‘omnibus’ service provision reduces the barriers to testing
To further reduce the barriers to testing, Bunnyfoot run an omnibus testing service: each week a minimum of 120 people are tested on websites and other media (print ads, TV ads, etc.) from a variety of industry sectors.
Clients can pay (either one-off or via a subscription) to have their materials included in the ongoing testing. This proves particularly cost-effective when only a single page (such as a home page or landing page) or small modifications to existing pages need to be tested – tests for these often take only a minute, so it would be highly wasteful to pay for a full-blown longer test.
So is mass user testing the silver bullet for all testing? Well, no …
Mass user testing is really useful, but it is not the silver bullet for all types of customer testing – it is particularly suited to B2C offerings with a demographically broad customer base (strict customer profiles can often be recruited off the street, but doing so takes more manpower, costs more and takes longer – slightly defeating the whole point).
It is an excellent extra tool in the whole spectrum of user-centred design testing activities, to be deployed when most appropriate and often alongside more traditional testing methodologies.
I bet that, without too much thought, you will find a use for it.