- May 23, 2013
Better User Testing
“We don’t have the budget or time for user testing,” is something I’ve heard all too often during planning and estimating meetings. Testing with real users has traditionally been an expensive and time-consuming line item in project plans—usually one of the first to be cut when budgets are tightened. It’s no mystery why: most testing methods have classically been stressful to set up, requiring a tremendous amount of scheduling, coordination, resources, and time.
When I conducted my first usability test, I felt extremely tense. I was given a sliver of hours to painstakingly figure out how to turn a dense wireframe document into a testable, interactive system. Recruiting seemed to take forever, and some recruits inevitably flaked out. Running the tests required cold calling a large number of recruits, many of whom were non-technical and very impatient with the computers and software used to run the tests. At the time, I didn’t think being a designer would ever mean having to face these sorts of obstacles. The thing that haunted me the most, though, was the possibility that the designs my team and I had worked so hard on would fail.
Why Designers Should Run the Tests
My early struggles taught me a valuable lesson: engaging with users in real time while they interact with a system is very different from reading a report about it after the fact. The more testing experience I got, the more personal accounts I gained—and they continue to shape how I approach design problems to this day. Testing not only helped me make better design decisions on a particular project, but it also trained me to make even better design decisions well after a project had ended. Or, as Robert Hoekman Jr. put it:
“Usability testing informs the designer and the design.”
– Robert Hoekman Jr., “The Myth of Usability Testing”
Adding value is within your budget.
At a fraction of the cost of traditional lab testing, remote and informal testing methods are steadily becoming popular options to reduce costs while still getting real designs in front of real users. These tools can be strategically used throughout the definition (discovery), design, and development of a project to gain the most insight, even with limited time and on a small budget.
During Requirements Gathering
Talking to users about what features and functionality resonate most can help inform decisions on how to prioritize requirements. The most obvious method is simply picking up the phone or interviewing users face-to-face. Recruitment of these users can take many forms—sometimes clients will even have a list of engaged users for you to work with. If you aren’t so lucky, Ethnio is a good product for recruiting users via a web-based intercept on a live site. Ethnio enables you to set up a screener, collect contact information, and provide incentives to potential recruits. Having a real conversation with participants is ideal, but an online survey tool like SurveyMonkey is a quick and cost-effective alternative for gathering quantitative data across large sample sizes.
During Content Development
Card sorting and tree testing exercises help to evolve content categories and develop a site’s information architecture. If your project budget doesn’t allow you to recruit users and run these activities in person, you can still find users to participate on the web using tools like OptimalSort and Treejack.
The evolution of the design process at Happy Cog has given rise to the creation of HTML wireframes to more efficiently document interaction design, content strategy, and information architecture across a range of screen sizes. These interactive wireframes not only serve as a design tool but are also instantly testable artifacts that require minimal setup before they are ready to be put in front of real users. Depending on the project, there are a few main ways you could gather data:
1. In-person Testing
Testing in person gives moderators the ability to physically observe participants’ reactions. A lot of information can be gathered from physical reactions, especially on mobile or touch devices, where those reactions would be lost with remote testing. Recently, at Happy Cog, we went to a restaurant and informally tested a client’s site on the iPhone. We recorded these sessions using the UX Recorder app, which captures gestures, audio, and front-facing video. It took all of three hours to complete, we gave each participant a voucher for free guacamole, and we learned a few simple things we could do to improve the responsive nature of the site.
If you’re conducting an in-person test on a laptop, it’s easy to record these sessions for later reference using a tool like Clearleft’s Silverback. It captures screen interactions, webcam video, and audio of a test session. Pro tip: setting up the test machine with screensharing software like Join.me allows other stakeholders to remotely observe the live test sessions in real time.
2. Remote Testing
Remote testing removes the barrier of having to be in the same geographic region as the participants. A tool like Ethnio can work well for recruiting for this type of test because it allows for recruitment across a large geographic area. To remove as many technical barriers as possible, we’ve preferred calling participants on the phone, hosting a web-based screen-sharing session with a tool like Join.me, and giving users mouse and keyboard control.
3. Testing Concepts or Mockups
Occasionally, there may be a need to get design mockups or concepts in front of an external audience. Using similar remote and in-person methods as described above combined with a tool like InVision, designers can easily add in-browser interactivity to static design comps and put them in front of users.
Not all tools or testing methodologies are right for every project. I am constantly adapting my testing toolkit as new tools become available and our design process evolves. Regardless of which type of test you conduct, it is important to set expectations with stakeholders.
A good test plan answers:
- Why are you conducting the test?
- What are you measuring?
- Who is participating, and what are their incentives?
- What methodology is being employed?
- What devices are being tested?
- What tasks are being tested?
- What happens after the test is over?
Similarly, how you engage with test participants during the sessions can greatly affect the outcome of your test. Having a script that dictates how to describe specific tasks to participants can help create uniformity across testing sessions. Permitting yourself room to deviate from the script when participants behave unexpectedly can help surface the reasons behind potentially overlooked usability errors.
Time has taught me the real value of user testing and has changed my perception of what is possible with a few hours and a small budget. Maybe one of these methods will help your team incorporate testing into your next project, even if you didn’t anticipate having the resources to do so. Are there any other tools you use for user testing? Let us know in the comments.