How to test the usability of prototypes like a pro

Usability isn’t something you can just cook up in any one phase of design; it has to be developed and refined throughout the entire process. If you want the best end product, you have to anticipate real user scenarios from the prototyping phase onward. Usability testing should be the last place you start thinking about usability.

Why worry about usability testing so early in the process when prototyping already has a big enough to-do list? Because unless your prototype is usable, all your testing will tell you is that people don’t like terrible products.

[pullquote]unless your prototype is usable, all your testing will tell you is that people don’t like terrible products[/pullquote]

It almost goes without saying: you’re designing the product for real people, so it should be tested on real people. Prototypes are built for experimentation, so it only makes sense to test them with real users.

With that in mind, let’s look at how to test usability before you have a prototype, how to keep usability in view as you build one, and some tips for testing with prototypes…

Usability tests before the prototype

Usability testing doesn’t have to wait for a prototype; in fact, if you have the resources to start sooner, you should. While mostly conceptual, these early tests can pinpoint the best way to structure your prototype’s navigation and information architecture. The most common pre-prototyping tests include:

  • Card Sorting: simple and steadfast, this test reveals how users expect your product’s information architecture to be organized. All the elements of your product are written on cards, and the test-takers are asked to organize them under predefined categories (“closed”) or under categories they come up with themselves (“open”). For details, see Donna Spencer’s Card Sorting: A Definitive Guide.
  • Tree Testing: the “sister test” to card sorting, tree testing evaluates the effectiveness of an existing information architecture. Users are given a basic, stripped-down map of the site or app and asked to click through it to complete certain tasks. The test monitors whether they choose the correct route and, if not, where they got lost. MeasuringU founder Jeff Sauro explains the details.
  • Interviews: sometimes the best way to understand your users is simply to ask. It sounds straightforward, but the nuances and strategies for user interviews are endless. Kate Lawrence, UX Researcher at EBSCO Publishing, shares tips on how to run them specifically for usability testing.

Fixing problems earlier is always better, and these preliminary tests will ensure the conceptual foundation of the prototype is in good shape before a single line is drawn.

The right users and the right tasks

While usability tests are all different, every one of them needs users, and most of them involve tasks. Since these two elements are central to usability testing, we’ll briefly explain how best to handle both.

  1. Recruiting users: after all the work with personas, you should by now have a clear idea of your target users. It also helps to segment users based on behavior rather than obsessing over demographics. The biggest differentiator will likely be whether users have prior experience or knowledge of their domain or industry, not gender, age, or geography.
    Knowing who to recruit is just the first step. The more involved part is finding and recruiting them. Jeff Sauro outlines the 7 best ways to locate the ideal users for your testing.
  2. Writing tasks: tasks determine what the user actually does during the test, and therefore determine which usability factors are being examined. Tingting Zhao, Usability Specialist for Ubuntu, describes some distinctions to keep in mind when designing a task. There are two main decisions:
    a. Direct vs. scenario: a direct task is strictly instructional (e.g. “Search the website for a Tandoori chicken recipe”), while a scenario task comes with context (“You’re hosting a dinner party for some old friends, and you need a Tandoori chicken recipe”). Direct tasks work best if you’re testing technical data, while scenario tasks are better in all other cases.
    b. Closed vs. open-ended: a closed task has clearly defined success criteria, while an open-ended task can be completed in multiple ways. Closed tasks check specific functionality, while open-ended tasks are better for understanding how your users’ minds work. A closed task might be: “Your friend is having a birthday this weekend. Find a fun venue for up to 15 people.” An open-ended task might be: “You heard your coworkers talking about the iWatch. You want to learn how it works.”

General advice for testing prototype usability

[pullquote]Given the “incomplete” nature of prototypes…users will have questions…that a moderator will have to answer[/pullquote]

One of the first questions usability testers ask is whether or not the test should be moderated. While there are plenty of good reasons for unmoderated tests, for prototype tests we recommend moderation. Given the “incomplete” nature of prototypes, chances are that users will have questions about the UI that a moderator will have to answer.

Another common mistake in testing is to stop or alter the test if the user experiences difficulty. Since the goal of usability testing is to find and solve difficulties, this situation could actually make the test a success. If, for example, the user strays off onto paths that haven’t been developed yet in the prototype, you could ask them why they went there and what they would have liked to accomplish. A few follow-up questions about the obstacles may yield more valuable feedback than a user with a “perfect run”.

Different fidelities for testing prototypes

While some believe in testing early with rough prototypes and others advocate testing higher-fidelity prototypes, we believe the best approach is to test at every fidelity possible, and as often as possible. Chris Farnum, Senior Information Architect at Enlighten, explains the pros and cons of each type. As we describe below, lower-fidelity tests are better for testing concepts, while higher-fidelity tests are more suitable for testing advanced interactions.

[pullquote]the best approach is to test at every fidelity possible[/pullquote]

  1. Low fidelity: lo-fi prototype usability tests, including paper prototypes, work well in the early stages of development but become impractical later on. Lo-fi prototypes also encourage more honest criticism, since it’s immediately clear that the design is still a work in progress.
    However, in the later stages, when usability tests check advanced functionality, lo-fi prototypes stop being helpful because you’ve hit the fidelity limit. This is especially true for paper prototypes, since you need a “human computer” to manipulate all the parts, which becomes extremely difficult as you add menus, interactions, pages, and elements.
  2. High fidelity: hi-fi prototype testing gives the user a near-realistic experience of what the final product will be like. Hi-fi prototypes are ideal for testing complex interactions and your solutions for usability issues discovered in earlier rounds of testing. However, unlike lo-fi prototypes, these are costlier to make.
  3. Medium fidelity: can’t decide between high and low fidelity? Mid-fi prototypes work best when you need a balance between fidelity and cost. If you’re only going to run one round of usability tests, go medium fidelity.

Four content guidelines for testing any prototype

When you start building the prototype, it’s not only acceptable to gloss over minor details in favor of the essentials, it’s sometimes recommended. But when it comes time to test your prototype, make sure you’ve filled in the details that tend to get overlooked at lower fidelities. In our experience, these are the most helpful tips for preparing your prototype for testing:

  1. Avoid lorem ipsum: distracting, confusing, and lacking meaning, lorem ipsum text does not fully capture your product’s message.
  2. Use generic names: tests may be more fun with silly or celebrity names, but fun isn’t the point. Any distractions will bias the results, so keep names generic and realistic.
  3. No placeholder images or icons: boxes with Xs may work during wireframing, but not during testing. Images and icons play a large role in UX, so they should be in place by testing time, even if only as temporary sketches. The exception is images that are purely decorative and don’t help users understand the UI.
  4. Use realistic data: don’t fill data like phone numbers or addresses with Xs or jokes; these are distracting. Realistic, believable data will give your user test the most accurate results.

Test participants may fixate on details you thought were negligible, so be careful about what you leave unfinished. These small steps to reduce distraction and confusion can go a long way toward cleaner test data.

Featured image: usability testing by K2_UX via Flickr.

Jerry Cao

Jerry Cao is a content strategist at UXPin, the wireframing and prototyping app, where he develops in-app and online content. To learn the methods, tools, and processes of UX prototyping, download The Guide to Prototyping for free.
