On a recent project, we had enough time to do a small user test, which got me thinking about the optimal time to get feedback. We discussed two options: at the visual design stage with a clickable prototype, or with a rudimentary minimum viable product. Read more on When’s the Best Time to User Test? A Case Study…
When I started at Atomic, I had never participated in or conducted a usability test. In true AO teach-and-learn fashion, I’ve since been part of user testing on every major project I’ve worked on in my almost a year here.
I have a few concrete takeaways that I would want to hear if I were learning about usability testing all over again.
1. Sit in on a test first.
This is probably obvious. But be sure to sit in on a test first, and not as the lead facilitator. See how your teammates run tests, and be there to ask supporting questions and take notes. I recommend pen and paper notes so that you don’t intimidate the tester with the clicking of a keyboard after every comment. Read more on 9 Lessons Learned from a Year of Usability Tests…
One of my core beliefs as a software designer is that regular user testing is extraordinarily valuable to almost any project. Getting feedback and validating the usefulness and usability of features as they are being developed helps ensure that your time, budget, and effort are being spent wisely as you work to create a product.
For many, the phrase “user testing” conjures up images of a test session conducted with scientific precision, on a large scale, in an elaborate and expensive usability test lab. However, in many cases, testing at this scale is unnecessary. What you really need is some concrete feedback from a few real users. The good news is that you can easily get this feedback with a small budget and a bit of effort. Over the past few years, as I’ve incorporated usability testing more fully into my own practice and our process here at Atomic, I’ve had the opportunity to iterate on my technique. In this post, I’ll tell you about how I use cheap, readily available software to conduct and record a usability test.
In a nutshell, I use two computers: one for me (the test facilitator), and one for the test participant. I also use two pieces of software: GoToMeeting and QuickTime Player. Here’s a quick diagram of how it all looks:
“…no one is born a great cook, one learns by doing.”
– Julia Child, My Life in France
Or, to paraphrase the late, great Julia — no one is born a great designer; one learns by doing (and testing). Cooks test out their recipes with an audience, and the same principle applies to new products and services. Usability testing is necessary to prove a product’s viability and to make sure the proposed design will meet (and exceed) users’ expectations.
Think of usability testing as the test kitchen for successful software design. You put together some great ingredients, follow the necessary steps carefully, and – voilà! – a great design emerges.
The other day, I was having a conversation with a colleague about some user testing she was looking to do on a project. “I need to get permission from the client,” she said, “and even once we do it, I’m not sure how much information it will provide. We’ve already done a few tests; there may not be a lot of new things to learn.”
As designers, we’ve all probably had these thoughts from time to time. I know I have. But that’s the wrong way to look at it. Here’s why.
I acted as the test facilitator for this project. In that role, I greeted test participants when they arrived, explained how the test would work, and walked them through the tasks we needed them to perform.
The reasons for conducting usability studies on complex web applications — and the ways of accomplishing them — are well documented. They increase user satisfaction and retention, and they reduce user frustration and training; in short, they add business value in obvious ways. There are a lot of known ways of doing this testing, too. You can use discount methodologies, testing only a few users, or you can use scientific labs and test hundreds of participants. You can measure error rates and task completion time, or you can seek out qualitative feedback.
But what about simple websites — marketing websites whose main purpose is to convey information? Is it important to test those? If so, how do you go about usability testing marketing messages, content, and visual design?
When usability testing, it is often desirable to record the screen of an application, either to share it with a remote group of stakeholders or to capture it for later review. With screen recording software and built-in webcams, testing websites on a computer is relatively simple. But testing applications and websites on mobile devices, particularly iOS devices, presents a challenge. This post shares some of the options I’ve come across for the iOS platform, specifically the iPad.