The other day, I was having a conversation with a colleague about some user testing she was looking to do on a project. “I need to get permission from the client,” she said, “and even once we do it, I’m not sure how much information it will provide. We’ve already done a few tests; there may not be a lot of new things to learn.”
As designers, we’ve all probably had these thoughts from time to time. I know I have. But that’s the wrong way to look at it. Here’s why.
User Testing Is Test-Driven Design
Let’s take a look at Test-Driven Development, one of the core practices of agile software development. When a developer sets out to create a new feature, the first step is to write an acceptance test and then watch it fail. (If the test passes, that’s a problem — either the feature already exists, or the test is defective.) Next, the developer writes code and runs the test again. He continues writing code and running the test until the test passes.
After the test passes, it’s time to clean up and refactor the code, if necessary. As development progresses, the developer continues to run all of the tests that have been written so far. This is how the developer is able to make sure that the new code didn’t break any of the software’s other functionality.
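The red-green cycle described above can be sketched in a few lines of Python. This is an illustrative example only — the `slugify` function and its tests are hypothetical, not from any particular project. In practice you would write `TestSlugify` first, watch it fail because `slugify` doesn’t exist yet, and only then write the implementation.

```python
import unittest

# Step 2 ("green"): write just enough code to make the failing
# test pass, then refactor if needed while re-running the suite.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# Step 1 ("red"): this test is written first, before slugify()
# exists, so the initial run fails and proves the test works.
class TestSlugify(unittest.TestCase):
    def test_replaces_spaces_with_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Trim Me  "), "trim-me")

if __name__ == "__main__":
    unittest.main()
```

Because the whole suite runs on every change, a later edit to `slugify` that breaks the space-handling behavior is caught immediately — which is exactly the safety net the article describes.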
Good design is test-driven, too. Design starts when there’s a problem to solve. The problem is the designer’s “broken test.” By doing user interviews, mental modeling, and other contextual research, the designer gathers the information that allows her to understand the problem. Then she designs a solution to the problem.
User testing is the acceptance test for the design. If you skip out on testing your design, you’re like a developer who never runs his test suite. Your design could be broken. It probably is. You just don’t know it.
No News Is Good News
If you’re being a good designer and “running your tests” regularly by getting your designs in front of real users early and often, you may eventually get to the point where you’re not getting a lot of feedback besides, “I like it; that works great!” Maybe your users point out a few minor things, or express an opinion about the shade of green that you used, but they don’t have a lot of trouble with the application. There are no “aha!” moments, everything works as expected, and the users are generally happy.
I used to feel like these sessions were a waste of time and money. Then I realized that uneventful user testing sessions are a sign that the design works. As a product team, we can continue design and development with confidence, knowing that our software functions correctly (thanks to test-driven development) and solves the user’s problem (thanks to test-driven design).
TDD Is Not Optional
If a product owner is looking to cut product development costs, user research and testing (for some reason) are often the first things to go — or worse, they were never included in the budget in the first place. This is absolutely the wrong thing to do.
We don’t cut testing out of our development process. Without test-driven development, you end up with a bunch of code that may or may not (probably not) work. Without user testing, you end up with a piece of software that has a bunch of features that may or may not (probably not) solve a problem in a usable way. You may think it’s marketable, and you may think it works, but you just don’t know.
Test your designs, and be confident of the value that those tests provide. By testing your designs, you are giving your product a much greater chance of success. And who doesn’t want that?