Carl Erickson, our CEO, said it best:
I think one of Atomic’s unspoken values is that of validation. We don’t assume; we show. We experiment to learn. We don’t rest until we can show we’re done, and it’s done right. Through validation, we drive out as much risk as is possible and makes sense.
The first thing I ever did at Atomic Object was write a unit test for a bit of Java code that hadn’t even been written yet. This was called Test-Driven Development (TDD), and it emphasized tested code as a cornerstone of mature software development. It was strange and difficult at first, but also magical. I quickly began to feel the tremendous power of always knowing that the code we were building—every bit of it—was done and done right.
Since our start, TDD has been an essential technical practice at Atomic. But over the last thirteen years, we’ve broadened our focus to encompass product design through post-deployment support. Our culture of validation has expanded as well, pervading our practices every step of the way, from early design through performance monitoring in the field. We can help our clients know they’re building the right thing before we settle into building it right… and keep it running that way.
Here’s a survey of some of those practices viewed from a validation-minded standpoint.
Research, Design and Planning (RDP)
A lot happens during the Research, Design and Planning phase, including exploring problems, brainstorming solutions, and building a shared understanding and consensus among members of the team. It’s a cycle of divergence and convergence—generating ideas, then using validation to improve and refine them.
We play Innovation Games with our clients to understand business and user needs and to build empathy for the customer experience, both our clients' and their customers'. Some games in particular (like “Start Your Day” and “Remember the Future”) improve our understanding of users in relation to the product, and validate our ideas for how best to meet their needs.
Personas and Context Scenarios
We distill what we learn into personas (archetypal users of the product) and context scenarios (short narratives describing how those personas use the product to accomplish their goals). Together, they create a guide we use to validate our work throughout the design process, from early sketching and wireframing through high-fidelity visual design. Context scenarios also provide the language we use to write automated acceptance tests for the product.
User Interviews
We often interview users, asking them a combination of closed- and open-ended questions to learn more about their perspective, expectations, and needs. While partly a process of discovery and fact-collecting, user interviews also help us validate what we think we know. Through direct conversation with a user, we have an opportunity to test our assumptions about their perspective, needs, and obstacles, and to validate our own ideas for providing a valuable experience to them.
UX Display Boards
We maintain and exhibit big-picture artifacts—personas, context scenario storyboards, app flows, lo-fi interface sketches, etc.—on UX display boards in the project space. Keeping these guiding documents in plain sight serves as a form of continuous integration for design by inviting continual critique and validation.
Test-Driven Design: Prototyping & User Testing
We construct interactive prototypes to give life to the ideas coming out of RDP. Building a prototype is itself the first test of whether features fulfill users' needs. Gaps and blocks appear that may have been overlooked earlier, and ideas that seem good on the surface turn out to be difficult or confusing when put into motion.
We may also leverage these artifacts for early user testing by bringing in users and asking them to act out our context scenarios. This can generate feedback and insight into everything from UI design down to our assumptions about how someone uses the product to accomplish their goals.
Test-Driven Development
Test-driven development (TDD) is a foundational practice here at Atomic; we’ve done it consistently since we started making software in 2001.
In TDD, a developer writes a test for a (not-yet-existent) piece of code, then writes the code itself, then runs the test to validate the work. These tests range from narrowly focused checks that a specific function or component behaves correctly (unit tests) to larger tests that validate the system as a whole from the perspective of the end user (system tests).
- System Tests – We translate context scenarios into automated system tests that exercise the application through the same interfaces a real user would. For any given feature, we begin development by writing a system test that exercises the feature in the product and guides us in adding the new software components needed to support it. When features are completed, their system tests run continuously for the life of the product, ensuring they continue to function as specified.
- Unit Tests – As we devise components to support a feature in the product, we drive their implementation with thousands of narrowly focused unit tests. These tests describe the behavior of every component in the system, expose a myriad of common development bugs within minutes of their introduction, and catch regressions as soon as they appear.
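In miniature, that write-test-first loop might look like the following sketch (a hypothetical Python example with an invented `normalize_phone` function, not code from an actual Atomic project):

```python
# Step 1 ("red"): write the test first. Running it at this point fails,
# because normalize_phone does not exist yet.
def test_normalize_phone():
    assert normalize_phone("(616) 555-0123") == "+16165550123"
    assert normalize_phone("+1 616 555 0123") == "+16165550123"

# Step 2 ("green"): write just enough code to make the test pass.
def normalize_phone(raw: str) -> str:
    """Reduce a US phone number to E.164 form, e.g. +16165550123."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    return "+" + digits if raw.lstrip().startswith("+") else "+1" + digits

# Step 3: run the test to validate the work.
test_normalize_phone()
```

In practice the tests live in a framework like JUnit or RSpec rather than a bare script, but the rhythm is the same: the failing test defines "done" before a line of production code is written.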
TDD ensures that every piece of an application—large or small—has been clearly defined and checked. And it helps us quickly pinpoint problems when different pieces of software interfere with each other.
Backlog Grooming, Sprint Planning, and Review
We work continually with product managers to add definition and estimates to feature work, as well as give them visibility and control over the day-by-day creation of the product. Our agile practices keep information flowing smoothly in both directions: stakeholders know exactly where their product stands, and developers and designers can proceed confident that we're working on the most valuable features at all times.
We know we can deploy the product to production because we do it early and often. But the big win is that our clients always know where their product stands, especially when they're the ones doing the final testing and deployment of each feature as it's completed.
Software Quality Assurance
We build products for humans, so we need to test like humans. Where automated tests focus on specified, quantitative verification, an exploratory tester evaluates the product from the perspective of a real person—making mistakes, getting irritated by strange interfaces, and learning as they go. They validate that the product is functional, stable, and usable by its intended audience.
Before launch, we observe users interacting with a pre-release of the product. At this stage, we may achieve ultimate validation of the many ideas that brought the product to this point, or we may uncover a few more invaluable findings that help us hone the product into its final released form.
Deployment (and Beyond)
The final assumption we need to verify is that the product is out there doing its job. Even when we’re not actively building or maintaining a product, we set ourselves up to know things are going smoothly for our clients and their users.
We set our customers up with analytics tools to understand how their product is being adopted and used, removing much of the guesswork surrounding how many people use the app, where they spend time, where they don’t. We also instrument the product to let us know if something goes wrong (or is about to) via performance monitoring and error notification tools. As a final touch, we automate the tracking of important maintenance tasks (when does your SSL certificate expire?) on behalf of our customers to ensure they don’t get lost in the shuffle.
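That certificate-expiry check, for instance, can be a small scheduled script. Here is a minimal sketch (Python; `days_until_expiry` and the 30-day alert window are illustrative assumptions, and the date string is in the `notAfter` format that Python's `ssl` module reports from `getpeercert()`):

```python
from datetime import datetime, timezone
from typing import Optional

def days_until_expiry(not_after: str, now: Optional[datetime] = None) -> int:
    """Days remaining before a certificate's notAfter timestamp,
    e.g. 'Jun  1 12:00:00 2026 GMT' as reported by getpeercert()."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    now = now or datetime.now(timezone.utc)
    return (expires - now).days

# A scheduled job might fetch the live certificate (for example with
# ssl.get_server_certificate or an SSLSocket's getpeercert) and alert
# when the renewal window gets small:
if days_until_expiry("Jun  1 12:00:00 2026 GMT") < 30:
    print("SSL certificate expires within 30 days -- time to renew!")
```

Hooking a check like this into the same error-notification channel the product already uses means an expiring certificate surfaces the way any other production problem would, instead of as a surprise.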