On an agile development team, multiple people share the responsibility of ensuring that the user stories in the backlog are desirable, viable, and feasible. At Atomic, our delivery leads, tech leads, and designers work together to prepare stories for the development team to work on.
While our delivery leads are responsible for defining the stories and setting scope and acceptance criteria, it’s the tech lead’s responsibility to translate business requirements into technical ones. We do this by maintaining a close relationship with our stakeholders and keeping an eye on the technical forest while the development team navigates the trees.
The core challenge of the tech lead job is, in my opinion, this:
After developing that high-level perspective, I need to hand my learnings over to the development team, who have been dutifully keeping their heads down, crushing through other stories. I need to ensure that they have enough information to do their jobs well, without distracting them with irrelevant details or dictating that they be clones who build things exactly the way I would.
As tech lead on a relatively large team, I’ve run into challenges like:
- As I write out a hypothesis for how something might be built, I spend too much time outlining the details. The developer working on the story feels robbed of an interesting engineering problem and handed a boring recipe instead. Later, my overly prescriptive implementation details leave them feeling like they have no room to flex to meet challenges I didn’t account for.
- During estimation with the full dev team, someone asks a question that upends my expectations for how the story should be approached, and I need to go back to the drawing board.
- During PR review, I discover that the implementation of story X is wildly different from what I expected. Maybe it won’t mesh well with story Y down the road. Maybe it makes tradeoffs that sacrifice performance when we’d rather have sacrificed design, or vice versa. Or maybe the design just clashes with some other module.
- During testing or review by the product owner, some key component turns out to be missing or to work differently than the planning team expected. Both sides point to the ticket: the planning team claims the story obviously implied it’d include X, while the development team is frustrated that X wasn’t explicitly included in the requirements (and, had it been, they’d have asked for more points, dang it).
Every project is, of course, different. Your development team’s makeup, skills, and experience levels will determine the right level of refinement for you. Still, I’ve found that running through a sort of mental checklist as I work through the backlog helps me avoid many of the issues above.
Here are some of the things I try to ask myself before I hand a story off for implementation. This list isn’t complete, and it lacks a great deal of nuance, but I’ve found it to be a good place to start.
1. Technical Details
These questions help me visualize the implementation of the story and surface any problem areas early so I can resolve them for the team before work begins.
- First, and most importantly, does this story connect parts of the codebase that have never been connected before? For example, will you need to look up a transaction record by item ID for the first time? Call it out loudly and in bold so the team can factor that work into their estimate. (A sketch of what I mean follows this list.)
- Is there a reason something like this wasn’t done previously? If the new feature runs counter to existing architecture, ask yourself why.
- Has the team implemented something like this before? Is there an existing feature you’d like them to take inspiration from?
- Have you included enough of a roadmap to help the team estimate accurately? For example, for a more junior team, it may help to include a step-by-step guide that suggests adding a utility to module A, then building UI for widget B, then adding a query that fetches data from module A to give to widget B, etc. A more senior team might only need a brief suggested route. Rather than turn-by-turn directions, a simple “Use the widget C model for this” might suffice.
- Have you included enough of the broader context you hold that the development team can make smart decisions about their approach and potential tradeoffs? What should they optimize for: development time, reuse, performance? What might we want to leverage this work for (e.g., as a prototype for future widgets or maybe an opportunity to flex a new testing utility)?
- Are there any other technical goals you’d like them to keep in mind? I use this question as an opportunity to share any development philosophies that should inform the feature.
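To make that first bullet concrete, here’s a minimal sketch of the kind of “first connection” I mean. Every name in it (Transaction, transactionStore, findTransactionsByItemId) is hypothetical; the point is that, before this story, nothing in the codebase looked transactions up by item, so the lookup (and whatever index or API change it implies) belongs in the estimate.

```typescript
// Hypothetical example: the first-ever lookup of transactions by item ID.
// Every name here (Transaction, transactionStore, findTransactionsByItemId)
// is invented for illustration; your codebase will differ.

interface Transaction {
  id: string;
  itemId: string;
  amount: number;
  createdAt: Date;
}

// Stand-in for whatever persistence layer the project already has.
const transactionStore: Transaction[] = [];

// Until this story, nothing in the codebase looked transactions up by item.
// That's the "new connection" worth calling out loudly in the ticket: it may
// imply a new index, a new API surface, and new tests, all of which belong
// in the estimate.
export async function findTransactionsByItemId(
  itemId: string
): Promise<Transaction[]> {
  return transactionStore
    .filter((t) => t.itemId === itemId)
    .sort((a, b) => b.createdAt.getTime() - a.createdAt.getTime());
}
```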
2. Presentability
Our delivery leads work hard to ensure that stories are cut in such a way that they can be shipped independently, but it’s important to review these questions with a technical eye.
- If the feature depends on any kind of system state, how will the QA team get into that state? Do they need any utilities or scripts to, for example, review the order confirmation page without activating an entire shipping workflow? (A sketch of such a utility follows this list.)
- Will this feature be clear and presentable in a demo? Might stakeholders who aren’t at your demo be confused about what’s finished and what’s not? (We strive to release shippable software every sprint, but sometimes scheduling requires that a quiz lands on a “PLACEHOLDER” page until the completion page is finished or that a profile page shows a fake user for the time being. When you encounter these loose ends, is there a quick stopgap that will minimize confusion?)
- Does the development team have access to all of the content they need? Do they need any new credentials for integrations? If you’re not able to provide real content at the time of implementation for reasons beyond your control (e.g., the wheels of bureaucracy moving slowly), can you use placeholder content that makes it extremely clear to reviewing stakeholders what’s missing and why?
- Is this work going to be backwards compatible with previous releases? If not, how will the team handle that? This is particularly relevant when every released version of a client talks to a single server, or when you’re working with external services.
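As an example of the kind of utility that first bullet describes, here’s a rough sketch of a dev-only seed script that drops an order directly into a “confirmed” state so QA can review the confirmation page without walking the entire shipping workflow. The /dev/seed-order endpoint and the payload shape are invented for illustration; the real mechanism will depend on your stack.

```typescript
// Hypothetical QA utility: seed an order directly into the "confirmed"
// state so the confirmation page can be reviewed in isolation. The
// /dev/seed-order endpoint and payload shape are invented for illustration
// and assume a dev-only API that is never exposed in production.

async function seedConfirmedOrder(baseUrl: string): Promise<string> {
  const response = await fetch(`${baseUrl}/dev/seed-order`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      status: "confirmed", // skip straight past the shipping workflow
      items: [{ sku: "TEST-SKU-1", quantity: 2 }],
      customerEmail: "qa@example.com",
    }),
  });
  if (!response.ok) {
    throw new Error(`Seeding failed with status ${response.status}`);
  }
  const { orderId } = (await response.json()) as { orderId: string };
  return orderId; // QA can now visit the confirmation page for this order
}

// Usage, assuming a local dev server:
seedConfirmedOrder("http://localhost:3000")
  .then((id) => console.log(`Seeded confirmed order ${id}`))
  .catch(console.error);
```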
3. Dotting I’s and Crossing T’s
Finally, I find that there are a few things I tend to forget. Have you defined:
- The data source for any text on the screen, particularly dynamic copy?
- How any UI behaves at different device and window sizes?
- Any animations?
- Any error states?
- Any form validation? What, exactly, makes an input invalid? (An example of pinning this down follows this list.)
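On that last question, I find it pays to spell the validity rules out in the ticket rather than leave them to interpretation. As a hedged sketch, suppose the story includes a quantity field; the limits below (whole numbers from 1 to 99) are made up, but the ticket should state whatever the real rules are just as explicitly, along with the error copy for each failure.

```typescript
// Hypothetical validation rules for a "quantity" form field. The specific
// limits (whole numbers, 1-99) are invented; the point is that the story
// should state them explicitly so "invalid input" isn't left to each
// developer's imagination.

interface ValidationResult {
  valid: boolean;
  error?: string; // user-facing error-state copy, also worth defining up front
}

export function validateQuantity(raw: string): ValidationResult {
  const trimmed = raw.trim();
  if (trimmed === "") {
    return { valid: false, error: "Quantity is required." };
  }
  if (!/^\d+$/.test(trimmed)) {
    return { valid: false, error: "Quantity must be a whole number." };
  }
  const value = Number(trimmed);
  if (value < 1 || value > 99) {
    return { valid: false, error: "Quantity must be between 1 and 99." };
  }
  return { valid: true };
}
```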
Disclaimer
This list has been helpful for me, but bear in mind that many of these questions may shift or become irrelevant from project to project. In particular, a development team’s experience breakdown can change the landscape significantly. Don’t be afraid to task your team with answering some of these questions themselves! Just make sure someone is asking them.