Product Design Is Changing in the World of AI and Agentic Tools

For years, designers waited on development. We’d finish research, deliver specs, hand off mockups — and then sit in a holding pattern while engineers built what we’d envisioned. The bottleneck was implementation. Design was ready. Dev needed time.

That dynamic has completely flipped.

Development speed has accelerated dramatically. AI coding tools, agentic workflows, and orchestrated generation have collapsed what used to take sprints into days. And now the constraint isn’t how fast the team can build — it’s how fast we can figure out what to build, for whom, and why it matters.

Two forces are colliding: faster implementation and a real opportunity to close the gap between design intent and working software. Let’s talk about what’s changing for designers, what we need to let go of, and what we need to embrace.

Speed and the Design-to-Development Gap

On AI-forward greenfield projects, development teams are ready to move with a high-level directive. Give them a direction and they can start generating. But design still has to answer the hard questions: Who are the users? What do stakeholders actually care about? What are the right patterns for this problem?

That discovery process takes time. And the gap between “dev is ready” and “design has enough clarity” is where the pain lives now.

This isn’t a small process adjustment. It’s a pendulum swing. We went from waiting on implementation to being the thing everyone is waiting on. The pressure to make critical decisions that shape product direction — without having all the information you’d ideally want — is real and constant.

What’s Actually Changing

The answer isn’t to skip discovery. It’s to change how we approach it.

We’re compressing the cycle from insight to action. AI-assisted synthesis lets us process user interviews, market research, prototype interaction data, and product vision documents together — pulling out patterns and gaps that would have taken days of manual synthesis. We’re running more interviews, connecting more data sources, and getting that feedback to the people who need it faster.

Here’s what that looks like in practice: we gather feedback from customers, run transcripts through custom prompts to synthesize the information, cross-reference what we learn against the product roadmap and vision, identify where our thinking overlaps or gaps exist, and generate hypothetical directions to explore. The cycle from “we talked to users” to “here are options to evaluate” has collapsed from weeks to days.
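The cross-referencing step above can be sketched in a few lines. This is a hypothetical illustration, not the team's actual tooling: the theme tags stand in for the output of the AI synthesis prompts, and the function just splits user themes into roadmap overlaps and gaps.

```python
from collections import Counter

def synthesize(tagged_feedback: list[list[str]], roadmap: set[str]):
    """tagged_feedback: themes tagged per interview (the AI-synthesis output).
    Returns theme frequencies split into roadmap overlaps and gaps."""
    counts = Counter(theme for interview in tagged_feedback for theme in interview)
    overlap = {t: n for t, n in counts.items() if t in roadmap}
    gaps = {t: n for t, n in counts.items() if t not in roadmap}
    return overlap, gaps

# Three interviews, each reduced to its themes by a synthesis prompt
feedback = [["search", "offline mode"], ["offline mode", "exports"], ["search"]]
roadmap = {"search", "exports"}

overlap, gaps = synthesize(feedback, roadmap)
# overlap -> {"search": 2, "exports": 1}; gaps -> {"offline mode": 2}
```

The "gaps" bucket is where the hypothetical directions to explore come from: themes users raise repeatedly that the roadmap doesn't yet address.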

But the speed creates its own problem. When you’re prototyping concepts to get feedback — because you genuinely don’t know what users want yet — the cycle of build, show, learn, adjust gets tight. You need to get in front of subject matter experts or customers quickly, absorb what you learn, and adjust. That feedback loop used to have natural breathing room built in. Now it doesn’t.

The fundamental goal of supporting usability and desirability hasn’t changed. The approach to getting there has.

Designing for the Orchestration Spectrum

AI work spans a spectrum. At one end, you have simple one-off prompts — ChatGPT-style conversations where a human is heavily involved at every step. At the other end, you have orchestrated multi-agent systems where AI manages entire workflows with minimal human oversight.

Google DeepMind has described this as a five-level scale of agent autonomy, from Operator (agent executes direct instructions) to Observer (agent operates autonomously). The design challenge is that different projects, different features, and different moments within the same project may require operating at different points along this spectrum.

Deciding Where to Operate

How far down the spectrum you push depends on a few things: the team’s capabilities, the client’s willingness to engage with AI tooling, and your appetite for the risk that comes with more autonomous operation. Ideally, you push as far as makes sense — but only as far as you’re willing to take on the risk that creeps in with less human oversight.

On our teams, designers are responsible across the whole spectrum. That means reading the situation and calibrating — sometimes you need tight, hands-on iteration with a client; sometimes you need to define constraints and let the system generate against them.

What Specs Look Like Now

Here’s where it gets counterintuitive. The more autonomous the AI operation, the more upfront specification you need. Not less.

A spec can be applied at any level of fidelity or abstraction in the process. But as you move toward orchestrated generation, you need more specificity around constraints, expectations, and the ability to evaluate whether what was delivered is correct. You need to know what “right” looks like before the system starts producing.

For practitioners of Scrum or iterative development, this might feel uncomfortably like waterfall. It’s not quite that — but there is genuinely more planning required to enable the speed. The investment in upfront definition is what makes rapid generation possible and trustworthy. Without it, you get fast output that drifts from intent, and no one can tell until it’s too late.

The specs themselves are changing too. Traditional design specs documented what to build. AI-native specs need to document what to build, why it matters, what constraints apply, and what success looks like — because the system generating the work needs all of that context to produce something useful. When you work with an AI system, you have to externalize the judgment that previously lived in your head. You can’t rely on implicit understanding anymore.
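As a rough illustration, an AI-native spec might externalize that judgment as structured data. The field names and checks here are illustrative assumptions, not a standard schema:

```python
# A hypothetical AI-native spec: not just what to build, but why,
# under what constraints, and how to judge the output.
spec = {
    "what": "Bulk-export dashboard data to CSV",
    "why": "Analysts copy tables by hand today; a top request in recent interviews",
    "constraints": [
        "Reuse the existing export service; no new endpoints",
        "Large exports run as background jobs",
    ],
    "success_criteria": [
        "Column order matches the on-screen table",
        "Export completes without truncating rows",
    ],
}

def is_evaluable(spec: dict) -> bool:
    """A spec only supports autonomous generation if 'right' is checkable:
    it must state both constraints and success criteria."""
    return bool(spec.get("constraints")) and bool(spec.get("success_criteria"))
```

The point of the check is the pendulum described above: a traditional spec could stop at `"what"`, because a human implementer filled in the rest; an orchestrated system can't.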

Keeping Everyone Aligned at Speed

When development moves fast, visibility breaks down. Teams and clients lose their mental model of what’s been built and where things stand. This is one of the most underappreciated consequences of AI-accelerated work.

I experienced this firsthand. On a project where we were moving significantly faster than usual, I found it hard to communicate the volume of changes happening. Keeping track of the diff — what changed in the system between the last prototype I showed and the current one — was genuinely difficult. Bringing the client along on the journey and maintaining shared understanding across team members became a real challenge.

When you’re iterating faster, the fidelity of what design produces needs to be clearly articulated. Clients and team members need to understand what’s exploratory versus what’s committed, what changed since the last review, and why. The faster you move, the more intentional your communication has to be.

What Helps

Teams are adapting in a few ways. More frequent demos — not just at sprint boundaries but as work is produced. Changing how work is tracked so progress is visible in near-real-time. More intentional communication about what’s changed and why.

But the bigger shift is distributing product ownership. When decisions are happening fast, you can’t funnel everything through a single bottleneck. Someone needs to be at the forefront making fast calls — whether that’s a dedicated product owner or a designer stepping into that role situationally. The teams that move well are the ones where ownership is shared and decision-making authority is close to the work.

Designers Getting Technical: Prompts, Context, and Intent

Designers are now designing for agents — not just users. Writing prompts, engineering context, and shaping intent for AI systems is becoming part of the craft.

This is a real shift in what “design work” means. Designers are writing prompts themselves, adapting their process to include human-in-the-loop touchpoints with agentic workflows, and iterating on generative AI outputs as part of everyday work. The tools keep changing — I’ve personally moved through ChatGPT, Cursor with knowledge bases, and Claude-based workflows — but the underlying skills transfer. Iterating, moving information to the right place, thinking about the fidelity of communication — these concepts are tool-agnostic.

Three Layers of Communication with AI

A framing that’s emerged across the industry — and one I’ve found particularly useful — describes three layers of how we communicate with AI systems:

Prompting is the first layer — direct instruction to an AI system. It’s where everyone starts and it’s valuable, but it’s limited to what you can fit in a single conversation.

Context engineering is the second layer — providing the AI with the right information, structure, and background to do good work. This is what knowledge bases, project context files, and structured documentation enable. Most of the industry is here right now.

Intent engineering is the third layer — encoding how we approach problems and what we care about so the AI can make decisions on our behalf during longer autonomous runs. This becomes critical as agents do longer, specification-driven work. It’s not enough to tell the system what to know. You need to tell it what to want — what outcomes matter, what trade-offs are acceptable, and where the boundaries are.
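The three layers can be pictured as the shape of what gets handed to the system. This is a conceptual sketch with invented field names, not any real framework's API:

```python
# The three layers of communication with an AI system, as data.
request = {
    # Layer 1 -- prompting: the direct instruction
    "prompt": "Draft three onboarding flow variants",
    # Layer 2 -- context engineering: what the system needs to know
    "context": {
        "knowledge_base": ["brand-guidelines.md", "user-research-notes.md"],
        "current_designs": ["onboarding-v2.fig"],
    },
    # Layer 3 -- intent engineering: what the system should want
    "intent": {
        "outcomes": ["reduce drop-off on step 2"],
        "acceptable_tradeoffs": ["longer first session if completion improves"],
        "boundaries": ["no dark patterns", "meet accessibility AA"],
    },
}

def ready_for_autonomy(request: dict) -> bool:
    """A longer autonomous run needs all three layers, not just a prompt."""
    intent = request.get("intent", {})
    return (
        bool(request.get("prompt"))
        and bool(request.get("context"))
        and all(intent.get(k) for k in ("outcomes", "acceptable_tradeoffs", "boundaries"))
    )
```

A chat-style interaction gets by on the first field alone; the further toward orchestration you push, the more the lower layers have to be filled in before the run starts.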

The Bifurcation

Design work is splitting in two directions at once. On one side, it’s becoming more technical — designers need to understand prompts, context architecture, and how to shape AI output. On the other, it’s becoming more strategic — because as implementation gets abstracted away, the value of deep research, user understanding, and product judgment only increases.

Whether these become two different roles or stay within one person probably depends on the team. On our teams, designers are responsible for both. But I could see specialization emerging as the field matures.

The Mindset Shift: Letting Go of Ego and Old Processes

This is the hardest section to write because it’s personal for a lot of people.

Many designers — and honestly, people across every discipline in product building — feel a pull to practice their craft the way they always have. There are tasks within design that people have learned to love. They derive value and identity from those tasks. Someone who thinks of themselves as a researcher finds joy in the practice of research. Someone who identifies as a visual designer finds joy in pushing pixels. That’s natural, and it’s human.

But when AI can do a significant portion of those tasks, the question becomes: how do you adjust, adapt, and extend to meet the changing ecosystem?

Task Loss vs. Job Loss

Andreessen Horowitz has articulated a useful distinction here: task loss vs. job loss. The framing resonates because it connects what’s happening now to how innovation has always worked. Every generation of every role has experienced this. As innovation happens, the tasks that make up a job change — what you get paid for shifts. This has always been true, but it typically happened slowly enough that people could re-skill along the way.

What’s different now is the rate. Innovation S-curves are compressing. The underlying tasks that define your role are changing faster than many people can adapt, and that velocity makes it feel like job loss when it’s actually task loss. The value is being created in different places, and you need to figure out where it’s moving.

This isn’t a design-specific problem. It’s ubiquitous to every discipline as innovation accelerates. But designers feel it acutely because so much of our identity is tied to the artifacts we produce rather than the judgment behind them.

What This Requires

Empathy — for yourself and your colleagues. This transition is harder than most organizations acknowledge. The uncertainty is real. But empathy has to be paired with realism and openness to being wrong about what “good design” looks like going forward.

The hardest part for designers learning to work in a more AI-forward way isn’t the technical skill. It’s letting go of the ego around “I do it this way” and getting back to fundamentals: how can we create value? How can these new tools and concepts be applied to what we’ve learned about building good products?

It’s not up to any of us to dictate what’s valuable. The market will decide. Our job is to stay close enough to the work that we can adapt as the answer changes.

Conclusion

Design isn’t going away. The need to understand users, synthesize complexity, and make products that work — that’s more important than ever. But the shape of the work is changing faster than it has at any point in the field’s history.

The designers who thrive through this transition will be the ones who develop a specific set of competencies: speed in how they operate, technical fluency with AI tools and systems, depth in research and user understanding, taste and creative originality in a world flooded with generated output, and ego-free adaptability when the ground shifts underneath them.

Taste, creativity, and originality deserve special emphasis. As AI generates more, the ability to discern what’s good from what’s generic — and to push beyond the obvious — becomes a defining skill. AI is going to do a lot. Things are going to change a lot. But the human capacity for taste and original thinking isn’t going anywhere. If anything, its value increases.

The opportunity is real, but it requires stepping into it actively rather than waiting for it to arrive.
