How I Use AI Before the First Line of Code

Every project starts the same way. There’s a problem worth solving, a rough sense of the constraints, and a blank page. The hard part isn’t writing the first line of code—it’s figuring out which direction to go when several look equally reasonable. Should this be an event-driven system or a synchronous pipeline? Do we split this into services now or start monolithic and extract later? These questions don’t have objectively correct answers, and sitting with that ambiguity is some of the most demanding work in software development.

The Usual Conversation

Most of the discussion around AI in software development focuses on code generation: write a function, generate a test, scaffold a component. Autocomplete on steroids, essentially. That’s useful, and I use it, but it’s not where I’ve found AI most valuable. The place AI has changed my work the most is earlier, during the architectural exploration phase, before any code exists.

Architecture and Ambiguity

The reason early architectural decisions feel so heavy is that you’re making choices with incomplete information. You’re committing to a direction based on what you know now, aware that what you learn later might reveal a better path. The cost of exploring alternatives is high. Sketching out an approach well enough to evaluate it takes real time and cognitive effort. So in practice, teams often go deep on one or two options and move forward.

AI changes the economics of that architectural exploration. I can describe a system’s constraints and goals, then quickly get a rough sketch of how a particular approach might work—not production-ready design, but enough to see the shape of it. What would this look like as a message queue architecture? What are the implications of putting this logic in the API layer versus a background worker? How does this data model hold up if we need to support multi-tenancy later?
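To make “seeing the shape of it” concrete: a rough sketch of the synchronous-versus-event-driven question might be no more than a few lines like the following. This is a hypothetical illustration with made-up names, using an in-memory queue as a stand-in for a real broker; the point is how little code it takes to compare the two shapes, not to propose a design.

```python
import queue

def handle_order_sync(order):
    # Synchronous pipeline: validate, charge, and notify inside one
    # request path. Simple to reason about; latency and partial
    # failures land directly on the caller.
    validated = {"id": order["id"], "status": "validated"}
    charged = {**validated, "status": "charged"}
    return {**charged, "status": "notified"}

# Event-driven shape: the request path only enqueues work, and a
# separate worker drains the queue later.
order_queue = queue.Queue()

def handle_order_async(order):
    order_queue.put(order)  # request path returns immediately
    return {"id": order["id"], "status": "accepted"}

def worker_drain():
    # A worker process would loop forever; draining once is enough
    # to see the shape of the approach.
    results = []
    while not order_queue.empty():
        item = order_queue.get()
        results.append({"id": item["id"], "status": "notified"})
    return results
```

Even at this fidelity, the tradeoff is visible: the synchronous version gives the caller a definitive answer, while the queued version trades that certainty for a faster request path and a new operational component to run and monitor.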

The sketches come back fast enough that I can explore three or four approaches in the time it used to take to think through one. That’s not about the AI being smarter than me. It’s about lowering the friction of generating material that I can evaluate critically.
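The multi-tenancy question above works the same way. A sketch just detailed enough to evaluate might contrast a single-tenant record with its tenant-scoped counterpart; the names here are illustrative, not a recommendation.

```python
from dataclasses import dataclass

@dataclass
class Project:
    # Single-tenant shape: no scoping needed anywhere.
    id: int
    name: str

@dataclass
class TenantScopedProject:
    # Multi-tenant shape: every query must now filter on tenant_id.
    id: int
    tenant_id: int
    name: str

def projects_for_tenant(rows, tenant_id):
    # The sketch makes the hidden cost visible: tenant filtering has
    # to be applied at every access point, or enforced lower in the
    # stack (row-level security, a scoped repository layer, etc.).
    return [r for r in rows if r.tenant_id == tenant_id]
```

A sketch like this doesn’t answer the question, but it surfaces the real decision: whether tenant isolation lives in application code or in the data layer, and how painful retrofitting it would be.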

What This Looks Like in Practice

This isn’t a prompting tutorial — the specifics vary by project. But the pattern is consistent. Early in a project, when my team is working through how to structure something, I use AI as a thinking partner for the parts that benefit from rapid iteration.

  • Exploring multiple viable approaches in parallel, rather than committing to the first reasonable one.
  • Surfacing tradeoffs I might not have considered—especially across domains I’m less familiar with.
  • Getting rough outlines of an approach quickly enough to discuss them with the team the same day.
  • Making “what if we did it this way instead?” a cheap question to ask, not a half-day detour.

The key distinction: this accelerates thinking. It doesn’t replace judgment. I’m not asking the AI to make the decision. I’m using it to generate options and surface considerations so the humans in the room can make a better-informed call, faster.

Where This Falls Apart

AI has no idea what your team is good at. It doesn’t know that your senior backend developer is leaving in two months. It doesn’t know the client has a hard constraint against third-party data processors or that the last project with this tech stack left scars. It doesn’t understand your organization’s appetite for operational complexity, your deployment pipeline’s limitations, or the political dynamics that make certain technical choices easier to sell than others.

Architecture isn’t purely a technical exercise. It’s a set of decisions made within human constraints like timelines, team skills, business context, and risk tolerance. AI can help you map the technical landscape, but it can’t weigh those factors for you. And it can’t be accountable when a decision doesn’t pan out. That responsibility stays with the people who made the call.

Treating AI-generated architectural sketches as authoritative rather than exploratory is where teams will get into trouble. The output is a starting point for discussion, not a conclusion.

Try It at the Beginning

If you’ve been using AI primarily during implementation—generating code, writing tests, debugging—I’d encourage you to try pulling it earlier into your process. The next time you’re at the start of a project or a significant new feature, use it for architectural exploration before you start building.

This pattern surprised me. I expected AI to be most useful where the work is most mechanical. Instead, I’ve found it most valuable where the work is most ambiguous. Not because it resolves the ambiguity—it can’t—but because it helps me move through it faster and with more options on the table.
