An AI-Powered Development Workflow for Solo Builders

If you’ve ever started a side project with grand ambitions only to watch it slowly die in a graveyard of half-finished Git repositories, you’re not alone. The solo developer experience is uniquely brutal. One moment you’re the product manager sketching out user flows, and the next you’re the backend engineer debugging database queries. Then, suddenly, you’re the QA tester wondering why your own code doesn’t work the way you thought it would.

Why solo projects fail

The fundamental challenge isn’t technical complexity; it’s cognitive overhead. Besides constantly bouncing between PM, designer, developer, and QA roles, on a side project you’re often just trying to squeeze in time between other day-to-day obligations. This fragments your attention and kills momentum faster than a poorly configured webpack build (although hopefully by now you’re using Vite on solo projects).

I also know that I personally tend to sacrifice a lot of good software development practices on the altar of “just getting something done” when working on side projects. Design, planning, writing stories to track work, tests, good commits and PRs: everything goes out the window. It’s just a side project, who cares, right?

The motivation factor compounds these problems. Loss of creative inertia after breaks probably kills more solo projects than technical challenges. Project fatigue sets in when you lose sight of the overall objectives between fractured development sessions. Without external deadlines, projects “drift” indefinitely. It turns out all that stuff we do to get “real” projects across the line actually helps.

My current AI orchestration workflow

Obviously, side projects are a great playground for LLM-based tools, but just churning out a lot of code doesn’t really solve our context and project management challenges. However, I’ve been iterating on a workflow, built around orchestrating remote agents with good task management, that I’ve found delightfully effective for this kind of work.

Linear for task management through MCP

Model Context Protocol (MCP) servers are essential for connecting AI agents to your project management workflow. Think of MCP as a Google Translate for AI applications, letting them connect to and use external data sources and tools.

Linear has a great user interface for humans, with a much lower barrier to entry than Jira. It also provides an MCP server that lets AI agents easily create and manage issues. Instead of spending a bunch of time in the code trenches, I focus most of my side project time budget on defining and prioritizing stories in Linear.

My workflow looks like this:

  1. Write stories for specific work to be done. I try to keep stories at a high level and avoid getting into implementation details, but I write enough detail that an AI can understand the intent.
  2. Label the stories as “AI Refinement”. This allows an AI to identify stories that still need that implementation detail. AIs are quite good at refining initial prompts into better ones.
  3. Prompt an AI agent to search through my Linear backlog for any stories with the “AI Refinement” label and update them with clear implementation details and acceptance criteria. The agent is also tasked with removing the label and moving the task to a new status, “AI refined.” I keep a refined prompt template for this task in the project’s README.md.
  4. I review stories in the “AI refined” status, and if I’m satisfied, I move the story to “To do”.
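The label-driven flow in steps 2 through 4 can be sketched in TypeScript. Everything here (the Story shape, the status strings, the function names) is illustrative; the real state lives in Linear and is manipulated by the agent through MCP:

```typescript
// Illustrative sketch of the label-driven refinement flow, not a real API.

type Status = "Backlog" | "AI refined" | "To do" | "In progress" | "In review";

interface Story {
  id: string;
  title: string;
  labels: string[];
  status: Status;
  description: string;
}

// Find stories an agent should refine: anything still carrying the label.
function storiesNeedingRefinement(backlog: Story[]): Story[] {
  return backlog.filter((s) => s.labels.includes("AI Refinement"));
}

// After the agent rewrites the description, drop the label and advance the status.
function markRefined(story: Story, refinedDescription: string): Story {
  return {
    ...story,
    description: refinedDescription,
    labels: story.labels.filter((l) => l !== "AI Refinement"),
    status: "AI refined",
  };
}
```

The point is that the label acts as a work queue: anything still tagged “AI Refinement” is fair game for the refining agent, and removing the label hands the story back to me for review.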

Spending more time in this story writing/refining mode makes it easier to keep the project context in view across disparate working sessions. It also helps me to better visualize my project progress and define clear project stages, like an MVP, which makes it more likely that I’ll actually see a project through to completion.

Asynchronous code generation

While I’m focused on writing stories, I use remote agents to generate code asynchronously. A remote agent is basically an LLM process that runs in a separate cloud environment. I’ve personally been using Augment in VS Code, but there are a number of other options out there.

That workflow looks like:

  1. Spin up a new remote agent. If I have time, I recommend doing some configuration to customize the remote agent environment (like installing tools you want to use such as Bun).
  2. Prompt the agent to work on the next story in the “AI refined” status. I keep another prompt template for this in the README that I can simply copy/paste. I’ve found it helps to be explicit about things like which Linear project ID to use and what rubric to use for choosing a story to work on. The agent is also tasked with moving the story to “In progress,” opening a PR when the work is complete, and then moving the story to “In review.”
  3. I can spin up multiple remote agents in parallel. Because the first thing each agent does when it picks a story is move it to a new status, as long as I start them up one at a time (the only option right now in Augment), I don’t run into issues of multiple agents doing the same work.
  4. When I’ve got time for code review, I can check out all the PRs, do whatever QA I need to do, and then merge them.
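The claim-by-status idea in step 3 is worth making concrete: an agent flips the story’s status before doing any work, so a second agent started afterwards can’t pick the same one. A minimal sketch, with hypothetical names standing in for what really happens through the Linear MCP server:

```typescript
// Sketch of claim-by-status. Names and statuses are illustrative assumptions.

type Status = "AI refined" | "In progress" | "In review";

interface Story {
  id: string;
  status: Status;
}

// Rubric: take the first story still sitting in "AI refined".
function claimNextStory(backlog: Story[]): Story | undefined {
  const next = backlog.find((s) => s.status === "AI refined");
  if (next) {
    next.status = "In progress"; // claim it before writing any code
  }
  return next;
}
```

Because the status flip happens before any code is written, starting agents sequentially is enough to avoid duplicate work; no coordination beyond the Linear status field is needed.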

Local workflows for quality assurance

I still prefer doing PR reviews locally. I can build and run the app, do my own manual testing, and actually get into the code. With several PRs to review at once, it’s easier to build up and maintain my mental model of the project than with the typical loop of planning a single task, working on it, reviewing it, and then moving back to planning.

Convex as the AI-optimized backend

Convex deserves special mention as a backend that’s particularly well-suited for AI-assisted development. It provides an MCP server that lets agents interact with your backend directly, and it’s quick to spin up a local database in each remote agent environment, which lets agents test and debug without affecting the main dev database I might use for PR reviews.

I’ve found Convex’s functions-as-API approach particularly effective for AI development. Convex also provides a thorough rules document for tools like Cursor to have all the context they need for effectively generating complete type-safe API endpoints following Convex best practices.
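As an illustration of that functions-as-API style, a Convex query endpoint might look like the following. This is a sketch only: the `stories` table, its fields, and the `listByStatus` name are hypothetical, and the file only runs inside a Convex deployment:

```typescript
// convex/stories.ts — hypothetical query endpoint (table and field names assumed).
import { query } from "./_generated/server";
import { v } from "convex/values";

export const listByStatus = query({
  // Argument validators double as the type-safe API contract.
  args: { status: v.string() },
  handler: async (ctx, { status }) => {
    return await ctx.db
      .query("stories")
      .filter((q) => q.eq(q.field("status"), status))
      .collect();
  },
});
```

Each exported function like this becomes a callable, type-checked endpoint, which is exactly the kind of constrained surface an agent can generate reliably against.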

Transforming productivity

I’ve successfully used this process to iterate on and deploy several apps while writing very little code myself, but I also think I’m just scratching the surface, since this is purely side-project work. I think this workflow lends itself really well to a full daily cadence, where:

  1. Your mornings are spent wearing your dev hat, reviewing PRs from the day before and merging them.
  2. Your PR reviews may lead to new stories/bug fixes being added to the backlog, which the AI can then start refining in the background.
  3. You spend your afternoon wearing your PM hat, writing and refining stories for the next round of work. Do your best to identify orthogonal work that can be done in parallel.
  4. Finally, kick off as many remote agents as you can to start working on the next round of stories.
  5. Rinse, repeat.

I’m not a believer in efficiency for efficiency’s sake, but I think this workflow is a great way for a software consultant to get more deep work flow-state time while focusing on the most important parts of the job: defining the problem and creating value.

Perhaps most importantly, this workflow has rejuvenated my passion for building software for fun. The quick feedback cycle of writing code and seeing it work is what drew me, and many developers, to programming in the first place. Supercharging that loop and finding a way to create functional apps in my spare time, without feeling like I have to completely rebuild context from scratch for every new piece of work, has been like discovering the joy of software development for the first time.
