There’s a new kind of coding I call “Anxiety Coding”, where you fully give in to your anxiety, embrace cortisol, and remember that the code exists. It’s possible because the LLMs and coding tools are getting good, and devs are getting really good.
In essence, Anxiety Coding is the practice of embracing your cortisol‑fueled skepticism: writing tests before accepting any AI‑generated draft and verifying each line, so you never lose sight of the code you actually own.
The Rise of Vibe Coding
“Vibe Coding” has recently exploded in popularity. In the eyes of some, if you aren’t “embracing exponentials” you’re embracing failure. The basic idea is simple: close that fancy IDE, open up Bolt, and type a prompt for whatever project you might want. Then you’re vibing. This was the idea of Andrej Karpathy’s original X/Twitter post.
But what happens when we let that AI‑powered spontaneity run unchecked?
Anxiety Coding
AI tools like Copilot, Cursor, and Bolt have made Vibe Coding inevitable; transcending it is not.
This brings us to my idea of “Anxiety Coding”: essentially Vibe Coding, but keeping a watchful eye over all code written by AI agents. A developer who is Anxiety Coding should always remember the code exists (as opposed to Andrej Karpathy’s “forget that the code even exists”). Test-Driven Development (TDD) is a must, and tests for new functions should be handwritten.
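To make “handwritten tests first” concrete, here is a minimal sketch. The `slugify` helper is a hypothetical example, not from any real project: the test is written by hand first, and the implementation stands in for the AI draft you would then review line by line.

```python
import re

# Step 1: handwrite the test BEFORE prompting the AI for any code.
# Plain asserts so it runs standalone or under pytest.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Anxiety   Coding ") == "anxiety-coding"

# Step 2: an AI-style draft -- treat it as boilerplate, not as sacred.
def slugify(text):
    # Lowercase, then collapse runs of non-alphanumerics into single hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", text.lower())
    return slug.strip("-")

# Step 3: the draft is only accepted once the handwritten test passes.
test_slugify()
```

The point is the ordering: because the test exists before the draft, a hallucinated or subtly wrong implementation fails loudly instead of slipping into your codebase.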
Anxiety Coding also treats the AI-generated output as a rough draft or boilerplate; none of the code is sacred. Copilot and Cursor act as a naive intern or junior dev driving in pair programming, so you need to act as a strong navigator, ensuring the changes follow good development practices. Navigating is a difficult skill to learn when you are working with another developer; when working with an AI tool it becomes even more difficult. The challenges include prompt engineering, deciding which model to use, and choosing which “mode” you want: agentic (the AI suggests multi‑step changes rather than single‑line completions) or conversational (chatting with the LLM without it editing code directly).
Early Vibe Coding
I was a pseudo Vibe Coder the moment I got Copilot beta access in late 2021 as a sophomore CS student. I thought Copilot would improve my Data Structures and Algorithms learning, but, in retrospect, it was a trap. I leaned on AI to fill gaps I didn’t even know I had.
Without a TDD‑first mindset and line‑by‑line review, those missing fundamentals went unnoticed until I had to relearn them from scratch. Had I been anxiety coding — writing tests before accepting AI drafts and vetting every line — I would’ve caught those blind spots immediately.
Anxiety Coding in 2025
Fast forward four years, and the tools have only gotten sharper, and our potential blind spots even more costly. Now, if you aren’t using Cursor, Claude Code, or Copilot, you are missing out. The models outperform a vast majority of software developers on certain benchmarks.
I was never off the LLM train—I’ve relied exclusively on Copilot throughout my entire academic and (so far, quite short) professional career. While brainstorming this post, I finally gave Cursor’s Agentic mode a spin. I went in wary: early Copilot had spoiled me with basic line completions, and I had watched my coworkers fight with Copilot’s fairly aggressive edits.
After a week with Cursor, I realized it wasn’t my favorite. Its Agentic mode is powerful, but the chat panel kept pulling me away from the code and my own thought process. What I did appreciate, however, was having an AI “junior dev” that could act agentically—something stable Copilot still lacks.
Rather than revert straight back to base Copilot, I switched to VS Code Insiders and tried the Copilot Agentic beta. It felt much more measured: edits were less forceful, it didn’t sneak in extra context, and every change allowed for a mini‑PR review in my editor. Paired with my handwritten tests, it produced more accurate, hallucination‑free code far more comfortably than Cursor did.
Comparison
For a quick comparison, here is how my experience shook out (all of these being TDD with handwritten tests):
| Tool | Edit Aggressiveness | Agentic Mode | Distraction Factor |
|---|---|---|---|
| Cursor | High | ✓ | Chat window pulls focus |
| Copilot (Stable) | Low | ✗ | Inline; minimal context |
| Copilot Agentic Beta | Medium | ✓ | Mini‑PR reviews in-editor |
AI code tools will continue to integrate into software developers’ daily work. No one can change that, and we shouldn’t want to. There are a lot of unknowns, and the changes may be daunting whether you are a computer science student, a junior developer, or an established mid-level/senior developer. For me, one thing is clear: knowing the fundamentals and keeping the big picture in mind should be the top priority. We can hold far more context than an LLM’s context window, and we can actually reason about it.
A Challenge for You
Here’s my challenge to you. Next time you’re writing something simple on your project, try this: write its test suite first, then prompt your AI to draft the code. Notice how you can still embrace the exponentials without forgetting to embrace the cortisol as a guide along the way.
Anxiety Coding Checklist
Here is a quick checklist for implementing Anxiety Coding in your workflow:
- Handwrite failing tests first
- Prompt AI with your requirements, including the tests in the context window
- Run tests and inspect AI draft line by line
- Commit changes only after manual review/editing
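Step two of the checklist—getting your handwritten tests into the context window—can be as simple as pasting the test source into the prompt. A minimal sketch, where `build_prompt` is a hypothetical helper (not any tool’s real API):

```python
# Hypothetical helper: assembles a prompt that includes the handwritten
# tests, so the AI drafts code against YOUR spec rather than its own guess.
def build_prompt(requirements, test_source):
    return (
        f"Requirements:\n{requirements}\n\n"
        f"Write an implementation that makes these handwritten tests pass:\n"
        f"{test_source}\n"
    )

requirements = "Implement slugify(text): lowercase, hyphen-separated slug."
tests = 'assert slugify("Hello World") == "hello-world"'

prompt = build_prompt(requirements, tests)
print(prompt)
```

Whatever tool you use, the design choice is the same: the tests are the contract, and they go into the prompt before any implementation does.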