Why So Many Smart People Feel Lost About AI Right Now

I don’t write code, I don’t build models, and a year ago I was the person Googling “What is a prompt?” while pretending I already knew the answer. Now I give a talk called “Claude for Normies” to rooms full of professionals who feel the same way I did then. I’ve given it enough times to see a pattern that I think matters more than anything I teach in the actual session.

The pattern is this: the people who are getting comfortable with AI are almost always learning it from other people. And the people who feel the most behind are almost always trying to figure it out alone.

How the Conversation Went Wrong

The public discourse about AI is dominated by people performing expertise. Hot takes, “build in public” threads, a steady drumbeat of broey content that implies you’re already behind. If you’re a capable professional who hasn’t automated your entire workflow, the internet would like you to feel bad about that.

When I surveyed the 49 people who signed up for my latest session, I expected beginners. Instead, 47% said they already use Claude regularly. Another 14% said they can’t imagine working without it. These are competent, experienced people who use the tool and still feel like they’re doing it wrong. The online conversation about AI manufactured that feeling, and it’s doing real damage to people’s willingness to keep learning.

Then there’s the other end of the spectrum. A lot of people I know and respect deeply have decided they want nothing to do with AI. They’re exhausted by the state of things: the labor exploitation, the environmental costs, the concentration of power, the political climate, all of it. And they’ve put AI in the same bucket as the rest of it, something to wash their hands of as an act of principle.

I sympathize with this more than I can say. I deleted most of my social media accounts in 2019 for similar reasons. People should be able to make whatever choice they want about the tools they use.

But when I talk to the people in my life who’ve made this choice, I often find that their understanding of what AI actually does is limited. They know enough to know they’re against it, and they’ve stopped there. And this is the part that worries me: the people I consider the most ethical and thoughtful are increasingly the ones with the least accurate picture of what’s happening. Meanwhile, people with fewer scruples are neck-deep in learning mode, building fluency every day.

You could opt out of Twitter and Instagram and lose very little in your professional life. But AI is coming to knowledge work in the near future, whether we like it or not. That means the gap between people who understand it and those who’ve decided they don’t need to will have real consequences. My concern is that the people with the best moral instincts are the most likely to be caught off guard.

What Actually Happened to Me

Here’s what I think the discourse gets wrong: it frames AI adoption as an individual performance. You watch the right tutorials, read the right threads, and emerge transformed. That has almost no resemblance to how I actually learned.

I learned because I show up every day to an office full of software developers who are adapting their practice around these tools. They’re technical by nature, so the barrier to picking this stuff up is lower for them. I trust them. I like them. And I absorb what excites them without trying to, through hallway conversations, over-the-shoulder glances at someone’s screen, offhand suggestions while getting coffee.

Nobody sat me down for a lesson. I just kept being in the room.

Over the course of a year, that ambient exposure turned into real working knowledge. I went from asking Claude basic questions to feeding it messy drafts and getting back useful edits. I started describing spreadsheet formulas in plain English and letting Claude write the code. Eventually, I set up workflows where Claude reads my inbox on a schedule and writes me a Monday morning briefing without being asked. Each of those steps felt small at the time, and every single one of them started with someone near me saying, “Have you tried this?”
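That Monday-briefing workflow sounds more elaborate than it is: at its core, it’s just assembling last week’s emails into a prompt and sending it to Claude on a schedule. Here is a minimal, hypothetical sketch in Python. The email data, the briefing format, and the function name are all invented for illustration; this is not my actual setup, just the shape of the idea.

```python
# Hypothetical sketch of a "Monday morning briefing" prompt builder.
# The sender/subject pairs and the briefing instructions below are
# made up for illustration.

def build_briefing_prompt(emails):
    """Turn a list of (sender, subject) pairs into a prompt asking
    Claude for a short Monday briefing."""
    lines = [f"- {sender}: {subject}" for sender, subject in emails]
    return (
        "Here are the emails I received last week:\n"
        + "\n".join(lines)
        + "\n\nWrite me a three-bullet Monday morning briefing: "
        "what needs a reply, what can wait, and anything I might "
        "have missed."
    )

# Example usage with made-up emails:
prompt = build_briefing_prompt([
    ("Dana", "Q3 budget review - need your numbers"),
    ("IT", "Password expires Friday"),
])
```

The resulting prompt would then be sent to Claude each week, whether by hand, through the desktop app, or via a scheduled script; the point is that the “automation” is mostly just writing down what you would have asked anyway.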

A Smaller Gap Than It Looks

The distance between “I’ve used it once” and “This is changing how I work” is much shorter than the discourse makes it seem. The six or seven things I learned over the past year each took a few afternoons of tinkering. The hard part was never the technology. The hard part was knowing which problems to hand to these new tools and how to frame them, and that’s the kind of information that travels best between people in the same room.

When I started sharing prompts with colleagues, comparing results, passing tricks along at lunch, everything accelerated. Someone figured out that telling Claude to critique a draft before rewriting it produces dramatically better output, and that insight spread through our office in about 48 hours. The tool got more useful faster because we were learning to use AI together, in person, with the kind of low-friction back-and-forth that happens when you can just turn your chair around and say, “Look at this.”

The Case for Showing Up

I think about this a lot, because I think it has implications beyond AI.

We are in a period of fundamental change in knowledge work. The tools that professionals use every day are being reshaped in real time, and the people who will understand those tools fastest are the ones who are physically near other people figuring them out. That’s how I learned. And if, in this last year, I’d been working remotely, isolated from the ambient curiosity of the people around me, I’d probably still be where most smart, capable people are right now: aware that something big is happening, unsure where to start, and quietly feeling behind.

Companies are abandoning their offices at the exact moment when physical proximity to people who understand new technology matters more than it has in decades. I don’t think this is a productivity argument. I think it’s a literacy argument. The gap between people who understand AI and those who don’t will widen fastest among people trying to learn it in isolation.

Eyes Open

I should say clearly: this is an essay about how I learned, not an endorsement of the technology without caveats. AI has real costs. The energy required to train and run these models is enormous. There are real questions about underpaid labor in data labeling, about bias, about deepfakes and misinformation, about the concentration of power in a handful of companies, and about what this does to education.

I want policymakers to understand what they’re regulating. I want skeptics to understand what they’re skeptical of. And I want curious people to have enough information to make their own informed decisions. Closing the gap between what AI does and how well it’s understood benefits everyone, including the critics.

What I’d Tell Someone Starting Out

Find someone to learn with. That’s the first thing, and it matters more than which tutorial you watch or which tool you pick. Find someone curious, try one thing each this week, and compare notes on Friday. That single habit will teach you more than any course.

Then pick one annoying task from your actual work, the draft you keep rewriting or the formula you never remember, and try it with Claude. If it helps, try the next thing. If it doesn’t, you’ve lost 10 minutes.

Get the desktop app. Get the $20-per-month Pro plan. Turn off model training in your privacy settings. Five minutes of setup removes most of the friction that makes people quit early.

One Last Thing

My goal here isn’t to convince anyone that AI is great. It’s to close the gap between what the tool can do and how well it’s understood. I would rather someone understand Claude thoroughly and decide it’s not for them than feel dumb for not understanding it at all.

And if you’re lucky enough to be near other people who are figuring this out, pay attention. Ask them what they’re excited about. Show them what you tried.
