AI and the Shifting Boundary of Software Development

Like most of my peers, I’ve been thinking a lot about what it actually means to be a software developer right now. There seem to be two predominant takes floating around the industry about AI and the future of dev work. The first is that software developers are going the way of COBOL programmers, a slowly dwindling population maintaining legacy systems until the machines fully take over. The second is that AI tools will always be untrustworthy enough to require skilled human oversight, and since the pace of software generation is only accelerating, the demand for developers isn’t going anywhere.

I currently side pretty firmly with camp two, but I find myself disagreeing with the commonly stated reasons why. There’s a refrain I keep hearing that goes something like “We may not be writing code manually in the future, but we’ll still have to know how to read and understand it.” And, honestly, I think that might be completely wrong. I think some of the skills we consider *fundamental* to this profession are about to stop being fundamental.

Layers of abstraction are only additive.

I don’t have to understand machine code to write a web application. I don’t need to know the difference between the heap and the stack to build a feature that ships to production and works correctly for thousands of users. These were once considered essential knowledge for anyone who called themselves a programmer, and for a certain era, they genuinely were. But higher-level languages abstracted those concerns away, and now only a relatively niche set of engineers deals with them on a daily basis. The rest of us are standing on top of that work, often without even thinking about it.

AI agents have gotten good enough at generating application code that it’s not unreasonable to think that the ability to read and compose valid code in a programming language, as a standalone skill, may wither and die in the same way that the ability to write assembly has for most practitioners. But that doesn’t leave us with nothing to do. We’ll need a new technical toolset built around guiding the creation of software that actually works at scale for human users, and around validating the outputs of these AI systems through methods that are more abstract than “reading the code” but still rigorous.
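To make that slightly more concrete, one plausible shape for that kind of validation is property-based testing: stating the behaviors we actually care about and letting a tool hammer the black box with generated inputs, no code-reading required. Here’s a minimal sketch using Python’s hypothesis library; `dedupe_preserving_order` is a made-up stand-in for a function an agent might have generated, which in practice we’d import unread.

```python
# A sketch of implementation-blind validation: we never read the function's
# source, only pin down its observable behavior with generated inputs.
from hypothesis import given, strategies as st


def dedupe_preserving_order(xs):
    """Stand-in for AI-generated code; in practice we'd import it unread."""
    seen = set()
    return [x for x in xs if not (x in seen or seen.add(x))]


@given(st.lists(st.integers()))
def test_keeps_exactly_one_of_each(xs):
    result = dedupe_preserving_order(xs)
    assert len(result) == len(set(result))  # no duplicates survive
    assert set(result) == set(xs)           # nothing lost, nothing invented


@given(st.lists(st.integers()))
def test_preserves_relative_order(xs):
    result = dedupe_preserving_order(xs)
    it = iter(xs)
    # `x in it` consumes the iterator, so this checks that result
    # is a subsequence of xs, i.e. relative order is preserved.
    assert all(x in it for x in result)
```

The point isn’t that this particular tool is the answer; it’s that every check above constrains behavior without anyone ever looking at the implementation.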

Chefs don’t generate their own raw ingredients (anymore).

This is a normal trend across industries as they mature. We don’t expect chefs to be skilled at slaughtering animals or harvesting and grinding their own spices. Instead, we expect them to understand flavor, technique, and composition. We expect them to combine the right ingredients into a good dish and to have the culinary intuition to know what new flavors and methods will work together before trying them. A head chef is often not the one cooking at all. They’re merely specifying the menu and overseeing the execution, including validating that the dishes sent out meet their standards.

The raw material processing that was once inseparable from cooking has been abstracted into a supply chain. Today, nobody thinks less of a chef because they can’t dress a chicken (although there is certainly now a niche within the industry for restaurateurs who maintain close ties with their local supply chains). Most of our food itself is also mass-produced by machines with nary a chef in sight.

Also, farmers and butchers haven’t disappeared. They’ve just become more specialized roles within the food industry, and the people who do that work well are valued members of the culinary ecosystem. Skilled programmers will be needed for a while yet, just not nearly as many as the industry and academia have been pumping out lately.

I completely understand the grief that many people who have spent their careers honing their coding craft are feeling right now over the potential loss or diminishment of that skill. There’s no reason you can’t still spend time on your artisanal, locally-sourced, hand-crafted, all-natural algorithms, though. Nobody’s saying you’re not allowed to do that anymore. Just maybe not for a lucrative salary. That’s scary, but it doesn’t change where things are headed.

This is the real jagged frontier.

This is the part I’m the least certain about: if we’re not reading the code to verify it works as intended, how *do* we verify that? I don’t think Test Driven Development is a real answer, not if we’re not reading the tests either. Agents validating other agents? Something something Spider-Man-pointing-at-Spider-Man meme. Sandboxes, sure, but that’s not a verification strategy, it’s a risk mitigation strategy.

At some point, I think a human needs to use the software and decide whether it works or not. That’s what we do now anyway. No matter how talented the developer, there have always been bugs, vulnerabilities, uncovered edge cases, unexpected side effects. Eventually, someone has to try the thing out in the real world and discover if there are problems. We may not be the ones pushing the fixes anymore, but we still need to be able to identify problems and confirm when they’re resolved.

And look, that’s still deeply technical work. It requires understanding system architecture, failure modes, performance characteristics, and all the messy realities of software running in production. It’s just not the same technical work as knowing how to write a for loop or debug a null pointer exception. If the future of software looks a lot more like QA + DevOps wizardry, well, we’ve been underinvesting in that skillset for a long time anyway. Seriously, pick any tech company. What’s the ratio of software developers to QA and Ops staff?

It’s an opportunity for upcoming generations.

Unfortunately, in the short term, many junior developers will struggle without intentional investment in their growth. Universities will keep teaching data structures and algorithms courses long after the practical need for most graduates to hand-write sorting algorithms evaporates. That’s because curriculum changes move at geological speed relative to the tech industry. But I see that lag as an opportunity for enterprising people willing to learn how to pilot these tools and develop new methodologies for crafting reliable software without needing to “know how to code” in the traditional sense.

Twenty-five years ago, it was entirely possible to become a self-taught developer without a computer science degree (citation: me). The web was young, the barriers were low, and curiosity plus persistence could get you a career. I think we might be approaching a similar inflection point. A new generation of self-taught AI-era software practitioners, who deeply understand digital systems and have mastered the art of guiding and validating AI-generated software, can build real careers without ever learning to write a function from scratch.

They may even have an advantage over the old heads in 10-20 years, because they’ll have been trained to think about software in a way that’s more abstract and less beholden to the details of a particular implementation. Those of us who cling too tightly to the old ways may simply not keep up. For about a decade already, kids in grade school have been learning this kind of thinking from no/low-code tools like Scratch and Code.org. They’re going to be well-adapted when this boundary shifts.

Looking Ahead

Obviously, I don’t know where the new boundary will land for what counts as “fundamental” knowledge in this field. But I’m confident that the boundary is moving, and that the people who thrive will be the ones who are paying attention to *where* it’s moving rather than insisting (or hoping) it won’t move too far.
