Is AI an Existential Threat to Tech Workers?

Artificial Intelligence has experienced a resurgence recently with the popularity of large language models (LLMs) like OpenAI’s ChatGPT. The speed with which LLMs can solve problems that depend on recalling large amounts of information is certainly remarkable. These LLMs power tools like GitHub Copilot, which can generate large, modular blocks of functionality without requiring specialized knowledge from software engineers. This has led some people to wonder whether these tools will soon take the next step and commoditize entire professions, much as the Industrial Revolution did.

Will tech workers be replaced by AI? Should my software engineer colleagues look for a way to re-skill? Are firms like Atomic Object about to be disrupted in a major way? I don’t think so. Here’s why.

Commodity firms should prepare to be commoditized.

Last year, I attended an HBR webinar titled “What Professional Service Firms Must Do to Thrive” by Ashish Nanda and Das Narayandas. I found their Professional Service Spectrum particularly illuminating. On one end of the spectrum are what Nanda and Narayandas call “commodity” firms. These firms solve simple, routine problems that require some engineering knowledge. These organizations are low-cost and typically very efficient. Stakeholders can work with these organizations when they have a clear, precise, and well-defined problem to solve. If AI is to disrupt any organizations in the near term, it would be commodity firms.

If the only skill you bring to the table is turning detailed specifications into a single-page JavaScript application, you probably should worry about being replaced by AI in the next five years. It is possible that AI bots will carry out large application maintenance projects in the near future. (I imagine Ruby version upgrades and library updates fall into this category.)

One of the reasons I am optimistic about Atomic’s future is that we aren’t a commodity firm. In fact, we regularly turn away commodity work because it generally commands a lower hourly rate than we can sustain. Less than 5% of our annual revenue comes from work within the commodity vertical.

Innovation firms are safe from AI, for now.

Most of our projects fall within what Nanda and Narayandas call the “procedural” vertical. The procedural vertical focuses on solving complex, interrelated problems that often involve many external integrations. These types of services require research, sense-making, synthesis, and judgment.

These situations are often rife with what computer scientists call “ill-defined problems.” In 1969, Newell and Simon used this term to describe problems that are not well-defined enough to be solved with traditional algorithms. They are often complex, vague, and difficult to understand, and they require what Newell and Simon called “heuristic problem-solving methods.” These are experience-based approaches, much like many of our human-centered design activities. This is work that an AI can’t currently do.

Successful completion of these activities requires what AI scientists call “broad-spectrum intelligence.” What we’ve seen so far in the AI and LLM space is broader than anything before it, but it’s still quite narrow. Proficiency in coding sits at the narrow end of that spectrum. Figuring out what solution to create for a given business problem requires broad-spectrum intelligence.

Inventing a completely new software application from nothing and delivering value to the market requires lateral thinking. You have to hold one viewpoint, take a couple of steps to the side, and then look at the whole context in a new light. This ability is currently beyond the AI models I’ve seen. Experts in the AI field think we are anywhere from 50 to 100 years away from seeing broad-spectrum AI. Most tech workers won’t be replaced by AI any time soon.

ChatGPT can’t innovate.

Recently, I heard my friend Mike van Lent refer to ChatGPT as “autocomplete on steroids.” I believe Mike: he has a Ph.D. in Computer Science from the University of Michigan and has been the CEO of an AI company for the last 15 years. His description also matches my experience using ChatGPT over the last few months.

The program will give you the information you ask for, but it won’t make connections between different problem areas for you. Nor will it work with you to understand the problem so you can find a solution together. ChatGPT won’t ask clarifying questions. It won’t question your assumptions. It won’t synthesize all these things into a product that meets abstract business needs. It won’t incorporate your input into the kinds of suggestions it makes. This is the kind of thinking required when working on an innovation project in the procedural vertical.

AI will make our work more valuable.

At Atomic, I get to work with some of the most thoughtful, razor-sharp folks I’ve ever met. In some respects, coding is the least valuable task they perform every day. The real value comes when they get face-to-face with clients to hash out real-world solutions to business problems. If we could somehow decrease the amount of coding necessary to implement those solutions, it would only make our work more potent and more valuable. I look forward to the day when my fellow Atoms leverage AI to speed up the delivery of reliable, intuitive custom software.

 