Article summary
Remember in math class how your teacher wouldn’t let you use a calculator? Maybe they even tried to justify it by saying something like, “Some day you won’t have a calculator…” But we all knew that was nonsense. The real reason was that using a calculator would short-circuit your ability or motivation to learn the underlying concepts. There’s nothing inherently wrong with using a calculator. And, once you know what you’re doing, it can save a lot of time. AI tools are kind of like this.
It’s easy to turn your brain off and simply accept what they say. Although AI might seem like a miraculous time saver, over-reliance on it will rot your brain! But there are some things you can do to take care of your brain cells.
Be able to explain it.
“I don’t know what this code does; I got it from ChatGPT.” Have you ever had this thought or said these words to someone else? This is not an explanation – it is a deflection! Of course, this is just a more modern version of saying, “I don’t know what this code does; I just pasted it from StackOverflow.” The blessing and curse of LLMs is that they give you more tailored code to paste blindly.
What has not changed is that incorporating code you don’t understand has only negative consequences. Maybe you think that you need to get some code committed as fast as possible to meet some deadline. But this will just reinforce a false expectation about how quickly you can complete tasks, and you’ll be in exactly the same situation next time.
But if it works, and all the tests pass, mission accomplished! Right? Negative! There are lots of ways to produce code that works today. If you’re creating a throw-away prototype, then it may make sense to move fast and break things. But if your goal is to produce something maintainable, you’ll have to apply some wisdom.
This is something that the machines don’t have, even artificially. LLMs hold mind-boggling amounts of information, but they are susceptible to the bandwagon logic that pervades their training material: they assume the most popular answer is correct. And as a side effect of chatbots’ attempts to sound more human, they will fudge their answers toward something short of best.
If you don’t understand the snippet that the robot gave you, how will you know if it integrates well with existing code? AI doesn’t know about your project’s specific conventions, trajectory, and historical decisions. You would need to take these into account whether writing code from scratch or having it handed to you.
Do your homework.
When I was in college I took an astronomy class. In a homework assignment, we had to calculate the distance to a star using its redshift, or something. I plugged some numbers into some equations and (using a calculator!) came to an answer. “Looks good to me!” I said, and turned in the homework. It turns out I was over by several orders of magnitude – but what did I know, space is big! I was using tools without really knowing what I was doing.
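For the curious, the calculation I was fumbling was something like Hubble’s law: for small redshifts, recession velocity is roughly v = c·z, and distance is d = v / H0. Here is a minimal sketch (the Hubble constant value is approximate, and the linear approximation only holds for nearby objects):

```python
# Rough distance-from-redshift estimate via Hubble's law.
# Assumptions: small z (so v = c * z is a fine approximation),
# and H0 = 70 km/s/Mpc, a commonly quoted approximate value.
C_KM_S = 299_792.458   # speed of light, km/s
H0 = 70.0              # Hubble constant, km/s per megaparsec (approximate)

def distance_mpc(z):
    """Approximate distance in megaparsecs for a small redshift z."""
    v = C_KM_S * z     # recession velocity, km/s
    return v / H0      # distance, megaparsecs

# z = 0.01 should land in the tens of megaparsecs, not thousands.
print(round(distance_mpc(0.01), 1))
```

A quick sanity check like this (does z = 0.01 land in the tens of megaparsecs, or the thousands?) is exactly the order-of-magnitude reasoning I was missing.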
Use the output from an AI tool as a starting point, rather than as a substitute for learning. The first thing it suggests often works but could be improved. Use what you already know to help it refine its answer. Look up what you don’t know. Reading the documentation remains one of the best ways to learn how a system is meant to be used.
By accepting AI-generated code at face value, you rob yourself of the opportunity to level up. Certainly some problems have been solved to death, and applying some AI will save you a lot of boilerplate. But other problems are worth taking a crack at on your own before comparing your answer to an AI’s.