Here’s How I Became Open to Using AI

In what feels like the blink of an eye, the tech world has once again transformed. We are now living in an age where we can have a competent conversation with a computer. Not only that, but these same conversations are rapidly becoming a source of assistance, if not an entire livelihood. I've integrated AI into my own workflows, and it continues to increase my velocity and knowledge. It's something I imagined happening after I turned 50. If I'm 25 now, can you imagine where we'll be in double that time?

But it wasn’t always this way. A few short years ago, I was completely against the idea of even using AI as a tool, much less to do work. So what happened? Why the sudden switch, and what led to me becoming open to using AI?

The Release Of ChatGPT

I'll never forget the waves ChatGPT made when it was first released. It was late 2022, and I was in the final year of my undergraduate degree. The first few messages began to make the rounds in our university's WhatsApp groups — something about it being able to "help" with our homework. By the end of the semester, it was all the rage. ChatGPT could code at a high enough level that people were outright asking it to complete assignments.

Personally, I couldn't have cared less. I was already halfway through my internship here, and I valued learning things my way. I felt that using ChatGPT in any capacity would significantly hinder my ability to make what I learned stick in my mind — a traditionalist approach, if I may.

At the time, I was working on a senior design project with some friends. One insisted that we give ChatGPT a shot in our codebase to see what it could do. It seemed like a good idea in theory, but my concerns centered on whether it could produce high-quality code.

After a week or two of discussion, we tried it out. And it further drove me away from becoming open to using AI.

Using ChatGPT to Build

We were building a Wordle clone, and there was a feature another teammate and I were implementing. He had proactively taken on some of the logic and, in the process, chose to use ChatGPT to help. By the next day, his work was complete.

I looked it over; the code seemed good. It matched our existing patterns and returned the correct object. ChatGPT had exceeded my expectations. Now it was time to test. In our shared Replit project, I ran the app and was greeted with a series of errors. Why? There were no syntax errors; everything was written correctly. Replit wasn't complaining about anything being broken.

Something in the back of my mind said, “It’s gotta be the generated code.” I decoupled the code from its entry point as a test and replaced its object with a placeholder. The website immediately started.

I wanted to attempt to salvage the code that we had in place. We couldn’t figure out what the exact issue was, so we ran it through ChatGPT again. This time, we discovered new errors in addition to the previous ones that still weren’t resolved. Thankfully, the point of failure became more obvious, so we asked ChatGPT to fix the problem again. Once again, additional errors were layered on top of the previous set. We ended up rewriting most of the generated code to solve the problem so we could continue.

From that point forward, I mentally committed myself to never using an AI model to write code.

Reflecting on What Changed

In hindsight, the conclusion I reached was largely based on the frustrations of that experience. Here's what I didn't consider: while ChatGPT might not have been able to generate large amounts of code, it could write legible code. In smaller contexts, and in smaller chunks, ChatGPT could be a great assistant. It also had the potential to explain why it was suggesting certain code, since it could understand the nuances of a given block and make contextually sensible changes. The only thing it couldn't do consistently was write working code, and that was what kept me from becoming open to using AI.

Fast forward to 2024. AI had become common in the office, with most people turning to it to solve problems quickly. I acknowledged its potential but feared becoming bogged down by the mistakes it would make. I raised these concerns with many teammates and coworkers, and one theme rang true: use AI as an assistant for smaller blocks of code, not an entire context. My teammate Michael suggested I start by asking it to refactor a chunk of code I was working on, after I had fleshed it out.

I'm not one to shy away from revisiting something if there's a chance it'll change my perspective. So I took the advice: I completed the story I was working on and prompted ChatGPT to suggest a possible refactor.

By the time I merged my PR, it contained virtually the same code ChatGPT had returned.

I found that using AI as an assistant rather than a generator was extremely beneficial. The code it suggested was super clean, made sense, and, most importantly, worked. On top of that, it explained what it was doing and why, which taught me something new. It was around this time that I adopted the mindset that AI can act as an advanced search engine. That got me thinking: in what ways could I speed up both my work and my learning?

Full Steam Ahead

[Image: a ChatGPT prompt and its response. One of my more recent queries in my journey of becoming open to using AI.]

I tried AI a few more times, cautiously, in my work. I eventually had the opportunity to integrate it into my flow when I switched to working on our team's product backend. Much of it was new to me and didn't make a lot of sense, and I struggled to research the what and the why. The more I dove headfirst into the codebase, the more I asked ChatGPT to walk me through it, and the quicker I learned. This turned into a positive feedback loop, and I found myself increasingly open to using AI.

I honestly believe, a year later, that my ability to pick up as much as I have is thanks in part to my use of ChatGPT. I would prompt it in the format of, "I need X / I want to do X. Can you suggest code, and then explain why?" and receive a detailed breakdown of what was happening. Of course, ChatGPT isn't perfect and still has a slight tendency to hallucinate. Despite this, I've found it invaluable to my process. I can't imagine developing without it anymore.

Becoming Open To Using AI

I can't imagine anyone today being as unsure as I once was, but there's never a 0% chance of anything. My suggestion is this: start small. Whichever AI model you choose for development, it rarely needs more than a few lines of code. If you can identify the exact spot you need to work on, give the model only that. In my experience, ChatGPT is exceedingly capable of determining what it needs to do and returning a viable solution. Also, always ask the model, "Why?" The answer will help you understand what's going on and leave an impression you can apply elsewhere. This will build confidence in your own coding abilities, because you'll be learning constantly.

I've personally been using AI in this pattern, and it has taught me multitudes. So much so, in fact, that I was confident enough to build my own working API with minimal assistance in under 32 hours. But that's a story for another blog post.

 