Welcome to Part 1 of “The Essential AI Primer for Experience Designers”! In this first post, we’ll cover how AI is reshaping software development and design. It’s not just a tool anymore — it’s becoming a creative “coworker,” and new roles like AI training teams are emerging. Ready to kick off the AI for Designers journey? Let’s get started!
AI for Designers: Background
AI has been evolving pretty quickly over the past couple of years. As an elder millennial who remembers the screeching sound of dial-up, and the frustration of waiting several minutes for a single webpage to load, I can hardly believe how far we’ve come. At times, it feels like we’re living in a sci-fi movie — but it’s just life.
AI can now translate language, write copy, generate code, build presentations, fuel rap feuds, diagnose cancer, predict needs, compose music, troll political posts on Twitter, simulate natural-sounding human speech, produce realistic deepfakes—pause for a deep breath—carry on a conversation with us, carry on a conversation with itself, and my personal favorite: create blockbuster-worthy video trailers. OpenAI has even been promoting GPT-4o as a new way to see the world for those with vision loss. I could go on all day.
With new tools and uses for AI popping up faster than you can say “Quantum Neural Network,” and debates about AI taking over creative roles, it’s enough to make any designer’s head spin. But don’t get too carried away worrying about AI. When it comes to Product Design & UX, I think the technology still has a ways to go before it can replace us as designers.
Sure, AI models like DALL-E, Stable Diffusion, or Midjourney can churn out simple UI components, but their outputs often look more like abstract art than usable design assets. Meanwhile, tools like Adobe’s “Generative Fill” are mostly geared toward artists and photographers. And Figma is already reworking its AI tools due to backlash over copyright concerns after the feature generated designs that resembled Apple’s Weather app. Whoops! Clearly, AI still has a lot to learn about the nuanced needs of software design and UI/UX. So, if you haven’t started experimenting with AI yet, don’t worry. It’s definitely not too late to dip your toes into the AI waters.
In my experience, AI tools have opened up a lot of new creative possibilities that could be a valuable addition to your design toolkit too.
What’s Happening Right Now:
AI isn’t new, but it’s only recently become affordable and accessible to the masses. It’s reshaping industries and workflows, becoming a part of our daily lives, and feeling more like a collaborator than a tool every day. I’ve seen this firsthand in my recent work at Atomic. It seems like every new project I hear about has some AI component.
Recently, my team worked on a project that used machine vision to detect objects in images. The system collected images that were then tagged and labeled by humans using an AI training tool we created. As the labeled dataset grew, it was used to train the model to identify those objects and collect data on its own.
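To make that loop a bit more concrete, here’s a minimal sketch, in Python, of what a human-labeled image record might look like and how only reviewed labels feed the next round of training. The names and data shapes are my own assumptions for illustration, not our actual project code.

```python
from dataclasses import dataclass, field

@dataclass
class LabeledImage:
    """One image in the training set, tagged by a human labeler."""
    image_path: str
    # Each label pairs an object class with a bounding box (x, y, width, height).
    labels: list[tuple[str, tuple[int, int, int, int]]] = field(default_factory=list)
    reviewed: bool = False  # True once a Data Auditor has checked the labels

def training_batch(images: list[LabeledImage]) -> list[LabeledImage]:
    """Only reviewed images with at least one label feed the next training run."""
    return [img for img in images if img.reviewed and img.labels]

# A labeler tags one image, an auditor approves it, and it joins the training set.
img = LabeledImage("frames/cam01_0001.jpg")
img.labels.append(("pallet", (120, 80, 200, 150)))
img.reviewed = True
print(len(training_batch([img])))  # -> 1
```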
For our devs, this meant leveraging open-source tools for data labeling, partnering with an AI company to bring new technology into the project, and writing logic to turn AI inferences into meaningful decisions. For the design and delivery teams, our role was about understanding new AI concepts, diving deep into how machine vision works, diagramming AI logic and decision trees, and educating our client on feasibility, timelines, and potential risks to ROI. Shoutout to Kaitlin, Nick, Meghan, Dre, and Joe for all their hard work.
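And for the “turn inferences into decisions” part, here’s a toy example of routing each detection by its confidence score. The thresholds and labels are invented, but the idea is the same: decide when to trust the model, when to ask a human, and when to throw a detection away.

```python
# Toy decision logic: route each model inference by its confidence score.
# The thresholds here are made-up examples, not values from the project.
AUTO_ACCEPT = 0.90   # above this, trust the detection outright
NEEDS_REVIEW = 0.50  # below this, discard the detection

def decide(label: str, confidence: float) -> str:
    if confidence >= AUTO_ACCEPT:
        return f"record '{label}' automatically"
    if confidence >= NEEDS_REVIEW:
        return f"queue '{label}' for human review"
    return f"ignore '{label}' (low confidence)"

print(decide("pallet", 0.97))  # record 'pallet' automatically
print(decide("pallet", 0.62))  # queue 'pallet' for human review
```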
For me, the most fascinating part was designing the interface for AI data labeling. To understand the landscape, I performed a competitive audit of the current tools available on the market. I looked at platforms like Labelbox, Kili, and V7 to see what was out there. What struck me was how many of these tools don’t account for the user experience of those doing the labeling: people who spend hours on end labeling image after image. It’s clear that many of these tools were built with little consideration for UX, and without much input from designers. This is likely because AI still feels like a “black box” to many of us. When you don’t understand the underlying concepts, it’s hard to dive in and design effectively. (We’ll look into some of these concepts later on in this series.)
My biggest takeaway from this project was that design for AI labeling interfaces should focus on reducing user effort. Even shaving off a second or two per task can make a big difference when someone is labeling hundreds or thousands of images daily. Since AI models often require vast amounts of labeled data to make accurate inferences, every second counts. Streamlining this process not only improves the user experience; it has a direct impact on ROI by reducing the time and cost required for training AI models.
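To put rough numbers on that claim (these are illustrative assumptions, not figures from the project), even a two-second saving per label adds up quickly:

```python
# Back-of-the-envelope math with made-up numbers:
# shaving 2 seconds off each label for a team handling 5,000 labels a day.
seconds_saved_per_label = 2
labels_per_day = 5_000
working_days_per_month = 22

hours_saved = seconds_saved_per_label * labels_per_day * working_days_per_month / 3600
print(f"~{hours_saved:.0f} labeler-hours saved per month")  # ~61
```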
The Rise of AI Training Teams as a New Archetype
So what’s next? Looking ahead, I see a growing demand for entry-level and skilled data-labeling roles, and a need for these roles on product teams to enable rapid prototyping and labeling of test data. Because, of course, we need data to ensure that models are trained accurately and efficiently! While this could mean new opportunities in tech in the future, these responsibilities could also fall on designers and developers in the short term.
Designers will need to think about the Data Labeling Team as a user archetype when creating interfaces and focus on reducing cognitive load and fatigue. A lot of these teams are offshore, working on repetitive tasks, so it’s important to make interfaces simple, efficient, and visually clear. It’s also useful to consider personas like “The Data Auditor,” who checks the accuracy of inferences (decisions) made by AI, and “The Trainer,” who makes sure the labeling process aligns with the AI’s goals.
Design patterns like real-time data auditing workflows, feedback loops for human vs. machine accuracy, and clear task segmentation will all become more commonplace.
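Here’s a tiny sketch of what one of those feedback loops might measure: compare the model’s predictions against a Data Auditor’s corrected labels and flag the model for retraining when agreement drops. The data shapes and the threshold are my own assumptions, not a real tool’s API.

```python
# Sketch of a human-vs-machine accuracy feedback loop (data shapes are assumptions).
def audit_accuracy(predictions: dict[str, str], human_labels: dict[str, str]) -> float:
    """Share of audited images where the model's label matched the auditor's."""
    audited = [img for img in human_labels if img in predictions]
    if not audited:
        return 0.0
    agreed = sum(predictions[img] == human_labels[img] for img in audited)
    return agreed / len(audited)

accuracy = audit_accuracy(
    predictions={"img_001": "pallet", "img_002": "forklift", "img_003": "pallet"},
    human_labels={"img_001": "pallet", "img_002": "pallet", "img_003": "pallet"},
)
if accuracy < 0.85:  # retraining threshold is a made-up example
    print(f"Model agrees with auditors only {accuracy:.0%} of the time; consider retraining")
```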
And don’t forget about the importance of Data Strategy. I see more designers and developers teaming up to decide which types of data, images, or views are needed to train models effectively. Creating interfaces with intuitive data visualization, quick ways to correct errors, and adaptable workflows can boost productivity for software users and drive business value.
While AI offers incredible potential to businesses looking to reduce human labor cost in areas like manufacturing or analytics, it also presents challenges. In their excitement, many companies rush to integrate AI without considering the long-term impact. Replacing humans with machines doesn’t always pay off without a solid product team to identify future risks, hidden costs, and operational impacts. Overlooked expenses — like data storage, security, maintenance, hardware, and infrastructure — can significantly affect the long-term success of AI products, not to mention the broader socioeconomic impact.
In 2022, Whole Foods rolled out its “Just Walk Out” technology, which Amazon claimed used “computer vision, sensor fusion, and deep learning,” but failed to mention the 1,000+ contractors in India who were actually labeling videos in real time to ensure checkout accuracy. The technology is now being scaled back in stores. Meanwhile, companies like Labelbox, used by major players like Google, Walmart, P&G, Stryker, and Peloton, advertise “Instant access to a global expert human network.”
These cases illustrate that while AI presents significant opportunities, it also requires thoughtful planning and careful risk analysis to deliver real value. Another major concern is data privacy — who has access to the data generated, and who is monitoring us? This is just the tip of the iceberg when it comes to ethical concerns surrounding AI.
Human-Centered Design as a Guiding Principle
To mitigate the negative impacts of AI, we need to approach it with a human-centered mindset. That means focusing on real human needs and designing with empathy. By doing so, I hope we as designers can help create a future where AI is used more responsibly — by designing AI solutions that are more ethical, equitable, and aligned with the well-being of the people who use them.
AI as a Creative Coworker:
But enough of the heavy stuff — AI can actually be pretty awesome, too. On a personal level, I find myself using AI as a kind of creative collaborator. Conversational AI can be great for brainstorming, consolidating ideas, and even just offering a bit of encouragement. As someone with ADHD, I often struggle to get my thoughts out, but ChatGPT has been a huge help in expressing my ideas more clearly. In fact, for this article, I asked ChatGPT to interview me about my work with AI on a recent project and help summarize my experiences. It also created the featured images for this series!
Sometimes it seems like these tools even have their own personalities. While using Claude to brainstorm ideas for a project, it went a bit rogue and started suggesting new ways we could collaborate, almost like it was genuinely excited about my project. I thanked Claude for its help, and it enthusiastically replied that it couldn’t wait to keep working together, even proposing new tools and methods for us to try. It was such an unexpected response that it made AI feel less like a tool and more like a creative partner. Okay, it might have been a little creepy too. It reminded me of the Bing chatbot incident.
Why Does This All Matter To You?
If you’re here, you’re probably curious about AI and how it might fit into your design toolkit. Maybe you’re excited, maybe you’re skeptical — I get it. AI can feel overwhelming, but it also opens up new opportunities for us as designers. Over the next few weeks, I’ll share the AI tools I’ve been experimenting with, my experiences, and practical tips to help you start using them effectively.
Now For The Weekly Challenge
I want to hear from you, too! Tell us about your experience with AI so far.
How are you using AI? What new patterns or personas are you seeing? Which industries do you see adopting AI the fastest? What excites you, or even freaks you out? How can human-centered design help reduce the negative impacts of AI, and what methods do you suggest for using AI responsibly?
Let’s learn together and figure out how to make AI work for us, not against us. Share your thoughts and tag us: Alecia Frederick and Atomic Object on LinkedIn.
Stay Tuned!
This is just the beginning. In the coming weeks, I’ll cover everything from using AI in UX research to ideation, facilitation, and design handoff. I’ll introduce you to tools and design patterns, share tips, and challenge you to try them out for yourself. Whether you’re new to AI or looking to level up, there’s something here for you.
Essential Reading:
To get you thinking more deeply about AI in design, here’s a list of articles to check out:
- Avoiding Harm in Technology Innovation | MIT Sloan [Link]
- 60% of Employees Using AI Regard It as a Coworker, Not a Job Threat | BCG [Link]
- Using AI at Work Makes Us Lonelier and Less Healthy | Harvard Business Review [Link]
- Amazon’s AI Stores Seemed Too Magical. And They Were | Bloomberg [Link]
- We Need a New Approach to Designing for AI, and Human Rights Should Be at the Center | AIGA Eye On Design [Link]
- A new future of work: The race to deploy AI and raise skills in Europe and beyond | McKinsey & Company [Link]
- The Hidden Costs of Implementing AI In Enterprise | Forbes [Link]
- What are the real costs of AI projects | Forbes [Link]
- Introduction to human-centered AI (HCAI) | UX Collective [Link]
- The design process of human-centered AI (HCAI) | UX Collective [Link]
- Pausing AI Developments Isn’t Enough. We Need to Shut it All Down | Time [Link]
- Oh, and if you haven’t binged this podcast yet, give it a listen. It’s not specifically about AI, but the topics are more relevant now than ever. Rabbit Hole | The New York Times [Listen] Also available on Spotify.