Designing AI Suggestions for Emotional Contexts: What We Learned

When you’re designing AI features for a reflection platform, you face a specific challenge: how do you add AI assistance to sensitive moments without making things worse?

Here’s the context. People are writing deeply personal stories about moments when someone they know was at their best. These aren’t performance reviews—they’re reflections meant to help someone understand their impact on others. Writers often feel uncertain: *Is this good enough? Will this actually help?*

We saw an opportunity for AI to help. But the constraint was clear: the AI had to enhance what people wrote, not replace it or take away from the experience.

Three Design Principles

After prototyping and client discussions, we landed on three principles:

Make it optional.

AI assistance can’t be mandatory. We designed a “Get Suggestions” button with a sparkle icon and clear messaging about what to expect. In emotional contexts, forced AI feels intrusive. Optional means respectful.

Show how it works.

We added a “View coaching system prompt” link showing exactly what instructions the AI followed. When people understand *why* the AI suggests something, they can decide if it’s useful.

Don’t rewrite anything.

The AI highlights specific phrases and asks questions: “Try describing how they ‘stepped up’—what did they do or say? How did it make you feel?” The moment AI rewrites someone’s words, it stops being their story.

The Interaction Design

We mapped the writer’s emotional journey: initial hesitation, blank page anxiety, mid-writing uncertainty, pre-submission doubt. The AI needed to address uncertainty without amplifying anxiety.

We prototyped four states:

1. Default: “Do you want the Digital Coach to help you reflect more fully?” with clear messaging that original text won’t change.
2. Processing: “Analyzing your story…” with the textarea read-only to prevent confusion.
3. Suggestions: Visual highlights on one or two phrases. Click for contextual tooltips.
4. Re-run: After edits, offer to run again—but don’t create a perfectionism loop.
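To make that flow concrete, here is a minimal TypeScript sketch of how the four states above might be modeled. The names (CoachState, Suggestion, isTextareaReadOnly) are hypothetical, not taken from the actual prototype:

```typescript
// Hypothetical model of the four coach states; names are illustrative only.
interface Suggestion {
  phrase: string;   // the highlighted phrase, e.g. "stepped up during a team meeting"
  question: string; // the coaching question shown in the tooltip
}

type CoachState =
  | { kind: "default" }                                // invitation to use the Digital Coach
  | { kind: "processing" }                             // "Analyzing your story…", textarea read-only
  | { kind: "suggestions"; suggestions: Suggestion[] } // one or two highlighted phrases
  | { kind: "rerun" };                                 // after edits, offer to run again

// The story text is only locked while the coach is analyzing it.
function isTextareaReadOnly(state: CoachState): boolean {
  return state.kind === "processing";
}
```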

We limited suggestions to two maximum. More than that feels overwhelming. Two feels more addressable. When we tested with five or six highlights, writers felt like their entire story was being critiqued. They’d see all the yellow marks and think “I did this all wrong.” But with one or two highlights, the response changed to “Oh, I could add a bit more detail here.” It shifted from feeling judged to feeling supported. The AI went from being a critical editor to being a thoughtful reader asking for clarification.
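As a rough sketch of that cap (a hypothetical helper, not the production code), the idea is simply to keep the first one or two candidates and drop the rest:

```typescript
// Hypothetical cap on how many highlights the coach surfaces at once.
const MAX_SUGGESTIONS = 2;

function limitSuggestions<T>(candidates: T[]): T[] {
  // Keep at most two so the writer feels supported, not critiqued.
  return candidates.slice(0, MAX_SUGGESTIONS);
}
```

In practice the candidates would be ranked first; this sketch assumes they already arrive in priority order.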

We also chose to highlight phrases rather than individual words. Highlighting single words like “stepped” or “helped” felt like nitpicking grammar. But highlighting a phrase like “stepped up during a team meeting” gave context for what we were asking about. It made the suggestion feel like it was about enriching the story, not fixing mistakes.
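A small illustration of the difference, assuming a hypothetical findPhraseSpan helper: the UI highlights the whole phrase as one span instead of flagging a single word.

```typescript
// Hypothetical helper: locate a suggested phrase so the UI can highlight
// the full phrase rather than a single word.
function findPhraseSpan(
  story: string,
  phrase: string
): { start: number; end: number } | null {
  const start = story.indexOf(phrase);
  return start === -1 ? null : { start, end: start + phrase.length };
}

const story = "She stepped up during a team meeting and helped us refocus.";
console.log(findPhraseSpan(story, "stepped up during a team meeting"));
// { start: 4, end: 36 }: one span with enough context to ask a useful question
```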

Visual Design Matters

We tested different approaches:

– Red underlines → felt like errors (too negative)
– Green highlights → felt like “correct answers” (created pressure)
– Yellow highlights → felt neutral (something to consider, not something wrong)

We designed tooltips to be conversational. Questions instead of commands. One to two sentences maximum.

The prompts reinforced the core purpose: helping writers capture moments when someone was at their best. Instead of generic writing advice, we asked questions that drew out specific details about positive impact. “What did they do that made you think they were at their best?” or “How did it make you feel?” These questions kept writers focused on the strengths-based reflection the platform was designed for.
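A minimal sketch of those constraints, with the example questions echoed from the text (the real coaching system prompt isn't reproduced here, and the guardrail function is a hypothetical illustration):

```typescript
// Example tooltip questions echoed from the text; not the actual system prompt.
const tooltipQuestions: string[] = [
  "What did they do that made you think they were at their best?",
  "How did it make you feel?",
];

// Hypothetical guardrail: tooltips are questions, not commands,
// and run one to two sentences at most.
function isValidTooltip(text: string): boolean {
  const sentences = (text.match(/[.!?]+/g) ?? []).length;
  return text.trim().endsWith("?") && sentences <= 2;
}
```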

Key Learnings

Here are a few key things we learned.

Opt-in beats automatic. Early prototypes ran AI analysis automatically. Testers found it unsettling—like someone reading over their shoulder. Making it opt-in removed that feeling.

Transparency builds trust. The “View system prompt” feature was contested internally. But in testing, simply knowing they *could* see it built trust with storytellers. Transparency is a feature, even if rarely used.

Less is more. We tried highlighting five or six phrases or words. It felt overwhelming. One to two suggestions made the AI feel like a thoughtful coach, not a demanding editor.

Visual design carries weight. Yellow highlights versus red underlines fundamentally change how people perceive AI intent. Design for encouragement, not correction.

Control must be obvious. Every interaction included clear escape routes: dismiss buttons, a clear-all option, direct editing. When people felt overpowered by the AI, they were more likely to disengage.
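Those escape routes can be sketched as plain, predictable actions (hypothetical helpers, generic over whatever suggestion type the prototype uses):

```typescript
// Hypothetical escape-route actions: dismiss one highlight, or clear them all.
// Direct editing of the story is never blocked outside the processing state.
function dismissSuggestion<T>(current: T[], index: number): T[] {
  return current.filter((_, i) => i !== index);
}

function clearAllSuggestions<T>(): T[] {
  return [];
}
```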

What We Built

We haven’t shipped this yet. The core submission experience took priority, and we want to validate the base workflow first.

But the design work gave us:

– A clearer philosophy for AI in sensitive contexts
– Patterns for opt-in AI and transparent prompting
– A prototype ready for testing
– Deeper thinking about what AI should and shouldn’t do in high-stakes experiences

The Takeaway

AI can enhance user experiences, but in more sensitive contexts, design makes the difference between supportive and intrusive.

Building trust with AI means: putting user agency first, making AI behavior more transparent, preserving the author’s voice, and designing systems that encourage rather than correct.

These patterns help us think through how AI can enhance—not overshadow—human expression in personal contexts.
