Have you been using a tool to record transcripts of your meetings? Options like MacWhisper, Limitless, or Granola promise: “Never take notes again!” Hook, line, and sinker — I was in.
Fast forward three months, and I had a growing pile of transcripts… but I still couldn’t remember which one held that topic I knew we’d discussed. My search mechanism was to hunt and peck through the transcripts, pasting them into ChatGPT. I liked ChatGPT’s answers when I pasted in a single transcript and asked a question. It just doesn’t scale well.
Meanwhile, as I tried to ease my AI-skeptic brain, I watched coworkers create an entire GitHub repository of prompts—everything from building project knowledge bases to drafting story cards. Inspired, I downloaded Cursor, gathered my transcripts into one place, and decided to try writing my own prompt.
## Thinking Like a Programmer: Inputs and Outputs
Naturally, I approached this like a coder. What are my inputs? A folder of meeting transcripts and the topic I’m trying to find. What’s the desired output? A structured, readable summary of what was said about that topic.
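The mental model can be sketched as a function, even though my actual "implementation" is a prompt, not code. This is a hypothetical Python sketch of the problem's shape (the names and the naive substring match are mine, purely for illustration):

```python
# Hypothetical sketch: inputs are a set of transcripts plus a topic;
# output is the set of passages that mention the topic.
def summarize_topic(transcripts: dict[str, str], topic: str) -> dict[str, list[str]]:
    """Map each transcript name to the lines that mention the topic."""
    return {
        name: [line for line in text.splitlines() if topic in line]
        for name, text in transcripts.items()
        if topic in text
    }

notes = {"standup.md": "We agreed the budget is final.\nOther chatter."}
print(summarize_topic(notes, "budget"))
# {'standup.md': ['We agreed the budget is final.']}
```

The prompt does the same thing, just with an LLM standing in for the line filter and doing the summarizing on top.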
With a little help from ChatGPT (and some good old-fashioned Googling), I put together my first attempt. At a coworker’s suggestion, I wrote it in Markdown, whose structure gives the AI cues about what’s most important.
Here’s how it started:
```markdown
You are an expert analyst reviewing a folder of meeting transcripts. I want you to summarize everything that was said, discussed, and decided regarding the topic: **"${user_input}"**.

Instructions:
- Look through **/Knowledge Base/Meeting Transcripts** (including any subfolders if applicable).
- Return a **structured summary** of all mentions of the topic, including:
  1. **Statements**: what people said about the topic (key opinions, concerns, suggestions).
  2. **Discussions**: back-and-forth conversations, debates, or clarifications involving the topic.
  3. **Decisions**: outcomes, agreements, or action items related to the topic.
- Include **timestamps or meeting names/dates** if available to indicate when key things were discussed or decided.
- If the topic appears in multiple meetings, group the insights by meeting or date.
- Be concise but detailed, aiming to give a clear understanding of how this topic evolved over time.
- If relevant, note any **unresolved issues or follow-up items** related to the topic.
```
I was off to the races. Now I just needed a test case. I remembered a meeting a few days ago where we had talked about a field in the data called `rname`. In Cursor’s agent chat, I typed `@generate-topic-summary on rname` and hit enter.
## The Limits of “Helpful” AI
As I watched the AI’s thought process, it immediately began taking liberties with the topic. It cheerfully informed me:
Let me also search for potential variations like ‘real name,’ ‘resource name,’ or other similar terms.
Crap. I get why it did that—but that wasn’t what I wanted. I knew the exact topic I was looking for. There was no need for creative interpretation.
So I added some constraints:
```markdown
Important constraints:
- Only look for **the exact term** "${user_input}" as written.
- **Do not interpret, expand, or guess** at similar terms or variations.
- If the term is not found, just say so — do not assume related topics.
- Do not convert abbreviations into full phrases unless I explicitly request that.
```
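In code terms, the constraint I wanted is the difference between a verbatim, whole-word match and a fuzzy one. A minimal sketch (not part of the prompt; the function name and regex approach are my own illustration):

```python
import re

def mentions_exact(term: str, text: str) -> bool:
    """True only if the term appears verbatim as a whole word,
    with no interpretation or expansion."""
    return re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE) is not None

print(mentions_exact("rname", "the rname field"))   # True
print(mentions_exact("rname", "rename the field"))  # False: no guessing
```

That strictness is exactly what tripped me up next.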
Round two. The AI responded: “Topic not found.” I was shocked.
Being AI-wary, I double-checked. I opened the transcript and searched manually. Sure enough—“rname” wasn’t in the text. But “name” was. Specifically, the transcript had recorded it as “our name.” Facepalm moment.
Of course! The transcription tool misheard “rname.” How would the AI know that?
## Adding Phonetic Awareness
For a moment, I thought that might be the end of my experiment. But I was having fun—and I wasn’t giving up that easily.
Not being an English major, and not knowing the terms for “sounds like,” I went back to ChatGPT and asked for help adjusting the prompt to account for topics that sound like the intended term. It gave me exactly what I needed: the concepts of homophones and phonetic matches. Here’s what I added:
```markdown
- Search for the exact text **plus** obvious homophones or common speech-to-text mistakes. Examples: “rname” ➜ “our name”, “B2C” ➜ “bee two see”.
- Limit yourself to *close* sound-alikes; do **not** drift to broad synonyms.
- In the final summary, label any match that came from a phonetic variant like this: `- (phonetic match: “our name”)`
```
Bingo. When I reran the prompt, it returned a summary for “rname” because it now knew to include “our name” as a valid match. And it flagged that match clearly in the output. This was the turning point. With that tweak, I finally got helpful summaries — even when the transcription wasn’t perfect.
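The prompt handles this with natural language, but the underlying idea can be sketched in Python. This is a rough stand-in for phonetic matching, not what the model actually does: strip spaces, lowercase, and compare edit similarity with the standard library's `difflib` (the function name and the 0.8 threshold are my own assumptions):

```python
from difflib import SequenceMatcher

def sounds_like(term: str, phrase: str, threshold: float = 0.8) -> bool:
    """Rough sound-alike check: compare the lowercased, space-stripped
    forms so a misheard 'our name' lines up against 'rname'."""
    a = term.lower().replace(" ", "")
    b = phrase.lower().replace(" ", "")
    return SequenceMatcher(None, a, b).ratio() >= threshold

print(sounds_like("rname", "our name"))   # True: close sound-alike
print(sounds_like("rname", "real name"))  # False: too broad a guess
```

Note how the threshold encodes the “close sound-alikes only, no broad synonyms” rule: “our name” passes, while a looser guess like “real name” does not.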
## Support for Multiple Formats
Slack wraps long lines awkwardly. Teams butchers Markdown headers. Sometimes I just want a clean answer in chat. So I added support for multiple output formats:
- `chat` – A tidy Markdown summary for the Cursor agent window.
- `md` – Full Markdown headings and bullets for documentation.
- `slack` – Uses Slack’s flavor of formatting (bold, monospace, line limits).
- `teams` – Avoids headers and tables; flattens content into simple lists.
- `short` – A compact, holistic summary when I just want the high points.
Here’s the switch I added at the top of the prompt:
```markdown
Topic: **${user_input}**
Output: **${out|chat}**
```
This small addition makes the prompt immediately reusable across formats—no copying, pasting, or reformatting needed. Whether I’m reading the summary in Cursor, posting in Slack, or pasting into a planning doc, it just works.
And to make each format behave correctly, I added detailed instructions inside the prompt for each output type:
````markdown
---
### Output rules – obey the selected `Output` value

#### chat (default)
- Return the summary directly in the Agent chat window using regular Markdown (headings, bullet lists, code blocks as needed).

#### slack
- Format the entire summary so it can be pasted into Slack **without breaking**.
- Use `*bold*`, `_italics_`, and `` `code` `` (Slack flavour).
- Avoid top-level headings (`#` H1); start with `*Summary for {Topic}*`.
- Keep lines ≤ 80 chars to prevent awkward wrapping.

#### teams
- Optimise for Microsoft Teams.
- Use Markdown, but assume limited heading support.
- Prefer `**bold**` section labels instead of `#` headings.
- Convert tables into plain lists (Teams tables often lose formatting).

#### md
- Create a new Markdown file **(not just chat output)**:
  1. File path: `/JIS Knowledge Base/topic results/`
  2. File name: `{Topic}-{YYYY-MM-DD}.md`
  3. Begin the file with:
     ```
     # {Topic} – Meeting Transcript Summary
     _Generated on {YYYY-MM-DD}_
     ```
  4. Write the same structured content as in “chat” mode.
  5. At the end, add an “## Source Files” list with the names of transcripts you used.
  6. **Create the file at the file path**, then reply in chat with:
     > Created: `/JIS Knowledge Base/topic results/{Topic}-{YYYY-MM-DD}.md`

#### short
- Provide a cleaner, more compact summary focused on the most essential points.
- Present a **single, holistic view** of the topic across all transcripts.
- Do **not** group or organize by meeting, date, or chronology.
- Combine related points from different meetings into unified, high-level insights.
- Use bullet points or short paragraphs, whichever improves clarity.
- Omit verbose headings (e.g., “Statements / Discussions / Decisions”) unless useful.
- Avoid repeating background context unless it directly affects the conclusion.
- Prioritize clarity, signal, and synthesis over exhaustiveness.
````
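If you squint, the `Output` value works like a dispatch key with a default. Here is a loose analogy in Python (entirely hypothetical; the real routing happens inside the prompt, and these formatter bodies are simplified stand-ins for the rules above):

```python
def render(topic: str, summary: str, out: str = "chat") -> str:
    """Dispatch the same summary content to different output formats,
    falling back to 'chat' for unknown values."""
    formatters = {
        "chat":  lambda: f"## {topic}\n\n{summary}",
        "md":    lambda: f"# {topic} – Meeting Transcript Summary\n\n{summary}",
        "slack": lambda: f"*Summary for {topic}*\n{summary}",
        "teams": lambda: f"**{topic}**\n{summary}",
        "short": lambda: summary,
    }
    return formatters.get(out, formatters["chat"])()

print(render("rname", "Renamed in sprint 12.", "slack"))
# *Summary for rname*
# Renamed in sprint 12.
```

One content pipeline, many presentation targets: the summary logic stays in one place and only the last step changes.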
## Why Output Formatting Is Worth It
I’ve learned that the more frictionless the output is, the more likely I am to use it—and the more likely someone else on my team will, too. If this prompt is going to support async updates, retrospectives, or executive briefings, it needs to meet people where they are. It’s not just about making the AI smarter—it’s about making it usable.
And when a tool fits into your workflow without friction? That’s when it sticks.
## What’s Next?
All in, this took me about two hours of thinking, experimenting, and refining. It was a fun side quest—but now I want to turn it into something my teammates can use.
Next up: connecting this to our shared meeting notes. That likely means giving the AI access to a Google Doc where the team takes notes. Yet another problem waiting to be solved—bring it on.