Imagine Your AI Notetaker Starts Adding Tone of Voice to the Minutes...

We all knew AI notetakers would get smarter. But imagine, just for a moment, that they start getting a little… too smart.

https://www.awaremind.ai/newsimages/ainotetaker.png

Picture your next meeting summary coming back like this:

  • “Sarah (sounding excited): I think we can launch this quarter.”
  • “Jonas (sounding slightly annoyed): We’ve already discussed these dependencies.”
  • “Priya (seeming a bit exhausted): Let’s revisit after the budget review.”
  • “Mark (obviously frustrated): We cannot keep pushing this timeline.”

Suddenly your meeting minutes stop looking like a business document and start reading like a screenplay.

And you can imagine what happens next:

  • People over-enunciating to sound more “strategic”
  • Colleagues announcing their tone mid-sentence (“AI, please capture that I said this calmly…”)
  • Someone insisting: “Let the notes reflect that I said this warmly.”
  • And at least one person trying not to sigh and accidentally register as “irritated.”


Research shows AI summarization optimization (AISO) is already happening: https://www.csoonline.com/article/4077438/manipulating-the-meeting-notetaker-the-rise-of-ai-summarization-optimization.html

To be clear: adding tone of voice to a transcript is explicitly prohibited under the EU AI Act. The moment an AI system starts inferring emotions from your voice, behaviour, or expressions in a workplace setting, you’ve crossed a clear red line:

Article 5(1)(f) - Prohibited practice
AI systems that infer emotions of a natural person in the workplace or education institutions are banned, except for strictly medical or safety reasons.

Why such a strict stance?
Because once AI starts guessing things like:

  • “sounded stressed,”
  • “seemed angry,”
  • “appeared disengaged,”

…you’re no longer summarising a meeting - you’re analysing people.


And people will, naturally, start changing how they speak to “perform” for the AI.

Interpreting emotions, while technically feasible, would make us more self-conscious, more guarded, even manipulative toward the AI, and would fundamentally change how we communicate.

How many people in your organisation actually know when an AI is transcribing - or potentially analysing - their meetings?
