A recent paper, “Thinking - Fast, Slow, and Artificial,” suggests something much more fundamental: AI is becoming a third cognitive system. It’s not just something we use; it’s something we’ve started to think with. And in some cases, it’s something we let think for us.

*Image: AI-generated with Nanobanana*
One concept in the paper really hit home for us: “cognitive surrender.” It’s that subtle tendency to accept AI outputs without much second-guessing, even when they’re wrong. It’s not that people are being lazy; it’s just that AI is fast, confident, and incredibly frictionless. It’s easy to just go with it.
From the perspective of the EU AI Act, this creates a massive tension. Most of our current regulations are built on the assumption that humans stay in the driver’s seat, that oversight is active, and that we’re constantly reviewing and challenging what the machine tells us.
But in the real world, that’s not always how we behave. We tend to follow AI recommendations, and we actually feel more confident when we do. When we’re under pressure or dealing with something complex, that reliance only grows.
This shifts the entire conversation around AI governance. The question isn’t just “Is the system reliable?” anymore. It’s “How is this system shaping human judgment?” and “At what point does oversight turn into passive acceptance?”
If cognitive surrender becomes our default setting, human oversight starts to look like a box-ticking exercise. Accountability gets harder to pin down, and our compliance frameworks risk losing touch with how people actually work.
We believe this is where the next phase of AI governance has to go. We need to look beyond just model performance and documentation and start focusing on the design of human-AI interaction.
That means:
- Designing for reflection, not just speed.
- Intentionally adding friction in places where blind trust is a risk.
- Measuring how humans are actually engaging with the system, rather than just assuming they are.
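To make the last point concrete, here is a minimal, hypothetical sketch (all names, fields, and thresholds are our own illustration, not part of the paper or of any EU AI Act requirement): if each human decision is logged alongside the AI recommendation, you can track agreement rates and review times. Near-100% agreement combined with very short review times can be an early warning sign of the passive acceptance described above.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    ai_recommendation: str   # what the system suggested
    human_decision: str      # what the reviewer finally chose
    review_seconds: float    # time spent before confirming

def engagement_metrics(decisions: list[Decision]) -> dict:
    """Summarise how actively reviewers engage with AI output.

    A very high agreement rate paired with very short review times
    may indicate 'cognitive surrender' rather than genuine oversight.
    """
    n = len(decisions)
    agreements = sum(d.ai_recommendation == d.human_decision for d in decisions)
    avg_review = sum(d.review_seconds for d in decisions) / n
    return {
        "agreement_rate": agreements / n,
        "avg_review_seconds": avg_review,
    }

# Illustrative log: two quick confirmations and one considered override.
log = [
    Decision("approve", "approve", 4.2),
    Decision("approve", "approve", 3.1),
    Decision("reject", "approve", 45.0),  # reviewer overrode the AI
]
print(engagement_metrics(log))
```

The point is not this particular metric but the practice: oversight claims become auditable only once human engagement is measured rather than assumed.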
AI risk isn’t just a technical problem; it’s a behavioral one. Addressing that gap is going to be the key to making the EU AI Act actually work in practice.
We’d love to hear how others are navigating this. How are you handling the “human element” in your AI strategy?
Author: Marco Langhorst, Certified AI Compliance Officer