"It’s just a draft" might be the most dangerous sentence in AI adoption.
I’ve been watching the discussion around U.S. agencies using generative AI to draft transportation regulations.

What stands out isn’t the technology; it’s the pattern.
- AI starts as a drafting assistant.
- It saves time; everyone is happy.
- People start (over) relying on it.
- Without an explicit decision, AI becomes the author of the outcome.
Sound familiar? Who has time to check all the output anyway?
That moment matters.
Under the EU AI Act, this is exactly where “experimentation” ends. Not because the AI is generating text, but because that text now influences rules with real safety and legal consequences.
In our consulting work, I see organizations cross this line every day without realizing it.
They tell me:
“It’s just a draft.”
“Humans review everything.”
“We’re not automating decisions.”
But when I ask who owns the risk when the AI hallucinates a regulatory clause, or how the oversight actually works in practice, the room goes a bit quiet.
That vagueness is the risk.
The AI Act doesn’t just regulate models. It regulates reliance, accountability, and decision-making behavior. Once AI outputs shape decisions that matter, the system is no longer low-risk, even if no one intended it to be.
The Bottom Line:
Most organizations won’t fail the AI Act because of their technology. They will fail because they never decided when AI stopped being a support tool and started being the lead writer.
How does your team decide when a support tool becomes the decision maker?