Why "The computer said so" is no longer a legal defense

We recently came across a truly harrowing story about Angela Lipps, a 50-year-old grandmother from Tennessee. She was arrested at gunpoint while babysitting and spent nearly six months in jail for a crime she didn't commit, all because a facial recognition tool in North Dakota flagged her as a "match." Lipps says she lost her home, her car, and her dog as a result of her time in jail.


Police treated the AI’s output as an absolute truth. They didn't check her alibi. They didn't verify her location. They just followed the software.

When we work with clients on EU AI Act compliance, we use cases like this to explain why the regulation is so "strict." The law isn't just trying to fix buggy code; it's trying to fix a specific human flaw: automation bias.

The trap of over-trusting AI

The EU AI Act doesn't treat facial recognition as high-risk simply because the technology is imperfect. It does so because when a computer gives us a high-confidence "hit," we humans have a documented tendency to stop thinking for ourselves.

Beyond the "Checkbox"

The real work we do with organizations isn't just checking off a list of rules; it’s about operationalizing accountability and making sure AI is used sensibly.

By August 2026, if you’re using high-risk biometrics, you aren't just required to have a human in the loop. You have to prove that human was actually capable of overseeing the system. This means:

1. Auditable Logging: Proving who looked at the AI output and why they decided to act on it.

2. Bias Mitigation: Proving your data didn't lead to the kind of demographic errors seen in the Lipps case.

3. Governance: For certain types of organizations, this means completing a Fundamental Rights Impact Assessment (FRIA) before the system even goes live.
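To make the first requirement above concrete, a minimal sketch of what an auditable oversight log might capture is shown below. The schema, field names, and `log_review` helper are illustrative assumptions on our part, not a format prescribed by the AI Act; the point is that each entry records who reviewed the AI output and why they acted on it.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class OversightRecord:
    """One human decision about a single AI output (hypothetical schema)."""
    case_id: str            # internal reference for the AI output under review
    reviewer: str           # who looked at the output
    model_confidence: float # the system's reported confidence for the match
    decision: str           # "accepted", "rejected", or "escalated"
    rationale: str          # free-text reason: the "why", not just the "what"
    reviewed_at: str        # ISO 8601 timestamp, UTC

def log_review(case_id, reviewer, model_confidence, decision, rationale):
    """Serialize one review as a JSON line; a production system would
    additionally write to append-only, tamper-evident storage."""
    record = OversightRecord(
        case_id=case_id,
        reviewer=reviewer,
        model_confidence=model_confidence,
        decision=decision,
        rationale=rationale,
        reviewed_at=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))

# Example: a reviewer overrides a high-confidence facial recognition hit
entry = log_review(
    "match-0042", "j.doe", 0.91, "rejected",
    "Alibi places the subject in another state on the date in question",
)
```

A record like this is what lets an auditor reconstruct, months later, whether the human in the loop exercised genuine judgment or simply rubber-stamped the software.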

The Takeaway

Cases like Angela Lipps' aren't just "technical glitches." They are a preview of the legal and reputational nightmares that the AI Act is designed to prevent. For those of us navigating this transition, the goal isn't just to stay on the right side of the law; it's to ensure that when an AI makes a suggestion, there is a person standing behind it who is actually in control, so that no one gets hurt by a flawed system.

If you’re currently mapping out a biometric or high-risk AI project, now is the time to audit your human oversight workflows. Compliance in 2026 starts with the architecture you build today.
