When your AI Agent books a holiday - Who is really responsible?

https://www.awaremind.ai/newsimages/agent_vacation.png

You tell your AI assistant: “Book me a holiday.”

Within minutes, it's done:

  • Flights reserved
  • Hotel confirmed
  • Rental car booked
  • Dinner spots and a walking tour scheduled

… all charged to your card.

Then you change your mind.

You never personally clicked “Buy,” right?

So who’s responsible for the bookings? You, the AI provider, or the travel sites?

When an AI Agent acts on your behalf, it acts as you

Under EU law, AI systems have no legal personality. They can’t “sign” contracts or hold rights. They act purely as digital instruments of a human or an organisation.

So when an agent books your trip using your data and credentials, it's legally you making those commitments. The travel sites rely on your apparent intent, and those contracts will stand. That's the hard truth of automation: when it acts for you, it can also bind you.

 

Where the EU AI Act steps in

The EU Artificial Intelligence Act doesn't rewrite contract law; it builds guardrails around it.

Under Article 14, high-risk AI systems must enable human oversight, allowing natural persons to understand and, when necessary, approve or correct the system's actions. In this case, a human-in-command approach would be beneficial.

For systems that act on behalf of users, this obligation means one thing above all:

You must be given a real opportunity to confirm or refuse an AI’s decision before it becomes final.

 

The confirmation layer: small UX, big compliance

That familiar prompt, whether "Approve purchase?", "Confirm booking?", or "Are you sure you want to proceed?", isn't just good design; it's a regulatory safeguard.

A confirmation layer ensures:

  • Intent verification - the human actually meant to proceed.
  • Transparency - the user knows the action is AI-initiated.
  • Traceability - there’s a logged record of consent.
  • Risk mitigation - it prevents unintended harm, spending, or contractual exposure.
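In code, these four guarantees can live in one small gate between the agent's proposal and its execution. The sketch below is illustrative only, with hypothetical names (`ProposedAction`, `request_confirmation`) and a stubbed approval callback standing in for a real UI prompt; the point is that nothing is charged until a human answers, and every answer is logged.

```python
import json
import time
from dataclasses import dataclass, asdict
from typing import Callable

@dataclass
class ProposedAction:
    """An action the agent wants to take, pending human approval."""
    description: str
    amount_eur: float
    merchant: str

def request_confirmation(action: ProposedAction,
                         ask: Callable[[str], bool],
                         audit_log: list) -> bool:
    """Give the human a real chance to approve or refuse before the
    action becomes final, and log the decision for traceability."""
    prompt = (f"AI-initiated action: {action.description} "
              f"({action.amount_eur:.2f} EUR to {action.merchant}). "
              f"Approve purchase? [yes/no] ")
    approved = ask(prompt)
    audit_log.append({
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "action": asdict(action),
        "ai_initiated": True,       # transparency: flagged as agent-proposed
        "user_approved": approved,  # intent verification + logged consent
    })
    return approved

# In an interactive session, `ask` would be a real console or UI prompt:
console_ask = lambda prompt: input(prompt).strip().lower() == "yes"

# Demo with a stubbed refusal instead of live input:
audit_log: list = []
booking = ProposedAction("Return flights, one week in July", 418.00, "ExampleAir")
if request_confirmation(booking, ask=lambda _: False, audit_log=audit_log):
    print("Booking confirmed; contract formed.")
else:
    print("Booking refused; nothing was charged.")
print(json.dumps(audit_log, indent=2))
```

The design choice that matters is that the agent never calls the payment API directly: execution is conditional on the return value of the confirmation gate, and the audit log entry is written whether the user approves or refuses.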

 

From a compliance perspective, it satisfies multiple AI Act duties:

  • Article 14 – human oversight mechanisms
  • Article 50 – transparency toward natural persons
  • Annex IV – documentation and audit evidence
     

In other words: that "Approve purchase?" prompt isn't optional polish; it's part of your conformity assessment file.

 

Why this matters

As agentic systems become mainstream - booking trips, negotiating subscriptions, even filing complaints - the difference between convenience and liability will rest on whether the user had a genuine, recorded chance to intervene.

So when your Agent proudly says, "I've found your perfect holiday!"

You should expect the next line to be: "Would you like me to confirm this booking?"

That single click could make all the legal difference.


At AWAREMIND.AI, we help organisations design, document, and govern AI systems that act on behalf of humans, aligning autonomy, transparency, and compliance under the EU AI Act.

 

Further reading: https://thefuturesociety.org/wp-content/uploads/2023/04/Report-Ahead-of-the-Curve-Governing-AI-Agents-Under-the-EU-AI-Act-4-June-2025.pdf
