EU AI Act — AI System Qualification Checklist

1) Machine-Based System
Does the system run on software, hardware, or cloud infrastructure (i.e., is it a machine-based system rather than a purely manual human process)?
Examples — typically YES: A web service, SaaS tool, mobile app, analytics platform, script, model, chatbot, recommender tool, or fraud detection engine.
Examples — typically NO: A manual spreadsheet updated entirely by a human, a paper checklist, or a fully human-only process without software support.
2) Autonomy
Once the system is running, can it operate with some level of autonomy (i.e., it produces outputs without a human manually controlling every internal step)?

Note: "Some level of autonomy" can be minimal – it does not require full automation or human‑free operation. It only means the system can process inputs and generate outputs without humans scripting every individual internal step.

Examples — typically YES: A risk-scoring tool that evaluates inputs automatically, a document classifier, or a chatbot generating responses.
Examples — typically NO: A macro executing a single fixed command, a workflow requiring human approval at every step, or a static-page tool that only prints predefined text when a user explicitly triggers each generation step.
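
To make the distinction concrete, here is a minimal Python sketch (illustrative code with hypothetical names and thresholds, not drawn from the Act). The first function, once deployed, evaluates any incoming input end to end on its own; the second performs one fixed action, and only when a user invokes it.

```python
def score_transaction(transaction: dict) -> str:
    """Typically YES: once running, the system evaluates inputs and
    produces an output with no human steering its internal steps."""
    risk = 0.0
    if transaction["amount"] > 10_000:   # illustrative threshold
        risk += 0.5
    if transaction["payee_is_new"]:      # hypothetical field name
        risk += 0.4
    return "flag_for_review" if risk >= 0.5 else "approve"


def print_weekly_banner() -> None:
    """Typically NO: a macro-like routine that performs a single fixed
    action, and only when a user explicitly triggers it."""
    print("*** WEEKLY REPORT ***")
```
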
3) Objective-Driven Behaviour
Is the system designed to achieve one or more explicit or implicit objectives, such as classifying, predicting, recommending, generating content, optimising, or making/assisting decisions?
Examples — typically YES: A recruitment tool ranking candidates, a predictive-maintenance model, a recommendation engine, a text generator, or a route-optimisation engine.
Examples — typically NO: A static database that only stores and retrieves records, without any logic that classifies, predicts, or optimises; a basic file converter (PDF → Word) performing fixed, deterministic format transformations; a simple stopwatch, calculator, or timer app performing straightforward, predefined operations.
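
The same contrast in code, as an illustrative sketch (the function names and keyword rules are invented for this example). The first function is built to achieve an objective, classifying a support ticket, and reaches different conclusions for different inputs; the second only performs a fixed, predefined transformation.

```python
def classify_ticket(text: str) -> str:
    """Typically YES: designed to achieve an objective (classification),
    producing different conclusions for different inputs."""
    lowered = text.lower()
    if "refund" in lowered or "invoice" in lowered:
        return "billing"
    if "crash" in lowered or "error" in lowered:
        return "technical"
    return "general"


def celsius_to_fahrenheit(celsius: float) -> float:
    """Typically NO: a fixed, deterministic calculation with no
    classifying, predicting, recommending, or optimising objective."""
    return celsius * 9 / 5 + 32
```

Note that a simple keyword rule like `classify_ticket` may still fail the inference test in question 4; each question on this checklist is assessed separately.
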
4) Inference From Inputs
Does the system infer or determine, based on the inputs it receives, how to generate its outputs, rather than only executing a trivial, fully predetermined sequence of steps (e.g., a simple direct calculation or formatting)?
Examples — typically YES: A fraud model evaluating patterns in transaction data, a sentiment-analysis tool interpreting text tone, a visual quality-inspection system detecting defects, a rules-and-learning hybrid that adapts scoring thresholds, or a complex rule-based decision engine that evaluates multiple conditions to reach different conclusions based on its inputs.
Examples — typically NO: A system that just applies a simple, fixed formula (e.g., "price × quantity") without any further reasoning or classification; a very simple hard-coded rules script that always produces the same type of output for the same inputs and does not classify, predict, optimise, or otherwise "reason" beyond basic calculations or formatting; a script renaming files according to a single fixed pattern (e.g., prefix + timestamp).
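
A minimal sketch of this distinction (toy data and a hypothetical setup; assumes scikit-learn is installed). The fixed formula produces its output through one predetermined step, whereas the trained classifier infers from patterns in its input how to generate the output.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB


def invoice_total(price: float, quantity: int) -> float:
    """Typically NO: one fixed, fully predetermined calculation."""
    return price * quantity


# Typically YES: the input-to-output mapping is learned from examples,
# so the system infers how to produce its outputs.
texts = ["great product", "love this tool", "terrible service", "very poor quality"]
labels = ["positive", "positive", "negative", "negative"]
vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(texts), labels)

print(model.predict(vectorizer.transform(["poor service"])))  # -> ['negative']
```
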
5) Influence on Physical or Virtual Environments
Do the system's outputs (predictions, content, recommendations, or decisions) influence or have the potential to influence actions, outcomes, processes, or behaviour in a physical or digital/virtual environment?
Examples — typically YES: Medical triage suggestions displayed to a clinician, a credit-risk score affecting lending workflows, a moderation tool flagging or removing content, a warehouse-routing decision that changes robot movement, or a pricing recommendation used in e-commerce.
Examples — typically NO: A passive dashboard that displays information for general interest only, with no realistic link to decisions, processes, or actions; a demo tool that never moves beyond internal experimentation and is neither intended nor likely to be used for real decisions or workflows; a strictly sandboxed prototype that cannot, and is not intended to, influence users, systems, or processes.
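
A final illustrative sketch (hypothetical workflow with an invented threshold). In the first function the output changes what actually happens to a transaction; the second merely renders a value that, by design, feeds no decision or process.

```python
def route_transaction(risk_score: float) -> str:
    """Typically YES: the output influences a real process; high-risk
    transactions are held instead of being processed."""
    return "held_for_review" if risk_score > 0.8 else "processed"


def render_metric(risk_score: float) -> str:
    """Typically NO, but only if genuinely passive: the value is shown
    for information and deliberately feeds no decision or action."""
    return f"Current risk score: {risk_score:.2f}"
```

In practice most displayed outputs do inform decisions, which is why the "typically NO" cases above are deliberately narrow.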