When AI overlooks darker skin, patients pay the price, and the EU AI Act is meant to stop that.
An investigation by Civio exposes a harmful failure: a melanoma-triage algorithm reportedly missed about one in three melanomas (33%) and underperformed on darker skin because its image datasets weren't representative of real patients. In healthcare, that isn't a mere model limitation; it's a safety failure with life-altering consequences. If the training set doesn't reflect the people it serves, headline "accuracy" can translate into delayed diagnoses for the very groups already facing care disparities.

Why this keeps happening
• Imbalanced datasets: Models learn the majority class and generalize poorly to underrepresented groups; dermatology is especially vulnerable when darker skin tones are scarce in image banks.
• "Average accuracy" hides harm: A single headline metric, the kind that sells, can mask wide subgroup performance gaps, leading decision-makers to think a tool is "good enough" (see the sketch after this list).
• Workflow pressure and automation bias: Doctors may over-trust AI suggestions, compounding systematic errors rather than catching them early.
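To make the "average accuracy" point concrete, here is a toy Python sketch with hypothetical numbers (assumed for illustration, not drawn from Civio's reporting): a model can post a headline sensitivity above 85% while catching barely half of melanomas on darker skin, simply because that subgroup is a small slice of the test set.

```python
# Toy numbers, assumed for illustration only (not from Civio's reporting).
# Sensitivity = melanomas correctly flagged / melanomas present.
test_set = {
    # skin-tone group: (melanomas in test set, melanomas the model caught)
    "lighter skin": (900, 810),  # 90.0% sensitivity
    "darker skin": (100, 55),    # 55.0% sensitivity
}

total = sum(n for n, _ in test_set.values())
caught = sum(c for _, c in test_set.values())
print(f"overall sensitivity: {caught / total:.1%}")  # 86.5% -- looks acceptable

for group, (n, c) in test_set.items():
    print(f"{group}: {c / n:.1%} sensitivity on {n} cases")
# The single headline number hides a 35-point gap between subgroups.
```

Reporting per-subgroup sensitivity alongside the headline number is what makes a gap like this visible before deployment.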
How the EU AI Act addresses it (for "high-risk" medical AI)
• Data quality and representativeness: Providers must use training, validation, and testing datasets that are appropriate for the intended population, with documented governance to detect, prevent, and mitigate bias across the lifecycle. This aims squarely at the root cause in Civio's case: non-representative data.
• Human oversight by design: Systems must enable clinicians to understand limits, avoid over-reliance, and intervene or override when signals conflict with clinical judgment, which is crucial when an algorithm is less reliable on certain skin tones.
• Transparency, documentation, and monitoring: Clear instructions on intended use, performance bounds, known limitations, logging, incident reporting, and post-market corrective actions make blind spots visible and fixable. This creates accountability loops that many tools currently lack.
What builders and buyers should do now
• Demand representative data and prove it: Show the distribution of skin tones (or other attributes) across training and test sets, not just totals.
• Prevent automation bias: Ensure UI/UX and clinical workflow make it easy to disagree with the model, record the reason, and feed that back into monitoring.
• Monitor after deployment: Track real-world performance drift and incident rates by subgroup, and act when gaps emerge (a minimal sketch of this check, alongside the dataset audit above, follows this list).
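A minimal Python sketch of the dataset audit and subgroup monitoring asks above, assuming each case carries a skin-tone label (hypothetical Fitzpatrick-style groups here) and that the model's calls are logged per case; all field names and the gap tolerance are illustrative assumptions, not taken from any specific tool or from the AI Act's text.

```python
from collections import Counter

# Hypothetical per-case records: a skin-tone label (Fitzpatrick-style groups,
# assumed here), the ground truth, and the model's call. Field names are
# illustrative, not from any specific tool.
cases = [
    {"skin_type": "I-II", "melanoma": True, "flagged": True},
    {"skin_type": "V-VI", "melanoma": True, "flagged": False},
    # ... a real audit runs over full training/test sets or production logs
]

def distribution(cases):
    """Share of cases per skin-tone group: the breakdown to demand, not just totals."""
    counts = Counter(c["skin_type"] for c in cases)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def sensitivity_by_group(cases):
    """Melanomas caught / melanomas present, computed per subgroup."""
    present, caught = Counter(), Counter()
    for c in cases:
        if c["melanoma"]:
            present[c["skin_type"]] += 1
            caught[c["skin_type"]] += int(c["flagged"])
    return {g: caught[g] / present[g] for g in present}

MAX_GAP = 0.10  # assumed tolerance; in practice set via clinical risk assessment

print("case distribution:", distribution(cases))
sens = sensitivity_by_group(cases)
if sens and max(sens.values()) - min(sens.values()) > MAX_GAP:
    print("ALERT: subgroup sensitivity gap exceeds tolerance:", sens)
```

Running the same subgroup check on rolling windows of production data is one way to operationalize the post-market monitoring the Act calls for.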
Bottom line: The EU AI Act moves "this works for everyone" from aspiration to requirement, so tools like the one in Civio's story are challenged before they can cause avoidable harm.
Want to know more about applying AI ethically? Get in touch with us.