AI sees race. Cancer diagnostic algorithms were found to be biased, performing unevenly across patient demographics.


A disturbing study from Harvard Medical School has uncovered a "hidden" bias in AI models used for cancer diagnosis. The research found that deep learning algorithms, when trained on medical images such as pathology slides, can learn to identify a patient's self-reported race, a feat human doctors cannot accomplish from images alone. The problem arises when the AI uses this racial signal as a "shortcut" for its diagnostic predictions instead of relying solely on biological disease markers.

The study revealed that in nearly 30% of the tested tasks, the AI models exhibited significant performance disparities, often yielding less accurate results for Black patients due to imbalances in the training data. This "algorithmic racism" could lead to misdiagnoses and unequal care if left unchecked. The researchers are calling for a new training approach, proposing a method called "FAIR-Path" that explicitly prevents models from relying on demographic shortcuts, ensuring that AI tools remain colorblind and clinically objective.
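The kind of performance disparity described above can be audited by comparing a model's accuracy across demographic subgroups. The sketch below is purely illustrative, using synthetic labels and made-up group names, and is not the study's actual methodology or the proposed "FAIR-Path" method:

```python
# Illustrative sketch of a subgroup performance audit for a binary
# classifier. All data, group labels, and thresholds here are
# synthetic assumptions for demonstration only.

def group_accuracy(y_true, y_pred, groups):
    """Return classification accuracy computed separately per group."""
    stats = {}
    for g in set(groups):
        pairs = [(t, p) for t, p, grp in zip(y_true, y_pred, groups) if grp == g]
        correct = sum(1 for t, p in pairs if t == p)
        stats[g] = correct / len(pairs)
    return stats

def accuracy_gap(stats):
    """Largest accuracy difference between any two groups."""
    vals = list(stats.values())
    return max(vals) - min(vals)

# Synthetic example where the model is systematically less
# accurate for group "B" than for group "A".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 1, 1, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

stats = group_accuracy(y_true, y_pred, groups)
gap = accuracy_gap(stats)
print(stats)  # per-group accuracy
print(gap)    # disparity between best and worst group
```

A large gap between subgroups, as in this toy example, is the kind of signal that flags a model as relying on demographic shortcuts rather than disease biology alone.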

Read the original article at: https://futurism.com/health-medicine/ai-cancer-diagnostic-bias


