AI in Healthcare Faces Bias and Privacy Challenges Amid Growing Adoption
Growing Use of AI in Clinical Settings
Artificial intelligence is increasingly integrated into medical workflows. OpenEvidence, a tool used by a large number of physicians, draws on medical journals, U.S. Food and Drug Administration labels, health guidelines, and expert reviews to summarize patient histories and retrieve information. Each AI‑generated output is accompanied by a citation to its source, providing transparency for clinicians.
Addressing Bias in Medical AI
Google has emphasized that it takes model bias "extremely seriously" and is developing privacy techniques that can sanitize sensitive datasets while safeguarding against discrimination. Researchers suggest that reducing bias begins with careful selection of training data, advocating for diverse and representative health datasets.
Large‑Scale Research Initiatives
University College London and King’s College London collaborated with the UK’s National Health Service to develop a generative AI model called Foresight. The model was trained on anonymized patient data from tens of millions of individuals, encompassing records of hospital admissions and Covid‑19 vaccinations. Lead researcher Chris Tomlinson noted that the national‑scale data "allows us to represent the full kind of kaleidoscopic state of England in terms of demographics and diseases," offering a stronger foundation than more generic datasets.
European scientists have also created an AI model named Delphi‑2M, which predicts long‑term disease susceptibility using anonymized medical records from hundreds of thousands of participants in the UK Biobank.
Privacy Concerns and Regulatory Scrutiny
The NHS Foresight project was paused to allow the UK Information Commissioner’s Office to consider a data‑protection complaint filed by the British Medical Association and the Royal College of General Practitioners. The complaint highlighted concerns over the use of sensitive health data in model training.
Risks of Hallucination and Clinical Impact
Experts caution that AI systems can "hallucinate"—producing fabricated answers—which could be especially harmful in medical contexts. Despite these risks, MIT researcher Marzyeh Ghassemi expressed optimism, stating that AI brings "huge benefits to healthcare" and should focus on addressing critical health gaps rather than merely improving marginal task performance.