Artificial Intelligence Chatbots Perpetuating Debunked Medical Stereotypes about Black People, Study Finds

A recent study by researchers at Stanford University has found that artificial intelligence (AI) chatbots, including popular platforms like ChatGPT and Google's Bard, return responses containing debunked medical claims about Black people. The researchers ran nine medical questions through four AI chatbots, each trained on large amounts of internet text.

The responses from the chatbots included incorrect information about kidney function, lung capacity, and muscle mass, perpetuating harmful stereotypes about Black people’s health. This discovery raises concerns about the growing use of AI in the medical field and its potential impact on health disparities.

Stanford University assistant professor Roxana Daneshjou, who advised on the study, emphasized the real-world consequences of perpetuating these stereotypes in medicine. She expressed the need to remove such tropes from medical practices to ensure equal and fair treatment for all patients.

William Jacobson, a law professor at Cornell University and founder of the Equal Protection Project, highlighted the long-standing concern that medically irrelevant racial factors can influence medical decision-making. He warned that the spread of AI could exacerbate this issue. Jacobson stressed the importance of not relying on AI as a sole source of medical information and cautioned against politicizing AI by manipulating its inputs.

Phil Siegel, founder of the Center for Advanced Preparedness and Threat Response Simulation, noted that AI systems are not inherently racist but can produce biased information depending on the data sets they draw from. He emphasized the need for regulations to ensure fairness and prevent biases from being hardcoded into AI models, particularly in crucial areas such as healthcare.

The study’s findings highlight the urgent need to address the potential biases present in AI systems used in the medical field. It serves as a reminder that AI should be used as a tool to assist healthcare professionals rather than replace human judgment and expertise.

Neither OpenAI nor Google, the developers of ChatGPT and Bard respectively, has responded to requests for comment on the study's findings.
