
Recent studies have raised significant concerns about the use of artificial intelligence in healthcare, particularly its impact on women and ethnic minorities. Research from leading universities in the US and UK indicates that many large language models (LLMs) tend to underestimate symptoms reported by these groups, potentially leading to poorer health outcomes. The findings suggest that as AI models gain traction in the healthcare sector, they may unintentionally perpetuate existing biases in medical treatment. Specifically, these tools appear less capable of accurately assessing the severity of symptoms in female patients and show less empathy towards Black and Asian individuals seeking care.

These warnings come at a time when major tech companies like Microsoft, Amazon, OpenAI, and Google are rapidly advancing AI products designed to ease the workload of healthcare professionals and expedite patient care. Many healthcare providers now use LLMs, including Gemini and ChatGPT, along with medical note-taking apps from startups like Nabla and Heidi, to streamline patient visit documentation and summarize clinical data. In June, Microsoft touted its AI medical tool as being four times more effective than human doctors at diagnosing complex health issues.

However, a study from MIT's Jameel Clinic found that AI models, including OpenAI's GPT-4, recommended lower levels of care for female patients, in some cases suggesting they could self-treat at home rather than seek professional assistance. The same research found that responses from models like GPT-4 displayed diminished compassion towards Black and Asian individuals facing mental health challenges. Marzyeh Ghassemi, an associate professor at MIT, noted that this trend means some patients may receive less supportive guidance based solely on the model's perception of their race.

Compounding these concerns, a study conducted by the London School of Economics found that Google's Gemma model, which aids social workers in over half of the UK's local authorities, tends to minimize women's physical and mental health concerns compared to men's when generating case notes. These findings underscore the urgent need for a critical examination of AI tools in healthcare to ensure equitable treatment for all patients.