
The United Kingdom's plan to use artificial intelligence to estimate the ages of migrants has drawn significant concern from rights advocates. The technology is intended to help determine the ages of individuals who claim to be under 18, particularly those arriving by small boat from France. Critics warn, however, that reliance on AI could deepen existing biases and produce more inaccurate age assessments.

The story of Jean, a central African migrant, illustrates the potential pitfalls. After fleeing conflict, he was misclassified as a 26-year-old on arrival in 2012, a decision based on his height rather than his actual age. "I look 10 years older because I am taller, that was the reason they gave," Jean recounted, describing the desperate situation he faced without the support he needed.

In a move that has drawn criticism, the UK government plans to introduce facial age estimation technology by 2026. Rights groups argue that these assessments are inherently complex and should be conducted by trained professionals rather than machines. Luke Geoghegan of the British Association of Social Workers warned against shortcuts in a process that demands careful professional judgment: "This should never be compromised for perceived quicker results through artificial intelligence."

The Home Office contends that AI could be a cost-effective way to prevent adults from masquerading as minors in the asylum process. A spokesperson said the new technology would complement, not replace, the other methods used by trained assessors. Many fear, however, that the approach could result in more vulnerable children being placed in adult accommodation without adequate safeguards.

As global migration surges amid conflict and climate crises, governments are increasingly turning to digital solutions. The UK has also announced plans to use AI to speed up asylum decisions and improve efficiency in casework.
Yet advocates warn against using the asylum process as a testing ground for unproven AI technologies that may lack transparency and accountability. Experts from human rights organizations have voiced strong opposition to deploying AI in such sensitive situations, pointing to potential harms to privacy and human rights. They argue the technology may be unreliable and could further entrench societal biases, especially against marginalized groups.

Jean's experience is a stark reminder of the dangers of mishandled age assessments. His initial misclassification caused immense emotional distress, and he was granted asylum only after his age was corrected. According to the Helen Bamber Foundation, nearly half of the migrants reassessed in 2024 were children who had been mistakenly placed in adult accommodation, raising alarm about the safety and well-being of these vulnerable individuals.

In light of these issues, advocates are calling for child protection experts to be involved in age assessments, emphasizing that human oversight is crucial in decisions that significantly affect lives. The ongoing debate underscores the need for careful consideration of how technology is deployed in sensitive humanitarian contexts.