In a thought-provoking discussion, Nobel Prize-winning physicist Saul Perlmutter highlighted the psychological risks of artificial intelligence, emphasizing the need for critical thinking in its use. Perlmutter, renowned for his co-discovery of the universe's accelerating expansion, argued that AI can create a false sense of understanding, which may impair judgment as the technology becomes further integrated into daily life.

During a recent podcast with Nicolai Tangen, CEO of Norges Bank Investment Management, Perlmutter warned that reliance on AI might lead students to depend on it before mastering fundamental concepts. "The tricky thing about AI is that it can give the impression that you've actually learned the basics before you really have," he said.

Rather than viewing AI as a rival to human intellect, Perlmutter advocates using it as a supportive tool that enhances human thinking rather than replacing it. The true potential of AI, he noted, is unlocked when users already possess a solid foundation of critical-thinking skills. "When you know different tools and approaches to think about a problem, AI can often help you find the information you need," he explained.

At UC Berkeley, where Perlmutter teaches, he has worked with colleagues to create a course that develops critical thinking through scientific reasoning, including probabilistic thinking and structured disagreement. The curriculum uses games and discussions to build these habits into students' everyday decision-making.

One of Perlmutter's chief concerns is the overconfidence AI often conveys. Its assertive tone, he said, can undermine skepticism, leading people to accept its conclusions without questioning their validity. This mirrors a dangerous cognitive bias: the tendency to trust information that appears authoritative or aligns with our pre-existing beliefs.
To counter this instinct, he encourages people to scrutinize AI-generated content with the same critical lens they would apply to human statements, weighing credibility and the potential for error. In scientific research, Perlmutter explained, scientists deliberately build systems that expose their own mistakes, a mindset that should also apply when interacting with AI.

Perlmutter concluded with a reminder that AI literacy means knowing when to question its outputs and how to embrace uncertainty. As AI continues to evolve, he stressed, society must keep assessing whether it deepens our understanding or deceives us. "We have to keep asking ourselves: is it helping us, or are we getting fooled more often?"