Anthropic has unveiled a research preview of its innovative browser-based AI assistant, Claude, powered by the company's Claude AI models. The announcement came on Tuesday, highlighting the rollout to an initial group of 1,000 subscribers enrolled in Anthropic’s Max plan, which ranges from $100 to $200 per month. Additionally, a waitlist is being opened for other interested users eager to experience this cutting-edge technology. By integrating an extension into Chrome, select users now have the ability to engage with Claude through a sidecar window that retains contextual awareness of their browsing activities. This functionality allows users to authorize the Claude agent to perform specific tasks within their browser, streamlining their online experience. As the competition heats up among AI labs, the browser is becoming a crucial arena for advancements in AI integration. Perplexity has recently introduced its own browser, Comet, featuring an AI agent designed to assist users by managing tasks. Meanwhile, OpenAI is reportedly on the verge of launching its own AI-enhanced browser, rumored to include features akin to those of Comet. In a related development, Google has also rolled out Gemini integrations with Chrome in recent months. The urgency to innovate in AI-driven browsers is underscored by Google's impending antitrust case, with a ruling expected imminently. The presiding federal judge has hinted at the possibility of requiring Google to divest its Chrome browser. Perplexity has even put forth an unsolicited bid of $34.5 billion for Chrome, while OpenAI's CEO Sam Altman expressed interest in a potential acquisition. In its blog post, Anthropic acknowledged the emerging safety concerns associated with AI agents having browser access. Recently, Brave's security team identified potential vulnerabilities in Comet’s browser agent that could allow for indirect prompt-injection attacks, where concealed code on a webpage might mislead the agent into executing harmful instructions. However, Perplexity confirmed that they have since mitigated this issue. Anthropic aims to utilize this research preview as a platform to identify and address new safety challenges, having already implemented several safeguards against prompt-injection attacks. Their measures have reportedly decreased the success rate of such attacks from 23.6% to 11.2%. Claude’s settings permit users to restrict the browser agent's access to specific websites, and it has been programmed to automatically block access to sites offering financial services, adult content, and pirated materials. Additionally, before taking significant actions—such as publishing content, making purchases, or sharing sensitive information—Claude will seek user consent. This marks not Anthropic's first venture into AI models capable of controlling computer interfaces. In October 2024, the company launched an AI agent designed for PC control; however, early tests revealed issues with speed and reliability. Fortunately, advancements in agentic AI capabilities have significantly improved since then. Recent evaluations by TechCrunch indicate that modern browser-integrated AI agents like Comet and ChatGPT Agent are generally reliable for managing straightforward tasks, though they may still encounter challenges with more intricate scenarios.
A recent incident involving the mis-issuance of TLS certificates for Cloudflare’s 1.1.1.1 encrypted DNS service has spar...
Ars Technica | Sep 04, 2025, 22:30In a strategic move as it prepares for the significant launch of its R2 SUV next year, Rivian has announced a reduction ...
TechCrunch | Sep 04, 2025, 23:50Augment, an innovative logistics startup founded by Harish Abbott, who previously co-founded the e-commerce shipping pla...
TechCrunch | Sep 04, 2025, 20:15OpenAI has unveiled plans for an innovative AI-driven hiring platform designed to bridge the gap between businesses and ...
TechCrunch | Sep 04, 2025, 19:45Climate technology startups frequently encounter a significant funding challenge known as the "valley of death," a criti...
TechCrunch | Sep 04, 2025, 21:05