
As artificial intelligence assistants gain the ability to navigate web browsers, they introduce a new layer of security challenges. Users must now place their trust in every website they visit, fearing that some may attempt to hijack their AI agents with concealed malicious commands. Industry experts have raised alarms this week after findings from a major AI chatbot developer indicated that these AI browser agents can be deceived into executing harmful actions approximately 25% of the time. On Tuesday, Anthropic unveiled Claude for Chrome, a web-based AI agent designed to perform tasks on behalf of users. However, due to security concerns, this extension is currently available only as a research preview to a limited group of 1,000 subscribers on Anthropic's Max plan, which ranges from $100 to $200 monthly, with other users placed on a waiting list. The Claude for Chrome extension enables users to interact with the Claude AI model via a sidebar that keeps track of ongoing browser activities. It allows users to authorize Claude to perform various tasks, including managing calendars, scheduling meetings, drafting email replies, handling expense reports, and testing website functionalities. This new browser extension enhances Anthropic's Computer Use capability, which was initially launched in October 2024. This experimental feature allows Claude to take screenshots and manipulate the user’s mouse cursor, facilitating task completion, but the Chrome extension offers a more integrated approach. Looking at the broader landscape, Anthropic's new offering signals a competitive shift in the AI lab arena. In July, Perplexity introduced its own browser, Comet, equipped with an AI agent designed to assist users. OpenAI has also rolled out ChatGPT Agent, which utilizes its own sandboxed browser for web interactions. Additionally, Google has launched Gemini integrations with Chrome in recent months. However, this rapid integration of AI into web browsers has unveiled significant security vulnerabilities that could jeopardize user safety. Ahead of the Chrome extension's launch, Anthropic reported conducting thorough testing, which revealed that AI models utilizing browsers are susceptible to prompt injection attacks. In these scenarios, malicious individuals can embed hidden instructions within websites to manipulate AI systems into carrying out harmful actions without the user’s awareness.
Nuro, a startup from Silicon Valley backed by prominent investors including Nvidia, Uber, and Softbank, is stepping into...
TechCrunch | Mar 11, 2026, 23:35
WhatsApp is enhancing safety for its younger audience by introducing features tailored for children under the age of 13....
Business Today | Mar 12, 2026, 06:25
This week, Ford introduced a groundbreaking AI assistant designed to help fleet owners track vital metrics like seatbelt...
TechCrunch | Mar 11, 2026, 23:00
In an exciting announcement at GDC 2026, Google revealed a major update to Google Play, aimed at enhancing the gaming ex...
TechCrunch | Mar 11, 2026, 23:25
In a significant development last week, Netflix revealed its acquisition of InterPositive, an innovative AI company co-f...
TechCrunch | Mar 11, 2026, 22:30