Anthropic launches code review tool to check flood of AI-generated code

In the world of software development, peer review is essential for identifying bugs early, ensuring consistency, and enhancing overall code quality. With the advent of 'vibe coding'—a practice where AI tools translate plain language into extensive code—developers' workflows have transformed significantly. While these AI tools accelerate development, they also introduce a host of new bugs, security vulnerabilities, and complex code that can be challenging to manage.

To tackle these issues, Anthropic has introduced an AI reviewer designed to identify bugs before they become part of the software codebase. The new tool, named Code Review, was launched on Monday as part of Claude Code. "We've experienced substantial growth in Claude Code, especially in the enterprise sector, and a recurring question from enterprise leaders is how to efficiently review the numerous pull requests generated by Claude Code," explained Cat Wu, Anthropic's head of product, in a conversation with TechCrunch.

Pull requests are crucial for developers, allowing them to submit code changes for review prior to their integration into the software. Wu noted that the rise in code output from Claude Code has led to a bottleneck in pull request reviews, hampering the speed at which code can be deployed. "Code Review is our solution to that problem," Wu asserted.

The launch of Code Review coincides with a significant moment for Anthropic, as the company recently filed two lawsuits against the Department of Defense over its classification of Anthropic as a supply chain risk. This legal battle may prompt Anthropic to further leverage its rapidly growing enterprise segment, which has seen a fourfold increase in subscriptions since the beginning of the year. According to the company, Claude Code's revenue run-rate has exceeded $2.5 billion since its introduction.
The new product targets larger enterprise customers like Uber, Salesforce, and Accenture, who already use Claude Code and are seeking help managing the volume of pull requests it produces. Wu highlighted that developer leaders can activate Code Review for all engineers on their team. Once enabled, the tool integrates with GitHub, automatically analyzing pull requests and leaving comments directly in the code that highlight potential issues and recommended fixes.

The primary focus of Code Review is flagging logical errors rather than stylistic concerns. Wu emphasized the importance of this approach: "Many developers have encountered AI-generated feedback that isn't actionable, which can be frustrating. We decided to concentrate solely on logic errors so that we address the most critical issues needing resolution."

The AI tool explains its reasoning methodically, detailing the identified issue, its potential impact, and possible solutions. It categorizes the severity of issues using a color-coded system: red for urgent problems, yellow for items worth further consideration, and purple for issues related to legacy code or historical bugs.

Wu noted that the system operates by employing multiple agents working in parallel, each examining the code from a different perspective. A final agent aggregates and prioritizes the findings, removing duplicates to focus on the most important issues.

While the tool includes a light security analysis, engineering leads have the option to customize additional checks based on internal protocols. Wu mentioned that Anthropic's recently launched Claude Code Security offers a more comprehensive security analysis.

The multi-agent architecture does require significant resources, and as with other AI services, pricing is token-based, with costs varying based on code complexity—Wu estimates each review averages between $15 and $25.
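The fan-out/aggregate pattern Wu describes—specialist agents reviewing in parallel, with a final step that deduplicates and ranks findings by severity—can be illustrated with a minimal sketch. Everything here is hypothetical: the `Finding` type, the placeholder agents, and the severity ordering are illustrative stand-ins, not Anthropic's actual API or implementation.

```python
# Hypothetical sketch of a parallel multi-agent review pipeline.
# All names and agent behaviors are illustrative, not Anthropic's API.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    RED = "urgent"       # urgent problems
    YELLOW = "consider"  # worth further consideration
    PURPLE = "legacy"    # legacy code / historical bugs


@dataclass(frozen=True)  # frozen -> hashable, so findings can be deduplicated in a set
class Finding:
    file: str
    line: int
    severity: Severity
    message: str


def logic_agent(diff: str) -> list[Finding]:
    # Placeholder: a real agent would invoke a model to hunt for logic errors.
    return [Finding("app.py", 42, Severity.RED, "possible off-by-one in loop bound")]


def security_agent(diff: str) -> list[Finding]:
    # Placeholder light security pass; note the overlap with logic_agent.
    return [
        Finding("app.py", 42, Severity.RED, "possible off-by-one in loop bound"),
        Finding("db.py", 7, Severity.YELLOW, "unparameterized SQL query"),
    ]


def review(diff: str) -> list[Finding]:
    agents = [logic_agent, security_agent]
    # Fan out: run each specialist agent on the diff in parallel.
    with ThreadPoolExecutor() as pool:
        batches = pool.map(lambda agent: agent(diff), agents)
    # Aggregate: deduplicate overlapping findings, then rank by severity.
    merged = {finding for batch in batches for finding in batch}
    rank = {Severity.RED: 0, Severity.YELLOW: 1, Severity.PURPLE: 2}
    return sorted(merged, key=lambda f: (rank[f.severity], f.file, f.line))
```

In this sketch the duplicate off-by-one finding reported by both agents collapses to a single entry, and urgent (red) items sort ahead of the rest—mirroring the aggregator role the article attributes to the final agent.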
She described it as a premium and essential service as AI tools continue to generate increasing amounts of code. "[Code Review] is a response to immense market demand," Wu concluded. "As engineers leverage Claude Code, they are experiencing less friction in feature development and a rising need for code reviews. We are optimistic that this tool will empower enterprises to accelerate their development processes while significantly reducing the number of bugs."

Source: TechCrunch

Published On: Mar 09, 2026, 20:15
