Monday, 9 March 2026 | Vol 19

Anthropic launches code review tool to check flood of AI-generated code

When it comes to coding, peer feedback is crucial for catching bugs early, maintaining consistency across a codebase, and improving overall software quality. 

The rise of “vibe coding” — using AI tools that take instructions given in plain language and quickly generate large amounts of code — has changed how developers work. While these tools have sped up development, they have also introduced new bugs, security risks, and poorly understood code. 

Anthropic’s solution is an AI reviewer designed to catch bugs that humans might miss. The new product, called Code Review, launched Monday in Claude Code.

“We’ve seen a lot of growth in Claude Code, especially within the enterprise, and one of the questions that we keep getting from enterprise leaders is: Now that Claude Code is putting up a bunch of pull requests, how do I make sure that those get reviewed in an efficient manner?” Cat Wu, Anthropic’s head of product, told TechCrunch. 

Pull requests are the mechanism developers use to submit code changes for review before those changes make it into the software. Wu said Claude Code has dramatically increased code output, making pull request reviews a bottleneck to shipping.

“Code Review is our answer to that,” Wu said.

Anthropic’s launch of Code Review — arriving first to Claude for Teams and Claude for Enterprise customers in research preview — comes at a pivotal moment for the company. 

On Monday, Anthropic filed two lawsuits against the Department of Defense in response to the agency’s designation of Anthropic as a supply chain risk. The dispute will likely see Anthropic leaning more heavily on its booming enterprise business, which has seen subscriptions quadruple since the start of the year. Claude Code’s run-rate revenue has surpassed $2.5 billion since launch, according to the company.

“This product is very much targeted towards our larger scale enterprise users, so companies like Uber, Salesforce, Accenture, who already use Claude Code and now want help with the sheer amount of [pull requests] that it’s helping produce,” Wu said.

She added that developer leads can turn on Code Review to run on default for every engineer on the team. Once enabled, it integrates with GitHub and automatically analyzes pull requests, leaving comments directly on the code explaining potential issues and suggested fixes. 

The focus is on catching logic errors rather than style issues, Wu said. 

“This is really important because a lot of developers have seen AI automated feedback before, and they get annoyed when it’s not immediately actionable,” Wu said. “We decided we’re going to focus purely on logic errors. This way we’re catching the highest priority things to fix.”

The AI explains its reasoning step by step, outlining what it thinks the issue is, why it might be problematic, and how it can potentially be fixed. The system will label the severity of issues using colors: red for highest severity, yellow for potential problems worth reviewing, and purple for issues tied to pre-existing code or historical bugs. 

Wu said it does this all fast and efficiently by relying on multiple agents working in parallel, with each agent examining the codebase from a different perspective or dimension. A final agent aggregates and ranks the findings, removing duplicates and prioritizing what’s most important. 
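The fan-out-and-aggregate pattern Wu describes can be sketched in a few lines. This is purely illustrative — the agent names, severity scheme, and data shapes below are hypothetical stand-ins, not Anthropic's actual implementation; real "agents" would be model calls rather than plain functions:

```python
# Illustrative sketch of a fan-out/aggregate review pattern.
# Agent names and severity values are hypothetical, not Anthropic's design.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen makes findings hashable, so a set can dedupe
class Finding:
    line: int
    message: str
    severity: int  # 3 = red (highest), 2 = yellow, 1 = purple (pre-existing)

def logic_agent(diff):
    # Stand-in perspective: look for logic errors.
    return [Finding(10, "possible off-by-one in loop bound", 3)]

def security_agent(diff):
    # Stand-in perspective: a light security pass; one finding overlaps.
    return [Finding(10, "possible off-by-one in loop bound", 3),
            Finding(22, "user input passed to query unescaped", 2)]

def history_agent(diff):
    # Stand-in perspective: issues tied to pre-existing code.
    return [Finding(5, "touches code involved in a past regression", 1)]

def review(diff, agents):
    # Fan out: each agent examines the change in parallel.
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda agent: agent(diff), agents)
    # Aggregate: merge, de-duplicate, and rank by severity (highest first).
    merged = {f for findings in results for f in findings}
    return sorted(merged, key=lambda f: (-f.severity, f.line))

findings = review("<diff>", [logic_agent, security_agent, history_agent])
for f in findings:
    print(f.severity, f.line, f.message)
```

The duplicate finding from the two agents collapses to one entry, mirroring the de-duplication step Wu describes.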

The tool provides a light security analysis, and engineering leads can customize additional checks based on internal best practices. Wu said Anthropic’s more recently launched Claude Code Security provides a deeper security analysis. 

The multi-agent architecture does mean this can be a resource-intensive product, Wu said. Similar to other AI services, pricing is token-based, and the cost varies depending on code complexity — though Wu estimated each review would cost $15 to $25 on average. She added that it’s a premium experience, and a necessary one as AI tools generate more and more code. 

“[Code Review] is something that’s coming from an insane amount of market pull,” Wu said. “As engineers develop with Claude Code, they’re seeing the friction to creating a new feature [decrease], and they’re seeing a much higher demand for code review. So we’re hopeful that with this, we’ll enable enterprises to build faster than they ever could before, and with much fewer bugs than they ever had before.”
