Anthropic has unveiled Claude Code Security, a cutting-edge cybersecurity feature built directly into Claude Code on the web. The new feature, currently in a limited research preview, is aimed at helping security teams identify and fix vulnerabilities in complex codebases more efficiently than traditional tools.
Development organizations struggle to keep up with the ever-increasing volume of software vulnerabilities and a shortage of highly skilled security engineers. Most standard automated analysis tools concentrate mainly on patterns of known vulnerabilities, so they often miss subtle, context-dependent issues that can later be exploited as attack vectors. Claude Code Security aims to address this problem by applying AI reasoning to software code the way a human expert would.
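To make the distinction concrete, here is a minimal, hypothetical sketch (not from Anthropic) of the kind of context-dependent flaw the article describes: a SQL injection where the tainted value passes through an intermediate helper before reaching the query, so a single-line pattern matcher that only flags the point where input enters the program can miss the link.

```python
# Hypothetical illustration: a context-dependent SQL injection that
# line-by-line pattern matching can miss, because the tainted value
# travels through a helper function before reaching the query.
import sqlite3

def normalize(name: str) -> str:
    # Looks like sanitization, but only trims whitespace.
    return name.strip()

def find_user(conn: sqlite3.Connection, raw_name: str):
    safe_looking = normalize(raw_name)
    # Vulnerable: user input is interpolated into the SQL string.
    # Spotting this requires tracing data flow across functions,
    # not just matching a pattern at the input site.
    query = f"SELECT id FROM users WHERE name = '{safe_looking}'"
    return conn.execute(query).fetchall()

def find_user_fixed(conn: sqlite3.Connection, raw_name: str):
    # The safe version binds the value with a parameterized query.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (normalize(raw_name),)
    ).fetchall()
```

Reasoning about how data moves through the application, as described in the quote below, is what distinguishes this class of finding from simple signature matching.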
“Nothing is applied without human approval: Claude Code Security identifies problems and suggests solutions, but developers always make the call.”
Claude Code Security scans entire codebases for security weaknesses, reasons about component interactions and data flow, and proposes targeted software patches for review. Findings are verified through a multi-stage process that filters out false positives and assigns severity ratings to help teams prioritize the most critical issues. Verified results are displayed in a centralized dashboard where engineers can assess both vulnerability details and patch recommendations.
The capability is initially offered as a limited research preview to Enterprise and Team customers, with expedited access available for open-source project maintainers. Anthropic emphasizes collaboration during this early stage to refine the tool’s effectiveness and ensure responsible deployment.
“Claude reads and reasons about your code the way a human security researcher would: understanding how components interact, tracing how data moves through your application, and catching complex vulnerabilities that rule-based tools miss.”
The company’s internal cybersecurity research, backed by its Frontier Red Team, has driven the development of Claude Code Security. Over the past year, Anthropic has stress-tested the AI’s defensive capabilities through competitive Capture-the-Flag events and partnerships with external research organizations. Using the new Claude Opus 4.6 model, Anthropic’s team identified more than 500 previously undetected vulnerabilities in open-source codebases, some of which had persisted for decades despite expert review.
Claude Code Security is only one part of a larger initiative to raise cybersecurity standards in software engineering and DevSecOps workflows. By combining AI-powered reasoning with human expertise, Anthropic aims to help corporate security teams strengthen their security posture while quickly surfacing hidden risks that might otherwise be overlooked.
Security teams can apply for early access or to participate in the research preview through Anthropic's official access channels.