*Claude Code Hits $2.5B in Revenue, Rolls Out Auto Mode with AI Classifier*
Anthropic's Claude Code has passed $2.5 billion in revenue. Alongside the milestone, the company introduced auto mode, an AI classifier that decides which actions are safe to perform on a user's machine. The feature will appeal to developers who want fewer approval prompts, but it also raises a question of how much trust to place in a black-box model.
**Auto Mode: A Middle Ground Between Manual Approval and Blind Trust**
Until now, Claude Code users had two options: manually approve every file write and bash command, or use the skip-permissions flag (`--dangerously-skip-permissions`), which lets the AI act without human oversight. Auto mode aims for a middle ground. An AI safety classifier evaluates every tool call in real time, automatically approving routine tasks such as writing files and running tests while blocking potentially destructive operations like mass deletion or data exfiltration.
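To make the allow/block behavior concrete, here is a minimal rule-based sketch of what gating a tool call could look like. Everything in it (the tool names, patterns, and `evaluate_tool_call` function) is an illustrative assumption; Anthropic's actual classifier is a machine learning model whose rules are not published.

```python
# Hypothetical rule-based approximation of an auto-mode gate.
# All names and rules are illustrative, not Anthropic's classifier.
import re

# Shell patterns a cautious gate might treat as destructive.
DESTRUCTIVE_PATTERNS = [
    r"\brm\s+-rf\b",            # mass deletion
    r"\bcurl\b.*\|\s*(ba)?sh",  # piping a remote script into a shell
]

# Tool calls a gate might consider routine.
ROUTINE_TOOLS = {"write_file", "read_file", "run_tests"}

def evaluate_tool_call(tool: str, argument: str) -> str:
    """Return 'allow', 'block', or 'ask' for a proposed tool call."""
    if tool == "bash":
        if any(re.search(p, argument) for p in DESTRUCTIVE_PATTERNS):
            return "block"
        return "ask"  # ambiguous shell commands fall back to human review
    if tool in ROUTINE_TOOLS:
        return "allow"
    return "ask"

print(evaluate_tool_call("write_file", "src/app.py"))  # allow
print(evaluate_tool_call("bash", "rm -rf /"))          # block
print(evaluate_tool_call("bash", "git status"))        # ask
```

The difference in auto mode is that the decision comes from a model rather than an auditable rule table like the one above, which is precisely what the transparency concerns below are about.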
**The Black Box Classifier: A Concern for Security**
Convenient as auto mode is, it puts a black-box machine learning model in charge of security decisions about the file system. Anthropic has not published what the classifier allows or blocks, so users cannot audit its rules or anticipate its failure modes. The company itself acknowledges that the classifier "may still allow some risky actions" when intent is ambiguous.
**Channels and the Managed Alternative to OpenClaw**
In addition to auto mode, Anthropic has launched Channels, a feature that lets users control Claude Code through Discord and Telegram. The move looks like a response to OpenClaw, an open-source project that gained over 100,000 GitHub stars. Anthropic had previously sent a cease and desist to OpenClaw's creator, and now appears to be building a managed alternative to it.
**Other AI Trust Issues and Industry Developments**
Meanwhile, Cursor has launched Composer 2, a tool built on Kimi K2.5, a Chinese open-source model from Moonshot AI. The company did not disclose this; a developer uncovered the model identifier by intercepting API traffic. In another trust-related development, GitHub announced that it will train on Copilot user data by default starting April 24, raising questions about data ownership and control.