With the tech industry singularly occupied with AI models, Anthropic is having an exceptionally good year.
The company may soon pull ahead of its main competitor, as it looks to raise tens of billions of dollars in a funding round that would put its valuation at some $950 billion (OpenAI was valued at $854 billion in its March round), and business customers increasingly express a preference for Claude over ChatGPT. A recent report showed Anthropic has lately outpaced OpenAI among business customers, quadrupling its market share since May 2025.
Cat Wu, Anthropic’s head of product for Claude Code and Cowork, has been a key figure in that success. Since joining the company in August 2024, Wu has helped shepherd Claude through a critical phase, leveling it up from a purely informational chatbot to a coding tool and beyond. Wu, who oversees the development of new features, is often paired with Boris Cherny, a core member of Anthropic’s technical staff and the creator of Claude Code, leading the pair to be characterized as Anthropic’s “Batman and Robin.”
Wu sat down with me at last week’s second annual Code with Claude conference in San Francisco, where she discussed how she thinks about product strategy, and how she hopes the experience of using Claude will change in the future.
This interview has been edited for length and clarity.
When you’re looking at product strategy, how much of it is reactive to your peers or your competitors? Do you think about that at all?
The main thing that we design for is staying on the exponential, so I think, across our team, we instill in everyone the lesson that AI will just continue to get better. For us, we just want to stay at this frontier. We don’t think about competitors. I think if you do think about competitors, you end up being, like, forever two weeks, or like, a month behind however fast you can execute. And so it’s generally not the best way to stay at the frontier.
Anthropic released at least six models last year and has already released almost as many this year. Do you expect this pace of development to continue?
Our hope is that it continues (laughing). I think the models are still improving at a very steady pace, and so we should be able to keep sharing those with our users. I think the deployments might look a little different — like how we handled Glasswing — but as much as possible, we want this intelligence to benefit as many people as possible, and it needs to be handled in a very safe way, which is why we handled Glasswing [in the way that we did].
[Glasswing is an initiative that Anthropic launched in April that invited a small consortium of partner organizations — including companies like Amazon, Apple, CrowdStrike, and Microsoft — to gain access to its new cybersecurity model, Mythos. Unlike many of Anthropic’s other AI models, Mythos is not being given a general public release. The company has claimed that it fears the model — which is designed to scan codebases for software vulnerabilities — is too powerful, and could be weaponized by bad actors.]
You said in a previous interview that the future of work is basically employees managing fleets of agents. It seems like that could eventually lead to a situation where the agents are better at the job, or know the job better, than the human.
I think it is extremely hard to manage agents if you can’t do the job yourself. I think the managers still need to be experts in their domain. This is a new skill set that a lot of people are going to have to learn, but managing agents is actually very similar to being a manager of people, in the sense that you have to understand, like, why did the agent make this mistake? Did it misinterpret my instruction? Was my request under-specified? You have to be able to debug it.
It does seem like the long-term effect is to cut down on team size, though. Because if you have agents doing a task, then you don’t need an intern, right?
Ideally, I think the idea is that everyone can get a lot more done. I think that, for everyone’s job, there is always this percentage of it that is really tedious. For me, it’s responding to emails. I think everyone has this part of their life… So my hope is that it [the AI agents] actually does that, and then everyone has, like, all of these cool things that they’re going to want to build [in their spare time].
What are you guys most excited about in the next six months?
I think the next big thing is proactivity. Last year we were in this world of synchronous development. Right now, people are moving to routines — so, like, automating, for example, responses to customer support tickets. And I think the next step is that Claude understands what you work on, and just sets up some of these automations for you.