🧭 Anthropic Triples to $30B Run Rate and Signs 3.5-Gigawatt Google-Broadcom TPU Deal
In a pair of announcements spanning April 6-7, Anthropic disclosed that its annualised revenue run rate has crossed $30 billion — up from roughly $9 billion at the end of 2025 — a more than threefold increase in approximately three months. Alongside the revenue disclosure, the company confirmed a major compute infrastructure contract: Anthropic will receive approximately 3.5 gigawatts of next-generation TPU capacity built jointly by Google and Broadcom, coming online from 2027, on top of 1 gigawatt already flowing in 2026. Broadcom's shares rose 3.5% on the news.
The enterprise traction behind the numbers is equally striking: more than 1,000 companies now spend over $1 million annually with Anthropic — a figure that doubled in fewer than two months. Mizuho analysts estimated Broadcom would record $21 billion in AI revenue from Anthropic in 2026 alone, rising to $42 billion in 2027.
What the compute deal means in practice
- Custom silicon at gigawatt scale: Google's TPUs have historically offered Anthropic better cost-per-token economics than equivalent GPU clusters for inference-heavy workloads; 3.5 additional gigawatts is a very large forward commitment
- Capacity from 2027 — not 2026: The near-term constraint is not going away; expect rate limits to stay in place and pricing to hold steady through the rest of this year
- 1,000+ $1M+ customers is the real signal: Enterprise API revenue at that scale implies deep integration into production systems — this is no longer a pilot market
- Broadcom as a new stakeholder: Broadcom's direct financial exposure to Anthropic revenue changes the chip vendor calculus — it now has a strong incentive to win the next generation of Google TPU contracts
Practical implication for API users
The gigawatt-scale compute commitment is a strong signal that Anthropic is building for sustained high-throughput demand growth. For teams that have been deferring production migration due to capacity uncertainty, this deal materially de-risks the long-term availability story. That said, the new capacity doesn't arrive until 2027 — so if your workloads are rate-limited today, plan for that constraint to persist until then.
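Until the new capacity lands, production workloads should be built to tolerate throttling. Below is a minimal sketch of one common mitigation, exponential backoff with jitter; `request_fn` and `RateLimitError` are generic placeholders for whatever your client library provides, not Anthropic-specific APIs.

```python
import random
import time


class RateLimitError(Exception):
    """Stand-in for whatever exception your client raises on HTTP 429."""


def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Retry a rate-limited call with exponential backoff plus jitter."""
    for attempt in range(max_retries):
        try:
            return request_fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error to the caller
            # Delays grow base, 2x base, 4x base, ...; the jitter avoids
            # synchronized retries when many workers are throttled at once.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```

The jitter term matters more than it looks: without it, a fleet of throttled workers all retries on the same schedule and re-triggers the limit together.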
Tags: revenue · compute · Google Cloud · Broadcom · TPU · infrastructure
🧭 Five Anthropic Sessions at Google Cloud Next Outline the "After Software" Era for Enterprises
Anthropic has confirmed its presence at Google Cloud Next 2026 (April 22–24, Las Vegas, Booth #2021) with five sessions focused on enterprise agentic deployment. The headline session — "After Software: Anthropic's Vision for the Next Era of Enterprise AI" — frames AI agents not as productivity tools layered on top of existing software but as replacements for entire categories of software-driven work, a significant reframing of how enterprises should think about their software investment roadmap.
Session lineup
- After Software — Anthropic's macro vision: what enterprise software looks like when agents run end-to-end workflows instead of humans using point applications
- Multi-agent design patterns — architectural guidance and common failure modes when orchestrating Claude agents at production scale
- Evaluation frameworks for agentic Claude Code deployments — how to measure correctness, reliability, and cost efficiency for autonomous coding pipelines in CI/CD
- Shopify case study: the "Sidekick" commerce agent — how Shopify powers its autonomous commerce assistant via Claude on Vertex AI, including latency and tooling choices
- Design patterns for long-running agents — handling context compaction, state persistence, and graceful recovery for agents running continuously over hours or days
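The long-running-agent patterns named in that last session can be sketched concretely. The snippet below is illustrative only, assuming a JSON checkpoint file and a deliberately naive compaction rule; it is not Anthropic's actual mechanism, just the shape of the problem: fold away old context, persist state atomically, and resume cleanly after a crash.

```python
import json
import os


def compact_context(messages, keep_last=10):
    """Naive context compaction: fold older turns into one summary entry.

    A real agent would ask a model to summarize the dropped turns; here we
    only record how many were folded, to keep the sketch self-contained.
    """
    if len(messages) <= keep_last:
        return messages
    dropped = len(messages) - keep_last
    summary = {"role": "system",
               "content": f"[{dropped} earlier turns compacted]"}
    return [summary] + messages[-keep_last:]


def save_checkpoint(path, state):
    """Persist agent state atomically so a crash mid-write can't corrupt it."""
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)  # atomic rename on POSIX and Windows


def load_checkpoint(path):
    """Resume from the last checkpoint, or start fresh if none exists."""
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {"messages": [], "step": 0}
```

The write-to-temp-then-rename pattern is the key detail for graceful recovery: a process killed mid-checkpoint leaves the previous checkpoint intact rather than a half-written file.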
Why the "After Software" framing matters
Calling it the "After Software" era is a deliberate repositioning of the competitive discussion: rather than comparing Claude to competing LLMs, Anthropic is now comparing agentic Claude to SaaS applications. The Shopify case study is particularly telling — Sidekick is built on Claude via Vertex, meaning Google Cloud's distribution is now a primary route for enterprise Claude adoption alongside the direct Anthropic API. If you are building enterprise software that depends on incumbent SaaS categories, that framing should inform your roadmap thinking.
Tags: Google Cloud Next · enterprise AI · agentic · Shopify · Vertex AI · multi-agent
🧭 Claude Code's /ultraplan Command Brings Cloud-Powered Deep Planning and Sub-Agent Architecture Reviews
Shipped alongside the broader April Claude Code release wave, /ultraplan is the most significant planning mode upgrade since /plan launched in January. When invoked, Claude Code spins up a session in Anthropic's Cloud Container Runtime (CCR) with Opus 4.6 as the planner, up to 30 minutes of dedicated compute, and a cloud-synced snapshot of the repository. The system dispatches specialised sub-agents to perform risk assessments, dependency-impact analysis, and architecture reviews in parallel before returning a unified plan.
The three /ultraplan modes
- Simple Plan — equivalent to local /plan mode but runs remotely on Opus 4.6; useful when you want the stronger model without consuming local tokens
- Visual Plan — adds Mermaid and ASCII diagrams to the output, making it easier to review multi-component refactors or migration plans before approving them
- Deep Plan — dispatches parallel sub-agents for risk assessment and architectural review; approximately 2× faster than equivalent local planning for large codebases; most useful for dependency updates, large-scale refactors, and cross-service integrations
Plans drafted via /ultraplan can be refined collaboratively in the Claude Code web interface with inline comments, then executed either remotely (in CCR) or locally. The command requires a Max plan subscription for Deep Plan mode; Simple and Visual modes are available on Pro.
```shell
# Invoke ultraplan — prompts for mode selection
/ultraplan

# Start directly in Deep Plan mode (Max only)
/ultraplan --deep

# Open the generated plan in the web interface for collaborative review
/ultraplan --web
```
When to reach for /ultraplan vs /plan
Use /plan for focused, single-file or small-scope changes where local context is sufficient. Reach for /ultraplan when the task crosses multiple modules, involves dependency upgrades with ripple effects, or requires a risk assessment you would otherwise do manually. The parallel sub-agent architecture means Deep Plan results are typically ready within 4–6 minutes for codebases under 200k lines — faster than writing out the plan yourself.
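That guidance can be expressed as a rough decision heuristic. The function and thresholds below are illustrative assumptions distilled from the paragraph above (the 200k-line figure, cross-module scope, manual risk review), not official rules:

```python
def suggest_planner(total_lines, modules_touched, needs_risk_review=False):
    """Illustrative heuristic for picking a planning command.

    total_lines: approximate size of the codebase in lines.
    modules_touched: how many modules/services the change crosses.
    needs_risk_review: whether you'd otherwise do a manual risk assessment.
    """
    # Small, local scope: the lightweight local planner is enough.
    if modules_touched <= 1 and not needs_risk_review:
        return "/plan"
    # Large codebases, wide blast radius, or explicit risk review
    # benefit from Deep Plan's parallel sub-agents (Max plan only).
    if needs_risk_review or total_lines > 200_000 or modules_touched > 3:
        return "/ultraplan --deep"
    # Everything in between: remote planning without the sub-agent fan-out.
    return "/ultraplan"
```

Treat the output as a starting point; a one-line change to a shared dependency can still warrant Deep Plan regardless of what a size heuristic says.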
Tags: /ultraplan · Claude Code · planning · sub-agents · CCR · Opus 4.6