Anthropic Inks $1.8 Billion, Seven-Year Cloud Deal With Akamai — the Largest in Akamai's History
Bloomberg reported May 8 that Anthropic has signed a $1.8 billion, seven-year cloud computing contract with Akamai Technologies — the largest single deal in Akamai's 27-year history. Akamai disclosed the agreement during its Q1 2026 earnings call without naming the customer; Bloomberg subsequently confirmed the counterparty was Anthropic. Akamai's stock surged more than 40% in the week following the announcement, its best weekly performance since 2013.
What the deal provides
Akamai's cloud infrastructure — built around its globally distributed CDN edge network and GPU compute clusters — gives Anthropic low-latency inference capacity in regions where Google Cloud, AWS, and Azure have longer deployment cycles. The deal is additive to Anthropic's existing compute relationships with Google (a multi-year TPU commitment), Amazon Bedrock, and the SpaceX Colossus 1 agreement announced two days earlier. Anthropic is now the anchor customer for at least four distinct compute providers simultaneously, a level of infrastructure diversification unprecedented for a company of its age.
Why Anthropic needs this much compute
Fortune's May 8 reporting on the deal revealed the underlying driver: Claude's annualised revenue grew 80-fold in a single quarter, to approximately $30 billion — triple the figure from a year earlier. Corporate customers including Uber and Netflix are heavy users, primarily through Claude Code. The pace of growth turned what had been a managed capacity constraint into, in Fortune's words, "an infrastructure emergency." The SpaceX and Akamai deals are the visible response to that emergency; both were reportedly negotiated at speed under timeline pressure rather than through normal procurement cycles.
Rapid demand growth at this scale typically degrades API reliability before new capacity comes online. The rate limit increases announced with the SpaceX deal are a lagging indicator — capacity improvements take weeks to propagate. If your application depends on low-latency Claude API responses during peak periods, ensure your retry logic and timeout handling are robust. Building in graceful degradation (caching common responses, queueing non-real-time requests) provides a buffer while infrastructure catches up with demand.
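The resilience advice above can be sketched in code. This is a minimal, SDK-agnostic illustration (the function and class names are hypothetical, not part of any Anthropic client library): a retry wrapper with exponential backoff and jitter, plus a small TTL cache for responses to repeated prompts.

```python
import random
import time
from typing import Callable, Tuple, TypeVar

T = TypeVar("T")

def call_with_retries(
    fn: Callable[[], T],
    max_attempts: int = 4,
    base_delay: float = 0.5,
    max_delay: float = 8.0,
    retry_on: Tuple[type, ...] = (TimeoutError, ConnectionError),
) -> T:
    """Call fn(), retrying transient failures with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except retry_on:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # Exponential backoff, capped, with full jitter so that many
            # clients retrying at once do not hammer the API in lockstep.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
    raise RuntimeError("unreachable")

class ResponseCache:
    """Tiny TTL cache for identical prompts: one form of graceful degradation."""
    def __init__(self, ttl_seconds: float = 300.0) -> None:
        self.ttl = ttl_seconds
        self._store: dict = {}

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() > expires:
            del self._store[key]
            return None
        return value

    def put(self, key: str, value) -> None:
        self._store[key] = (value, time.monotonic() + self.ttl)
```

In practice `fn` would wrap the actual API call, and the exception tuple would be widened to cover the client library's rate-limit and overload errors; non-real-time work would go to a queue rather than through the retry path at all.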