🧭 Claude Code Gets Automated Routines and a Redesigned Desktop App
Anthropic released two substantial Claude Code updates on April 14–15 that together shift Claude Code from a conversational tool toward a persistent background agent. The first is Routines (research preview): a way to save a Claude Code configuration — your system prompt, connected repositories, and MCP servers — that then runs on Anthropic's cloud infrastructure on a schedule or trigger, without needing your laptop to stay open. The second is a redesigned desktop app that collapses the context-switching that previously plagued multi-session development work.
Routines: cloud-scheduled Claude Code without the open laptop
A Routine bundles three things: a natural-language task description, one or more repo connections, and any MCP server attachments. Once saved, it can be triggered on a cron schedule (e.g. every weekday at 07:00) or by an external event (a GitHub webhook, a CI pass, an incoming Slack message via an MCP bridge). Anthropic's infrastructure runs the Claude Code session, so the user's machine is not involved in execution.
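The shape of a Routine can be sketched as plain data. Everything below (the `Routine` class, its field names, the example repo and MCP server) is a hypothetical illustration of the structure described above, not Anthropic's actual Routine schema; the trigger uses standard cron syntax for "every weekday at 07:00":

```python
from dataclasses import dataclass, field

@dataclass
class Routine:
    """Hypothetical sketch of what a saved Routine bundles."""
    task: str                                        # natural-language task description
    repos: list = field(default_factory=list)        # connected repositories
    mcp_servers: list = field(default_factory=list)  # MCP server attachments
    trigger: str = ""                                # cron expression or event name

# "Every weekday at 07:00" as a standard cron expression:
# minute=0, hour=7, any day-of-month, any month, Mon-Fri
weekday_lint = Routine(
    task="Run the lint suite and post a summary of new warnings to Slack.",
    repos=["github.com/example/backend"],
    mcp_servers=["slack-bridge"],
    trigger="0 7 * * 1-5",
)

print(weekday_lint.trigger)  # -> 0 7 * * 1-5
```

The cron string is the part worth getting right: `1-5` restricts the day-of-week field to Monday through Friday, which is what "every weekday" means here.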
Quotas at launch: 5 Routine runs per day on Claude Pro, 15 on Max, 25 on Team and Enterprise. Each run counts against the plan's existing Claude Code usage, so heavy users should budget accordingly.
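A quick back-of-envelope calculation makes the ceilings concrete (assuming a 30-day month and a Routine that fires every day; remember each run also draws down the plan's normal Claude Code usage):

```python
# Launch quotas: Routine runs per day, by plan (figures from the announcement above)
daily_quota = {"Pro": 5, "Max": 15, "Team": 25, "Enterprise": 25}

# Rough monthly ceiling, assuming the Routine fires every day of a 30-day month
monthly_ceiling = {plan: runs * 30 for plan, runs in daily_quota.items()}

for plan, ceiling in monthly_ceiling.items():
    print(f"{plan}: up to {ceiling} Routine runs/month")
```

A Pro plan tops out at 150 runs a month, which is plenty for one daily hygiene job but tight if you want several Routines per repository.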
What Routines are actually useful for now
The most practical immediate use case is a daily or weekly code-hygiene agent: run a linting pass, generate a dependency-audit diff, or post a changelog summary to Slack. Routines are not yet suitable for write-heavy tasks that need human review of every change — the run produces output that still requires a human to act on. Think of this release as the scheduling infrastructure arriving before the full autonomous execution model. The approval-gate workflow (review → merge → deploy) still belongs to you.
Desktop app redesign: one window, everything in it
The redesigned Claude Code desktop app ships with an integrated terminal pane, a faster inline diff viewer (replaces the previous external pop-out), an in-app file editor with syntax highlighting, an expanded preview area for rendered output, and native multi-session support — all within a single window. The practical benefit: you no longer need to alt-tab between Claude Code's sidebar, your terminal, and your editor to review what the agent actually changed. The diff viewer alone eliminates the most common complaint in the Claude Code GitHub issue tracker.
Claude Code · Routines · scheduled agents · desktop app · automation · developer tools · cloud execution
🧭 Investors Are Offering Anthropic $800 Billion — IPO Discussions Have Begun
Bloomberg reported on April 14 that Anthropic has received multiple unsolicited investor offers for a new funding round valuing the company at approximately $800 billion or higher, more than double the $350 billion pre-money valuation attached to its $3 billion February raise (a post-money of approximately $353 billion). Anthropic has so far declined to accept any of the new offers, but has not closed the door on a future raise. Secondary-market data from Caplight shows Anthropic shares changing hands at an implied valuation of $688 billion, up roughly 75% from three months ago.
Separately, people familiar with the matter told Bloomberg that Anthropic has begun early-stage discussions with investment banks about a possible public listing, with an October 2026 IPO window cited as a scenario under consideration — though no formal mandate has been awarded and no filing timeline has been set.
What is driving the valuation jump
Revenue acceleration. Annualised revenue has crossed $30 billion, up from approximately $9 billion at end-2025 — roughly tripling in one quarter, driven by the enterprise API and Claude Code seat expansion.
Enterprise depth. More than 1,000 businesses are each spending $1 million or more annually with Anthropic, a customer cohort that commands premium revenue multiples because of its stickiness.
Compute supply secured. The Google/Broadcom TPU deal (3.5 gigawatts by 2027), the CoreWeave GPU agreement, and the Amazon Inferentia arrangement collectively reduce the near-term risk of supply constraints choking growth — a concern that previously weighed on investor confidence.
What this means for API users
A public listing would impose quarterly disclosure requirements, which historically leads enterprise software companies to prioritise revenue predictability over experimental pricing. If Anthropic lists in late 2026, watch for: (1) API pricing tiers becoming more rigid, (2) aggressive enterprise contract lock-ins ahead of the IPO to demonstrate ARR, and (3) possible consolidation of experimental beta programmes into paid tiers. None of this is confirmed, but it is the familiar pre-IPO pattern in enterprise software, and worth factoring into long-term build decisions.
🧭 US Treasury Rushes to Access Mythos to Scan Federal Financial Infrastructure
Bloomberg and Semafor both reported on April 14–15 that the US Treasury Department's technology leadership is seeking access to Anthropic's restricted Mythos Preview model specifically to hunt for cybersecurity vulnerabilities in federal financial systems — including core payment processing infrastructure and inter-agency data exchange networks. Treasury CIO Sam Corcos briefed the department's cybersecurity team and directed it to prepare evaluation protocols for the model.
The context: Mythos Preview has already been credited with finding thousands of high-severity vulnerabilities — including novel zero-days in every major operating system and in widely deployed web browsers — across its Project Glasswing deployment with 40+ vetted partners. The same Bloomberg report noted that Wall Street banks have been quietly testing Mythos internally at the urging of their federal regulators.
Why this matters beyond the security beat
This story is notable not just because Mythos is capable, but because of the institutional posture it reveals. The US Treasury is not waiting for a government-procurement programme or a bilateral agreement with Anthropic — it is rushing to secure access to a model that is not publicly available, through informal channels, because it perceives a time-sensitive vulnerability-hunting window before adversaries develop equivalent capability. That urgency is a signal about where government AI adoption is actually heading: not through normal procurement but through exception-path access agreements, which are faster but also much less standardised.
What developers building on Claude should watch
Anthropic is simultaneously managing: (1) Project Glasswing restricted access for vetted partners, (2) Treasury and regulatory requests for access, and (3) active enforcement against the 16 million unauthorised exchanges logged by the Frontier Model Forum. This is a lot of access-control surface to manage on a model that is not public. If you are building on the standard Claude API and your use case edges toward security research or infrastructure vulnerability assessment, expect Anthropic's trust-and-safety review process to become more stringent over the next 12 months — not less — as Mythos access politics heat up.
Mythos · cybersecurity · US Treasury · federal government · vulnerability research · Project Glasswing · access control
🧭 Users Revolt After Anthropic Silently Cut Claude's Default Reasoning Depth by Nearly Three-Quarters
A growing wave of developer complaints crested this week after AMD Senior Director Stella Laurenzo published a detailed GitHub analysis showing that Claude Code's median thinking depth had dropped from approximately 2,200 characters to around 600 characters — a reduction of roughly 73% — between late February and early April 2026. Fortune, The Register, and VentureBeat all covered the resulting controversy. The core issue is not the change itself, but that Anthropic made it without any public changelog entry or user communication.
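The measurement behind the analysis is simple to reproduce in principle: collect thinking traces, take the median character length, and compare against a baseline. A minimal sketch (the trace values are illustrative, not Laurenzo's dataset; note that the two reported medians, 2,200 and 600, imply a drop of about 73%):

```python
from statistics import median

def median_depth(traces: list[str]) -> float:
    """Median character length of a set of thinking traces."""
    return median(len(t) for t in traces)

def pct_drop(baseline: float, current: float) -> float:
    """Percentage reduction from baseline to current."""
    return 100 * (baseline - current) / baseline

# Illustrative figures matching the reported medians
feb_median, apr_median = 2200, 600
print(f"drop: {pct_drop(feb_median, apr_median):.0f}%")  # -> drop: 73%
```

The fragile part of any such analysis is sampling: median depth varies with task mix, so a fair before/after comparison needs the same prompt distribution in both windows.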
What actually changed
In late February / early March, Anthropic silently shifted Claude Opus 4.6's default effort level from high to medium (internally mapped to effort level 85 on a 0–100 scale). The change came in response to feedback that excessive token consumption was making long Claude Code sessions prohibitively expensive for users paying per-token on the API. Claude Code lead Boris Cherny acknowledged the change was intentional when pressed on GitHub, but confirmed it had not been announced.
How to restore the previous behaviour
Users who want the previous high-effort reasoning can override the default in their Claude Code settings or CLAUDE.md:
```
# In your CLAUDE.md or system prompt
You are operating at maximum reasoning effort. Do not abbreviate
your internal thinking chain. Work through every problem step by
step before providing a response.
```

Or via the Claude Code settings.json:

```json
{
  "effort": "high"
}
```
The real issue: trust, not tokens
Anthropic's decision to change a fundamental quality parameter without a changelog entry has reopened a recurring criticism: that the company treats silent model changes as acceptable because models are not versioned software. Many production users have no way to detect silent quality regressions without running continuous evaluation harnesses — an expensive operational overhead most teams cannot afford. If you do not already run a lightweight weekly eval (10–20 representative prompts, scored against a rubric), this incident is the clearest argument yet for building one. A sudden 73% drop in reasoning depth on a task your product depends on is indistinguishable from a bug in your own code unless you have a baseline to compare against.
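A weekly eval harness does not need to be elaborate. The sketch below runs a fixed prompt set through a model caller you supply, scores each response against a keyword rubric, and flags a regression against a stored baseline; the `call_model` callable, the rubric format, and the file layout are all assumptions for illustration, not any specific Anthropic tooling:

```python
import json
from pathlib import Path
from statistics import mean

def score(response: str, rubric: list[str]) -> float:
    """Fraction of required rubric keywords present in the response."""
    hits = sum(1 for kw in rubric if kw.lower() in response.lower())
    return hits / len(rubric)

def run_eval(prompts, call_model, baseline_path="baseline.json", tolerance=0.1):
    """prompts: list of (prompt, rubric) pairs. call_model: prompt -> response str.

    Prints a warning if the mean score falls more than `tolerance`
    below the stored baseline, then updates the baseline file.
    """
    scores = [score(call_model(p), rubric) for p, rubric in prompts]
    current = mean(scores)
    path = Path(baseline_path)
    if path.exists():
        baseline = json.loads(path.read_text())["mean_score"]
        if current < baseline - tolerance:
            print(f"REGRESSION: {current:.2f} vs baseline {baseline:.2f}")
    path.write_text(json.dumps({"mean_score": current}))
    return current

# Usage with a stubbed model call:
prompts = [("Explain HTTP caching.", ["cache-control", "etag"])]
fake_model = lambda p: "Use Cache-Control headers and ETag validation."
print(run_eval(prompts, fake_model))  # -> 1.0
```

Keyword scoring is crude but cheap; the point is having any stable baseline at all, so a week-over-week drop shows up as a number rather than a hunch.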