🧭 Claude Opus 4.7 Is Here — Task Budgets, xhigh Effort, High-Res Images and a New Tokenizer
Anthropic released Claude Opus 4.7 (model ID: claude-opus-4-7) on April 16, 2026, making it the new generally available flagship model. The release brings three new capabilities and several breaking changes significant enough to warrant a migration audit before you update your API calls. Here is what changed and what it means in practice.
New capability 1 — Task budgets (beta)
Task budgets let you tell Opus 4.7 how much token capacity it should reserve for a full agentic loop, not just a single response. Instead of the model deciding ad hoc when to wrap up a subtask, you set an advisory limit via the new task-budgets-2026-03-13 beta header. The model self-moderates to stay under budget while still completing the work: similar in spirit to a hard-cap prompt, but implemented at the inference level rather than as a system-prompt constraint.
import anthropic

client = anthropic.Anthropic()

# Beta features are opted into via the betas parameter on the beta client
response = client.beta.messages.create(
    model="claude-opus-4-7",
    max_tokens=8192,
    betas=["task-budgets-2026-03-13"],
    messages=[{"role": "user", "content": "Refactor the auth module."}],
)
When to use task budgets
Task budgets are most useful for multi-step agentic workflows — code generation, document analysis, or tool-calling loops — where you want predictable cost and latency. Set your budget to 2–3× the typical completion token count for your use case; the model will prioritise completing the task within budget rather than over-elaborating. This is particularly valuable for batch processing pipelines where token cost predictability matters more than absolute completeness.
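The 2–3× heuristic above is easy to turn into a small helper. The `task_budget` function below is a hypothetical utility, not part of the SDK; it derives an advisory budget from a sample of historical completion token counts, using a multiplier in the recommended range:

```python
import math

def task_budget(completion_token_samples, multiplier=2.5):
    """Pick an advisory task budget from observed completion sizes.

    Applies the 2-3x heuristic: take a typical (median) completion
    token count and scale it, rounding up to a 512-token boundary so
    budgets stay easy to compare across runs.
    """
    samples = sorted(completion_token_samples)
    median = samples[len(samples) // 2]
    raw = median * multiplier
    # Round up to the nearest 512 tokens for a tidy limit.
    return math.ceil(raw / 512) * 512
```

Feed it the completion token counts from a representative batch of past runs, and use the result as your budget for that workload.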
New capability 2 — xhigh effort level
Opus 4.7 adds a new xhigh effort level above the existing high setting, targeted at coding and complex agentic tasks. Where high effort enabled extended chain-of-thought, xhigh engages a deeper planning loop before generating the first response token. Expect significantly higher token consumption — roughly 2–3× the output of high effort — but noticeably stronger performance on problems involving multiple interdependent steps.
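The request shape for selecting an effort level is not shown above, so the following is only a sketch: it assumes a top-level effort field on the request body, and validates against the two levels this section names. Check the API reference for the actual parameter before relying on it.

```python
VALID_EFFORT = {"high", "xhigh"}  # the two levels named in this post

def build_request(prompt, effort="high", model="claude-opus-4-7", max_tokens=8192):
    """Assemble a Messages API request body with an effort level.

    The top-level `effort` field is an assumption about the API shape,
    not a documented parameter name.
    """
    if effort not in VALID_EFFORT:
        raise ValueError(f"unknown effort level: {effort!r}")
    return {
        "model": model,
        "max_tokens": max_tokens,
        "effort": effort,
        "messages": [{"role": "user", "content": prompt}],
    }
```

Given the roughly 2–3× output token overhead, reserving xhigh for multi-step coding and agentic work rather than making it a default is the sensible pattern.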
New capability 3 — High-resolution image support
The maximum image resolution has been raised from 1568px / 1.15MP to 2576px / 3.75MP, with true 1:1 pixel coordinate mapping for computer-use workflows. In practice this means Opus 4.7 can accurately locate and interact with fine UI elements (small buttons, dense tables, code editors) that were previously outside its coordinate precision. The change primarily benefits computer-use agents operating on high-DPI displays.
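Whether the API rejects or silently downscales oversized images is not stated here, so a client-side pre-check is the safe option. A minimal sketch using the limits quoted in this section (2576px longest edge, ~3.75MP), with a uniform scale factor so the aspect ratio, and therefore coordinate mapping, stays proportional:

```python
MAX_EDGE_PX = 2576        # longest-edge limit quoted for Opus 4.7
MAX_PIXELS = 3_750_000    # ~3.75 megapixels

def fits_native(width, height):
    """True if an image is within the native resolution limits."""
    return max(width, height) <= MAX_EDGE_PX and width * height <= MAX_PIXELS

def downscale_factor(width, height):
    """Smallest uniform scale factor (<= 1.0) that brings an image in-bounds.

    Returns 1.0 when no resize is needed; scaling both axes by the same
    factor preserves aspect ratio.
    """
    edge_scale = MAX_EDGE_PX / max(width, height)
    area_scale = (MAX_PIXELS / (width * height)) ** 0.5
    return min(1.0, edge_scale, area_scale)
```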
Breaking changes — migration required
Three parameters have been removed entirely: temperature, top_p, and top_k. Any API call that sets these will now receive a 400 error. Additionally:
- Adaptive thinking only: Opus 4.7 uses adaptive thinking by default; extended thinking budgets set via the old API shape now return a 400. Remove explicit thinking budget blocks from your prompts.
- Thinking content omitted by default: The model's internal thinking chain is no longer streamed or returned unless explicitly requested. This improves latency but will break any code that parsed thinking tokens.
- New tokenizer: Opus 4.7 uses an updated tokenizer that encodes text differently, producing approximately 35% more tokens for the same input. Audit any max_tokens or cost-cap logic that assumed Opus 4.6 tokenization rates.
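A pre-flight scrub of request payloads catches the removed parameters before the API returns a 400. A minimal sketch, assuming your requests are plain dicts and that the old extended-thinking budget lived in a top-level thinking block:

```python
REMOVED_PARAMS = ("temperature", "top_p", "top_k")

def migrate_request(params):
    """Drop parameters that Opus 4.7 rejects with a 400.

    Also strips an explicit extended-thinking budget block, since
    Opus 4.7 uses adaptive thinking only. Returns a cleaned copy plus
    the list of removed keys, for logging during migration.
    """
    cleaned = dict(params)
    dropped = []
    for key in REMOVED_PARAMS + ("thinking",):
        if key in cleaned:
            cleaned.pop(key)
            dropped.append(key)
    return cleaned, dropped
```

Logging the dropped keys per request gives you a quick inventory of which call sites still carry legacy parameters.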
The 35% tokenizer change will surprise you if you don't audit first
If you have production code that sets max_tokens close to a hard limit (e.g. to cap API spend), the new tokenizer means your existing limits will be hit ~35% sooner per equivalent task. Run a token-count comparison on a representative sample of your prompts before migrating. The Anthropic Python SDK's client.messages.count_tokens() method supports Opus 4.7 and is the right tool for this audit.
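Treat the ~35% figure as a headline number and measure your own workload with the token-counting audit; once you have a measured ratio, rescaling old limits is simple arithmetic. A sketch of both steps (the counts would come from running count_tokens over the same prompts under each model):

```python
import math

HEADLINE_INFLATION = 1.35  # ~35% more tokens, per the release notes

def measured_inflation(old_counts, new_counts):
    """Ratio of total Opus 4.7 tokens to Opus 4.6 tokens on the same prompts."""
    return sum(new_counts) / sum(old_counts)

def adjusted_max_tokens(old_limit, inflation=HEADLINE_INFLATION):
    """Scale an Opus 4.6-era max_tokens cap so equivalent tasks still fit."""
    return math.ceil(old_limit * inflation)
```

Prefer the measured ratio over the headline figure when setting new caps; tokenizer inflation can vary by content type (prose vs code vs JSON).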
Tags: Opus 4.7, model launch, task budgets, xhigh effort, high-res images, tokenizer, adaptive thinking, breaking changes
🧭 Novartis CEO Vas Narasimhan Joins Anthropic's Board via the Long-Term Benefit Trust
Anthropic announced on April 14 that Vas Narasimhan, CEO of Novartis and one of the most prominent executives in global pharma, has been appointed to Anthropic's Board of Directors, filling a seat controlled by the Long-Term Benefit Trust (LTBT), the independent oversight body that holds a minority equity stake with majority board-appointment rights. With this addition, LTBT-appointed directors now constitute a majority of Anthropic's full board, a structural arrangement that is rare among venture-backed AI companies at this valuation.
Who is Vas Narasimhan
- CEO of Novartis since 2018; oversaw the approval of more than 35 novel medicines during his tenure
- Previously chaired PhRMA (the US pharmaceutical industry trade group); sits on the National Academy of Medicine and Harvard Medical School boards
- A physician-scientist who led Novartis's internal AI transformation programme — one of the first large pharma companies to operationalise AI-assisted drug discovery at scale
- The first pharmaceutical executive to join Anthropic's board
Why this appointment matters beyond the headline
The LTBT's mandate is to ensure Anthropic's long-term-benefit mission takes precedence over short-term financial returns — particularly relevant as Anthropic approaches a potential IPO and the associated pressure to maximise near-term revenue. Adding a board member whose entire career has involved deploying breakthrough technology responsibly within one of the most heavily regulated industries in the world signals that Anthropic is preparing the board for exactly that kind of pressure. Narasimhan brings direct, hard-won experience navigating the gap between what is technically possible and what is safe enough to ship — a domain that maps directly onto Anthropic's daily decisions about which Claude capabilities to release and under what constraints.
What LTBT majority control actually means in practice
The Long-Term Benefit Trust structure gives Anthropic's board a built-in institutional counterweight to investor pressure. When a frontier AI company IPOs, the typical dynamic is that financial shareholders dominate board seats and push for growth over caution. Anthropic's LTBT majority means that even after a public listing, a group with no financial stake in the company retains decisive board influence. Narasimhan, with no Anthropic equity, is structurally incentivised to prioritise mission alignment rather than share price — which is precisely what the structure is designed to ensure.
Tags: LTBT, governance, board of directors, Vas Narasimhan, Novartis, life sciences, responsible AI
🧭 Three Deprecations to Act On Now: Sonnet 4 & Opus 4 Retire June 15, Haiku 3 Retires April 20
Anthropic published updated deprecation entries on April 14 alongside the Opus 4.7 launch. Three models now have firm retirement dates — meaning API calls to them will fail after those dates, with no graceful fallback:
Immediate action: Haiku 3 retires April 20 (three days away)
claude-3-haiku-20240307 has a retirement date of April 20, 2026. If you have any production code, scripts, or CI pipelines still calling this model, they will begin returning errors on Monday. The recommended replacement is claude-haiku-4-5-20251001. Run a grep now:
# Find any remaining Haiku 3 calls in your codebase
grep -r "claude-3-haiku-20240307" . --include="*.py" --include="*.ts" --include="*.js"
Plan ahead: Sonnet 4 and Opus 4 retire June 15
claude-sonnet-4-20250514 → retirement date June 15, 2026. Recommended replacement: claude-sonnet-4-6
claude-opus-4-20250514 → retirement date June 15, 2026. Recommended replacement: claude-opus-4-7
You have roughly eight weeks. Use this window to run your eval suite against the replacement models: both Sonnet 4.6 and Opus 4.7 have meaningful behaviour differences from their predecessors (particularly Opus 4.7's new tokenizer), so a direct swap without testing is inadvisable.
The Opus 4.7 tokenizer means your Opus 4 tests won't transfer cleanly
Because Opus 4.7 uses a new tokenizer that consumes approximately 35% more tokens for equivalent input, any eval that compares output length, cost, or token-per-task ratios between Opus 4 and Opus 4.7 will show apparent regressions that are tokenizer artefacts rather than quality regressions. Normalise your evals by task outcome rather than token count when comparing across the deprecation boundary. For cost modelling, update your baseline: Opus 4.7 at the same task will cost more tokens, and you should verify whether the new pricing tiers offset this increase.
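One way to normalise by outcome is to compare dollar cost per completed task rather than tokens per task. The pricing figure below is a placeholder; substitute your actual per-million-token rate for each model:

```python
def cost_per_task(total_tokens, tasks_completed, price_per_mtok):
    """Dollar cost per successfully completed task.

    Comparing this figure across Opus 4 and Opus 4.7 sidesteps the
    tokenizer artefact: a model that uses ~35% more tokens but finishes
    the same tasks shows up as a cost change, not a quality regression.
    """
    if tasks_completed == 0:
        raise ValueError("no completed tasks to normalise against")
    return (total_tokens / 1_000_000) * price_per_mtok / tasks_completed
```

Running this for each model over the same eval set, with each model's own pricing tier, gives a like-for-like cost comparison across the deprecation boundary.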
Model lifecycle pattern to adopt going forward
Anthropic's deprecation policy requires at least 60 days' notice before retirement. The pattern is now consistent: a new model launches → older same-family models are deprecated the same week → the 60-day countdown begins. Pinning to versioned model IDs (e.g. claude-opus-4-20250514) rather than alias endpoints (e.g. claude-opus-4) is the right practice: aliases track the latest model silently, which can introduce unexpected behaviour changes, while versioned IDs give you explicit control at the cost of requiring manual migration on deprecation.
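A lint check over configured model IDs can enforce pinning. The heuristic below assumes versioned IDs carry an 8-digit date stamp, as in claude-opus-4-20250514; launch-day IDs without a date stamp will not match, so treat unmatched IDs as review items rather than hard failures:

```python
import re

# Date-stamped IDs (e.g. claude-opus-4-20250514) are treated as pinned;
# bare aliases (e.g. claude-opus-4) are not.
VERSIONED = re.compile(r"^claude-[a-z0-9-]+-\d{8}$")

def is_pinned(model_id):
    """True if a model ID looks explicitly version-pinned."""
    return VERSIONED.match(model_id) is not None
```

Wiring this into CI against your config files turns the alias-vs-pinned decision from a convention into an enforced invariant.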
Tags: model deprecation, Haiku 3, Sonnet 4, Opus 4, model lifecycle, migration, API versioning