2026-03-16

Sonnet 4.6, New Constitution, Memory & Context Editing


Claude Sonnet 4.6 — 1M Context Window Now Generally Available

Launched on 17 February 2026, Claude Sonnet 4.6 is a full-capability upgrade across coding, computer use, long-context reasoning, agent planning, knowledge work, and design — while holding the same price point as Sonnet 4.5 ($3 / $15 per million tokens input/output). Its headline feature — a 1 million token context window — started in beta and has since reached general availability for both Sonnet 4.6 and Opus 4.6 at standard pricing.

What 1M context makes practical
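For instance, a 1M window is large enough to pass an entire small-to-medium codebase in a single request. A minimal sketch, using a crude 4-characters-per-token heuristic (not an official tokenizer) to sanity-check the payload size before sending:

```python
import os

def gather_sources(root: str, exts: tuple = (".py", ".md")) -> str:
    """Concatenate every matching file under `root` into one prompt string."""
    parts = []
    for dirpath, _, filenames in os.walk(root):
        for name in sorted(filenames):
            if name.endswith(exts):
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="replace") as f:
                    parts.append(f"=== {path} ===\n{f.read()}")
    return "\n\n".join(parts)

def rough_token_estimate(text: str) -> int:
    """Heuristic: roughly 4 characters per token for English text and code."""
    return len(text) // 4

# corpus = gather_sources("path/to/repo")
# assert rough_token_estimate(corpus) < 1_000_000, "corpus exceeds the 1M window"
# The corpus can then be sent as a single user message to claude-sonnet-4-6.
```

For an exact count, use the token-counting endpoint rather than the heuristic above.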

Adaptive thinking mode

Sonnet 4.6 introduces adaptive thinking as the recommended reasoning mode. Instead of a fixed extended-thinking budget, the model dynamically decides when and how much to think based on task complexity — saving tokens on routine requests while still applying deep reasoning where it matters.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
prompt = "Review this pull request for correctness."  # any user request

response = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=16000,
    thinking={"type": "adaptive"},   # ← recommended for Sonnet 4.6
    messages=[{"role": "user", "content": prompt}],
)

Tip Web search and web fetch tools now support dynamic filtering in both Opus 4.6 and Sonnet 4.6 — you can pass domain allow/block lists directly in the tool call for more precise retrieval.
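A hypothetical sketch of what that filtering looks like in a request. The tool version string and the `allowed_domains` parameter name are assumptions patterned on the existing web search tool shape; verify the exact values against the tool documentation:

```python
# Assumed tool shape: the version string and filter parameter name are
# patterned on the current web_search tool; check the official docs.
web_search_tool = {
    "type": "web_search_20250305",               # assumed version string
    "name": "web_search",
    "allowed_domains": ["docs.anthropic.com"],   # only retrieve from these hosts
    "max_uses": 5,                               # cap tool invocations per turn
}

request = {
    "model": "claude-sonnet-4-6",
    "max_tokens": 4096,
    "tools": [web_search_tool],
    "messages": [{"role": "user", "content": "What changed in the latest release?"}],
}
# response = client.messages.create(**request)
```

A `blocked_domains` list works the same way in reverse; the two are typically mutually exclusive.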

Sonnet-4.6 1M-context adaptive-thinking API

Claude's New Constitution — Values, Priority & AI Wellbeing

On 22 January 2026, Anthropic published a revised constitution for Claude — a detailed document that explains not just what Claude should do, but why. The previous version was a list of standalone principles; the new one is a holistic narrative used directly in model training. It is released under CC0 1.0 (public domain) so anyone can adapt it freely.

Priority hierarchy (highest to lowest)

Key shift in philosophy Rather than telling Claude what to do via rules, Anthropic wants Claude to understand why — so it can reason correctly in novel situations that no rule anticipated. Constraints come with justifications, not just mandates.

Acknowledging AI wellbeing

The constitution explicitly acknowledges uncertainty about whether Claude may have "some kind of consciousness or moral status" and states that Anthropic cares about Claude's psychological security, sense of self, and wellbeing. This is the first time such language has appeared in a production model's governing document — significant for anyone building products on top of Claude, since it shapes how the model is trained to reason about its own nature.

constitution values safety alignment ethics

Persistent Memory, Free Tier & Memory Import from Rival AIs

Two memory-related announcements landed in early March 2026. First, Claude's persistent memory feature — which lets Claude remember facts, preferences, and writing style across conversations — became available to free tier users for the first time (it previously required a paid plan). Second, Anthropic shipped a Memory Import tool that lets users bring their saved context from ChatGPT, Gemini, Perplexity, Grok, and other AI products directly into Claude.

What memory stores

Memory Import — switching costs lowered

The import tool accepts exported memory files from major AI assistants and maps them into Claude's memory format. This significantly reduces the friction of switching one's primary AI assistant, a deliberate move to attract users from competitors.

Memory tool in the API

For developers, the memory tool on the Claude Developer Platform allows agents to create, read, update, and delete persistent files between sessions — enabling just-in-time context retrieval: agents store what they learn and pull it back on demand, keeping the active context focused on what's currently relevant.
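A minimal sketch of wiring up the memory tool. The tool type string is an assumption; consult the Claude Developer Platform docs for the real values. The key design point is that Claude only issues file commands as tool calls — your code executes them against storage you control:

```python
# Assumed memory tool shape; the version string is a guess — check the
# Claude Developer Platform documentation for the real value.
memory_tool = {"type": "memory_20250818", "name": "memory"}

request = {
    "model": "claude-sonnet-4-6",
    "max_tokens": 4096,
    "tools": [memory_tool],
    "messages": [{"role": "user", "content": "Remember: deploys happen on Fridays."}],
}
# response = client.messages.create(**request)
# Claude replies with tool_use blocks containing file commands (create, read,
# update, delete); your harness runs them against local or remote storage and
# returns the results, so memory persists across sessions.
```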

Tip Pair the API memory tool with context editing (see next entry) for agents that can run indefinitely: memory handles long-term facts while context editing clears stale tool results from the active window.

memory memory-import persistence agents free-tier

Context Editing — 84% Fewer Tokens for Long-Running Agents

Anthropic's context editing feature on the Claude Developer Platform automatically removes stale tool calls and their results from the active context window when the agent is approaching its token limit. Unlike compaction (which summarises the conversation), context editing surgically excises content that is no longer needed — preserving the conversation flow while freeing up space for new work.

How it works
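A hypothetical configuration sketch: the edit type string and the `trigger`/`keep` field names are assumptions patterned on the beta context-management API, so check the platform docs before relying on them. The idea is declarative — you state when clearing should kick in and how much recent tool output to preserve:

```python
# Assumed configuration shape for context editing; field names are guesses
# patterned on the beta context-management API.
context_management = {
    "edits": [
        {
            "type": "clear_tool_uses_20250919",  # assumed edit type string
            # start clearing once the prompt crosses this size
            "trigger": {"type": "input_tokens", "value": 100_000},
            # always retain the most recent tool results
            "keep": {"type": "tool_uses", "value": 3},
        }
    ]
}

request = {
    "model": "claude-sonnet-4-5",
    "max_tokens": 4096,
    "context_management": context_management,
    "messages": [],  # long agent transcript goes here
}
# response = client.beta.messages.create(**request, betas=["context-management-2025-06-27"])
```

Cleared tool results are typically replaced with a small placeholder so the model knows something was removed, which keeps the conversation coherent.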

Measured impact

In a 100-turn web search evaluation, context editing enabled agents to complete workflows that would otherwise fail due to context exhaustion — while reducing total token consumption by 84%. Both throughput and cost improve substantially on long-horizon tasks.

Architecture pattern Use context editing for the active window and the memory tool for long-term recall. Context editing handles ephemeral tool noise; memory handles facts that must survive across many turns.

Note Context editing is currently available for Claude Sonnet 4.5 on the Developer Platform. Check the release notes for Sonnet 4.6 / Opus 4.6 availability.

context-editing agents token-efficiency developer-platform best-practices