Three New Claude Managed Agents Features: Dreaming, Outcomes, and Multi-Agent Orchestration
Anthropic announced three major new features for Claude Managed Agents on May 7, the day after the Code with Claude 2026 keynote in San Francisco. The features were flagged during the keynote but shipped with technical documentation the following day. Together, they address three persistent gaps in production agent deployments: agents that don't learn from experience, agents that can't self-evaluate quality, and agents that can't scale work across parallel specialised processes.
Dreaming — agents that improve over time
Dreaming is a scheduled background process that runs on an agent's session history and memory store. On a configurable schedule (daily by default), a "dreaming session" reviews past agent sessions, extracts patterns, curates memories, and updates the agent's long-term knowledge base — surfacing what worked, what failed, and what context is worth retaining for future sessions. The term echoes sleep-based memory consolidation in neuroscience, where experiences are processed and integrated into long-term memory during low-activity periods. Dreaming is in research preview and must be explicitly enabled per agent; it is not on by default for existing Managed Agents deployments.
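Anthropic has not published the Dreaming internals, so the sketch below is purely illustrative: a minimal consolidation pass over invented session records, tallying which tactics recurred and folding only repeated patterns into a long-term memory list. The `sessions` shape, the `tactic`/`succeeded` fields, and the retention threshold are all assumptions made for clarity, not the product's actual data model.

```python
from collections import Counter

def dream(sessions, memory, min_occurrences=2):
    """One 'dreaming' pass: review past sessions, extract recurring
    patterns, and fold them into the agent's long-term memory."""
    outcomes = Counter()
    for session in sessions:
        # Tally which tactics succeeded or failed across sessions.
        outcomes[(session["tactic"], session["succeeded"])] += 1
    for (tactic, succeeded), count in outcomes.items():
        # Retain only patterns seen more than once, to curate rather
        # than hoard memories.
        if count >= min_occurrences:
            verdict = "worked" if succeeded else "failed"
            memory.append(f"Tactic '{tactic}' {verdict} in {count} sessions")
    return memory

sessions = [
    {"tactic": "batch-lookup", "succeeded": True},
    {"tactic": "batch-lookup", "succeeded": True},
    {"tactic": "row-by-row", "succeeded": False},
]
memory = dream(sessions, [])
```

The key design point the real feature shares with this toy version is that consolidation happens offline, on a schedule, rather than inside live sessions where it would add latency.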
In practice, a Managed Agent that handles your monthly KYC review process will gradually develop a richer internal model of your specific data patterns, common exception types, and preferred output formats — without your having to manually update its system prompt. The quality improvement is incremental but compounding. Enable Dreaming on lower-stakes agents first to observe what it retains before activating it on critical workflows.
Outcomes — graded self-improvement
Outcomes lets you define success criteria for a Managed Agent task, then uses a separate grading agent to evaluate completed outputs against those criteria. When the grader finds that an output falls short, the task is re-run with the grader's failure analysis added as context. Anthropic's internal benchmarks show a 10.1% improvement in PowerPoint generation quality (on an internal OfficeQA benchmark) after three Outcomes cycles — a meaningful gain for document-heavy workflows where output format and data accuracy both matter.
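The grade-and-retry loop described above can be sketched in a few lines. This is not the Outcomes API — `run_task` and `grade` are stubs standing in for real worker and grader agent calls, and the criteria format is invented — but the control flow is the pattern: produce, grade, and retry with the failure analysis appended as context.

```python
def run_task(prompt, feedback=None):
    # Stub worker agent: improves once it sees grader feedback.
    return "draft with citations" if feedback else "draft"

def grade(output, criteria):
    # Stub grader agent: returns pass/fail plus a failure analysis.
    missing = [c for c in criteria if c not in output]
    if not missing:
        return True, None
    return False, "missing: " + ", ".join(missing)

def run_with_outcomes(prompt, criteria, max_cycles=3):
    feedback = None
    for _ in range(max_cycles):
        output = run_task(prompt, feedback)
        passed, feedback = grade(output, criteria)
        if passed:
            return output
    return output  # best effort after max_cycles

result = run_with_outcomes("summarise Q3", criteria=["citations"])
```

Note that the grader is a separate agent from the worker, which is what keeps the evaluation honest: the worker never grades its own homework.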
Multi-Agent Orchestration — parallel specialist delegation
Multi-Agent Orchestration lets a lead Managed Agent decompose a complex task and delegate each component to a specialist agent with its own model, system prompt, tool set, and memory store. Specialists run in parallel on a shared filesystem, with the lead agent coordinating outputs and resolving conflicts. This mirrors the sub-agent patterns used by Anthropic's own engineering teams (see May 3 entry), but as a managed cloud service rather than a local CLI workflow.
Example workflow: a lead agent receives a request to "prepare a competitive analysis of three companies in our sector." It delegates company research to three parallel specialist agents (one per company), waits for completion, then synthesises the outputs into a unified report — all without human intervention.
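The fan-out/fan-in shape of that workflow can be shown in plain Python. This sketch is not the Managed Agents orchestration API: `research_company` is a stub for a specialist agent call (each of which would really run with its own model, system prompt, tool set, and memory store), and a thread pool stands in for the managed parallel execution.

```python
from concurrent.futures import ThreadPoolExecutor

def research_company(name):
    # Stub specialist: a real one would be a full agent session.
    return f"{name}: findings"

def competitive_analysis(companies):
    # Lead agent: delegate one specialist per company in parallel,
    # then synthesise the outputs into a unified report.
    with ThreadPoolExecutor(max_workers=len(companies)) as pool:
        findings = list(pool.map(research_company, companies))
    return "Competitive analysis\n" + "\n".join(findings)

report = competitive_analysis(["AcmeCo", "BetaCorp", "GammaInc"])
```

The shared filesystem mentioned above plays the role that the return values play here: it is the channel through which specialists hand their outputs back to the lead agent for synthesis and conflict resolution.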