🧭 Claude for Word: How Tracked Changes, Comment Replies & Document Scanning Actually Work
The April 11 launch announcement positioned Claude for Word as a tool for legal professionals reviewing contracts. But a closer look at Anthropic's official help documentation and gHacks' April 14 feature walkthrough reveals a substantially more capable integration than the headline suggested. Three specific mechanics are worth understanding in depth if you plan to use it on real documents.
1. Tracked changes mode — the full edit loop
When you activate suggested edits mode in the Claude sidebar, every change Claude proposes appears as a native Word tracked revision: the original text is struck through as a deletion, and the new text appears as an insertion, all visible in Word's Review pane. This means the entire change set is reviewable using Word's existing Accept / Reject workflow — your team does not need to learn any new interface. More usefully, when a counterparty returns a document with their own tracked changes, you can ask Claude to read and summarise what they changed. You can prompt it to group changes by severity or flag the ones worth pushing back on — turning counterparty markup review from a 30-minute skim into a structured negotiation brief.
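Under the hood, these suggestions are ordinary OOXML revision marks, which is why Word's Accept / Reject machinery works on them unchanged. As a rough illustration (the XML fragment and author name below are fabricated, and this is not Anthropic's code), tracked insertions and deletions live in `word/document.xml` as `w:ins` and `w:del` elements that any tool can enumerate:

```python
import xml.etree.ElementTree as ET

W = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"

# Fabricated fragment of word/document.xml: one tracked deletion + insertion.
DOCUMENT_XML = f"""
<w:document xmlns:w="{W}">
  <w:body>
    <w:p>
      <w:del w:author="Claude" w:date="2026-04-11T09:00:00Z">
        <w:r><w:delText>Licensor</w:delText></w:r>
      </w:del>
      <w:ins w:author="Claude" w:date="2026-04-11T09:00:00Z">
        <w:r><w:t>Vendor</w:t></w:r>
      </w:ins>
    </w:p>
  </w:body>
</w:document>
"""

def list_revisions(xml_text):
    """Return (kind, author, text) for each tracked change in the XML."""
    root = ET.fromstring(xml_text)
    out = []
    for ins in root.iter(f"{{{W}}}ins"):
        text = "".join(t.text or "" for t in ins.iter(f"{{{W}}}t"))
        out.append(("insert", ins.get(f"{{{W}}}author"), text))
    for dele in root.iter(f"{{{W}}}del"):
        text = "".join(t.text or "" for t in dele.iter(f"{{{W}}}delText"))
        out.append(("delete", dele.get(f"{{{W}}}author"), text))
    return out

print(list_revisions(DOCUMENT_XML))
# [('insert', 'Claude', 'Vendor'), ('delete', 'Claude', 'Licensor')]
```

Because the revisions are standard markup rather than a proprietary overlay, a counterparty's tracked changes arrive in exactly the same form, which is what makes the "summarise their markup" prompt possible.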
2. Comment replies — contextual, not global
Claude's comment responses are threaded directly within Word's native comment system, not inserted as a separate document or sidebar note. Critically, each response is anchored to the specific passage it relates to: if you ask Claude to explain a clause in paragraph 12, the response appears as a reply to a comment on paragraph 12, not at the end of the document. This contextual threading matters for multi-reviewer workflows — other collaborators can see exactly which text triggered the AI note without hunting for context.
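That anchoring is also plain OOXML: comment ranges are delimited in `word/document.xml` by `w:commentRangeStart` / `w:commentRangeEnd` markers keyed by a comment id, with the comment bodies stored separately in `word/comments.xml`. A minimal sketch (fabricated fragment, illustrative only) of recovering which passage a comment id is anchored to:

```python
import xml.etree.ElementTree as ET

W = "http://schemas.openxmlformats.org/wordprocessingml/2006/main"

# Fabricated fragment of word/document.xml: comment id 1 anchored to one clause.
DOCUMENT_XML = f"""
<w:document xmlns:w="{W}">
  <w:body>
    <w:p>
      <w:r><w:t>12. Indemnity. </w:t></w:r>
      <w:commentRangeStart w:id="1"/>
      <w:r><w:t>Licensee shall indemnify Licensor against third-party claims.</w:t></w:r>
      <w:commentRangeEnd w:id="1"/>
    </w:p>
  </w:body>
</w:document>
"""

def anchored_spans(xml_text):
    """Map each comment id to the document text its range covers."""
    root = ET.fromstring(xml_text)
    active, spans = set(), {}
    for el in root.iter():  # preorder traversal matches document order here
        if el.tag == f"{{{W}}}commentRangeStart":
            cid = el.get(f"{{{W}}}id")
            active.add(cid)
            spans.setdefault(cid, "")
        elif el.tag == f"{{{W}}}commentRangeEnd":
            active.discard(el.get(f"{{{W}}}id"))
        elif el.tag == f"{{{W}}}t":
            for cid in active:  # text inside an open range belongs to it
                spans[cid] += el.text or ""
    return spans

print(anchored_spans(DOCUMENT_XML))
# {'1': 'Licensee shall indemnify Licensor against third-party claims.'}
```

Note that the run before `commentRangeStart` ("12. Indemnity. ") is correctly excluded: the anchor covers only the marked span, which is why collaborators see precisely which text triggered the note.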
3. Document scanning — structural integrity checking
The document scan feature reads the full document for structural issues rather than just answering questions about selected text. It flags: inconsistent defined terms (e.g., "Licensor" used in some sections and "Vendor" in others), broken cross-references (e.g., "see Section 4.3" where Section 4.3 was renumbered), and multi-level legal numbering that has drifted out of sync in longer documents. When an issue is found, Claude flags it and suggests a fix — but does not make automatic changes. The user must accept each change individually, preserving the audit trail.
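The first two checks in that list are mechanical enough to sketch. The following toy approximation (regex heuristics and a hardcoded synonym pair; not Anthropic's implementation) shows how defined-term drift and dangling cross-references can be detected in principle:

```python
import re

def scan(text, synonyms=(("Licensor", "Vendor"),)):
    """Toy structural scan: flag issues, never rewrite the text."""
    issues = []
    # Check 1: inconsistent defined terms, i.e. both names of a pair in use.
    for a, b in synonyms:
        if re.search(rf"\b{a}\b", text) and re.search(rf"\b{b}\b", text):
            issues.append(f"defined-term drift: both '{a}' and '{b}' appear")
    # Check 2: cross-references pointing at section numbers with no heading.
    headings = set(re.findall(r"^Section (\d+(?:\.\d+)*)", text, re.M))
    for ref in re.findall(r"see Section (\d+(?:\.\d+)*)", text):
        if ref not in headings:
            issues.append(f"broken cross-reference: Section {ref} not found")
    return issues

DRAFT = """Section 4.2 Payment
The Vendor shall invoice monthly; the Licensor may audit its records (see Section 4.3).
"""
print(scan(DRAFT))
# flags both the Licensor/Vendor drift and the dangling Section 4.3 reference
```

Like the real feature, the sketch only reports findings; applying any fix remains a separate, explicit step.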
Workflow tip: scan before you send, not after you receive
The document scan is most valuable when run on your own draft before you send it to the counterparty, not after you receive theirs. Internal cross-reference drift and inconsistent defined-term usage are typically introduced by the author, not the reviewer. Running a scan pass as a pre-send checklist on long contracts or policy documents can surface a class of error that neither the author nor a standard proofreader will reliably catch. Budget roughly 30 seconds of Claude processing per 20 pages for the scan on a typical legal document at current API speeds.

One constraint worth noting: the integration currently works only in Word for Mac and Windows desktop — not in Word on the web. If your organisation is standardised on browser-based Office 365, check with your IT administrator whether the desktop app is available on your licence before planning any workflow changes.
Claude for Word
tracked changes
document scanning
legal tech
Microsoft Word
enterprise
contract review
🧭 Anthropic Weighs Designing Its Own AI Chips as Claude Compute Demand Surges
Reuters reported on April 10, confirmed the same day by CNBC, that Anthropic is in early-stage exploration of designing its own custom AI chips. The plans are preliminary — the company has not committed to a specific design and has not yet assembled a dedicated chip engineering team — but the exploration is serious enough to be described by multiple sources familiar with the matter as an active strategic discussion at the executive level.
The trigger is straightforward: Claude's annualised revenue run rate has crossed $30 billion, up from roughly $9 billion at the end of 2025, and demand for inference compute has accelerated faster than existing supply agreements can easily accommodate. Anthropic currently runs Claude on Google TPUs (via its multi-billion-dollar Google partnership), Amazon Inferentia chips (through AWS), and the newly announced Broadcom accelerator deal. Each of these is external silicon, where Anthropic is a buyer rather than a designer — which constrains both cost optimisation and the ability to tailor hardware to Claude's specific model architecture.
Why custom silicon makes strategic sense — and why it is hard
- Inference efficiency gains. Models as large as Claude Opus 4.6 have specific memory bandwidth and precision requirements that general-purpose accelerators are not optimised for. A custom ASIC designed around Claude's architecture could deliver 2–4× better tokens-per-watt efficiency, which at $30B+ revenue scale translates to hundreds of millions of dollars in annual compute cost reduction.
- Supply chain independence. Dependency on Google, Amazon, and Broadcom for all compute creates a strategic vulnerability. Custom silicon — even if it supplements rather than replaces third-party chips — gives Anthropic negotiating leverage and a backstop against supply shocks.
- The cost barrier is real. Industry estimates put the design cost for a single advanced AI chip at around $500 million, with a 2–3 year development timeline before tape-out. Anthropic would need to raise or allocate capital specifically for this effort, and the project would require a chip engineering team the company does not currently have.
- Meta, OpenAI, and Microsoft are all doing this. Meta's MTIA chip, OpenAI's reported chip efforts with Broadcom, and Microsoft's Maia ASIC all reflect the same calculus: at frontier scale, silicon strategy is as important as model strategy.
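The efficiency bullet above can be made concrete with a hedged back-of-envelope calculation. Every input below is an illustrative assumption, not a reported figure; the point is only to show how a 2–4× tokens-per-watt gain could plausibly land in the hundreds of millions per year:

```python
# All inputs are assumptions chosen for illustration only.
annual_inference_cost = 5e9   # assumed inference compute spend, $/year
power_share = 0.15            # assumed fraction of that spend driven by power
efficiency_gain = 3.0         # midpoint of the 2-4x tokens-per-watt range

# A 3x tokens-per-watt gain cuts the power-driven portion to one third.
savings = annual_inference_cost * power_share * (1 - 1 / efficiency_gain)
print(f"~${savings / 1e6:.0f}M/year under these assumptions")
# ~$500M/year under these assumptions
```

Swap in different assumptions and the figure moves accordingly; the qualitative conclusion — that efficiency gains at this revenue scale are worth nine-figure sums — is robust across a wide range of plausible inputs.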
What this means for API users
Custom silicon, if it proceeds, would primarily benefit Anthropic's cost structure and supply reliability — not change the API surface area or model capabilities in the near term. The more immediate relevance for developers is as a signal about Anthropic's confidence in sustained, large-scale Claude inference demand. Companies building long-term infrastructure on the Claude API can read this as Anthropic making a multi-year commitment to supply reliability rather than depending indefinitely on third-party chip availability. It reduces (though does not eliminate) the risk of capacity-constrained rate limits during periods of peak demand.
custom silicon
AI chips
infrastructure
compute strategy
Anthropic growth
supply chain
ASIC