🧭 Anthropic Now Asks Some Users for a Government ID — Here Is What Triggers It and Why
Anthropic has begun prompting select Claude users to verify their identity using a government-issued photo ID and a real-time selfie before accessing certain capabilities. The verification is handled by Persona Identities, a third-party KYC (Know Your Customer) platform. As of this week, users on the Max subscription tier are among those receiving verification prompts, with the rollout framed around platform integrity and responsible-use compliance. The official rationale, quoted directly from Anthropic's help page: "Being responsible with powerful technology starts with knowing who is using it."
What triggers a verification request
- Accessing advanced capabilities that Anthropic has designated as requiring identity assurance
- Routine platform integrity checks triggered by usage patterns
- Safety and compliance requirements that Anthropic has not fully specified publicly
What Persona does — and does not — do with your data
- Accepted documents: passports, driver's licences, national ID cards (physical originals only; no photocopies or digital IDs)
- Data custody: Persona holds the ID images and selfies, not Anthropic directly; all data is encrypted in transit and at rest
- No training use: verified ID data will not be used for model training or shared for marketing purposes, per the help page
- Retention: Persona's standard retention policy applies; users can request deletion via Anthropic support
The privacy tension Anthropic has to navigate
A meaningful share of Claude's privacy-conscious user base switched from ChatGPT specifically because Anthropic had not required identity verification. The irony was captured bluntly by Decrypt's headline: "You Switched to Claude Over Surveillance Fears. Now It Wants Your Passport." Anthropic is walking a genuine tightrope here. On one side: the platform integrity argument is real — restricted models like Mythos and advanced agentic capabilities carry legitimate misuse risk that anonymous accounts make harder to manage. On the other: government ID requirements create a data-collection surface area and a chilling effect on legitimate privacy-preserving use cases. Developers building Claude-powered applications should factor this into user communications if your product requires a Claude subscription or API access — users who hit a verification gate mid-workflow are not going to enjoy the experience.
For context: OpenAI implemented similar ID verification measures for API developers earlier, and the broader pattern across frontier AI providers is moving in this direction as governments in the EU, UK, and US signal that identity-linked access controls may become regulatory requirements for high-capability models. Anthropic is ahead of that curve but not alone in heading toward it.
identity verification
KYC
Persona
privacy
platform integrity
responsible AI
Max plan
🧭 3-Hour Platform Outage on April 15 Took Down Claude.ai, Claude Code, and the API Simultaneously
On April 15, 2026, Claude experienced its largest publicly documented outage to date — roughly three hours of disruption affecting claude.ai, the mobile apps, Claude Code, and the API simultaneously. CNBC and TechRadar covered the event in real time. At peak, approximately 6,000 users were actively reporting issues on Downdetector. The outage was not tier-selective: both free and Pro/Max subscribers were affected, and the API showed the same errors as the consumer product.
Timeline
- 10:53 AM ET: Anthropic posts "Elevated errors on Claude.ai, API, Claude Code" on the status page
- 11:03 AM ET: First fix deployed — partial improvement reported
- 11:40 AM ET: Status worsens — "Claude.ai and Platform are down" (broader degradation than initial report)
- ~12:30 PM ET: Login rates begin stabilising; error rates begin to decline
- 1:42 PM ET: Fully resolved, all systems operational
User-reported symptoms included login failures, "temporarily busy" errors on claude.ai, incorrect usage-limit warnings (users seeing "limit reached" despite having quota remaining), and HTTP 529 (overloaded) errors from the API. Anthropic has not disclosed a root cause.
Build for outage resilience — don't treat the API as always-on
Three-hour outages at this scale are rare but not unprecedented for frontier AI providers. If your production workflow depends on Claude API availability, these defensive patterns will save you during the next incident:
- Retry with exponential backoff: HTTP 529 (overloaded) is retryable; implement 1s → 2s → 4s → 8s backoff with jitter. The Anthropic Python and JS SDKs do this automatically when you set max_retries.
- Async request queuing: for non-real-time workflows (batch classification, document processing), push tasks to a queue and process asynchronously so a 15-minute API gap doesn't become a customer-visible failure.
- Graceful degradation: identify which parts of your product require Claude in the critical path vs. which can fail silently or show a "try again" message. Not everything needs Claude in real time.
- Status page subscription: subscribe to status.claude.com for email/webhook incident notifications so your on-call team gets ahead of user complaints.
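The retry pattern above can be sketched in a few lines of Python. This is a minimal illustration, not the SDK's actual implementation — in practice you would simply set max_retries on the Anthropic client. Here, make_request is a hypothetical stand-in for whatever HTTP call your application makes:

```python
import random
import time

# Status codes worth retrying; 529 is Anthropic's "overloaded" error
RETRYABLE_STATUSES = {429, 500, 529}

def call_with_backoff(make_request, max_retries=4, base_delay=1.0):
    """Retry make_request() on retryable statuses with exponential backoff + jitter.

    make_request is assumed to return a (status_code, body) tuple.
    """
    for attempt in range(max_retries + 1):
        status, body = make_request()
        if status not in RETRYABLE_STATUSES:
            return status, body
        if attempt == max_retries:
            break  # retries exhausted; surface the last error to the caller
        delay = base_delay * (2 ** attempt)       # 1s -> 2s -> 4s -> 8s
        delay += random.uniform(0, delay * 0.25)  # jitter avoids thundering herd
        time.sleep(delay)
    return status, body
```

The jitter matters: if thousands of clients retry on the same fixed schedule after an outage, the synchronized retry wave can itself prolong the incident.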
The outage is notable in context: it occurred the same week Anthropic launched Claude Code Routines (the cloud-scheduled automation feature), expanded Cowork, and published the IPO valuation story. Platform reliability at the scale Anthropic is now operating — $30B+ annualised revenue, enterprise customers with SLA expectations — is a qualitatively different operational challenge than it was 18 months ago. The CoreWeave and Google TPU capacity additions are partly a hedge against exactly this kind of load-related degradation, but the pace of demand growth is keeping the pressure on.
outage
reliability
API resilience
platform status
Claude Code
production engineering
HTTP 529
🧭 Anthropic Fellows Program 2026 Is Open — $50K Stipend, 10 Days to Apply for the July Cohort
Anthropic has opened applications for two rounds of its Anthropic Fellows Program — a four-month paid AI safety research fellowship targeting people who want to do serious safety research but are not currently at a frontier AI lab. The May 2026 cohort is already underway; the July 2026 cohort deadline is April 26, 2026 — ten days away. Applications are via Greenhouse (linked from the alignment blog).
What you get
- Stipend: $3,850 USD/week (~$50,000 for the four-month term)
- Compute: approximately $15,000/month in compute funding — enough to run serious experiments against current-generation models
- Mentorship: Anthropic researchers pitch project ideas; fellows choose and co-design their project, then work with a dedicated researcher mentor throughout the term
- Location: San Francisco; relocation support available
Research areas open for July 2026
Anthropic has explicitly expanded the scope from the inaugural cohort. Current target areas include:
- Scalable oversight and weak-to-strong generalisation
- Adversarial robustness and jailbreak dynamics
- AI control and corrigibility under distribution shift
- Model organisms of misalignment
- Mechanistic interpretability (circuit-level analysis of frontier models)
- AI security — novel attack vectors on deployed model systems
- Model welfare and consciousness research
Who should apply
No PhD or prior ML publications are required. Past fellows came from physics, mathematics, CS, and cybersecurity. The filter is demonstrated research ability — a strong coding background, prior experience running experiments, and evidence of independent thinking on hard problems. The cohort is small (typically 10–15 fellows) and competitive.
Why Anthropic runs this programme — and what it signals
The Fellows Program is an explicit acknowledgement that AI safety talent cannot be built fast enough through traditional hiring. Anthropic is trying to compress the gap by funding a cohort of externally trained researchers, giving them access to frontier models, and converting the best into full-time hires: 40%+ of first-cohort fellows subsequently joined Anthropic, and 80%+ produced published papers during their term. The research output feeds directly back into Claude's development — papers from earlier cohorts have been cited in Anthropic's responsible scaling policy updates and by the team studying functional emotion concepts in Claude's activations. If you are in a position to apply, or know someone who is, the April 26 deadline is real.
Fellows Program
AI safety
research fellowship
alignment
mechanistic interpretability
model welfare
career