Claude
Anthropic
Anthropic was founded in 2021 by former OpenAI researchers who wanted to tackle AI safety as a core engineering discipline, not an afterthought.
Latest Claude updates
16 published updates
Higher usage limits for Claude and a compute deal with SpaceX
We've raised Claude's usage limits and agreed to a new compute partnership with SpaceX that will substantially increase our capacity in the near term.
Agents for financial services
We're releasing ten new Cowork and Claude Code plugins, integrations with the Microsoft 365 suite, new connectors, and an MCP app for financial services and insurance organizations.
Building a new enterprise AI services company with Blackstone, Hellman & Friedman, and Goldman Sachs
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
Claude for Creative Work
Anthropic Sydney office
Introducing The Anthropic Institute
We’re launching The Anthropic Institute, a new effort to confront the most significant challenges that powerful AI will pose to our societies.
Latest Claude videos
170 published videos
YouTube
Binti helps social workers license foster families faster with Claude
Binti is transforming child welfare by helping social workers license foster and adoptive families faster. With 400,000 children in U.S. foster care, Binti integrated Claude to reduce paperwork from weeks to hours.
What does AI mean for education?
How is AI affecting education? At Anthropic, we often talk about “holding light and shade”: taking seriously both the benefits and the risks of the AI systems we’re building. In education, that trade-off is especially acute.
What does it take to be an AI whisperer?
What does it take to truly understand how AI models think? Anthropic researcher Amanda Askell shares what it means to be an “LLM whisperer.”
Why we built—and donated—the Model Context Protocol (MCP)
Anthropic's Stuart Ritchie speaks with co-creator David Soria Parra about the development of the Model Context Protocol (MCP), an open standard to connect AI to external tools and services—and why Anthropic is donating it to the Linux Foundation.
Getting started with connectors in Claude.ai
Learn how to supercharge Claude by connecting the tools you already use. This video shows you how to set up connectors that give Claude access to your files, apps, and workflows.
Why is a philosopher working in AI?
Amanda Askell explains what a philosopher is doing at Anthropic.
Anthropic was founded in 2021 by former OpenAI researchers who wanted to tackle AI safety as a core engineering discipline, not an afterthought. Their Constitutional AI approach — training models to be helpful, harmless, and honest — has become an industry reference point. Claude is now one of the most widely used AI assistants, recognized for its nuanced reasoning, long context window, and lower tendency to hallucinate than its peers.
Timeline
Anthropic founded by former OpenAI researchers focused on AI safety.
Claude 1 released; Constitutional AI technique published.
Claude 2 released with 100K context window.
Claude 3 family (Haiku, Sonnet, Opus) launches; major Amazon investment.
Claude 3.5 Sonnet sets new benchmark scores; MCP protocol released.
Claude 4 family released; Claude Code launched as an agentic coding tool.