Claude
Anthropic
Anthropic was founded in 2021 by former OpenAI researchers who wanted to tackle AI safety as a core engineering discipline, not an afterthought.
Latest Claude updates
16 published updates
Higher usage limits for Claude and a compute deal with SpaceX
We've raised Claude's usage limits and signed a new compute partnership with SpaceX that will substantially increase our capacity in the near term.
Agents for financial services
We're releasing ten new Cowork and Claude Code plugins, integrations with the Microsoft 365 suite, new connectors, and an MCP app for financial services and insurance organizations.
Building a new enterprise AI services company with Blackstone, Hellman & Friedman, and Goldman Sachs
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
Claude for Creative Work
Anthropic Sydney office
Introducing The Anthropic Institute
We’re launching The Anthropic Institute, a new effort to confront the most significant challenges that powerful AI will pose to our societies.
Latest Claude videos
170 published videos
YouTube
Lessons on AI agents from Claude Plays Pokémon
Alex Albert (Claude Relations) and David Hershey (Applied AI) explore the story behind Claude Plays Pokémon—an experiment that demonstrates how AI agents navigate complex tasks. They discuss how Pokémon's turn-based gameplay provides an i...
Could AI models be conscious?
As we build AI systems, and as they begin to approximate or surpass many human qualities, another question arises. Should we also be concerned about the potential consciousness, agency, and experiences of the models themselves? Should we be...
Introducing Claude for Education
We're partnering with universities to bring AI to higher education, alongside a new learning mode for students. Claude for Education is now available for all Pro users with an .edu email. To learn more or to speak with our education team:...
Tracing the thoughts of a large language model
AI models are trained and not directly programmed, so we don’t understand how they do most of the things they do. Our new interpretability methods allow us to trace their (often complex and surprising) thinking. With two new papers, Anthro...
How Intercom is redefining customer support with Claude
Intercom's AI assistant Fin, powered by Claude, hit eight figures in its first year. Hear from Fergal Reid, VP of AI at Intercom, on how AI is changing customer service, why Fin is integral to their business, and why they chose to partner wit...
Controlling powerful AI
Anthropic researchers Ethan Perez, Joe Benton, and Akbir Khan discuss AI control—an approach to managing the risks of advanced AI systems. They discuss real-world evaluations showing how humans struggle to detect deceptive AI, the three maj...
Anthropic was founded in 2021 by former OpenAI researchers who wanted to tackle AI safety as a core engineering discipline, not an afterthought. Their Constitutional AI approach — training models to be helpful, harmless, and honest — has become an industry reference point. Claude is now one of the most widely used AI assistants, recognized for its nuanced reasoning, long context window, and lower tendency for hallucination compared to its peers.
Timeline
2021: Anthropic founded by former OpenAI researchers focused on AI safety.
2023: Claude 1 released; Constitutional AI technique published.
2023: Claude 2 released with 100K context window.
2024: Claude 3 family (Haiku, Sonnet, Opus) launches; major Amazon investment.
2024: Claude 3.5 Sonnet sets new benchmark scores; MCP protocol released.
2025: Claude 4 family released; Claude Code launched as an agentic coding tool.