Anthropic
Latest Claude updates
16 published updates
Higher usage limits for Claude and a compute deal with SpaceX
We’ve raised Claude's usage limits and agreed to a new compute partnership with SpaceX that will substantially increase our capacity in the near term.
Agents for financial services
We're releasing ten new Cowork and Claude Code plugins, integrations with the Microsoft 365 suite, new connectors, and an MCP app for financial services and insurance organizations.
Building a new enterprise AI services company with Blackstone, Hellman & Friedman, and Goldman Sachs
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
Claude for Creative Work
Anthropic Sydney office
Introducing The Anthropic Institute
We’re launching The Anthropic Institute, a new effort to confront the most significant challenges that powerful AI will pose to our societies.
Latest Claude videos
170 published videos
YouTube
Claude 3.5 Sonnet for vision
Claude 3.5 Sonnet is our strongest vision model. Improvements are most noticeable in tasks requiring visual reasoning, like interpreting charts, graphs, or transcribing text from imperfect images. Find out more: https://anthropic.com/news...
Claude 3.5 Sonnet for sparking creativity
Claude 3.5 Sonnet can be used with Artifacts—a new feature that expands how users can interact with Claude. You can ask Claude to generate docs, code, mermaid diagrams, vector graphics, or even simple games. Artifacts appear next to your ch...
Scaling interpretability
Science and engineering are inseparable. Our researchers reflect on the close relationship between scientific and engineering progress, and discuss the technical challenges they encountered in scaling our interpretability research to much l...
What should an AI's personality be?
How do you imbue character in an AI assistant? What does that even mean? And why would you do it in the first place? In this conversation, Stuart Ritchie (Research Communications at Anthropic) speaks to Amanda Askell (Alignment Finetuning...
What is interpretability?
A surprising fact about modern large language models is that nobody really knows how they work internally. At Anthropic, the Interpretability team strives to change that — to understand these models to better plan for a future of safe AI....
Claude is now available in Europe
Claude is now available in Europe. Try it out: http://claude.ai Download the iOS app: https://apps.apple.com/us/app/claude/id6473753684 Find out more: https://www.anthropic.com/news/claude-europe
Anthropic was founded in 2021 by former OpenAI researchers who wanted to tackle AI safety as a core engineering discipline, not an afterthought. Their Constitutional AI approach — training models to be helpful, harmless, and honest — has become an industry reference point. Claude is now one of the most widely used AI assistants, recognized for its nuanced reasoning, long context window, and lower tendency for hallucination compared to its peers.
Timeline
2021: Anthropic founded by former OpenAI researchers focused on AI safety.
2023: Claude 1 released, building on the Constitutional AI technique published in late 2022.
2023: Claude 2 released with a 100K-token context window.
2024: Claude 3 family (Haiku, Sonnet, Opus) launched; major Amazon investment.
2024: Claude 3.5 Sonnet set new benchmark scores; Model Context Protocol (MCP) released.
2025: Claude 4 family released; Claude Code launched as an agentic coding tool.