Anthropic
Latest Claude updates
16 published updates
Higher usage limits for Claude and a compute deal with SpaceX
We’ve raised Claude's usage limits and agreed to a new compute partnership with SpaceX that will substantially increase our capacity in the near term.
Agents for financial services
We're releasing ten new Cowork and Claude Code plugins, integrations with the Microsoft 365 suite, new connectors, and an MCP app for financial services and insurance organizations.
Building a new enterprise AI services company with Blackstone, Hellman & Friedman, and Goldman Sachs
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
Claude for Creative Work
Anthropic Sydney office
Introducing The Anthropic Institute
We’re launching The Anthropic Institute, a new effort to confront the most significant challenges that powerful AI will pose to our societies.
Latest Claude videos
170 published videos
YouTube
AI on campus
AI is ubiquitous on college campuses. We sat down with students to hear what's going well, what isn't, and how students, professors, and universities alike are navigating it in real time.
Introducing Cowork: Claude Code for the rest of your work
Claude Cowork lets you hand off your most time-consuming tasks to Claude, and come back to finished outputs. Describe what you need, point Claude at your sources, and walk away. Cowork pulls from your local files, cloud tools, and the web,...
AI's limited self-knowledge
Anthropic researcher Amanda Askell discusses the self-knowledge problem that AI models face.
What is sycophancy in AI models?
Learn what AI researchers mean when they talk about sycophancy, when it's more likely to show up in conversations, and tactics you can use to steer AI towards truth.
Let Claude handle work in your browser
See Claude for Chrome handle three complete workflows in your browser: pull data from dashboards into one analysis doc, address slide comments automatically, and build with Claude Code then test in Chrome. Claude for Chrome is a browser extension that...
Claude ran a business in our office
For a large part of 2025, we ran Project Vend: an experiment where we let Claude manage a small business in the Anthropic office. We learned a lot from how close it was to success—and the curious ways that it failed—about the plausible, str...
Anthropic was founded in 2021 by former OpenAI researchers who wanted to tackle AI safety as a core engineering discipline, not an afterthought. Their Constitutional AI approach — training models to be helpful, harmless, and honest — has become an industry reference point. Claude is now one of the most widely used AI assistants, recognized for its nuanced reasoning, long context window, and lower tendency for hallucination compared to its peers.
Timeline
2021: Anthropic founded by former OpenAI researchers focused on AI safety.
2022–2023: Constitutional AI technique published; Claude 1 released.
2023: Claude 2 released with 100K context window.
2024: Claude 3 family (Haiku, Sonnet, Opus) launches; major Amazon investment.
2024: Claude 3.5 Sonnet sets new benchmark scores; MCP protocol released.
2025: Claude 4 family released; Claude Code launched as an agentic coding tool.