Worth checking before choosing or changing a subscription.
OpenAI o3-mini
Title: OpenAI o3-mini
Quick editorial signal
- Track this as an OpenAI update, not just a standalone headline.
- Check plan details before changing subscriptions or advising a team.
- Likely worth revisiting after people have used the release in practice.
January 31, 2025
Release
OpenAI o3‑mini
Pushing the frontier of cost-effective reasoning.
We’re releasing OpenAI o3‑mini, the newest, most cost-efficient model in our reasoning series, available in both ChatGPT and the API today. Previewed in December 2024, this powerful and fast model advances the boundaries of what small models can achieve, delivering exceptional STEM capabilities—with particular strength in science, math, and coding—all while maintaining the low cost and reduced latency of OpenAI o1‑mini.
OpenAI o3‑mini is our first small reasoning model that supports highly requested developer features including function calling, Structured Outputs, and developer messages, making it production-ready out of the gate. Like OpenAI o1‑mini and OpenAI o1‑preview, o3‑mini will support streaming. Also, developers can choose between three reasoning effort options—low, medium, and high—to optimize for their specific use cases. This flexibility allows o3‑mini to “think harder” when tackling complex challenges or prioritize speed when latency is a concern. o3‑mini does not support vision capabilities, so developers should continue using OpenAI o1 for visual reasoning tasks. o3‑mini is rolling out in the Chat Completions API, Assistants API, and Batch API starting today to select developers in API usage tiers 3-5.
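To make those options concrete, below is a minimal sketch of a Chat Completions request that combines a developer message, a reasoning_effort setting, and a Structured Outputs schema. It assumes the openai Python SDK with OPENAI_API_KEY set in the environment; the prompt and the JSON schema are illustrative placeholders, not examples from the announcement, and exact parameter support may vary by SDK version.

```python
# Sketch: calling o3-mini via the Chat Completions API with a developer
# message, a reasoning_effort setting, and a Structured Outputs schema.
# Assumes the openai Python SDK and OPENAI_API_KEY in the environment;
# the prompt and schema are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="medium",  # "low", "medium", or "high"
    messages=[
        {"role": "developer", "content": "You are a concise math tutor. Reply in JSON."},
        {"role": "user", "content": "Factor x^2 - 5x + 6 and list the roots."},
    ],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "factoring_result",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "factors": {"type": "array", "items": {"type": "string"}},
                    "roots": {"type": "array", "items": {"type": "number"}},
                },
                "required": ["factors", "roots"],
                "additionalProperties": False,
            },
        },
    },
)

print(response.choices[0].message.content)
```

Function calling would plug into the same request through the tools parameter, and streaming by passing stream=True; dropping reasoning_effort to "low" trades some depth for lower latency, mirroring the “think harder” behavior described above.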
ChatGPT Plus, Team, and Pro users can access OpenAI o3‑mini starting today, with Enterprise access coming in February. o3‑mini will replace OpenAI o1‑mini in the model picker, offering higher rate limits and lower latency, making it a compelling choice for coding, STEM, and logical problem-solving tasks. As part of this upgrade, we’re tripling the rate limit for Plus and Team users from 50 messages per day with o1‑mini to 150 messages per day with o3‑mini. Additionally, o3‑mini now works with search to find up-to-date answers with links to relevant web sources. This is an early prototype as we work to integrate search across our reasoning models.
Starting today, free plan users can also try OpenAI o3‑mini by selecting ‘Reason’ in the message composer or by regenerating a response. This marks the first time a reasoning model has been made available to free users in ChatGPT.
While OpenAI o1 remains our broader general knowledge reasoning model, OpenAI o3‑mini provides a specialized alternative for technical domains requiring precision and speed. In ChatGPT, o3‑mini uses medium reasoning effort to provide a balanced trade-off between speed and accuracy. All paid users will also have the option of selecting o3‑mini‑high in the model picker for a higher-intelligence version that takes a little longer to generate responses. Pro users will have unlimited access to both o3‑mini and o3‑mini‑high.
Fast, powerful, and optimized for STEM reasoning
Similar to its OpenAI o1 predecessor, OpenAI o3‑mini has been optimized for STEM reasoning. o3‑mini with medium reasoning effort matches o1’s performance in math, coding, and science, while delivering faster responses. Evaluations by expert testers showed that o3‑mini produces more accurate and clearer answers, with stronger reasoning abilities, than OpenAI o1‑mini. Testers preferred o3‑mini's responses to o1‑mini 56% of the time and observed a 39% reduction in major errors on difficult real-world questions. With medium reasoning effort, o3‑mini matches the performance of o1 on some of the most challenging reasoning and intelligence evaluations including AIME and GPQA.
Competition Math (AIME 2024)
_Mathematics: With low reasoning effort, OpenAI o3‑mini achieves comparable performance with OpenAI o1‑mini, while with medium effort, o3‑mini achieves comparable performance with o1. Meanwhile, with high reasoning effort, o3‑mini outperforms both OpenAI o1‑mini and OpenAI o1, where the gray shaded regions show the performance of majority vote (consensus) with 64 samples._
PhD-level Science Questions (GPQA Diamond)
_PhD-level science: On PhD-level biology, chemistry, and physics questions, with low reasoning effort, OpenAI o3‑mini achieves performance above OpenAI o1‑mini. With high effort, o3‑mini achieves comparable performance with o1._
FrontierMath
_Research-level mathematics: OpenAI o3‑mini with high reasoning effort performs better than its predecessor on FrontierMath. On FrontierMath, when prompted to use a Python tool, o3‑mini with high reasoning effort solves over 32% of problems on the first attempt, including more than 28% of the challenging (T3) problems. These numbers are provisional, and the chart above shows performance without tools or a calculator._
Competition Code (Codeforces)
_Competition coding: On Codeforces competitive programming, OpenAI o3‑mini achieves progressively higher Elo scores with increased reasoning effort, all outperforming o1‑mini. With medium reasoning effort, it matches o1’s performance._
Software Engineering (SWE-bench Verified (n=477))
_Software engineering: o3‑mini is our highest performing released model on SWE-bench Verified. For additional datapoints on SWE-bench Verified results with high reasoning effort, including with the open-source Agentless scaffold (39%) and an internal tools scaffold representing maximum capability elicitation (61%), see our system card as the source of truth. All SWE-bench evaluation runs use a fixed subset of n=477 verified tasks which have been validated on our internal infrastructure._
LiveBench Coding
_LiveBench coding: OpenAI o3‑mini surpasses o1‑high even at medium reasoning effort, highlighting its efficiency in coding tasks. At high reasoning effort, o3‑mini further extends its lead, achieving significantly stronger performance across key metrics._
General knowledge
_General knowledge: o3‑mini outperforms o1‑mini in knowledge evaluations across general knowledge domains._
Human Preference Evaluation
_Human preference evaluation: Evaluations by external expert testers also show that OpenAI o3‑mini produces more accurate and clearer answers, with stronger reasoning abilities than OpenAI o1‑mini, especially for STEM. Testers preferred o3‑mini's responses to o1‑mini 56% of the time and observed a 39% reduction in major errors on difficult real-world questions._
Model speed and performance
With intelligence comparable to OpenAI o1, OpenAI o3‑mini delivers faster performance and improved efficiency. Beyond the STEM evaluations highlighted above, o3‑mini demonstrates superior results in additional math and factuality evaluations with medium reasoning effort. In A/B testing, o3‑mini delivered responses 24% faster than o1‑mini, with an average response time of 7.7 seconds compared to 10.16 seconds.
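As a quick sanity check on those figures (an illustrative calculation, not an additional benchmark), the quoted 24% follows directly from the two average response times:

```python
# Recomputing the quoted speedup from the two average response times.
o1_mini_avg = 10.16  # seconds, o1-mini average response time
o3_mini_avg = 7.7    # seconds, o3-mini (medium) average response time

speedup = (o1_mini_avg - o3_mini_avg) / o1_mini_avg
print(f"o3-mini is about {speedup:.0%} faster")  # prints ~24%
```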
Latency comparison between o1-mini and o3-mini (medium)
_Latency: o3‑mini’s average time to first token is 2500 ms faster than o1‑mini’s._