OpenAI · Article · 31 January 2025

OpenAI o3-mini System Card

Specific areas of risk

Article details

AI maker: OpenAI
Type: Article
Published: 31 January 2025
Why it matters

Quick editorial signal

2 min

Impact: Relevant if you build with AI tools, APIs, or coding agents.

Audience: Developers
Level: Expert

  • Track this as an OpenAI update, not just a standalone headline.
  • Useful for builders who need to understand API, coding, or workflow changes.
  • Likely worth revisiting after people have used the release in practice.
Tags: model developers, creative, safety

* Disallowed content

* Jailbreaks

* Hallucinations

Preparedness Scorecard

* CBRN: Medium

* Cybersecurity: Low

* Persuasion: Medium

* Model Autonomy: Medium

Scorecard ratings: Low, Medium

Only models with a post-mitigation score of "medium" or below can be deployed.

Only models with a post-mitigation score of "high" or below can be developed further.
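The two gating rules above can be sketched as a small check. This is a hypothetical illustration of the stated policy, not OpenAI's actual tooling; the function names and the ordering of risk levels are assumptions.

```python
# Hypothetical sketch of the Preparedness Framework gating rules stated above.
# Level ordering and function names are assumptions, not OpenAI's implementation.

# Risk levels in increasing order of severity.
RISK_LEVELS = ["Low", "Medium", "High", "Critical"]


def can_deploy(post_mitigation_scores: dict) -> bool:
    """Deployment is allowed only if every post-mitigation score
    is Medium or below."""
    cutoff = RISK_LEVELS.index("Medium")
    return all(RISK_LEVELS.index(s) <= cutoff
               for s in post_mitigation_scores.values())


def can_develop_further(post_mitigation_scores: dict) -> bool:
    """Further development is allowed only if every post-mitigation
    score is High or below."""
    cutoff = RISK_LEVELS.index("High")
    return all(RISK_LEVELS.index(s) <= cutoff
               for s in post_mitigation_scores.values())


# o3-mini category scores as reported in this system card.
o3_mini = {
    "CBRN": "Medium",
    "Cybersecurity": "Low",
    "Persuasion": "Medium",
    "Model Autonomy": "Medium",
}

print(can_deploy(o3_mini))           # True: every category is Medium or below
print(can_develop_further(o3_mini))  # True: every category is High or below
```

Under these rules a model with any single High post-mitigation score could still be developed further but not deployed, which matches the two thresholds quoted above.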

Introduction

The OpenAI o model series is trained with large-scale reinforcement learning to reason using chain of thought. These advanced reasoning capabilities provide new avenues for improving the safety and robustness of our models. In particular, our models can reason about our safety policies in context when responding to potentially unsafe prompts, through deliberative alignment. This brings OpenAI o3‑mini to parity with state-of-the-art performance on certain benchmarks for risks such as generating illicit advice, choosing stereotyped responses, and succumbing to known jailbreaks. Training models to incorporate a chain of thought before answering has the potential to unlock substantial benefits, while also increasing potential risks that stem from heightened intelligence.

Under the Preparedness Framework⁠(opens in a new window), OpenAI’s Safety Advisory Group (SAG) recommended classifying the OpenAI o3‑mini (Pre-Mitigation) model as Medium risk overall. It scores Medium risk for Persuasion, CBRN (chemical, biological, radiological, nuclear), and Model Autonomy, and Low risk for Cybersecurity. Only models with a post-mitigation score of Medium or below can be deployed, and only models with a post-mitigation score of High or below can be developed further.

Due to improved coding and research engineering performance, OpenAI o3‑mini is the first model to reach Medium risk on Model Autonomy. However, it still performs poorly on evaluations designed to test real-world ML research capabilities relevant for self improvement, which is required for a High classification. Our results underscore the need for building robust alignment methods, extensively stress-testing their efficacy, and maintaining meticulous risk management protocols.

This report outlines the safety work carried out for the OpenAI o3‑mini model, including safety evaluations, external red teaming, and Preparedness Framework evaluations.

Authors

OpenAI

