A product update that may change what people can do with AI this week.
Disrupting malicious uses of AI: October 2025
Discover how OpenAI is detecting and disrupting malicious uses of AI in our October 2025 report. Learn how we’re countering misuse, enforcing policies, and protecting users from real-world harms.
Quick editorial signal
- Track this as an OpenAI update, not just a standalone headline.
- Relevant for creators comparing tools for images, audio, video, or publishing.
- Likely worth revisiting after people have used the release in practice.
Our mission is to ensure that artificial general intelligence benefits all of humanity. We advance this mission by deploying innovations that help people solve difficult problems and by building democratic AI grounded in common-sense rules that protect people from real harms.
Since we began our public threat reporting in February 2024, we’ve disrupted and reported over 40 networks that violated our usage policies. This includes preventing uses of AI by authoritarian regimes to control populations or coerce other states, as well as abuses like scams, malicious cyber activity, and covert influence operations.
In this update, we share case studies from the past quarter and how we’re detecting and disrupting malicious use of our models. We continue to see threat actors bolt AI onto old playbooks to move faster, not gain novel offensive capability from our models. When activity violates our policies, we ban accounts and, where appropriate, share insights with partners. Our public reporting, policy enforcement, and collaboration with peers aim to raise awareness of abuse while improving protections for everyday users.
* Read the full report
Author
Ben Nimmo, Kimo Bumanglag, Michael Flossman, Nathaniel Hartley, Jack Stubbs, Albert Zhang