OpenAI · Article · 7 October 2025

Disrupting malicious uses of AI: October 2025

Discover how OpenAI is detecting and disrupting malicious uses of AI in our October 2025 report. Learn how we're countering misuse, enforcing policies, and protecting users from real-world harms.

Article details
AI maker: OpenAI · Type: Article · Published: 7 October 2025 · Updates · Videos · View original article
Why it matters

Quick editorial signal

2 min
Impact

A product update that may change what people can do with AI this week.

Audience: AI users
Level: Medium
  • Track this as an OpenAI update, not just a standalone headline.
  • Relevant for creators comparing tools for images, audio, video, or publishing.
  • Likely worth revisiting after people have used the release in practice.
Tags: model · apps · video · safety

Our mission is to ensure that artificial general intelligence benefits all of humanity. We advance this mission by deploying innovations that help people solve difficult problems and by building democratic AI grounded in common-sense rules that protect people from real harms.

Since we began our public threat reporting in February 2024, we’ve disrupted and reported over 40 networks that violated our usage policies. This includes preventing uses of AI by authoritarian regimes to control populations or coerce other states, as well as abuses like scams, malicious cyber activity, and covert influence operations.

In this update, we share case studies from the past quarter and how we’re detecting and disrupting malicious use of our models. We continue to see threat actors bolt AI onto old playbooks to move faster, not gain novel offensive capability from our models. When activity violates our policies, we ban accounts and, where appropriate, share insights with partners. Our public reporting, policy enforcement, and collaboration with peers aim to raise awareness of abuse while improving protections for everyday users.

* Read the full report

* Alignment

* Policies and Procedures

Authors

Ben Nimmo, Kimo Bumanglag, Michael Flossman, Nathaniel Hartley, Jack Stubbs, Albert Zhang
