OpenAI ARTICLE 25 February 2026

Disrupting malicious uses of AI

Our latest report featuring case studies of how we’re detecting and preventing malicious uses of AI.

Article details
AI maker: OpenAI · Type: Article · Published: 25 February 2026


In the two years since we began publishing these threat reports, we have gained important insights into how threat actors attempt to abuse AI models. In particular, the case studies in this report, as in our earlier reports, illustrate how threat actors typically use AI in combination with other, more traditional tools such as websites and social media accounts. Threat activity is seldom limited to one platform, and, as our report on a Chinese influence operator shows, it is not always limited to one AI model: threat actors may use different AI models at various points in their operational workflow. We share these insights in our threat reports so that our industry, and wider society, are better placed to identify and avoid such threats.

Read the full report here.


Author

OpenAI
