OpenAI · Article · 5 June 2025

Disrupting malicious uses of AI: June 2025

Our latest report featuring case studies of how we’re detecting and preventing malicious uses of AI.

Why it matters

Quick editorial signal · 2 min

Impact: Worth checking before choosing or changing a subscription.

Audience: Teams · Level: Easy

  • Track this as an OpenAI update, not just a standalone headline.
  • Check plan details before changing subscriptions or advising a team.
  • Likely worth revisiting after people have used the release in practice.

Tags: video, pricing, safety

Our mission is to ensure that artificial general intelligence benefits all of humanity. We advance this mission by deploying our innovations to build AI tools that help people solve really hard problems.

As we laid out in our submission to the Office of Science and Technology Policy’s U.S. AI Action Plan in March, we believe that making sure AI benefits the most people possible means enabling AI through common-sense rules aimed at protecting people from actual harms, and building democratic AI. This includes preventing the use of AI tools by authoritarian regimes to amass power and control their citizens, or to threaten or coerce other states; as well as activities such as covert influence operations (IO), child exploitation, scams, spam, and malicious cyber activity.

It also includes _using_ AI to develop groundbreaking new tools for those who defend against such abuses. By using AI as a force multiplier for our expert investigative teams, in the three months since our last report we’ve been able to detect, disrupt, and expose abusive activity including social engineering, cyber espionage, deceptive employment schemes, covert influence operations and scams.

* Read the full report

Author

OpenAI
