Disrupting malicious uses of AI
Our latest report featuring case studies of how we're detecting and preventing malicious uses of AI.
Our mission is to ensure that artificial general intelligence benefits all of humanity. We advance this mission by deploying our innovations to build AI tools that help people solve hard problems.
As we laid out in our Economic Blueprint in January, we believe that making sure AI benefits the most people possible means enabling AI through common-sense rules aimed at protecting people from actual harms, and building democratic AI. This includes preventing use of AI tools by authoritarian regimes to amass power and control their citizens, or to threaten or coerce other states; as well as activities such as child exploitation, covert influence operations (IOs), scams, spam, and malicious cyber activity. The AI-powered investigative capabilities that flow from OpenAI’s innovations provide valuable tools to help protect democratic AI against the measures of adversarial authoritarian regimes.
It has now been a year since OpenAI became the first AI research lab to publish reports on our threat disruptions, supporting broader efforts by U.S. and allied governments, industry partners, and other stakeholders to prevent abuse by adversaries and other malicious actors. This latest report outlines some of the trends and features of our AI-powered work, together with case studies that highlight the types of threats we've disrupted.
* Read the full report
* Global Affairs
* 2025
Authors
Ben Nimmo, Albert Zhang, Matthew Richard, Nathaniel Hartley