Relevant if you build with AI tools, APIs, or coding agents.
Security on the path to AGI
Quick editorial signal
- Track this as an OpenAI update, not just a standalone headline.
- Useful for builders who need to understand API, coding, or workflow changes.
- Likely worth revisiting after people have used the release in practice.
Security threats evolve constantly and as we get closer to AGI, we expect our adversaries to become more tenacious, numerous and persistent. At OpenAI, we proactively adapt in multiple ways, including by building comprehensive security measures directly into our infrastructure and models.
AI-powered cyber defense
To protect our users, systems, and intellectual property, we leverage our own AI technology to scale our cyber defenses. We have developed advanced methods to detect cyber threats and respond rapidly. Alongside conventional threat detection and incident response strategies, our AI-driven security agents enhance threat detection, enable rapid response to evolving adversarial tactics, and equip security teams with the precise, actionable intelligence needed to counter sophisticated cyberattacks.
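The post does not describe how such AI-driven triage works internally. As a purely illustrative sketch, one common pattern is to score raw security events and surface only the highest-risk ones to responders; the event fields, keyword weights, and threshold below are all hypothetical stand-ins for a trained model:

```python
# Hypothetical sketch of AI-assisted alert triage. A scoring function
# (a keyword heuristic standing in for a real classifier) ranks raw
# security events so analysts see the highest-risk ones first.
from dataclasses import dataclass

@dataclass
class SecurityEvent:
    source: str
    message: str

# Toy "model": keyword weights in place of a learned scorer (illustrative only).
SUSPICIOUS_TERMS = {
    "failed login": 0.4,
    "privilege escalation": 0.9,
    "curl http": 0.6,
}

def risk_score(event: SecurityEvent) -> float:
    """Return a 0..1 risk score for an event."""
    text = event.message.lower()
    return min(1.0, sum(w for term, w in SUSPICIOUS_TERMS.items() if term in text))

def triage(events: list[SecurityEvent], threshold: float = 0.5) -> list[SecurityEvent]:
    """Keep events at or above the risk threshold, highest risk first."""
    flagged = [e for e in events if risk_score(e) >= threshold]
    return sorted(flagged, key=risk_score, reverse=True)
```

In a real deployment the scorer would be a model rather than a keyword table, but the shape, score then route, is the same.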
Continuous adversarial red teaming
We have partnered with SpecterOps, renowned experts in security research and adversarial operations, to rigorously test our security defenses through realistic simulated attacks across our infrastructure, including corporate, cloud, and production environments. These continuous assessments let us identify vulnerabilities proactively, enhance our detection capabilities, and strengthen our response strategies against sophisticated threats. Beyond these assessments, we are also collaborating on advanced skills training and on additional techniques for better protecting our products and models.
Disrupting threat actors and proactively combating malicious AI abuse
We continuously monitor and disrupt attempts by malicious actors to exploit our technologies. When we identify threats targeting us, such as a recent spear-phishing campaign aimed at our employees, we don't just defend ourselves; we share tradecraft with other AI labs to strengthen our collective defenses. By sharing these emerging risks and collaborating across industry and government, we help ensure AI technologies are developed and deployed securely.
Securing emerging AI agents
As we introduce advanced AI agents such as Operator and deep research, we invest in understanding and mitigating the unique security and resilience challenges they raise. Our efforts include developing robust alignment methods to defend against prompt injection attacks, strengthening underlying infrastructure security, and implementing agent monitoring controls to quickly detect and mitigate unintended or harmful behaviors. As part of this, we're building a unified pipeline and modular framework that provide scalable, real-time visibility and enforcement across agent actions and form factors.
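One widely used building block for this kind of agent monitoring, sketched here as an assumption rather than OpenAI's actual framework, is a policy gate: every tool call an agent proposes is checked against an allowlist and a denylist of dangerous patterns before it runs. The action names and policy rules below are invented for illustration:

```python
# Hypothetical sketch of an agent-action policy gate: agent-proposed
# tool calls are screened before execution. Policy contents are
# illustrative, not a real product's rules.
from typing import Any, Callable

ALLOWED_ACTIONS = {"read_file", "search_web"}          # tools the agent may use
BLOCKED_PATTERNS = ("rm -rf", "DROP TABLE")            # obviously destructive inputs

def is_permitted(action: str, argument: str) -> bool:
    """Allow only known-safe actions and reject dangerous-looking arguments."""
    if action not in ALLOWED_ACTIONS:
        return False
    return not any(p in argument for p in BLOCKED_PATTERNS)

def run_action(action: str, argument: str,
               executor: Callable[[str, str], Any]) -> Any:
    """Execute an agent-proposed action only if policy permits it."""
    if not is_permitted(action, argument):
        raise PermissionError(f"blocked: {action}({argument!r})")
    return executor(action, argument)
```

A real system would log every decision and feed blocked calls back into monitoring; the gate itself stays deliberately simple so it is easy to audit.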
Security for future AI initiatives
Security is a cornerstone of the design and implementation of next-generation AI projects such as Stargate. We work with our partners to adopt industry-leading security practices such as zero-trust architectures and hardware-backed security solutions. Where we are substantially expanding our physical infrastructure, we partner closely to ensure our physical safeguards evolve in tandem with our AI capabilities. These strategies include advanced access controls, comprehensive security monitoring, cryptographic protections, and defense in depth. Combined with a focus on securing software and hardware supply chains, these practices build foundational security from the ground up.
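As one concrete example of the cryptographic and supply-chain protections mentioned above, chosen by us for illustration rather than taken from the post, an artifact can be tagged with an HMAC at build time and verified before deployment, so tampering in transit is detected. Key management is deliberately simplified here:

```python
# Hypothetical sketch of one defense-in-depth layer: verifying a software
# artifact against an HMAC-SHA256 tag before deployment. In practice the
# key would live in an HSM or secrets manager, not in code.
import hashlib
import hmac

def sign_artifact(data: bytes, key: bytes) -> str:
    """Produce a hex HMAC-SHA256 tag for an artifact at build time."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def verify_artifact(data: bytes, key: bytes, tag: str) -> bool:
    """Constant-time check that the artifact matches its tag before deploy."""
    return hmac.compare_digest(sign_artifact(data, key), tag)
```

`hmac.compare_digest` is used instead of `==` to avoid leaking information through timing differences, a small instance of the defense-in-depth mindset the section describes.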
Expanding our security program
We are growing our security program across several dimensions and are looking for passionate engineers in several areas. If you are interested in protecting OpenAI and our customers, and in building the future of secure and trustworthy AI, we'd love to hear from you!