OpenAI · Article · 9 June 2025

Scaling security with responsible disclosure

Article details
AI maker: OpenAI
Type: Article
Published: 9 June 2025
Why it matters

Quick editorial signal

2 min
Impact

Relevant if you build with AI tools, APIs, or coding agents.

Audience: Developers
Level: Expert
  • Track this as an OpenAI update, not just a standalone headline.
  • Useful for builders who need to understand API, coding, or workflow changes.
  • Likely worth revisiting after people have used the release in practice.
model apps developers safety


OpenAI’s approach to reporting vulnerabilities in third-party software, built on integrity, cooperation, and scale.

We are publishing an Outbound Coordinated Disclosure Policy that we will follow when disclosing vulnerabilities to third parties.

At OpenAI, we are committed to advancing a secure digital ecosystem. That’s why we’re introducing our Outbound Coordinated Disclosure Policy, which lays out how we responsibly report security issues we discover in third-party software. We're doing this now because we believe coordinated vulnerability disclosure will become a necessary practice as AI systems become increasingly capable of finding and patching security vulnerabilities. Systems developed by OpenAI have already uncovered zero-day vulnerabilities in third-party and open-source software, and we are taking this proactive step in anticipation of future discoveries.

Whether surfaced through ongoing research, targeted audits of open source code we leverage, or automated analysis using AI tools, our goal is to report vulnerabilities in a way that’s cooperative, respectful, and helpful to the broader ecosystem.

This policy lays out how we disclose issues found in open-source and commercial software through automated and manual code review, as well as discoveries arising from internal usage of third-party software and systems.

It explains:

* How we validate and prioritize findings

* How we contact vendors and the disclosure mechanics we follow

* When and how we go public (non-public first, unless the details demand otherwise)

* Our principles, which include being impact-oriented, cooperative, discreet by default, high-scale and low-friction, and providing attribution when relevant
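The policy itself does not publish triage logic, but the first two points above (validating findings and prioritizing them before vendor contact) can be illustrated with a minimal, hypothetical sketch. Everything here is an assumption for illustration: the `Finding` record, the CVSS-based ordering, and the rule that only reproduced findings are queued for outreach are not OpenAI's published process.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """Hypothetical record for a vulnerability found in third-party code."""
    component: str    # affected package or product
    summary: str      # short description of the issue
    cvss_score: float # severity, 0.0-10.0 (e.g. a CVSS v3.1 base score)
    validated: bool   # confirmed reproducible before any vendor outreach

def prioritize(findings: list[Finding]) -> list[Finding]:
    """Order validated findings by severity, highest first.

    Unvalidated findings are excluded: only issues that have been
    reproduced are queued for disclosure to the vendor.
    """
    confirmed = [f for f in findings if f.validated]
    return sorted(confirmed, key=lambda f: f.cvss_score, reverse=True)

queue = prioritize([
    Finding("libexample", "heap overflow in parser", 8.1, validated=True),
    Finding("webthing", "possible XSS, unconfirmed", 6.1, validated=False),
    Finding("acme-auth", "token replay", 9.0, validated=True),
])
print([f.component for f in queue])  # ['acme-auth', 'libexample']
```

The design choice sketched here is simply that validation gates prioritization: severity only matters once a finding is reproducible, which keeps low-quality reports out of vendors' inboxes.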

We take an intentionally developer-friendly stance on disclosure timelines and have elected to leave timelines open-ended by default. This approach reflects the evolving nature of vulnerability discovery, particularly as AI systems become more effective at reasoning about code, its strengths and weaknesses, and generating reliable patches to increase code security. We anticipate our models detecting a greater number of bugs of increasing complexity, which may require deeper collaboration and more time to resolve sustainably. We’ll continue working with software maintainers to develop disclosure norms that balance urgency with long-term resilience. We still reserve the right to disclose when we determine there is, for example, public interest in doing so.

We will keep improving this policy as we learn. If you have questions about our disclosure practices, reach out to us at outbounddisclosures@openai.com.

Security is a journey defined by continuous improvement. We’re thankful to the vendors, researchers, and community members who walk that road with us. We hope that transparent communication around our approach supports a healthier, more secure ecosystem for everyone.
