OpenAI · Article · 21 July 2023

Moving AI governance forward

OpenAI and other leading labs reinforce AI safety, security and trustworthiness through voluntary commitments.

Article details
AI maker: OpenAI · Type: Article · Published: 21 July 2023
Why it matters

Quick editorial signal · 2 min read

Impact: A product update that may change what people can do with AI this week.
Audience: Creators
Level: Medium
  • Track this as an OpenAI update, not just a standalone headline.
  • Relevant for creators comparing tools for images, audio, video, or publishing.
  • Likely worth revisiting after people have used the release in practice.

Tags: model, apps, creative, safety

Illustration: Justin Jay Wang × DALL·E

OpenAI and other leading AI labs are making a set of voluntary commitments to reinforce the safety, security and trustworthiness of AI technology and our services. This process, coordinated by the White House, is an important step in advancing meaningful and effective AI governance, both in the US and around the world.

As part of our mission to build safe and beneficial AGI, we will continue to pilot and refine concrete governance practices specifically tailored to highly capable foundation models like the ones that we produce. We will also continue to invest in research in areas that can help inform regulation, such as techniques for assessing potentially dangerous capabilities in AI models.

“Policymakers around the world are considering new laws for highly capable AI systems. Today’s commitments contribute specific and concrete practices to that ongoing discussion. This announcement is part of our ongoing collaboration with governments, civil society organizations and others around the world to advance AI governance,” said Anna Makanju, VP of Global Affairs.

Voluntary AI commitments

_The following is a list of commitments that companies are making to promote the safe, secure, and transparent development and use of AI technology. These voluntary commitments are consistent with existing laws and regulations, and designed to advance a generative AI legal and policy regime. Companies intend these voluntary commitments to remain in effect until regulations covering substantially the same issues come into force. Individual companies may make additional commitments beyond those included here._

Scope: Where commitments mention particular models, they apply only to generative models that are overall more powerful than the current industry frontier (e.g. models that are overall more powerful than any currently released models, including GPT‑4, Claude 2, PaLM 2, Titan and, in the case of image generation, DALL-E 2).

