Launching Sora responsibly
Sora 2 and the Sora app combine cutting-edge video generation with a new way to create together, and we’ve made sure safety is built in from the very start. Our approach is anchored in concrete protections:
* Distinguishing AI content. Every video generated with Sora includes both visible and invisible provenance signals. At launch, all outputs carry a visible watermark. All Sora videos also embed C2PA metadata—an industry-standard signature—and we maintain internal reverse-image and audio search tools that can trace videos back to Sora with high accuracy, building on successful systems from ChatGPT image generation and Sora 1.
* Consent-based likeness using characters. Our goal is to place you in control of your likeness end-to-end with Sora characters. We have guardrails intended to ensure that your audio and image likeness captured in characters is used only with your consent. Only you decide who can use your characters, and you can revoke access at any time. We also take measures to block depictions of public figures (except those using the characters feature, of course). Videos that include your characters—including drafts created by other users—are always visible to you. This lets you easily review and delete (and, if needed, report) any videos featuring your character. We also apply extra safety guardrails to any video with a character, and you can even set preferences for how your character behaves—for example, requesting that it always wears a fedora.
* Safeguards for teens. Sora includes stronger protections for younger users, including limitations on mature output. The feed is designed to be appropriate for teens, teen profiles are not recommended to adults, and adults cannot initiate messages with teens. New parental controls in ChatGPT let parents manage whether teens can send and receive DMs, as well as select a non-personalized feed in the Sora app. And by default, teens also have limits on how much they can continuously scroll in Sora.
* Filtering harmful content. Sora uses layered defenses to keep the feed safe while leaving room for creativity. At creation, guardrails seek to block unsafe content before it’s made—including sexual material, terrorist propaganda, and self-harm promotion—by checking both prompts and outputs across multiple video frames and audio transcripts. We’ve red teamed to explore novel risks, and we’ve tightened policies relative to image generation given Sora’s greater realism and the addition of motion and audio. Beyond generation, automated systems scan all feed content against our Global Usage Policies and filter out unsafe or age-inappropriate material. These systems are continuously updated as we learn about new risks and are complemented by human review focused on the highest-impact harms.
* Audio safeguards. Adding audio to Sora raises the bar for safety, and while perfect protections are difficult, we continue to invest seriously in this area. Sora automatically scans transcripts of generated speech for potential policy violations, and also blocks attempts to generate music that imitates living artists or existing works. Our systems are designed to detect and stop such prompts, and we honor takedown requests from creators who believe a Sora output infringes on their work.
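The layered-defense flow described above (a prompt check before generation, then checks over frame content and the audio transcript, then feed-level scanning) can be sketched roughly as follows. This is a minimal illustration, not Sora's actual moderation stack, which is not public: every name is hypothetical, and the keyword blocklist is a toy stand-in for real policy classifiers.

```python
# Hypothetical sketch of a layered moderation pipeline. All names are
# illustrative; the blocklist below is a toy stand-in for real classifiers.
from typing import Callable, Optional

BLOCKED_TERMS = {"blocked-topic"}  # placeholder policy terms

def violates(text: str) -> bool:
    """Toy classifier: flags text containing a blocked term."""
    return any(term in text.lower() for term in BLOCKED_TERMS)

def moderate(prompt: str,
             generate: Callable[[str], dict]) -> Optional[dict]:
    # Layer 1: check the prompt before anything is generated.
    if violates(prompt):
        return None
    video = generate(prompt)
    # Layer 2: check sampled frame descriptions and the audio transcript.
    if any(violates(caption) for caption in video["frame_captions"]):
        return None
    if violates(video["transcript"]):
        return None
    # Layer 3 (not shown): feed-level rescans and human review.
    return video

def fake_generator(prompt: str) -> dict:
    """Stub standing in for the video model."""
    return {"frame_captions": [prompt], "transcript": prompt}

print(moderate("a cat surfing", fake_generator) is not None)    # True
print(moderate("blocked-topic scene", fake_generator) is None)  # True
```

The key design point mirrored here is that a generation is refused if *any* layer flags it, and the cheapest check (the prompt) runs before any compute is spent on video.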
Author
The Sora team