OpenAI Article, 3 February 2026

The Sora feed philosophy


Article details
AI maker: OpenAI | Type: Article | Published: 3 February 2026 | Tags: Updates, Videos
Why it matters

Quick editorial signal

4 min read

Impact: A product update that may change what people can do with AI this week.

Audience: Creators
Level: Medium

  • Track this as an OpenAI update, not just a standalone headline.
  • Relevant for creators comparing tools for images, audio, video, or publishing.
  • Likely worth revisiting after people have used the release in practice.

Tags: model, apps, creative, safety



Our aim with the Sora feed is simple: help people learn what’s possible, and inspire them to create. Here are some of the core starting principles that bring this vision to life:

* Optimize for creativity. We’re designing ranking to favor creativity and active participation, not passive scrolling. We think this is what makes Sora joyful to use.

* Put users in control. The feed ships with steerable ranking, so you can tell the algorithm exactly what you’re in the mood for. Parents can also turn off feed personalization and control continuous scroll for their teens through ChatGPT parental controls.

* Prioritize connection. We want Sora to help people strengthen and form new connections, especially through fun, magical Cameo flows. Connected content will be favored over global, unconnected content.

* Balance safety and freedom. The feed is designed to be widely accessible and safe. Robust guardrails aim to prevent unsafe or harmful generations from the start, and we block content that may violate our Usage Policies. At the same time, we also want to leave room for expression, creativity, and community. We know recommendation systems are living, breathing things. As we learn from real use, we’ll adjust the details—in service of these principles.

Our recommendation algorithms are designed to give you personalized recommendations that inspire you and others to be creative. Each individual has unique interests and tastes, so we’ve built a personalized system to best serve this mission.

To personalize your Sora Feed, we may consider signals like:

* Your activity on Sora: This may include your posts, followed accounts, liked and commented-on posts, and remixed content. It may also include the general location (such as the city) from which your device accesses Sora, based on information like your IP address.

* Your ChatGPT data: We may consider your ChatGPT history, but you can always turn this off in Sora’s Data Controls, within Settings.

* Content engagement signals: This may include signals such as views, likes, comments, instructions to “see less content like this,” and remixes.

* Author signals: This may include follower count, other posts, and past post engagement.

* Safety signals: Whether the post is considered violative or appropriate.

We may use these signals to predict if this content is something you may like to see and riff off of.
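As a purely illustrative sketch (not OpenAI's actual ranking model), the signal families above could feed a weighted scoring function. Every field name, weight, and rule below is hypothetical:

```python
# Hypothetical sketch of a personalized feed score built from the signal
# families described above. All names and weights are illustrative
# assumptions, not OpenAI's implementation.
from dataclasses import dataclass

@dataclass
class PostSignals:
    predicted_like: float    # engagement signals: chance the viewer likes it
    predicted_remix: float   # creativity signal: chance the viewer riffs on it
    author_affinity: float   # author signals: follows, past post engagement
    is_connected: bool       # connection principle: from the viewer's network
    see_less_topic: bool     # viewer asked to "see less content like this"
    violates_policy: bool    # safety signals

def feed_score(s: PostSignals) -> float:
    """Return a ranking score; higher surfaces earlier in the feed."""
    if s.violates_policy:
        return 0.0                      # safety gate: never rank violative posts
    # Weight remixing above passive liking, reflecting "optimize for creativity".
    score = 1.0 * s.predicted_like + 2.0 * s.predicted_remix
    score += 0.5 * s.author_affinity
    if s.is_connected:
        score *= 1.5                    # favor connected over global content
    if s.see_less_topic:
        score *= 0.2                    # steerable ranking: down-weight on request
    return score
```

The key design point the sketch captures is that safety acts as a hard gate while the other signals trade off against each other, so a highly engaging but violative post can never outrank a safe one.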

Parents are also able to turn off feed personalization and manage continuous scroll for their teens using parental controls in ChatGPT.

Keeping the Sora Feed safe and fun for everyone means walking a careful line: protect users from harmful content, while leaving enough freedom for creativity to thrive.

We may remove content that violates our Global Usage Policies. Additionally, content deemed inappropriate for users may be removed from the feed and other sharing platforms (such as user galleries and side characters) in accordance with our Sora Distribution Guidelines. This includes:

* Graphic sexual content;

* Graphic violence or content promoting violence;

* Extremist propaganda;

* Hateful content;

* Content that promotes or depicts self harm or disordered eating;

* Unhealthy dieting or exercise behaviors;

* Appearance-based critiques or comparisons;

* Bullying content;

* Dangerous challenges likely to be imitated by minors;

* Content glorifying depression;

* Promotion of age-restricted goods or activities, including illegal drugs or harmful substances;

* Low-quality content where the primary purpose is engagement bait;

* Content that recreates the likeness of living individuals without their consent, or of deceased public figures in contexts where their likeness is not permitted for use; and

* Content that may infringe on the intellectual property rights of others.

Our first layer of defense is at the point of creation. Because every post is generated within Sora, we can build in strong guardrails that prevent unsafe or harmful content before it’s made. If a generation bypasses these guardrails, we may remove that content from sharing.

Beyond generation, the feed is designed to be appropriate for all Sora users. Content that may be harmful, unsafe, or age-inappropriate is filtered out for teen accounts. We use automated tools to scan all feed content for compliance with our Global Usage Policies and feed eligibility. These systems are continuously updated as we learn more about new risks.

We complement this with human review. Our team monitors user reports and proactively checks feed activity to catch what automation may miss. If you see something you think does not follow our Usage Policies, you can report it.

But safety isn’t only about strict filters. Too many restrictions can stifle creativity, while too much freedom can undermine trust. We aim for a balance: proactive guardrails where the risks are highest, combined with a reactive “report + takedown” system that gives users room to explore and create while ensuring we can act quickly when problems arise. This approach has served us well in ChatGPT’s 4o image generation model, and we’re building on that philosophy here.
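The layered approach described above, proactive checks at generation time, an automated feed scan, and a reactive report-and-review path, can be sketched as follows. Every function name, label, and rule here is a hypothetical placeholder, not OpenAI's actual moderation system:

```python
# Illustrative sketch of a three-layer moderation flow: generation-time
# guardrails, automated feed scanning, and human review of user reports.
# All rules and labels are invented placeholders.
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    REVIEW = "needs_human_review"

def generation_guardrail(prompt: str) -> Verdict:
    """First layer: stop unsafe content before it is ever generated."""
    blocked_terms = {"extremist propaganda", "graphic violence"}  # placeholder rules
    if any(term in prompt.lower() for term in blocked_terms):
        return Verdict.BLOCK
    return Verdict.ALLOW

def feed_eligibility_scan(post_labels: set[str]) -> Verdict:
    """Second layer: automated scan of every feed post."""
    if "policy_violation" in post_labels:
        return Verdict.BLOCK        # removed under the Usage Policies
    if "borderline" in post_labels:
        return Verdict.REVIEW       # escalated to human reviewers
    return Verdict.ALLOW

def handle_user_report(report_count: int, reviewer_says_violative: bool) -> Verdict:
    """Reactive layer: user reports feed a human review + takedown path."""
    if report_count > 0 and reviewer_says_violative:
        return Verdict.BLOCK
    return Verdict.ALLOW
```

The ordering matters: the earlier a layer catches a problem, the cheaper it is, so the generation-time gate handles the highest-risk cases while the report path covers whatever automation misses.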

We also know we won’t get this balance perfect from day one. Recommendation systems and safety models are living, evolving systems, and your feedback will be essential in helping us refine them. We look forward to learning together and improving over time.
