OpenAI Article, 27 August 2025

Collective alignment: public input on our Model Spec



No single person or institution should define how an ideal AI should behave for everyone.

To fulfill our mission of ensuring that AGI benefits all of humanity, OpenAI needs to build systems that reflect the wide range of values and priorities of all the people we serve. We approach this in many ways, including external feedback forms, expert input, and global listening sessions. Another way we do this is through collective alignment, a research effort that gathers a variety of perspectives on how our models should behave. The question of which values an AI system should follow is complex, and we don’t have all the answers, especially in subjective, contentious, or high-stakes situations. As AI becomes more capable and integrated into people’s lives, it’s important that models’ default behavior, and the boundaries of personalization, reflect a wide range of perspectives and values.

There will likely never be a single AI behavior set that suits everyone’s needs. This is why we also invest in personalization and custom personalities. However, the defaults of a model are powerful, and we would like input from the public to help us shape them.

Today, we share a few early steps that we’ve taken as part of the collective alignment research direction. We collected input from over 1,000 people worldwide, transformed it into actionable guidelines, and went through internal reviews to make updates to our Model Spec. In many cases, participant preferences aligned with the Model Spec as written. In other cases, disagreements highlighted wording that could be clarified, or were transformed into proposals for internal review. We adopted some changes, deferred others to upcoming work, and set aside others on grounds of principle or feasibility. Finally, we are sharing our public inputs dataset with the AI research ecosystem on Hugging Face to enable future work in this direction.

This blog lays out our first step in testing a process for understanding and integrating diverse preferences end-to-end: eliciting people’s preferences, translating them into concrete behavioral guidance, and proposing updates to our Spec.

Our work today focuses on preferences that were shared across the crowd. We identify proposals from crowd feedback that we qualify as clarifications (i.e. when a participant’s desired behavior is in line with the existing principles of the Model Spec, but the framing of the current text had left room for interpretation) and change-of-principles (i.e. when a participant’s desired behavior is not aligned with the principles in the Model Spec). While public input will be valuable across all levels of the instruction hierarchy, changes will be less likely to be adopted the higher up the hierarchy they go, especially at the platform-level.

In our data collection process detailed below, participants reviewed synthetic prompts and response examples rather than directly reviewing the Spec document. Each participant ranked four possible completions to a prompt according to their own preference. In order to compare their implicit preferences with our stated principles, we built a Model Spec Ranker (MSR): a reasoning model that ranked, according to the given Spec, the same four possible responses to each prompt that participants ranked.
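One concrete way to quantify how closely a participant's ranking of the four responses matches the Model Spec Ranker's is a rank correlation such as Kendall's tau. The blog does not specify which agreement metric was used, so the sketch below is purely illustrative, and the rankings in it are made up:

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall rank correlation between two rankings of the same items.

    Each ranking maps item -> position (0 = most preferred). Returns a
    value in [-1, 1]; 1 means identical orderings, -1 fully reversed.
    Assumes no ties, as with a forced ranking of four responses.
    """
    items = list(rank_a)
    concordant = discordant = 0
    for x, y in combinations(items, 2):
        a = rank_a[x] - rank_a[y]
        b = rank_b[x] - rank_b[y]
        if a * b > 0:
            concordant += 1
        else:
            discordant += 1
    n_pairs = len(items) * (len(items) - 1) // 2
    return (concordant - discordant) / n_pairs

# Hypothetical rankings of four candidate responses A-D:
# the ranker agrees with the participant except for swapping B and C.
participant = {"A": 0, "B": 1, "C": 2, "D": 3}
msr = {"A": 0, "B": 2, "C": 1, "D": 3}

print(kendall_tau(participant, msr))  # 5 concordant pairs, 1 discordant -> 2/3
```

Averaging such per-conversation scores over a section of the Spec gives one possible section-level agreement measure.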

The Model Spec Ranker has limitations. Because reasoning models are not perfect rule followers and the Spec is inherently underspecified, there will be interpretation bias depending on which model is used to perform the ranking. Although the Model Spec Ranker was prompted to simply apply the Spec, it was previously trained with data related to human preferences, which likely influences its interpretation of expected behavior. For example, consider a statement from the Model Spec like “The model should generally fulfill requests from any point on the opinion spectrum.” This statement could be interpreted in many different ways, and in practice lead to different rankings.

_Figure 1: We used GPT‑5 Thinking to grade on a scale of 1–5 how relevant each conversation in our dataset is to a section of the spec, and calculate section-level agreement where relevance grades were 5 (error bars represent bootstrapped 95% confidence intervals)._
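Error bars like those in Figure 1 can be produced with a standard percentile bootstrap over per-conversation agreement. The sketch below is illustrative only; the 0/1 agreement flags, sample size, and resampling count are made up, not taken from the study:

```python
import random

def bootstrap_ci(agreements, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile-bootstrap 95% CI for a mean agreement rate.

    agreements: list of 0/1 flags, one per conversation in a Spec section
    (1 = the participant's ranking matched the Model Spec Ranker's).
    Returns (point estimate, (lower bound, upper bound)).
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    n = len(agreements)
    # Resample with replacement and collect the resampled means.
    means = sorted(
        sum(rng.choices(agreements, k=n)) / n for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return sum(agreements) / n, (lo, hi)

# Toy section: 40 conversations, 30 of which agreed with the ranker.
flags = [1] * 30 + [0] * 10
mean, (lo, hi) = bootstrap_ci(flags)
print(f"agreement {mean:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```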

Using the collected input, we will be updating the Spec; the changes will appear in the next Model Spec release.

While some divergences between crowd preferences and the Spec led to clarifications or updates, others raised practical challenges that kept us from adopting them at this stage. Two areas stood out:

* Tailored political content: Many participants favored more tailored political content generation. We did not adopt this change, given the risks of large-scale individualized political targeting and our cautious approach in this area. It was unclear whether participants had considered these risks when giving their feedback.

* Erotica for consenting adults: A large share of the crowd supported enabling erotica. While this aligns with our prior intended stance, we have more research and product work to do to deploy this in the right way, so we did not adopt any changes here.

We recruited ~1,000 participants to review model behavior in value‑sensitive domains. Participants lived in 19 countries (originally hailing from 50+), met an English‑reading inclusion criterion, and could write justifications in their native language. About a third lived in the US, with others from Mexico, South Africa, the Netherlands, Chile, the UK, India, Kenya, Japan, and more. For a first test, the participant pool included a wide spread of perspectives across age, gender, race, education, and AI usage. You can view aggregated statistics about the participants below.

Rather than asking participants to read and comment on the entire Spec, we asked them to review prompts and responses we had pre-selected that related to the Spec. These prompts were oriented towards scenarios where ideal model behavior may be subjective. This gave us the opportunity to understand participants’ detailed reasoning on many specific examples, which we could compare to our principles, our rationales, and our Model Spec Ranker’s explanations. For each prompt, we generated three completions encompassing different realistic opinions on how best to answer, plus one additional completion from GPT‑4o. Participants saw the four candidate responses, ranked them, explained their choices, scored pre-written rubrics, and wrote their own rubrics (Figure 2). Participants reviewed at least 5 and up to 20 prompts.

_Figure 2: Participants ranked model responses where ideal behavior might be subjective. We turned their rankings, justifications, and rubric items into proposed Model Spec changes._

We examined ways to turn participant feedback into concrete Model Spec proposals by testing two complementary approaches, focused on the biggest gaps between participants’ views and the Model Spec Ranker’s output under the current Spec.

1. Fully-Automated Loop. A reasoning model examined areas of disagreement from rankings and justifications, proposed changes to the Spec that would improve its alignment with participants, and tested the proposals using the Model Spec Ranker to select those that improved agreement with our crowd’s rankings.

2. Human-First Loop. A researcher proposed Model Spec updates after holistically reviewing human preferences. We validated proposed changes by using a reasoning model to judge whether the crowd’s plain-text justifications supported, refuted, or did not comment on the intent behind each change.
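The selection step of the fully-automated loop can be sketched as an accept-if-improves filter: keep a proposed Spec edit only if re-ranking under the edited Spec raises agreement with the crowd. The sketch below is a toy illustration, not OpenAI's implementation; `toy_ranker` is a made-up stand-in for the Model Spec Ranker, and the conversations, edits, and rankings are invented:

```python
def mean_agreement(spec, conversations, rank_with_spec, crowd_rankings):
    """Fraction of conversations where the ranker's ordering (under the
    given Spec text) exactly matches the participants' ordering."""
    hits = sum(
        rank_with_spec(spec, conv) == crowd_rankings[conv]
        for conv in conversations
    )
    return hits / len(conversations)

def select_proposals(base_spec, proposals, conversations,
                     rank_with_spec, crowd_rankings):
    """Keep only the Spec edits that raise agreement above the baseline."""
    baseline = mean_agreement(base_spec, conversations,
                              rank_with_spec, crowd_rankings)
    kept = []
    for edit in proposals:
        candidate = base_spec + "\n" + edit
        score = mean_agreement(candidate, conversations,
                               rank_with_spec, crowd_rankings)
        if score > baseline:
            kept.append((edit, score))
    return baseline, kept

# Toy stand-in for the Model Spec Ranker: it prefers response "B"
# only when the Spec text mentions the word "empathize".
def toy_ranker(spec, conv):
    return ("B", "A") if "empathize" in spec else ("A", "B")

# Participants in both toy conversations preferred response "B".
crowd = {"conv1": ("B", "A"), "conv2": ("B", "A")}

baseline, kept = select_proposals(
    "Be helpful.",
    ["Always empathize with the user.", "Be concise."],
    list(crowd), toy_ranker, crowd,
)
print(baseline, kept)  # 0.0 [('Always empathize with the user.', 1.0)]
```

In the real loop the ranker is a reasoning model rather than a string check, so the filter is noisy and anchored to that model's reading of the Spec.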

Both approaches have tradeoffs. The human-first loop allowed for creative thinking and reasoning that the fully-automated loop could not reliably replicate. For example, in one conversation the human-first loop was able to infer that the crowd would value recognizing indirect suicidal intent, whereas the fully-automated loop missed such nuance. At the same time, the human-first loop does not scale as effectively as the fully-automated one, and will not work well as we increase the number of people we listen to.

In the fully-automated loop, we tried two Spec-proposal algorithms. Both were tasked with identifying patterns of disagreement between the participants’ and the MSR’s rankings: one version provided a reasoning model with large batches of conversations, while the other scanned a single conversation at a time. The latter allowed the model to think deeply about specific or subtle issues, whereas reasoning over a large input batch could spot broader patterns across conversations. In practice, we found substantial overlap in the proposals the two methods produced. Importantly, the automated loop is anchored to the ranker’s interpretation of the Spec, so results can change depending on the base model used.

Both the human-first and fully-automated loop are limited. Deciding whether conversation-level justifications support a more general Spec update is not an exact science. To get the ground truth would require going back to ask humans directly about the proposed change, as well as its downstream effects on model behavior.

In practice, moving from observed preferences to Spec changes meant going through OpenAI’s internal review processes. Deliberation during review weighed crowd preferences alongside various product or behavior changes already in flight, our safety policy, and risks that are not directly observable within our dataset, such as whether permissive rules could enable mass-scale manipulation on our platform, whether the model could reliably infer a user’s intent, and practical deployment constraints. Additionally, our study was designed to reveal individual preferences on specific prompts, without necessarily considering broader social factors (e.g., the possibility of scaled, targeted political persuasion) that factor into the current Model Spec policies. Therefore, more deliberation and expert input are needed before making more consequential changes.

Throughout these discussions, the research team manually iterated on the proposals, working closely with internal reviewers to ensure the Spec reflects the identified preferences.

Spec update proposals and validation are active research areas and are inherently part of a noisy sociotechnical process. We believe our methods bring our Spec into closer alignment with public preferences, but there is more work to be done. This represents a first iteration of an end-to-end collective alignment process, where future work will focus on scaling and improving each stage of elicitation, analysis, validation, and governance. We look forward to improving our alignment with the public as we scale into the future.

This work is an early-stage experiment, and it comes with clear limitations:

* Sample size and prompts: The prompts we pre-selected for input shaped the feedback, and the participant pool, while diverse, was small relative to the global population. Additionally, the English-reading inclusion criterion and other selection criteria introduced bias.

* Model Spec Ranker: Ultimately, this research required some determination of how the Spec applies to completions, and as the Spec is inherently underspecified, an unbiased or objective determination isn’t feasible. Accurately measuring performance of a ranker of this nature remains a key area of work. Future research should test how results vary across different base models and whether certain interpretations align more with specific perspectives.

* Spec interpretation: While we had access to the gold-standard intended interpretation during the internal review phase, that feedback does not scale to the rule-proposal phase, where hundreds of thousands of response rankings are required.

* Legitimacy concerns: A key goal of our work here is increasing legitimacy behind how we shape how models behave. An end-to-end update process with many automated parts may not offer enough legitimacy, since these automated parts can be harder for humans to interpret.

* Interpreting final proposals: We did not directly validate our final Model Spec proposals with participants, so our interpretation may not perfectly match their intent.

* Disagreement among the crowd: We know that disagreements among participants are important, and reveal value tradeoffs and cultural divides where no single default will satisfy everyone. This is something we are excited to explore in future research.

* Baseline comparisons: Baseline model completions were generated before our safe-completions work, so they do not reflect our most current model behavior.

* Pairwise preferences, reflection, and tradeoffs: Participants judged behavior differences in isolation, without weighing tradeoffs between principles (e.g., erotica without considering children’s safety or emotional reliance). Our methods also do not yet capture how principles interact in practice or shift over time. Deeper elicitation methods that take into account greater context and more time for deliberation are needed to make improvements here.

We explored how public preferences align with our Model Spec and found both broad agreement and areas of difference. This research complements our investments in personalization and making models more helpful within safety bounds. While today’s research is inherently limited by its sample size, we are excited to scale collective alignment to include more people and perspectives. In addition, although today’s work updates a single set of defaults, it also points toward future work that could define multiple defaults, each reflecting different perspectives and value systems.

When people use our models, defaults and boundaries matter. Collective alignment helps us understand how defaults and boundaries come across and whether they align with the diversity of human values. By openly sharing our methods, datasets, and the changes we inferred and considered, we aim to invite community input and contribute to broader efforts across the AI research ecosystem to build systems that reflect that diversity. The updates we adopted through this process will soon be implemented in the Model Spec.

_*Andorra, Argentina, Armenia, Aruba, Australia, Austria, Bangladesh, Brazil, China, Croatia, Czechia, Egypt, Faroe Islands, Finland, France, Germany, Ghana, Greece, Honduras, Hong Kong, Hungary, Indonesia, Islamic Republic of Iran, Ireland, Italy, Japan, Republic of Korea, Lebanon, Lithuania, Malawi, Malaysia, New Zealand, Poland, Portugal, Russian Federation, Slovakia, Sweden, Syrian Arab Republic, Trinidad and Tobago, Ukraine, Bolivarian Republic of Venezuela, Zimbabwe

Argentina, Australia, Canada, Germany, Greece, Italy, Republic of Korea, Malaysia, Portugal_
