Reducing bias and improving safety in DALL·E 2
Today, we are implementing a new technique so that DALL·E generates images of people that more accurately reflect the diversity of the world’s population. This technique is applied at the system level when DALL·E is given a prompt describing a person that does not specify race or gender, like “firefighter.”
Based on our internal evaluation, users were 12x more likely to say that DALL·E images included people of diverse backgrounds after the technique was applied. We plan to improve this technique over time as we gather more data and feedback.
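OpenAI has not published the implementation of this mitigation, but the description above suggests a prompt-level approach: when a person-describing prompt leaves race and gender unspecified, sample identity attributes and append them before generation. The sketch below illustrates that idea; the attribute lists, keyword matching, and sampling are illustrative assumptions, not the real system (which would use trained classifiers and tuned sampling weights rather than keyword lists).

```python
import random

# Hypothetical attribute pools; the real system's categories and
# sampling weights are not public.
GENDER_TERMS = ["woman", "man"]
ETHNICITY_TERMS = ["Black", "East Asian", "Hispanic", "South Asian", "White"]

# Toy list of person-describing nouns; a production system would use a
# classifier to detect person prompts rather than a keyword list.
PERSON_NOUNS = {"ceo", "firefighter", "teacher", "software engineer", "nurse"}


def specifies_identity(prompt: str) -> bool:
    """Return True if the prompt already names a gender or ethnicity."""
    lowered = prompt.lower()
    terms = [t.lower() for t in GENDER_TERMS + ETHNICITY_TERMS]
    return any(t in lowered for t in terms)


def diversify_prompt(prompt: str, rng: random.Random) -> str:
    """Append sampled identity attributes to an underspecified person prompt."""
    lowered = prompt.lower()
    mentions_person = any(noun in lowered for noun in PERSON_NOUNS)
    if not mentions_person or specifies_identity(prompt):
        return prompt  # leave non-person or already-specific prompts unchanged
    attrs = f"{rng.choice(ETHNICITY_TERMS)} {rng.choice(GENDER_TERMS)}"
    return f"{prompt}, {attrs}"


rng = random.Random(0)
print(diversify_prompt("a photo of a firefighter", rng))
print(diversify_prompt("a photo of a woman firefighter", rng))  # unchanged
```

Because the rewrite happens before generation, the image model itself is untouched; only underspecified prompts are altered, which matches the behavior described for prompts like “firefighter.”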
[Interactive demo: generations for a selectable prompt (“A photo of a CEO”; other options include woman firefighter, teacher, and software engineer), shown before and after mitigation.]
In April, we started previewing the DALL·E 2 research to a limited number of people, which has allowed us to better understand the system’s capabilities and limitations and improve our safety systems.
During this preview phase, early users have flagged sensitive and biased images which have helped inform and evaluate this new mitigation.
We are continuing to research how AI systems like DALL·E might reflect biases in their training data, and different ways we can address them.
During the research preview we have taken other steps to improve our safety systems, including:
* Minimizing the risk of DALL·E being misused to create deceptive content by rejecting image uploads containing realistic faces and attempts to create the likeness of public figures, including celebrities and prominent political figures.
* Making our content filters more accurate so that they are more effective at blocking prompts and image uploads that violate our content policy while still allowing creative expression.
* Refining automated and human monitoring systems to guard against misuse.
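The upload-gating steps above can be read as a sequence of policy checks applied to classifier outputs. The sketch below is a minimal illustration of that gate logic, assuming three hypothetical ML signal scores (face detection, public-figure matching, content-policy classification); the names, thresholds, and ordering are my assumptions, not OpenAI's published design.

```python
from dataclasses import dataclass


@dataclass
class UploadSignals:
    """Hypothetical classifier outputs for an uploaded image, each in [0, 1]."""
    realistic_face_score: float    # likelihood the image contains a photorealistic face
    public_figure_score: float     # likelihood the image depicts a known public figure
    policy_violation_score: float  # content-policy classifier output


# Illustrative thresholds; real values would be tuned against labeled data.
FACE_THRESHOLD = 0.5
PUBLIC_FIGURE_THRESHOLD = 0.5
POLICY_THRESHOLD = 0.8


def gate_upload(signals: UploadSignals) -> tuple[bool, str]:
    """Apply the safety checks in order; return (allowed, reason)."""
    if signals.realistic_face_score >= FACE_THRESHOLD:
        return False, "upload contains a realistic face"
    if signals.public_figure_score >= PUBLIC_FIGURE_THRESHOLD:
        return False, "upload resembles a public figure"
    if signals.policy_violation_score >= POLICY_THRESHOLD:
        return False, "upload violates content policy"
    return True, "allowed"


print(gate_upload(UploadSignals(0.9, 0.1, 0.0)))  # rejected: realistic face
print(gate_upload(UploadSignals(0.1, 0.1, 0.1)))  # allowed
```

Ordering the checks from hard rules (faces, public figures) to threshold-tuned filters mirrors the trade-off the post describes: blocking deceptive or policy-violating content while keeping the false-positive rate low enough to allow creative expression.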
These improvements have helped us gain confidence in the ability to invite more users to experience DALL·E.
Expanding access is an important part of deploying AI systems responsibly, because it allows us to learn more about real-world use and continue to iterate on our safety systems.
* 2022
Author
OpenAI