OpenAI ARTICLE · 1 December 2025

Funding grants for new research into AI and mental health


Article details
AI maker: OpenAI · Type: Article · Published: 1 December 2025
Why it matters

Quick editorial signal · 4 min

Impact: A product update that may change what people can do with AI this week.

Audience: Creators · Level: Expert

  • Track this as an OpenAI update, not just a standalone headline.
  • Relevant for creators comparing tools for images, audio, video, or publishing.
  • Likely worth revisiting after people have used the release in practice.

Tags: model, apps, creative, safety


Introducing a new program to award up to $2 million to support independent safety and well-being research.


Update January 28, 2026:

Grant applications are now closed. We were excited and encouraged to receive more than 1,000 high-quality entries from both established and emerging researchers around the world, making this one of our largest calls for research grants to date. Each submission was carefully reviewed by our team of experts, and we have notified all applicants whose proposals are being funded. The depth and creativity of the proposals reflect the growing momentum in this field, and given the high volume of interest in the program, we are actively exploring ways to expand and build on this work in the future.

We’re announcing a call for applications to fund research proposals that explore the intersection of AI and mental health. As AI becomes more capable and ubiquitous, we know that people will increasingly use it in more personal areas of their lives.

We continue to strengthen how our models recognize and respond to signs of mental and emotional distress. Working closely with leading experts, we’ve trained our models to respond more appropriately during sensitive conversations and have shared detailed updates on how those improvements are performing. While we’ve made meaningful progress on our own models and interventions, this remains an emerging area of research across the industry.

As part of our broader safety investments, we are opening a call for research submissions to support independent researchers outside of OpenAI, helping to spark new ideas, deepen understanding, and accelerate innovation across the ecosystem. These grants are designed to support foundational work that strengthens both our own safety efforts and the wider field.

We believe that continuing to support independent research on AI and mental health will help improve our collective understanding of this emerging field and help fulfill our mission to ensure that AGI benefits all of humanity.

We're seeking research project proposals that deepen our understanding of the overlap of AI and mental health, both the potential risks and benefits, and help build a safer, more helpful AI ecosystem for everyone. We are particularly interested in interdisciplinary research that combines technical researchers with mental health experts, people with lived experience, or both.

Successful projects will produce clear deliverables (datasets, evals, rubrics) or generate actionable insights (such as synthesized views from people with lived experience, descriptions of how mental health symptoms manifest in a specific culture, or research on language and slang used to discuss mental health topics that classifiers may miss) that can inform OpenAI’s safety work and the AI and mental health community overall.

Submissions are open today through December 19, 2025. A panel of internal researchers and experts will review applications on a rolling basis and notify applicants whose proposals are selected on or before January 15, 2026. Follow this link to apply.

We present these potential topics of exploration as examples, but this is not meant to be a comprehensive list of all potential research directions. Successful proposals can pertain to topics that are not included on this list.

Potential areas of interest include:

* _How expressions of distress, delusion, or other mental health-related language vary across cultures and languages, and how these differences affect detection or interpretation by AI systems_

* _Perspectives from individuals with lived experience on what feels safe, supportive, or harmful when interacting with AI-powered chatbots_

* _How mental healthcare providers currently use AI tools, including what is effective, what falls short, and where safety risks emerge_

* _The potential of AI systems to promote healthy, pro-social behaviors and reduce harm_

* _The robustness of existing AI model safeguards to vernacular, slang, and under-represented linguistic patterns—particularly in low-resource languages_

* _How AI systems should adjust tone, style, and framing when responding to youth and adolescents to ensure that guidance feels age-appropriate, respectful, and accessible, with deliverables such as evaluation rubrics, style guidelines, or annotated examples of effective vs. ineffective phrasing across age groups_

* _How stigma associated with mental illness may surface in language model recommendations or interaction styles_

* _How AI systems interpret or respond to visual indicators related to body dysmorphia or eating disorders, including the creation of ethically collected, annotated multimodal datasets and evaluation tasks that capture common real-world patterns of distress_

* _How AI systems can provide compassionate, sensitive support to individuals experiencing grief, helping them process loss, maintain connections, and access coping resources, along with deliverables such as exemplar response patterns, tone/style guidelines, or evaluation rubrics for assessing supportive grief-related interactions_
