OpenAI Article · 14 December 2023

Superalignment Fast Grants

In partnership with Eric Schmidt, we are launching a $10M grants program to support technical research towards ensuring superhuman AI systems are aligned and safe:


* We are offering $100K–$2M grants for academic labs, nonprofits, and individual researchers.

* For graduate students, we are sponsoring a one-year $150K OpenAI Superalignment Fellowship: $75K in stipend and $75K in compute and research funding.

* No prior experience working on alignment is required; we are actively looking to support researchers who are excited to work on alignment for the first time.

* Our application process is simple, and we’ll get back to you within four weeks of applications closing.

* Apply by February 18.

With these grants, we are particularly interested in funding the following research directions:

* Weak-to-strong generalization: Humans will be weak supervisors relative to superhuman models. Can we understand and control how strong models generalize from weak supervision?

* Interpretability: How can we understand model internals? And can we use this to e.g. build an AI lie detector?

* Scalable oversight: How can we use AI systems to assist humans in evaluating the outputs of other AI systems on complex tasks?

* Many other research directions, including but not limited to: honesty, chain-of-thought faithfulness, adversarial robustness, evals and testbeds, and more.
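The first direction above can be illustrated with a toy sketch: a "weak supervisor" labels a simple threshold task with 25% label noise, and a "strong" student fits the decision threshold to those noisy labels. Everything here (the task, the noise rate, the threshold search) is a hypothetical stand-in for illustration, not part of the grants program or OpenAI's actual experimental setup.

```python
import random

random.seed(0)

# Hypothetical task: the true label is 1 iff x > 0.
xs = [random.uniform(-1, 1) for _ in range(2000)]
true_labels = [1 if x > 0 else 0 for x in xs]

# "Weak supervisor": labels correctly only 75% of the time (symmetric noise).
weak_labels = [y if random.random() < 0.75 else 1 - y for y in true_labels]

def fit_threshold(points, labels):
    """'Strong student': pick the threshold that best fits the given labels."""
    best_t, best_acc = 0.0, -1.0
    for t in sorted(points)[::50]:  # coarse search over candidate thresholds
        acc = sum((x > t) == bool(y) for x, y in zip(points, labels)) / len(points)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# The student only ever sees the weak (noisy) labels.
t = fit_threshold(xs, weak_labels)
strong_preds = [1 if x > t else 0 for x in xs]

weak_acc = sum(w == y for w, y in zip(weak_labels, true_labels)) / len(xs)
strong_acc = sum(p == y for p, y in zip(strong_preds, true_labels)) / len(xs)
print(f"weak supervisor accuracy: {weak_acc:.2f}")
print(f"strong student accuracy:  {strong_acc:.2f}")
```

Because the label noise is symmetric, the student's best fit to the noisy labels lands near the true threshold, so it ends up more accurate than the supervisor that trained it. The research question is whether anything like this holds for superhuman models supervised by humans.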

For more on the research directions, FAQs, and other details, see our Superalignment Fast Grants page.

Join us in this challenge

We think new researchers could make enormous contributions! This is a young field with many tractable research problems; outstanding contributions could not just help shape the field, but be critical for the future of AI. There has never been a better time to start working on alignment.

* Alignment

* 2023

Author

OpenAI

Contributors

Leopold Aschenbrenner, Jan Leike, Sherry Lachman, Aleksander Madry, Chris Clark, Collin Burns, Pavel Izmailov, Nat McAleese, William Saunders, Bobby Wu, Lisa Pan, Janine Korovesis, Ilya Sutskever, Elie Georges, Kayla Wood, Kendra Rimbach, Thomas Degry, Ruby Chen

