The Next Input updates
Browse every published The Next Input update in a calm card overview with images, dates, and direct access to each article.
The Next Input update
OpenAI Five Finals
We’ll be holding our final live event for OpenAI Five at 11:30am PT on April 13.
The Next Input update
Implicit generation and generalization methods for energy-based models
We’ve made progress toward stable and scalable training of energy-based models (EBMs), resulting in better sample quality and generalization ability than existing models. Generation in EBMs spends extra compute to continually refine its answers; doing so yields samples competitive with GANs at low temperatures while retaining the mode-coverage guarantees of likelihood-based models. We hope these findings stimulate further research into this promising class of models.
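The iterative refinement described above is commonly implemented as Langevin dynamics: starting from an initial point, a sample is repeatedly nudged down the energy gradient with injected noise. A minimal sketch on a toy quadratic energy, assuming the `langevin_sample` helper, step size, and step count shown here are illustrative choices rather than the paper's actual settings:

```python
import numpy as np

def langevin_sample(grad_energy, x0, step=0.05, n_steps=500, rng=None):
    """Draw one sample from p(x) ∝ exp(-E(x)) via noisy gradient descent."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(n_steps):
        noise = rng.normal(size=x.shape)
        # Step downhill on the energy, with noise scaled so the chain's
        # stationary distribution matches the Boltzmann distribution of E.
        x = x - step * grad_energy(x) + np.sqrt(2.0 * step) * noise
    return x

# Toy energy E(x) = ||x||^2 / 2, whose Boltzmann distribution is a
# standard normal, so samples should have mean ~0 and std ~1.
grad = lambda x: x
samples = np.stack([
    langevin_sample(grad, np.full(2, 3.0), rng=np.random.default_rng(i))
    for i in range(200)
])
print(samples.mean(), samples.std())
```

Spending more steps (more compute) lets the chain refine its answer further, which is the trade-off the blurb alludes to.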
The Next Input update
OpenAI Scholars 2019: Meet our Scholars
Our class of eight scholars (out of 550 applicants) brings together collective expertise in literature, philosophy, cell biology, statistics, economics, quantum physics, and business innovation.
The Next Input update
OpenAI LP
We’ve created OpenAI LP, a new “capped-profit” company that allows us to rapidly increase our investments in compute and talent while including checks and balances to actualize our mission.
The Next Input update
Introducing Activation Atlases
We’ve created activation atlases (in collaboration with Google researchers), a new technique for visualizing what interactions between neurons can represent. As AI systems are deployed in increasingly sensitive contexts, having a better understanding of their internal decision-making processes will let us identify weaknesses and investigate failures.
The Next Input update
Neural MMO: A massively multiagent game environment
We’re releasing Neural MMO, a massively multiagent game environment for reinforcement learning agents. Our platform supports a large, variable number of agents within a persistent and open-ended task. The inclusion of many agents and species leads to better exploration, divergent niche formation, and greater overall competence.
The Next Input update
Spinning Up in Deep RL: Workshop review
On February 2, we held our first Spinning Up Workshop as part of our new education initiative at OpenAI.
The Next Input update
AI safety needs social scientists
We’ve written a paper arguing that long-term AI safety research needs social scientists to ensure AI alignment algorithms succeed when actual humans are involved. Properly aligning advanced AI systems with human values requires resolving many uncertainties related to the psychology of human rationality, emotion, and biases. The aim of this paper is to spark further collaboration between machine learning and social science researchers, and we plan to hire social scientists to work on this full time at OpenAI.
The Next Input update
Better language models and their implications
We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training.
The Next Input update
Computational limitations in robust classification and win-win results
The Next Input update
OpenAI Fellows Summer 2018: Final projects
Our first cohort of OpenAI Fellows has concluded, with each Fellow going from a machine learning beginner to core OpenAI contributor in the course of a 6-month apprenticeship.
The Next Input update
How AI training scales
Showing 841 to 852 of 994 updates.