OpenAI Scholars 2020: Final projects
Alethea Power
Mentor: Christine Payne
Looking for Grammar in All The Right Places
I’m fascinated by neural network interpretability. Understanding how networks of various architectures represent information can help us build simpler and more efficient networks, as well as predict how the networks we’ve built will behave, and perhaps even give us some insight into how human beings think. Along these lines, I analyzed how GPT-2 represents English grammar, and found smaller sub-networks that seem to correspond to various grammatical structures. I will present my methodology and results.
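The general idea of isolating a grammar-related sub-network can be sketched with a toy ablation loop. This is an illustrative assumption, not the project's actual methodology: score each attention head by how much masking it hurts a grammar probe, and keep the heads whose removal costs the most.

```python
# Toy sketch (not the author's actual method): locate a grammatical
# "sub-network" by ablating attention heads one at a time and keeping
# those whose removal hurts a grammar probe the most.

def head_importance(evaluate, n_heads):
    """evaluate(masked) -> probe accuracy with the given heads disabled.
    Returns each head's accuracy drop when masked alone."""
    base = evaluate(set())
    return {h: base - evaluate({h}) for h in range(n_heads)}

def grammar_subnetwork(evaluate, n_heads, threshold=0.05):
    """Heads whose individual ablation costs more than `threshold`."""
    drops = head_importance(evaluate, n_heads)
    return sorted(h for h, d in drops.items() if d > threshold)

# Hypothetical stand-in for a real probe: heads 2 and 5 "carry"
# subject-verb agreement in this toy model.
def toy_probe(masked):
    acc = 0.9
    for h in (2, 5):
        if h in masked:
            acc -= 0.2
    return acc

print(grammar_subnetwork(toy_probe, n_heads=8))  # -> [2, 5]
```

A real analysis would run the probe over a corpus of grammatical minimal pairs and consider interactions between heads, not just single ablations.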
Previous role: B.S. in Applied Mathematics, MSc in Philosophy of Mind from Edinburgh, Software and Site Reliability Engineer at Facebook
Interesting learning: “My advice to someone starting in deep learning research is to take your time to understand insights from fundamental papers and remember that the field is still relatively new. There’s a lot of room for individuals to have an outsized impact.”
* Blog post
Andre Carerra
Mentor: Melanie Subbiah
Semantic Parsing English to GraphQL
My scholars program project is semantic parsing English-to-GraphQL. Given an English prompt such as “How many employees do we have?”, find a corresponding GraphQL query to return the information. The project involved creating a dataset, training models, and creating an interaction tool to see results.
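To make the task concrete, here is the kind of (English, GraphQL) pair such a dataset would contain, alongside a deliberately naive word-overlap "parser" as a baseline sketch. The field names and queries are hypothetical, and a trained model would generalize far beyond this retrieval trick.

```python
# Illustrative (English prompt -> GraphQL query) pairs; schema and
# field names are assumptions, not the project's actual dataset.
EXAMPLES = {
    "How many employees do we have?":
        "{ employees { totalCount } }",
    "List all employee names":
        "{ employees { nodes { name } } }",
}

def toy_parse(prompt):
    """Return the canned query whose prompt shares the most words.
    A crude retrieval baseline, not a learned semantic parser."""
    words = set(prompt.lower().split())

    def overlap(example_prompt):
        return len(words & set(example_prompt.lower().split()))

    best = max(EXAMPLES, key=overlap)
    return EXAMPLES[best]

print(toy_parse("How many employees do we have?"))
# -> { employees { totalCount } }
```

The trained model replaces this lookup with a sequence-to-sequence mapping, so it can compose queries for prompts it has never seen.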
Previous role: CTO at Droplii, Founder at Lambdo
Interesting learning: “I wanted to have a say in how AI is shaped—the Scholars program has been a great opportunity to learn and participate.”
Cathy Yeh
Mentor: Jerry Tworek
Long Term Credit Assignment with Temporal Reward Transport
Standard reinforcement learning algorithms struggle with poor sample efficiency in the presence of sparse rewards with long temporal delays between action and effect. To address the long term credit assignment problem, we use “temporal reward transport” (TRT) to augment the immediate rewards of significant state-action pairs with rewards from the distant future, using an attention mechanism to identify candidates for TRT. A series of gridworld experiments show clear improvements in learning when TRT is used in conjunction with a standard advantage actor critic algorithm.
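The core transport step can be sketched in a few lines. This is a hedged simplification of the idea, with fixed attention weights standing in for the learned significance scores the abstract describes: a slice of the distant sparse reward is added back to the earlier steps that attention flags as significant.

```python
# Simplified sketch of temporal reward transport (TRT): augment the
# immediate reward at significant steps with a share of a distant
# future reward. The attention weights here are given, not learned.

def transport_rewards(rewards, attention, alpha=1.0):
    """rewards: per-step immediate rewards for one episode;
    attention: per-step weights (assumed normalized) marking how much
    each step contributed to the final outcome.
    Adds alpha * attention[t] * final_reward to step t."""
    final = rewards[-1]
    return [r + alpha * a * final for r, a in zip(rewards, attention)]

# Sparse reward at episode end; attention picks out step 1 as pivotal.
rewards = [0.0, 0.0, 0.0, 10.0]
attention = [0.0, 0.8, 0.2, 0.0]
print(transport_rewards(rewards, attention))  # [0.0, 8.0, 2.0, 10.0]
```

The augmented rewards then feed the advantage estimates of a standard actor-critic update, shortening the effective delay between action and credit.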
Previous role: Data Scientist at Square and Driver
Interesting learning: “I appreciate that this program gave me the freedom to learn deeply and flex my creativity.”
Jorge Orbay
Mentor: Karl Cobbe
Quantifying Interpretability of Models Trained on Coinrun
This project’s purpose is to create a scalar that measures the interpretability of an A2C model trained on Procgen’s Coinrun. The scalar is generated using a combination of attribution on the model and masks of Coinrun’s assets. The scalar is used to test the validity of the diversity hypothesis.
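One assumed form such a scalar could take (not necessarily the project's exact definition) is the fraction of total attribution mass that falls inside the labeled asset masks: a model whose saliency concentrates on coins and obstacles scores near 1, one attending to background pixels scores near 0.

```python
# Assumed-form sketch: reduce an attribution map plus asset masks to
# one scalar = share of attribution mass landing on labeled assets.

def interpretability_scalar(attribution, asset_mask):
    """attribution: 2D grid of non-negative saliency values;
    asset_mask: same-shape grid of 0/1 flags for game assets."""
    total = sum(sum(row) for row in attribution)
    inside = sum(
        a * m
        for arow, mrow in zip(attribution, asset_mask)
        for a, m in zip(arow, mrow)
    )
    return inside / total if total else 0.0

attr = [[0.0, 0.5],
        [0.5, 1.0]]
mask = [[0, 1],
        [0, 1]]
print(interpretability_scalar(attr, mask))  # 1.5 / 2.0 = 0.75
```

With a measure like this in hand, one can compare scores across models trained on varied versus fixed level distributions to probe the diversity hypothesis.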
Previous role: CS Engineering at Columbia, Research at the Creative Machines Lab, Software Engineer at Autonomic
Interesting learning: “This program, and specifically my mentor, has fostered a self-confidence in me to dive into a field I don’t understand and break down problems until I can solve them. I’m hoping to take the self-confidence I’ve learned from this program to continue breaking down problems in and with AI.”
Kamal Ndousse
Mentor: Natasha Jaques
Social Learning in Independent Multi-Agent Reinforcement Learning
My project has explored the social transfer of expertise among completely independent RL agents trained in shared environments. The motivating question is whether novice agents can learn to mimic expert behavior to solve hard-exploration tasks that they couldn’t master in isolation. I’ll discuss my observations as well as the environments I developed to experiment with social skill transfer.
Previous role: Math and Physics at MIT, Algorithms Research Scientist at Fitbit, Independent Algorithms/ML consultant, ML Engineer at Coinbase
Interesting learning: “I joined the Scholars program in order to learn from the brilliant folks at OpenAI and to immerse myself in AI research. I’m grateful to have had the opportunity to explore state of the art research with the support of such talented researchers (special thanks to my mentor Natasha Jaques!)”
Kata Slama
Mentor: Johannes Otterbach
Towards Epileptic Seizure Prediction with Deep Networks
I have been working on a project to predict epileptic seizures using brain recordings. I framed it as an image classification problem based on the spectrogram representation of the brain data. My most successful model so far has been a ResNet18. In my post-Scholars life, I plan to continue working on this project, and make my way to interpretability of spectrogram classification networks.
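The preprocessing step described above, turning a 1-D brain recording into a 2-D spectrogram an image classifier like ResNet18 can consume, can be illustrated with a minimal (and deliberately naive) windowed DFT; any real pipeline would use an optimized FFT instead.

```python
import math

# Minimal sketch of spectrogram preprocessing: slice a 1-D signal into
# overlapping frames and compute per-bin power with a naive DFT.
# Output shape (n_frames, frame_len // 2) is image-like, so a CNN such
# as ResNet18 can classify it. Illustration only; use an FFT in practice.

def spectrogram(signal, frame_len=64, hop=32):
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    out = []
    for frame in frames:
        row = []
        for k in range(frame_len // 2):
            re = sum(x * math.cos(-2 * math.pi * k * n / frame_len)
                     for n, x in enumerate(frame))
            im = sum(x * math.sin(-2 * math.pi * k * n / frame_len)
                     for n, x in enumerate(frame))
            row.append(re * re + im * im)  # power in frequency bin k
        out.append(row)
    return out

# A pure tone with 8 cycles per 64-sample frame peaks in bin 8.
sig = [math.sin(2 * math.pi * 8 * t / 64) for t in range(256)]
spec = spectrogram(sig)
print(max(range(32), key=lambda k: spec[0][k]))  # -> 8
```

Framing the problem this way lets standard image-classification tooling, augmentation, pretrained backbones, and attribution methods, carry over to brain recordings.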
Previous role: PhD in Neuroscience at UC Berkeley, Behavioral Research at Harvard and Brown
Interesting learning: “I wanted to learn how to apply deep learning for solving scientific and real-world problems. The OpenAI Scholars program was this magical opportunity to get started by learning from the very best minds in the field.”
Pamela Mishkin
Mentor: Alec Radford
Universal Adversarial Perturbations and Language Models
Adversarial perturbations are well-understood for images but less so for language. My presentation will review the literature on how universal adversarial examples can inform understanding of generative models, replicating results generating universal adversarial triggers for GPT-2 and for attacking NLI models.
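A universal trigger search can be sketched as greedy coordinate ascent over token positions. This toy version uses an arbitrary scoring function and a three-word vocabulary; the published attacks instead score candidate swaps with gradients (HotFlip-style) over a large vocabulary.

```python
# Toy illustration of a universal adversarial trigger search: greedy
# coordinate ascent over token positions. Real attacks rank candidate
# tokens by gradient (HotFlip-style) rather than brute-force scoring.

def find_trigger(score, vocab, length=3, iters=5):
    """score(trigger) -> attack success averaged over many prompts
    (supplied by the caller). Greedily replaces one position at a
    time to maximize it."""
    trigger = [vocab[0]] * length
    for _ in range(iters):
        for pos in range(length):
            best = max(vocab,
                       key=lambda tok: score(trigger[:pos] + [tok]
                                             + trigger[pos + 1:]))
            trigger[pos] = best
    return trigger

# Hypothetical scorer that rewards matching a fixed target sequence.
TARGET = ["the", "the", "zzz"]
def toy_score(trig):
    return sum(a == b for a, b in zip(trig, TARGET))

print(find_trigger(toy_score, ["a", "the", "zzz"]))
# -> ['the', 'the', 'zzz']
```

Because the same trigger is optimized against many inputs at once, whatever it latches onto says something about the model's global failure modes rather than any single prompt.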
Previous role: Math and CS at Williams College, Research Analyst at the Federal Reserve Bank of NY, Herchel Smith Scholar at Cambridge, Product Manager at The Whistle, Researcher at Lumi Labs
Interesting learning: “This program strengthened my technical basis in machine learning and helped me understand how AI researchers understand policy implications of their work.”