OpenAI Scholars 2018: Final projects
Interesting learning: “This summer I had a basic introduction to RL, and I learned how to use TensorFlow within the Unity game engine to create a multi-agent soccer game. I also learned a lot about language modeling (through my music generation project). I feel much more fluent in PyTorch and TensorFlow than I did at the start of the summer.”
Final project: Adapted current language modeling techniques to model classical music via two different approaches (notewise and chordwise) with parallel character- and word-level language models. This lets you generate solo piano music or piano and violin chamber music. You also have the option of training the generative model on a large music database or only on specific composers or styles. This project also incorporates a music critic network which scores whether a music sample is real or fake, and another which tries to determine who composed the sample.
* Blog post
* GitHub repo
What’s next: Joining the Fall 2018 class of OpenAI Fellows to continue to expand her ML research knowledge.
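The notewise approach described above treats a score as a sequence of note-event tokens and models it the way a character-level language model models text. Below is a minimal PyTorch sketch of that idea; all names (`NoteLM`, the `'p60'` token convention) are hypothetical and the project's actual architecture may differ.

```python
import torch
import torch.nn as nn

class NoteLM(nn.Module):
    """Token-level language model over a 'notewise' vocabulary, where
    each token is a note event (e.g. a hypothetical 'p60' for middle C)."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        x = self.embed(tokens)            # (batch, seq, embed_dim)
        out, state = self.lstm(x, state)  # (batch, seq, hidden_dim)
        return self.head(out), state      # logits over the next token

def generate(model, start_token, steps=64, temperature=1.0):
    """Sample a score by feeding generated tokens back into the model."""
    tokens = torch.tensor([[start_token]])
    state, out = None, [start_token]
    for _ in range(steps):
        logits, state = model(tokens, state)
        probs = torch.softmax(logits[:, -1] / temperature, dim=-1)
        tokens = torch.multinomial(probs, 1)
        out.append(tokens.item())
    return out
```

A word-level variant would simply swap the note-event vocabulary for multi-note "chordwise" tokens, which is where the two approaches in the project diverge.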
Interesting learning: “This summer with OpenAI was truly amazing. The Scholars community has been truly supportive. Everyone was incredibly vested in our success. One thing I learned is how important pictures are to a deeper understanding. My mentor encouraged me to draw out and explain the math and processes behind what I was doing. I found that I was most helped by online blog posts drawing out what was going on. I also developed a much deeper appreciation for data generation and feature engineering. The most amazing thing I got out of this experience was the confidence to know that I could actually work in this space.”
Final project: Built a model with good performance on the SemEval STS Task, Track 4; explored different iterations of what the model would look like, from LSTMs to the Transformer architecture to RNNs.
What’s next: Working as an engineer at Microsoft for a year before pursuing a dual PhD at the University of Illinois in Computer Science and Linguistics in 2019.
Interesting learning: “This summer I gained a concrete understanding of various neural network architectures, especially those used in creative practices, such as recurrent neural networks, generative adversarial networks, variational autoencoders, and their iterations. I also learned how to implement neural networks in TensorFlow and PyTorch.”
Final project: An experiment in generating emotional landscapes with a GAN, a conditional VAE, and a multi-scale VAE to varying degrees of success. This was a two-week project after previously working on a dataset of stand-up comedy punchlines and realizing it would take longer than the time available.
Emotion class represented in each row (from top to bottom): joy, anticipation, fear, sadness, surprise, trust, disgust, anger, none
What’s next: Teaching a graduate-level course on generative music this fall at New York University, then working as a machine learning engineer and continuing to work with generative music and emotion datasets.
Interesting learning: “I’m very appreciative of my three months at OpenAI. For the first two months, I was able to learn about the latest reinforcement learning algorithms and OpenAI’s toolkits. With my final project, I was able to learn about training with multiple outputs, modifying a CycleGAN model with additional loss terms, and increasing my experience with TensorFlow and Keras. In the context of machine learning, I learned patience, persistence, and to spend some time just thinking. These qualities helped me to work through diverse projects each week. Additionally, I found that writing about my experiences helped to clarify my thinking on a project and come up with some interesting solutions.”
Final project: An Art Composition Attributes network with a pretrained ResNet50 network fine-tuned on eight different art composition attributes, used within a CycleGAN network to generate art compositions by setting target values for each art attribute.
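The core idea is that a separate attribute network scores a generated image on the eight composition attributes, and the generator gets an extra loss term pushing those scores toward user-chosen targets. A minimal PyTorch sketch of that loss term follows; the tiny CNN here stands in for the project's fine-tuned ResNet50, and all names are hypothetical.

```python
import torch
import torch.nn as nn

class AttributeNet(nn.Module):
    """Stand-in for the fine-tuned ResNet50: a small CNN that predicts
    eight composition-attribute scores per image."""
    def __init__(self, n_attributes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, n_attributes)

    def forward(self, x):
        return self.head(self.features(x))

def attribute_loss(attr_net, generated, targets):
    """Extra generator loss term added to the usual CycleGAN objectives:
    push the generated image's predicted attributes toward target values."""
    return nn.functional.mse_loss(attr_net(generated), targets)
```

In training, this term would be weighted and summed with the standard adversarial and cycle-consistency losses, so the generator trades off realism against hitting the requested composition attributes.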
What’s next: Begin working in Machine Learning as an engineer, based in New Mexico; continue to develop final project and incorporate it into her art practice.
Interesting learning: “I experienced the joy and excitement that comes with making progress and also the self-doubt and despair that attends failure. It’s been quite the intellectual journey and one I’m glad I didn’t have to take on alone. I know I’ve still got a long way to go, but I’m proud of the progress I’ve made. When I started this program, I’d not trained a single neural network. I’d read and written about them, but I had zero hands-on experience. Machine learning is not an easy topic, but now I feel less like I’m trying to scale Mount Everest and more like I’m climbing up a ladder. Knowing the path forward makes the way shorter.”
Final project: Built a network that can learn the rules of how objects move in space in the same manner that humans learn those rules—via observation and without explicit definition of concepts like momentum, force, or friction.
Left: target animation. Right: predicted frames generated by network.
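Learning dynamics "by observation" here amounts to next-frame prediction: the network sees a few recent frames and must output the following one, so regularities like momentum emerge from pixels alone. A minimal PyTorch sketch, with a hypothetical architecture that may differ from the project's:

```python
import torch
import torch.nn as nn

class FramePredictor(nn.Module):
    """Predict the next frame from a stack of recent grayscale frames,
    so dynamics (momentum, collisions) are learned purely from pixels."""
    def __init__(self, context=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(context, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, frames):
        # frames: (batch, context, H, W) -> next frame: (batch, 1, H, W)
        return self.net(frames)
```

Training would minimize a per-pixel loss against the observed next frame; longer rollouts like the predicted animation above come from feeding each prediction back in as the newest context frame.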
What’s next: Work as an ML engineer after spending a few months as a front-end developer while learning and practicing ML skills a little more.
Interesting learning: “Before the program, I had seen TensorFlow code in tutorials but I had not worked with it myself. During the program, I got to explore and familiarize myself with the code in the TF libraries and to modify the libraries to get them to work on new data. I’m more aware of the field now. It’s easier to read and understand papers. Before the program, reading deep learning papers took a long time because every few sentences I would encounter a term or concept I had never heard before. Now when I read a paper, I understand the terminology as well as why the authors might have made certain choices. I also learned how others in this industry build deep neural nets—common practices, prominent architectures and popular datasets, tools, and the like. Knowing this helps me know where to start on new projects.”
Final project: Exploring the use of semantic trees in LSTMs as a way to better represent the relationships between entities in a sentence.
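One established way to put tree structure into an LSTM is the Child-Sum Tree-LSTM (Tai et al., 2015), which composes a node's representation from its children following the parse tree rather than left-to-right word order. The sketch below shows one such cell; it is an illustration of the general technique, not necessarily the variant this project used.

```python
import torch
import torch.nn as nn

class ChildSumTreeLSTMCell(nn.Module):
    """Child-Sum Tree-LSTM cell: a node's hidden state is computed from
    its input and the sum of its children's hidden states, with a
    separate forget gate per child."""
    def __init__(self, in_dim, mem_dim):
        super().__init__()
        self.iou = nn.Linear(in_dim + mem_dim, 3 * mem_dim)
        self.fx = nn.Linear(in_dim, mem_dim)
        self.fh = nn.Linear(mem_dim, mem_dim)

    def forward(self, x, child_h, child_c):
        # x: (in_dim,); child_h, child_c: (n_children, mem_dim)
        h_sum = child_h.sum(dim=0)
        i, o, u = self.iou(torch.cat([x, h_sum])).chunk(3)
        i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
        f = torch.sigmoid(self.fx(x) + self.fh(child_h))  # one gate per child
        c = i * u + (f * child_c).sum(dim=0)
        h = o * torch.tanh(c)
        return h, c
```

A sentence encoder applies this cell bottom-up over the parse tree, so words that are syntactically related combine directly even when they are far apart in the surface string.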
What’s next: Apply this machine learning expertise as an engineering consultant to help small to mid-size businesses take advantage of ML; host a short seminar on deep learning in Harare, Zimbabwe to help inform engineers in his home country.
Interesting learning: “I am grateful for the time and space that this program provided me with to self-study at the intersection of deep learning and NLP. I found and refined my blogging voice, including more visual storytelling than ever before. I am much more comfortable working with deep learning frameworks, especially PyTorch and Keras. And I now have experience building an end-to-end, deep learning product. I couldn’t have done it without this support system!”
Final project: @deephypebot is a music commentary generator. It is essentially a language model, trained on past human music writing from the web and conditioned on attributes of the referenced music. There is an additional training step that attempts to encourage a certain type of descriptive, almost flowery writing commonly found in this genre. The goal is to teach this language model to generate consistently good and entertaining new writing about songs.
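Conditioning a language model on song attributes can be as simple as concatenating an attribute vector to every token embedding before the recurrent layer, so the generated commentary depends on the music being described. A minimal PyTorch sketch; the attribute set (genre, tempo, mood scores, etc.) and all names here are hypothetical, and @deephypebot's actual conditioning mechanism may differ.

```python
import torch
import torch.nn as nn

class ConditionedCommentLM(nn.Module):
    """Language model conditioned on a fixed-size vector of song
    attributes, concatenated to each token embedding."""
    def __init__(self, vocab_size, n_attrs, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim + n_attrs, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, attrs):
        # tokens: (batch, seq); attrs: (batch, n_attrs)
        x = self.embed(tokens)
        a = attrs.unsqueeze(1).expand(-1, x.size(1), -1)  # repeat along seq
        out, _ = self.rnn(torch.cat([x, a], dim=-1))
        return self.head(out)  # logits over the next token
```

The "additional training step" mentioned above would then fine-tune this model on a curated subset of the flowery writing it should imitate.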
What’s next: Experiment more broadly with creative applications in ML, both new and existing, and start looking and applying for ML engineer roles!
Interesting learning: “During this program, I significantly increased my expertise in AI. I became more comfortable building models from scratch, working with TensorFlow, and built a general understanding of reinforcement learning. I’ve built a deeper understanding of NLP through a lot of low-level implementations of common algorithms like word2vec, simple RNNs, LSTMs and various preprocessing approaches to text. In my final project, I’m combining NLP with RL where an agent achieves target cells in the Gridworld environment upon commands. The project will be modified to be more language conditioned as a step towards grounded language learning.”
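The command-following setup described above can be made concrete with a tiny dependency-free sketch: a command names a target cell, and the agent moves toward it step by step. Here a hand-coded greedy step and a toy command parser stand in for the learned RL policy and language encoder; everything below is illustrative, not the project's implementation.

```python
def parse_command(command):
    """Toy command grammar, e.g. 'go to 3 1' -> (3, 1).
    A learned language encoder would replace this in the real project."""
    _, _, x, y = command.split()
    return int(x), int(y)

def step_toward(pos, target):
    """Move one cell toward the target (stand-in for the RL policy)."""
    (x, y), (tx, ty) = pos, target
    if x != tx:
        return (x + (1 if tx > x else -1), y)
    if y != ty:
        return (x, y + (1 if ty > y else -1))
    return pos

def run_episode(start, command, max_steps=20):
    """Roll out one episode: parse the command, step until the target."""
    target = parse_command(command)
    pos, trajectory = start, [start]
    for _ in range(max_steps):
        if pos == target:
            break
        pos = step_toward(pos, target)
        trajectory.append(pos)
    return trajectory
```

In the RL version, reaching the commanded cell yields reward, so the policy and the language encoder are trained jointly; that coupling is the "grounded language learning" step the quote refers to.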
What’s next: Sophia will continue to pursue her education in machine learning with a view towards a career in AI research.