← Back to OpenAI updates ← Terug naar OpenAI-updates
OpenAI ARTICLE ARTIKEL 20 June 2024 20 juni 2024

Empowering defenders through our Cybersecurity Grant Program Empowering defenders through our Cybersecurity Grant Program

Title: Empowering defenders through our Cybersecurity Grant Program Title: Empowering defenders through our Cybersecurity Grant Program

Article details Artikelgegevens
AI maker AI-maker OpenAI Type Type Article Artikel Published Gepubliceerd 20 June 2024 20 juni 2024 Updates Updates Videos Video's View original article Bekijk origineel artikel
Why it matters Waarom dit telt

Quick editorial signal Snelle redactionele duiding

3 min
Impact Impact

Worth checking before choosing or changing a subscription. Handig om te checken voordat je een abonnement kiest of wijzigt.

Audience Voor wie Developers Developers
Level Niveau Expert Expert
  • Track this as a OpenAI update, not just a standalone headline. Bekijk dit als OpenAI-update, niet alleen als losse headline.
  • Check plan details before changing subscriptions or advising a team. Controleer plandetails voordat je abonnementen wijzigt of een team adviseert.
  • Likely worth revisiting after people have used the release in practice. Waarschijnlijk de moeite waard om opnieuw te bekijken zodra mensen het in praktijk gebruiken.
model apps video pricing

Since its inception, the program has supported a diverse array of projects. We are excited to highlight a few of them.

_Wagner Lab from UC Berkeley_

Professor David Wagner’s security research lab at UC Berkeley is pioneering techniques aimed at defending against prompt-injection attacks in large language models (LLMs). The group is working with OpenAI to enhance the trustworthiness of these models and protect them against cybersecurity threats.

_Coguard_

Albert Heinle, co-founder and CTO at Coguard⁠(opens in a new window), uses AI to reduce software misconfiguration, a common cause of security incidents. Software configuration is complex, which is compounded when connecting software to networks and clusters. Current software solutions rely on outdated rules-based policies. AI can help automate the detection of misconfigurations and keep them updated.

_Mithril Security_

Mithril has developed a proof-of-concept to fortify inference infrastructure for LLMs, including open-source tools to deploy AI models on GPUs with secure enclaves based on Trusted Platform Modules (TPMs). This project aims to demonstrate that data can be sent to AI providers without any data exposure, even to administrators. Their work is available publicly on GitHub⁠(opens in a new window), and as a whitepaper detailing their architecture⁠(opens in a new window).

_Gabriel Bernadett-Shapiro_

An individual grantee, Gabriel Bernadett-Shapiro, created the AI OSINT workshop and AI Security Starter Kit, offering technical training on the basics of LLMs and free tools for students, journalists, investigators and information-security professionals. In particular, Gabriel has emphasized affiliated training for international atrocity crime investigators and intelligence studies students at Johns Hopkins University to help ensure they have the best tools to leverage AI in both critical and challenging environments.

_Breuer Lab at Dartmouth_

Neural networks are vulnerable to attacks where adversaries reconstruct private training data by interacting with the model. Defending against these attacks typically requires costly tradeoffs in terms of model accuracy and training time. Professor Adam Breuer’s⁠(opens in a new window) Lab at Dartmouth is developing new defense techniques that prevent these attacks without compromising accuracy or efficiency.

_Security Lab Boston University (SeclaBU)_

Identifying and reasoning about code vulnerabilities is an important and active area of research. Ph.D candidate Saad Ullah, Professor Gianluca Stringhini from SeclaBU⁠(opens in a new window) and Professor Ayse Coskun from Peac Lab⁠(opens in a new window) at Boston University are working to improve the ability of LLMs to detect and fix vulnerabilities in code. This research could enable cyber defenders to catch and prevent code exploits before they are used maliciously.

_CY-PHY Security Lab from the University of Santa Cruz (UCSC)_

Professor Alvaro Cardenas⁠(opens in a new window)’ Research Group from UCSC is exploring how we can use foundation models to design agents that respond autonomously to computer network intruders, otherwise known as autonomous cyber defense agents. The project intends to compare the advantages and disadvantages of foundation models with their counterparts trained using reinforcement learning (RL) and, subsequently, how they can work together to improve network security and the triage of threat information.

_MIT Computer Science Artificial Intelligence Laboratory (MIT CSAIL)_

Stephen Moskal, Erik Hemberg and Una-May O’Reilly from MIT Computer Science Artificial Intelligence Laboratory⁠(opens in a new window) are exploring how to automate the decision process and perform actionable responses using prompt engineering approaches in a plan-act-report loop for red-teaming. Additionally, the group is exploring LLM-Agent capabilities in Capture-the-Flag (CTF) challenges - exercises aimed at discovering vulnerabilities in a controlled environment.

Help shape what we cover next Help bepalen wat we hierna volgen

Anonymous feedback, no frontend account needed. Anonieme feedback, zonder front-end account.

More from OpenAI Meer van OpenAI

All updates Alle updates

Gemini komt eraan