The Next Input updates
Browse every published The Next Input update in a calm card overview with images, dates, and direct access to each article.
The Next Input update
Scaling AI for everyone
The Next Input update
Pacific Northwest National Laboratory and OpenAI partner to accelerate federal permitting
The Next Input update
OpenAI Codex and Figma launch seamless code-to-design experience
pplx-embed: State-of-the-Art Embedding Models for Web-Scale Retrieval
Today we are releasing pplx-embed-v1 and pplx-embed-context-v1, two state-of-the-art text embedding models built for real-world, web-scale retrieval.
The Next Input update
Disrupting malicious uses of AI
Our latest report featuring case studies of how we’re detecting and preventing malicious uses of AI.
Capable, Open, and Safe: Combating AI Misuse
Today, Black Forest Labs’ FLUX models are among the most popular AI models for visual generation. We are excited to share early results that help validate our efforts to mitigate emerging risks.
The Next Input update
Arvind KC appointed Chief People Officer
Helping OpenAI grow and adapt as AI changes how work gets done.
The Next Input update
Introducing Frontier Alliances
The Next Input update
Why SWE-bench Verified no longer measures frontier coding capabilities
SWE-bench Verified is increasingly contaminated. We recommend SWE-bench Pro.
The Next Input update
Our First Proof submissions
We’re sharing our proof attempts for First Proof, a math challenge testing whether AI can produce checkable proofs on domain-specific problems.
The Next Input update
Advancing independent research on AI alignment
We’re committing $7.5M to The Alignment Project to fund independent research developing mitigations for safety and security risks from misaligned AI.
The Next Input update
Introducing EVMbench
Making smart contracts safer by evaluating AI agents’ ability to detect, patch, and exploit vulnerabilities in blockchain environments.
Showing 109 to 120 of 993 updates.