The Next Input updates: 993 published updates

The Next Input updates

Browse every published The Next Input update in a calm card overview with images, dates, and direct access to each article.

The Next Input update

The Next Input
27 Feb 2026

Scaling AI for everyone

Open article →

The Next Input update

The Next Input
26 Feb 2026

Pacific Northwest National Laboratory and OpenAI partner to accelerate federal permitting

Open article →

The Next Input update

The Next Input
26 Feb 2026

OpenAI Codex and Figma launch seamless code-to-design experience

Open article →
The Next Input
26 Feb 2026

pplx-embed: State-of-the-Art Embedding Models for Web-Scale Retrieval

Today we are releasing pplx-embed-v1 and pplx-embed-context-v1, two state-of-the-art text embedding models built for real-world, web-scale retrieval.

Open article →

The Next Input update

The Next Input
25 Feb 2026

Disrupting malicious uses of AI

Our latest report featuring case studies of how we’re detecting and preventing malicious uses of AI.

Open article →
The Next Input
24 Feb 2026

Capable, Open, and Safe: Combating AI Misuse

Today, Black Forest Labs’ FLUX models are among the most popular AI models for visual generation. We are excited to share early results that help validate our efforts to mitigate emerging risks.

Open article →

The Next Input update

The Next Input
24 Feb 2026

Arvind KC appointed Chief People Officer

Helping OpenAI grow and adapt as AI changes how work gets done.

Open article →

The Next Input update

The Next Input
23 Feb 2026

Introducing Frontier Alliances

Open article →

The Next Input update

The Next Input
23 Feb 2026

Why SWE-bench Verified no longer measures frontier coding capabilities

SWE-bench Verified is increasingly contaminated. We recommend SWE-bench Pro.

Open article →

The Next Input update

The Next Input
20 Feb 2026

Our First Proof submissions

We’re sharing our proof attempts for First Proof, a math challenge testing if AI can produce checkable proofs on domain-specific problems.

Open article →

The Next Input update

The Next Input
19 Feb 2026

Advancing independent research on AI alignment

We’re committing $7.5M to The Alignment Project to fund independent research developing mitigations to safety and security risks from misaligned AI.

Open article →

The Next Input update

The Next Input
18 Feb 2026

Introducing EVMbench

Making smart contracts safer by evaluating AI agents’ ability to detect, patch, and exploit vulnerabilities in blockchain environments.

Open article →

Showing 109 to 120 of 993 updates.
