Doppel’s AI defense system stops attacks before they spread
With GPT‑5 and reinforcement fine-tuning (RFT), Doppel cut analyst workloads by 80% and now mitigates threats in minutes instead of hours.
Company size: Startup
Region: North America
Industry: Technology
Products: API
Results

- 80% reduction in analyst workload
- 3x threat-handling capacity
A single impersonation site can launch, target thousands of users, and vanish in under an hour. That’s more than enough time for an attacker to do real damage. And with generative tools, they can spin up hundreds more just like it.
Doppel was built to defend organizations from deepfakes and online impersonations, but quickly realized AI meant threats could scale infinitely. Attackers no longer needed to handcraft scams; they could generate endless variants of phishing kits, spoofed domains, and impersonation accounts in seconds.
> “Damage from phishing attacks can happen within minutes as they spread across social media and messaging channels. The ability to generate infinite persuasion at almost no cost changed everything.”
—Rahul Madduluri, Co-founder and CTO, Doppel
Inside the rollout
To stay ahead, Doppel developed a new kind of social engineering defense system built on OpenAI GPT‑5 and o4-mini models. Doppel’s platform detects, classifies, and takes down threats autonomously, cutting analyst workloads by 80%, tripling threat-handling capacity, and reducing response times from hours to minutes.
Staying ahead of infinitely faster threats
Traditional digital risk protection relied on humans to manually review impersonation sites, phishing domains, and social media profiles and posts. Doppel saw that model breaking down as attackers began to automate, launching threats faster, and across more surfaces, than human teams could evaluate.
> “Our system processes a constant flood of signals to identify the real threats amongst the noise. Once a threat is detected, there is a very narrow window to act before the damage is done. Using AI to automate decision-making is one of the greatest unlocks for the company, allowing us to combat attacks at internet scale and speed.”
That speed is critical for Doppel’s customers, organizations that can’t afford to wait hours to confirm a threat. Doppel’s system classifies most threats automatically, using OpenAI models for reasoning and a structured feedback loop known as reinforcement fine-tuning (RFT) to improve the model over time. In RFT, human feedback is used as graded examples, helping models learn to make consistent, explainable decisions on their own.
Orchestrating LLM-driven threat detection
Doppel’s LLM-driven pipeline sits at the center of its detection stack. After signals are sourced and filtered, the system performs a series of targeted reasoning tasks: reasoning through potential threats, confirming intent, and driving classification decisions. Each stage is designed to balance speed, accuracy, and consistency, while keeping analysts focused on the edge cases that need human judgment.
Here’s how it works:
* Signal filtering and feature extraction: Doppel’s systems ingest millions of domains, URLs, and accounts daily. A combination of heuristics and OpenAI o4-mini filters out noise and extracts structured features to guide downstream model evaluations.
* Parallel threat confirmation: Each signal is passed through multiple GPT‑5 prompts purpose-built for different types of threat analysis. These prompts assess factors like impersonation risk, brand misuse, or social engineering patterns.
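The two stages above can be sketched as a small async pipeline. This is a minimal illustration, not Doppel's actual code: the heuristic rule, the check names, and the stubbed model call are placeholders standing in for the real heuristics and GPT‑5 prompts.

```python
import asyncio
from dataclasses import dataclass

@dataclass
class Signal:
    url: str
    features: dict

def passes_heuristics(url: str) -> bool:
    """Cheap filter stage: drop obvious noise before any model call."""
    return not url.endswith((".gov", ".edu"))  # illustrative rule only

async def assess(check: str, signal: Signal) -> bool:
    """Stand-in for one purpose-built model prompt; a real system
    would call the model here, we simulate it from the features."""
    await asyncio.sleep(0)
    return check in signal.features.get("flags", [])

async def confirm_threat(signal: Signal) -> dict:
    """Run several independent threat checks in parallel and combine them."""
    checks = ["impersonation", "brand_misuse", "social_engineering"]
    results = await asyncio.gather(*(assess(c, signal) for c in checks))
    return {
        "url": signal.url,
        "is_threat": any(results),
        "triggered": [c for c, r in zip(checks, results) if r],
    }

async def run(signals: list[Signal]) -> list[dict]:
    kept = [s for s in signals if passes_heuristics(s.url)]
    return await asyncio.gather(*(confirm_threat(s) for s in kept))

signals = [
    Signal("login-examp1e.com", {"flags": ["impersonation"]}),
    Signal("university.edu", {"flags": []}),
]
verdicts = asyncio.run(run(signals))
```

Running the checks concurrently rather than sequentially is what keeps per-signal latency flat as more threat categories are added.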
Training models through reinforcement fine-tuning (RFT)
Doppel had already seen meaningful gains from its original LLM-enhanced detection pipeline, but when it came to cases where the same threat might be judged differently depending on the analyst, consistency became the limiting factor.
> “One real benefit that came out of RFT is you’re making that model’s decisions more consistent.”
—Kiran Arimilli, Software Engineer, Doppel
To build that consistency, Doppel applied RFT using its own analyst data as the feedback source. Each decision to classify a domain as malicious, benign, or unclear became a graded example. Those labeled examples trained the model to replicate expert judgment, even on ambiguous edge cases.
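One graded example of the kind described might look like the following JSONL record. The field names and values here are assumptions for illustration, not Doppel's actual schema: the analyst's label becomes the reference answer the training process scores against.

```python
import json

# Hypothetical shape of one graded example built from an analyst decision.
example = {
    "input": {
        "domain": "secure-bank-login.net",
        "features": {"registered_days_ago": 2, "mimics_brand": "ExampleBank"},
    },
    # The analyst's classification serves as the reference the grader scores.
    "reference": {"label": "malicious", "confidence": "high"},
}
line = json.dumps(example)  # one line in a JSONL fine-tuning dataset
```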
Working closely with OpenAI’s applied engineering team, Doppel designed grader functions that evaluated not only accuracy but explanatory quality, rewarding models that reasoned clearly, not just correctly. By turning analyst feedback into structured training data, Doppel helped show how RFT could make automated detection more consistent and reliable.
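A grader that rewards both the correct label and a clear explanation could be sketched as a weighted score. The weights and the evidence-citation heuristic below are assumptions for the sketch, not Doppel's grader or the RFT grader API.

```python
def grade(response: dict, reference: dict) -> float:
    """Score a model response in [0, 1] on label accuracy plus
    explanation quality (illustrative weights: 70% / 30%)."""
    label_score = 1.0 if response.get("label") == reference["label"] else 0.0
    explanation = response.get("explanation", "")
    # Crude proxy for "reasoned clearly": cites concrete evidence fields.
    cited = sum(1 for field in ("domain age", "brand", "content")
                if field in explanation.lower())
    explanation_score = min(cited / 2, 1.0)
    return 0.7 * label_score + 0.3 * explanation_score

resp = {"label": "malicious",
        "explanation": "Domain age is 2 days and page clones the brand login."}
score = grade(resp, {"label": "malicious"})
```

The key design point the passage describes is that a correct label with a vague rationale scores lower than a correct label backed by concrete evidence, which pushes the model toward explainable decisions.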
Operationalizing trust through transparency
Hyperparameter tuning and iterative evals brought the model closer to human-level consistency. But for Doppel, completing the final mile of automation also meant making decisions immediately understandable.
Each automated takedown now includes an AI-generated justification explaining why a threat was removed, giving customers immediate insight into why action was taken—something that once required analyst intervention.
That visibility enhances trust, which is a critical factor for Doppel’s users. Seeing not just what action was taken, but why, gives teams the confidence to respond quickly and the context to explain those decisions internally or to stakeholders.
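One way such a justification might travel with a takedown record, sketched with hypothetical field names (this is not Doppel's schema):

```python
from dataclasses import dataclass, asdict

@dataclass
class TakedownRecord:
    target: str
    action: str
    justification: str  # model-generated explanation shown to the customer

record = TakedownRecord(
    target="examp1e-support.com",
    action="takedown_requested",
    justification=("Domain registered 3 days ago, clones the brand's login "
                   "page, and links to a credential-harvesting form."),
)
payload = asdict(record)  # ready to log or surface in a dashboard
```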
Results at a glance
* Cut analyst workloads by 80%
* Tripled threat-handling capacity
* Reduced response times from hours to minutes
What’s next
Having reached near-complete automation for phishing and impersonation domains, Doppel is now applying the same model-driven framework to other high-variance channels.
“Domains are probably the hardest channel we handle,” said Madduluri. “The signals are messy, content changes constantly, and threats evolve fast across several surfaces at once. If we can automate that end to end, we can do it for anything: social media, paid ads, you name it.”
The next milestones include scaling their RFT dataset by an order of magnitude, experimenting with new grading strategies, and using GPT‑5 for upstream feature extraction. These changes will allow Doppel to consolidate pipeline stages and reason over more complex threat indicators earlier in the process.
With each iteration, Doppel is building toward a system that defends what’s real across every surface where trust is under attack.