LevelUp’s DFIR challenges are AI-generated, validated end-to-end by exploit agents, and calibrated to the skill vector of every analyst on your team. Fresh scenarios every night. No writeups to Google. No two analysts on the same campaign see the same IOCs.
Static commercial platforms ship a fixed catalogue. Three months in, the answers are on forums, on GitHub, in Discord screenshots. Three months after that, your senior analysts are bored, your juniors are copying, and you have no signal on who actually learned anything.
A hand-authored DFIR scenario has a useful life of weeks, not years. Once a writeup exists, the exercise becomes a memorisation test. Vendors refresh on an annual cycle. The threat landscape does not.
A single content track cannot serve both a freshly onboarded Tier 1 and a Tier 3 who has shipped six breach retros. Without per-analyst adaptation, your seniors check out and your juniors stall.
Completion percentages are vanity. They tell you nothing about which techniques an analyst can detect unaided, which evidence sources they pivot through, or where the cohort’s coverage gap actually lives.
Designer drafts. Static Analysis lints. Validator walks the triage path end-to-end. Calibrator sets the difficulty band with confidence intervals. Deploy ships a hardened sandbox. Nothing reaches the library until every stage passes; a code sketch of the gate follows the five stages below.
Drafts a DFIR brief — ticket payload, evidence bundle, expected verdict, MITRE technique breadth — against the category and difficulty target.
Deterministic linter rejects missing IOCs, malformed evidence artefacts, and red-herring fields that leak the answer.
Builds the sandbox, serves the evidence, and walks the full triage path end-to-end. If the scenario is not solvable from the evidence alone, it is rejected.
Hybrid rule-based + LLM scoring. Difficulty is derived from IOC count, source diversity, anti-forensics, and false-positive discrimination — not from a guess.
Hardened container ships to the library with skill-vector tags, par time, and lineage back to the generator prompt. Revocable if a flag ever leaks.
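For the technically curious, here is a minimal sketch of how the five-stage gate could compose. Every function name, stub, and weight is illustrative rather than the REACTOR API: the real Validator is an agent that replays the investigation, and the real Deploy ships a container.

```python
# Minimal sketch of the five-stage gate. All names and weights are
# illustrative; the two trivial stage bodies stand in for agents.
from dataclasses import dataclass, field

@dataclass
class Scenario:
    brief: dict                     # ticket payload, evidence manifest, expected verdict
    iocs: list[str]
    difficulty: float | None = None
    tags: list[str] = field(default_factory=list)

def static_analysis(s: Scenario) -> bool:
    # Deterministic lint: reject empty IOC sets and answer-leaking fields.
    return bool(s.iocs) and not s.brief.get("leaks_answer", False)

def validator(s: Scenario) -> bool:
    # Stub: the real agent rebuilds the sandbox and walks the triage path.
    return True

def calibrator(s: Scenario) -> Scenario:
    # Difficulty from measurable features, not a guess (weights invented here).
    s.difficulty = (
        0.3 * len(s.iocs)
        + 0.3 * s.brief.get("source_diversity", 0)
        + 0.2 * s.brief.get("anti_forensics", 0)
        + 0.2 * s.brief.get("fp_discrimination", 0)
    )
    return s

def deploy(s: Scenario) -> None:
    # Stub: the real stage ships a hardened container with lineage tags.
    s.tags.append("published")

def run_gate(s: Scenario) -> bool:
    # Nothing reaches the library until every stage passes.
    if not static_analysis(s) or not validator(s):
        return False
    deploy(calibrator(s))
    return True
```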
DFIR-specific: the Designer generates an evidence bundle, not a quiz. Sysmon-like JSON, Zeek logs, netflow, DNS, proxy, endpoint process trees. Grading is platform-side — verdicts and IOCs are F1-scored against a hidden answer key, MITRE techniques are matched against ATT&CK, timeline questions are hash-compared one answer at a time.
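A minimal sketch of the two graders described above, assuming IOCs are submitted as sets of strings and timeline answers are checked against salted hashes. F1 and SHA-256 are standard; the normalisation rule is an assumption.

```python
import hashlib

def f1_score(submitted: set[str], answer_key: set[str]) -> float:
    # Set-based F1 against the hidden key: precision on what you claimed,
    # recall on what you missed.
    tp = len(submitted & answer_key)
    if tp == 0:
        return 0.0
    precision = tp / len(submitted)
    recall = tp / len(answer_key)
    return 2 * precision * recall / (precision + recall)

def timeline_match(answer: str, key_hash: str, salt: str) -> bool:
    # One answer at a time: normalise (assumed rule), salt, hash, compare.
    digest = hashlib.sha256((salt + answer.strip().lower()).encode()).hexdigest()
    return digest == key_hash
```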
Per-user variants via ScenarioInstance mean two analysts on the same campaign see different IOCs, different actors, different timestamps. The investigative skill transfers. The answer key does not.
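One way a ScenarioInstance could be derived: seed a PRNG with the (scenario, analyst) pair so reruns are reproducible for a given analyst while no two analysts share indicators. All field names and values below are invented for illustration.

```python
import random
import ipaddress

def scenario_instance(scenario_id: str, analyst_id: str) -> dict:
    # Deterministic per-pair seed: same analyst regenerates the same
    # instance; different analysts never share an answer key.
    rng = random.Random(f"{scenario_id}:{analyst_id}")
    return {
        "c2_ip": str(ipaddress.IPv4Address(rng.getrandbits(32))),
        "actor": rng.choice(["VELVET MANTIS", "IRON SPARROW", "PALE HERON"]),
        "first_access": f"2024-0{rng.randint(1, 9)}-{rng.randint(10, 28):02d}T0{rng.randint(0, 9)}:00:00Z",
    }
```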
Every solve updates the analyst’s skill vector across categories: triage, timeline reconstruction, malware analysis, threat hunting, on-chain tracing. The next challenge is queued in their growth zone, not pulled from a random grab-bag.
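A plausible shape for that update, sketched as an Elo-style step per category. This is an illustration, not the production model; the categories mirror the list above and the constants are invented.

```python
def update_skill(skill: dict[str, float], category: str,
                 difficulty: float, solved: bool, k: float = 32.0) -> None:
    # Elo-style step: underperforming against an easy challenge costs more
    # rating than failing a hard one.
    expected = 1.0 / (1.0 + 10 ** ((difficulty - skill[category]) / 400.0))
    skill[category] += k * ((1.0 if solved else 0.0) - expected)

def next_challenge(skill: dict[str, float], library: list[dict]) -> dict:
    # Growth zone: queue the challenge whose difficulty sits just above
    # the analyst's current rating in its category.
    return min(library,
               key=lambda c: abs(c["difficulty"] - (skill[c["category"]] + 50.0)))
```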
Par time and difficulty band are recalibrated nightly against the previous day’s solve telemetry. Drifted challenges are demoted or retired. Coverage gaps in the skill-vector grid get filled by the Designer on the same run.
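A sketch of what the nightly pass could look like, assuming telemetry arrives as per-challenge solve times and an observed solve rate. The drift thresholds are invented for illustration.

```python
import statistics

def recalibrate(challenge: dict, solve_times_min: list[float],
                solve_rate: float) -> dict:
    # Par time follows the median of yesterday's solves.
    if solve_times_min:
        challenge["par_time_min"] = statistics.median(solve_times_min)
    # Demote or retire when the band no longer matches reality.
    target = challenge.get("target_solve_rate", 0.5)
    if solve_rate > 0.95 or solve_rate < 0.05:
        challenge["status"] = "retired"    # trivial, or dead on arrival
    elif abs(solve_rate - target) > 0.25:
        challenge["status"] = "demoted"    # drifted out of its band
    return challenge
```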
Your analysts log into a tenant subdomain — your logo, your palette, your product name. SAML 2.0 against your IdP. Private challenge libraries that never bleed into the public catalogue or the community site.
Three analyst archetypes, three modes of engagement. A BFSI training programme usually runs all three in rotation.
Ticket triage against realistic alert payloads. Verdict, IOCs, and MITRE techniques are F1-scored against a platform-side answer key. Benign lookalikes are mixed in at ~30–40% prevalence, so analysts train against alert fatigue rather than being shielded from it; a queue-mixing sketch follows this list.
Multi-stage scenarios that flow through initial access, lateral movement, exfiltration, and impact. Evidence bundles spanning Sysmon-like JSON, Zeek logs, netflow, DNS, proxy, and endpoint process trees.
Question-bank mode. Each question forces a pivot through evidence. LOLBins, DoH beaconing, jittered C2, domain fronting — graded on the signal you recover, not the grep you run.
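The queue-mixing sketch referenced above: a plausible way to compose an Alert Queue run. The ~30–40% benign prevalence is the one number taken from the mode description; everything else is illustrative.

```python
import random

def build_queue(malicious: list[dict], benign: list[dict],
                size: int = 20, fp_rate: float = 0.35) -> list[dict]:
    # Seed the queue with benign lookalikes at the target prevalence,
    # then shuffle so discrimination, not position, is what gets scored.
    n_benign = round(size * fp_rate)
    queue = random.sample(benign, n_benign) + random.sample(malicious, size - n_benign)
    random.shuffle(queue)
    return queue
```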
We do not claim certifications we have not earned. We do ship the artefacts auditors ask for — training-hour attestations, session replay, keystroke-level telemetry, mapped to the frameworks your programme reports against.
Training-hour attestations, session replay for auditors, and keystroke-level telemetry attributable to the individual analyst.
Exercises aligned to the Identify / Protect / Detect / Respond / Recover functions, with per-category coverage reports; a minimal roll-up sketch follows this list.
Split-infra deployment available for Singapore-regulated buyers — REACTOR runs in our cloud, delivery runs inside your AWS or GCP tenant.
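The roll-up sketch referenced under the NIST CSF item, assuming each completed exercise carries CSF function tags. Field names are illustrative.

```python
from collections import Counter

CSF_FUNCTIONS = ["Identify", "Protect", "Detect", "Respond", "Recover"]

def coverage_report(completed: list[dict]) -> dict[str, int]:
    # Count completed exercises per CSF function, reporting zeros so
    # coverage gaps are visible rather than silently absent.
    counts = Counter(tag for ex in completed for tag in ex["csf_functions"])
    return {fn: counts.get(fn, 0) for fn in CSF_FUNCTIONS}
```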
A solutions engineer walks your team through a live REACTOR run, cohort setup against your IdP, and a pricing quote shaped to your seat count and deployment model. Thirty minutes, no slideware.