Mapping Phishing Simulation to NIST CSF 2.0

NIST released CSF 2.0 in early 2024, and by 2026 it has firmly become the lingua franca of cybersecurity governance - used inside enterprises as the internal scoring rubric, by federal contractors as the umbrella above NIST SP 800-171 and 800-53, and by mid-market firms as the framework regulators ask about by name. The version 2.0 changes were meaningful: the new GOVERN function elevated cyber to a business-risk topic, and the Subcategory language was rewritten to be outcome-focused.

For security awareness leaders, the practical question is straightforward: where does the phishing simulation program fit, and what evidence do CSF assessors actually want to see? This post walks through each CSF function, identifies the Subcategories most directly supported by simulation evidence, and explains how to produce that evidence cleanly.

Why CSF treats phishing as a cross-function control

Phishing simulation is unusual among security controls because its outputs are relevant across all six CSF 2.0 functions. A single well-instrumented program produces metrics for risk reporting (GOVERN), maps where human-layer vulnerabilities concentrate (IDENTIFY), confirms human-layer protections are in place (PROTECT), generates detection signal when users report suspicious mail (DETECT), trains response procedures (RESPOND) and feeds lessons learned back into program design (RECOVER). Few controls have this many anchor points.

The official framework documentation is at nist.gov/cyberframework and is the canonical reference for Subcategory wording.

GOVERN: phishing as enterprise risk reporting

The GOVERN function - added in version 2.0 - is where most CSF maturity uplift now happens. Subcategories most relevant to phishing:

  • GV.OC-01 through GV.OC-05 - Organizational context, mission, stakeholder expectations. Phishing program metrics belong in the enterprise risk register, with management ownership documented.
  • GV.RM-01 through GV.RM-07 - Risk management strategy. Click-through rate and time-to-remediation should be defined as risk indicators with tolerance thresholds approved by leadership.
  • GV.RR-01 and GV.RR-02 - Accountability, roles, responsibilities and authorities. Document who owns the phishing program (typically the CISO or security awareness lead), who reviews results and who approves policy updates.
  • GV.OV-01 through GV.OV-03 - Oversight. Quarterly written reports to a board or risk committee covering program metrics, trend lines and material incidents.
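
The GV.RM tolerance-threshold idea reduces to a simple check against leadership-approved limits. A minimal sketch in Python - the metric names and threshold values here are illustrative assumptions, not CSF-mandated figures:

```python
# Sketch: evaluating phishing-program risk indicators against
# tolerance thresholds approved by the risk committee (GV.RM).
# All field names and numbers below are illustrative.

metrics = {
    "click_through_rate": 0.062,       # fraction of targets who clicked
    "median_remediation_hours": 18.5,  # click -> training completion
}

tolerances = {
    "click_through_rate": 0.08,        # escalate above 8%
    "median_remediation_hours": 48.0,  # escalate above two business days
}

def breaches(metrics, tolerances):
    """Return the indicators that exceed their approved tolerance."""
    return {k: v for k, v in metrics.items() if v > tolerances[k]}

print(breaches(metrics, tolerances))  # -> {} (both indicators within tolerance)
```

The output of a check like this is exactly what belongs in the quarterly risk packet: which indicators breached their approved tolerance, if any.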

Programs that miss the GOVERN function are typically running good simulations but reporting them only inside the security team. Adding a one-page quarterly summary to the enterprise risk packet is the lowest-effort, highest-impact change for CSF Tier uplift.

IDENTIFY: phishing in the asset and risk picture

  • ID.AM-07 - Inventories of personnel, including third parties, with access to organizational systems. The same roster that drives phishing simulation targeting feeds this Subcategory.
  • ID.RA-01, ID.RA-02 - Vulnerabilities identified and threat intelligence integrated. Phishing simulation results, broken down by department, identify where the human-layer vulnerabilities concentrate.
  • ID.IM-01, ID.IM-02 - Improvements identified from evaluations. Post-campaign reports, when read against the prior period, are direct evidence here.

PROTECT: the most direct mapping

PROTECT is where phishing training evidence most cleanly applies. The PR.AT family - Awareness and Training - was carried forward from CSF 1.1 and refined for 2.0:

  • PR.AT-01 - Personnel are provided with awareness and training so they possess the knowledge and skills to perform general tasks with cybersecurity risks in mind.
  • PR.AT-02 - Individuals in specialized roles are provided with awareness and training suited to those roles. CSF 2.0 folded 1.1's separate privileged-user and third-party subcategories into this outcome: higher-difficulty simulations targeted at admin and finance staff, plus inclusion of contractors and key vendor staff in campaigns, are the practical evidence.

Auto-assigned remediation training the moment a user clicks is the design pattern that turns PR.AT from a checkbox into measurable behavior change.
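
The pattern is event-driven: a click event triggers an immediate training assignment rather than a batch job at quarter end. A minimal sketch - the event shape, module catalogue and in-memory assignment list are hypothetical stand-ins, not any vendor's actual API:

```python
# Sketch of the auto-remediation pattern: when a click event arrives
# from the simulation platform, assign the matching training module
# immediately. All names below are illustrative assumptions.
from datetime import datetime, timezone

TRAINING_BY_INTENT = {  # hypothetical module catalogue
    "credential_harvest": "Recognizing fake login pages",
    "malware_delivery": "Handling unexpected attachments",
    "bec": "Verifying payment and wire requests",
}

assignments = []  # stand-in for a training platform's assignment queue

def on_click_event(event):
    """Assign remediation training the moment a user clicks a lure."""
    module = TRAINING_BY_INTENT.get(event["intent"], "General phishing awareness")
    assignments.append({
        "user": event["user"],
        "module": module,
        "assigned_at": datetime.now(timezone.utc).isoformat(),
    })

on_click_event({"user": "jdoe", "intent": "bec"})
```

Keying the module to the template's intent category is what makes the remediation targeted: a user caught by a wire-fraud lure gets payment-verification training, not a generic refresher.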

DETECT: phishing reporting as detection telemetry

This mapping is often overlooked. When users report a suspicious email through a one-click button or an Outlook add-in, that report is detection telemetry - and CSF acknowledges it:

  • DE.CM-03 - Personnel activity and technology usage are monitored to find potentially adverse events.
  • DE.AE-02, DE.AE-03 - Adverse events analyzed and aggregated. The volume of user-reported phishing relative to detected campaigns is a usable indicator.

An Outlook add-in for one-click reporting turns the workforce into a distributed detection layer, and the report-rate metric becomes a positive program indicator alongside click rate.
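
Both indicators come from the same campaign data. A minimal sketch of the arithmetic, with made-up campaign numbers:

```python
# Sketch: deriving click rate and report rate from one campaign's
# delivery counts. The report rate is the positive detection-layer
# indicator referenced under DE.CM-03; numbers are illustrative.

def campaign_rates(delivered, clicked, reported):
    """Return (click_rate, report_rate) as fractions of delivered mail."""
    return clicked / delivered, reported / delivered

click_rate, report_rate = campaign_rates(delivered=400, clicked=24, reported=180)
print(f"click {click_rate:.1%}, report {report_rate:.1%}")  # click 6.0%, report 45.0%
```

A falling click rate with a rising report rate is the trend pair assessors want to see: fewer people are fooled, and more of those who are not fooled actively feed the SOC.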

Outlook add-in and one-click reporting as detection

The DETECT mapping deserves a separate practical note. Many CSF programs treat detection as a SIEM-and-EDR concern and overlook the human-layer signal entirely. A workforce equipped with a one-click reporting tool generates a continuous stream of suspicious-mail reports - most are benign, but the ones that are not are surfaced at human speed, often before automated tooling correlates the campaign.

A reporting button embedded in the user's primary mail client removes the friction from that act, which directly affects how quickly suspicious activity reaches the SOC. Tier 3 programs typically have such a tool in place; Tier 4 programs additionally treat report rate as a first-class metric tracked alongside click rate.

RESPOND and RECOVER

Phishing simulation contributes to RESPOND by giving the response team a continuous stream of low-stakes drills. Subcategories supported include RS.MA-01 (response process executed during or after an incident) and RS.MI-01 (incidents contained). The triage workflow on a real phishing report is exactly the same as the workflow on a simulated one - running simulations is, in effect, continuous IR practice.

RECOVER mappings are more limited but real: CSF 2.0 moved the lessons-learned outcome from 1.1's RC.IM-01 into the IDENTIFY function's Improvement category (ID.IM), so post-campaign retrospectives feeding into the next simulation remain direct evidence of that improvement loop.

CSF Tier expectations: what each level looks like

  • Tier 1 (Partial). Annual training, no simulation, no measurement.
  • Tier 2 (Risk Informed). Periodic simulations, basic click-rate tracking, no formal remediation loop.
  • Tier 3 (Repeatable). Documented program with monthly or quarterly campaigns, automated remediation training assignment, written policy approved by management, regular reporting to leadership.
  • Tier 4 (Adaptive). Tier 3 plus multi-channel coverage (SMS phishing, voice phishing), AI-generated lure testing, integration with the broader detection stack and program metrics feeding the enterprise risk dashboard.

Most organizations targeting Tier 3 in 2026 should be running monthly campaigns spanning the five common phishing intents - credential harvest, malware delivery, BEC, link-based info theft, account spoof - at three difficulty levels (easy, regular, hard) with remediation training auto-assigned within minutes of a click.
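
That program structure is small enough to enumerate directly. A sketch of the resulting campaign matrix, using the intent and difficulty labels from the paragraph above:

```python
# Sketch: the Tier-3 campaign matrix - five common phishing intents
# crossed with three difficulty levels - as a planning aid for a
# monthly rotation. Labels follow the list in the text.
from itertools import product

INTENTS = ["credential harvest", "malware delivery", "bec",
           "link-based info theft", "account spoof"]
DIFFICULTIES = ["easy", "regular", "hard"]

campaign_matrix = list(product(INTENTS, DIFFICULTIES))
print(len(campaign_matrix))  # 15 intent/difficulty combinations
```

Rotating through the matrix across the year ensures no department sees only one intent at one difficulty, which keeps the per-category metrics meaningful.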

The five template intent categories mapped to CSF

A program running across the five common phishing intent categories produces evidence that maps cleanly across the framework. The categories and their CSF anchors:

  • Credential harvest (fake login pages). Maps to PR.AT-01 awareness, PR.AA access control reinforcement, and DE.CM-03 monitoring of personnel activity. The most common vector and the foundational template type.
  • Malware delivery (weaponized attachments). PR.AT plus PR.IR (technology infrastructure resilience) when paired with sandboxing/EDR signals. Tests both human recognition and technical layers.
  • Business email compromise (CEO/CFO/wire-fraud). Highest dollar impact category; touches PR.AT-02 privileged-user awareness for finance and executive teams. Should run at higher difficulty levels.
  • Link-based information theft (data harvest beyond credentials). Maps to PR.DS data security and PR.AT awareness simultaneously.
  • Account spoof (alerts impersonating internal services or vendors). Maps to PR.AT plus DE.AE-02 anomaly analysis when correlated with reporting telemetry.

Programs that report click rate by category, rather than as a single average, give CSF assessors much richer evidence on where the human-layer risk concentrates.
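
Producing the per-category breakdown is a small aggregation step over raw campaign records. A sketch with illustrative data:

```python
# Sketch: click rate broken down by template intent category rather
# than a single blended average. Records below are illustrative.
from collections import defaultdict

results = [
    {"category": "credential_harvest", "delivered": 100, "clicked": 9},
    {"category": "bec",                "delivered": 100, "clicked": 4},
    {"category": "malware_delivery",   "delivered": 100, "clicked": 7},
]

def click_rate_by_category(results):
    """Aggregate delivered/clicked counts per category, return rates."""
    delivered = defaultdict(int)
    clicked = defaultdict(int)
    for r in results:
        delivered[r["category"]] += r["delivered"]
        clicked[r["category"]] += r["clicked"]
    return {c: clicked[c] / delivered[c] for c in delivered}

print(click_rate_by_category(results))
```

In this made-up data the blended average (6.7%) hides the fact that credential-harvest lures are more than twice as effective as BEC lures - exactly the concentration of risk a category breakdown exposes.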

Threat-landscape currency: the AI and multi-channel update

CSF 2.0 places significant weight on whether the program is keeping pace with the threat landscape. Three template areas are now expected to appear in the program's content:

  • AI-generated lures. Large-language-model-generated phishing emails are largely indistinguishable from human-written ones at the surface level. Programs that include AI-generated templates demonstrate active threat-landscape tracking.
  • SMS phishing (smishing). Mobile-channel attacks targeting work numbers, including delivery-failure pretexts, MFA-prompt impersonation and HR/payroll lures.
  • Voice phishing (vishing). Phone-based social engineering, including the rising sub-category of deepfake voice attempts impersonating executives.

The CSF Tier 4 (Adaptive) language explicitly references programs that adjust to threat intelligence - multi-channel coverage is the practical evidence that adjustment is happening.

Producing CSF evidence without doubling the workload

The cleanest CSF artifact set for a phishing program is a single dated PDF per quarter containing: campaign list with dates, target counts, template categories and difficulty levels; click-through rate and report-rate trend charts; training completion statistics with median time-to-completion; coverage breakdown showing inclusion of contractors and privileged users; multi-channel sample reports (one SMS, one voice if used); and the written policy version reference.
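
Before rendering to PDF, that artifact set is easiest to manage as one structured record per quarter. A sketch of the shape - section names follow the list above, and every value is a placeholder, not a vendor schema:

```python
# Sketch: the quarterly evidence packet as a single structured record.
# Field names and values are illustrative placeholders.
import json

evidence_packet = {
    "quarter": "2026-Q1",
    "policy_version": "v3.2",  # written policy version reference
    "campaigns": [
        {"date": "2026-01-14", "targets": 412,
         "category": "credential harvest", "difficulty": "regular"},
    ],
    "trends": {"click_rate": [0.081, 0.074, 0.062],
               "report_rate": [0.31, 0.38, 0.45]},
    "training": {"completion_rate": 0.97, "median_hours_to_complete": 18.5},
    "coverage": {"contractors_included": True,
                 "privileged_users_included": True},
    "multichannel_samples": ["sms_report.pdf", "voice_report.pdf"],
}

# One dated record per quarter; render once, reuse across frameworks
print(json.dumps(evidence_packet, indent=2))
```

Keeping the packet as a single dated record is what makes cross-framework reuse practical: the same fields answer CSF, SOC 2 and ISO 27001 evidence requests.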

Bait & Phish exports this artifact natively, which is why customers running framework programs typically produce one set of evidence and reuse it across NIST CSF, NIST SP 800-50/171, SOC 2, ISO 27001 and HIPAA assessments. If you're starting your CSF journey, a 25-user free trial is the fastest way to see the report format. Pricing for production deployments and walk-through demos is on the pricing page.

For organizations using CSF as the basis for cyber insurance attestation, our companion guide on what cyber insurers ask about phishing training covers the overlap between CSF evidence and underwriting questionnaires - most carriers now reference CSF Subcategories directly in their renewal applications.

See also: Phishing training compliance comparison across SOC 2, HIPAA, PCI DSS, NIST CSF, ISO 27001, GDPR and NIS2 - side-by-side table of clauses, expected cadence and audit posture.

This post is informational and reflects publicly available CSF 2.0 guidance. CSF is voluntary; specific assessment rubrics vary by assessor and use case.

Related compliance guides