Phishing Click-Through Rate Benchmarks by Industry (2026)
"What's a good phishing click rate?" is the most frequently asked question in this category and the most poorly answered one. The number you'll find in most vendor benchmark reports is suspiciously specific and methodologically opaque. Different vendors report against different denominators, different template mixes, different campaign cadences and different cohort definitions, then headline a single industry-average number as if it were apples-to-apples. It isn't.
This post does the opposite. It gives you defensible ranges grounded in the broad pattern that holds across Verizon DBIR-class data, broker reports and a decade of practitioner experience - without inventing precise figures from research that doesn't exist. Use these ranges as the floor for board conversations, audit packets and renewal applications. Anyone quoting tighter precision is selling something.
The cross-industry pattern
Three claims hold up across virtually every credible source:
- First-time programs report click rates in the 25-35% range. The variation inside that range is mostly driven by template difficulty mix and target population - a mix weighted toward easy templates pushes toward the bottom of the band, while harder templates push toward the top of it or beyond.
- Programs in their second year typically trend into the 10-15% range. The drop comes from a combination of repeat-clicker remediation, organizational awareness and template recognition.
- Mature programs (3+ years, monthly cadence, automated remediation) typically run below 5%. The remaining clickers are concentrated in a small repeat-clicker cohort that warrants targeted intervention.
These ranges are broadly consistent with what Forrester and Gartner research on the category has reported, with what Marsh and Aon broker materials cite when discussing security awareness as a premium-reduction factor, and with the pattern reflected in Verizon DBIR-class data on phishing's persistence as an initial-access vector.
Industry-by-industry expectations
Healthcare: Often runs above the cross-industry average, particularly in clinical and front-line operational roles. The drivers are well-documented: distributed workforce, high volume of legitimate external email (insurance, vendors, patient communication), 24/7 operating cadence that makes scheduled training compete with patient care. Healthcare programs that push below 10% are doing genuinely impressive work.
Financial services: Typically reports lower click rates because programs in this sector have been running longest. Banking and insurance carriers themselves operate mature programs; that maturity shows up in the numbers. A new bank program will still hit the 25-35% first-time band - institutional history doesn't help individual users - but second-year and beyond cohorts trend below the cross-industry average.
Technology and software: Wide variance. Engineering-heavy organizations sometimes underperform expectations because of high context-switching and high legitimate external email volume; sales-heavy organizations underperform because of high inbound prospecting and link-clicking habits. Technology firms that take the program seriously achieve some of the best numbers in the field; ones that don't, regress.
Manufacturing and OT-adjacent: Above-average click rates persist in this segment, driven by hourly workforces, lower historical investment in awareness training and shared-device environments. Programs here often show the largest absolute year-over-year improvements when they invest seriously, because they're starting from a higher baseline.
Education (K-12 and higher ed): Among the highest first-time click rates, often in the 35-45% range. Drivers include large transient populations (students, faculty, staff), distributed governance and high legitimate external communication. School district programs consistently report the steepest year-one improvement curves once a continuous program is in place.
Government (state and local): Wide variance by jurisdiction size. Larger agencies with dedicated security teams perform near financial-services norms; smaller jurisdictions often resemble education. Federal cohorts are not comparable to state and local because of separate training mandates.
Hospitality, retail and food service: Higher click rates persist due to hourly workforces, shared-device environments and high turnover. The fix here is less about per-user adaptation and more about clean onboarding cohorts; getting new hires through a baseline simulation in their first 30 days produces measurable program improvement.
Professional services (legal, accounting, consulting): Performance correlates strongly with firm size. Larger firms with dedicated IT and security functions outperform; smaller firms (under 50 employees) follow the SMB pattern. Either way, the trend line is what brokers and auditors care about.
Why the absolute number is the wrong question
Cyber insurance underwriters, SOC 2 auditors and audit committees have all converged on the same conclusion: the absolute click rate is much less informative than the trend. A program that goes from 32% to 14% in 12 months is producing stronger evidence that it works than a program that started at 8% and stayed at 8%. The first one is teaching its workforce; the second one might be doing nothing at all.
That's why executive reporting should always include four-quarter trend data, why cyber insurance applications ask for 12-month history rather than a current-period snapshot, and why "what's our number" is the wrong question for a board to ask. The right question is "where are we trending and why."
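The four-quarter trend framing above can be sketched in code. This is a minimal illustration, assuming per-quarter campaign totals are available; the data, field names and report format are all hypothetical, not drawn from any vendor's schema:

```python
from dataclasses import dataclass


@dataclass
class QuarterResult:
    quarter: str   # e.g. "2025-Q1"
    targeted: int  # users who received at least one simulation
    clicked: int   # users who clicked at least once


def click_rate(q: QuarterResult) -> float:
    """Click-through rate as a fraction of targeted users."""
    return q.clicked / q.targeted


def trend_report(history: list[QuarterResult]) -> list[str]:
    """Render each quarter's rate plus the change versus the prior quarter."""
    lines = []
    prev = None
    for q in history:
        rate = click_rate(q)
        delta = "" if prev is None else f" ({(rate - prev) * 100:+.1f} pts)"
        lines.append(f"{q.quarter}: {rate:.1%}{delta}")
        prev = rate
    return lines


# Illustrative history matching the first-year improvement curve described above.
history = [
    QuarterResult("2025-Q1", 1000, 320),
    QuarterResult("2025-Q2", 1000, 240),
    QuarterResult("2025-Q3", 1000, 180),
    QuarterResult("2025-Q4", 1000, 140),
]
for line in trend_report(history):
    print(line)
```

A report built this way answers "where are we trending" directly: each line carries both the rate and its quarter-over-quarter movement, so a single-period number never appears on its own.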
Difficulty mix changes everything
A 4% click rate on easy templates is meaningfully different from a 4% click rate on hard templates. Mature programs report click rate per difficulty tier - easy, regular and hard - because the same headline number can hide a fragile workforce or reveal a resilient one.
A reasonable rule of thumb for established programs:
- Easy templates: Should be running below 3% in a mature program. If easy templates are still catching 8% of users, the basics aren't sticking.
- Regular templates: 3-7% is the band most mature programs settle into.
- Hard templates: 8-15% is normal in mature programs because hard templates are designed to mimic real targeted attacks. A 2% rate on hard templates probably means the templates aren't actually hard.
Reporting all three tiers together gives auditors and brokers a much more useful picture than a single aggregate number. Bait & Phish's template library spans all three tiers across five lure categories, so this difficulty-stratified reporting is native rather than reconstructed.
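As a sketch of what tier-stratified reporting might look like in practice, assuming each simulated send is recorded with a difficulty tier and a clicked flag (the band thresholds encode the rule of thumb above; the data structure and function names are illustrative):

```python
from collections import defaultdict

# Rule-of-thumb bands for a mature program, as (low, high) fractions,
# mirroring the easy / regular / hard guidance above.
MATURE_BANDS = {
    "easy": (0.0, 0.03),
    "regular": (0.03, 0.07),
    "hard": (0.08, 0.15),
}


def tier_rates(sends: list[dict]) -> dict[str, float]:
    """Click rate per difficulty tier from per-send records."""
    totals = defaultdict(lambda: [0, 0])  # tier -> [clicks, sends]
    for s in sends:
        totals[s["tier"]][0] += 1 if s["clicked"] else 0
        totals[s["tier"]][1] += 1
    return {tier: clicks / n for tier, (clicks, n) in totals.items()}


def flag_anomalies(rates: dict[str, float]) -> list[str]:
    """Flag tiers outside the mature-program band. A suspiciously low
    rate on hard templates is flagged too - it usually means the
    templates are not actually hard."""
    notes = []
    for tier, rate in rates.items():
        low, high = MATURE_BANDS[tier]
        if rate > high:
            notes.append(f"{tier}: {rate:.1%} above band")
        elif rate < low:
            notes.append(f"{tier}: {rate:.1%} below band (check template calibration)")
    return notes
```

Note that the check is two-sided: a hard-template rate below the band is treated as a calibration problem rather than a success, which is exactly the "2% on hard templates" caveat above.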
Channel-specific benchmarks
SMS phishing (smishing) and voice phishing (vishing) campaigns consistently report higher engagement rates than email - often 1.5x or more in first-time programs. The reasons are structural: users have less defensive training for non-email channels, mobile preview windows make it harder to spot red flags and voice has near-zero established defensive culture. Programs that run multi-channel simulations and report each channel separately tend to score better on cyber insurance applications, which now ask about multi-channel coverage explicitly.
Cohort splits that matter more than industry averages
Industry averages are the headline number. The numbers that actually predict program effectiveness live one level below, in cohort splits within the organization. The four cohort splits that consistently produce the most actionable information:
- Tenure-based cohorts. First-90-days hires consistently click at 1.5-2x the program average across virtually every industry. The pattern is structural - new employees are still calibrating which internal communications are legitimate, are likely to defer to authority cues and have not yet been through a full simulation cycle. Reporting click rate by tenure cohort separately is one of the highest-value cohort splits a program can introduce.
- Department cohorts. Sales and Marketing teams typically run higher click rates than the company average because of the volume of legitimate external email and link-clicking these roles handle. Finance teams often run lower than average because of mature treasury and AP fraud-awareness training that exists independently of the phishing program. IT teams' click rates depend strongly on whether the IT cohort has been carved out of the program (don't carve them out).
- Privileged-access cohorts. Users with administrative or privileged access to sensitive systems represent a small population with disproportionate breach impact. Reporting their click rate separately, even when the population is below 20 users, is what audit committees and underwriters increasingly want to see.
- Repeat-clicker cohort. Users who failed two consecutive campaigns are a small population that warrants targeted intervention. Reporting this cohort's size and trend over four quarters is more predictive of program effectiveness than any aggregate click rate.
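Two of these cohort splits lend themselves to a mechanical sketch. Assuming hire dates and per-campaign sets of clicking users are available (all inputs and names here are hypothetical), the tenure bucket and the repeat-clicker cohort can be derived like this:

```python
from datetime import date


def tenure_cohort(hire_date: date, as_of: date) -> str:
    """Bucket a user by tenure; first-90-days hires get their own cohort,
    since they consistently click at 1.5-2x the program average."""
    days = (as_of - hire_date).days
    return "first-90-days" if days < 90 else "established"


def repeat_clickers(campaigns: list[set[str]]) -> set[str]:
    """Users who clicked in two consecutive campaigns.

    `campaigns` is an ordered list (oldest first) of the user IDs
    that clicked in each campaign.
    """
    flagged = set()
    for prev, curr in zip(campaigns, campaigns[1:]):
        flagged |= prev & curr
    return flagged
```

For example, with click sets `[{"alice", "bob"}, {"bob", "carol"}, {"carol"}]`, the repeat-clicker cohort is `{"bob", "carol"}`: each clicked in two campaigns in a row, while alice's single click does not flag her. Tracking the size of that returned set quarter over quarter is the trend the last bullet above recommends reporting.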
What changes in 2026 versus prior years
Three changes in 2026 affect how to read benchmarks against historical data:
- AI-generated lures raise the difficulty floor. Templates that would have classified as "regular" in 2022 increasingly read as "easy" because the broader threat landscape has produced more sophisticated real attacks. Programs that haven't refreshed their template difficulty calibration in two or more years are typically reporting artificially low click rates that don't reflect current resilience.
- Multi-channel coverage shifts the denominator. Programs that added SMS and voice campaigns in the last 18 months tend to see aggregate "all channels" click rates rise, because these channels have less established defensive culture. This is not a regression in program effectiveness; it is the program now measuring exposure that was previously invisible.
- Cyber insurance underwriting expectations have tightened. What carriers viewed as acceptable in 2023 (quarterly cadence, manual remediation) is now treated as below-baseline. Premium-reduction credit for phishing programs is increasingly conditional on monthly cadence, automated remediation and multi-channel coverage. The Marsh and Aon broker-side guidance reflects this tightening explicitly.
If your benchmarks are being compared against 2022-2023 figures, the comparison is no longer valid. Use 2025-2026 reference ranges and acknowledge the methodology shift in any report that crosses the boundary.
How to use these benchmarks in your reporting
- Frame your number as a band, not a point. "We are tracking in the second-year program range" is more defensible than "we are at 12.4%."
- Pair every number with a trend line. Single-period numbers invite the "is that good?" question and don't survive it well.
- Segment by difficulty. Aggregate numbers hide important variance.
- Cite the framework, not the vendor. Verizon DBIR for breach attribution; Forrester or Gartner for category research framing; broker reports (Marsh, Aon) for premium-impact framing. Avoid leading with vendor benchmark reports - they are marketing materials.
If you're running a Bait & Phish program, start a free 25-user trial to produce your own first benchmark. Pricing for full deployments is visible on the site, and our team will walk through your numbers against industry context on request.

