Cyber pricing’s credibility gap: Why models lack reality and what needs to change

Blog -- 20 October 2025

Author: Taha Ahmad


Historic loss curves are no match for today’s fast-changing cyber landscape. Here’s how to close the credibility gap between modelled and real-world risk.

Cyber insurance is now one of the most volatile and fast-growing specialty lines in the London market, yet many underwriters are still pricing it with tools built for a different era. Historic loss curves remain widespread, even as ransomware, supply-chain breaches and AI-enabled attacks reshape the loss environment on an almost monthly basis.

More than two in five UK businesses (43%) experienced a cyber breach or attack in the past 12 months – and many were hit repeatedly. Businesses that fell victim to cybercrime suffered an average of 30 individual crimes, a clear sign of rampant repeat victimisation. It is no surprise, then, that cyber insurance uptake is rising: 62% of businesses are now insured for cyber risk, up from 49% last year.1

The speed of development by bad actors – spurred on by AI technologies – is unnerving for insurers, but it is a solvable challenge. Data, when transparent and regularly updated, can be used to predict the likelihood and severity of cyber incidents. The challenge for insurers is not the amount of data – more than enough is already out there, waiting to be used. Rather, it is making that data credible, timely and explainable. In a word: trustworthy.

From deterministic to forward-looking

Actuarial pricing in specialty lines has come a long way from deterministic, spreadsheet-based models. Today’s frameworks are data-rich, integrating exposure, claims and external insights to help underwriters make decisions in near-real time.

However, many pricing engines are still calibrated on outdated assumptions about frequency and severity. This is especially true in cyber, where the nature of risk can change dramatically in a matter of weeks. Historic pricing models fail to capture those frequent shifts, missing key insights such as how new attack vectors emerge during geopolitical conflict or how AI tools amplify both threat velocity and complexity.
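One way actuaries respond to this tension, in general terms, is classical credibility weighting: rather than relying solely on a long-run frequency estimate, the rate blends it with a recent observation window, so fast-moving experience moves the price without discarding historic data. The sketch below is purely illustrative – the function names, the credibility constant `k` and the example figures are invented for this post and do not represent any vendor’s actual method.

```python
# Illustrative sketch only: blending a long-run claim-frequency estimate
# with a recent observation window via a Buhlmann-style credibility
# factor Z = n / (n + k). All parameters here are hypothetical.

def credibility_weight(n_recent: float, k: float) -> float:
    """Credibility factor Z = n / (n + k); approaches 1 as recent data grows."""
    return n_recent / (n_recent + k)

def blended_frequency(freq_historic: float,
                      freq_recent: float,
                      n_recent: float,
                      k: float = 500.0) -> float:
    """Credibility-weighted blend: Z * recent + (1 - Z) * historic."""
    z = credibility_weight(n_recent, k)
    return z * freq_recent + (1.0 - z) * freq_historic

# Example: long-run ransomware claim frequency of 2% per policy-year,
# but the latest quarter (2,000 exposed policies) suggests 5%.
rate = blended_frequency(freq_historic=0.02, freq_recent=0.05, n_recent=2000)
# Z = 2000 / 2500 = 0.8, so the blended frequency is 4.4%.
```

The point of the sketch is the shape of the trade-off: with little recent data the rate stays anchored to history, and as recent volume grows the rate tracks live experience more closely.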

This gap between live risk and modelled reality has created something of a credibility crisis in cyber lines. The answer to this crisis isn’t more data; the industry already has plenty. What it lacks is timeliness and explainability – the qualities that make data-led insights meaningful for underwriters, pricing teams, boards, chief underwriting officers, regulators, and anyone else relying on them.

“Sometimes, even if we try to capture the most recent historic dataset, it’s still not fast enough to reflect what’s really happening in the market,” noted Hyunjin Park, Actuarial Director, Excess & Reinsurance at Verisk, at the recent Verisk Insurance Conference.

“Right after Covid-19, risk profiles changed overnight. Some businesses could operate fully remotely, but historical loss models still assumed traditional business interruption exposure. Clearly, rapid market shifts can render solely historical models obsolete.”

The trade-off between speed and rigour

The instinctive response to volatility is to slow down: to prioritise validation, peer review and audit. Yet in cyber, the window for model relevance is measured in months, not years. The challenge, then, is not to choose between speed and rigour, but to find a way to balance both.

Some leading carriers are already experimenting with this balance by integrating live threat intelligence, scenario testing and dynamic portfolio monitoring, each of which can help test portfolios against evolving threat actors and spotlight early signs of aggregation or exposure drift.
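As a general illustration of what dynamic portfolio monitoring can mean in practice, the hedged sketch below compares two portfolio snapshots and flags any segment whose share of total insured exposure has drifted beyond a tolerance – one simple early signal of aggregation risk. The segment names, limits and 5% threshold are all invented for illustration; this is not a description of any carrier’s or vendor’s actual monitoring logic.

```python
# Hypothetical sketch of exposure-drift monitoring: flag segments whose
# share of total insured limit moved more than `tolerance` between two
# portfolio snapshots. All names and figures below are invented.

from collections import defaultdict

def segment_shares(policies):
    """Share of total insured limit by segment, from (segment, limit) pairs."""
    totals = defaultdict(float)
    for segment, limit in policies:
        totals[segment] += limit
    grand_total = sum(totals.values())
    return {seg: value / grand_total for seg, value in totals.items()}

def drift_alerts(baseline, current, tolerance=0.05):
    """Segments whose exposure share changed by more than `tolerance`."""
    then = segment_shares(baseline)
    now = segment_shares(current)
    segments = set(then) | set(now)
    return {s: round(now.get(s, 0.0) - then.get(s, 0.0), 4)
            for s in segments
            if abs(now.get(s, 0.0) - then.get(s, 0.0)) > tolerance}

baseline = [("cloud_saas", 40.0), ("manufacturing", 40.0), ("healthcare", 20.0)]
current  = [("cloud_saas", 60.0), ("manufacturing", 30.0), ("healthcare", 10.0)]
alerts = drift_alerts(baseline, current)
# cloud_saas has grown from 40% to 60% of exposure, so it is flagged.
```

In a real monitoring framework the same idea would run continuously against live bordereaux and threat-intelligence feeds, but the mechanism – comparing modelled exposure today against a governed baseline – is the same.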

Verisk’s Rulebook solution, for example, centralises pricing logic, enabling insurers to roll out approved pricing changes quickly, transparently and with full auditability – keeping cyber rates aligned with live threat intelligence.

Steps such as these help improve the responsiveness of cyber pricing as the nature of risk quickly evolves. They also improve credibility, because when underwriters and other stakeholders understand why a rate has changed based on observable market signals, it strengthens confidence across the organisation and with regulators.

Explainability is key

Live threat intelligence, scenario testing and dynamic portfolio monitoring have brought huge promise to cyber risk analysis, helping insurers keep pace with increasingly persistent and sophisticated attacks. However, they also amplify the need for strong governance, so that insurers can explain these tools and their outputs to stakeholders.

Carriers that rely on opaque ‘black box’ models risk eroding trust among the very audiences that need reassurance: underwriters, actuaries, boards and regulators. Models and other tools must be explainable, auditable and open to challenge.

Explainability, paired with agility, is the answer to today’s rapidly intensifying cyber risk landscape. Competitive advantage does not come from simply collecting more data; it comes from ensuring that data is credible and combined with well-governed, agile rating logic.


Join our lunchtime symposium

If you’d like to explore these ideas in practice, join us at our lunchtime symposium at 22 Bishopsgate on Tuesday 4 November, to see how Verisk and KYND are helping carriers turn credible data into competitive advantage.

Related Product

Rulebook

Pricing, underwriting and distribution, for even the most complex classes of business.