A growing realization that cyberattacks cannot always be prevented is changing the dynamic of cyber risk management, pushing damage limitation to the forefront and, as a result, turning the spotlight on attack detection.
Detecting when a malicious actor has access, or is attempting to gain access, to an organization’s systems is the foundation on which response and recovery are built.
And it is an element whose benefit can be measured in cash. The average cost of a cyber breach lasting 200 days or more was $4.86 million, according to technology company IBM's "Cost of a Data Breach Report."
Containing a breach in less than 200 days cut the cost by just under a quarter, to an average of $3.74 million.
Sums like that underline the importance of detection to companies' cyber risk management. They also highlight why S&P Global Ratings considers effective cyber detection to be integral to cyber risk management and, ultimately, a potential factor in the assessment of issuers' creditworthiness.
A Cyberattack Is A Lifecycle, Not An Event
To better understand the role that detection plays in cyber risk management it helps to understand the nature of a typical cyberattack (see What are the Most Common Types of Cyberattacks?). Contrary to common perception, cyberattacks are not singular events.
Rather they are a chain of events that can take place over a matter of weeks or even months. We refer to this as the cyberattack lifecycle.
Understanding the nature of that lifecycle offers a foundation from which to better analyze cyber risk (see Risk Management With Cyber Insurance). It also enhances the ability to manage that risk, not least because it demonstrates how each step provides an opportunity for a target to detect malicious activity and thus break the cyberattack lifecycle and minimize damage.
As IBM’s report shows, early detection of a cyberattack is key to limiting its cost. The longer that an attacker remains undetected and with access to a target’s technology systems the greater is the opportunity for them to establish control and thus achieve their objectives (see How to Reduce the Impact of Cybercrime?).
Moreover, retaking control of a system from an attacker, and remedying their actions, are major elements of the cost of a cyberattack.
Detecting Cyberattacks Faster: A Race Against Time
Defenders are detecting cyberattacks faster, but they also face shorter attack lifecycles. About 70% of breaches not disclosed by the attacker are detected in a day or less, while 20% take "months or more" to detect, according to the "Data Breach Investigations Report," published in 2022 by Verizon, a U.S. wireless network operator.
Detection speeds have improved notably in the past five years: in 2017, almost 50% of breaches not disclosed by the attacker took "months or more" to detect.
Verizon’s figures also demonstrated the extent to which defenders are locked in an arms race, with faster detection offset by shortened attack lifecycles, notably due to the growing prevalence of ransomware attacks.
In 2016, about 6% of breaches were characterized by the attacker notifying the target of their presence (known as actor disclosure), according to Verizon.
By 2022, that figure had grown to 58% – suggesting that a majority of cyberbreaches go undetected until after attackers have found what they want and sent a ransom note (see Ransomware Attacks in the United States).
Cyberattack Detection Is Accelerating
Thankfully, defenders don’t have to face the threat posed by accelerating cyberattacks alone. Government organizations are playing a growing role in incident detection.
The U.S.’s Cybersecurity & Infrastructure Security Agency (CISA), the U.K.’s National Cyber Security Centre (NCSC) and the Australian Cyber Security Centre (ACSC) are developing capabilities to support public and private sector entities with incident detection, technical support, communication, and outreach.
Early Detection Is Key To Breaking The Attack Lifecycle
Issuers that quickly detect an attack afford themselves a chance to break the attack lifecycle at an early stage and thus limit financial damage and potential credit quality impacts. An attacker doesn’t gain access to a target’s systems until step three of five (exploitation) in the attack lifecycle.
Detecting malicious activity in steps one or two (preparation and delivery) can thus nullify an attack.
For example, an employee may receive a phishing email during the delivery phase of an attack. If that employee is trained in phishing awareness, and has a mechanism to report the malicious email, the target entity can respond before the attack gains access to an IT system and is able to deliver malware.
Even if that malware is delivered, resulting in a system breach, rapid detection remains crucial to damage limitation.
Response and recovery typically become progressively harder and more expensive as an attack progresses through its lifecycle.
Immediately following entry, an attacker may only have access to a small segment of an entity's systems and networks. Defenders, at that stage, can isolate infected systems and/or cut off unauthorized access through targeted action.
Once an attacker gains persistence and control (phase four of the lifecycle), removing them is likely to be significantly more costly and time consuming, requiring a concerted effort to identify all affected systems and restore their integrity.
If the attacker carries out their desired actions (phase five), the target entity may have to bear additional costs including business interruption, reputational damage, and regulatory penalties.
The snowballing of cost and damage as a cyberattack progresses through its lifecycle makes early detection (and an effective response plan) key to an organization’s cybersecurity program–not least because an attack’s impact, when significant, can lead to a deterioration in credit quality.
Threat Detection Elements And Their Roles
Logging is the practice of recording activity across an organization’s systems and networks. We consider logging a foundational capability and expect it from even a basic cyber security program at a small organization.
In the event of a cyberattack, effective logs will provide a record of how a company’s defenses were breached, what has been compromised, and thus enable an organization (and its external assistance) to assess how best to respond and recover.
This makes logging essential to enacting a coordinated and successful response to a cyberattack.
Larger or higher risk organizations may require sophisticated and automated logging operations capable of aggregating logging data and organizing it in a format useful for rapid and accurate decision making. Software that carries out these functions is widely available. Nonetheless, it remains a challenge to establish an integrated logging system that encompasses an entire organization and produces a holistic view of system activity to facilitate comprehensive incident analysis.
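As a simplified illustration of the kind of machine-readable record such aggregation depends on (the logger name and field names here are invented, not a prescribed schema), log events can be emitted as structured JSON that downstream tooling can parse and index:

```python
import json
import logging

# Illustrative sketch only: emit log records as JSON so an aggregation
# pipeline can parse and index them. Field names are assumptions.
class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "source": record.name,
            "event": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("auth-service")  # hypothetical system name
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("failed login for user alice from 203.0.113.7")
```

Records in a uniform, parseable format are what allow an organization-wide logging system to correlate activity across many systems during incident analysis.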
Monitoring is the scrutiny of log data, and other sensors, for indicators of unauthorized activity on an organization's systems or networks. Ideally it is a real-time process that provides the capability to detect attacks at the earliest opportunity.
We expect most organizations with an internet interface, including email systems, to have a monitoring capability.
- For small and low-risk organizations that may consist of a series of automated alerts that warn of suspicious activity.
- Larger- or higher-risk organizations may require a dedicated operation, known as a security operations center (SOC), that is responsible for limiting damage by detecting and responding to cyberattacks that bypass preventative security.
Such organizations may also operate a security information and event management (SIEM) platform, which spans all their systems and networks and combines log management software with real-time monitoring of security events.
Not all SIEM platforms are the same and an investment in more advanced systems can have a material impact in reducing risk.
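To make the idea of real-time monitoring concrete, here is a minimal, hypothetical sketch of one simple rule a monitoring system might apply; the threshold, window, and event format are assumptions for illustration, not any vendor's implementation:

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 60  # assumed detection window
THRESHOLD = 5        # assumed failed-login threshold

class FailedLoginMonitor:
    """Toy monitoring rule: alert when one source IP produces too many
    failed logins within a short window."""

    def __init__(self):
        self._failures = defaultdict(deque)  # ip -> timestamps of failures

    def observe(self, timestamp, ip, success):
        """Feed one login event; return an alert string or None."""
        if success:
            return None
        window = self._failures[ip]
        window.append(timestamp)
        # Discard failures that have aged out of the window.
        while window and timestamp - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) >= THRESHOLD:
            return f"ALERT: {len(window)} failed logins from {ip} in {WINDOW_SECONDS}s"
        return None

monitor = FailedLoginMonitor()
for t in range(5):
    alert = monitor.observe(t, "203.0.113.7", success=False)
print(alert)  # fires on the fifth failure within the window
```

A real SIEM applies many such correlation rules simultaneously across aggregated log sources; the value of the more advanced platforms lies in the breadth and tuning of those rules.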
Indeed, security AI and automation, which facilitate rapid detection of and response to cyberattacks, were more effective in reducing cyberbreach costs than any other investment analyzed by IBM.
The Cost of a Data Breach
According to IBM, the average cost of a data breach reached an all-time high of $4.35 million in 2022. This figure represents a 2.6% increase from the previous year, when the average cost of a breach was $4.24 million, and a 12.7% increase from the $3.86 million reported in 2020.
83% of organizations studied have experienced more than one data breach, and just 17% said this was their first data breach.
60% of organizations studied stated that they increased the price of their services or products because of the data breach.
The average cost of a data breach for critical infrastructure organizations studied was $4.82 million, $1 million more than the average cost for organizations in other industries.
[Chart: Average total cost of a data breach]
Critical infrastructure organizations included those in the financial services, industrial, technology, energy, transportation, communication, healthcare, education and public sector industries.
29% experienced a destructive or ransomware attack, while 17% experienced a breach because of a business partner being compromised.
[Chart: Average per-record cost of a data breach]
A 65.2% difference in average breach cost ($3.15 million for organizations with security AI and automation fully deployed versus $6.20 million for those with none deployed) represented the largest cost saving in the study.
Breaches at organizations with fully deployed security AI and automation cost $3.05 million less than breaches at organizations with no security AI and automation deployed.
Companies with fully deployed security AI and automation also experienced, on average, a 74-day shorter time to identify and contain the breach (known as the breach lifecycle) than those without: 249 days versus 323 days. The use of security AI and automation jumped by nearly one-fifth in two years, from 59% in 2020 to 70% in 2022.
[Chart: Average cost of a data breach by industry]
The five countries and regions with the highest average cost of a data breach were the United States ($9.44 million), the Middle East ($7.46 million), Canada ($5.64 million), the United Kingdom ($5.05 million), and Germany ($4.85 million).
The United States has led the list for 12 consecutive years. Meanwhile, the country with the fastest growth over the previous year was Brazil, up 27.8% from $1.08 million to $1.38 million.
Threat hunting is the proactive and hypothesis-driven investigation of menaces to an organization’s systems or networks, utilizing a combination of log data and threat intelligence.
The practice is relatively complex, specialized, and resource-intensive, so it is conducted mostly by larger organizations, or those with particularly valuable assets to protect.
Threat hunting tends to pick up activity at the later stages of the attack lifecycle, so has limited use against short-lifecycle attacks such as ransomware. It can, however, be an effective countermeasure for advanced and persistent attacks, for example in uncovering and blocking a threat actor that has used long-term and undetected access to an organization’s system to steal intellectual property data on an ongoing basis.
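As a simplified, hypothetical illustration of a hypothesis-driven hunt (the log schema, host names, and thresholds are all invented for this sketch), the hypothesis "a compromised host is exfiltrating data outside business hours" might be tested against network flow logs like this:

```python
from collections import defaultdict

BUSINESS_HOURS = range(8, 18)  # assumed 08:00-17:59 working day

def hunt_after_hours_exfil(flow_logs, ratio_threshold=3.0):
    """flow_logs: iterable of (hour_of_day, host, bytes_out).
    Flag hosts that send several times more data after hours
    than during business hours -- a crude exfiltration indicator."""
    day = defaultdict(int)
    night = defaultdict(int)
    for hour, host, nbytes in flow_logs:
        (day if hour in BUSINESS_HOURS else night)[host] += nbytes
    return sorted(
        host for host in night
        if night[host] > ratio_threshold * max(day.get(host, 0), 1)
    )

# Hypothetical flow records: (hour, host, bytes sent outbound).
logs = [
    (10, "hr-laptop-03", 5_000), (23, "hr-laptop-03", 2_000),
    (11, "db-server-01", 1_000), (2,  "db-server-01", 900_000),
]
print(hunt_after_hours_exfil(logs))  # → ['db-server-01']
```

A real hunt would combine such queries with threat intelligence and iterate on the hypothesis, but the pattern is the same: propose a specific attacker behavior, then search historical log data for evidence of it.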
It is unlikely that many small and midsized organizations will develop significant in-house threat hunting capabilities, yet the provision of threat hunting as a service by outside providers could grow.
AUTHORS: Martin Whitworth – Cyber Risk – Analytical Innovation at S&P Global Ratings and Head of Cyber and Information Security at “Which?”, Paul Whitfield – Editorial Lead S&P Global