For years, many organizations have treated data resilience as a low priority. But over time, growing threats, evolving regulations and stronger industry standards have raised expectations across the board. Resilience is now a competitive edge.
Awareness is only the starting point; true resilience requires action. As industry benchmarks evolve and give organizations a clearer picture of what effective preparedness looks like, many are confronting a hard truth: They are falling short. A recent report from Veeam, produced in collaboration with McKinsey and focused on data resilience among large enterprises, reveals that even the basic pillars of resilience, such as people and processes, are often self-reported as significant gaps.

For many in the C-suite, resilience has rarely topped the list of priorities. Usually seen as a subset of cybersecurity, backup and recovery have been treated like airbags: easy to ignore until disaster strikes. But that mindset no longer holds. Today’s threats demand that resilience be treated as a strategic necessity.
While law enforcement is targeting and taking down some of the most prominent groups, such as BlackCat and LockBit, it’s a mistake to assume cyberattacks are declining. In reality, 69% of companies experienced a cyberattack in 2024, yet only 26% had adequate data resilience standards. The threat landscape is shifting, not shrinking, with smaller groups and ‘lone wolf’ attackers filling the gaps left behind. These emerging attackers are also using faster, more aggressive data exfiltration methods, making resilience more critical than ever.
McKinsey found that only one in four participating enterprises had the maturity to recover quickly and confidently from a disruption. Data and cyber resilience failures are usually recognized only after the damage is done, but in this case, many of the shortcomings were self-reported. So if organizations are aware, why haven’t they plugged these gaps?
For some organizations, it could be that they’ve only just recognized these flaws. A recent uptick in global regulations and governmental guidance has spotlighted the issue, with some (such as the NIST Cybersecurity Framework 2.0 in the U.S.) calling for improved resilience of technology infrastructure across the board. As compliance deadlines approached over the past year, many organizations had to evaluate their data resilience for the first time, uncovering previously overlooked blind spots.
However, while organizations may have only just identified these systemic flaws, the flaws didn’t appear overnight. For most, they accumulated slowly but surely, with data resilience standards standing still while new technologies were adopted. Now, as most organizations begin adopting AI to optimize business processes, the impact on their data repositories has been overlooked. The volumes of data these tools require and produce have led to sprawling, fragmented data profiles that often extend beyond the reach of existing data resilience strategies.
Combine this with a limited understanding of modern data resilience, and the risks quickly multiply. Many organizations are unknowingly measuring themselves against the wrong standards. Take the typical tabletop exercise: while better than nothing, it falls short of truly testing resilience. On paper, the processes may work; in a real incident, things can go wrong quickly.
Rather than waiting for a disruption to strike and expose their weaknesses, organizations need to embrace a mindset of proactive discomfort, deliberately seeking out and addressing vulnerabilities even when the findings are unwelcome.
For organizations struggling with data resilience, the first step is to gain a clear understanding of their data profile: what data they have, where it’s stored, and whether it’s truly necessary. This insight allows them to cut through data sprawl by eliminating obsolete, redundant, or low-value information and shifting their focus to protecting the data that truly matters. From there, the real work of securing it can begin.
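What that first inventory pass looks like will vary by environment, but as a minimal sketch, the following Python script walks a directory tree, flags files untouched beyond a staleness threshold, and groups byte-identical copies by content hash. The root path and two-year threshold are illustrative assumptions, not prescriptions from the report.

```python
import hashlib
import os
import time
from collections import defaultdict

STALE_AFTER_DAYS = 730  # hypothetical threshold: untouched for two years

def file_digest(path, chunk_size=1 << 20):
    """Hash file contents so byte-identical copies can be grouped."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def inventory(root):
    """Walk a directory tree, flagging stale files and duplicate content."""
    now = time.time()
    by_digest = defaultdict(list)
    stale = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                age_days = (now - os.stat(path).st_mtime) / 86400
                digest = file_digest(path)
            except OSError:
                continue  # skip unreadable entries rather than abort the scan
            if age_days > STALE_AFTER_DAYS:
                stale.append((path, int(age_days)))
            by_digest[digest].append(path)
    duplicates = {d: paths for d, paths in by_digest.items() if len(paths) > 1}
    return stale, duplicates

if __name__ == "__main__":
    stale, duplicates = inventory("/data")  # hypothetical data root
    print(f"{len(stale)} stale files, {len(duplicates)} duplicate groups")
```

Even a rough report like this gives teams a defensible starting list of what can be retired and what must be protected.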
The work doesn’t end with implementation. Once new data resilience measures are in place, they need to be rigorously and repeatedly stress-tested. These defenses must be pushed to their limits because real-world attackers won’t ease up when systems start to strain, and they won’t wait for a convenient time to strike.
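One way to make that stress-testing concrete is to script restore drills that run on a schedule and fail loudly when recovery drifts past its time budget. The sketch below is a simplified stand-in: the manifest format, paths, and 15-minute recovery-time objective are all hypothetical, and the byte copy substitutes for whatever restore mechanism is actually in use.

```python
import hashlib
import json
import time
from pathlib import Path

RTO_SECONDS = 900  # hypothetical recovery-time objective: 15 minutes

def sha256(path):
    """Checksum a file so restored data can be compared to the manifest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def restore_drill(backup_dir, scratch_dir, manifest_path):
    """Restore every backup object to scratch space, verify it, and time it."""
    manifest = json.loads(manifest_path.read_text())  # {filename: expected sha256}
    scratch_dir.mkdir(parents=True, exist_ok=True)
    start = time.monotonic()
    for name, expected in manifest.items():
        restored = scratch_dir / name
        # Byte copy stands in for whatever restore mechanism is actually in use.
        restored.write_bytes((backup_dir / name).read_bytes())
        if sha256(restored) != expected:
            print(f"FAIL: {name} does not match its recorded checksum")
            return False
    elapsed = time.monotonic() - start
    print(f"Restored {len(manifest)} objects in {elapsed:.0f}s (budget: {RTO_SECONDS}s)")
    return elapsed <= RTO_SECONDS

if __name__ == "__main__":
    ok = restore_drill(Path("/backups"), Path("/tmp/restore-drill"),
                       Path("/backups/manifest.json"))
    raise SystemExit(0 if ok else 1)
```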
Test different situations in which team members are unavailable, whether they’re out of the office or the security team is occupied with something else entirely, to expose all the potential gaps in the organization’s measures. This might seem excessive, but it’s far better to uncover gaps in a controlled testing environment than to discover them during or after a real incident.
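Staff-availability scenarios can be drilled the same way. The sketch below, built on a hypothetical roster and runbook, enumerates every combination of one or two absent team members and reports any recovery task left without a qualified, available owner.

```python
from itertools import combinations

# Hypothetical roster and runbook: each recovery task lists the roles able to run it.
TEAM = {"backup_admin", "network_eng", "dba", "incident_lead", "security_analyst"}
RUNBOOK = {
    "restore_primary_db": {"dba", "backup_admin"},
    "failover_network": {"network_eng"},
    "declare_incident": {"incident_lead", "security_analyst"},
    "verify_backups": {"backup_admin"},
}

def uncovered_tasks(unavailable):
    """Return recovery tasks with no available, qualified owner."""
    available = TEAM - unavailable
    return [task for task, roles in RUNBOOK.items() if not roles & available]

if __name__ == "__main__":
    # Drill every scenario in which one or two people are out at once.
    for k in (1, 2):
        for absent in combinations(sorted(TEAM), k):
            gaps = uncovered_tasks(set(absent))
            if gaps:
                print(f"absent={absent} -> uncovered: {gaps}")
```

Any scenario this prints is a single point of failure worth fixing before a real incident finds it.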
It’s a major undertaking, but investing in data resilience pays off: organizations with advanced data resilience capabilities achieve 10% higher yearly revenue growth than those without.
Improving data resilience won’t magically translate into revenue growth, but raising data resilience standards will have a ripple effect across the organization. Cyberattacks are only getting more complex, and so is organizational data. It’s an issue that every organization will eventually have to deal with, so it’s better to get to work now than be overwhelmed when an attack inevitably hits.