February 05, 2014
Always-On IT Systems More Resilient to Failure, Cyber-Attacks
By: Ariel Bernstein

Security company Prolexic reports that in the third quarter of 2013, its clients experienced a 58 percent increase in the total number of DDoS attacks compared to the year-earlier quarter. Cyber threats are not just about gaining access to systems. In the case of distributed denial of service (DDoS) attacks, it’s also about shutting down or disabling services—or at least causing enough secondary discomfort to damage a company’s brand.

In an always-on world, business leaders have to anticipate and accommodate the risks posed by internal and external disruptions. This notion is so critical that it serves as the foundation of a chapter in the latest edition of the Accenture Technology Vision, titled “Architecting Resilience.”

The economic risks associated with business discontinuities can grow incredibly high, incredibly fast. This is especially true for digital companies that rely on Internet-based business models. Take Google’s five-minute outage in mid-August 2013 as an example; it’s reported to have cost the company $545,000 in revenue. Not all outages are so costly: a 2013 Ponemon Institute study found that the average cost of data center downtime across industries is approximately $7,000 per minute. The cost of disruption varies by industry and by the scale of the compromised infrastructure.

But it should come as no surprise that service downtime equates to lost revenue. The question is how to prepare and protect your infrastructure. Leading companies’ technology chiefs understand something that IT leaders everywhere must grasp: failure is a normal operating condition. It must be anticipated, accommodated, and designed into IT systems.

Just look at Netflix. Netflix loves to fail. Not by delivering movies late or overbilling customers; rather, its engineers deliberately try to find fault with their own IT systems. Teams at Netflix deploy a suite of automated testing tools they call the Simian Army to wreak havoc in unpredictable but monitored ways. Why? Because Netflix’s engineers know that what doesn’t kill their company makes it stronger. Netflix is not alone; these practices were pioneered at Amazon a decade ago and have since been adopted at the likes of Flickr, Yahoo, Facebook, Google, and Etsy.
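The idea behind the Simian Army can be illustrated with a minimal sketch: randomly terminate members of a service fleet and check that the system keeps serving traffic. The `Instance` class and `chaos_monkey` function below are hypothetical stand-ins for illustration, not Netflix's actual tooling, which operates on live cloud instances.

```python
import random

class Instance:
    """Hypothetical service instance; a real chaos tool targets live cloud VMs."""
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def terminate(self):
        self.healthy = False

def chaos_monkey(fleet, kill_probability=0.2, rng=random):
    """Randomly terminate instances and return the names of the casualties.

    A resilient service should keep handling requests despite these losses."""
    killed = []
    for instance in fleet:
        if instance.healthy and rng.random() < kill_probability:
            instance.terminate()
            killed.append(instance.name)
    return killed

fleet = [Instance(f"web-{i}") for i in range(10)]
casualties = chaos_monkey(fleet)
survivors = [i.name for i in fleet if i.healthy]
print(f"terminated: {casualties}; surviving: {len(survivors)}")
```

The point of running such a tool continuously in production, rather than only in staging, is that it forces teams to build systems that tolerate failure as a routine event instead of a surprise.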

A surprisingly large proportion of companies concede that they are unprepared for the scope, severity, and sophistication of today’s attacks. Nearly 45 percent of CIOs surveyed in Accenture’s 2013 High Performance IT Research admit they have been underinvesting in cybersecurity. Many are overwhelmed about where to begin; the task of catching up seems daunting and expensive.

Fortunately, a myriad of services already exist that, if strategically implemented, can make IT systems better able to withstand failure: they notify administrators of dysfunction, increase portability, and provide self-healing capabilities, circumventing the deficiencies of the highly available, state-of-the-art systems of just a few years ago.
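Two of those capabilities, administrator notification and self-healing, can be sketched as a simple supervision loop: probe each service, alert on failure, and restart the casualty. The `Service` class and `notify` callback below are hypothetical placeholders; a real implementation would probe processes, containers, or health-check endpoints.

```python
class Service:
    """Hypothetical service handle; stands in for a real process or container."""
    def __init__(self, name):
        self.name = name
        self.running = True

    def is_healthy(self):
        return self.running

    def restart(self):
        self.running = True

def supervise(services, notify):
    """One self-healing pass: probe each service, alert on failure, restart it."""
    for svc in services:
        if not svc.is_healthy():
            notify(f"{svc.name} is down; restarting")
            svc.restart()

alerts = []
db = Service("orders-db")
db.running = False  # simulate a failure
supervise([db], alerts.append)
print(alerts)  # the administrator notification that fired
```

In production, a loop like this would run continuously, and the notification channel would feed a paging or monitoring system rather than a list.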

Rather than trying to design resiliency into every component, it is now best to take a systemic approach in which the service-delivery architecture can survive the loss of any component, including entire data centers. And when components or data centers do fail in a resilient architecture, it is no longer a disaster recovery event; it is a high-availability event.
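Treating the loss of a site as a high-availability event rather than a disaster can be sketched as client-side failover across redundant endpoints. The `Endpoint` class and region names below are illustrative assumptions; real systems would use DNS failover, load balancers, or service meshes across actual regions.

```python
class Endpoint:
    """Hypothetical regional endpoint; a real one would be a data center or zone."""
    def __init__(self, region, up=True):
        self.region = region
        self.up = up

    def handle(self, request):
        if not self.up:
            raise ConnectionError(f"{self.region} unavailable")
        return f"{request} served from {self.region}"

def resilient_call(request, endpoints):
    """Try each replica in turn; losing any one site is routine, not a disaster."""
    for ep in endpoints:
        try:
            return ep.handle(request)
        except ConnectionError:
            continue  # treat the failure as a high-availability event and move on
    raise RuntimeError("all endpoints down")

endpoints = [Endpoint("us-east", up=False), Endpoint("eu-west")]
print(resilient_call("GET /status", endpoints))
```

Here the loss of the first site is absorbed silently: the request succeeds from the surviving replica, and no disaster-recovery procedure is invoked.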

The time to start architecting for resiliency is right now, not when customers demand it or when losses of trade secrets, revenue, or brand value have reached painful levels. To learn more about the security strategy and tools at your disposal, and Accenture’s view on Architecting Resilience, read the 2014 Accenture Technology Vision.
