According to Accenture’s recently released Future Systems research, companies that are more strategic in their approach to technology are doing better financially—achieving more than twice the average revenue growth of Laggards.
In fact, 95 percent of Leaders have adopted sophisticated cloud services as compared to only 30 percent of Laggards. Clearly, cloud is at the heart of modern IT transformations.
It’s easy to see why. Cloud solutions promise enterprises a foundation of lower costs, greater elasticity and innovation. Now the economic upheaval caused by the COVID-19 pandemic is shifting their cloud agenda from an aspiration to an urgent mandate.
But despite years of investment, most companies still have no more than 20 percent of their workloads in the cloud. What’s more, our research shows that nearly two-thirds of companies are dissatisfied with the results to date from their cloud initiatives.
To capture the full value of cloud and deliver more value to the business, companies should develop strategies to modernize their applications and data. These application and data strategies, in turn, should drive cloud and infrastructure strategies and road maps to ensure they provide the right IT foundation to meet their business objectives.
As they develop their application and data strategies, companies today are increasingly landing on a multi-cloud provider approach (leveraging multiple public providers) with hybrid application and data workload placement distributed across public and private clouds. This approach balances the business need for increased agility and innovation with optimal performance, service levels, security and cost effectiveness. It typically includes using a variety of Software-as-a-Service (SaaS) providers, commercial off-the-shelf (COTS) platforms, and Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) powered by hyper-scale providers (AWS, Azure, Google and Alibaba).
The application and data led approach to cloud usually begins with an assessment of the application estate, including an analysis of the data. Many leading organizations use a methodology known as the “Six Rs” as the foundation for their application assessment and modernization process. With the growing prevalence of cloud native, we have introduced a seventh “R”, called “Reimagine” for the custom application and data landscape.
Accenture’s “Seven Rs” methodology
Retire: Determine which applications are no longer needed and can be decommissioned.
Retain: Identify applications which lack a cloud business case or will be too challenging to migrate.
Replace: Look for custom-built applications which can be switched out with market-leading SaaS or COTS solutions (such as those from Salesforce, ServiceNow, SAP, Oracle and others) whose innovation cycles will inevitably outpace in-house development.
Rehost: Take advantage of the accelerated innovation offered by the hyper-scalers, as well as hybrid and private cloud infrastructure solutions, by rehosting applications and data that can be moved seamlessly.
Replatform: Replatform those that require some modification.
Refactor: Refactor those that need significant changes (digitally decoupling where appropriate).
Reimagine: Step back and rethink how applications and data can be rebuilt using cloud native methodologies, patterns and technologies to dramatically enhance the value IT can deliver to the business in order to differentiate, better compete and disrupt markets.
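The assessment step above can be sketched as a simple rules-based triage. This is a minimal illustration only: the attribute names and the decision order are assumptions for the sketch, not the actual methodology.

```python
# Illustrative triage of an application portfolio into the "Seven Rs".
# Attribute names and rule order are hypothetical, for illustration only.

def classify(app: dict) -> str:
    if not app.get("still_needed", True):
        return "Retire"
    if app.get("no_cloud_business_case") or app.get("migration_too_complex"):
        return "Retain"
    if app.get("saas_equivalent_exists"):
        return "Replace"
    if app.get("strategic_differentiator"):
        return "Reimagine"  # rebuild cloud native to differentiate
    effort = app.get("change_effort", "none")  # none / some / significant
    if effort == "none":
        return "Rehost"
    if effort == "some":
        return "Replatform"
    return "Refactor"

portfolio = [
    {"name": "legacy-reports", "still_needed": False},
    {"name": "crm", "saas_equivalent_exists": True},
    {"name": "order-engine", "strategic_differentiator": True},
    {"name": "intranet", "change_effort": "none"},
]
plan = {app["name"]: classify(app) for app in portfolio}
```

In practice the assessment weighs many more factors (data gravity, licensing, compliance), but a first-pass triage of this shape helps structure the portfolio review.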
Hybrid cloud: Optimizing workload and data placement
As companies embark on their application and data modernization programs and look to the cloud and infrastructure required to support their plans, most land on a hybrid cloud strategy with application and data workloads balanced across both public and private clouds.
Hybrid deployments combine the hyper-scalers’ public cloud benefits of innovation, speed, consumption and scale with private benefits of regulatory compliance, performance, data gravity and recouping of existing investments. Hybrid also enables increasingly dynamic workload placement over time, allowing them to optimize for performance, service levels, security and compliance, and cost.
In developing a hybrid strategy, there are three key factors to consider.
Establishing a primary hyper-scaler is usually the best bet
The hyper-scalers offer significant benefits of agility and scale. But the real value of public cloud lies in the innovation and power of their PaaS solutions, including serverless computing (AWS Lambda or Azure Functions) or new AI capabilities (GCP’s TensorFlow).
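The serverless model referenced above can be sketched with an AWS Lambda-style handler in Python: a stateless function invoked per request, with no server to provision or manage. The event and response shapes below follow the common API Gateway proxy convention, simplified for illustration.

```python
import json

# Minimal AWS Lambda-style handler: stateless, invoked on demand,
# torn down after it returns. Event/response shapes are a simplified
# version of the API Gateway proxy convention.
def handler(event, context=None):
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Invoked locally with `handler({"queryStringParameters": {"name": "cloud"}})`, it returns a 200 response; in production, the platform creates and destroys the execution environment around each burst of invocations.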
While digitally decoupled microservices allow companies to mix and match best-in-class PaaS solutions from multiple providers, such an approach has its limitations. These include splitting and diluting developer skills across platforms; incurring additional operational complexity and cost, which raises total cost of ownership; and, due to data gravity, limited sharing of large datasets over expensive network connections. Further, when a company directs the majority of its cloud spend (51 percent or more) to a single hyper-scaler, that provider can offer better incentives and discounts. For these reasons, it's usually preferable to select a primary hyper-scale provider to maximize innovation, improve investment in skills, minimize operational complexity, and optimize TCO across the whole of the cloud estate.
Hybrid offers the best of both worlds
Application and data workload placement are determined by a number of business and technical factors. In the public cloud, companies can take advantage of rapid innovation cycles, spin up new environments faster, rapidly scale out deployments and leverage consumption-based OpEx models.
Many companies, however, will also need to evolve their data centers into private clouds that replicate many of the attributes and benefits of the hyper-scalers to accommodate other business requirements. For some, this may be driven by regulatory requirements (such as GxP for pharma, HIPAA for healthcare, as well as GDPR). For others, it may be to support business-critical and highly transactional applications/datasets that have significant scale processing requirements – which may be difficult to optimize in public, shared environments. It may often be necessary to co-locate other applications that need to integrate with private cloud applications and large datasets due to latency, bandwidth and cost considerations. Lastly, many companies will still need to recoup existing investments in their data centers and equipment.
In addition to these drivers, Hybrid Cloud can also enhance disaster recovery by backing up Private Clouds and data centers to the Public Cloud, and it enables companies to arbitrage application and data workload placement between private and public environments as costs and requirements change.
As a result of these varied needs, many companies look to Hybrid Cloud to optimize application and data workload placement across Public and Private Clouds to maximize innovation, performance and contain costs—offering the “best of both worlds”.
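The placement logic described in this section can be sketched as a simple policy: hard constraints (regulation, data gravity, latency) pin a workload to the private cloud, and everything else goes wherever it runs cheaper. All attribute names and cost figures below are hypothetical illustrations, not a real pricing model.

```python
# Illustrative hybrid workload placement policy. Hard constraints pin
# a workload to private cloud; unconstrained workloads go to the
# cheaper venue. Attributes and costs are hypothetical.

def place(workload: dict) -> str:
    pinned = (
        workload.get("regulated")             # e.g. GxP, HIPAA, GDPR residency
        or workload.get("data_gravity")       # must sit beside a large private dataset
        or workload.get("latency_sensitive")  # tight coupling to on-prem systems
    )
    if pinned:
        return "private"
    if workload["public_cost"] <= workload["private_cost"]:
        return "public"
    return "private"

workloads = [
    {"name": "patient-records", "regulated": True, "public_cost": 5, "private_cost": 9},
    {"name": "web-frontend", "public_cost": 3, "private_cost": 7},
    {"name": "batch-analytics", "public_cost": 8, "private_cost": 6},
]
placement = {w["name"]: place(w) for w in workloads}
```

A real placement exercise also weighs service levels, security posture and egress costs, and is revisited over time as the hybrid estate evolves.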
Lines are blurring between public and private clouds
In the past, betting on a hyper-scaler meant picking Public over Private. That is no longer the case. To support regulatory, performance, and data gravity requirements, the hyper-scalers are now offering Private Cloud carveouts in Public environments. VMware Cloud on AWS (VMC), Azure VMware Solution (AVS), and Google's SAP, Oracle and Bare Metal solutions are good examples. Similarly, the hyper-scalers have been working on Private Cloud extensions, pushing their PaaS and IaaS solutions to their customers' data centers or even further to support manufacturing and other OT use cases. Examples include Microsoft's Azure Stack, AWS Outposts, Google's Anthos, and Alibaba's Apsara. Additionally, platforms like Red Hat's OpenShift and Cloud Foundry have created what are essentially hybrid environments by introducing a heterogeneous technology layer at the foundational level, enabling connectivity across disparate technology platforms. This blurring of Public and Private under a Hybrid Cloud umbrella is likely to accelerate in the future. Over time, we will no longer see a delineation between "public" and "private" but instead, between "dedicated" and "shared."
Another consideration: Optimizing the run
Our Future Systems research shows that to drive innovation and agility, 97 percent of Leaders adopt critical technologies that allow for decoupling, such as microservice architectures, containers and Kubernetes, compared with only 30 percent of Laggards.
However, in order to harness the value of these technologies, a key part of any company’s cloud transformation requires rethinking how to operate. Historically, companies managed their data centers and networks separately from the data, applications and business services those data centers and networks supported. Today, this siloed approach is ineffective and perhaps even harmful.
Consider how much more complicated technology landscapes are becoming. As a result of digital decoupling and the adoption of microservices, applications are evolving toward more complex patterns and topologies, increasingly requiring more dynamic underlying compute, storage and networking infrastructure. Cloud native patterns and technologies are typically far more ephemeral than traditional environments: containers may last hours, minutes or even seconds, compared with servers and virtual machines that may remain in production for months or years. The extreme case is serverless computing, where functions are spun up on demand to execute a service and are torn down immediately upon completion. Not only are these environments more ephemeral, but they also scale up and down more rapidly as Kubernetes controllers orchestrate their deployments.
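The rapid scale-out and scale-in that Kubernetes controllers drive follows a simple published rule: the Horizontal Pod Autoscaler computes desired replicas as the ceiling of current replicas times the ratio of the current metric to its target. A sketch of that core rule (the min/max bounds here are illustrative configuration):

```python
import math

# Core scaling rule of the Kubernetes Horizontal Pod Autoscaler:
# desired = ceil(current_replicas * current_metric / target_metric),
# clamped to configured min/max replica bounds (illustrative values).
def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(max_replicas, desired))

# CPU utilization spikes to 180 percent against a 60 percent target
# across 4 pods: the controller scales out (capped at max_replicas).
desired_replicas(4, 180, 60)   # -> 10
# Utilization falls to 20 percent: the controller scales in.
desired_replicas(4, 20, 60)    # -> 2
```

The real controller adds stabilization windows and tolerance bands to avoid flapping, but this ratio is what makes the environment expand and contract far faster than any manually provisioned fleet.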
At the same time business users are far more focused on outcomes than the underlying applications, data and infrastructure that enable them. The boundaries between the services delivered to customers or the business, and the applications, data and infrastructure that deliver them are blurring. Consequently, it is critical to manage “services” and technologies more seamlessly. To effectively manage and optimize this increasingly complex landscape, enterprises need to rethink how they construct and operate new services to improve innovation and agility, enhance service levels, streamline operations and minimize costs while seamlessly managing the applications, cloud and infrastructure that deliver these services. We refer to this as “Build Different/Run Different.”
To effectively operate in the New, there are four activities to focus on:
Standardize services and infrastructure
In order to take advantage of new capabilities and drive operational efficiencies in a hybrid cloud, it's important to standardize services and the underlying infrastructure that supports them. This includes not only aligning the organization around an enterprise-wide IT services catalog, but also standardizing hardware and platforms to allow for increased automation.
Create an agile cloud operating model
Dual velocity application delivery is critical. That means supporting the agile delivery of cloud-native applications while maintaining legacy code bases that are increasingly exposed through microservices. This operating model should also deliver on the promise of DevSecOps by tightly aligning and integrating cloud, infrastructure, security and operational requirements.
Upskill organization and transform culture
New application patterns, technologies and operating models require new skills. This includes not only upskilling developers, but also transforming “eyes on glass” operators into developers who write data ingestion scripts, create analytics algorithms and visualizations, develop automation scripts and tune AI engines.
Use hyper-automation and applied intelligence
Companies should be investing in platforms and tools that deliver the hyper-automation needed to drive agility, streamline operations and minimize cost. Such hyper-automated intelligent operations—sometimes referred to as AIOps—leverage analytics to drive predictive operations, automation to eradicate unnecessary tasks and AI to continually optimize environments.
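The analytics side of AIOps can be sketched as statistical anomaly detection over an operational metric: flag samples that deviate sharply from a rolling baseline so that operators, or automation, can act before users notice. The window size, threshold and sample data below are illustrative, not production tuning.

```python
import statistics

# Illustrative AIOps-style anomaly check: flag metric samples that sit
# more than `k` standard deviations from the mean of a trailing window.
# Window size and threshold are illustrative, not a production tuning.
def anomalies(samples, window=5, k=3.0):
    flagged = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline) or 1e-9  # avoid divide-by-zero
        if abs(samples[i] - mean) / stdev > k:
            flagged.append(i)
    return flagged

latency_ms = [102, 99, 101, 100, 103, 98, 101, 400, 99, 102]
anomalies(latency_ms)   # flags index 7, the 400 ms spike
```

Production AIOps platforms go much further, correlating events across layers and triggering automated remediation, but the principle is the same: learn a baseline, detect deviation, act.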
Accenture’s “Build Different/Run Different” philosophy is explained in more detail in a dedicated report, Cloud Management: Time to “Run Different.”
Deploy and manage: Getting started with hybrid cloud
To effectively deploy and manage the next generation of IT services, enterprise strategies must be application and data led, cloud and infrastructure enabled, secure and optimized for the operational run. For these reasons, hybrid application and data workload placement will be the natural choice for most large enterprises.
So how should they get started?
First, make sure there are well-defined application modernization and data strategies and road maps to help drive the cloud strategy. This should include making a bet on the organization’s preferred public or hybrid PaaS strategy for developing applications.
Then start developing a hybrid cloud strategy and road map aligned to the application and data strategy, with workload placement simultaneously optimized for performance and cost. The strategy should also integrate the infrastructure road map to ensure critical enabling capabilities are in place at the time needed.
At the same time, start transforming operations to provide tighter integration between applications, data, cloud, infrastructure and security, as well as building on a cloud operating model powered by applied intelligence or AIOps (analytics, automation and AI).
Look for technology and service partners with the skills and experience that can help the organization successfully navigate to the New and deliver real value to the business faster.