
Blog - Cloud Computing: Commentary from our cloud experts around the globe

September 19, 2017
Cloud optimization: Doing the job right
By: Anthony “TJ” Johnson

With cloud adoption continually growing, companies really need to optimize their cloud presence—both during cloud migration and ongoing cloud management—to ensure they’re taking full advantage of cloud solutions.

Many companies rely solely on third-party tooling reports to provide recommendations for optimization, and that's a big mistake. These tools' algorithms generally account only for the technology context of a workload when generating recommendations.


Determine the right course of action

To determine the right course of action you also need to look at the business and code context of the workload. Here are two examples:

  1. One is a report that recommends purchasing a typical 12-month committed-period discount (AWS Reserved Instance, Google Committed Usage and Azure Compute Pre-purchase). Because it accounts only for the technology context of the workload, it ignores the business perspective that the product the workload delivers services for will soon be sunset. So even though the technology context calls for a 12-month committed-period contract, the business context would recommend against buying it (a rough sketch of this kind of check follows this list).

  2. Another is that most tools represent third-party interpretations of the data and may not reflect the relevant information the business requires. Without that information, it’s difficult to make that data actionable and determine the correct order of implementation to maximize savings and operational efficiencies.
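As a rough illustration of the first example, here's a minimal sketch of how a business-context check might veto a technology-only recommendation. The field names, dates and figures are hypothetical assumptions for the sketch, not any particular tool's schema:

```python
from datetime import date

# Hypothetical shapes: a tool's recommendation and the business facts known
# about the workload. Field names are illustrative, not a vendor's schema.
recommendation = {
    "workload": "orders-api",
    "action": "purchase_committed_discount",   # e.g., a 12-month committed-period purchase
    "term_months": 12,
    "estimated_annual_savings": 8400,
}

business_context = {
    "orders-api": {
        "product_sunset_date": date(2018, 3, 31),  # product retires before the term ends
    },
}

def apply_business_context(rec, context, today=date(2017, 9, 19)):
    """Keep a candidate recommendation only if the workload will outlive the commitment."""
    facts = context.get(rec["workload"], {})
    sunset = facts.get("product_sunset_date")
    if rec["action"] == "purchase_committed_discount" and sunset:
        months_remaining = (sunset.year - today.year) * 12 + (sunset.month - today.month)
        if months_remaining < rec["term_months"]:
            return None  # business context vetoes the purchase
    return rec

actionable = apply_business_context(recommendation, business_context)
print(actionable)  # None: the 12-month commitment outlives the product, so don't buy it
```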

The reality is that too often we see companies making myopic cloud optimization decisions based on a single viewpoint or incomplete information, resulting in errors and missed opportunities.


So what’s the answer?

We can’t overstate the importance of completing a fundamental task before you do anything to optimize a cloud estate: Develop and implement a global tagging strategy.
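What that looks like in practice varies by organization, but at minimum it means a small set of required tags enforced on every resource, so cost and usage data can later be tied back to business context. Here's a minimal sketch; the tag keys are illustrative assumptions, not a prescribed standard:

```python
# Illustrative required tags for a global tagging strategy; the keys are
# assumptions for this sketch, not a mandated schema.
REQUIRED_TAGS = {"application", "environment", "cost_center", "owner", "product"}

def untagged_report(resources):
    """Return resources missing any required tag, so gaps can be fixed
    before optimization reports are trusted."""
    findings = []
    for res in resources:
        missing = REQUIRED_TAGS - set(res.get("tags", {}))
        if missing:
            findings.append((res["id"], sorted(missing)))
    return findings

resources = [
    {"id": "i-0abc", "tags": {"application": "orders-api", "environment": "prod",
                              "cost_center": "4711", "owner": "team-a", "product": "storefront"}},
    {"id": "i-0def", "tags": {"application": "batch-etl", "environment": "dev"}},
]

for resource_id, missing in untagged_report(resources):
    print(f"{resource_id} is missing tags: {', '.join(missing)}")
```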

From there, Accenture has identified several optimization categories that include both “use” and “buy” tactics as the basic approach to cloud optimization.

Our model is built around optimization tactics within the optimization categories, and the outcome of these optimization tactics is “candidate recommendations.” When the full workload context—business plus technology plus code—is applied, the candidate recommendations are winnowed to a list a company can act on.

The figure below illustrates at a high level a few prescriptive processes for both cloud compute and storage optimization categories and their related optimization tactics. As we follow this prescriptive optimization process, the pool of candidate recommendations improves: First, we eliminate waste and then focus on the remaining cloud optimization levels and related tactics.

Each level within the optimization categories has its own strategy and benefits that return value. Importantly, optimization is not a one-time activity; it should be continuous, based on optimization cycles with a unique cadence.
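To make that ordering concrete, here's a minimal sketch of the prescriptive flow. The tactic names are illustrative assumptions condensed from the description above, not the full set of Accenture's optimization categories:

```python
# A condensed sketch of the prescriptive ordering: eliminate waste first, then
# work through the remaining "use" and "buy" tactics. Tactic names are
# illustrative assumptions, not the complete category model.
OPTIMIZATION_ORDER = [
    ("eliminate_waste",      "use"),   # e.g., terminate idle instances, delete unattached storage
    ("right_size",           "use"),   # match instance/volume size to observed demand
    ("schedule_non_prod",    "use"),   # power down non-production workloads off-hours
    ("commitment_discounts", "buy"),   # committed-period purchases, applied last
]

def order_candidates(candidates):
    """Sort candidate recommendations so waste elimination is acted on first;
    later 'buy' decisions are then sized against the already-reduced footprint."""
    rank = {tactic: i for i, (tactic, _) in enumerate(OPTIMIZATION_ORDER)}
    return sorted(candidates, key=lambda c: rank.get(c["tactic"], len(rank)))

candidates = [
    {"workload": "orders-api", "tactic": "commitment_discounts"},
    {"workload": "batch-etl",  "tactic": "eliminate_waste"},
    {"workload": "orders-api", "tactic": "right_size"},
]

for c in order_candidates(candidates):
    print(c["tactic"], "->", c["workload"])
```

Sequencing the tactics this way is what keeps the pool of candidate recommendations improving: commitments are only purchased for the footprint that remains after waste is gone.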


"Cloud optimization is not a one-time activity. It should be continuous, with a unique cadence." TJ Johnson via @AccentureOps
 
 

The roadblocks to optimization

When looking to optimize their cloud environment, companies can face big roadblocks:

  1. Competing priorities can torpedo optimization efforts. We’ve found that almost one-third of optimization recommendations don’t get implemented for this reason.

  2. Savings aren’t considered significant enough to act on, but that threshold differs by organization and what it deems material. Some enterprises consider $1,000 or less in annual savings per instance or VM to be immaterial; others find an annual savings of $55 per instance or VM to be substantial (a simple materiality filter is sketched after this list).
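Because materiality is organization-specific, it helps to make the threshold an explicit, configurable parameter rather than a judgment call buried in a spreadsheet. Here's a minimal sketch; the dollar figures simply echo the examples above:

```python
# Filter candidate recommendations by an organization-specific materiality
# threshold. The $1,000 and $55 figures mirror the examples in the text.
def material_recommendations(candidates, annual_savings_threshold):
    return [c for c in candidates if c["estimated_annual_savings"] >= annual_savings_threshold]

candidates = [
    {"workload": "orders-api", "estimated_annual_savings": 1200},
    {"workload": "batch-etl",  "estimated_annual_savings": 55},
]

print(len(material_recommendations(candidates, annual_savings_threshold=1000)))  # 1
print(len(material_recommendations(candidates, annual_savings_threshold=50)))    # 2
```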


But there’s arguably an even bigger obstacle: basing decisions on third-party tooling that provides technology-context-only recommendations built on potentially misleading information.

An organization may, for example, run a report that identifies $1,000,000 in savings. But after reviewing the data, applying business and code context to the workload, and implementing the optimization recommendations, the realized results may be closer to $500,000-$600,000. That’s why it’s important to treat recommendations as candidates until you can review the business, technology and code context of a workload running in the cloud.

The bottom line is that running a report is only the beginning of the optimization journey—you can’t read a report and take recommendations at face value. Successful optimization requires:

  • global tagging

  • an understanding of workload context

  • proper tooling and data sets

  • knowing what to look for in the data

  • making the data actionable

The payoff can be significant: With the right skills and approach to optimization, organizations can cut their spend on cloud applications and infrastructure by as much as 30 percent through efficiency gains and other value-generating opportunities. As disruption continues to change the competitive dynamics of virtually all industries and markets, that’s certainly a welcome addition to the bottom line.
