May 29, 2019
Beyond CPUs: The heterogeneous future
By: Kirby Linvill and Bill (Zhijie) Wang

Collaboration is key to innovation. That’s why our research efforts at Accenture Labs involve partnerships and knowledge exchange with other colleagues throughout Accenture, as well as those from academia and enterprise. Members of our systems & platforms R&D group recently visited Hewlett Packard Enterprise to discuss a key challenge in enterprise computing: supporting architectures that combine both general-purpose and specialized compute.

Organizations are beginning to apply the power of specialized compute to their business challenges, through solutions like GPUs (graphics processing units), FPGAs (field-programmable gate arrays) and even neuromorphic processors. As of November 2018, an unprecedented 25 percent of the top 500 fastest supercomputers in the world leveraged Nvidia GPUs to achieve their top-shelf speeds, including the two fastest (Summit and Sierra). Microsoft routinely uses FPGAs to accelerate Bing results, allowing it to run models more than 10 times as large as those it previously ran on CPUs while incurring less than 10 times the latency. This is what allows for intelligent search at global scale.

Each of these solutions offers advantages that make it superior to traditional general-purpose compute in solving particular types of problems. And there are more types of specialized compute on the way: Intel’s neuromorphic processor, inspired by the spiking neural architecture of the human brain, was recently shown to run in real time while consuming less than one-eighth the energy per inference of a CPU, potentially enabling efficient execution of sophisticated models in edge environments. Down the road, quantum computers are expected to allow for the accurate simulation of molecules that would be impossible using conventional CPUs, ultimately reducing the number of bench experiments required to create a new drug.

Moving forward, companies will increasingly combine specialized solutions like these with more general-purpose CPUs. These heterogeneous architectures will become the standard in enterprise computing: By assigning particular parts of an overall task to the different types of processors and accelerators best suited to tackle their varying compute needs, organizations can solve large problems faster and with less power consumption than before. But some of these specialized solutions process data so quickly that the inputs, rather than the computations, become the bottleneck. In short, systems can’t yet feed data fast enough to realize the full benefits of heterogeneous architectures.
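
To make that division of labor concrete, here is a minimal Python sketch of one such split, assuming a machine with an Nvidia GPU and the CuPy array library available; the functions, data sizes and the particular CPU/GPU assignment are illustrative assumptions, not a recipe from HPE or Accenture:

```python
# A minimal sketch of heterogeneous task assignment: branchy preprocessing on
# the CPU, dense linear algebra on a GPU. CuPy is assumed to be installed; if
# it is not, everything falls back to NumPy so the sketch still runs.
import numpy as np

try:
    import cupy as cp          # GPU-backed, NumPy-compatible arrays (assumption)
    GPU_AVAILABLE = True
except ImportError:
    cp = np                    # graceful fallback: run the "GPU" stage on the CPU
    GPU_AVAILABLE = False

def preprocess_on_cpu(raw):
    """Irregular, data-dependent cleanup: a poor fit for a GPU, fine for a CPU."""
    return np.clip(raw, 0.0, 1.0)

def heavy_math_on_gpu(features, weights):
    """Dense matrix multiply: the regular, massively parallel work GPUs excel at."""
    f = cp.asarray(features)   # copy the data into the accelerator's memory
    w = cp.asarray(weights)
    scores = f @ w             # runs on the GPU when CuPy is present
    return cp.asnumpy(scores) if GPU_AVAILABLE else scores  # copy the result back

raw = np.random.rand(4096, 1024)
weights = np.random.rand(1024, 64)
scores = heavy_math_on_gpu(preprocess_on_cpu(raw), weights)
print(scores.shape, "computed on", "GPU" if GPU_AVAILABLE else "CPU")
```

Note the explicit copies into and out of the accelerator’s memory: at sufficient scale, that data movement is exactly the bottleneck described above.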

Which brings us back to our visit with Hewlett Packard Enterprise: HPE is developing a protocol to allow for direct communication and shared memory between entirely different types of processors, a key step toward addressing this challenge at scale. You can think of this benefit in terms of the assembly-line style preparation you find at fast-food restaurant chains: When you order a burrito, it is assembled in steps by various workers. One employee heats up the tortilla, the next employee adds the rice, another adds your meat (or veggies) and yet another adds your toppings. Computing workflows today often look similar, in which a task is passed assembly-line style from one accelerator to the next.
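
As a rough sketch of that assembly-line pattern, the Python below models each worker as a pipeline stage that copies the work item at every hand-off. The stage names simply mirror the burrito analogy, and the mapping to CPU, FPGA and GPU in the comments is illustrative, not a description of any particular system:

```python
# A rough analogy for today's assembly-line workflows: each stage (think CPU,
# FPGA, GPU) works in its own memory, so every hand-off copies the work item.
from copy import deepcopy

def heat_tortilla(order):   # stage 1: e.g., CPU-side decoding and preparation
    return {**order, "tortilla": "warm"}

def add_rice(order):        # stage 2: e.g., an FPGA streaming transformation
    return {**order, "rice": True}

def add_filling(order):     # stage 3: e.g., GPU-heavy number crunching
    return {**order, "filling": order.get("requested_filling", "veggies")}

PIPELINE = [heat_tortilla, add_rice, add_filling]

def run_pipeline(order):
    for stage in PIPELINE:
        # The deepcopy stands in for serializing and moving data between devices,
        # which is where time and energy go as the individual stages get faster.
        order = stage(deepcopy(order))
    return order

print(run_pipeline({"requested_filling": "chicken"}))
```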

To go back to the fast-food analogy, this is an efficient process given the constraints of the kitchen, but there’s still room for improvement. What if we didn’t have to worry about space in these assembly lines and instead all employees (or, in the case of computing, accelerators) could somehow work on the same burrito (or task) at the same time? This is just the style of processing that HPE’s Memory-Driven Computing work, along with the Gen-Z consortium it helped found, could enable. When carefully orchestrated, this approach offers massive performance improvements: HPE has already seen 85x gains (albeit so far only using CPUs). We’re now exploring potential use cases with the security R&D group in our Washington, DC Labs to leverage these heterogeneous-architecture-enabling innovations as they mature.
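
By way of analogy only, a short Python sketch using the standard library’s shared memory can hint at what changes when every worker sees the same pool of memory rather than receiving its own copy. This is ordinary multiprocessing on one machine, not HPE’s Memory-Driven Computing or the Gen-Z protocol, and the worker logic is made up for illustration:

```python
# An analogy only (not Gen-Z or Memory-Driven Computing): several worker
# processes attach to one shared buffer and update their own slice of it in
# place, so no copies are passed between them. Requires Python 3.8+.
from multiprocessing import Process, shared_memory
import numpy as np

SHAPE, DTYPE = (4, 1_000_000), np.float64   # four workers, one row each

def worker(shm_name, row):
    """Attach to the shared pool by name and do this worker's share in place."""
    shm = shared_memory.SharedMemory(name=shm_name)
    data = np.ndarray(SHAPE, dtype=DTYPE, buffer=shm.buf)
    data[row] *= 2.0                         # work happens directly in shared memory
    shm.close()

if __name__ == "__main__":
    nbytes = int(np.prod(SHAPE)) * np.dtype(DTYPE).itemsize
    shm = shared_memory.SharedMemory(create=True, size=nbytes)
    data = np.ndarray(SHAPE, dtype=DTYPE, buffer=shm.buf)
    data[:] = 1.0

    workers = [Process(target=worker, args=(shm.name, row)) for row in range(SHAPE[0])]
    for p in workers:
        p.start()
    for p in workers:
        p.join()

    print(data.sum())                        # 8,000,000.0: every row doubled in place
    shm.close()
    shm.unlink()
```

The same ingredients, a single pool of memory plus workers that operate on it directly, are what a fabric like Gen-Z aims to provide across entirely different kinds of processors rather than across processes on one CPU.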

Our newest point of view explores the potential of specialization at scale through heterogeneous architectures. If you’re not already considering this approach in your planning and design processes, check out our work to learn more—and keep watching this space to see how you can harness the different types of emerging infrastructures for your business.

Download “Embracing Computational Variety” from Accenture Labs to learn more about heterogeneous architectures today. For more information about the work we’re doing in this space, contact Teresa Tung.
