Today, it seems everything is getting “smart”. From home appliances to vehicles to industrial equipment, more and more products and services are using artificial intelligence—AI—to interpret commands, analyze data, recognize patterns, and make decisions for us.
It’s easy to see why AI-powered products are so popular. Smart interfaces let people control devices by voice and gesture rather than buttons or touchscreens. It’s a much more natural way to interact with a device.
AI can also make a product more autonomous, relieving us from tedious work or repetitive activities. On top of this, smart products can enable data analysis and continuous optimization, such as monitoring and alerting us about our health or predicting when a piece of equipment needs to be serviced or replaced.
Taking AI to the edge
The catch? The successes of smart devices to date are fueling demand for ever more sophisticated AI-powered experiences. And we’re starting to hit the limits of what existing hardware can deliver.
Much of the processing that powers today’s smart products is actually handled remotely (in the cloud or a data center) where there’s enough computing power to run the necessary algorithms. This means a network connection is essential, and it can also add latency as data is transferred back and forth. There are also real and perceived data privacy concerns when sending some kinds of data to the cloud.
These considerations point to the advantages of putting more of the smart processing within the device itself. This is known as “edge AI” since the processing is done in devices at the edge of the network rather than in a centralized cloud.
But edge devices are typically constrained to be compact, and many are mobile, which usually means battery-powered. How can we run power-hungry AI algorithms on smaller, low-power edge devices? For that, we’re going to need some new thinking about how AI hardware is designed.
Time for some brain power
This is where neuromorphic computing comes in. It’s a new kind of computing architecture based on the way a biological brain processes information.
Consider the fact that the average human brain contains between 80 and 100 billion neurons, each of which works highly efficiently and asynchronously to provide massive parallel processing. It’s this combination of power and efficiency that enables us to be so smart without needing to continuously consume vast amounts of energy.
One of the most promising forms of neuromorphic computing uses “spiking neural networks” to emulate the way a biological neuron fires (or “spikes”) to transmit a signal before returning to a silent state.
The result is a system that is far more power-efficient than the artificial neural networks used for most AI systems today. And that efficiency opens up the possibility of carrying out much more AI processing on smaller, low-power devices at the network edge.
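To make the spiking behavior concrete, here’s a minimal sketch of a leaky integrate-and-fire neuron, one of the simplest models used in spiking neural networks. The parameter values are illustrative only and aren’t tied to any particular chip or framework: the neuron integrates input while its potential leaks away, fires a spike when it crosses a threshold, and then returns to a silent reset state.

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron over a sequence of inputs.

    Returns the membrane potential at each step and the binary spike train.
    Parameter values are illustrative, not taken from any specific hardware.
    """
    v = v_reset
    potentials, spikes = [], []
    for i in input_current:
        v = leak * v + i      # potential leaks, then integrates the input
        if v >= threshold:    # threshold crossed: fire a spike...
            spikes.append(1)
            v = v_reset       # ...and return to the silent (reset) state
        else:
            spikes.append(0)
        potentials.append(v)
    return potentials, spikes

# A steady input drives periodic spikes; with no input, the neuron stays
# silent, which is where the energy savings come from.
_, spikes = lif_neuron([0.3] * 20)
```

Note how event-driven this is: between spikes the neuron does essentially nothing, unlike a conventional artificial neuron that computes on every input, every cycle.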
In fact, in addition to high efficiency, neuromorphic systems have several key advantages:
- They have low latency. Neuromorphic systems are ideal for processing continuous streams of data since there is no need to transfer the data elsewhere for analysis.
- They’re adaptive. Neuromorphic system architectures let devices adapt to changes in context.
- They learn quickly. Recent advances in training neuromorphic systems have enabled rapid learning from little data—beyond what most conventional AI systems can achieve.
Some way to go … but the journey will be worth it
We’ve already seen a lot of progress in scaling and industrializing neuromorphic architectures. Accenture Labs has been experimenting with Intel’s Loihi system since 2019, for example.
But there are still challenges to resolve in building complete neuromorphic solutions. For a start, existing machine learning algorithms won’t run directly on neuromorphic hardware, so some modification or rework will be needed for them to take advantage of a spiking architecture.
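One example of the kind of rework involved: conventional networks pass continuous activation values between layers, while spiking hardware passes discrete spike events, so activations typically have to be converted into spike trains. The sketch below shows one common approach, rate coding, where a normalized value becomes the probability of spiking at each time step. It’s a simplified illustration; real toolchains for neuromorphic chips provide their own, more sophisticated conversion utilities.

```python
import random

def rate_encode(value, n_steps=100, seed=0):
    """Rate-code a normalized activation (0..1) as a binary spike train.

    At each time step the neuron spikes with probability `value`, so the
    mean firing rate over the train approximates the original activation.
    Simplified for illustration; not any vendor's actual encoding API.
    """
    rng = random.Random(seed)
    return [1 if rng.random() < value else 0 for _ in range(n_steps)]

# Encode an activation of 0.25 and recover it from the firing rate.
train = rate_encode(0.25, n_steps=1000)
rate = sum(train) / len(train)  # approximately 0.25
```

The trade-off is visible even in this toy version: precision depends on how many time steps you spend, which is one reason mapping existing models onto spiking hardware takes genuine engineering effort.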
What’s more, to scale up adoption, we need the tools for developing, debugging, and deploying neuromorphic solutions to be as robust and user-friendly as the tools we already have for existing AI hardware. Finally, the processors themselves are still in development. There are several research-grade chips available now, but none yet being produced at the large industrial scale of CPUs and GPUs.
In an emerging field, challenges like these are to be expected. But as the technology matures, the potential business benefits are significant. Early use cases are likely to be in areas like adaptive robots, smart vehicles, and advanced consumer interfaces.
Neuromorphic is made for the edge
As AI-enabled services become ever more pervasive, organizations will need to think about how their computational strategies need to evolve to keep pace with edge technology—and with customer expectations.
Neuromorphic processors and sensors will fill an important niche in that strategy by enabling real-time intelligence and continuous onboard learning—on a tight energy budget—at the network edge.
For many businesses, that’s going to be a key source of future competitive advantage. So it’s important to understand the potential and start experimenting with neuromorphic now.
To learn more about this technology, read our recent report.