While not always well understood, the phrase “artificial intelligence” has been with us for many years.
Indeed, it was coined as long ago as 1956. The years since have been a rocky ride: waves of optimism followed by disappointment and periods of inertia (the “AI winters”). Each breakthrough lived up to only some of the hype it generated, and until now, none could carry the technology into the mainstream.
So what’s changed?
The technologies that enable AI have advanced significantly. Today’s AI applications can draw on virtually unlimited cloud processing power and ever-greater computational efficiency.
Add the falling cost of data storage to the mix, along with the emergence of open-source frameworks and the exponential growth in the data available for training AI models, and you’ve got a uniquely potent combination of technologies and capabilities.
And that combination is taking AI into the mainstream. The world’s leading technology giants, from Google, Amazon, Facebook, and Microsoft to Baidu, Alibaba, and Tencent, along with many others, are all sharply focused on AI.
As the critical mass continues to grow, and the technologies evolve to handle ever more complex datasets and support ever more sophisticated work, companies are industrializing and scaling their use of AI. Seventy-five percent of executives say AI will be actively implemented in their organizations within three years.