In the past, a company could set a strategic direction that would last a decade. Today, the blistering pace of technological change makes that impossible. This is particularly true of Artificial Intelligence (AI), and businesses that fail to keep pace with AI development risk falling by the wayside.
New roles emerge
AI will disrupt traditional workforce structures. What we’re seeing is that while AI is making certain roles redundant, this won’t necessarily translate into worker redundancies. Rather, people will retrain and find new and more rewarding positions.
Take call centre operations. Many day-to-day call centre transactions, such as mortgage or loan applications, will soon be serviced by AI-enabled automated agents. However, these agents won’t be coded: they’ll be trained, learning how to operate from example interactions rather than hand-written rules. This in turn will give rise to new jobs for human employees, who will train AI systems to make sure they run optimally. The goal will be to drive more authentically "human" and personalised interactions between man and machine.
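The coded-versus-trained distinction can be made concrete with a toy sketch. The intent names, example utterances, and word-counting "model" below are all invented for illustration; a real call-centre agent would use a proper machine-learning pipeline, but the contrast is the same: the coded agent needs a new rule for every case, while the trained agent generalises from labelled examples.

```python
from collections import Counter, defaultdict

def coded_intent(utterance: str) -> str:
    """A 'coded' agent: behaviour is a fixed set of hand-written rules."""
    text = utterance.lower()
    if "mortgage" in text:
        return "mortgage_application"
    if "loan" in text:
        return "loan_application"
    return "unknown"

def train_intent_model(examples):
    """'Training': count which words co-occur with which intent labels."""
    word_counts = defaultdict(Counter)
    for utterance, intent in examples:
        for word in utterance.lower().split():
            word_counts[intent][word] += 1
    return word_counts

def trained_intent(model, utterance: str) -> str:
    """A 'trained' agent: score each intent by its learned word counts."""
    words = utterance.lower().split()
    scores = {intent: sum(counts[w] for w in words)
              for intent, counts in model.items()}
    return max(scores, key=scores.get)

# Invented labelled examples standing in for real transcripts.
examples = [
    ("i want to apply for a mortgage", "mortgage_application"),
    ("help me remortgage my house", "mortgage_application"),
    ("can i get a personal loan", "loan_application"),
    ("what rate is your car loan", "loan_application"),
]
model = train_intent_model(examples)
print(trained_intent(model, "my house needs a mortgage"))  # mortgage_application
```

Note that the trained agent handles a phrasing it never saw verbatim; improving it means adding examples, not code, which is where the new human training roles come in.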
Skills gaps proliferate
In our AI future, one thing won’t change: skilled human workers will be valued. However, the scope of these skills will change fundamentally. We’re increasingly seeing that the rate at which the market can absorb new technology is much faster than the rate at which people can learn new skills. As a result, new skills gaps emerge every time a technology launches. Businesses need to be aware of this and work to mitigate it through partnerships and training, using AI to free up workers to take on new tasks.
Ethics by design

As AI gets smarter and machines start to think for themselves, the ethicality of their use must be considered. For example, there was a case just last year when the self-learning chatbot of a global technology organisation had to be removed from Twitter for picking up racist language from the human users of the service. This is something to consider as chatbots become part of the workforce—we have a responsibility to think through the full implications of machine learning and ensure that we programme AI to learn in the right way.
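"Learning in the right way" implies a gate between what users say and what the bot trains on. The sketch below is a deliberately minimal illustration with invented names: real systems would use a trained toxicity classifier rather than a word blocklist, but the principle of curating the corpus before the bot ingests it is the same.

```python
# Placeholder blocked terms; a production system would use a learned classifier.
BLOCKLIST = {"badword1", "badword2"}

def is_safe(message: str) -> bool:
    """Reject messages containing blocked terms before the bot learns from them."""
    return not any(word in BLOCKLIST for word in message.lower().split())

def curate(corpus, incoming):
    """Add only vetted messages to the bot's training corpus."""
    corpus.extend(m for m in incoming if is_safe(m))
    return corpus

corpus = curate([], ["thanks for your help", "you badword1"])
print(corpus)  # ['thanks for your help']
```

The design point is that the filter sits upstream of learning: a bot that vets its inputs cannot be taught abusive language simply by being exposed to it.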
The echo chamber
Another potential risk area is extreme personalisation. It’s great that the services we use ‘know’ us as individuals and serve us the content we like, but there’s a danger we’ll only be fed information and opinions we already agree with, potentially leaving us with a skewed view of the world. Facebook’s algorithms are great at anticipating the sort of news stories that’d interest us, but isn’t it also important that we’re exposed to new ideas; ones that we don’t necessarily agree with? I think this issue needs to be addressed if we’re to avoid the sort of polarisation we saw during Brexit and the US election.
The future is automated
We’re at a crucial point in the history of AI. The technology’s about to reach a tipping point and play an increasingly sophisticated role in business services. Forget today’s chatbots—AI will become more natural and convincing, so that soon it’ll be indistinguishable from having a conversation with a human—albeit a very well informed and efficient human. However, there’ll also be pitfalls to avoid, and businesses must start planning now to ensure that when they do implement AI, they do so in a way that’s completely ethical.