Let’s imagine you are assembling a team to implement an AI-driven customer service system for your organization.
First, you look to hire a talented data scientist. This is no easy hire. Data scientists, and other varieties of AI specialists, are in short supply while demand is red hot around the world. In fact, being a data scientist is now rated as the best job in America, according to Glassdoor, largely because of high salaries and an abundance of available openings.
The challenge of finding the right data and AI specialists makes it easy to overlook the other skills that are crucial to success with AI applications. Rich domain knowledge is a prerequisite for almost any AI solution, and most often this can be found within your organization. For an oil and gas project, you need oil and gas knowledge. For a legal project, you need legal skills. And for a hospital project, you need relevant healthcare expertise—the deeper, the better.
Technical skills, domain knowledge, what else?
Getting back to your team, let’s say you have now hired a top data scientist and assembled some seasoned domain experts from within your organization. You might not be done. There is often a range of other specialists you may need, some of whom are doing jobs that have only recently been created.
According to an estimate in a 2018 report by the World Economic Forum, “75 million jobs may be displaced by a shift in the division of labor between humans and machines,” and 133 million new roles may be added. Not all of these are technical, statistical and numerical, so bear in mind that your skill needs could be much broader than you think. AI projects have a habit of generating new and unexpected demand for knowledge and skills that are not readily available within the organization, so anticipating them is important.
For example, let’s say you are building a conversational chatbot service. How can you be sure it will not alienate certain groups of people by failing to adapt to local nuances? Could your chatbot be more effective if it adapted to common behavioral biases? What about changing legal, social or cultural sensitivities? Without the right people to make these adjustments, your chatbot could cause lost revenue, complaints or reputational harm.
The right skills: Better late than never
Many organizations only realize they have a skills deficit when they are already deep into development. Sometimes that is hard to avoid, particularly if your organization has little experience with similar AI projects. A recommended approach to minimize the chance for skills shortfalls is to break your AI project into three key stages and then plan your requirements accordingly:
Framing. At the outset, your AI specialists should work with domain experts to frame the business issue and understand the objectives and limitations from all sides. Detailed planning should extend past testing and implementation and into normal service—all the while, you should try to predict any additional skill requirements at every point.
Sometimes new needs are clear at the framing stage. For instance, we worked with a political think tank on an AI model that could identify discriminatory or politically incorrect language in published articles. It was obvious that we had nobody on our team who could provide informed insight into political debates and the evolving language of discrimination and bias, so we would need to find and access these skills early in the project.
Training. Known as the “knowledge transfer” phase, this is where knowledge from data and domain experts is “taught” to the AI model. Typically, this involves creating training data that shows how inputs should connect to outputs.
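In practice, that training data is often just a set of input-output pairs curated by domain experts. The sketch below is a hypothetical illustration (the utterances, intent labels, and the toy word-overlap "model" are all placeholders, not a real learner), showing how expert-labeled pairs encode the knowledge being transferred:

```python
# Hypothetical labeled training pairs for a customer-service intent
# model: each pair maps an input utterance to the output label a
# domain expert says it should produce.
TRAINING_DATA = [
    ("Where is my order?", "order_status"),
    ("I want my money back", "refund_request"),
    ("The app keeps crashing", "technical_issue"),
    ("Can I change my delivery address?", "order_change"),
]

def predict(utterance: str) -> str:
    """Toy stand-in for a trained model: return the label of the
    training example sharing the most words with the input."""
    words = set(utterance.lower().split())
    best_label, best_overlap = "unknown", 0
    for text, label in TRAINING_DATA:
        overlap = len(words & set(text.lower().split()))
        if overlap > best_overlap:
            best_label, best_overlap = label, overlap
    return best_label

print(predict("where is my order right now"))  # → order_status
```

A real system would replace the word-overlap heuristic with a trained classifier, but the experts' job is the same either way: supplying and vetting the pairs.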
Sometimes it is only once training has commenced that it becomes clear that you need additional skills. For example, in an AI-driven search project for our legal department, we needed certain keywords to be interpreted in a specific legal sense and in the context of where they appeared in a contract. Searches also needed to account for synonyms and close meanings under legal linguistic conventions. While we did not anticipate the depth of these needs, we had budgeted for undefined additional skills, so we could quickly access the expertise we needed.
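The synonym problem above can be sketched in a few lines. Assume (hypothetically) that legal experts supply a synonym map; the search then expands each query term before matching. The terms and synonyms below are illustrative placeholders, not real legal doctrine:

```python
# Hypothetical expert-supplied synonym map: each search term expands
# to the variants lawyers treat as close in meaning.
LEGAL_SYNONYMS = {
    "terminate": {"terminate", "rescind", "cancel"},
    "indemnify": {"indemnify", "hold harmless"},
}

def expand_query(term: str) -> set:
    """Expand a search term with its legal synonyms (identity if unknown)."""
    return LEGAL_SYNONYMS.get(term.lower(), {term.lower()})

def search(term: str, clauses: list) -> list:
    """Return contract clauses containing the term or any synonym."""
    variants = expand_query(term)
    return [c for c in clauses if any(v in c.lower() for v in variants)]

clauses = [
    "Either party may rescind this agreement with 30 days notice.",
    "The supplier shall deliver goods by the agreed date.",
]
print(search("terminate", clauses))  # matches the 'rescind' clause
```

The code is trivial; the hard part, and the unanticipated skill requirement, is populating and maintaining the synonym map, which only legal specialists can do reliably.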
Validating. Once your AI model is performing as designed, it should move to a testing phase to assess the accuracy and quality of the system. This needs to happen from a number of perspectives. The most obvious checks are around functional performance and security in every possible circumstance, but dozens of others are needed.
For example, we worked on a project that needed to generate correctly pronounced German and Italian street names from text inputs. Checking the pronunciations ultimately needed more than just people fluent in the languages, because some street names have a specific pronunciation that differs from the phonetics of regular words. We did not have this combination of linguistic and local knowledge available, so needed to source the right kind of experts for these quality checks.
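Quality checks like these typically reduce to comparing model output against an expert-supplied reference set. A minimal sketch, assuming experts provide gold transcriptions (the street names and phonetic spellings below are illustrative placeholders, not verified pronunciations):

```python
def validate(model_output: dict, gold: dict) -> float:
    """Fraction of street names whose generated pronunciation
    exactly matches the expert reference."""
    matches = sum(1 for name, ref in gold.items()
                  if model_output.get(name) == ref)
    return matches / len(gold)

# Hypothetical expert-supplied references (placeholder transcriptions).
gold = {"Theodor-Heuss-Platz": "TAY-oh-dor HOYSS plats",
        "Via Roma": "VEE-ah ROH-mah"}
# Hypothetical model output: one correct, one wrong.
model = {"Theodor-Heuss-Platz": "TAY-oh-dor HOYSS plats",
         "Via Roma": "VY-ah ROH-mah"}
print(validate(model, gold))  # → 0.5
```

Again, the harness itself is simple; the scarce skill is producing the gold set, which requires the combined linguistic and local knowledge the project had to source externally.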
As you push into new frontiers, AI will probably throw you an odd skills curveball from time to time. But careful planning across these three phases will help you identify most of the skills you need, anticipate how those requirements will evolve, and decide which specialists to bring in at each stage.