The last time you called customer service and started demanding "AGENT!" or "REPRESENTATIVE!", what led you to the conclusion that you needed to speak to a human?
We interact with chatbot systems all the time, and we have an intuitive sense that they're adequate for simple questions. Think about calling an airline's automated assistance line: "What time is flight 71 scheduled to depart?" should be a simple question that a chatbot can answer with a database lookup. But anything much beyond that, we safely assume, takes a human.
At Accenture Labs, we’re working to understand these limitations of current systems and go beyond them. Think about a different question: "Can I bring my crutches on board?" Today’s bots might be able to supply the right answer, but it would be a standard, scripted policy reply. A good human customer service agent, however, would recognize it as more than a "question answering" task limited to retrieving a fact. There’s a problem that needs to be collaboratively addressed: we have a passenger with physical difficulties. The point isn’t just to look up a policy, it’s to help ensure the person can travel. We need to understand her limitations and anticipate and address the challenges she’ll encounter along the way. Does she have luggage? Is she able to carry it with her crutches? Does she need a wheelchair? Will she need help once on board? Does she need transportation arranged at her destination? And so on.
A conversation along these lines helps anticipate obstacles and plan around them. That’s what a lot of interactions with customer service agents are about. So how do we enable systems to have conversations like these? Just adding more question-answer pairs to the system won’t accomplish that. We need to enable the system to reason about tasks and the domain constraints that apply.
As humans, we apply roughly the same kind of reasoning whether we are told about someone on crutches or see someone on crutches. So this is not fundamentally a language problem—it's a knowledge and reasoning one.
We can draw a lot of inferences from seeing someone on crutches, and they can span different domains: maybe she’s clumsy; she won’t be able to play tennis with her kids for a while; she can’t walk to work anymore; I should give her my chair; I wonder if she’s doing this to cut the line at Disney. We can draw dozens, even hundreds of possible inferences. The bad news is that no AI-driven system will be able to draw all these inferences any time soon. The good news is that most of those inferences are unnecessary for the current problem of helping a customer with a smooth travel experience.
For the conversation to be successful, we need to know the task at hand (e.g., "travel to Toledo") and the constraints on this instance (e.g., "passenger has physical limitations"). What do we need to know to anticipate problems? For starters, we need to know more precisely what the passenger is capable of doing: can she handle stairs? Lift a bag? How many? And we need the circumstances: Is she traveling with a companion who could possibly help with bags? Is her plane boarding via a jetway, or will she have to navigate air stairs? Does she know how to get transportation once she reaches her destination airport?
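To make this concrete, here is a minimal sketch of the idea—not our actual system, and with hypothetical names throughout: the task and its instance constraints are represented explicitly, and simple rules over those constraints surface the follow-up questions and remedies that a scripted Q&A bot would never ask.

```python
# Illustrative sketch only: an explicit task-and-constraints model.
# All names (Task, RULES, plan_followups) are hypothetical, for exposition.
from dataclasses import dataclass, field

@dataclass
class Task:
    goal: str                     # e.g. "travel to Toledo"
    constraints: set = field(default_factory=set)  # e.g. {"uses_crutches"}

# Each rule pairs a constraint with a follow-up question to ask
# and a remedy the agent can offer if the answer calls for it.
RULES = [
    ("uses_crutches", "Are you traveling with checked luggage?",
     "offer assistance with bags"),
    ("uses_crutches", "Can you manage stairs?",
     "request jetway boarding instead of air stairs"),
    ("uses_crutches", "Will you need help once on board?",
     "arrange an aisle wheelchair"),
    ("uses_crutches", "Do you have transportation at your destination?",
     "pre-book an accessible transfer"),
]

def plan_followups(task):
    """Return (question, remedy) pairs triggered by the task's constraints."""
    return [(q, r) for c, q, r in RULES if c in task.constraints]

trip = Task(goal="travel to Toledo", constraints={"uses_crutches"})
for question, remedy in plan_followups(trip):
    print(f"{question}  ->  {remedy}")
```

The point of the sketch is the shape of the knowledge, not the rule engine: because the constraint is represented once, every rule that mentions it fires, whereas a dialog tree would need each of these exchanges scripted by hand.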
The challenge is representing this kind of task-related knowledge with enough granularity and relevant information to anticipate obstacles and remedies without overdoing it. Capturing fine distinctions like the color and size of the crutches would lead the system astray; it can't take action on those. We want to support just enough inference to break past the stilted limitations of today's scripted bots and have conversations that solve problems, not simply replace a FAQ or database lookup. This is how we'll scale beyond the incremental and ultimately unmaintainable current approach of dialog trees, and it's the groundwork we're laying with our AI research at Labs: a pathway to an interaction that's truly smart.
Stay tuned to learn more about the work we’re doing in this space! For more information, contact Andy Fano and Shubhashis Sengupta.