Even with true artificial general intelligence many years away, the pursuit of AGI offers businesses tangible value today.
Suppose you ask an airline customer service agent “Can my mother bring crutches on board?”
Anyone hearing or reading this recognizes that there’s more to it than just a policy question. Sure, the question itself is about whether the rules allow crutches on board a flight. But we also understand that a customer is calling on behalf of someone who is physically incapacitated in some way. Maybe it’s due to age or injury; we don’t know the cause, but we realize that they may also need a wheelchair when they get to the airport, or help getting on and off the plane, or transportation that can accommodate their mobility needs upon arrival.
It might take some training to know how to arrange these services. But it takes no training to recognize the need for these services when we hear the question. Similarly, a five-year-old seeing a traveler on crutches at the airport will immediately understand her need for assistance. As humans, we effortlessly bring common-sense reasoning to our interactions with the world; it’s how we navigate situations even when we haven’t experienced them before. Yet for all the advancements made in artificial intelligence, this type of inference is still beyond the reach of today’s AI systems.
Of course, this lack of common-sense reasoning in AI hasn’t kept us from applying AI effectively in a wide range of applications. Computer vision systems, for example, may outperform physicians in spotting certain forms of cancer in images. It’s just one instance where machine learning has been applied to a narrow problem in a very useful way. These systems can’t explain why cancer is bad – they lack that kind of common-sense reasoning – but they don’t need to; they just need to find it. There are lots of similar situations where high performance on a narrow task is very useful.
If we wanted this same kind of AI system to help our incapacitated traveler, though, it would fall short. It might answer the policy question – does the airline allow crutches on board or not? – but it would not understand that the customer likely needs more help to have a safe and pleasant travel experience.
That’s why there’s a renewed focus on enabling common-sense reasoning in AI systems. This research theme is referred to as “artificial general intelligence” (AGI). The intent is to enable broad competence across a wide range of cognitive abilities, similar to that exhibited by humans, and even animals, to a degree. AGI is really just a return to the original goals of the field: in a nutshell, to replicate human intelligence. And while this goal was articulated at the famed 1956 Dartmouth conference, full AGI remains, by most accounts, at least decades away.
So why am I bothering you with something decades away? Because in practice, even incremental improvements on the way to AGI still offer practical value to businesses today. And to see that, we can start with something you’ve probably already heard about: GPT-3.
Language models and artificial intelligence
GPT-3 is OpenAI’s newest language model, a deep learning system trained on around 300 billion words. It’s able to “predict” the text that is most likely to follow a given prompt.
For example, if you ask GPT-3 a question, it will produce a stream of text that the model predicts (based on all the text it’s been trained on) is the most likely combination of words that would follow. In other words, an answer.
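To make “predicting the most likely next words” concrete, here’s a toy illustration: a tiny bigram model in Python. It is vastly simpler than GPT-3’s actual architecture, and the miniature training corpus is invented for the example, but it shows the same core idea – learn which words tend to follow which, then complete a prompt with the most likely continuation.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent continuation seen after `word`."""
    candidates = follows.get(word.lower())
    return candidates.most_common(1)[0][0] if candidates else None

# An invented mini-corpus, standing in for GPT-3's 300 billion words.
corpus = (
    "crutches are allowed on board . "
    "wheelchairs are allowed on board . "
    "crutches are permitted in the cabin ."
)
model = train_bigrams(corpus)
print(predict_next(model, "allowed"))  # → on
```

GPT-3 does the same thing at a radically larger scale: instead of counting word pairs, it uses billions of learned parameters to score continuations over long stretches of context.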
To be clear, this is not “artificial general intelligence” (AGI) as we currently understand it. GPT-3 is not replicating any of the broad human cognitive capabilities that we associate with AGI, such as common-sense reasoning. Rather, it is an extremely powerful language prediction tool. The results, though, can give an uncanny impression of intelligence.
We’ve seen examples of entire newspaper articles written by GPT-3. And it can produce a surprisingly real-sounding “fake” press release given just two samples as input. So while GPT-3 isn’t really doing common-sense reasoning, it’s a step closer to it.
GPT-3 as a practical business tool?
What does this mean in practical terms for business? This is where French patisserie comes in. (I know you’ve been wondering.)
In one of our own experiments with GPT-3, we wanted to explore how it could handle the “localization” of a particular passage of text. In other words, change the text to be more customized for a particular geography. This is a key need for marketing teams trying to tailor their global content and make it resonate in local markets.
Among the many things that GPT-3 has learned is the association between locations, typical foods, and landmarks. So, given a series of prompts like “Stroll down Michigan Avenue then stop for a deep-dish pizza in this casual and trendy jacket” it was able to produce localized versions for different cities:
- Milan—Stroll down the Via Montenapoleone then stop for a cappuccino in this casual and trendy jacket
- Tokyo— Stroll down a Tokyo backstreet then stop for a bowl of ramen in this casual and trendy jacket
- Buenos Aires—Stroll down the Paseo de la Recoleta then stop for a dulce de leche ice cream in this casual and trendy jacket
- Paris— Stroll down the Champs-Élysées then stop for a macaroon in this casual and trendy jacket
These were really interesting—and impressive—results. But did you spot the mistake? Yes, GPT-3 mixed up a macaroon and a macaron. A minor error for the model perhaps, but a potentially big deal for a brand that wants to appeal to French consumers.
So, while this example neatly illustrates the strengths of GPT-3 and how they might be applied to offer business value today, it also shows its limits. We will often still need a human “sanity check” of the results of language models as they’re being put to practical use in business.
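The localization exercise above follows a “few-shot” pattern: show the model a couple of worked examples, then leave the new case open for it to complete. A minimal sketch of assembling such a prompt is below – the function name and prompt wording are our own illustration, not any particular vendor’s API.

```python
def build_localization_prompt(source, examples, target_city):
    """Assemble a few-shot prompt: the original line, some worked
    localizations, then the target city left open for the model."""
    lines = [
        "Localize this line for each city, swapping in local landmarks and foods.",
        f"Original: {source}",
    ]
    for city, localized in examples:  # the worked few-shot examples
        lines.append(f"{city}: {localized}")
    lines.append(f"{target_city}:")   # the model completes from here
    return "\n".join(lines)

source = ("Stroll down Michigan Avenue then stop for a deep-dish pizza "
          "in this casual and trendy jacket")
examples = [
    ("Milan", "Stroll down the Via Montenapoleone then stop for a "
              "cappuccino in this casual and trendy jacket"),
    ("Tokyo", "Stroll down a Tokyo backstreet then stop for a bowl of "
              "ramen in this casual and trendy jacket"),
]
prompt = build_localization_prompt(source, examples, "Paris")
print(prompt)
```

The point of the sketch is that the “programming” here is just carefully arranged text; the model infers the pattern from the examples – which is also why a human check of the output remains essential.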
A step closer to AGI?
There are lots of other business problems that GPT-3 could help solve, from populating knowledge graphs to enriching metadata—situations where imperfect but broad results from a language model can complement deep but narrow results from other forms of machine learning. Plus, of course, it can offer a complement to good old human common sense.
This brings us back to the larger point about AGI. Whether or not language models represent a step closer to machines having common-sense reasoning, there are numerous real-world scenarios where just a little more machine intelligence can make a big difference.
For example, we’ve been looking into areas like “scriptless” chatbots that allow a virtual assistant to draw a whole series of inferences from a conversation, and start solving broader problems outside of the narrow questions being asked. The intent is precisely to allow us to provide assistance of the kind needed for our incapacitated traveler and her cumbersome crutches.
This is still not AGI. But it’s a potentially very big step forward for customer service all the same, while we’re on the path to AGI.
Focus on the here and now
And that’s really the key point here. Businesses don’t need to wait for all-powerful artificial general intelligence to develop significantly more powerful applications today.
There’s a great deal to be achieved with systems that go beyond traditional machine learning and begin to use a little more knowledge and reasoning to handle a broader range of use cases. And as AGI research progresses, those use cases will continue to expand.
So companies should be looking for business problems in which a little more knowledge or reasoning would let a particular system do a lot more – even if it still needs to be complemented by some human common sense.
To learn more about these technologies and the opportunities they’re driving today, read our new report, “The Path to Artificial General Intelligence” — and don’t hesitate to get in touch with me at firstname.lastname@example.org.