The act of having a natural conversation with a machine has been depicted in countless science-fiction books and movies, but it wasn’t until three months ago that it came close to reality. Back in June this year, OpenAI released beta access to its latest large-scale natural language model, GPT-3. The new model represents a major step forward in how well machines can process human language, and it took much of the AI community by surprise.
Since then, many experiments have been built and showcased online, some of them quite impressive. In this article I will describe what GPT-3 is, how it has been used so far, and its likely impact on customer experience and customer service teams.
Transfer learning and GPT-3
Just a few years back, it was extremely difficult to build a software app that could understand and “talk” a natural language the way humans do. It used to take an incredible amount of labeled training data (and effort) to produce a decent AI model that could accurately “understand” intents and topics in text, never mind generate a coherent response in a conversation. Labeled data is hard to produce in large quantities because it requires humans to read thousands of texts and annotate them correctly.
This changed after the Google Brain team published the now-famous paper “Attention Is All You Need”, which proposed a new neural network architecture, the Transformer, that paved the way for the practice of transfer learning in Natural Language Processing.
So what is transfer learning? It is something we humans do naturally whenever we learn something new: we apply the knowledge gained from one task to solve related tasks. We have an inherent ability to transfer knowledge across tasks. For example, the simple act of reading this article re-uses knowledge you acquired in the past, such as the ability to read English.
Until transfer learning was introduced, AI and machine learning models were designed to work mostly in isolation: for every new task, a completely new representation had to be learned from scratch. Imagine having to learn English all over again every time you want to read an article!
Transfer learning allowed researchers to run algorithms over huge amounts of unlabeled data, which requires no time-consuming preparation, and to generate ever bigger models. A model, in AI terms, is a language representation that machines can work with. Each model typically consists of hundreds of millions of parameters: variables that capture the specifics of the language data it was trained on.
Then the language-model race began. Research teams started publishing ever bigger and better models, starting with Google’s BERT, which had 340 million parameters in its largest form, and more recently GPT-2, a model released by OpenAI last year with 1.5 billion parameters.
Playing with GPT-3
Back in June this year, the OpenAI team released GPT-3 as a web API through a restricted beta program, which means that anybody with access and basic programming knowledge can use its capabilities. Since then, quite a few apps have been built on top of it and published, some of them quite impressive. I am going to highlight its potential with a few of these examples.
Generating charts from descriptions
Charts are a great tool for summarizing data, and most of us have had to produce many of them over the course of our professional lives. The steps taken to generate a chart depend on the tool being used, but they always involve some interaction with a graphical interface. GPT-3 might change that altogether: instead of clicking our way through menus and buttons, we could simply describe in natural language the chart we want.
Credit to @nutanc
In the example above, the developer built a simple app on top of the GPT-3 API: you describe in plain language what the chart should display and, a few seconds later, the whole chart is produced for you.
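An app like this mostly comes down to careful prompt construction. The sketch below is my own illustration, not the actual app’s code: it frames a chart description as a one-shot completion prompt, which would then be sent to GPT-3’s completions endpoint so the model continues it with plotting code.

```python
# Illustrative sketch (not the showcased app's code): frame a natural-language
# chart description as a completion prompt. The example pair primes the model
# to answer the new description with matplotlib-style plotting code.

CHART_EXAMPLE = (
    "Description: a bar chart of monthly sales\n"
    "Code: plt.bar(months, sales)\n"
)

def build_chart_prompt(description):
    """Return a one-shot prompt ending exactly where the model should continue."""
    return CHART_EXAMPLE + f"Description: {description}\nCode:"

prompt = build_chart_prompt("a line chart of daily website visits")
# The prompt would then be sent to the completions API, e.g.:
# openai.Completion.create(engine="davinci", prompt=prompt, max_tokens=64)
print(prompt)
```

The single Description/Code pair is what makes this “one-shot”: the model infers the mapping from one example and applies it to the new description.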
Rephrasing sentences to be more polite
Have you ever thought, after sending an email or text, that perhaps you could have used a more polite tone? I often find myself rephrasing a message as I write it, especially in formal situations such as work-related communication. With the help of language models like GPT-3, this could be fully automated in the near future.
Credit to @eturner303
This example shows how GPT-3 can turn a rude message into a polite one, given just a single example of how the output text should map to the input text (the first Input/Output pair in the picture above).
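The pattern behind this demo can be sketched in a few lines. The example pair and function name below are my own illustration, assuming access to the GPT-3 completions API:

```python
# One-shot "polite rephrasing" prompt sketch. The Input/Output example pair
# is invented for illustration; one pair is enough to show the model the
# desired mapping, after which the new message is appended for completion.

POLITE_EXAMPLE_INPUT = "Send me the report now."
POLITE_EXAMPLE_OUTPUT = "Could you please send me the report when you get a chance?"

def build_polite_prompt(message):
    """Build a one-shot prompt: one rude->polite pair, then the new message."""
    return (
        f"Input: {POLITE_EXAMPLE_INPUT}\n"
        f"Output: {POLITE_EXAMPLE_OUTPUT}\n"
        f"Input: {message}\n"
        f"Output:"
    )

prompt = build_polite_prompt("Fix this bug today.")
# The prompt would then be completed by the model, e.g.:
# openai.Completion.create(engine="davinci", prompt=prompt,
#                          max_tokens=60, stop="\n")
print(prompt)
```

The `stop="\n"` parameter in the commented call would cut the completion off after one line, so the model returns only the rephrased sentence.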
Generate email reply using bullet points
Crafting a professional response to an email takes time, especially when you need to address multiple points or questions raised in the original message. This could change with the help of language models like GPT-3. Instead of composing full sentences, you could simply jot down the main points as short phrases, and with one click the entire email is generated, ready for review.
Input: received email & reply bullet points
Output: generated reply (automated)
Credit to @OthersideAI
In this example, writing down just a few words as bullet points is enough to compose a full reply to a given email. The reverse is also possible: instead of reading a long email, you could have an app summarize it for you as bullet points.
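A rough idea of how such a prompt could be assembled is shown below. This is my own sketch, not the OthersideAI implementation; it simply pairs the received email with the reply bullet points so the model drafts the full reply as a completion:

```python
# Illustrative sketch: combine the received email and the agent's bullet
# points into one prompt. The model's completion (not shown) would be the
# drafted full reply. Section labels are assumptions, not a tested recipe.

def build_reply_prompt(received_email, bullet_points):
    """Lay out the email and the reply points, ending where the reply starts."""
    points = "\n".join("- " + p for p in bullet_points)
    return (
        "Received email:\n"
        f"{received_email}\n\n"
        "Reply points:\n"
        f"{points}\n\n"
        "Full polite reply:\n"
    )

prompt = build_reply_prompt(
    "Hi, can we move our call to Thursday? Also, did you get the invoice?",
    ["Thursday works", "invoice received, will pay this week"],
)
# Send `prompt` to the completions endpoint and show the result for review.
print(prompt)
```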
These are just a few of the exciting apps built on top of OpenAI’s GPT-3 API. The range of applications developed in the short time since its release is vast, spanning many domains and use cases, from program generation to creative writing. Although most of them are toy apps built in an afternoon, it won’t be long before a new generation of AI-driven productivity tools emerges.
Impact on customer support
AI advances like GPT-3 could significantly impact customer support and customer experience teams. I am not referring to the “agents will be replaced by AI” mantra, which is overblown. Every significant technological advance has boosted individual and team productivity, and that is what will change in customer support as well.
Many tasks will be easier to do, from crafting responses to finding the right information needed to resolve a customer inquiry. Below are just a few examples of how revolutionary language models like GPT-3 could be used in a customer service setting.
Crafting customer responses
The main job of any customer service agent is to respond to customer messages, but crafting a good reply requires solid writing skills and can take a significant amount of time; less experienced support reps in particular struggle with this. Message templates can help, but they have their own disadvantages: they take time to build, they are not always used when they should be, and not every situation has a ready-made template.
The example above, where an email is generated from bullet points, is one way to make response writing easier for customer service reps: note down a few bullet points and let the AI app do the rest.
In addition, a language model can be fine-tuned to a company’s tone and brand, so that any generated response matches the way the company wants to communicate with its customers. Think of it as templates generated dynamically, personalized for each customer message and company voice.
Writing knowledge base articles
It’s no secret that writing, maintaining and localizing knowledge base articles is a cumbersome job, which is why many customer support teams have people dedicated to this task. Among the time-consuming things they have to do: finding and putting together the right information to include in an article, crafting its sections and paragraphs from that information, and arranging and correcting translations for localization.
Each of these steps could be improved with AI language models. For example, instead of spending a long time working out how to phrase a section of an article, one could simply jot down in short phrases what the section should cover, and the actual content would be generated automatically, in one or many languages.
Summarizing long messages
Some customers are more verbose than others and will explain their issue in long paragraphs that take time to read and understand. Problems that could be described in a few sentences can sprawl across multiple pages. With automatic text summarization, long emails could be condensed into a few main points, improving readability and saving time.
The same approach could be applied to long tickets or chats with many back-and-forth messages. Instead of spending minutes working out what a conversation is about, an agent could grasp it in seconds by reading an automatically generated summary.
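One known way to steer GPT-3 toward summarizing is to append a cue such as “tl;dr:” after the text and let the model continue. The sketch below illustrates the idea; the cue wording, the sample ticket and the parameters are assumptions, not a tested recipe:

```python
# Minimal summarization sketch: append a "tl;dr:" cue to a conversation so
# the model's completion is a short summary. The sample ticket below is
# invented for illustration.

def build_summary_prompt(conversation):
    """Append a summarization cue; the model's completion is the summary."""
    return conversation.strip() + "\n\ntl;dr:"

ticket = """Customer: My package hasn't arrived and the tracking page is blank.
Agent: Sorry about that, can you share the order number?
Customer: It's 48213. I ordered two weeks ago."""

prompt = build_summary_prompt(ticket)
# openai.Completion.create(engine="davinci", prompt=prompt, max_tokens=60)
print(prompt)
```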
Troubleshooting and finding information
In many medium-sized and large companies, finding the information required to help a customer often feels like looking for a needle in a haystack. This is especially true for new team members who are less familiar with the products or services customers need help with. Existing solutions are usually based on a keyword search workflow: the agent types one or more keywords into a search box, hits enter and hopes a list of relevant links is returned.
The latest (and future) generation of AI language models will change how support reps find the right information to help them resolve a customer issue. Instead of trying out keywords, the correct paragraph that helps with the issue will be automatically pushed in the agent's workspace based on what the customer query is. As depicted in the example above, Google has already moved in this direction.
Conversational AI and chatbots
Conversational AI has changed a lot since the dawn of chatbots and is currently used across many industries, from insurance to e-commerce. Getting started with bots is now fairly easy and accessible to most businesses, thanks to open-source projects like Rasa or cloud APIs like Google’s Dialogflow and IBM Watson. However, building a bot that accurately understands what the customer is asking, and can respond appropriately, is still very difficult.
One of the biggest friction points in implementing AI automation for customer service is the need to manually “teach” the algorithm with many carefully selected example phrases for each possible customer intent. This takes significant time and effort, as you often need hundreds of examples per intent to achieve good accuracy.
One of the core principles behind GPT-3 is that you should need only a few examples (or ideally none) to teach a model to understand a particular intent or question. In AI terms, this is called few-shot (or zero-shot) learning.
In a way, this is what we humans do effortlessly. If somebody tells you that when a customer asks to cancel an order you should perform these three steps, you’ll be able to do so no matter how the customer phrases the request. “Cancel my order” and “I don’t want these shoes anymore” mean the same thing to us in a given context; we don’t need to see hundreds of examples to get it.
With language models like GPT-3, building bots that answer simple customer questions will be much easier, less time-consuming and therefore cheaper. Give just one or a few examples of each question, and the bot will automatically generalize to the many other ways somebody might phrase it.
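A few-shot intent prompt along these lines can be sketched as follows. The intent names and example phrases are invented for illustration; a real bot would choose labels matching its own workflows:

```python
# Few-shot intent classification sketch: a handful of labelled example
# phrases, then the new customer message with the intent left blank for
# the model to fill in. All labels and phrasings here are illustrative.

FEW_SHOT_EXAMPLES = [
    ("Cancel my order", "cancel_order"),
    ("I don't want these shoes anymore", "cancel_order"),
    ("Where is my package?", "track_order"),
]

def build_intent_prompt(message):
    """List each example as a Message/Intent pair, ending with the new
    message so the model's completion is the predicted intent label."""
    lines = [f"Message: {text}\nIntent: {intent}"
             for text, intent in FEW_SHOT_EXAMPLES]
    lines.append(f"Message: {message}\nIntent:")
    return "\n\n".join(lines)

prompt = build_intent_prompt("Please stop my purchase, I changed my mind")
print(prompt)
```

Compare this with the hundreds of labelled phrases per intent that a conventional intent classifier typically needs: here, two phrasings of “cancel” are the entire training signal.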
What the future holds
The recent advancements in Natural Language AI, such as GPT-3, will pave the way for the development of a new generation of AI-driven applications, across many verticals, including customer support. OpenAI’s decision to commercialize the latest model and make it available as a cloud-based API, although questionable from an academic research point of view, will allow software companies to build smart and useful products without having to hire scarce and expensive AI talent.
As with any other significant technology, we will see a productivity boost across the many jobs that involve natural language, such as writing text, reading text, or finding information in it. Customer service is obviously among them.
Chatbots and conversational AI apps will require less effort to “understand” customers’ language and the ways they ask questions or describe problems. Together with better ways to write and maintain knowledge, this will make it easier (and cheaper) to help customers help themselves.
Most importantly, the development of better and more accessible technologies will continue the democratization of AI, which will soon become a commodity available to teams of all shapes and sizes.
At Swifteq, we are busy working on applying the latest advancements in AI such as GPT-3 to help support teams be more productive at helping their customers.