These days, we are seeing chatbots virtually everywhere, from pizza ordering to psychological counseling to customer service.
Some of these chatbots seem to be very smart. ChatGPT is now writing master's theses for students and threatening to replace Google as a search engine. Google is not going to give up, however, and will soon launch its own AI-powered chatbot, Bard.
Apart from general-purpose conversations and passing exams, there are many areas where chatbots can add value.
Customer service is one of the obvious areas where chatbots look promising. While human support agents are still required to handle complex requests, wouldn't it be great to automate first-tier issues and let human agents focus on resolving the more complicated ones? This would reduce the cost of customer service and help your business grow without hurting your profit margin.
Another way to use chatbots in customer service is to have them work side by side with human agents. A chatbot can help train agents and help them find answers faster and more easily.
The really big question is what these chatbots are actually capable of doing, and especially at what cost.
To create value, a chatbot has to go well beyond a predefined FAQ. It has to address the full variety of possible user requests within the subject domain. Its answers have to be not just grammatically correct; they have to make sense and be helpful.
To do so, the chatbot needs to:
Recognize what the user wants to accomplish.
Ask the user additional questions to figure out the details that may affect the answer. For example, a user who has trouble connecting their printer to a computer might receive different answers depending on the printer model and the type of connection (cable or wireless).
Find a specific piece of content that matches the user’s situation.
This is where most chatbots fail. Either they provide very limited options and your issue doesn't necessarily fit their rigid, predefined structure, or they do provide meaningful answers but are very expensive to develop, train, and configure.
This happens because, for most chatbots, the setup process is lengthy and expensive. Typically, a chatbot requires you to manually define all intents (the goals users may want to achieve), a dozen utterances for each intent (different variations of how a user may express that intent in natural language), and the various parameters that influence the answer, and then map intent, utterance, and parameter combinations to answers.
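To make that maintenance burden concrete, here is roughly what such a manual configuration looks like. This is a hypothetical sketch, not the schema of any particular chatbot platform; every name in it is invented for illustration:

```python
# Hypothetical manual chatbot configuration: every intent, utterance,
# parameter, and answer mapping is hand-written and must be kept in
# sync with the product line.
config = {
    "intents": {
        "connect_printer": {
            "utterances": [
                "how do I connect my printer",
                "printer won't connect",
                "set up my printer",
                # ... typically a dozen variations per intent
            ],
            "parameters": ["printer_model", "connection_type"],
            "answers": {
                ("LaserJet 100", "cable"): "article-101",
                ("LaserJet 100", "wireless"): "article-102",
                ("OfficeJet 200", "wireless"): "article-103",
                # every new model multiplies the combinations
            },
        },
    },
}

# Adding one new printer model means revisiting every intent it touches:
models = {model
          for intent in config["intents"].values()
          for (model, _) in intent["answers"]}
print(sorted(models))  # ['LaserJet 100', 'OfficeJet 200']
```

Every release of a new product adds rows to these tables by hand, which is exactly why this approach stops scaling once the content keeps growing.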
As long as the amount of content is static and not growing, this approach may work. After all, you do the work once and then enjoy watching the chatbot answer questions automatically. However, if the amount of content is constantly growing, maintaining such a chatbot becomes an expensive challenge. Every time your company releases a new product or introduces more content variations, the chatbot has to be reconfigured and retrained.
To make a chatbot both smart and scalable, several components are required:
Single-sourced knowledge base: ideally, you shouldn't have to write anything specifically for the chatbot and thus create yet another content silo. Instead, the chatbot should leverage your existing content, searching for answers in the same content repository from which you already produce content in other formats (PDF, HTML, etc.).
Componentized content for meaningful, precise answers: because users' requests are usually specific and granular, the content that addresses them has to be granular as well. These content granules are also known as components: relatively small pieces of information, each addressing a specific issue.
Semantically rich content: the content also has to be machine-readable so the chatbot can identify the required piece of content and retrieve only the relevant part of a bigger content component.
Parameter identification: the chatbot should dynamically identify the parameters that affect which content is provided to the user, without additional training or configuration; automatically generate questions to request missing information from the user; and find the content that matches a specific parameter combination.
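As a rough illustration of what componentized, semantically rich content enables, imagine each component carrying machine-readable metadata. The structure below is a simplified invention for this post, not the actual DITA markup or DITAToo's internal format:

```python
# Illustrative only: each component is a small, self-contained answer
# tagged with machine-readable metadata. The field names are invented.
components = [
    {"id": "t1", "topic": "wifi-connection", "printer_model": "LaserJet 100",
     "body": "On the LaserJet 100, open Settings > Network ..."},
    {"id": "t2", "topic": "wifi-connection", "printer_model": "OfficeJet 200",
     "body": "On the OfficeJet 200, press the wireless button ..."},
    {"id": "t3", "topic": "cable-connection", "printer_model": "LaserJet 100",
     "body": "Plug the USB cable into the rear port ..."},
]

def find(topic, **params):
    """Return the components matching a topic and any known parameters."""
    hits = [c for c in components if c["topic"] == topic]
    for key, value in params.items():
        hits = [c for c in hits if c.get(key) == value]
    return hits

# Without a model, two candidate components remain; once the model is
# known, exactly one granular answer is left to return.
print(len(find("wifi-connection")))                                    # 2
print(find("wifi-connection", printer_model="LaserJet 100")[0]["id"])  # t1
```

Because the answers are small and tagged, the bot can return exactly the component that fits the user's situation instead of a long, generic article.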
This is how we’ve built the DITAToo chatbot.
It takes a fundamentally different approach to finding answers. First, instead of making you manually define intents, utterances, and parameters, the DITAToo natural language processing (NLP) engine analyzes the content in the knowledge base, identifying the concepts, objects, and actions it describes.
Second, when the user asks a question, the DITAToo chatbot parses it and tries to match it to the concepts, objects, and actions identified in the content. In some cases, the chatbot may find that important parameters required to give a precise answer are missing. For example, after analyzing the content in the knowledge base, the NLP engine finds two pieces of content describing how to connect a printer to a Wi-Fi network for two different printer models.
The NLP engine tags these content pieces with the printer model, along with metadata indicating that the content describes connecting to a Wi-Fi network. If the user asks "How can I connect my printer to a Wi-Fi network?", the chatbot recognizes that the printer model determines which of the two pieces of content should be returned, so it automatically generates a question about the printer model.
Then it extracts the piece of content that precisely describes the procedure for the specified printer model.
When new content is added to the knowledge base, the NLP engine automatically analyzes and tags it, immediately making it available to the chatbot.
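Putting the pieces together, the overall flow might look like the sketch below. This is my own simplified approximation of the behavior described above, not the actual DITAToo implementation; in reality the tags come from the NLP engine, whereas here they are hard-coded:

```python
# Simplified approximation of the flow described above. All names and
# tags are invented; a real NLP engine would produce them automatically.
INDEX = [
    {"action": "connect", "object": "printer", "network": "wifi",
     "printer_model": "LaserJet 100",
     "body": "LaserJet 100: open Settings > Network and choose your SSID."},
    {"action": "connect", "object": "printer", "network": "wifi",
     "printer_model": "OfficeJet 200",
     "body": "OfficeJet 200: hold the wireless button for three seconds."},
]

def ask(question, known=None):
    # Step 1: match the question to tagged concepts. This hard-coded
    # dict is a trivial stand-in for real question parsing.
    wanted = {"action": "connect", "object": "printer", "network": "wifi"}
    wanted.update(known or {})
    candidates = [c for c in INDEX
                  if all(c.get(k) == v for k, v in wanted.items())]
    if not candidates:
        return "Sorry, no matching content found."
    if len(candidates) == 1:
        return candidates[0]["body"]
    # Step 2: several candidates remain, so find the tag that tells
    # them apart and turn it into a follow-up question.
    for key in candidates[0]:
        if key != "body" and len({c[key] for c in candidates}) > 1:
            return "What is your " + key.replace("_", " ") + "?"
    return candidates[0]["body"]

print(ask("How can I connect my printer to a Wi-Fi network?"))
# -> "What is your printer model?"
print(ask("Same question", known={"printer_model": "OfficeJet 200"}))
```

Note that the follow-up question is not configured anywhere: it falls out of the metadata, which is why newly tagged content becomes answerable without retraining.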
Do you want to see how a chatbot can help you reduce the cost of your own customer service? Let us know, and we'll be happy to discuss it with you. Just email us at firstname.lastname@example.org, and we'll schedule a free consultation!