Bots today cannot flawlessly replicate human conversation. Some chatbots simply aren't fed enough data. Others have enough data and leverage machine learning, using natural language processing to pick up keywords and phrases and make sense of human language.
That said, a bot's quality often depends on how much data it has been trained on to understand language.
For example, some bots still can't recognize that "20" and "twenty" are the same thing. When such a bot asks, "How old are you?" it can recognize a numeric answer, but if it asked again and the user spelled the number out ("twenty"), the bot would not recognize that the user had given the same answer in a different format.
More sophisticated bots, by contrast, can recognize that a variety of location-based words have the same meaning, for example "SF" and "San Francisco."
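The missing piece in the weaker bots is a normalization step that maps surface variants to one canonical value before comparing answers. A minimal sketch, with illustrative lookup tables (real systems would use far larger vocabularies):

```python
# Illustrative alias tables; a production bot would use a full
# number-word parser and a gazetteer of place names.
NUMBER_WORDS = {"twenty": 20, "thirty": 30, "forty": 40}
PLACE_ALIASES = {"sf": "San Francisco", "nyc": "New York City"}

def normalize(answer):
    """Map a user's raw reply to a canonical value where possible."""
    token = answer.strip().lower()
    if token.isdigit():
        return int(token)          # "20" -> 20
    if token in NUMBER_WORDS:
        return NUMBER_WORDS[token] # "twenty" -> 20
    if token in PLACE_ALIASES:
        return PLACE_ALIASES[token]
    return answer.strip()

# With normalization, the two formats compare equal:
print(normalize("20") == normalize("twenty"))         # True
print(normalize("SF") == normalize("San Francisco"))  # True
```

Once both replies reduce to the same canonical value, the bot can treat "20" and "twenty," or "SF" and "San Francisco," as the same answer.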
We have some great technologies like the IBM Watson API, Facebook bot API, and Microsoft Bot Framework that can help developers create interactive bots based on different taxonomies. (You can try Watson for yourself with IBM's sample app.)
Line, a leading Japanese chat platform, provides a bot API, and developers are building bots by connecting it to other APIs. One example is a bot that responds to user queries with images or recommends nearby restaurants. These bots can be fun and entertaining, but they still can't perfectly decode the intricacies and nuances of human dialogue in different languages.
When discussing the future of chatbots, we cannot avoid mentioning deep learning, as leading companies like Facebook and Microsoft are already applying it to bots. When a deep learning system receives data, it captures the data's signature through multiple layers of neural networks and can assign a category to each piece of data.
It's a technique for building a computer program that learns from data, based very loosely on how we think the human brain works. First, a collection of software "neurons" is created and connected, allowing them to send messages to each other. Next, the network is asked to solve a problem, which it attempts over and over, each time strengthening the connections that lead to success and diminishing those that lead to failure.
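The loop described above can be sketched at its smallest scale: a single artificial neuron learning the logical AND function by strengthening or weakening its connection weights after each attempt. This toy example illustrates the idea only; real deep learning stacks many layers of such units and trains them with gradient descent.

```python
def step(x):
    """Fire (output 1) only when the weighted input is positive."""
    return 1 if x > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # connection strengths ("synapses")
    b = 0.0          # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = step(w[0] * x1 + w[1] * x2 + b)
            err = target - pred
            # Strengthen connections that should have fired,
            # diminish those that fired wrongly.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Truth table for logical AND.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in AND])  # [0, 0, 0, 1]
```

After a handful of passes over the data, the learned weights reproduce the target outputs; the "knowledge" lives entirely in the connection strengths, just as the paragraph above describes.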
Neural networks are the core of deep learning architecture. Soumith Chintala of Facebook AI noted in a recent article, “Deep learning — neural networks that have several stacked layers of neurons, usually accelerated in computation using GPUs — has seen huge success recently in many fields such as computer vision, speech recognition, and natural language processing, beating the previous state-of-the-art results on a variety of tasks and domains such as language modeling, translation, speech recognition, and object recognition in images.”
Essentially, chatbots built on neural networks are more likely to replicate human conversation believably.
Currently, deep learning for language processing is still in the R&D phase. As the technology improves, computers may come to understand human language almost perfectly, and we may be able to create a very high-quality, real-time machine translation bot.
Just a box of tech? A bot is simply a box of technology, and inside the box we put deep learning or other A.I.-powered tech. Today the box is shaped like a chatbot, but it doesn't have to be chat, or even a bot, in the future. It may be a tangible device, like an iPhone. We may start calling it a device bot, but it will still have the A.I.-powered technology that makes it act human. This could replace the information-based work humans are doing right now.
Is a conversational interface the best technology for the bots of the future? Consider that calling an Uber from Facebook Messenger doesn't deliver the same experience as using Uber's smartphone app. Similarly, a chatbot inside Facebook Messenger isn't always the ideal way to deliver a bot's capabilities. If a user wants everything rooted in chat, a chatbot is perfect. But those who don't use chat heavily will be disappointed by a chatbot's limitations, as in the Uber scenario just described.
Google announced that its natural language processing system, Parsey McParseface, can now correctly identify 94 percent of word dependencies within an English sentence. As that technology improves, it could become possible for computers to write a book called, hypothetically, "Why Yosemite is the best place in California for families to vacation" by analyzing all the opinions on review sites.
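The 94 percent figure refers to attachment accuracy: the share of words whose syntactic head (the word they depend on) the parser identifies correctly. A minimal sketch of that metric, using hypothetical head indices for the sentence "Bots answer questions quickly":

```python
def attachment_score(gold_heads, predicted_heads):
    """Fraction of tokens whose predicted head matches the gold head."""
    assert len(gold_heads) == len(predicted_heads)
    correct = sum(g == p for g, p in zip(gold_heads, predicted_heads))
    return correct / len(gold_heads)

# One head index per token; 0 marks the root verb "answer".
# Tokens:    Bots  answer  questions  quickly
gold      = [2,    0,      2,         2]
predicted = [2,    0,      2,         3]  # parser wrongly attaches "quickly"
print(attachment_score(gold, predicted))  # 0.75
```

On this made-up sentence the parser gets three of four attachments right, a 75 percent score; Google's claim is that its parser averages 94 percent over real English text.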
But of course, there are many things we can't decide based on statistics alone; sometimes we need to make decisions based on vague facts and the instincts only a specialized professional has. Those skills are hard for a computer to learn.
To gain that ability, an expert would have to feed the bot data. Consider a genius like Elon Musk: his decision-making skills exist only in his head, and a bot cannot capture something like that. Yet. But could this considerable ability be incorporated into the bot of the future? Time will tell.