Since their first appearance decades ago, chatbots have come a long way thanks to leaps in natural language processing and generation (NLP/NLG), the branches of artificial intelligence that enable us to interact with computers in a conversational manner. Today AI-powered chatbots have established a prominent role in various fields, including customer service, healthcare, banking and more.

Meanwhile, the technologies that power chatbot assistants are growing smarter and more efficient. I had a chance to talk with Rob High, Chief Technology Officer at IBM Watson, about the evolution of chatbots and where the trend is leading. He shared some very interesting insights on the prospects and challenges that lie ahead.

The next step in chatbot evolution

In a TechCrunch Disrupt presentation last year, Dag Kittlaus, the cofounder and CEO of Viv, showed that chatbots no longer need to be explicitly commanded to do things. They have evolved to the point where they can discern the meaning of queries by correlating different elements using knowledge graphs.

Rob High, Chief Technology Officer at IBM Watson (Photo Credit: Tech Talks)

While this is a huge upgrade from the rudimentary chatbots of old, High says there’s still room for improvement. “The next step in development will crack the uniquely human nature of communication,” he says. “We are going to see a transition from simple command response scenarios where essentially everything is centered around a single turn, moving into deeper conversations. The purpose is to get beyond the surface level utterance to the real issue at hand.”

Most conversation agents are becoming adept at understanding and responding to variations of queries. But those queries usually hide deeper meanings, requests and problems that AI engines need to understand and address.

“For example, when somebody asks what’s my balance, that’s actually not their problem. Their real problem is they are trying to figure out how to buy something, pay a bill or save for their kid’s education. There’s something bigger behind that question,” High explains. “This is where we are going to see a major shift in value from what we are seeing today to where this could really go. To get to those deeper issues, AI and cognitive systems have to be able to reason deeply about the nature of the problem that’s being presented in questions or in the conversation.”
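To make the idea concrete, here is a minimal sketch of the surface-query-versus-deeper-intent distinction High describes. The mapping table, keyword rules and function name are all hypothetical illustrations, not any real Watson API; a production system would use trained natural language understanding models and follow-up questions rather than keyword matching.

```python
# Hypothetical sketch: mapping a surface-level query to candidate
# underlying goals. The categories and keyword rules are illustrative
# only; real systems learn these mappings from data.

SURFACE_TO_DEEPER = {
    "balance": ["make a purchase", "pay a bill", "save for a goal"],
    "hours": ["visit a branch", "speak to a representative"],
}

def deeper_intents(query):
    """Return candidate underlying goals hidden behind a surface query."""
    query = query.lower()
    goals = []
    for keyword, candidates in SURFACE_TO_DEEPER.items():
        if keyword in query:
            goals.extend(candidates)
    return goals
```

A conversational agent would then narrow these candidates down by asking clarifying questions, rather than simply answering the literal query.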

Reflecting on High’s comments, I can see a number of fields that can benefit immensely from such developments. Education, where AI is steadily making progress, is one of them. AI-powered chatbots are helping provide personalized content and assistance to students. However, a confused learner often doesn’t even ask the right questions. An AI assistant should be able to find the real cause of misunderstanding and confusion hidden within its interactions with users and steer them in the right direction, as a human teacher would.

IBM Watson is exploring deep understanding with floral gift retailer 1-800-Flowers. Watson powers the company’s chatbot GWYN (Gifts When You Need) and helps it detect user tone. GWYN interacts with online customers in natural language and is designed to understand the human intention behind each purchase by asking and interpreting several questions. “Through this process, GWYN goes beyond acting merely as an automated tool,” High says.

The risks and responsibilities

Earlier this year, in an MIT Technology Review op-ed, Liesl Yearsley, the former CEO of Cognea, an AI company acquired by IBM in 2014, revealed that, contrary to general perception, people are more inclined to form relationships and share information with chatbots than they are with other humans.

In the same article, Yearsley warned about the influence that even today’s simple AI agents can have on users, an effect that can become stronger as AI continues to advance.

“As enterprises deploy these conversation agents that have a more direct and personal bearing on the end user, it’s important that these organizations take responsibility for protecting the data that is being presented, maintaining privacy of that person and looking out for their best interests to ensure the user is not unnecessarily revealing details that are not relevant to the problem they are trying to solve,” High said when I asked him how he views the threats that loom ahead as these conversational agents become more ingrained in services and applications that run critical operations.

High also pointed out that companies should be transparent about whether the agent a user is interacting with is human or AI. “Not just so the end user has that clarity but more specifically to reinforce the importance that the user reveal themselves only in a way that they feel comfortable,” he said.

This is an important point as AI agents become more and more adept at mimicking human behavior. Last year, an AI teaching assistant powered by IBM Watson helped moderate an online forum for a computer science class at Georgia Tech, and most students never realized they were interacting with an AI.

“We have to be very mindful of the uncanny valley and at the same time reinforce we should never attempt to fool the end user into thinking that what they are dealing with is a real person on the other end,” High says. “Avatars will surface in different forms, some more pseudorealistic than others – and those that are pseudorealistic can be quite beneficial under certain circumstances.”

IBM Watson ran a pilot with the Australian government around Nadia, a virtual assistant platform that helps people with disabilities get information about government services.

“One of the things we discovered is that people who have hearing impairments are actually able to lip-read the avatar, and that was a great benefit for them,” High says.

“It’s very clear that for us to fulfill the objective we have for cognitive computers in amplifying human cognition, they’re going to have to have a way of interacting with us that activates our own imagination and our ability to create ideas,” High says. “That will in turn require that we employ mechanisms that feel more natural to humans – more in tune with the way we naturally communicate with each other as human beings.”

Finally, High laid out three rules that developers must adhere to in order to make sure we can reap the benefits of AI-powered chatbots and artificial intelligence in general:

  1. Never attempt to fool your end user into believing what you’re presenting is a human being
  2. Protect and respect your user’s privacy and personal rights
  3. Ensure that the data you’re operating with has been cleansed of unnecessary bias
This article was originally published on Tech Talks. Read the original article here.