Today’s market for conversational AI applications is worth billions of dollars and growing. People can stay connected to brands 24/7, sometimes without lifting a finger. Businesses can be omnipresent, offering their customers lightning-fast support, instant access to personalized information, and answers to questions over voice or text, anytime, anywhere.
AI is now in our cars, our homes, and our pockets, and as it becomes more universal, it’s starting to radically alter how work gets done, and who does it. This creates a huge opportunity for businesses to embrace conversational interfaces and offer their customers more personal and engaging experiences.
But as more companies foray into the world of conversational AI, many are failing to provide the high-quality experience customers expect. If you’ve ever chatted with a virtual assistant, you’ve probably noticed major differences in ability from one to the next. Where one virtual assistant can answer almost any question, or do something complex like find you a hotel, another might not manage even a simple task. Why are some virtual assistants great, and others so terrible? We reached out to Ian Collins, CEO of Wysdom.AI, a Canadian leader in conversational AI, to find out.
“It’s most likely the difference in performance is due to the quality of their ongoing AI training, and the tools they use,” says Collins. “It’s the people behind the machine who can ensure an AI delivers a good experience. Many companies want to build virtual assistants, but few have the ability or the desire to optimize them daily. We focus on the optimization of conversational AI rather than its creation and have built out a suite of tools and an AI optimization practice to support our clients.”
Collins points out that success comes down to daily optimization, continuously training the AI, analyzing conversations, uncovering behavioral insights, predicting and exploring satisfied vs. unsatisfied interactions, performing root-cause analysis, and prioritizing improvements. “It’s really a first in the industry, having Wysdom Studio, our set of tools that help understand the finer points of a customer’s engagement with AI.”
Collins gives an example to highlight the benefit of tools that cluster similar conversations, generate titles for topics, predict satisfaction when no feedback is given, surface the AI’s top issues, and rank how important each one is to fix.
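To make the idea concrete, here is a deliberately simplified sketch of that kind of analysis. It is not Wysdom’s actual tooling or models; the keyword lists, topic names, and negative-feedback markers are all invented for illustration. Real systems would use trained classifiers rather than hand-written rules, but the shape of the pipeline, labeling each conversation with a topic, flagging likely dissatisfaction, and ranking topics by how often they go wrong, is the same.

```python
from collections import defaultdict

# Illustrative only: hypothetical keyword rules standing in for trained models.
NEGATIVE_MARKERS = {"not working", "useless", "agent please", "didn't help"}
TOPIC_KEYWORDS = {"password": "Password reset", "billing": "Billing", "hotel": "Hotel booking"}

def classify(transcript: str):
    """Assign a topic title and a predicted-unsatisfied flag to one conversation."""
    text = transcript.lower()
    topic = next((title for kw, title in TOPIC_KEYWORDS.items() if kw in text), "Other")
    unsatisfied = any(marker in text for marker in NEGATIVE_MARKERS)
    return topic, unsatisfied

def prioritize(transcripts):
    """Count unsatisfied conversations per topic so the worst topics surface first."""
    counts = defaultdict(int)
    for t in transcripts:
        topic, unsatisfied = classify(t)
        if unsatisfied:
            counts[topic] += 1
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

chats = [
    "I can't reset my password, this is useless",
    "My billing statement is wrong, agent please",
    "Reset password link not working",
    "Thanks, the hotel booking worked great",
]
print(prioritize(chats))  # password-reset failures rank first
```

On real traffic, the ranking this produces is what tells an optimization team which fix yields the biggest gain in customer satisfaction.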
“We work with a large bank whose internal IT organization had set up a virtual assistant using a third-party toolkit. Within a few months it had accumulated over a hundred thousand customer interactions and a huge amount of negative feedback, and the company was drowning in data, unable to work out what to fix first, so it sought our help. Wysdom Studio was used to ingest the historical interactions and apply our models to prioritize improvements. After taking over the bank’s virtual assistant and optimizing it based on the historical data, the performance metrics improved, and containment rates outperformed even their most optimistic goals.”
As the demand for conversational AI increases, Wysdom has big plans to grow and serve a wider variety of large and mid-sized businesses. “We will continue adding customer channels as new technologies become viable and focus on building the best optimization tools in the market,” Collins concludes.
AI optimization, it turns out, is crucial to any conversational AI deployment: teams need to closely monitor customer activity, recognize weak points in the content, user experience, or underlying AI, and address them to increase end-user satisfaction.