Natural language processing is improving automated customer support

By Ravi Raj

CEOs, CIOs, CMOs, and CXOs alike are increasingly focused on creating a customer experience (CX) that is more responsive, intelligent, versatile, and accurate. One potent way to ensure a continually positive experience for your hard-won customers is to incorporate automated conversational interfaces (chatbots) into your CX ecosystem.
Consumers are increasingly using AI voice assistant devices (such as Amazon Echo and Google Home) and text-based communications apps (such as Facebook Messenger and Slack) to engage with companies and each other. Yet corporations, by and large, have not leveraged the full capabilities of conversational tools such as messaging platforms and voice assistants to make it easier to interact with customers and create a positive CX. And while many companies are exploring the use of chatbots, only four percent have successfully deployed them.

Customer support implementations also have yet to tap into the full benefits of machine learning and natural language processing to improve the customer experience at a reduced cost. Both large and small businesses can do so by implementing next-generation CX tools that leverage ML and NLP-based conversational interfaces.

Basic chatbot technology

There’s no reason corporations and customer support organizations should not implement conversational AI, as there are simple solutions available that can be deployed in as little as two weeks and without hiring extra staff. Some of the baseline requirements for implementing automated conversational interfaces that drive superior customer experience include:

  • Having the kind of depth that enables the AI to understand its users, no matter how they express themselves.
  • Using long short-term memory (LSTM) networks, one of the most sophisticated deep-learning architectures, to bring the same sort of AI “horsepower” to an NLP interface that self-driving cars and package-delivering drones employ.
  • Ensuring they can be deployed without the need to write a single line of code.

While many companies are building AI-powered chatbots on a messaging or voice platform, the challenge they are facing is making the bot intelligent enough to understand and more readily respond to natural language, which is key to its success. Not everyone can deliver on the promise of providing the fundamental building blocks for conversational AI. These building blocks include natural language understanding, intent identification, information extraction, action triggers, query understanding and transformation, sentiment analysis, natural language response generation, speech processing, personalization, and more. Only recently have groundbreaking advancements in deep learning made many of these feasible.
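
To make the intent-identification building block concrete, here is a minimal, hypothetical Python sketch that scores a user utterance against example phrasings by word overlap. A production system would use a trained deep-learning model; the intents and phrases below are invented for illustration only.

```python
from collections import Counter

# Toy intent catalog: each intent maps to example phrasings.
# These intents and phrases are illustrative, not an actual trained model.
INTENTS = {
    "reset_password": ["forgot my password", "reset password", "can't log in"],
    "billing_question": ["charge on my bill", "billing question", "refund"],
    "internet_down": ["internet is down", "no connection", "wifi not working"],
}

def tokenize(text):
    return text.lower().split()

def identify_intent(utterance):
    """Score each intent by word overlap with its example phrasings."""
    words = Counter(tokenize(utterance))
    best_intent, best_score = None, 0
    for intent, examples in INTENTS.items():
        example_words = Counter(w for ex in examples for w in tokenize(ex))
        score = sum(min(words[w], example_words[w]) for w in words)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent

print(identify_intent("my internet connection is down"))  # → internet_down
```

Real NLP engines replace the overlap score with learned sentence representations, but the interface is the same: free text in, intent label out.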

AI integration in customer service

Many customer service queries can easily be resolved with an automated interface powered by AI, eliminating the need for a phone or chat-based discussion with a person. In many cases, an AI system that uses NLP to recognize user intent is configured to seek answers to a set of questions based on a decision tree. Such a system can diagnose and instantly resolve a problem — a welcome change for consumers away from their computer or frustrated by long hold times on customer support calls.

One of the most common issues raised by customers is often resolved by the most obvious solution, such as when a customer loses internet connectivity and the solution is to simply switch off the router and power it back on. A bot can relieve the consumer of the frustration of a long wait on hold by instructing them to reboot.

With an automated conversational interface, the system can almost immediately detect an unhappy customer and automatically connect them to an agent. This system can also seamlessly hand calls back to the automated interface, and vice versa, as needed. This reduces the load on call center staff, leading to lower wait times for customers. Deploying NLP-based automated interfaces results in significantly lower support costs and improved customer satisfaction.
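
The detect-and-hand-off behavior can be sketched with a crude word-list sentiment check; a real system would use a trained sentiment model, and the word list and threshold here are assumptions for illustration.

```python
# Crude lexicon-based sentiment check used to decide on human handoff.
# A production system would use a trained sentiment model; this is a sketch.
NEGATIVE = {"angry", "frustrated", "terrible", "useless", "cancel", "worst"}

def should_escalate(message, threshold=1):
    """Escalate to a human agent when enough negative words appear."""
    hits = sum(1 for word in message.lower().split() if word.strip(".,!?") in NEGATIVE)
    return hits >= threshold

def route(message):
    return "human_agent" if should_escalate(message) else "bot"

print(route("This is useless, I want to cancel!"))  # → human_agent
```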

Agent assist technology

Another use case for an AI-based automated interface is “agent assist,” which has applications in the contact center business and other enterprises. Today, companies have to support an ever-increasing volume of products, documents, and information, and must adapt to the constant software updates to stay current on the various releases, features, bugs, and troubleshooting methods.

An “agent assist” automated conversational interface helps the support staff answer questions accurately when a customer calls with a problem. With machine learning and integration with CRM and help desk systems, the system learns customer and agent data so agents are better equipped to quickly resolve more issues.
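
One simple way to picture agent assist is as a retrieval step over a knowledge base while the agent is on a call. The sketch below assumes a tiny invented FAQ list and naive word-overlap scoring, not any actual Passage.AI mechanism.

```python
# Hypothetical knowledge-base lookup that suggests answers to the agent
# while they are on a call; the articles and scoring are illustrative only.
KNOWLEDGE_BASE = [
    ("How do I reset the router?", "Power cycle the router: off, wait 10s, on."),
    ("How do I change my plan?", "Plans can be changed from the account page."),
    ("Why is my bill higher this month?", "Check for prorated charges after a plan change."),
]

def suggest_answer(query):
    """Return the KB answer whose question shares the most words with the query."""
    q_words = set(query.lower().split())
    best = max(KNOWLEDGE_BASE, key=lambda kb: len(q_words & set(kb[0].lower().split())))
    return best[1]

print(suggest_answer("customer asks how to reset the router"))
# → Power cycle the router: off, wait 10s, on.
```

In practice the KB would be the company's document corpus, and the overlap score would be replaced by a learned relevance model fed with CRM and help desk context.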

The components of an effective chatbot

An effective automated conversational interface must work seamlessly across a range of messaging and voice-based platforms while being easy to configure without the need for computer programming. It should offer an intuitive way to specify intents, attributes, and entities, make it easy to input knowledge base documents, and allow webhooks to interface with various databases and systems.

The system should use the best and latest machine learning and deep learning algorithms to constantly learn, improve, and understand multiple languages. Moreover, it should seamlessly pass incoming customer calls to a human when necessary, and pick them back up just as easily, remaining sensitive to the sentiments of users. Finally, it should provide a rich set of analytics to help understand, train, and improve the system.

The reality of creating a superior, cost-effective customer experience lies not only in AI-driven automated conversational interfaces but also in the hands of those who can now easily and swiftly deploy them to dramatically improve customer support.

Ravi N. Raj is chief executive officer and cofounder of Passage.AI, a platform that provides the AI, NLU/P, and deep learning technology as well as the bot building tools to create and deploy a conversational interface for businesses.

A New Age of Intelligent Conversational Interfaces

By Mitul Tiwari

Two trends are driving massive changes in the technology industry today: First, the rise of conversational mediums such as messaging platforms like Facebook Messenger and smart speakers like Amazon Echo. And second, the recent and disruptive advancements in Deep Learning and Artificial Intelligence.

Messaging Dominance in Mobile

There are more than 2.5 billion smart mobile devices in the world, and people spend more than 80% of their screen time on them. Messaging is the dominant activity on mobile and, in fact, the fastest growing one. The number of users on some messaging platforms is staggering: Facebook Messenger and WhatsApp each have more than 1 billion users. Very recently, messaging has also surpassed social networks in usage.

Messaging as a Platform

Recently Apple announced that iMessage is opening up for developers, following last year’s trend in which Facebook Messenger, Slack, Twitter Direct Messages, Skype, Kik, Telegram, and others opened up for the development of interactive applications, or “bots”. Bots now have mobile-native platform capabilities such as location, voice, camera, and images. Businesses finally have a way to build conversational interfaces and interact with their customers on the platforms where those customers spend most of their screen time.

Conversing with a device is a reality (picture from the movie “Her”)

Rise of Voice Platforms

Smart speakers like Amazon Echo with Alexa came out a couple of years ago and are spreading like wildfire. Google Home with Assistant, Microsoft Cortana, and Apple HomePod with Siri have also joined the market. In 2017, 35M people are expected to interact with smart speakers. Furthermore, LG is embedding Alexa in its refrigerators, Ford is putting Alexa in its cars, and Ecobee is adding a voice interface to its thermostats and light switches. Very soon almost all homes and cars will have a smart speaker with a voice assistant. All these voice platforms are open for developing conversational applications such as Alexa skills or Google Home actions.

Conversational Artificial Intelligence

Building a bot on a messaging platform or voice platform is not that difficult but making the bot intelligent enough to understand natural language and to respond naturally is non-trivial. Some of the fundamental building blocks for conversational AI are natural language understanding, intent identification, information extraction, action triggers, query understanding and transformation, sentiment analysis, natural language response generation, speech processing, personalization, etc. Many of these conversational AI building blocks are feasible now because of groundbreaking advancements in Deep Learning.

Some of the building blocks of conversational AI
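
To make the information-extraction building block concrete, here is a minimal rule-based sketch that pulls an order number and an email address out of free text. Modern systems learn such extractors with deep models; the patterns below are illustrative assumptions.

```python
import re

# Sketch of rule-based information extraction: pull an order number and an
# email address out of free text. The patterns here are illustrative assumptions.
ORDER_RE = re.compile(r"\border\s*#?(\d{5,})\b", re.IGNORECASE)
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def extract_entities(text):
    entities = {}
    order = ORDER_RE.search(text)
    if order:
        entities["order_id"] = order.group(1)
    email = EMAIL_RE.search(text)
    if email:
        entities["email"] = email.group(0)
    return entities

print(extract_entities("Order #12345 hasn't arrived, contact me at jo@example.com"))
# → {'order_id': '12345', 'email': 'jo@example.com'}
```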

Deep Learning and Natural Language Processing

In traditional machine learning, humans analyze data and design features, and machine learning optimizes a function to combine the features. On the other hand, in deep neural networks a.k.a. deep learning, the network learns multiple representations of the data and eliminates the need for complex feature engineering. In the last few years deep learning has been successfully applied in various Natural Language Processing (NLP) tasks such as language translation, text summarization, image captioning, information extraction, question-answering, speech recognition, etc.

At Passage AI, we are using the latest deep learning technologies to build an NLP engine that includes various conversational AI building blocks to understand natural language text and speech, to identify intents, to extract useful information, to understand and transform queries, to search over vast amounts of data, and to create an intelligent conversational interface. To deliver that technology to businesses, we created a bot builder platform that anyone can use to build an intelligent bot, without coding, in a simple drag-and-drop fashion. And most importantly, to ensure that bots can reach the largest audience, we built a pipeline to deploy an intelligent bot using our bot builder across multiple messaging and voice platforms with ease: build once, deploy anywhere.

Passage AI’s Bot Builder Platform

We are thrilled that Passage AI already offers the complete package for a conversational interface building platform: (1) a powerful NLP engine, (2) a bot building platform, and (3) a pipeline to deploy across multiple messaging and voice platforms.

Contact us to learn more and see a demo!

Next Generation Customer Service

By MuckAI Girish

Customer service, a key function of any successful business, can make or break a company. Customer dissatisfaction leads to defections and a tarnished brand. On the other hand, effective customer service helps companies differentiate themselves with a superior value proposition while increasing customer retention and improving metrics such as net promoter score (NPS), customer satisfaction (CSAT), and customer effort score (CES).

Consumers are increasingly using voice (such as Amazon Echo and Google Home) and text-based (such as Facebook Messenger and Slack) communications, as well as a variety of devices and applications, to engage with companies and each other. Yet corporations by and large have not leveraged the full capabilities of modern conversational tools, like messaging platforms and voice assistants, to make it easier for their customers to interact with their customer support teams. Nor have customer support implementations tapped into the full benefits unleashed by machine learning (ML) and natural language processing (NLP) to improve customer experience at a reduced cost. Both large and small businesses can improve their customer experience and reduce costs by implementing next-generation customer support that leverages ML and NLP-based conversational interfaces.

Many customer service queries include the same questions or concerns voiced repeatedly, which can easily be resolved by an automated interface powered by AI, eliminating the need for a phone or chat-based discussion with a person. In many cases, an AI system uses NLP to recognize user intent and is configured to seek answers to a set of questions based on a decision tree. These systems can often diagnose a problem and instantly provide a resolution — a welcome change for customers not in a position to use their phone or frustrated by the long hold times so common on such calls.

For example, when Internet access goes down, often the solution is simply to switch off the modem and power it back on. However, many customers call their broadband provider, endure long wait times until they get connected, and then discover that all they had to do was this simple step. With an automated conversational interface, this could have been accomplished immediately. When the system detects an unhappy customer, it automatically connects them to an agent. This system can also seamlessly hand calls back to the automated interface, and vice versa, as needed. This reduces the load on call center staff, leading to lower wait times for all customers. Deploying NLP-based automated interfaces results in significantly lower support costs and improved customer satisfaction.

Another use case for an AI-based automated interface is agent assist, which has applications in the contact center business as well as in other enterprises. Businesses must not only cope with an ever-increasing volume of products, documents, and information, but also adapt to constant software updates, forcing support staff to stay current on the various releases, features, bugs, and troubleshooting methods. An automated conversational interface can help support staff answer questions accurately when a customer calls with a problem. With machine learning and integration with CRM and help desk systems, the system can be trained on customer and agent data, leaving agents better equipped to resolve more issues more quickly. The company can hire higher-quality personnel and train them on aspects that are harder to automate or program, leading to a more effective customer service staff and improved employee retention.

Let us examine the key attributes of such a system. It should work seamlessly in a range of messaging and voice-based platforms — something customers are used to — while being easy to use and configure, without the need for computer programming. It should have an intuitive way to specify intents, attributes, and entities; it should be easy to input knowledge base documents; and it should allow webhooks to interface with various databases and systems. It should use the best and latest machine learning and deep learning algorithms to constantly learn, improve, and understand multiple languages. Moreover, it should seamlessly hand calls off to a human, and pick them back up just as easily, while remaining sensitive to the sentiments of users. Finally, it should provide a rich set of analytics to help understand, train, and improve the system.

At Passage AI, we are committed to helping you take customer service to the next level working across the myriad of text and voice-enabled platforms. The Passage AI solution offers an easy-to-use and intuitive UI, intent, entity and attribute-specification methods, API-based architecture — all while leveraging state-of-the-art deep learning algorithms to enable superior customer service. If you would like to learn more about Passage AI, please visit us at www.passage.ai.

AI/NLP’s Role in Operational Effectiveness for Private Equity Portfolio Companies

By MuckAI Girish

Private equity (PE) has been consistently outperforming other asset classes and has played an ever-increasing role in reshaping global industries. According to Bain & Company’s 2017 Global Private Equity Report, PE firms raised $589B in 2016, buyout dry powder reached an all-time high of $534B, and the average US acquisition multiple (purchase price to EBITDA) rose to its highest level, 10.9, in the third quarter of 2016. The report further notes that more than two thirds of portfolio companies did not achieve projected EBITDA margin expansion over the holding period. Finding and tuning additional cost-reduction levers continues to be high on the list of priorities.

A typical private equity firm has portfolio companies operating in a variety of industries and verticals. Beyond shared functions such as finance, HR, and IT, customer support across these companies falls into two key categories: consumer and business. A PE firm can leverage the commonalities within each segment to streamline operational effectiveness. Customer service has become a major part of the value proposition in today’s hyper-connected digital world. Users expect seamless, frictionless, and instant answers and resolutions to their questions and problems. The contact center industry has evolved from an onshore human customer service agent model to a hybrid onshore and offshore model to the use of omnichannel solutions. Though this evolution has certainly helped improve productivity and reduce costs, costs still scale linearly with the number of calls. For PE firms looking to contain costs while increasing customer satisfaction scores, limiting this linear scale-up of costs is an attractive option.

One of the avenues by which PE portfolio companies can achieve the desired scalable cost structure is to implement automated chatbots that supplement customer service representatives for an experience commensurate with digital-age expectations. Powered by an AI/NLP (artificial intelligence/natural language processing) engine, chatbots offer instant gratification to customers by providing immediate responses, expanded hours of customer support availability, and a higher quality customer experience. In addition, they allow the company to handle heavy call volumes during peak seasons — for example, between Thanksgiving and New Year for e-commerce and retail companies and brands.

We believe that an AI/NLP-based, well-trained and trainable text- and voice-based conversational chatbot would address many of the challenges faced by customer support teams. Support over multiple platforms (the desktop or mobile web, messaging platforms such as Facebook Messenger and WeChat, and voice assistants such as Amazon Echo and Google Home) would enable vast coverage and frictionless, ubiquitous access for users. By integrating seamlessly with various IT systems through webhooks such as REST APIs (Representational State Transfer Application Programming Interfaces), the resulting solution becomes a very powerful and effective communication tool.
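
A webhook integration of this sort can be sketched as a small REST call from the bot to a backend system. The endpoint path and payload fields below are hypothetical, not a real Passage AI or help desk API.

```python
import json
import urllib.request

# Sketch of a webhook call from a chatbot to a backend system over REST.
# The endpoint URL and payload shape are hypothetical examples.
def build_ticket_payload(user_id, intent, message):
    """Shape the conversation state into a JSON body for a help desk API."""
    return {"user": user_id, "intent": intent, "description": message}

def create_ticket(base_url, payload):
    """POST the payload to a hypothetical /tickets endpoint."""
    req = urllib.request.Request(
        base_url + "/tickets",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # network call, not exercised here
        return json.load(resp)

payload = build_ticket_payload("u-42", "internet_down", "No connectivity since 9am")
print(payload["intent"])  # → internet_down
```

The point of the webhook layer is that the bot stays declarative: it gathers intent and entities, and the integration code translates them into whatever each IT system expects.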

In our experience rolling out AI/NLP conversational interfaces for Global 2000 companies, we find a number of common elements within a vertical such as retail, online education, or broadband/wireless service. For example, many of the intents through which users interact with a business are the same across companies in that industry. Likewise, there are commonalities in the service paradigm, SLAs (service level agreements), and interfaces with IT systems across the portfolio companies of a private equity firm. Though PE portfolio companies are typically in various stages of IT integration, their goal is to have a consistent and uniform experience over time. In addition, a private equity company can leverage economies of scale in implementing a chatbot solution to enhance customer experience across its entire portfolio. By combining the best of human power and AI power, private equity portfolio companies can significantly reduce costs and improve customer experience, while allowing the PE firm to have a cost-effective and consistent service offering.

For more information about Passage AI, please look us up at: http://www.passage.ai

Our Kubernetes Journey at Passage.ai

By Deepak Bobbarjung

Introduction

At Passage AI, we offer a platform to build, manage, and deploy AI-powered chatbots. Our technology stack consists of a set of microservices that handle both message processing via AI and bot configuration and management. In this blog, I will describe our journey of migrating our microservices from AWS ASGs (auto scaling groups) to Kubernetes.

During the early days of the company, we created a Jenkins CI/CD pipeline based on multi-branch pipelines and moved to a model where we hosted each microservice in our architecture as an AWS Auto Scaling Group that could be configured to scale up or down based on load. Each code deployment of a microservice would involve updating the code on the current instances of the ASG and then also updating the AMI/Launch Configuration backing that ASG. While this worked, we found that the deployment process took a long time, especially during AMI updates. Further, the deployment process would fail intermittently during AMI updates, which meant that when the ASG tried to scale up, the newly spun-up instances would often run an outdated version of the code, resulting in inconsistent behavior.

The Challenge

During those early months, we kept getting requirements from the business team to be cloud agnostic, for the following reasons:

1. In the field, we would encounter customers who would work with us only if we were hosted on a particular cloud, or in a particular region of a particular cloud.

2. We would encounter customers who were concerned about moving to the cloud and would ask us whether we could provide an on-prem solution that they would host themselves.

3. As a startup, we had credits with all major cloud providers that expired at different times. We wanted to be able to leverage the credits we had across all our cloud provider partners, not just our AWS credits.

So while sticking to one cloud provider is a reasonable choice in the very early days of a startup, in our case, we concluded that it was in our long term interest to design our infrastructure to be cloud agnostic.

However, designing infrastructure to be cloud agnostic comes with a set of challenges. The way you configure and manage scalability, redundancy, and load balancing of traffic (to name just a few aspects of microservice management) differs significantly from one cloud to another. This has the following implications across the engineering organization:

1. The CI/CD tooling we write to deploy new code would differ based on the cloud we are deploying to.

2. Microservice owners in charge of reliability and scale for their microservice would have to learn the APIs and processes to configure auto scaling and reliability on each of the clouds that we support.

3. Developers would need to understand the process of testing and debugging their code on all major cloud providers. Integration tests and stress tests would need to be written to potentially test against code running across multiple clouds.

For a small company like ours, the above concerns make it virtually impossible to run our infrastructure across multiple cloud providers. We want most of our engineering team to contribute to our core product, which is our natural language processing (NLP) and bot builder platform, rather than write devops automation to support three or more cloud providers. And finally, even if we somehow solved the above challenges, we would still not acquire the ability to host our services in our customers’ on-prem datacenters.

The Solution

Given the above concerns, it made sense to explore a way to deploy and manage our fleet of microservices in a cloud-agnostic way. After doing some due diligence, we chose to move all of our services from AWS ASGs to Kubernetes. Kubernetes acts as an abstraction layer that can run on any of the major clouds. It can also run in our customers’ on-prem datacenters if necessary. Once we brought up Kubernetes clusters for our different environments (integration, staging, and production), we switched all our CI/CD tooling to deploy our microservices onto a Kubernetes cluster rather than to a specific cloud. Similarly, we configured scale and high availability using Kubernetes commands and scripts rather than scripts or tools specific to a cloud. Yes, microservice owners and developers are now required to learn the constructs needed to deploy, debug, and configure availability and autoscaling of their microservices in Kubernetes. On the upside, they do not have to care about the underlying cloud infrastructure that Kubernetes is running on. Given the rise of Kubernetes as the de facto standard for orchestrating and scheduling microservices, we anticipate that our development team will benefit from adding a bit of Kubernetes knowledge to their toolbelts.

The Transition

I won’t claim the transition from AWS ASGs to Kubernetes was easy. The challenge was that we had bots in production with hundreds of thousands of users per day, and we had to make the transition while ensuring zero downtime to our existing customers.

To support our AWS ASG pipeline, we had written Jenkinsfile pipelines for each of our microservices that would deploy the latest code to AWS ASGs in staging and production environments and then also update the AMI for the staging and production ASGs.

The plan was to create a new integration (INT) environment in K8s only, then move our AWS staging environment to K8s, and finally move our AWS production environment to K8s. These were the steps we followed to transition to Kubernetes.

1. Bring up Kubernetes clusters for our 3 new deployment environments — integration, staging and production.

2. Create a common docker registry for all the environments and dockerize all of our existing microservices.

3. Create configmap, deployment and service files for each of our microservices and for each of our environments.

4. Deploy nginx reverse proxy on each of our environments with routes configured for each of our microservices.

5. Change our Jenkinsfile pipeline to create/apply a Kubernetes deployment from the latest docker image during the integration and staging deployments instead of updating the corresponding AWS ASG.

6. Switch the routes for our external microservices running on staging to point to the new routes exposed by nginx running in the new staging environment. Create new routes for our integration environment and point them to the K8s integration environment.
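
The manifest-creation step above can be pictured as templating one Deployment per service and environment. The service name, registry URL, and replica counts below are illustrative assumptions; in practice these manifests would typically be YAML files checked into the repository.

```python
import json

# Sketch of generating a per-environment Kubernetes Deployment manifest
# from a template. Names, registry, and values are illustrative only.
def deployment_manifest(service, env, image_tag, replicas):
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": f"{service}-{env}"},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": service, "env": env}},
            "template": {
                "metadata": {"labels": {"app": service, "env": env}},
                "spec": {
                    "containers": [{
                        "name": service,
                        # Same image across environments; per-environment
                        # configuration comes from ConfigMaps instead.
                        "image": f"registry.example.com/{service}:{image_tag}",
                    }],
                },
            },
        },
    }

manifest = deployment_manifest("nlp-engine", "staging", "v1.4.2", replicas=3)
print(json.dumps(manifest["metadata"], indent=2))
```

Keeping the image identical across environments and pushing differences into ConfigMaps is what later lets the same build be promoted from integration to staging to production unchanged.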

The above steps effectively switched our integration and staging environments from AWS ASGs to Kubernetes. We then let things simmer for some time, allowing our engineering team to familiarize themselves with Kubernetes. During this time, we were still running our production environment on AWS ASGs. This was not an ideal scenario, but it was worth it, as it gave us time to understand several considerations of running our code on Kubernetes, such as sizing, performance, and latency. We were able to conduct several stress tests and monitor performance. These tests gave us the confidence that we were finally ready to switch over our production environment to Kubernetes.

Switching Production to Kubernetes 

The major concern with moving our production environment to Kubernetes was ensuring zero downtime for our production users. We identified an order in which our microservices could be switched over in production. This order guaranteed that at any given step, it was acceptable for some of our microservices to have been switched over to K8s while the rest were still running in the original AWS ASG environment. We also set up a mongo mirror to continuously replicate our MongoDB database from the AWS East region to the Azure region where we were hosting our Kubernetes environment.

Following the above processes allowed us to perform the migration with truly zero disruption to our customers and their users. We now have all of our microservices running on Kubernetes on all our environments.

Reaping the benefits

Transitioning to K8s has allowed us to become cloud agnostic, giving us the ability to deploy our services on any cloud or in on-prem datacenters. We are also realizing the benefits of dockerizing our microservices — for example, our CI/CD pipeline takes much less time now that we are creating a docker container with every code deployment instead of creating a new AWS AMI. Scaling up in Kubernetes is fast and reliable — new docker containers can be created in a matter of seconds, as opposed to spinning up instances from AMIs, which would take minutes. Using configmaps also allows us to use the exact same image for a microservice across all our environments, whereas previously we had to maintain different AMIs for staging and production.

Conclusion

We realize we have only scratched the surface in terms of leveraging all of Kubernetes’ capabilities. We have several exciting features on our devops roadmap, including blue/green deployments, developer sandboxes, scheduled jobs, collecting microservice metrics by setting up API proxies via sidecar containers, and exploring the training of machine learning models via Kubeflow. We are excited about the possibilities and will continue to leverage Kubernetes to provide additional value to our developers, allowing them to focus the bulk of their efforts on building the best platform for AI-powered chatbots.

For more information about Passage AI, please look us up at: http://www.passage.ai