Heather Jones joined Geckoboard as Customer Success Champion in 2017. Here she talks about her keen interest in AI, how she built Geckoboard’s first Customer Success Bot, and how she sees it fitting in with the team going forward.

We all know what happens in Terminator: AI system Skynet creates a race of machines that take over the world and try to exterminate all humans. Sure, it’s an extreme example. Machines will not cause a nuclear apocalypse anytime soon (at least not without a human helping hand...) but the idea of our current workforce potentially being replaced by AI is certainly a hot topic.
A more positive vision is, of course, a world where automation doesn’t kill us but instead helps us prosper, like it did following the Industrial Revolution. This is the worldview in which we started considering a bot in our Customer Success team.
Why use AI in our CS team?
We’re always looking for ways to reduce friction and improve our customers’ experience. Our project started with the goal of improving the efficiency of the Customer Success team.
We knew that adding a bot to the team would allow us to delegate a few specific tasks we’ve found to be repetitive, basic, and easily solved through our self-service resources.
I’ve been interested in AI, machine learning, and deep learning since college, mostly reading various papers on the subject (lots of which can be found on deeplearning.net) and tinkering with my own bots. So, I was keen for us to build the CS bot ourselves.
Building a chatbot to work with our existing support platforms – Intercom and Zendesk – while still handling our day-to-day responsibilities supporting customers meant that we needed tools to help us build and modify the bot quickly and in a sustainable manner.
The whole team scoured the internet for tools to make the task simple. In this process, we learned a lot about important design aspects
to consider, how to craft a good conversational experience, and what success looks like for other bots.
We found a lot of great tools, but most of them did not directly integrate with Intercom. Narrowing our search, we discovered that Meya.ai directly integrated with Intercom while also providing integration for Natural Language Processing (NLP)/Natural Language Understanding (NLU) platforms like wit.ai and API.ai.
At a high level, our bot is built on Meya.ai, using a content management system (CMS). The bot parses incoming messages and decides whether or not to send a reply, based on what’s in the CMS. We use NLP and NLU to broaden our bot's capabilities. These enable it to understand what our customers are asking it and to speak in a language they are familiar with – to make our bot more conversational.
If you’re super into bots and what goes into building them, here’s a more technical explanation:
(Skip to the next section if you’d rather stick to the basics.)
The Meya platform relies on a set of objects: intents, flows, and components. Intents trigger flows; a flow is a set of states that get executed in a defined order, and a state can in turn execute a component.
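To make that model concrete, here’s a minimal, purely illustrative sketch of how intents, flows, and components fit together. The names and structures below are my own shorthand for the concepts, not Meya’s actual API:

```python
# Illustrative only: intents trigger flows, a flow is an ordered list of
# states, and a state can either say something or run a component.

def greet_component(context):
    """A component: a small piece of logic executed within a flow."""
    context["reply"] = "Hi {}!".format(context.get("name", "there"))

flows = {
    "greeting": [
        {"component": greet_component},
        {"say": "How can I help you today?"},
    ],
}

# Two intents mapped to the same flow.
intents = {"hello": "greeting", "hi": "greeting"}

def run_intent(intent, context):
    """Run each state of the triggered flow in its defined order."""
    transcript = []
    for state in flows[intents[intent]]:
        if "component" in state:
            state["component"](context)
            transcript.append(context["reply"])
        else:
            transcript.append(state["say"])
    return transcript
```

So matching the intent “hello” would run the greeting flow, executing the component first and then the static message, in order.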
To keep things simple, we’re also using a CMS feature within Meya that limits the amount of code we need to write. The CMS is organized into spaces for each topic within our bot’s scope.
![Meya.ai CMS Space Example](/content/images/Meya.ai CMS Space Example.png)
We have a handful of flows dedicated to executing components for our CMS, and others to handle requests for a human and unknown inputs.
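Conceptually, the core decision the bot makes for each message looks something like this – a hypothetical sketch assuming a simple topic-keyed CMS, not Meya’s real data model:

```python
# Illustrative sketch: the bot replies only when the CMS covers the
# detected topic; otherwise it stays silent so a human can step in.
CMS = {
    "invoices": "You can download past invoices from your account's billing page.",
    "pricing": "Our pricing plans are listed on our pricing page.",
}

def handle_message(detected_topic):
    """Return a CMS answer for a known topic, or None to escalate."""
    answer = CMS.get(detected_topic)
    if answer is None:
        return None  # unknown input -> hand the conversation to a human
    return answer
```

The important design point is the `None` branch: a narrow-scope bot should fail silently to a human rather than guess.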
Using an NLP/NLU platform was something I really wanted to incorporate, not only to ensure that unstructured inputs could be trained and handled with a high success rate, but also because we eventually want to parse inputs for entities and extract them for use in more involved processes.
With these benefits in mind, we connected our CMS to API.ai, which accepts inputs from users and matches the inputs against pre-trained models.
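The shape of what comes back from an NLU match can be sketched like this – the field names and the confidence threshold are illustrative assumptions, not API.ai’s exact response format:

```python
# Illustrative sketch: accept a matched intent only when the model is
# confident enough; otherwise treat the input as unknown and escalate.
CONFIDENCE_THRESHOLD = 0.7

def interpret(nlu_result):
    """Turn a raw NLU match into an intent plus any extracted entities."""
    if nlu_result.get("confidence", 0.0) < CONFIDENCE_THRESHOLD:
        return {"intent": "unknown", "entities": {}}
    return {
        "intent": nlu_result["intent"],
        "entities": nlu_result.get("entities", {}),
    }
```

Thresholding like this is what keeps a narrowly scoped bot from confidently answering questions it was never trained on.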
We learned through a lot of reading and research that for a bot to be successful, it should have a narrow scope of responsibility. This aligned well with our need to address only the repetitive yet simple queries we receive, such as “How do I download a copy of my invoice?”
We’re currently testing the bot, and defining success at this stage is fairly simple: we’re looking at whether the bot is providing accurate responses and only responding to the queries it is trained for. We’re also tracking how the customer experience is impacted.
Going forward, success will be measured by whether or not we see a significant reduction in the number of queries that need to be handled by humans and if our First Response Time improves.
We know that we receive specific types of queries relatively often. We also know that we have answers and guides that will empower users to answer those questions by themselves. Our hypothesis is that they reach out to us only because they didn’t know we had that content or couldn’t find it, not because they want to interact with a human.
We believe they want a solution, and speed matters the most in these cases. If we’re right and our bot succeeds within seconds to bring the right documentation in reaction to requests, customers will be delighted. If we’re wrong and what they truly care about is having a personable experience, or if the bot fails at understanding the requests and provides links to incorrect articles, customers will be unsatisfied. Therefore, Customer Satisfaction is a health metric we’re keeping a close eye on.
The bot isn’t yet accountable for anything as we’re still exploring its limitations and the best way to deploy it. If we succeed, the bot will be responsible for providing quick and accurate responses to simple yet repetitive questions, thereby freeing up human team members’ time so they can spend it writing or improving the documentation that the bot brings straight to customers.
Bots will not replace human CS champions
Since we started with a rather small scope of topics, our bot hasn’t had that many opportunities to handle customer queries. We’re working on expanding the scope so hopefully, we’ll see an impact on our metrics soon.
Geckobot’s future will depend on how it does in this phase. If everything goes well, the bot will help us with a large portion of ‘support requests’, i.e. customers asking for past invoices or presale questions about pricing, but it’s very far from replacing any human team members. In fact, inspired by this article on naming bots, we decided not to give the bot a gender or a human name so as not to pretend it’s human.
As McKinsey & Company explains, there are certain things AI can definitely do: simple Tier 1 tasks like updating information or finding how-to articles – the repetitive, labor-intensive tasks that humans generally don’t want to do anyway. The tasks AI can’t do are the ones that take more consideration, empathy, creativity, and ingenuity. For example, no customer is going to accept an apology from a robot when something goes wrong!
![McKinsey Will a Robot Take My Job Survey](/content/images/McKinsey Will a Robot Take My Job Survey.png)
The key takeaway from our experiments thus far is that rather than replacing us, bots will simply be making our working lives easier and less tedious.
Want to know more about how we built the bot? Leave your questions in the comments below!
Are you a customer support pro based in or around London? Join our London Support Lab event on 28 September and learn from your peers.