1. Insights
  2. Digital Experience
  3. Article
  • Share on Facebook
  • Share via email

Machine learning security for chatbots

Posted August 5, 2021
Person typing on laptop with graphics symbolizing security

In this digital age, customers like to choose how they get in touch with a business. For many, this involves picking up the phone or sending an email, but a growing number of customers prefer self-service methods, like chatbots, for quick answers to their questions.

The benefits of chatbots are clear. Not only do your customers get the information they need quickly, but you’ll free up customer service staff so they can focus on handling the requests that need a human touch.

Chatbots can also reduce costs and improve customer satisfactions scores. Eighty-two percent of technology leaders surveyed by TELUS Digital reported cost savings of up to 50%, while nearly three-quarters (74%) of leaders saw an increase in customer satisfaction scores following the implementation of a chatbot.

While there are clear benefits to the technology, there are also challenges, including the possibility of chatbot security breaches or hacks. Below are a few key considerations to ensure your chatbot is designed with machine learning security in mind.

How are chatbots vulnerable?

The first step to avoiding security flaws is to understand how a machine learning chatbot might be vulnerable. Knowing potential issues when designing your chatbot can help you safeguard your customers and their data from the very start. Here are three chatbot security risks to look out for:

Man-in-the-middle: This type of vulnerability won’t necessarily come from your chatbot, but from a replica. Man-in-the-middle is an advanced type of phishing where a hacker creates a chatbot to look like it’s part of your business. This can lead to users handing over sensitive information or clicking a link that installs malware on their device.

Data poisoning: One of the great benefits of an AI-enabled chatbot is that it can change and adapt as it’s used. However, this can mean it’s open to data poisoning attacks if it hasn’t been trained properly. Data poisoning occurs when a chatbot is fed enough false information to assume the incorrect data to be true. Alongside incorrect information, a chatbot could also be trained to point users to malicious URLs. Unfortunately, there have been a number of cases of chatbots being poisoned with false and offensive data after trolls caught wind of security vulnerabilities.

Security flaws that lead to leaked data: Lax security protocols could leave doors open for hackers to steal sensitive information. If your employees use or work on chatbots, an unlocked phone or computer could present a risk. The information garnered could be used to maliciously direct customers or request information such as passwords. Proper policies and training for employees can help avoid these situations.

How to protect chatbots

The good news is that you can protect machine learning chatbots with the proper design process. Here are a few considerations when building your next chatbot:

Adversarial training: You can train chatbots to recognize adversarial inputs. This is a brute force solution that teaches the chatbot not to be fooled by certain images or text strings.

Defensive distillation: This encourages the machine learning model to output probabilities of classes rather than making a hard decision. This makes it more difficult for an attacker to sway the chatbot to an incorrect answer through data poisoning.

End-to-end encryption: This stops anyone but the sender and receiver from seeing a message. It’s widely used and is a very robust method of ensuring chatbot and machine learning security. It’s also vital for fulfilling legal obligations surrounding data protection.

Authentication: Authentication verifies the identity of the user. This is important to protect your chatbots from unauthorized access. Both authentication and authorization can come in a number of forms, from biometric scans (iris or fingerprint) to two-factor authentication.

Testing: Along with the training of your machine learning chatbots, it’s important to test them, too. Not only will this help you improve the customer experience, but it may also uncover potential flaws that might be exploited.

Education of employees: Training employees on basic cybersecurity practices can help avoid a range of issues. Encouraging team members to use two-factor authentication, complex passwords and secure connections can help ensure your chatbots are protected. This training, alongside the practice of locking any machine as soon as the user steps away, is an easy way to make all your systems more secure.

No matter how a chatbot is compromised, a hack could damage trust between a business and its customers. But critically, with the right security and planning in place, your chatbot can remain a powerful, and safe, conversational tool.


Check out our solutions

Leverage our proven process for adding bots to your customer service operations.

Learn more

Related insights