
The Bias in AI Chatbots and How to Avoid It

Illustrator: Xelon Xlf

A few months ago, Google's AI language model LaMDA made the news after one of Google's engineers, Blake Lemoine, claimed it had become sentient. The claim has since been widely refuted across the AI community, and Lemoine was placed on paid administrative leave.

But that’s not all there is to that story. 

Lemoine first joined the LaMDA project as an AI bias expert, meaning he examined the model for bias "with respect to things like sexual orientation, gender, identity, ethnicity, and religion." And he found plenty of harmful biases that needed fixing.

While a truly sentient AI is, at least for now, still the stuff of science fiction, biased (and even downright racist) AIs are a very real and prevalent problem.

The Dark Side of AI

So, what exactly did Lemoine find?

In an interview with WIRED, he describes one set of experiments he conducted that found LaMDA was using harmful, racist stereotypes to refer to certain ethnicities. 

The experiment consisted of asking the AI to do impressions of different kinds of people. When asked to do an impression of a Cajun man (Cajuns are an ethnic group from Louisiana), LaMDA came up with the sentence "I'm gonna pass me a good time," which Lemoine, a Cajun man himself, found innocent enough, the kind of thing his own father might say. However, when asked to do impressions of other ethnicities, the outcomes were "less flattering" and, as Lemoine puts it, would not be endorsed by the people being impersonated. He reported those "bugs" to the team responsible for fixing them.

But LaMDA wasn't the only case, and it didn't take long for another AI-based chatbot, this time from Meta, to become racist.

On Friday, August 5th, 2022, Meta released its new BlenderBot 3 AI chatbot, a bot capable of searching the internet to chat about virtually any topic. It's not hard to guess how that could become problematic, and it did, over that very same weekend.

On the Monday following its release, after just a few days of talking to humans online, BlenderBot was already straying into regrettable territory. For example, it started spreading false information about the 2020 US presidential election (the bot claimed Donald Trump was still president) and repeating anti-Semitic conspiracy theories.

Of course, this isn't a problem unique to Google or Meta, and it didn't just come up this year.

Back in 2015, Amazon realized its AI-based hiring tool was biased against women. The tool assigned candidates a score from one to five stars, helping the company scan through hundreds of resumes and select the top talent among them. The AI had been in development, and in use, since 2014, but it wasn't until the following year that Amazon realized it wasn't rating candidates for software development and other technical positions in a gender-neutral way.

Quite the opposite: Amazon's AI learned that male candidates were preferable and started downrating resumes that included the word "women's" (for example, when a candidate had attended an all-women's college).

Another tech giant affected by biased AI was Microsoft. In 2016, it launched Tay, a Twitter bot described by the company as an “experiment in conversational understanding.” The point was for Tay to interact with people and learn, along the way, to engage in “casual and playful” conversation with them.

Unfortunately, it took less than 24 hours for Microsoft’s plan to backfire. After Twitter users began tweeting the bot with misogynistic and racist remarks, Tay “learned” from them and started repeating the same kind of hateful comments back to its followers.

How a Chatbot Becomes Biased

It might have been a bit unfair of me to call the previous section "The Dark Side of AI" when, in fact, all AI does is learn its biases from humans.

BlenderBot wasn't an issue before it started interacting with people on the internet. The same goes for Tay and all the Twitter trolls it encountered. Even Amazon's AI can't really be blamed: it was trained to spot patterns in resumes submitted to the company, and, surprise, surprise, in a male-dominated industry most of those came from male applicants.

But what exactly is happening behind the scenes of these chatbots?

AI-based chatbots learn to process and understand human language through Natural Language Processing (NLP). NLP, in turn, relies on machine learning, which is what allows the algorithm to learn from every interaction it has. Basically, learning by doing.

To kick off the process, the AI needs to be fed a dataset to learn from before it starts conversing with humans. The software is provided with a huge sample of language data, including whole sentences and phrases, as well as transcripts from real conversations about the specific topic it's supposed to handle, where applicable. The AI then uses that pre-acquired knowledge to decode meaning and intent during a conversation from factors such as sentence structure and context.
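
To make that pipeline a little more concrete, here is a minimal sketch of the "feed it labeled examples, then decode intent" step, written in Python with scikit-learn. The tiny training set, the intent labels, and the model choice are all illustrative assumptions for this article, not how LaMDA, BlenderBot, or any other production chatbot is actually built.

```python
# A minimal sketch of the "learn from a dataset, then decode intent" step.
# The sentences and labels below are made up for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_sentences = [
    "Where is my order?",
    "I want to track my package",
    "How do I reset my password?",
    "I forgot my login credentials",
    "What are your opening hours?",
    "When are you open?",
]
training_intents = [
    "order_status", "order_status",
    "account_help", "account_help",
    "opening_hours", "opening_hours",
]

# Turn sentences into numeric features, then fit a simple classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(training_sentences, training_intents)

# At conversation time, the bot maps a new message to the closest known intent.
print(model.predict(["I need to track my order"]))  # most likely ['order_status']
```

Whatever patterns, good or bad, hide in those training sentences are exactly what the model will reproduce, which is the point of the next few paragraphs.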

The datasets used to train an algorithm can be internal, but it is generally assumed that the more data the algorithm is fed, the better it will perform, because it has more information to learn from. So engineers and software developers turn not just to in-house data, but commonly also to dataset collections made available from different sources.

The thing is, no matter where your training data comes from, it always traces back to the same original source: humans. And this is where bias kicks in.

Humans are inherently biased, even if that bias is implicit or unconscious. That doesn't make you a bad person; not every bias is hurtful. But existing biases in society do tend to affect the way we speak, which in turn shapes the way we write, and what we write can eventually make its way into a machine learning dataset.

As a result, we can end up training chatbots on biased data. If we then set them loose to interact with other inherently biased human beings, some of whom are bound to be intentionally harmful, we have a recipe for disasters like Tay and BlenderBot.

Preventing Bias in AI Chatbots

If the ultimate source of AI training data is human language, and if all humans are biased, at least to some extent, then how can we strive to train unbiased algorithms?

First, we can keep humans in the loop to verify the quality of the data being used. That's a tall order, especially because larger datasets are preferred over smaller ones, but too little attention is usually paid to the actual text those datasets contain, and that is part of the problem. Humans should also step in when machine learning fails, as it did in the examples above, and try to remove bias from the learning process as far as possible.
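
To give a purely illustrative idea of what that human review step could look like, the sketch below flags training sentences containing terms from a hypothetical watchlist so that a person can inspect them before the data reaches the model. The watchlist and the matching logic are assumptions made for this example; real data audits involve context, annotation guidelines, and multiple reviewers.

```python
# Hypothetical example: flag training sentences for human review before they
# reach the model. Keyword matching alone is crude; it only narrows down what
# a reviewer needs to read.
WATCHLIST = {"stupid", "inferior", "hate"}  # made-up terms for this sketch


def flag_for_review(sentences: list[str]) -> list[str]:
    """Return the sentences a human reviewer should look at."""
    flagged = []
    for sentence in sentences:
        words = {word.strip(".,!?").lower() for word in sentence.split()}
        if words & WATCHLIST:
            flagged.append(sentence)
    return flagged


sample = [
    "Thanks, your order has shipped.",
    "People from that region are inferior.",  # should be caught and reviewed
]
print(flag_for_review(sample))  # ['People from that region are inferior.']
```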

When it comes to gender bias in language, for example, there are other ways to reduce bias in NLP, namely by modifying the training data. One such technique is called gender-swapping: the data is augmented so that for every sentence containing gendered words, an additional sentence is created with pronouns and other gendered terms replaced by those of the opposite gender. Additionally, names can be replaced with placeholders. This way, the data is balanced in terms of gender, and the AI won't learn characteristics associated with typically female or male names.
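
As a rough illustration of the idea, here is a minimal, hypothetical gender-swapping sketch in Python. The word-pair list, the name list, and the NAME placeholder are simplified assumptions; real implementations have to deal with grammar (for instance, "her" can map to "him" or "his"), casing, and proper name detection far more carefully.

```python
import re

# Deliberately tiny mapping of gendered words to opposite-gender counterparts.
# A real system needs a much larger list plus grammatical disambiguation
# (e.g. "her" can correspond to either "him" or "his").
GENDER_PAIRS = {
    "he": "she", "she": "he",
    "his": "her", "her": "his",
    "him": "her",
    "man": "woman", "woman": "man",
}

# Hypothetical list of first names to replace with a neutral placeholder.
NAMES = {"mary", "john", "alice", "bob"}


def gender_swap(sentence: str) -> str:
    """Swap gendered words and replace known names with a placeholder.
    Casing is handled naively; this is only a sketch."""
    tokens = re.findall(r"\w+|\W+", sentence)
    swapped = []
    for token in tokens:
        low = token.lower()
        if low in NAMES:
            swapped.append("NAME")
        else:
            swapped.append(GENDER_PAIRS.get(low, token))
    return "".join(swapped)


def augment(dataset: list[str]) -> list[str]:
    """For every sentence, keep the original and add its gender-swapped twin."""
    return [s for sentence in dataset for s in (sentence, gender_swap(sentence))]


print(augment(["John said he lost his keys."]))
# ['John said he lost his keys.', 'NAME said she lost her keys.']
```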

This and other methods can be effective at reducing bias in AI, but they are very time-consuming and require additional linguistic knowledge from whoever is in charge of the training. Plus, they don't solve a more fundamental issue: the lack of diversity in the AI field.

According to MIT Technology Review, women account for only 18% of the authors at AI conferences, 20% of AI professorships, and between 10% and 15% of the research staff at two of the biggest tech companies. When it comes to racial diversity, the numbers look even worse: Black workers represent only 2.5% to 4% of the workforce at those same companies.

The matter is urgent in terms of equal work opportunities, of course, but it becomes even more urgent when we consider the increasingly prevalent role AI plays in our lives. I already mentioned Amazon's gender-biased hiring tool, but there are other examples of AI perpetuating racial and religious discrimination, and even being trained on falsified, racist data. If the people overseeing the technology, still predominantly white men, remain the same, the problem is set to stay unsolved.

The AI Now Institute, an institute devoted to studying the social implications of AI, offers a few recommendations on how to improve workplace diversity in the AI industry, which is much needed to tackle the bias issue. These include closing the pay and opportunity gap, promoting diversity at the leadership level, and better incentive structures to hire and retain talent from underrepresented groups. 

Conclusion

This will sound grim, but if improving diversity in the workplace is one of the key factors in reducing bias in AI, we still have a long, long way to go.

Until then, let’s all keep doing whatever is within our reach to mitigate our own biases and keep them from making their way into our work. 

Oh, and if our online interactions with each other, whether on social media or other platforms, have even the remotest possibility of being included in an AI training dataset, let’s strive to not be online trolls and just be nice to each other. 

In the meantime, if you want to be more in control of your AI bot, check out how to train an AI FAQ Chatbot without coding, just using your frequently asked questions and answers.