Since publishing this episode, we've rebranded to TELUS Digital.
On this episode, we discuss algospeak — online language meant to evade algorithmic detection — and how you can keep up. Listen for the compelling perspectives of Dr. Jamie Cohen, assistant professor at Queens College, City University of New York, and Siobhan Hanna, former vice president and managing director of AI Data Solutions at TELUS Digital.
A combination of 'algorithm' and 'speak,' algospeak is the collection of codewords, slang, deliberate typos, emojis and substitute words that sound like, or carry a meaning similar to, the intended term.
The use of algospeak raises important questions for brands with online communities.
Show notes
Learn more about our algospeak survey that is referenced in the episode: Survey: 'Algospeak' on the Rise in Attempt to Avoid Automated Content Moderation
Guests
Dr. Jamie Cohen
Assistant professor at Queens College, City University of New York
Siobhan Hanna
Former vice president and managing director of AI Data Solutions at TELUS Digital
Transcript
Robert Zirk: What comes to mind when you hear the word "panorama"?
Now, consider this context.
You're watching a short video. It's a "panorama sourdough recipe," and you're a bit confused. I mean, this loaf doesn't look any wider than any other sourdough bread loaf you've seen.
You swipe up. But the next video doesn't sit right.
This person is spreading misinformation about the "panorama." And you've seen reliable sources debunk this, but the number of shares on this post is concerning. You report it.
The next video talks about working from home during the "panorama".
Or maybe it's the "panini". Or maybe it's the... "Backstreet Boys reunion tour"?
What they're really referencing is the COVID-19 pandemic.
These are examples of algospeak, an online extension of our continuously evolving language. And on today's episode of Questions for now, we'll ask: what is algospeak and how can brands keep up?
Welcome to Questions for now, a podcast from TELUS International where we ask today's big questions in digital customer experience. I'm Robert Zirk. Before we get into keeping up with algospeak, we need to define algospeak, and somewhat ironically...
Dr. Jamie Cohen: It's a fun word because it's actually algospeak. The word is a form of algospeak.
Robert Zirk: That's Dr. Jamie Cohen, Assistant Professor of Media Studies at CUNY Queens College, and his area of focus is on internet culture.
Dr. Jamie Cohen: Algospeak is shorthand for algorithmic speech, which is just a way to move around an automated algorithmic system. To put it in better terms: we use an internet that's content moderated.
Though we think the internet is basically open and full of information, we're actually watching it through the filter of content moderators, which are sometimes human based and sometimes AI based.
Robert Zirk: And they work in tandem to screen this content for anything inappropriate for a general audience.
AI does a lot of the heavy lifting in filtering out a lot of the clear terms of service violations. But when users use or change their language beyond the words that are in the dictionary, it can get a little tricky.
Dr. Jamie Cohen: So it ends up being not seen by the AI, or they end up finding some ways of changing some of the letters so it's invisible to the screen, or even scratching them out. But algospeak is one step further.
That is when users start adopting, in verbal speech, the fake language they made to avoid the algorithm. So now they start saying the words that were made for the screen to avoid content moderation.
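To make the evasion concrete, here is a minimal sketch of a naive keyword blocklist and how simple algospeak substitutions slip past it. This is purely illustrative, not any platform's real moderation code; the banned term and codeword are borrowed from the episode's "panorama" example.

```python
# A toy keyword filter: flag text containing any exact banned word.
BANNED = {"pandemic"}

def naive_filter(text: str) -> bool:
    """Return True if the text trips the keyword blocklist."""
    words = text.lower().split()
    return any(w.strip(".,!?") in BANNED for w in words)

print(naive_filter("working from home during the pandemic"))  # True: exact match is caught
print(naive_filter("working from home during the panorama"))  # False: the codeword sails through
print(naive_filter("working from home during the p@ndemic"))  # False: one swapped character is enough
```

Real moderation stacks layer fuzzy matching, machine-learned classifiers and human review on top of exact matching, which is exactly why algospeak keeps having to evolve.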
Robert Zirk: And while the word "algospeak" itself might be a relatively new term, this concept predates the internet. It's part of how we as humans communicate.
Dr. Jamie Cohen: A lot of the keywords or terms that we use are filtered through would be considered the Overton window. Like, what is acceptable speech? What can you say out loud? What words are okay where you don't get in trouble for them?
So people have figured out how to mismanage language. Mismanagement of language is part of the human condition. It's how we communicate. It's part of shorthand and making sure that the in-groups understand the information. Before the internet, we've always been using word replacement, or at least word collapsing, or a nicknaming that sticks.
One example that linguists often give is that the term "hussy" comes from the word housewife. And so it's turned from something that was pretty standard into something derogatory over time. We don't always know that these words have origins or etymologies that predate terms we're used to. P. T. Barnum, basically the guy who came up with modern forms of advertising, used language so different that we'll never use it today.
But at the time, it was, like, the most important way that people interacted with content. And so, neologisms, meaning the invention of new words, are very much part of how humans have created new ways of reflecting things, which the public then adopts. Over the course of time, pre-internet, we've either tightened or loosened our language depending on what communities we're in.
And then post-internet, now we have machines that tighten or loosen the language we interact with.
Robert Zirk: So why do people use algospeak?
Siobhan Hanna: Culture, youth culture, trends, meme culture, and then in some instances, there are certain groups within culture that may be influencing language for less benign reasons.
Robert Zirk: That's Siobhan Hanna, managing director and vice president at TELUS International AI Data Solutions. It's the AI training data division of TELUS International, and Siobhan and her team work with brands to leverage AI in ways that create a safer internet for all.
Siobhan Hanna: So it could be hate speech, or political groups, or political cultures could be incubating their own algospeak. And then I think safety can drive the genesis of algospeak in some instances as well, because there may be the need to be evasive in communication and it can be designed to mask meaning.
Robert Zirk: So from a brand perspective, is algospeak a bad thing?
Siobhan Hanna: I certainly don't believe that algospeak is all bad, and of course I know you're asking rhetorically. I think it's relatively neutral.
There are instances where of course it's problematic, whether that is because it's offensive or could result in a safety, a harm issue. But I certainly think, actually, it can help to create affinity in some ways. It actually can help to reinforce safety in some ways. I just think it's organic, right? I think it's just part of how language has always evolved. This is just a different means of how that is happening.
Robert Zirk: And Dr. Jamie Cohen concurs that whether algospeak is a good or bad thing depends entirely on how it's being used.
Dr. Jamie Cohen: I think algospeak is a very savvy way of interacting with this content. Trust and safety is a very important part of all social media platforms. And I think without that, safety is the biggest concern. Not just for people, but for brands. You don't want your advertisement showing up next to something that's wildly inappropriate, or even marginally inappropriate.
When people come up with algospeak, they're actually figuring out ways to continue being expressive and using the platform in their style while simultaneously being aware that safety exists. So I think they work in tandem, but I think without algospeak, the platform actually becomes stale. I think the update of language is actually pretty important to the update of internet culture, and internet culture moves way faster than standard culture, so it's very interesting to watch that.
And what happens with language, especially internet cultural language, is it becomes a meme, and the only way to feel like you're attached to that content is by repeating it because you know that the person watching the video already knows what you mean.
Robert Zirk: A 2022 TELUS International survey indicated that 44% of Americans use social media to make their opinions known on societal issues, and Siobhan noted the well-intentioned use of algospeak has helped to drive awareness and even make some progress.
Siobhan Hanna: Algospeak helps them to find affinity, to express themselves, and to understand landscapes and where they find their people. I think we all are aware of instances where there have been safety issues, whether it's a natural disaster or whether it's a cultural event where certain groups maybe were marginalized, and you could identify them and provide support based on their use of certain terminology or algospeak.
Robert Zirk: And it can build a sense of community among marginalized folks, which is so important, especially when they feel isolated or uncomfortable being themselves in their day-to-day life offline.
Dr. Jamie Cohen: One nice thing about algospeak is it helps young folk who are struggling with identity see other people speaking openly about their identity. It is the bridging of disparate people who may not have felt like somebody was like them, and using social media the way it's supposed to be. That's why it is social media. It does create community. It creates an ability for others to know that you're not alone.
Robert Zirk: It's that sense of connection that the internet makes possible.
Dr. Jamie Cohen: And it allows you to communicate or connect across vast spaces and cultures. It helps people specifically in like, not just marginalized folk themselves, but places of less privilege. People who can't actually speak out loud in their community, like you mentioned. People who can't speak out loud at work.
People who use censored language just in everyday speech just to remain safe can now find a space to exhibit this online. What's nice about the reverse is that when they learn the coded language, the algospeak, they might learn where those in-groups may exist. And just by using those keywords, they're kind of signaling to people who already know these keywords and now allowing them to find their community offline as well.
Robert Zirk: But algospeak can just as easily be used to spread hatred, misinformation and illegal content.
Siobhan Hanna: It really can be a safety issue. Online platforms have a responsibility to do their very best to help ensure the safety of the platform's users. So when there is a safety element, they have a responsibility to evolve those guidelines.
And that is a very serious focus within content moderation, and it has a lot of considerations around training, around quality and around the evaluation and evolution of guidelines.
Robert Zirk: So if algospeak tricks the algorithms, why doesn't it trick us? How does a coded language become generally understood?
Siobhan Hanna: Algorithms are informed by artificial intelligence, and before there's artificial intelligence, there's human intelligence. And then there's the fact that, you know, as humans we have the ability to understand nuance and context. That context could be cultural or geographical, or it could be any form of context.
Robert Zirk: And all of these things shape the language of the internet.
Dr. Jamie Cohen: It has a grammar, it has a function, and you don't have to be terminally online to get it. But it does help to see other people using it. And I think memes are the answer here.
I think people lift language, place it into graphical language, which are memetics or memes, and post it on pages, and then it becomes slightly out of context of the original speaker. And people ask questions. What does that mean? And then that becomes, again, that in-group/out-group sensibility. "Oh, you don't know what that means?" Or you ask, or in the caption it tells you. So there's a way of understanding or learning it.
Robert Zirk: But once algospeak is introduced to the public...
Dr. Jamie Cohen: ...the algorithm updates to block that algospeak. So it is a recursive problem where, if the algorithm learns you've replaced the word and that word eventually starts to hold that meaning at the same value, the algorithm's gonna start blocking that word too.
Robert Zirk: So does this just become a continuous game of cat and mouse?
Dr. Jamie Cohen: I think most platforms assume that when you create a version of algospeak, it's coded enough not to bother the general safety of the platform. So I think most algospeak does stop at level one.
Robert Zirk: That being the first level of content moderation, whether it can make it onto the platform from the outset.
Dr. Jamie Cohen: But that being said, we are now in... whew, I wouldn't even know what language we're in. We're probably in internet language eight or nine, where LOLcats or textspeak used to be language one.
So text speak was a form of algospeak back when we had T9 typing and LOL was 5, 5, 5, 6, 6, 6, 5, 5, 5, you know, so you had to, like, type that letter out.
Robert Zirk: Or even pagers had their own lexicon too, right?
Dr. Jamie Cohen: Absolutely. And so, those are like coded tech so you could get language across digital spaces.
And that was like the first version of internet language: txtspk (text speak), T-X-T-S-P-K. And then we created emojis to collapse that language. And then we created memes, and then we created LOLcats, then we created doge, and then we created the language we live in today.
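For readers who never used a numeric keypad, the entry scheme behind the "5, 5, 5, 6, 6, 6, 5, 5, 5" example can be sketched as below. Strictly speaking this is multi-tap entry, which predates T9's one-press predictive mode; each letter is spelled by pressing its number key repeatedly.

```python
# Standard phone keypad: each number key carries a group of letters.
KEYPAD = {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}

def multitap(word: str) -> str:
    """Spell a word as its sequence of multi-tap keypresses."""
    presses = []
    for letter in word.lower():
        for key, letters in KEYPAD.items():
            if letter in letters:
                # position in the group = number of times the key is pressed
                presses.append(key * (letters.index(letter) + 1))
    return " ".join(presses)

print(multitap("LOL"))  # "555 666 555"
```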
Robert Zirk: So in a way, algospeak is really accelerating the change in our language more than maybe we would organically have.
Dr. Jamie Cohen: Without a doubt. Yeah, algospeak, because it is verbal and visual simultaneously, has more of an effect than your typical graphical language. The one note I always try to remind people of is, like, sarcasm. Sarcasm can't be translated in text very easily. You do have to signal it or at least give somebody a tone warning, like, this is my tone.
Robert Zirk: There's like the little slash S.
Dr. Jamie Cohen: Yeah, that's right! Yeah. It's a little slash bracket S, or sarcasm, or just sarc. Or even just tone warning, like, you could even say, like, tone warning, sarcastic, you know. You have to explain that so people read it and hear the tone in their ear. So it doesn't, it isn't like, "Oh, he's brilliant!" to "Oh, he's brilliant..." you know, so the tone itself, same words, just different toning.
But with algospeak, you can't change the tone, you have to change the word, and so it is both visual and audible, or oral, at the same time. And the work that gets done for that is far more impressive linguistically and pragmatically than would be done without this type of algospeak enforcing change in such a rapid manner.
Robert Zirk: And the audiovisual element of algospeak isn't just limited to the words themselves.
Dr. Jamie Cohen: Most video producers realize that accessibility is really important, adding text to the image. But I've also noticed that algospeak has taken the place of subtitle language. In other words, if you listen to the video, it might not say the same words as what's on the screen.
So the screen itself is displaying a set of language, or algospeak, that when you read it, says something different than the content that's being produced. That to me is very creative, dual layered production. So that requires them to realize that: A, they gotta work around whatever filter system is gonna detect what they're doing, their deception, and B, they're aware that they're telling two stories simultaneously, probably both of which contain different versions of algospeak.
Robert Zirk: And algospeak usage is growing: according to a 2022 TELUS International survey, 30% of respondents said that they've used algospeak to get around content moderation filters, and 42% said they're noticing an increase in algospeak online.
Siobhan Hanna: I'm going to guess that I speak for most consumers and users of the platform, which is, "Hey, when I use this platform, I am assuming and I feel entitled to a safe experience, and I expect that other users will be too."
Robert Zirk: 41% of Americans surveyed say that if they come across the kinds of algospeak that negatively impact user safety, it leaves them with a negative impression of the platform or brand that it's being associated with. And another 9% say that it's enough for them to stop buying a brand altogether.
Dr. Jamie Cohen: They have to make a value judgment. Is the content valuable enough to maintain its ability to communicate, share information and keep people on the site without alienating anybody at the same time? And is it dangerous?
Robert Zirk: Before we move on, a quick content warning ahead. This next example of algospeak briefly references suicide and self-harm. If you want to skip this example, fast forward to 15 minutes and 26 seconds.
Dr. Jamie Cohen: One thing I always remember with this is, like, the term "unalive," you know, so you can't talk about self-harm in any way. So the terms like, I can't say, "I'm gonna kill myself. I'm gonna suicide." Those words are banned no matter what, text or verbal. So they say, "I'm going to unalive myself," and they mean it as a joke. So now it requires context. Is this said in a joking manner? Is this, like, the eye roll or the little skull emoji? Like, is it supposed to be sarcastic?
Language is geographic as well. Language we'd use in the United States may not be usable in the Middle East. Language from Canada may not be the same as what might be acceptable in South America.
Robert Zirk: Considering the rapid pace at which language is changing, what can brands do to keep up?
Dr. Jamie Cohen: AI systems: keep updating them. Language is not static, ever. As long as we've told stories, we've made neologisms and avoided the content moderators, like I said, whether that's society, friends or others. It's always there, so if there was a staticness to it, we wouldn't need trust and safety as a team. We have to keep them on because tomorrow language can be different.
Algospeak doesn't happen overnight. It only becomes noticeable when enough people are participating in it. And I think that is when trust and safety should probably hire another person that's a little more savvy, maybe even a user or influencer, somebody who could come in and consult with them, or somebody who can work on the trust and safety team, sort of like the, what is it, the white hat hacker, you know, somebody who can work inside and make sure that the language is being noticed.
Robert Zirk: And working with an experienced partner like TELUS International can be beneficial if you don't know where to start.
Siobhan Hanna: It's important to note in content moderation and trust and safety, generally we do not define the guidelines. Our customers do that. We are typically entrusted with building a highly qualified, well cared for, well enabled, educated, well managed workforce of content moderators, so professional moderators that adhere to a very high quality standard.
We don't typically make those decisions around how it's evolved, but we do help to consult and help our customers to evolve them. It's not our decision ultimately. It is our job to help our workforce, our talent to evolve, maintain cultural awareness. There have been numerous instances where our team have said, "Hey, we've noticed that this has happened. Hey, our team in our team meetings, you know, we identified that this phenomenon just cropped up, let's talk about it." That's encouraged.
Robert Zirk: Given the important role humans play in understanding context, is there a possibility that generative AI can potentially help content moderators keep up with algospeak?
Siobhan Hanna: I wouldn't count it out just yet in terms of understanding tone and voice and style, because of course generative AI has made a number of breakthroughs lately, particularly in the area of LLMs and so on.
But I anticipate that it will have an impact, I think a positive one, on content moderation in future as well.
More and more, LLMs are informed by real-time data, so what that would mean is that, as algospeak proliferates, for example, they absolutely could play a role in supporting the understanding of algospeak in the content that is moderated.
Robert Zirk: However...
Siobhan Hanna: Generative AI is also prone to hallucinating. There's still a veracity, you know, a factuality component to the outputs of generative AI, and I'm not casting aspersions at any one generative AI product at all. It's a normal part of the product development and evolution cycle, and that's part of where we're focused: the training and understanding, and helping our customers, those builders of those foundational models, to evolve, tune and enhance that output so that it is trustworthy.
Robert Zirk: And be sure to subscribe to this podcast on your media player of choice as we'll be exploring more generative AI topics in future episodes.
In the meantime, Siobhan noted that language, and algospeak with it, is constantly evolving, and content moderators need to ensure their guidelines and practices are evolving with it.
Siobhan Hanna: When there is a major cultural event, or it could be a natural disaster, or it could be something more negative than that, it can have an impact on the way content moderation plays out, its impact, its risks. And then same with, you know, sort of more benign events as well. Really key is language skills and cultural skills, and that could be simply about having native-speaking, in-market content moderators that are thoroughly trained, thoroughly cared for, with thoroughly tailored and constantly evolved guidelines.
Robert Zirk: And Dr. Cohen stressed the importance of ensuring internet literacies are on par with media literacy.
Dr. Jamie Cohen: And I'm glad we're speaking about it 'cause I think it does need to be talked about in the terms of media literacy. It's media, it's produced, it's content, it's online. And like I was saying before, like memes are really important to read 'cause memes just add a level of reference.
It's content that refers to another piece of content. And the more we know meaning and etymology and leveling up, it allows us to feel like we don't have to ask questions. We already feel connected to each other and we feel empowered to make our own content. And that only happens through literacies, which is reading and writing of this material. I think more people should not see the internet as a distinct and separate space, but rather just a part of our life that we share with a digital tool.
Robert Zirk: Thanks so much to Dr. Jamie Cohen and Siobhan Hanna for joining me and sharing their insights on algospeak. And thank you for listening to Questions for now, a TELUS International podcast. That's all... for now.