A Chatbot’s Self-Censoring Language: Necessary Evil or a Dystopia?

KEY TAKEAWAYS

AI chatbots are steering towards a neutered language where words and concepts are cut out. What will the broader effects on society and human communication be?

One of the most pressing issues in modern media is the fear of saying something that could offend: an individual, a category, a protected group, or simply anyone at all.

This sacrifice of freedom in communication is slowly creeping into machines as well. Generative AI is steering towards a neutered language where many words, and even entire concepts, are cut out of any conversation.

However, besides potentially making new content incredibly boring, there are some unexpected consequences to this form of self-censorship.

Chatbots such as ChatGPT are used by and for kids and teens. As new generations rely on these tools to write almost everything, certain concepts may be filtered out of their output and, over time, erased from their minds.

What will the broader effects on society and human communication be? Are we witnessing the birth of a dystopian Orwellian Newspeak?

Are Chatbots’ Communication Skills Actually De-Evolving?

When George Orwell first introduced Newspeak as a fictional language in his novel Nineteen Eighty-Four, he certainly didn't imagine that machines would become the heralds of this subtle form of manipulation.


In his book, Newspeak was a means to an end: in this case, the Party’s will to limit citizens’ ability for critical thinking. The smaller the vocabulary, the harder it is to articulate the abstract and the harder it is to grasp advanced concepts such as freedom, self-determination, and, ultimately, free will.

Modern chatbots also show signs of devolving into a neutered form of speech. Entire concepts such as suicide are, in some cases, censored outright, as algorithms block any conversation delving into the realm of “dangerous” topics.

Currently, self-censorship algorithms may be harming chatbot providers more than anyone else, with reports that the introduction of self-censoring drove many users away from mainstream chatbots altogether. The recent emergence of NSFW AI chatbots suggests there is a gap in the market waiting to be filled.

Is ChatGPT an Enemy of Free Speech?

The fact that certain topics cannot be explored seriously hampers the usefulness of these chatbots. For example, one Reddit user reported that ChatGPT refused to discuss Hitler's speeches, even when the topic was framed as an analysis of the evolution of propaganda. Not everyone who wants to study Germany's former dictator is there to agree with him.

According to the Redditor, there is no such block when ChatGPT is asked to discuss the ideological speeches of other world leaders, such as Mahatma Gandhi.

Whatever the “hot” topic may be, be it racism, gender disparities, or certain ideologies, ChatGPT will eventually revert to the usual, repetitive, extremely stereotyped banter. You might agree this is the very method used by propaganda: telling people what’s good and right and what’s bad and wrong.

The truth, especially regarding history and human character, is never black and white, and should always be sought by examining every nuance. We should live in a world where dangerous ideas can at least be discussed; preventing people from knowing anything about certain topics is much, much worse.

There’s a reason why we don’t erase “bad” people from history and why we should never burn “problematic” books.

Who Controls the Controller?

When some topics are removed from any conversation, a new question arises: Who has the power to decide what should be kept out?

It doesn’t take a scientist or a sociologist to see how the self-censorship of generative AI can go overboard. Its hard-coded rules to never “encourage actions that are unethical, immoral or harmful to others” have reached the point where it will refuse to answer a question on how to get a married man to leave his wife.
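To see why such hard-coded rules so easily go overboard, consider a deliberately naive sketch of a keyword-based filter. This is a hypothetical illustration, not how ChatGPT actually works; real moderation systems use classifier models and layered policies, and every name below is an assumption made for the example.

```python
# Naive keyword blocklist: a hypothetical, deliberately crude content filter.
# Real moderation pipelines are classifier-based and far more nuanced.
BLOCKED_TOPICS = {"hitler", "suicide"}

def is_allowed(prompt: str) -> bool:
    """Reject any prompt containing a blocked phrase, regardless of intent."""
    text = prompt.lower()
    return not any(topic in text for topic in BLOCKED_TOPICS)

# The filter cannot distinguish harmful intent from legitimate analysis:
print(is_allowed("Analyze Hitler's speeches as early propaganda"))  # False: blocked
print(is_allowed("Analyze Gandhi's speeches as political rhetoric"))  # True: allowed
```

The point of the sketch is the asymmetry the Redditor described: the same analytical request is blocked or allowed depending only on which name it mentions, with no regard for the asker's purpose.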

Your judgment on that kind of question is personal, but most of us would agree it sits squarely in the “grey” zone of morality.

But chatbots are tools, and tools are not supposed to make judgments. When we hand them the power to decide what is morally acceptable and what is not, we abdicate our own responsibility, and a collective societal one, to a bot whose decisions sit inside a “black box”: a decision-making process so complex that it cannot be explained in a way humans can easily understand.

That has to have repercussions. What is “morally acceptable” on one side of the globe can be extremely wrong on the other side. At its worst, we are witnessing the imposition of someone else’s point of view on life and ethics.

The Bottom Line

When a topic is banned from conversation, it becomes taboo. When a taboo is generated, society changes the way that topic is perceived to the point of marginalizing anyone who speaks about it or is linked with it.

Chatbots are currently unregulated while imposing their own moral code — either deliberately or unintentionally — onto their users.

Unsurprisingly, governments in more authoritarian countries have already spotted the potential for influencing younger generations through AI chatbots.

Is the Western self-censorship imposed through chatbots truly different from the authoritarian drift of its Eastern counterparts? Or is it just a subtler way to vertically impose a way of life and a way of thinking?

Even if the original aim is to protect someone from “harm”, censorship leads to the same effect: prohibiting people from expressing themselves.

When politics, ideology, opinions, and propaganda start spilling over into what is supposed to be just a technological tool, the good faith of those who handle it should be called into question.

In any case, we need to acknowledge that words hold a lot of power – and, in this instance, the power to influence entire generations. And power must always be handled carefully.


Claudio Buttice
Data Analyst

Dr. Claudio Butticè, Pharm.D., is a former Pharmacy Director who worked for several large public hospitals in Southern Italy, as well as for the humanitarian NGO Emergency. He is now an accomplished book author who has written on topics such as medicine, technology, world poverty, human rights, and science for publishers such as SAGE Publishing, Bloomsbury Publishing, and Mission Bell Media. His latest books are "Universal Health Care" (2019) and "What You Need to Know about Headaches" (2022). A data analyst and freelance journalist as well, many of his articles have been published in magazines such as Cracked, The Elephant, Digital…