The main social networks use artificial intelligence algorithms to detect, in users' posts and comments, sensitive content or content that may violate their terms of use.
There are controversial topics that are forbidden on certain social networks. For example, Facebook does not allow advertising weapons, and sexual content involving minors is banned on virtually every platform.
Given the enormous amount of content published on social networks every day, it would be impossible for human staff to moderate all of it. It would also mean giving those employees access to everything users publish.
That is why algorithms are put in charge of moderating content: they carry out a first screening and raise an alert when illegal or prohibited content is found. Some social networks then turn to human moderators to confirm what their algorithms have removed, while others leave all the weight on those artificial intelligences and only question their work when a user, for example, files a complaint about content they believe has been unfairly taken down.
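As a rough illustration of that first screening, here is a minimal sketch in Python. The blocklist and the returned actions are illustrative assumptions; real platforms rely on machine-learning classifiers trained on far more than literal keywords.

```python
# Minimal sketch of an automatic first screening, assuming a simple
# hand-written keyword blocklist. Real platforms use machine-learning
# classifiers, not a literal term list like this one.

BLOCKED_TERMS = {"weapon sale", "cheese pizza"}  # hypothetical blocklist

def first_screening(post: str) -> str:
    """Return the action the automatic filter would take on a post."""
    text = post.lower()
    if any(term in text for term in BLOCKED_TERMS):
        # Depending on the platform, this removal may later be
        # confirmed or reversed by a human moderator.
        return "remove_pending_review"
    return "publish"

print(first_screening("Big weapon sale this weekend"))  # remove_pending_review
print(first_screening("Nice camping photos!"))          # publish
```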
Be that as it may, that first automatic filter determines what can and cannot be talked about on social networks. However, users try to circumvent that first screen and “dodge the algorithms” with something that has come to be called “AlgoSpeak” (from “algorithm” and “speak”), a term that is trending on social networks and being used more and more. We tell you what “AlgoSpeak” is in this video.
In short, “AlgoSpeak” is a secret code, created by social network users, to talk about a prohibited or inappropriate topic without the algorithms noticing. It is usually made up of code words, emojis with a hidden meaning, or deliberate typos, all meant to keep the platforms' moderation AI from detecting content that would not be allowed.
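A toy example helps show why this works: a filter that matches literal keywords, like the hypothetical one below, is blind to code words, emojis, and deliberate typos.

```python
# Toy illustration of why AlgoSpeak evades a literal keyword filter.
# The blocked terms and example posts are illustrative assumptions.

BLOCKED = {"sex", "cannabis"}

def is_flagged(post: str) -> bool:
    """Naive first screen: flag a post only on exact word matches."""
    return any(word in BLOCKED for word in post.lower().split())

print(is_flagged("let's talk about sex"))    # True:  literal term caught
print(is_flagged("let's talk about seggs"))  # False: deliberate typo slips through
print(is_flagged("meet me at 420 🍕"))       # False: code words slip through
```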
Only those in on the conversation know the true meaning. For example, to discuss abortion – the United States Supreme Court recently overturned the constitutional right to it – users talk about “camping”. Content about how to set up a tent can hide, if you read between the lines, a very different message. The algorithms are unable to identify it.
Many emojis also carry connotations beyond the icon they represent. This happens, for example, on dating apps with the emoticons of certain fruits and vegetables. Users may also write “420” to refer to cannabis use, and pizza emojis – especially “cheese pizza” – are used to refer to sexual content involving minors.
Strategies like these are about outwitting the algorithms, which are incapable of detecting such mentions. This algorithmic language is increasingly common among social network users, who rely on it not only to discuss sensitive issues but also to slip past the systems designed to prevent harassment and bullying.
Using emojis and alternative phrases is becoming more common on social networks, and so is respelling written words, such as “seggs” instead of “sex”. The artificial intelligences try to keep up: “pron”, a deliberate misspelling of “porn”, is already a term that many of these algorithms recognize.
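One plausible way an algorithm “learns” a respelling is simply to normalize known variants back to their canonical terms before filtering, as in this hypothetical sketch; production systems discover such variants from data at a much larger scale.

```python
# Hypothetical sketch: map known AlgoSpeak respellings back to their
# canonical terms so an ordinary blocklist can match them. The variant
# map is an illustrative assumption, not any real platform's list.

VARIANTS = {
    "seggs": "sex",
    "pron": "porn",
}

def normalize(text: str) -> str:
    """Rewrite known respellings to canonical words before filtering."""
    return " ".join(VARIANTS.get(word, word) for word in text.lower().split())

print(normalize("looking for pron links"))  # "looking for porn links"
```

Of course, every variant added to such a map is a variant users will eventually abandon, which is exactly the short lifespan described next.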
However, users take advantage of the time it takes these AIs to learn new trends, and talk about those topics in the meantime. Once the algorithms recognize what is meant by “cheese pizza”, users will stop using it. “AlgoSpeak” terms have a short lifespan: once enough people recognize them, they stop being a code, and they stop being safe.