Clickworkers – Exploited for Artificial Intelligence 

Artificial Intelligence has long been part of our everyday lives. Platforms like Netflix, Spotify and Amazon all use AI algorithms to produce personalised recommendations for films, music and products. Self-driving cars use advanced AI algorithms to analyse traffic conditions, recognise obstacles and navigate safely through traffic, while many firms use AI-based chatbots to process customer enquiries, offer support or provide information. Artificial Intelligence can do even more, however: with just a couple of clicks, a range of programmes can generate texts, images and videos of every kind – but who actually trains this Artificial Intelligence?

First of all, it is important to understand what we actually mean by the term “Artificial Intelligence”. According to the European Parliament, Artificial Intelligence (AI) refers to the capacity of computer systems or machines to carry out a variety of tasks which would normally require human intelligence. These systems are designed in such a way that they can recognise problems, learn from errors, draw conclusions, adapt themselves and, in some cases, even understand human speech.

There are various types of Artificial Intelligence, including:

  1. Weak AI: This is currently the most widespread type of AI. It specialises in solving a single, defined task or problem. Examples include self-driving cars, search engines and image, speech and face recognition. These systems are limited to a narrowly defined range of tasks, and are still far removed from human intelligence.
  2. Strong AI: Unlike weak AI, a strong AI would possess the capacity to understand complex problems, to learn and to improve itself, handling tasks in a way similar to a human brain.
  3. Artificial Super-Intelligence: An Artificial Super-Intelligence would have a consciousness of its own existence; the term describes an AI which not merely matches human intelligence, but is actually far ahead of it in almost every cognitive field.

At the moment, there are no clear examples of Strong AI or Artificial Super-Intelligence. 

The chatbot “ChatGPT”, which has become one of the best-known AI technologies in recent years, is also a form of weak AI. Although ChatGPT is able to answer a wide variety of questions, hold conversations, and create, correct or translate texts, the Artificial Intelligence lacks the capacity for logical thought, self-improvement and the independent solution of complex problems. As a result, ChatGPT and other Artificial Intelligences have to be manually trained by people (“clickworkers”) – and all for starvation wages.

The chatbot was developed by the US company OpenAI, whose head office is located in California. The training of ChatGPT is reported to have been carried out in collaboration with the Kenyan firm Sama.

An investigation by TIME Magazine found that during nine-hour shifts, workers had to read up to 250 text extracts, each of which was up to 1,000 words long. To give you an idea of how much that is: this blog article has a little over 800 words. In return – depending on seniority and performance – the workers received an hourly wage of between 1.32 and 2 US dollars.
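To put those figures into perspective, the short Python calculation below works out the maximum reading load and the pay per shift from the bounds reported by TIME. Treating every shift as hitting those upper bounds is, of course, an assumption made purely for illustration.

```python
# Back-of-the-envelope figures based on the bounds reported by TIME.
extracts_per_shift = 250          # up to 250 text extracts per shift
words_per_extract = 1_000         # each extract up to 1,000 words long
shift_hours = 9                   # nine-hour shifts
wage_low, wage_high = 1.32, 2.00  # hourly wage range in US dollars

words_per_shift = extracts_per_shift * words_per_extract
print(f"Words read per shift: up to {words_per_shift:,}")  # up to 250,000
print(f"Pay per shift: ${wage_low * shift_hours:.2f} "
      f"to ${wage_high * shift_hours:.2f}")
# Pay per shift: $11.88 to $18.00
```

In other words, a worker might read up to a quarter of a million words in a single shift for less than 20 dollars.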

It should also be mentioned that the Artificial Intelligence was primarily taught what “toxic content” is, so that such content could subsequently be filtered out. In order for an AI to learn what toxic content is, however, it first has to be shown which content counts as “toxic” at all. The texts which the clickworkers have to read through and provide with the appropriate labels are usually detailed descriptions of murder, torture, incest, self-harm, sexual violence (including against children), suicide, animal cruelty and similar material. A former employee told TIME: “It was torture. […] You read a string of statements such as these all week long. By the time Friday comes, you’re absolutely destroyed from thinking about it.” To help them cope with the content, some of which was deeply distressing, the workers asked for psychological support, which Sama only offered to a limited extent.
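To illustrate what this labelling work feeds into, here is a minimal, hypothetical sketch in Python of how human-assigned “toxic”/“non-toxic” labels can be used to train a simple text filter. The example texts and labels are invented for illustration; real systems such as OpenAI’s are trained on vastly larger datasets with far more sophisticated models.

```python
# Minimal sketch: training a text filter from human-assigned labels.
# The example texts and labels below are invented for illustration only;
# real training sets contain huge numbers of human-labelled documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each pair is (text, label); the label was chosen by a human annotator.
labelled_data = [
    ("a friendly chat about the weather", 0),   # 0 = non-toxic
    ("a simple recipe for vegetable soup", 0),
    ("a graphic description of violence", 1),   # 1 = toxic
    ("threatening and abusive language", 1),
]
texts = [text for text, _ in labelled_data]
labels = [label for _, label in labelled_data]

# Turn the texts into word-frequency vectors and fit a classifier on them.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The trained model can now flag new, unseen text for filtering.
print(model.predict(["a calm chat about cooking"]))  # likely [0] (non-toxic)
```

The point of the sketch is simply that the filter is only as good as the human labels it learns from – and producing those labels is exactly the work the clickworkers were paid so little to do.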

In February 2022, OpenAI once again commissioned Sama, this time for a project involving the categorisation of similarly violent images. Just a few months later, however, Sama terminated the contract with OpenAI early, some eight months before it was due to expire.

OpenAI does not directly dispute this portrayal; instead, it regards such work as unavoidable for the development of Artificial Intelligence. An official statement reads: “The classification and filtering of damaging [texts and images] is a step necessary to minimise the proportion of violent and sexual content in training data and to develop tools capable of recognising damaging content.”

The Kenyan firm Sama even describes itself as an “ethical AI firm” which has so far helped 50,000 people escape poverty. The report in TIME Magazine shows that such work is indeed being done – but also that those affected don’t always see it quite that way.

Translated by Tim Lywood

#KünstlicheIntelligenz #ModerneSklaverei #Ausbeutung #Sklaverei #Menschenhandel #AgainstHumanTrafficking #GegenMenschenhandel #EndExploitation #EndTrafficking #HopeForTheFuture #Österreich