Are algorithms really sexist?

Deep learning makes AI systems particularly sexist

Researchers in Linz show that the results of search engines that use deep learning are particularly prone to sexist distortion.

Discrimination by algorithms has been documented in numerous studies. People with a migrant background are rated as less creditworthy, come under suspicion more often in police software, or are shown offers for worse jobs and apartments on social platforms. Microsoft had to take a chatbot offline after a short time because users had taught it to deny the Holocaust and insult Black people.

In their study, Navid Rekab-Saz and Markus Schedl from the Institute for Computational Perception at the University of Linz and the Human-centered AI Group at the AI Lab of the Linz Institute of Technology (LIT) analyzed models and algorithms used in search engines, basing their analysis on real search queries.

CEO = male, caregiver = female

In their work, the researchers divided search queries into two groups. The first consists of queries that are gender-neutral and yield answers without gender bias; as an example, the researchers cite a query about Roman emperors. The answer lists only men, but because only men actually made it to the throne, there is no bias in the answer either.

The other group consisted of queries that, at least in English, are not explicitly gender-specific, such as a query about the income of a nurse (the English word «nurse» refers to both men and women in the profession) or a query for a synonym for «beautiful».

Although these queries are gender-neutral, the search engines returned mostly female-associated answers, with male-associated results far behind. Conversely, searches for «CEO», the chief executive of a company, or «programmer» returned mostly male-coded answers.

Deep learning increases discrimination

The study showed that «the latest deep learning algorithms in particular produce a particularly pronounced gender-specific bias», Rekab-Saz told the APA news agency. This finding carries weight, since such algorithms have recently been deployed by two of the largest search engines, Google and Bing.

The reason for the bias is that systems using deep learning do not search for the query term alone, such as «nurse» or «CEO», but also for similar terms and related topics. For «nurse», for example, they would also include «matron» (a head nurse). As a result, they tend increasingly toward the female interpretation of the query.
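To make the mechanism concrete, here is a minimal, self-contained Python sketch (not the researchers' actual code; the vectors and the gender score below are invented for illustration only) of how embedding-based query expansion can pull a query like «nurse» further toward female-associated results:

```python
import numpy as np

# Invented toy 3-dimensional "word embeddings"; real systems learn hundreds
# of dimensions from large text corpora, which is where the bias creeps in.
vectors = {
    "nurse":   np.array([0.90, 0.10, 0.70]),
    "matron":  np.array([0.80, 0.00, 0.90]),
    "midwife": np.array([0.85, 0.05, 0.80]),
    "doctor":  np.array([0.70, 0.60, 0.20]),
    "she":     np.array([0.00, 0.00, 1.00]),
    "he":      np.array([0.00, 1.00, 0.00]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A purely term-matching engine looks only at the literal word "nurse";
# a semantic (deep learning) engine also pulls in its nearest neighbours.
query = "nurse"
candidates = ["matron", "midwife", "doctor"]
neighbours = sorted(candidates,
                    key=lambda w: cosine(vectors[query], vectors[w]),
                    reverse=True)[:2]
expanded = [query] + neighbours
print("expanded query:", expanded)  # -> ['nurse', 'midwife', 'matron']

# Crude gender score: similarity to "she" minus similarity to "he".
def gender_score(word):
    return cosine(vectors[word], vectors["she"]) - cosine(vectors[word], vectors["he"])

for w in expanded:
    print(f"{w}: {gender_score(w):+.2f}")
# The added neighbours lean even more female than "nurse" itself, so the
# expanded query drifts further toward female-associated documents.
```

In this toy space, «midwife» and «matron» are the nearest neighbours of «nurse» and both sit closer to «she» than «nurse» does, so the expansion amplifies the female association rather than merely preserving it.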

The underlying issue is that the data collected and produced by people, on which the AI is trained, already contains these tendencies; «the AI's search only reinforces the effect», says Rekab-Saz. In their study the researchers examined gender bias, but they are confident that such effects also occur in other areas such as age, ethnicity, or religion.

AI not bad, just badly programmed

For the scientists, the result is no reason to reject AI, which remains an «enormously valuable tool». Rather, their group's goal is to raise awareness of how human prejudice distorts AI results and to take this into account when designing the algorithms.

«Deep learning is a tool with two sides: on the one hand, it can intensify a certain bias, as the current study results show. On the other hand, it is flexible enough that we can design better models that explicitly avoid such distortions», said Rekab-Saz.
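The article does not describe which countermeasures the Linz group uses, but one well-known illustrative technique is to project a learned «gender direction» out of the word vectors (in the spirit of Bolukbasi et al., 2016). A minimal sketch, again with invented toy vectors:

```python
import numpy as np

def debias(vectors, pairs):
    """Remove each vector's component along a gender direction estimated
    from (female, male) word pairs such as ("she", "he")."""
    direction = np.mean([vectors[f] - vectors[m] for f, m in pairs], axis=0)
    direction /= np.linalg.norm(direction)
    return {w: v - (v @ direction) * direction for w, v in vectors.items()}

# Invented toy vectors, as in the earlier sketch.
vectors = {
    "nurse":  np.array([0.90, 0.10, 0.70]),
    "doctor": np.array([0.70, 0.60, 0.20]),
    "she":    np.array([0.00, 0.00, 1.00]),
    "he":     np.array([0.00, 1.00, 0.00]),
}
debiased = debias(vectors, pairs=[("she", "he")])
# After the projection, "nurse" and "doctor" are equally similar to "she"
# and "he", while the information in the other dimensions is preserved.
# (Real implementations exclude gender-definitional words like "she"/"he"
# themselves from the projection.)
print(debiased["nurse"])   # -> [0.9 0.4 0.4]
print(debiased["doctor"])  # -> [0.7 0.4 0.4]
```

The design choice here is that occupation words keep their meaning but lose their position along the she-he axis, which is one simple way a model can be made to «explicitly avoid such distortions».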