The internet is not an egalitarian space: Ingrid Brodnig on hate speech and anger on the internet

The subject of hate speech and anger on the Internet now also concerns the Ministry of Justice in Germany. On Monday this week, Gerd Billen, State Secretary in the Ministry of Justice, welcomed the “initiative for moral courage online” presented by Facebook in Berlin. The growing pressure on platform operators to do something about hate speech is offset by concerns about increasingly privatized law enforcement. This post deals with the question of how aggression affects public debate and our democracy, and how it can be responded to.

The following guest contribution was written by Ingrid Brodnig, author (“The Invisible Man”) and editor of the Austrian news magazine profil, where she is responsible for reporting on digital topics. The contribution is the written version of her lecture at the 2nd Netzpolitischer Abend AT in Vienna (see also the video recording). We publish the article with her kind permission.

I would like to start with a sentence from a study. Scientific studies are usually not particularly crisp or easy to understand. In this case, it is different: it is a brand-new study in which Italian and American researchers looked at how information is used and shared online. They analyzed 32 Facebook pages dealing with conspiracy theories and looked at all of their posts over the past five years. They also evaluated 35 Facebook pages that deal with the natural sciences. I will read out a basic finding from this study:

“Users tend to aggregate in communities with the same interest, which leads to an intensification of the ‘confirmation bias’, to demarcation and polarization. This damages the quality of information and leads to a strong increase in biased points of view fueled by unsubstantiated rumors, suspicion and paranoia.”

So it is measurable: we have a problem on the internet. We can already see a polarization and fragmentation of the public. The researchers speak of “homogeneous and polarized clusters”. These clusters - that is, groups of people on different sides - have very different perceptions of the world. We are currently experiencing in the refugee debate just how far these perceptions can diverge.

The researchers describe a discussion culture in which users mainly exchange ideas among like-minded people - at least that is what the study suggests for Facebook. The findings confirm the thesis of the “echo chamber”, i.e. digital spaces in which we mainly share and consume information that matches our worldview.

Now some will surely say: Well, wasn't it always like that? Haven't people always read the news that confirms their opinion (formerly in the party newspaper, for example)? Is there something new to it? The answer is yes.

The structure of the Internet increases the opportunities to find like-minded people. Certainly, I could get information about conspiracy theories before - for example as a subscriber to the UFO-Verlag, which is now called Kopp-Verlag - and I could meet like-minded people at certain events. It was just much more difficult.

Often this newly possible fragmentation is convenient: I like it when Netflix suggests series I might enjoy. Often it is pleasant. But sometimes it becomes problematic. In general, I think it is worrying for a democracy if we have little contact with people who think differently and isolate ourselves. With conspiracy theories this is certainly a problem, when certain unprovable narratives solidify: for example, that vaccination promotes autism. Or the story that the earth is actually much bigger, that there are more continents than we know - and that the dinosaurs are still alive (one of my favorite theses). Such isolation means that these people can no longer be reached by us “others”, who try to understand the world with scientific methods. These users sit in their echo chamber, and from afar you can hear their cries: “Lying press, lying press!”

In my opinion, this isolation also explains phenomena like Pegida. I doubt that the internet is primarily responsible for the rifts in society or the general distrust of politics and the media. But it acts like a catalyst, accelerating social developments.

For too long, we may have overlooked some social developments - including on the Internet. In the early days of the web, there was hope that the mere existence of digital communication would lead to an enlightened debate. The social scientist Howard Rheingold wrote the following in 1992:

Since we cannot see each other, we cannot form prejudices about others before we have read what they want to communicate: race, gender, age, national ancestry and external appearance are only known if someone wants to state these characteristics.

This passage reflects what was long believed: that on the Internet everyone would be equal, because external characteristics are not visible - and that people could finally debate with each other on an equal footing. It was a lovely idea: that online, the better argument wins. But this idea has largely not become reality.

On the contrary: those who are aggressive are often rewarded online. Another study shows this. It was conducted by the scientists Daegon Cho and Alessandro Acquisti of Carnegie Mellon University.

The two researchers analyzed 75,000 comments from South Korean media sites. Among other things, they looked at which comments received the most likes. For context: in many newspaper forums you can rate comments - you click on plus or minus, on “red” or “green” or on “I like”, and thereby signal whether you liked a post. The researchers evaluated this and found that comments containing insults received more likes.

Let that sink in for a moment: if a comment is snide or filled with insults, it is likely to get more likes. This is a human factor that makes the loudmouths more visible. On top of that come technical factors: in a media reality sorted by algorithms, whoever has received many likes is shown to even more people. So a combination of human and technical factors helps those who phrase things particularly crudely. Perhaps this also explains why Heinz-Christian Strache is the most visible Austrian politician on Facebook.
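The rich-get-richer loop described here - likes drive visibility, and visibility drives more likes - can be illustrated with a small, purely hypothetical simulation. This is not any platform's actual ranking code; the visibility probabilities and the head-start scenario are invented for illustration:

```python
import random

def rank_by_likes(comments):
    """Sort comments so the most-liked appear first (a common engagement heuristic)."""
    return sorted(comments, key=lambda c: c["likes"], reverse=True)

def simulate_feedback(comments, rounds=50, seed=1):
    """Each round, higher-ranked comments are seen by more readers,
    so they collect likes faster - a rich-get-richer loop."""
    rng = random.Random(seed)
    for _ in range(rounds):
        ranked = rank_by_likes(comments)
        for position, comment in enumerate(ranked):
            # The higher a comment ranks, the more likely it gains another like.
            if rng.random() < 1.0 / (position + 1):
                comment["likes"] += 1
    return rank_by_likes(comments)

comments = [{"text": f"comment {i}", "likes": 0} for i in range(5)]
comments[2]["likes"] = 3  # a small early head start, e.g. from a crude insult
print([(c["text"], c["likes"]) for c in simulate_feedback(comments)])
```

In this toy model, the comment with a small early head start stays on top permanently, because the top position guarantees further likes every round - exactly the human-plus-technical amplification described above.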

I think the Internet very much is an invention that could make a fairer debate possible. It just doesn't happen automatically. For too long there has been this belief that everyone has the same opportunities online. But is that true? Do the cautious have the same opportunities as the intrusive? Do women have the same opportunities as men? Do unexciting politicians have the same opportunities as populists?

One of the biggest myths surrounding the internet is that it is automatically an egalitarian space. In many spaces, there are no safeguards - for example strict moderation - that would make an egalitarian debate possible.

I often give the following example: forums have not changed significantly since the 1990s. Everyone knows what an internet forum looks like: the latest comment is usually at the top. That made sense in the early days of the internet, when there often weren't that many contributions - you were happy to get seven entries in your “guest book”. It is different today: nowadays, 1,000 comments can appear under a single article in Der Standard. In such a situation, chronologically ordered comments become problematic, because they play into the hands of the loudest.

If someone visits a newspaper forum today and posts their opinion a hundred times, they are prominently visible at the top a hundred times. But if I am less intrusive, read the article and think to myself, “this one aspect should be added”, then the probability is high that I will post it only once and therefore be visible at the top only once. But is the opinion of someone who posts only once really a hundred times less relevant than the opinion of someone who simply drowns everyone else out?

I do not think so. But we can observe exactly this behavior on many websites - and often the aggressive tone of individual users silences others, who then withdraw from the discussion.

Wikipedia is now 15 years old, and I recently read an interesting number about it: an internal survey from 2011 found that only one in ten Wikipedia contributors - one in ten - is a woman. Just a few weeks ago, The Atlantic published an interesting article on how women are treated on Wikipedia. The author wrote about the behavior of some users:

To prevent harassment targeting them, some Wikipedians use gender-neutral pseudonyms and avoid adding any personal information to their username.

It is shocking that women hide the fact that they are women - just so that no one harasses them. That is the state of the Internet in 2016, more than a quarter of a century after the World Wide Web was invented. I was invited to speak about hate speech and anger online. In my opinion, two of the biggest problems are the ones already mentioned: first, the polarization, whereby people split off from the others - from the “lying press” or the “lying scientists” - and radicalize themselves with obscure sources. And second, that aggressive posts are extremely visible, while nuanced contributions often are not.

As I said: the internet is not an egalitarian space at the moment. The bizarre thing is that none of this was ever intended maliciously. The reasons behind it are even benevolent: in many digital debates, since the beginning of the Internet, there has been an effort not to sanction any opinion - that is, to allow speech that is as free of sanctions as possible. In itself, a noble endeavor!

It has just become obvious that this strategy doesn't always work. If I treat every opinion equally, those who feel great when they belittle others will prevail. If I treat every type of statement equally, I give a lie as much weight as a verifiable statement.

I want to give an example of the malice that has entered public debate. Some may already know it:

A photo of Eva Glawischnig, the head of the Austrian Greens, circulated on Facebook last autumn. Next to it was the alleged quote from her: “Those seeking protection must have the right to attack girls! Anything else would be racist towards refugees!” In other words, it was claimed that Eva Glawischnig considers the rape of Austrian girls acceptable. The image spread like wildfire on the Internet. It was shared thousands of times, and the Greens assume that such images were seen by more than a hundred thousand Austrians.

Of course, this quote is a fake. Nonetheless, many users believed the statement was real. In their rage, they posted very hurtful things. In some cases they even demanded that Eva Glawischnig be raped, writing comments like: “Then take this woman as a shining example.” Or: “Go stand there, then they can attack you!”

The Greens are now taking legal action against these images. I think it is good that awareness is growing among politicians and judges of the kinds of hate speech in circulation. It is also right to take action against such fakes, because otherwise users will believe the person concerned really said that.

It is simply a mistake to believe that every voice and every user on the Internet is equally worthy of protection:

  • A culture of discussion in which women are consistently labeled as “sluts” or “naive brats” is not worth protecting.
  • A culture of discussion in which conspiracy theories are treated on an equal footing with scientific facts is not worth protecting.
  • A culture of discussion in which - if you raise any of this - the aggressor tells the victim to “grow a thicker skin” is not worth protecting.

The debate about this has grown stronger. But I think we need to keep addressing these issues until we have really good answers. So far, we have at least a few ideas.

What can be done?

A definitive, universally effective solution does not yet exist, but there are a few approaches. I think we have to raise the level of discussion and draw a clear red line. That means website operators and online media have to take more responsibility for the tone - and create spaces that are as free of swear words as possible.

These swear-word-free spaces are not just a matter of taste. It is not about having thin skin. Swear words have an effect - that is what a study by the University of Wisconsin suggests.

The researchers wrote a blog post on nanotechnology that was genuinely balanced, describing both the positive and negative sides of nanotechnology - and there were comments below it. 1,100 Americans read this post along with the comments. Half read comments with a lively discussion but no swear words. The other half read the same comments, only with swear words woven in - along the lines of: “And anyone who doesn't understand that is an idiot.”

The researchers then compared how the two groups felt about nanotechnology - and they call their own results “disturbing”. The group that had read the swear words suddenly rejected nanotechnology much more strongly. They were far more negative than the group that hadn't read any swear words.

This suggests the following: I can destroy a debate with pure aggression, not with arguments. Of course, this has far-reaching consequences: if I can ruin a debate with swear words, then just look at what happens to feminism. There are downright torrents of abuse under newspaper articles that deal with feminism. If the effects are similar to those in the nanotechnology text, then we have a problem.

Considerations like these are why media outlets such as the blog network Gawker have already changed their comment systems significantly. The newest posts are no longer displayed first; instead, priority goes to comments the author of the article has interacted with in some way - perhaps recommended to others or replied to. This is an attempt to highlight the posts that are most interesting to other users - and not simply the loudest users.
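The article does not document Gawker's actual algorithm, but the underlying idea - ranking by author interaction instead of recency - can be sketched in a few lines. All class names, scoring weights, and example comments here are hypothetical illustrations:

```python
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    timestamp: int                    # seconds since the article went live
    author_replied: bool = False      # did the article's author reply?
    author_recommended: bool = False  # did the author recommend it?

def rank_chronologically(comments):
    """Classic 1990s forum ordering: newest first."""
    return sorted(comments, key=lambda c: c.timestamp, reverse=True)

def rank_by_author_interaction(comments):
    """Surface comments the author engaged with; recency only breaks ties."""
    def score(c):
        return (2 if c.author_replied else 0) + (1 if c.author_recommended else 0)
    return sorted(comments, key=lambda c: (score(c), c.timestamp), reverse=True)

comments = [
    Comment("thoughtful addition", 100, author_replied=True),
    Comment("FIRST!!!", 500),
    Comment("interesting counterpoint", 200, author_recommended=True),
    Comment("the same opinion, posted again", 600),
]
print([c.text for c in rank_by_author_interaction(comments)])
# The two author-engaged comments lead, despite being the oldest.
```

Under chronological ordering the repetitive latecomers would sit on top; under interaction-based ordering, the comments a human curator found worthwhile rise instead - which is precisely the shift in visibility described above.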

On the one hand, we should pay attention to a respectful tone - if only to make things harder for those who simply barge in with swear words. On the other, I believe we must communicate more clearly that not every contribution carries the same weight in terms of content.

Right-wing conspiracy blogs in particular are quite revealing, because they love to invoke the word “truth”. A bit more of a culture of truth-finding would actually do the internet good: researching what is true and what is not, how it can be proven, and what led me to the result of my research. Even traditional media could reveal much more transparently where they got their information - that often doesn't happen. But it is very good that blogs are increasingly fulfilling this function. For example, the blog Kobuk, which fact-checks the media. Or the association Mimikama, which runs the Facebook page “Think first - then click”: they frequently point out that posts are fake - that a 75-year-old Swedish woman was not, as claimed, brutally raped by refugees with a picture to prove it, but that the picture is older and comes from South Africa.

Very often these fakes can be debunked: we need a culture of discussion in which the principles of logical thinking and fair debate are respected again - one in which conspiracy theories are not put on the same level as demonstrable results.

Here is an example. When two debaters stand on a stage and one says, “There was never evolution; God created the earth,” and the other says, “Yes, there was evolution, because these scientific considerations and measurements have led us to this conclusion,” then the answer does not lie, as is sometimes claimed, “somewhere in between”. The correct answer is not automatically the midpoint between two opposing opinions. But you sometimes encounter exactly this attitude on social media - the comments sometimes say: “You read about these nasty stories so often. There must be something to it.”

A: “Earth = flat” B: “Earth = geoid” - false balance: the truth is not automatically at the center of two opinions. @brodnig #netzpat

- Sonja Fischbauer (@sonkiki), January 15, 2016

Incidentally, this is called “false balance”: the misconception that it is balanced - that the midpoint between two points of view automatically reflects the truth. It is not balanced to put a scientifically proven statement and a scientifically unprovable one on the same level.

It will be difficult to prevent some users from withdrawing into conspiracy-minded spaces; that probably cannot be stopped. But at the very least we can make sure that the more serious spaces - the spaces where not only conspiracy-minded users are active - are filled a little more with facts and a little less with emotion.

The public debate is currently distracted far too much by rhetorical smoke grenades - both offline and online. But especially on the internet we have a great opportunity: here we can link, here we can often check statements quite easily, or - if we proceed transparently - even show where we got our information from. If we do this more consistently in the future, we may no longer be so easily distracted. In any case, I believe that a different digital discussion culture must be possible.


Published 01/19/2016 at 11:26 am