The best ways to protect VIPs and celebrities in the public eye from hateful comments, attacks and threats on social media
Why the networks and regulator can't fix online hate on their own
Whilst action from both is needed and welcome, neither is likely to be a silver bullet. Tools like Arwen play a valuable role.
It’s clear that online hate upsets everyone. Alas, it’s also true that few are willing to bear the responsibility of fixing it. The enduring response is either “we’re waiting for the network to fix it” or “it’s the government’s job to fix it”.
I can sympathise with both points of view. Most organisations don’t have a budget line for anti-hate software, and it’s easier to pray for a solution than make a case on this sensitive subject.
But it needs to be addressed, and urgently.
- Hate costs businesses $165Bn in lost social commerce each year
- 60% of contact centre staff have experienced abuse
- 2 in 5 people witness online abuse each year
- Victims abound - targeted for body shaming, ethnicity, sexual orientation, political views, race and gender. Online hate has become a major contributor to anxiety and depression
This is a huge problem with a big cost, both financially and mentally. Why, with stories like these regularly in the news, aren’t we urgently fixing it with the tools already available? Why are people still waiting?
Arwen is proud to be an authorised partner of Facebook, Instagram and Twitter. We’re one of a handful of AI-enabled anti-toxicity providers globally, offering a valuable third-party service in an area where the networks themselves, for reasons I’ll come onto, face limitations. We’re also collaborating with the UK regulators, who likewise recognise the value of third parties in solving this problem. There are lots of good reasons why this three-way relationship – the networks, the regulators and us – works.
The networks can’t solve it alone
Social network algorithms are trained to create engagement, without favouring positive or negative engagement. These algorithms are hugely sophisticated and powerful data models, but they were not designed to be moral or ethical. Sadly, negative engagement appears to be more powerful than positive engagement. It drives a lot of profit, whether that was the intention or not.
They aren’t and don’t want to be publishers
If newspapers and traditional publishers put out hateful content, they face sanctions. That’s why they have editors and editorial structures.
The networks created platforms for people to come together and exchange content. Each individual on the platform is responsible for their own content. They didn’t set out to be publishers, as the cost of moderating billions of content items pre-publication each day is more than they would generate in ad revenue. They are also US companies and their position is enshrined in law. Section 230 of the US Communications Decency Act of 1996 states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Leave Social Media
A common piece of advice to people unhappy about hate is to “get off the platform”. Although some do leave, many rely on social media for part of their profile and income.
Social media has become deeply entrenched in our habits. 40% of users would give up their pet or car before they’d give up their accounts, and more than 70% said they would not permanently scrap their social media for anything less than $10,000. It’s also why boycotts and protests, even by very large, high-profile sponsors, have been shown to have little effect.
The regulators can’t solve it alone
Given these are all US-based firms, when might US regulators take action? Although there has been a long-running debate around reforming Section 230, progress has been limited, with free-speech arguments taking the lead.
However, according to Ethan Zuckerman, founder of the Initiative for Digital Public Infrastructure at the University of Massachusetts Amherst, it won’t happen any time soon: “the US federal government seems so sclerotic and paralysed that it is extremely unlikely to pass significant legislation on a topic as sensitive as online speech.”
National and regional regulators face a number of challenges
Regulators around the world are therefore considering action. The UK, Canada and Germany all have anti-hate or online safety bills at some stage of legislation, most of which focus on forcing the networks to take more responsibility. Given the points above, this is leading to quite a bit of conflict and brinkmanship. Everyone is waiting to see who blinks first.
Any regulation will be very difficult to enforce
Hate is a slippery thing: hidden in nuance and symbols, and continually shape-shifting. Regulators, on the other hand, tend to rely on objective rules about what is and isn’t hate. But hate is deeply subjective, which makes it very hard to regulate.
The UK’s Institute of Public Affairs puts it this way: “They want to force social media companies to uphold ‘morals’, but whose morals? They want to remove content that ‘undermines, or risks undermining, public health’, which could see the censoring of contrarian opinions. And they want to define content that ‘risks the reputation of others’ as harmful, which could encourage the removal of any negative comments.”
The most probable outcome is a compromise between the networks and the regulators - one that moves us forward but probably satisfies no-one. My guess is it will preserve the networks’ position as non-publishers and require them to provide better post-publication anti-hate technologies, but still put the onus on the victim to find and report abuse.
Solving online hate isn’t necessarily someone else’s responsibility
Let’s come to the final point - the expectation that someone should solve online hate. First up, this is a lot to expect. After all, no-one has solved offline hate, so why would we expect anyone to be able to solve online hate? Hate is a human emotion that runs deep, and there’s very little evidence that we’ll remove it from our lives any time soon.
Instead, to avoid online hate, we need to look at how we tackle real world hate.
We’ve learnt to avoid certain areas at night, lock our front doors, employ bouncers, hire personal security. We do this because we recognise the limits of any one organisation, government or police force to eliminate hate. We understand that we need to take steps to look after ourselves. We need to do the same online.
We set up Arwen because we were tired of seeing hate prevail and grow. We don’t want a future where our children have to experience today’s unchecked levels of hate and toxicity. We were frustrated by hearing “it can’t be fixed” from people who wouldn’t fix it. We knew it could be fixed, and we had the ability to fix it.
No AI can be perfect, and we don’t claim perfection. But Arwen identifies and removes 23 types of hate in under a second, and we think that is 100% better than nothing. We know it’s a lot better than passively waiting for someone else to solve it.
Hate hurts more than people – it hurts everything – and it’s not going away. It’s time to take action. For more information on how we can help, get in touch.