Shielding TV personalities from online abuse
The best ways to protect VIPs and celebrities in the public eye from hateful comments, attacks and threats on social media
Television broadcasters and presenters gain popularity by being personable and entertaining, with a talent for forming close bonds with their viewers.
Unfortunately, people love to hate public figures. And with so many polarising opinions in the news and on TV, presenters and personalities often take the brunt of the criticism and abuse generated by the programmes they work on.
From reality shows to football games, trolls will jump at any opportunity to belittle and bully the faces they regularly see on screen — often without provocation. As a result, countless reality stars, pundits and reporters have come forward with stories of horrifying abuse as trolls fill their comment sections and inboxes with hateful messages.
Public figures are subjected to online abuse so often that it’s easy to write it off as something that comes with the job. But no one is immune to this sort of content, and the consequences of cyberbullying can be severe — even fatal.
Not only is online harassment detrimental to the mental health and careers of victims, but it’s also damaging to the networks and companies they work for. So, what can be done to stop it?
The dark world of cyberbullying
A quick Google search of ‘TV presenters online abuse’ garners millions of results, with page after page of different victims — usually women — telling their stories.
In an interview with The Times, presenter and former England international footballer Alex Scott shared the bullying she’s experienced on social media. After making history as the first female pundit on a Sky Sports Super Sunday and commentating on an array of sporting events, including the Tokyo 2020 Olympics, she became a new target for trolls. The presenter admitted she was left fearing for her life as she faced an onslaught of racist messages, even turning to alcohol to cope with the pressure.
Another football presenter, Karen Carney, deleted her Twitter account following the sexist abuse she suffered after comments she made during a Leeds United game were mocked by the club’s official Twitter account.
And we can’t forget the passing of Caroline Flack, a successful TV presenter who hosted Love Island before being harassed to such an extent that she took her own life in February 2020. Despite the #BeKind movement that swept social media in the wake of this tragedy, Laura Whitmore — who hosted the popular show from 2020 until she resigned earlier this year — still faced vile abuse targeting her age, appearance, accent and position.
Being a public figure shouldn’t mean accepting abuse in your line of work. Yet, many media companies still have a long way to go in managing this problem.
With the volume of online hate growing by the day, employers in the TV industry need to understand who’s at risk of being targeted by trolls — and have a comment moderation system in place to filter out toxic content before it can do damage.
Protecting employees with content moderation
Following the deaths of several reality show contestants and presenters, and growing public backlash over failures to protect the mental health of TV personalities, organisations and networks have been forced to reconsider their duty of care guidelines.
In 2020, the UK’s communications regulator, Ofcom, released a statement requiring broadcasters to ‘take due care over the welfare of a contributor who might be at risk of significant harm as a result of taking part in a programme’. However, many news outlets have lobbied for exemption from these rules due to concerns that these standards could deter programmes from tackling sensitive subjects and investigations.
The purpose of many programmes and broadcasts is to invite discussion and provoke emotional reactions from viewers. Still, like in any workplace, TV producers have a duty of care to their employees. So, how can networks protect individuals without preventing audiences from engaging positively and constructively on social media?
Arwen believes automated content filtering is the answer. Our bespoke comment moderation tool is powered by artificial intelligence (AI) and allows users to preserve freedom of speech whilst removing unwanted messages from social channels in under a second.
We’ve worked with celebrities such as comedian Rosie Jones and former footballer Michael Owen to reduce the impact of trolls on their accounts, creating healthier online spaces for them and their communities.
By minimising the volume of toxic content that individuals and their audiences see, this hands-free solution provides a simple and effective way for television producers and operators to fulfil their duty of care to employees and take affirmative action against hate on social media.
No one deserves to suffer at the hands of cruel trolls. Book a demo of Arwen’s AI-powered comment moderation tool to take a firm stance against harmful content today.