As AI technology rapidly reshapes the digital world, marketing agencies face new challenges in protecting their clients from online hate speech and harmful content.
One challenger brand is leading the charge ahead of ambiguous legislation and puzzling social media platform policies: Bodyguard.ai.
I first heard about Bodyguard.ai through its partnership with the French Open tournament (I have no affiliation with them). This year, the organization offered its athletes a content moderation tool to help stop hateful comments from reaching their social media feeds. In an interview with TechCrunch, the company’s CEO boasted that its AI technology identifies and blocks toxic content on social platforms in real time with a 90% success rate.
Unlike competitors that simply filter out words or phrases that could be considered toxic, Bodyguard.ai’s technology contextualizes online content in real time, weighing who a comment targets and how harmful it is.
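To make that distinction concrete, here is a minimal, purely illustrative Python sketch. Bodyguard.ai’s actual system is proprietary, so the blocklist and the targets_person flag below are invented for this example (a real system would infer the target with a language model rather than take it as an input); the point is only to show why the same word can be acceptable in one context and toxic in another.

```python
# Purely illustrative sketch -- Bodyguard.ai's real system is proprietary.
# It contrasts naive keyword filtering with a toy context-aware check that
# weighs who a comment targets before deciding whether to block it.

BLOCKLIST = {"trash", "loser"}  # invented example words

def keyword_filter(comment: str) -> bool:
    """Naive approach: flag any comment containing a blocked word."""
    return bool(set(comment.lower().split()) & BLOCKLIST)

def contextual_filter(comment: str, targets_person: bool) -> bool:
    """Toy contextual approach: the same words are treated differently
    depending on whether they are aimed at a person."""
    if not keyword_filter(comment):
        return False
    # "That call was trash" criticizes an event; "You are trash" attacks
    # a person. Only the personal attack is blocked.
    return targets_person

print(keyword_filter("That call was trash"))            # True: over-blocks
print(contextual_filter("That call was trash", False))  # False: allowed
print(contextual_filter("You are trash", True))         # True: blocked
```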
The partnership between Bodyguard.ai and the French Tennis Federation underlines the urgent need for such interventions.
“The social media accounts of tennis players attract insults, death threats and hateful and sometimes racist and homophobic comments made by trolls,” says the FFT. This troubling reality echoes concerns raised by other organizations, such as FIFA, which has highlighted the increase in online hate directed toward its athletes during international tournaments.
Yet, despite the mounting evidence and societal concern, governments and corporations struggle to develop effective strategies against online hate speech.
Tech giants like Meta and Twitter continue to grapple with the persistent spread of toxic comments, despite automated tools and around-the-clock content moderators. At most, Meta, YouTube or Twitter might remove a post or suspend an account in response to slurs, death threats and targeted harassment.
And then there’s Google, which recently cut one-third of its ‘trust and safety’ teams, raising fears that the company will no longer prioritize its efforts to curb online abuse.
Meanwhile, because of free speech protections, the U.S. Supreme Court offers limited recourse unless speech incites criminal activity or constitutes “specific threats of violence.” Germany, by comparison, has begun to criminally prosecute people for online hate speech.
Simply turning off the comment section won’t cut it anymore.
As a founder and fractional CMO of a few brands, I’ve witnessed firsthand the spike in online hate speech and harmful content. While Bodyguard.ai’s solution is promising, it’s far from widely available, especially since the French company has its sights set primarily on the sports and gaming industries.
Until then, here are some recommendations and considerations on how brands can protect themselves against hate speech:
1. Define hate speech and harmful content with your team. Once defined, tangibly outline a course of action for when a piece of content is flagged so that everyone is on the same page (see the sketch after this list). At 5&Vine, we take a lot of inspiration from the Center for American Progress and its three-step approach.
2. Reimagine your metrics. In this age of bots and spam, engagement can no longer be measured by quantity alone. Gather your team and redefine what counts as a quality lead or genuine engagement.
3. Familiarize yourself with social media hate speech policies. Although imperfect and mostly vague, it’s important to understand where a user’s rights begin and where they end.
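To illustrate the first recommendation, here is a minimal sketch of what a shared moderation policy might look like once a team has defined its categories and its recourse for each. The categories, severities and actions below are invented examples, not a standard; every team should agree on its own.

```python
# A sketch of the shared policy that recommendation No. 1 calls for.
# The categories, severities and actions are invented examples; every
# team should define its own and keep them in one agreed-upon place.

MODERATION_POLICY = {
    "spam":           {"severity": 1, "action": "hide the comment"},
    "insult":         {"severity": 2, "action": "hide the comment and log it"},
    "slur":           {"severity": 3, "action": "hide, then report to the platform"},
    "violent threat": {"severity": 4, "action": "report to the platform and authorities"},
}

def recourse(category: str) -> str:
    """Look up the agreed-upon course of action for a flagged comment."""
    policy = MODERATION_POLICY.get(category)
    if policy is None:
        # Anything the team has not defined goes to a human for review.
        return "escalate to a team lead for review"
    return policy["action"]

print(recourse("slur"))     # hide, then report to the platform
print(recourse("sarcasm"))  # escalate to a team lead for review
```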
Does this mean that online safety is no longer a government issue?
The reason Bodyguard.ai is a powerful brand is trust. Or rather, consumers’ growing distrust of traditional avenues such as government, legal systems and corporate bureaucracy. At the end of the day, people trust those with solutions: timely, effective, actionable solutions.
AI-powered content moderation tools can certainly be an effective way to curb online hate speech, but I can’t help but wonder what it will mean if we turn to corporations instead of governments for issues such as safety and security. What’s next? Education?
Regardless of which side of the debate you are on, it is important that we all engage, question and clearly communicate our expectations of our government and legal system in this new age. Additionally, we must consider how to hold corporations and big tech accountable for ensuring our online safety.
We’ve already seen social media’s impact on youth mental health and the role of deepfake photographs and videos in influencing public opinion. We simply can’t wait for more harm to be done before developing regulation to counteract it. We must actively engage.
Only then can we chart a course toward digital coexistence.