There’s more to fixing Big Tech’s hate speech problem than meets the eye

Peter A. McKay
Mar 21, 2019

In the understandably horrified response to the recent livestreamed massacre of Muslims peacefully worshiping in New Zealand, a certain consensus has emerged that Big Tech’s platforms need to police hateful content more aggressively.

Since the Christchurch shooter used Facebook and YouTube directly in carrying out his horrific crime, those two platforms in particular are getting flogged in the mainstream press. Take this recent discussion on MSNBC, for example, which includes remarks about the platforms starting about two minutes in.

I don’t necessarily disagree with a word in this entire video. Far be it from me to absolve Big Tech of cluelessly looking the other way on hate speech, to the detriment of society. Yes, they’re guilty as charged.

That said, I think the solution to this problem should go beyond simply calling for the platforms to moderate and take down hate speech more aggressively. In particular, I’d say the two shared characteristics of YouTube and Facebook listed below also sorely need to be addressed in the fight against hate — one for obvious reasons, the other perhaps not so obvious:

1) Their workforces have a glaring, persistent lack of diversity. Perhaps if these companies employed more people of color, women, and members of other underrepresented groups, they’d be quicker to anticipate and act on white supremacist threats like the New Zealand shooter.

2) They’re virtual monopolies. YouTube essentially owns user-generated video, and Facebook owns social networking in general, spread across its flagship platform, plus its Messenger, Instagram, and WhatsApp properties. When Mark Zuckerberg testified to Congress last year that the average mobile user has eight messaging apps installed on their phone, he neglected to mention that his company owns four of them.

This is obviously an antitrust issue, as governments are increasingly realizing. But it’s also relevant to combating hate, since tech concentration makes it easier for the bad guys to propagate their poison quickly to the public in one convenient place.

To be sure, even if the audience’s attention were spread more widely online, supremacist miscreants would still find someplace else to congregate with one another. For instance, it’s notable that the New Zealand shooter also posted a manifesto on 8chan, essentially a hive of hateful trolls, prior to last week’s tragedy.

But a more competitive market for content distribution, with less of a role for the YouTubes and Facebooks, would make it harder for supremacists to terrorize the rest of us merely by distributing their propaganda in just one or two major company-controlled spaces.

Look, I’m all for getting the big platforms to use their considerable power more responsibly to counter hate on a day-to-day basis. But I also see that as a short-term fix. While we’re at it, let’s also remember to address the bigger question of whether they deserve so much power in the first place.

This post originally appeared on Indizr, my blog about Web 3.0. For regular updates on that topic, subscribe to Indizr’s free email newsletter.

