A review by grisostomo_de_las_ovejas
Speech Police: The Global Struggle to Govern the Internet by David Kaye

3.0

David Kaye has changed my mind. Kind of.

Kind of, I say, because before reading this book, I held the sort of amorphous opinion on social media companies' role in policing content that I think many well-meaning people do. "They should do something, right? That sangfroid son-of-a-Zuckerberg is cashing in while the internet's overrun by fake news hucksters!" And now I'm not so sure of that opinion.

As David Kaye points out, putting the regulatory burden on social media companies just isn't so simple.

Should we choose to do so, overworked and underpaid content moderators suddenly become the gatekeepers of what's okay to say and what's not. And given that social media companies will generally want to avoid friction with governments, said content moderators will be incentivized to operate with a sledgehammer rather than a scalpel. Plus, Kaye notes, governments may not be the best judge of what belongs on the internet. In America, Donald Trump probably doesn't think his critics' tweets do. In Kenya, President Uhuru Kenyatta doesn't think the posts of opposition bloggers do. In Myanmar, one-time peacenik Aung San Suu Kyi doesn't think video recordings of government genocide do.

And even if all governments were well-intentioned, allowing a regulatory environment in which whispers from the feds could influence social media content moderation would amount to giving the government new powers sans legal process. If we agree governments shouldn't exercise prior restraint on certain topics, then it likely follows that they shouldn't be able to do the same while hiding behind a middleman.

So, if we're not okay with social media companies playing speech jury and executioner, how do we solve the problems of social media chicanery? Fake news and threatening content won't leave the platforms just because governments decide to get constitutionally circumspect. Unfortunately, it's here that Kaye begins to falter.

He suggests that social media companies tamp down the virality of bad content. That sidesteps the prior restraint issue (Facebook now says "You can shout down the well, but you can't use our super-megaphone"), but it still forces social media companies to play jury on what's bad.

Kaye also suggests that social media companies could take a decentralized approach to figuring out what's "bad," partnering with independent bodies in each country to form local speech guidelines. Recognizing the difficulty of that solution, Kaye adds that some of those bodies might need to reside outside their own countries for their own safety. But something just doesn't sound right about out-of-country bodies deciding what in-country people get to say. Plus, who gets to pick the composition of these bodies? "The people"? What if the people don't like a minority? What if a minority (pick a reactionary religious sect) is represented and it doesn't like the people? Answer unclear.

Wrap-up: this book does a nice job illuminating the concrete consequences of asking social media companies to regulate content. The solutions it proposes are awkward and loose (as is the book's prose), but I'd still recommend it to anyone interested in the topic.