Twitter’s abuse problem

I’ve been an avid Twitter user for years. I’ve developed great friendships, made professional connections, learned, laughed, and generally had a good time. Of course, I also happen to be a relatively anonymous white male, which means my direct exposure to abuse is fairly limited. I can’t say the same for some of my friends. Last week’s BuzzFeed article calling Twitter “a honeypot for assholes” didn’t seem all that shocking to me.

Twitter, of course, denied it in the most “that article is totally wrong, but we won’t tell you why because it’s actually spot on” way possible:

In response to today’s BuzzFeed story on safety, we were contacted just last night for comment and obviously had not seen any part of the story until we read it today. We feel there are inaccuracies in the details and unfair portrayals but rather than go back and forth with BuzzFeed, we are going to continue our work on making Twitter a safer place. There is a lot of work to do but please know we are committed, focused, and will have updates to share soon.

To its credit, Twitter has publicly admitted that its solution to harassment is woefully inadequate. It’s in a tough spot: balancing free expression and harassment prevention is not an easy task. Some have suggested that a wider rollout of Verified status would help, but that’s harmful to some of the people best served by anonymous free expression. I get that Twitter does not want to be in the business of moderating speech.

It’s important to distinguish between kinds of speech, though, so I’m going to invent a word. There’s offensive speech and then there’s assaultive speech. Offensive speech might offend people or it might offend governments. Great social reform and obnoxious threadshitting both fall into this category. This is the free speech that we all argue for. Assaultive speech is less justifiable. It’s not merely being insulting; it’s an aggressive attempt to squash someone’s participation.

I like to think of it as the difference between letting a person speak and forcing the audience to listen. I could write “Jack Dorsey sucks” on this blog every day and while it would be offensive, it is (and should be) protected. Even posting that on Twitter would fall into this category. If instead I tweeted “@jack you suck” every day, that’s still offensive but now it’s assaultive, too.

This, of course, is in the context of a company deciding what it will and won’t allow on its platform, not in the context of what should be legally permissible. And don’t mistake my position for “you can never say something mean to someone.” It’s more along the lines of “you can’t force someone to listen to you say mean things.” Blocks and mutes are woefully ineffective, especially against targeted attacks. It’s trivially easy to create a new Twitter account (and I have made several on a lark just because I could). But if the legal system can have Anti-SLAPP laws to prevent censorship-by-lawsuit, Twitter should be able to come up with a system of Anti-STAPP rules.

One suggestion I heard (I believe it was on a recent episode of “This Week in Tech”, but I don’t recall for sure) was the idea of a “jury of peers.” Instead of having Twitter staff review all of the harassment, spam, etc. complaints, select some number of users to give them a first pass. Even if just a few hundred active accounts a day are selected for “jury duty”, this gives a scalable mechanism for actually looking at complaints and encouraging community norms.
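To make the idea concrete, here’s a minimal sketch of how such a first-pass “jury” might be wired up. Everything in it — the account pool, the complaint shape, the five-juror panels, the escalation threshold — is my own assumption for illustration, not anything Twitter has described.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Complaint:
    """A report filed against a tweet (hypothetical shape)."""
    tweet_id: str
    reason: str
    votes: list = field(default_factory=list)  # juror verdicts collected so far (1 = abusive, 0 = not)

def select_jury(active_users, size=300, seed=None):
    """Pick a random sample of active accounts for today's 'jury duty'."""
    rng = random.Random(seed)
    return rng.sample(active_users, min(size, len(active_users)))

def assign_complaints(jury, complaints, reviews_per_complaint=5):
    """Route each complaint to a small panel of jurors for a first-pass review."""
    assignments = {juror: [] for juror in jury}
    for complaint in complaints:
        for juror in random.sample(jury, min(reviews_per_complaint, len(jury))):
            assignments[juror].append(complaint)
    return assignments

def first_pass_verdict(complaint, threshold=0.6):
    """Escalate to staff only if enough jurors flag the tweet."""
    if not complaint.votes:
        return "pending"
    flag_rate = sum(complaint.votes) / len(complaint.votes)
    return "escalate" if flag_rate >= threshold else "dismiss"

if __name__ == "__main__":
    users = [f"user{i}" for i in range(10_000)]
    jury = select_jury(users, size=300, seed=42)
    complaints = [Complaint("t1", "harassment"), Complaint("t2", "spam")]
    assignments = assign_complaints(jury, complaints)
    complaints[0].votes = [1, 1, 1, 0, 1]   # four of five jurors say it's abusive
    complaints[1].votes = [0, 0, 1, 0, 0]
    for c in complaints:
        print(c.tweet_id, first_pass_verdict(c))
```

The appeal of random selection is that it’s hard to brigade: an attacker can’t know which accounts will review a given complaint, and staff only see the reports that clear the jury’s threshold.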

Maybe this is a terrible idea, but it’s clear that Twitter needs to do something effective if it wants to continue to attract (and retain!) users.

Full disclosure: I own a small number of shares of Twitter stock. It’s not going well for me.
