Twitter’s abuse problem

I’ve been an avid Twitter user for years. I’ve developed great friendships, made professional connections, learned, laughed, and generally had a good time. Of course, I also happen to be a relatively anonymous white male, which means my direct exposure to abuse is fairly limited. I can’t say the same for some of my friends. Last week’s BuzzFeed article calling Twitter “a honeypot for assholes” didn’t seem all that shocking to me.

Twitter, of course, denied it in the most “that article is totally wrong, but we won’t tell you why because it’s actually spot on” way possible:

In response to today’s BuzzFeed story on safety, we were contacted just last night for comment and obviously had not seen any part of the story until we read it today. We feel there are inaccuracies in the details and unfair portrayals but rather than go back and forth with BuzzFeed, we are going to continue our work on making Twitter a safer place. There is a lot of work to do but please know we are committed, focused, and will have updates to share soon.

To its credit, Twitter has publicly admitted that its solution to harassment is woefully inadequate. It’s in a tough spot: balancing free expression and harassment prevention is not an easy task. Some have suggested the increased rollout of Verified status would help, but that’s harmful to some of the people best served by anonymous free expression. I get that Twitter does not want to be in the business of moderating speech.

It’s important to distinguish between kinds of speech, though, so I’m going to invent a word. There’s offensive speech and then there’s assaultive speech. Offensive speech might offend people or it might offend governments. Great social reform and obnoxious threadshitting both fall into this category. This is the free speech that we all argue for. Assaultive speech is less justifiable. It’s not merely insulting; it’s an aggressive attempt to squash someone’s participation.

I like to think of it as the difference between letting a person speak and forcing the audience to listen. I could write “Jack Dorsey sucks” on this blog every day and while it would be offensive, it is (and should be) protected. Even posting that on Twitter would fall into this category. If instead I tweeted “@jack you suck” every day, that’s still offensive but now it’s assaultive, too.

This, of course, is in the context of a company deciding what it will and won’t allow on its platform, not in the context of what should be legally permissible. And don’t mistake my position for “you can never say something mean to someone.” It’s more along the lines of “you can’t force someone to listen to you say mean things.” Blocks and mutes are woefully ineffective, especially against targeted attacks. It’s trivially easy to create a new Twitter account (and I have made several on a lark just because I could). But if the legal system can have Anti-SLAPP laws to prevent censorship-by-lawsuit, Twitter should be able to come up with a system of Anti-STAPP rules.

One suggestion I heard (I believe it was on a recent episode of “This Week in Tech”, but I don’t recall for sure) was the idea of a “jury of peers.” Instead of having Twitter staff review all of the harassment, spam, and other complaints, select some number of users to give them a first pass. Even if just a few hundred active accounts a day are selected for “jury duty”, this gives Twitter a scalable mechanism for actually looking at complaints and encouraging community norms.

Maybe this is a terrible idea, but it’s clear that Twitter needs to do something effective if it wants to continue to attract (and retain!) users.

Full disclosure: I own a small number of shares of Twitter stock. It’s not going well for me.

Getting support via social media

“Twitter wants you to DM brands about your problems,” reads a recent Engadget article. It seems Twitter is making it easier to contact certain brand accounts by putting a big contact button on the profile page. The idea is that the button, along with additional information about when the account is most responsive, will make it easier for customers to get support via social media. I can understand wanting to make that process easier; Twitter and other social media sites have been an effective way for unhappy customers to get attention.

The previous sentence explains why I don’t think this will end up being a very useful feature. Good customer support seems to be the exception rather than the rule. People began turning to social media to vent their frustration with the poor service they received. To their credit, companies responded well by providing prompt responses (if not always resolutions). But the incentive there is to tamp down publicly-expressed bad sentiment.

When I worked at McDonald’s, we were told that people are more likely to talk about bad customer service than good, and that they will tell more people about it. Studies also show complaints have an outsized impact. The public nature of the complaint, not the specific medium, is what drives the effectiveness of social media support.

In a world where complaints are dealt with privately, I expect companies to revert to their old ways. Slow and unhelpful responses will become the norm over time. If anything, the experience may get worse since social media platforms lack some of the functionality of traditional customer support platforms. It will be easier, for example, for replies to fall through the cracks.

I try to be not-a-jerk. In most cases, I’ll go through the usual channels first and try to get the problem resolved that way. But if I take to social media for satisfaction, you can bet I’ll do it publicly.

Fourth Amendment protection and your computer

Back in January, I wrote an article for Opensource.com arguing that judges need to be educated on open source licensing. A recent decision from the Eastern District of Virginia makes it clear that the judiciary needs to better understand technology in general. Before I get into the details of the case, I want to make it clear that I tend to be very pro-defendant on the 4th-8th Amendments. I don’t see them as helping the guilty go free (although that is certainly a side effect in some cases), but as preventing the persecution of the innocent.

The defendant in this case is accused of downloading child pornography, which makes him a pretty unsympathetic defendant. Perhaps the heinous nature of his alleged crime weighed on the mind of the judge when he said people have no expectation of privacy on their home computers. Specifically:

Now, it seems unreasonable to think that a computer connected to the Web is immune from invasion. Indeed, the opposite holds true: in today’s digital world, it appears to be a virtual certainty that computers accessing the Internet can – and eventually will – be hacked.

As a matter of fact, that’s a valid statement. It’s good security advice. As a matter of law, that’s a terrible reason to conclude that a warrant was not needed. Homes are broken into every day, and yet the courts have generally ruled that an expectation of privacy exists in the home.

The judge drew an analogy to Minnesota v. Carter, in which the Supreme Court ruled that a police officer peering through broken blinds did not constitute a violation of the Fourth Amendment. I find that analogy to be flawed. In this case, it’s more like the officers entered through a broken window and began looking through drawers. Discovering the contents of a computer requires more than just a passing glance, but instead at least some measure of active effort.

What got less discussion is the Sixth Amendment issue. Access to the computer was made possible by an exploit in Tor that the FBI made use of. The defendant asked for the source code, which the judge refused:

The Government declined to furnish the source code of the exploit due to its immateriality and for reasons of security. The Government argues that reviewing the exploit, which takes advantage of a weakness in the Tor network, would expose the entire NIT program and render it useless as a tool to track the transmission of contraband via the Internet. SA Alfin testified that he had no need to learn or study the exploit, as the exploit does not produce any information but rather unlocks the door to the information secured via the NIT. The defense claims it needs the exploit to determine whether the FBI closed and re-locked the door after obtaining Defendant’s information via the NIT. Yet, the defense lacks evidentiary support for such a need.

It’s a bit of a Catch-22 for the defense. They need evidence to get the evidence they need? I’m open to the argument that the exploit here is not a witness per se, making the Sixth Amendment argument here a little weak, but as a general trend, the “black boxes” used by the government must be subject to scrutiny if we are to have a just justice system.

It’s particularly obnoxious since unauthorized access to a computer by non-law-enforcement has been punished rather severely at times. If a citizen can get 10 years in jail for something, it stands to reason the government should have some accountability when undertaking the same action.

I have seen nothing that suggests the judge wrote this decision out of malice or incompetence. He probably felt that he was making the correct decision. But those who make noise about the “government taking our rights away” would be better served paying attention to the papercut cases like this instead of the boogeyman narratives.

The easy answer here is “don’t download child pornography.” While that’s good advice, it does nothing to protect the innocent from malicious prosecution. Hopefully this will be overturned on appeal.

I like technology, but I like owning the things I own

As easy as it is to hate computers, every so often I like to look around and remind myself that we’re living in The Future. Technology that is fairly routine today seemed so impossible when I was a kid. It’s not slowing down, either. Consumer technology, in particular the “connected home”, is making great advancements. But with great functionality come great headaches. There are any number of reasons to be concerned about the rise of the machines, but here’s one.

If you’re one of the (apparently few) people who bought a Revolv hub, you probably wanted to make your life easier. The ability to control your lights, thermostat, etc. from a smartphone is incredibly appealing. But come June 19, you can’t. Alphabet’s Nest bought Revolv back in October 2014 and has decided to shut down the service. It’s not just that the hubs will be unsupported; they will essentially become hummus-container-shaped paperweights.

Despite my hatred for computers, I actually like technology in general and I really like having fun new toys to play with. Even so, I have a hard time talking myself into purchases where I don’t really own what I own. I understand if a company decides that keeping a central service running for an unused or outdated product is no longer viable, but I’d still like to be able to play with it in standalone mode.

I have friends who strongly distrust relying on locked-in external services. If they can’t host it themselves (or at least have it hosted by a third party where they can freely move should the need arise), they don’t use it. I sympathize with that position, but I tend to take a more practical approach. There are a lot of things I let other people do — either in exchange for payment or in exchange for serving me ads — that I could do myself. I’d just rather spend my time and energy elsewhere.

A smart home system that is self-contained appeals to me greatly. I’d love to be able to go away during the winter and leave my thermostat at “just don’t let the pipes freeze, okay?” but have the furnace fire back up when I get an hour from home. If that system requires the vendor to keep its servers on, I’m not really interested (without even getting into the privacy and security implications). The “*aaS-ification” of technology offers great benefits to those who cannot implement technology solutions for themselves, but it also creates great risk.
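
For what it’s worth, the local-only logic I’m wishing for isn’t complicated. Here’s a minimal sketch of the kind of rule a standalone hub could run, with entirely hypothetical coordinates, setpoints, and radius:

```python
from math import radians, sin, cos, asin, sqrt

HOME = (40.42, -86.91)        # hypothetical home coordinates
AWAY_SETPOINT_F = 45          # just keep the pipes from freezing
COMFORT_SETPOINT_F = 68
RETURN_RADIUS_MILES = 50      # roughly "an hour from home"

def miles_between(a, b):
    """Great-circle (haversine) distance between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3959 * 2 * asin(sqrt(h))  # Earth's radius is ~3,959 miles

def target_temperature(phone_location):
    """Pick a thermostat setpoint based on how far away the phone is."""
    if miles_between(phone_location, HOME) <= RETURN_RADIUS_MILES:
        return COMFORT_SETPOINT_F   # heading home: warm the house back up
    return AWAY_SETPOINT_F          # still away: minimal heating

# The hub would poll the phone's location and apply the rule locally:
print(target_temperature((41.88, -87.63)))  # far from home, prints 45
```

Nothing in it requires a vendor’s server to stay up, which is exactly the standalone mode I wish these devices shipped with.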

April Opensource.com articles

Well, Opensource.com didn’t set any monthly traffic records this time around, but it was still another great month. I stepped up my contribution game this month, too.

The amorality of technology

Technology enthusiasts often argue that technology is amoral. When a technology is used for unsavory purposes, that’s a failing of the user, not the technology itself. That’s valid to some degree. This argument is sometimes extended to the development community around a technology. That’s never valid.

The idea that technology, particularly open source, must be a meritocracy because computers are incapable of moral judgments is a delusion. Too often “merit” is used to mean “competence from people like me”. To ignore issues of social justice under the guise of “meritocracy” is to implicitly support discrimination.

As I wrote a few weeks ago, even the most well-intentioned of us have implicit biases that color our thoughts and actions. Pretending that they don’t exist or don’t matter does nothing to counteract them. I won’t go so far as to suggest that no one with unsavory views be allowed to participate in a community, but community leaders should expect pushback when their words or actions make contributors or potential contributors feel unwelcome. The contributions made to the community are just as important as those made to the code.

So what about the technology itself? Do developers owe a duty of care to the morality of a technology’s use? Yes, I believe they do. Microsoft’s recent embarrassment with a Twitter chat bot shows how quickly a supposedly amoral technology can be corrupted. I don’t expect a completely incorruptible chat bot, nor do I think adding some guardrails is an easy task. I do think that putting something like that out in public is asking for trouble (a lesson Microsoft has learned in the past and apparently forgotten). It’s a poor reflection on humanity that this is an issue at all, but here we are.

When we develop or promote technology, it is vital that we not use the amorality of ones and zeros as an excuse to ignore the human context. We owe it to our communities and to our users to understand and acknowledge the human element. In the end, what we create is a reflection of us.

April Foolishness

As should surprise no one, the Internet loves April Fools’ Day. The Internet also hates April Fools’ Day. Although it has apparently been celebrated for centuries, only in recent years have corporate marketing departments gotten into the act with such gusto. As a result, every web posting must be viewed with suspicion on April 1.

Some people are of the opinion that this corporate foolery is played out. Even Google, a perpetual home-run-hitter, had a big strikeout this year. The “mic drop” feature was pulled from Gmail just a few hours in, after users complained of problems when they accidentally triggered it. Will this be the end of Google’s April Fools’ efforts? I’m sure it won’t be, but it may cause them to be more conservative next year. Will that mean it ends up not being funny? Quite possibly.

Not everyone had a bad day, though. The election insurance commercial from Esurance was brilliant. Virginia Commonwealth University had an amusing video about its “Tats, not SATs” policy. But the winner has to be the adult video website Pornhub, which became “Cornhub” for the day.

These examples show what a good corporate April Fools’ Day joke looks like. Like Hippocrates, first do no harm. Funny videos or blog posts are a good strategy because the worst that can happen is your users don’t get the joke. Anything that involves actual functionality should be attempted only with extreme caution. Make it safe.

Next, the post should be clearly fake. This is tough to do because you want your joke to have the appearance of being serious while still having that air of self-awareness. Think of it like a Saturday Night Live sketch: everyone knows it’s over the top and the actors play to that, but as soon as they crack a smile, it loses something. (As an aside, that’s why I’m not a Jimmy Fallon fan.) The point is you want people laughing at your joke, not at the people who didn’t get it.

Last, and most importantly, it must be funny. If it’s not funny, stop and find something else to do. Trying to be funny and failing is painful for everyone. The Verge has a post ranking some of this year’s jokes.

What3Words as a password generator

One of my coworkers shared an interesting site last week. What3Words assigns a three-word “address” to every 3m-by-3m square on Earth. The idea behind the site is that many areas of the world don’t have street numbers and names, and a three-word combination is much easier to remember than latitude/longitude pairs. Similar combinations are deliberately placed far apart so as to make them unambiguous.

It’s an interesting idea, but I immediately began thinking of a different use for it. What if people used it to come up with long, memorable, and hard-to-guess passwords? After all, the longer a password is (generally speaking), the better it is. And while correcthorsebatterystaple might be amusing, it’s much easier to remember a place. So you pick a memorable spot on the map and now you have a long password that you can look up if you forget it.

[Image: XKCD “Password Strength” by Randall Munroe. Used under the Creative Commons Attribution-NonCommercial 2.5 license.]

This method isn’t perfect. The main problem is that with a 3m-by-3m grid, it’s very sensitive to small differences in location: the square next to the one you picked has a completely different address. But especially for the technically unsavvy, it can be a good way to enable better password habits.
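
It’s also worth estimating how strong such a password actually is. Here’s a back-of-envelope sketch in Python, assuming the attacker knows you used a three-word address but nothing about the location:

```python
from math import log2

# Earth's surface area is roughly 510 trillion square meters;
# each What3Words cell covers 3m x 3m = 9 square meters.
EARTH_SURFACE_M2 = 510e12
CELL_M2 = 9

cells = EARTH_SURFACE_M2 / CELL_M2   # about 5.7e13 possible addresses
print(f"{cells:.1e} cells, ~{log2(cells):.0f} bits")  # ~46 bits
```

Forty-six bits is respectable for something you can reconstruct from memory, but it is nowhere near what a password manager will generate (more on that in the sidebar below).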

Sidebar: why Randall Munroe is wrong (-ish)
There’s another reason What3Words isn’t perfect, and the XKCD cartoon above is subject to the same weakness. If a password cracker knows people are mostly using concatenated words, they’ll start guessing combinations of words instead of combinations of characters. These sorts of passwords are stronger when they’re rare. Of course, there are trivial ways to mitigate the risks (insertion of special characters, selective capitalization, etc.).
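
To put rough numbers on that, here’s a back-of-envelope comparison in Python. The 2,048-word list size is the one the xkcd comic assumes; the character-set sizes are standard:

```python
from math import log2

# Four words drawn from a 2,048-word list (the xkcd scheme):
print(f"4 random words:        ~{4 * log2(2048):.0f} bits")   # ~44 bits

# "correcthorsebatterystaple" attacked character by character
# (25 lowercase letters); this is what a word-aware cracker skips:
print(f"25 lowercase chars:    ~{25 * log2(26):.0f} bits")    # ~118 bits

# A 20-character random string over ~94 printable ASCII characters:
print(f"20 random printables:  ~{20 * log2(94):.0f} bits")    # ~131 bits
```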

Still, given the choice between a 20-character random string and a 20-character set of words, I’ll take the random string as my password (unless the site/app disables paste, in which case I’ll cry). I use a password manager precisely so I don’t have to worry about trying to balance security and memorability. The What3Words method could be helpful as a password for my password safe, though.
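
For the curious, generating the kind of random string a password manager produces takes only a few lines in any language with a cryptographically secure random source. A minimal Python sketch:

```python
import secrets
import string

# 20 characters drawn from ~94 printable ASCII characters (~131 bits)
alphabet = string.ascii_letters + string.digits + string.punctuation
password = "".join(secrets.choice(alphabet) for _ in range(20))
print(password)
```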

Further defense of 140 characters

Last fall, when rumors began swirling that Twitter was looking at increasing the 140 character limit on tweets, I wrote a defense of the 140 character constraint. Last week, Re/Code and others reported that the limit change may come in March and that it could be as large as 10,000 characters.

Everything I wrote back in October still holds true. Now that SMS is no longer a primary method of interacting with Twitter, 140 characters is probably too small. But 10,000 is far too large. The first four paragraphs of this post are 1,244 characters. Can you imagine a timeline full of that (or more)?

It’s not just “oh noes! They are changing a thing!”, which is a common reaction whenever Facebook changes anything. Twitter has made a lot of changes that I think are great: retweets (yes, kids, retweets used to be a manual process that often required editing the tweet in order to be able to fit “MT @name” in front of it), quoted tweets, embedded images, polls (even though there’s a lot to be improved on there), and 10k character direct messages.

In this case, the short limit is what makes Twitter. As my friend Zachary Baiel said, “The medium is the message. The character limit of Twitter defines itself. Otherwise, it’s a stream of blogs.”

Twitter emphasized four characteristics in its IPO filing (thanks to Karen Demerly for bringing this to my attention):

  • Public
  • Real Time
  • Conversational
  • Distributed

10,000 characters does not seem very real time (it takes a while to type that out and longer to read a lot of them) and certainly not conversational (perhaps more a series of short speeches). There’s been some talk of the UI presenting a “read more” kind of option, and as a co-maintainer of a Twitter client, I’m inclined to resist having to make changes to my application.

But more than just laziness, I think 10k is actively harmful. Whenever a new feature is announced, the biggest complaint I see is “why aren’t you addressing abuse instead?” I get it, abuse is a hard subject to deal with, particularly on an unmoderated medium such as Twitter. One way that abuse happens is that abusers get their followers to dogpile the mentions of the target. Imagine how many targets you could include in 10,000 characters.
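
For a sense of scale: usernames are capped at 15 characters, so at roughly 17 characters per mention (the “@”, the name, and a space), a single 10,000-character tweet could dogpile nearly 600 accounts at once.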

More innocuous (even though I find it super annoying) is the phenomenon of “I took a picture of some weather, let me tag all of the meteorologists in my market so that they’ll see it and maybe retweet me or put it on the news broadcast.” Those people will certainly make use of the extra characters, but it will add nothing to the conversation, only make it worse.

I get it, Twitter stock is plummeting. (Full disclosure: I own a few shares and expect to get quite the tax write-off from them.) There’s a lot of pressure to improve revenue, user engagement, and (most importantly to the people applying the pressure) the stock price. But this change will just make the user experience worse, and that doesn’t seem to be a reasonable way for Twitter to turn itself around.

I’m hoping that 10,000 is just a trial balloon. Nobody seems committed to making that the final number, so hopefully when the feature lands, it’s more reasonable. Or not. Will I stop using Twitter if the character limit changes to 10,000? Not right away. Maybe I will at some point, though.

By the way, this entire post (including this line) checks in at 3,398 characters.