Cell phone plans are changing

If you’re a cell phone plan geek (and certainly someone out there is, right?), last week was pretty interesting for you. First AT&T announced they’d be eliminating one plan and halving the data limit on a more expensive plan. Then T-Mobile followed up with their announcement of going to a single post-paid offering. This unlimited plan has some limits, which the EFF is looking into for a possible net neutrality complaint.

Moore’s law is an observation about the growth of transistor counts on integrated circuits over time, but it has been more broadly generalized to apply to many aspects of technology. Of particular note is the general trend of a technology to become significantly cheaper over time. This does not seem to be the case in the world of mobile phone service, which should be an immediate red flag for anti-consumer behavior.

I compared my current T-Mobile bill to my hypothetical bill under the new “T-Mobile One.” We pay $50/month for the unlimited talk/text plan, plus $20/month for my line (which includes unlimited data) and $15/month for my wife’s line (6 GB of data per month). The total bill before taxes and fees comes to $85/month. With T-Mobile One, we’d pay $120/month ($130 if we don’t autopay). This $35/month increase adds service that I’d pay just $5 more to get and also takes away the ability to use my phone as a WiFi hotspot (without being throttled to 2G speeds or paying an additional $3/GB).
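The comparison boils down to simple arithmetic. Here is a quick sketch using the prices above (before taxes and fees):

```python
# Current T-Mobile plan, per month (prices from my bill)
current = {
    "unlimited talk/text base": 50,
    "my line (unlimited data)": 20,
    "wife's line (6 GB data)": 15,
}
current_total = sum(current.values())   # $85/month

# T-Mobile One, assuming we keep autopay ($130 without it)
one_total = 120

increase = one_total - current_total    # $35/month more

# Upgrading my wife's line to unlimited on the OLD plan would cost
# $20 instead of $15 -- just $5 more for the same added service.
old_plan_upgrade_cost = 20 - 15

print(current_total, one_total, increase, old_plan_upgrade_cost)
```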

I’ll admit that I don’t use my phone as a hotspot, in part because the coverage is questionable (or non-existent) in a lot of places that I might want to use it. But I’m already overpaying for data: my wife uses a few hundred megabytes a month and I average around 1 gigabyte or so. Only in July of this year when I was out of town for three of four weeks did I use more than 6 GB, and even then it was only 8 GB.

Perhaps if T-Mobile were going to put that extra money into expanding coverage, I’d be more inclined to go along with their plan. Instead, if I were to switch I’d get the same level of technology for a higher price. That’s not how this is supposed to work. It’s not clear at this point if existing customers will be able to keep their current plan. I assume in the short term that will be the case. If I’m forced to change at some point, I’ll have to go with a different carrier. If I’m going to get raked over the coals on price, I might as well get coverage.

Twitter’s abuse problem

I’ve been an avid Twitter user for years. I’ve developed great friendships, made professional connections, learned, laughed, and generally had a good time. Of course, I also happen to be a relatively-anonymous white male, which means my direct exposure to abuse is fairly limited. I can’t say the same for some of my friends. Last week’s BuzzFeed article calling Twitter “a honeypot for assholes” didn’t seem all that shocking to me.

Twitter, of course, denied it in the most “that article is totally wrong, but we won’t tell you why because it’s actually spot on” way possible:

In response to today’s BuzzFeed story on safety, we were contacted just last night for comment and obviously had not seen any part of the story until we read it today. We feel there are inaccuracies in the details and unfair portrayals but rather than go back and forth with BuzzFeed, we are going to continue our work on making Twitter a safer place. There is a lot of work to do but please know we are committed, focused, and will have updates to share soon.

To its credit, Twitter has publicly admitted that its solution to harassment is woefully inadequate. It’s in a tough spot: balancing free expression and harassment prevention is not an easy task. Some have suggested the increased rollout of Verified status would help, but that’s harmful to some of the people best served by anonymous free expression. I get that Twitter does not want to be in the business of moderating speech.

It’s important to distinguish speech, though, so I’m going to invent a word. There’s offensive speech and then there’s assaultive speech. Offensive speech might offend people or it might offend governments. Great social reform and obnoxious threadshitting both fall into this category. This is the free speech that we all argue for. Assaultive speech is less justifiable. It’s not merely being insulting, but it’s the aggressive attempt to squash someone’s participation.

I like to think of it as the difference between letting a person speak and forcing the audience to listen. I could write “Jack Dorsey sucks” on this blog every day and while it would be offensive, it is (and should be) protected. Even posting that on Twitter would fall into this category. If instead I tweeted “@jack you suck” every day, that’s still offensive but now it’s assaultive, too.

This, of course, is in the context of a company deciding what it will and won’t allow on its platform, not in the context of what should be legally permissible. And don’t mistake my position for “you can never say something mean to someone.” It’s more along the lines of “you can’t force someone to listen to you say mean things.” Blocks and mutes are woefully ineffective, especially against targeted attacks. It’s trivially easy to create a new Twitter account (and I have made several on a lark just because I could). But if the legal system can have Anti-SLAPP laws to prevent censorship-by-lawsuit, Twitter should be able to come up with a system of Anti-STAPP rules.

One suggestion I heard (I believe it was on a recent episode of “This Week in Tech”, but I don’t recall for sure) was the idea of a “jury of peers.” Instead of having Twitter staff review all of the harassment, spam, etc. complaints, select some number of users to give them a first pass. Even if just a few hundred active accounts a day are selected for “jury duty”, this gives a scalable mechanism for actually looking at complaints and encouraging community norms.
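A minimal sketch of what the selection step might look like. Everything here is hypothetical (the function name, pool size, and account list are mine, and nothing like this exists in Twitter’s actual systems); it just shows that the hard part is policy, not mechanism:

```python
import random

def select_jury(active_accounts, pool_size=300, seed=None):
    """Pick a daily 'jury' of active accounts to triage abuse reports.

    A hypothetical first-pass review pool: each selected user sees a
    handful of complaints and flags the obvious ones, leaving only the
    hard calls for paid staff.
    """
    rng = random.Random(seed)
    pool_size = min(pool_size, len(active_accounts))
    return rng.sample(active_accounts, pool_size)

# Toy example: 10,000 active accounts, 300 jurors per day
accounts = [f"user{i}" for i in range(10_000)]
jury = select_jury(accounts, pool_size=300, seed=42)
print(len(jury))  # 300
```

Seeding the generator is just for reproducibility in this toy; a real system would also need to weed out the jurors who are themselves the harassers.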

Maybe this is a terrible idea, but it’s clear that Twitter needs to do something effective if it wants to continue to attract (and retain!) users.

Full disclosure: I own a small number of shares of Twitter stock. It’s not going well for me.

Getting support via social media

“Twitter wants you to DM brands about your problems” read a recent Engadget article. It seems Twitter is making it easier to contact certain brand accounts by putting a big contact button on the profile page. The idea is that the button, along with additional information about when the account is most responsive, will make it easier for customers to get support via social media. I can understand wanting to make that process easier; Twitter and other social media sites have been an effective way for unhappy customers to get attention.

The previous sentence explains why I don’t think this will end up being a very useful feature. Good customer support seems to be the exception rather than the rule. People began turning to social media to vent their frustration with the poor service they received. To their credit, companies responded well by providing prompt responses (if not always resolutions). But the incentive there is to tamp down publicly-expressed bad sentiment.

When I worked at McDonald’s, we were told that people who have a bad customer service experience are more likely to talk about it, and will tell more people, than people who have a good one. Studies also show complaints have an outsized impact. The public nature of the complaint, not the specific medium, is what drives the effectiveness of social media support.

In a world where complaints are dealt with privately, I expect companies to revert to their old ways. Slow and unhelpful responses will become the norm over time. If anything, the experience may get worse since social media platforms lack some of the functionality of traditional customer support platforms. It will be easier, for example, for replies to fall through the cracks.

I try to be not-a-jerk. In most cases, I’ll go through the usual channels first and try to get the problem resolved that way. But if I take to social media for satisfaction, you can bet I’ll do it publicly.

Disappearing WiFi with rt2800pci

I recently did a routine package update on my Fedora 24 laptop. I’ve had the laptop for three years and have been running various Fedorae the whole time, so I didn’t think much of it. So it came as some surprise to me when after rebooting I could no longer connect to my WiFi network. In fact, there was no indication that any wireless networks were even available.

Since the update included a new kernel, I thought that might be the issue. Rebooting into the old kernel seemed to fix it (more on that later!), so I filed a bug, excluded kernel packages from future updates, and moved on.

But a few days later, I rebooted and my WiFi was gone again. The kernel hadn’t updated, so what could it be? I spent a lot of time flailing around until I found a “solution”: a four-year-old forum post that said don’t reboot. Booting from a powered-off state, or suspending and resuming the laptop, will cause the wireless to work again.

And it turns out, that “fixed” it for me. A few other posts seemed to suggest power management issues in the rt2800pci driver. I guess that’s what’s going on here, though I can’t figure out why I’m suddenly seeing it after so long. Seems like a weird failure mode for failing hardware.

Here’s what dmesg and the systemd journal reported:

Aug 01 14:54:24 localhost.localdomain kernel: ieee80211 phy0: rt2800_wait_wpdma_ready: Error - WPDMA TX/RX busy [0x00000068]
Aug 01 14:54:24 localhost.localdomain kernel: ieee80211 phy0: rt2800pci_set_device_state: Error - Device failed to enter state 4 (-5)

Hopefully, this post saves someone else a little bit of time in trying to figure out what’s going on.
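If you hit the same error, a driver reload may be quicker than a full suspend/resume cycle. This is my guess at a workaround, not something from the forum post: it assumes the rt2800pci module can be cleanly unloaded and that reinitializing it resets whatever state got stuck (run as root):

```shell
#!/bin/sh
# If the rt2800pci WPDMA error shows up in the kernel log, try
# reloading the driver instead of rebooting. (Hypothetical fix --
# the forum workaround was suspend/resume; this just reinitializes
# the device much as a resume would.)
if dmesg 2>/dev/null | grep -q 'rt2800_wait_wpdma_ready: Error'; then
    modprobe -r rt2800pci
    modprobe rt2800pci
fi
```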

Q: What’s the point of FAQs?

A: Last week Justin Searls had a series of tweets using the #HonestFAQ hashtag. This one, in particular, got me thinking:

As a newly-minted marketing pro, my first thought was “yeah, that’s true.” Most prospective users or customers will not read through the entirety of your documentation just to see if they want to play around with your product or not. FAQs can be an excellent rhetorical device for preempting reasons to not sign up. Twenty-five years of world wide web use has taught us to look first for FAQs, so there’s a quick success when the answers are there.

But I also started thinking about FAQs I’ve written over the years. Mostly, they were in my first professional job. I was doing a lot of desktop support, so I put together some FAQ entries to head off some of the questions that I regularly was asked (or expected to be asked). The goal was to let people handle their own easy problems so that I could focus on the harder problems for them.

The more I thought about it, the more I came to the conclusion that FAQs (particularly the non-marketing ones) are indicators of a bad user experience. This may be due to technical issues, documentation, or something else. But if the platonic ideal of software or a service includes the fact that everything is self-evident, then there’s no other conclusion.

This makes FAQs good, not bad, as they serve as a guide post for what needs to be fixed from the user perspective. The ability to preemptively write good FAQs means you’re thinking like your users. The earlier in the process you start doing that, the fewer FAQs you may need to write.

DevOps is dead!

“$thing is dead!” is one of the more annoying memes in the world of technology. A TechCrunch article back in April claimed that managed services (of cloud infrastructure) are the death knell of DevOps. I dislike the article for a variety of reasons, but I want to focus on why the core argument is bunk. Simply put: “the cloud” is not synonymous with “magical pixie dust.” Real hardware and software still exist in order to run these services.

Amazon Web Services (AWS) is the current undisputed leader in the infrastructure-as-a-service (IaaS) space. AWS probably underlies many of the services you use on a daily basis: Slack and Netflix are two prime examples. AWS offers dozens of services for computation, storage, and networking that roll out updates to datacenters across the globe many times a day. DevOps practices are what make that possible.

Oh, but the cloud means you don’t need your internal DevOps team! No. Shut up. “Why not simply teach all developers how to utilize the infrastructure tools in the cloud?” Because being able to spin up servers and being able to effectively manage and scale them are two entirely different concepts. It is true that cloud services can (not “must”!) take the “Ops” out of “DevOps” for development environments. But just as having access to WebMD doesn’t mean I’m going to perform my own surgery, being able to spin up resources doesn’t obviate the need for experienced management.

The author spoke of “managed services provider” as an apparent synonym for “IaaS provider”. He ignored what I think of as “managed services” which is a contracted team to manage a service for you. That’s what I believe to be the more realistic threat to internal DevOps teams. But it’s no different than any other outsourcing effort, and outsourcing is hardly a new concept.

At the end of the article, the author finally gets around to admitting that DevOps is a cultural paradigm, not a position or a particular brand of fairy dust. Cloud services don’t threaten DevOps, they make it easier than ever to practice. Anyone trying to convince you that DevOps is dead is just trying to get you to read their crappy article (yes, I have a well-tuned sense of irony, why do you ask?).

Crowdfunding academic research?

A few months ago, the Lafayette Journal & Courier ran a story about Purdue University turning to crowdfunding Zika research. Funding sources in higher ed are special. Grants from federal and other agencies require the submission of sometimes lengthy proposals. The approval process is slow and bureaucratic. Private sector funding can indirectly bias fields of study (why would a company fund a study that is expected to be bad for the company?) or at least lead to accusations of bias.

There are benefits to a crowdfunding model for academic research. Getting the public involved in the process means they’re interested, which is good for scientific literacy. Crowdfunding can be a powerful tool for raising a large amount of money.

On the other hand, we already have a crowdfunding model for research: the tax-supported National Science Foundation, National Institutes of Health, etc. Basic research generally lacks the pizzazz to attract large amounts of crowdfunding, but it is a key foundation for higher-level research.

As the article pointed out, a crowdfunding pitch on the heels of a major fundraising campaign is a bit of a sour note. But overall, using crowdfunding to augment research is an appealing idea. I just worry about the day that researchers become dependent on it.

How gamification can change our habits

A while back, a friend posted the following tweet:

Sleeping in the buff means you don’t get Fitbit credit for the steps taken to clean up dog piss in the wee hours of the morning.

I laughed at first, but then I thought about it. Gamification changes how we behave: I certainly walk a lot more since I started counting my steps on my phone, for example. Adding small rewards and leveling up is used both in games and to promote desired behavior in “serious” situations.

So what will long-term gamification side effects look like? Will people who normally sleep naked start sleeping with socks on so they can wear their Fitbit? Will they instead buy a Jawbone Up that they can wear on their ankle?

Changing how HTCondor is packaged in Fedora

The HTCondor grid scheduler and resource manager follows the old Linux kernel versioning scheme: for release x.y.z, if y is an even number it’s a “stable” series that gets only bugfixes; behavior changes and major features go into the odd-numbered y series. For a long time, the HTCondor packages in Fedora used the development series. However, this leads to a choice between introducing behavior changes when a new development HTCondor release comes out or pinning a Fedora release to a particular HTCondor release, which means no bugfixes.
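The even/odd rule is easy to state in code. A small sketch (the classification comes from the versioning scheme above; the function name is mine):

```python
def is_stable_series(version):
    """Return True if an HTCondor version is on a stable (even-y) series.

    Follows the old Linux kernel convention: x.y.z is a stable release
    when y is even, and a development release when y is odd.
    """
    x, y, *rest = (int(part) for part in version.split("."))
    return y % 2 == 0

print(is_stable_series("8.6.3"))  # True  -- stable series
print(is_stable_series("8.5.8"))  # False -- development series
```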

This ignores the Fedora Packaging Guidelines, too:

As a result, we should avoid major updates of packages within a stable release. Updates should aim to fix bugs, and not introduce features, particularly when those features would materially affect the user or developer experience. The update rate for any given release should drop off over time, approaching zero near release end-of-life; since updates are primarily bugfixes, fewer and fewer should be needed over time.

Although the HTCondor developers do an excellent job of preserving backward compatibility, behavior changes can happen between x.y.1 and x.y.2. HTCondor is not a major part of Fedora, but we should still attempt to be good citizens.

After discussing the matter with upstream and the other co-maintainers, I’ve submitted a self-contained change for Fedora 25 that will

  1. Upgrade the HTCondor version to 8.6
  2. Keep HTCondor in Fedora on the stable release series going forward

Most of the bug reports against the condor-* packages have been packaging issues and not HTCondor bugs, so upstream isn’t losing a massive testing resource here. I think this will be a net benefit to Fedora since it prevents unexpected behavior changes and makes it more likely that I’ll package upstream releases as soon as they come out.

Fourth Amendment protection and your computer

Back in January, I wrote an article for Opensource.com arguing that judges need to be educated on open source licensing. A recent decision from the Eastern District of Virginia makes it clear that the judiciary needs to better understand technology in general. Before I get into the details of the case, I want to make it clear that I tend to be very pro-defendant on the 4th-8th Amendments. I don’t see them as helping the guilty go free (although that is certainly a side effect in some cases), but as preventing the persecution of the innocent.

The defendant in this case is accused of downloading child pornography, which makes him a pretty unsympathetic defendant. Perhaps the heinous nature of his alleged crime weighed on the mind of the judge when he said people have no expectation of privacy on their home computers. Specifically:

Now, it seems unreasonable to think that a computer connected to the Web is immune from invasion. Indeed, the opposite holds true: in today’s digital world, it appears to be a virtual certainty that computers accessing the Internet can – and eventually will – be hacked.

As a matter of fact, that’s a valid statement. It’s good security advice. As a matter of law, that’s a terrible reason to conclude that a warrant was not needed. Homes are broken into every day, and yet the courts have generally ruled that an expectation of privacy exists in the home.

The judge drew an analogy to Minnesota v. Carter, in which the Supreme Court ruled that a police officer peering through broken blinds did not constitute a violation of the Fourth Amendment. I find that analogy to be flawed. In this case, it’s more like the officers entered through a broken window and began looking through drawers. Discovering the contents of a computer requires more than just a passing glance, but instead at least some measure of active effort.

What got less discussion is the Sixth Amendment issue. Access to the computer was made possible by an exploit in Tor that the FBI made use of. The defendant asked for the source code, which the judge refused:

The Government declined to furnish the source code of the exploit due to its immateriality and for reasons of security. The Government argues that reviewing the exploit, which takes advantage of a weakness in the Tor network, would expose the entire NIT program and render it useless as a tool to track the transmission of contraband via the Internet. SA Alfin testified that he had no need to learn or study the exploit, as the exploit does not produce any information but rather unlocks the door to the information secured via the NIT. The defense claims it needs the exploit to determine whether the FBI closed and re-locked the door after obtaining Defendant’s information via the NIT. Yet, the defense lacks evidentiary support for such a need.

It’s a bit of a Catch-22 for the defense. They need evidence to get the evidence they need? I’m open to the argument that the exploit here is not a witness per se, making the Sixth Amendment argument here a little weak, but as a general trend, the “black boxes” used by the government must be subject to scrutiny if we are to have a just justice system.

It’s particularly obnoxious since unauthorized access to a computer by non-law-enforcement has been punished rather severely at times. If a citizen can get 10 years in jail for something, it stands to reason the government should have some accountability when undertaking the same action.

I have seen nothing that suggests the judge wrote this decision out of malice or incompetence. He probably felt that he was making the correct decision. But those who make noise about the “government taking our rights away” would be better served paying attention to the papercut cases like this instead of the boogeyman narratives.

The easy answer here is “don’t download child pornography.” While that’s good advice, it does nothing to protect the innocent from malicious prosecution. Hopefully this will be overturned on appeal.