The irony of automation

Automation is touted – often rightly – as a savior. It saves time, it saves effort, it saves money, and it saves lives. Except when it doesn’t. A while back, I read a two-part post about how a mistake with an automated pharmacy system led to a 38x overdose. It’s not that the system itself made a mistake, but that it enabled the medical professionals to make a mistake they’d never have made in a pen-and-paper system.

This story has two ultimate lessons. First, modes are dangerous in user interfaces, because they are easy to overlook and can lead to wildly different outcomes. In this story, had the dosage input always required one form – either always the total dosage or always the dosage per unit of patient weight – this would never have happened. Allowing either makes it easy to make a lethal mistake. Perhaps a better option would be an optional popup that calculates the total dosage from the per-weight dosage and the patient’s weight. That retains the convenience of being able to prescribe the dosage either way, while making it explicitly obvious which way is being used.
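To make that design concrete, here is a minimal sketch (in Python, with hypothetical names, not taken from any real pharmacy system) of the explicit calculation: the prescriber enters a per-weight dose and a weight, and the system derives the total. Because the conversion is an explicit step rather than a hidden mode, there is no ambiguity about which kind of number was entered.

```python
def total_dose_mg(dose_mg_per_kg: float, weight_kg: float) -> float:
    """Derive the total dose from a per-weight dose and patient weight.

    Requiring both inputs, and showing the computed total, makes the
    per-weight interpretation explicit instead of relying on a mode toggle.
    """
    if dose_mg_per_kg <= 0 or weight_kg <= 0:
        raise ValueError("dose and weight must be positive")
    return dose_mg_per_kg * weight_kg

# A 20 kg patient prescribed 5 mg/kg gets a clearly-labeled 100 mg total.
print(total_dose_mg(5.0, 20.0))
```

The point of the sketch isn't the arithmetic, of course; it's that the units appear in the interface (here, the parameter names), so a per-weight number can never be silently read as a total.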

The second lesson is that it’s important for experts with specialized knowledge to apply that to their use of automation. When something doesn’t seem right, it’s easy to find ways to explain it away, especially if the automation is reliable. But “that doesn’t seem right” must remain a feeling we pay attention to.

Giving up is the better part of valor

There’s a lot to be said for sticking with a problem until it is solved. The tricky part can be knowing when it is solved enough. I’ve been known to chase down an issue until I can explain every aspect of it. The very existence of the problem is a personal insult that will not be satisfied as long as the problem continues to insist on being.

That approach has served me well over the years, even if it can be annoying sometimes. I’ve learned by chasing these problems to their ultimate resolution. Sometimes it even reveals conditions that can be solved before they become problems.

But as with anything, there’s a tradeoff to be made. The time it takes to run down a problem is time not spent elsewhere. It’s a matter of prioritization. Does having it 80% fixed do what you need? Can you just ignore the problem and move on?

A while back, I was trying to get a small script to build in our build system at work. For whatever reason, the build itself would work, but using the resulting executable to upload itself to Amazon S3 failed with an indecipherable (to me at least) Windows library error. It made no sense to me. This was a workflow I had used on a local virtual machine dozens of times. And it worked if I executed the build artifact by hand. Just not in the build script.

I spent probably a few hours working on solutions. But no matter what I tried, I made no progress. When I got to the point where I had exhausted all of the reasonable approaches I could think of, I implemented a workaround and moved on to something else.

It can be hard to know when to give up. Leaving a problem unsolved might come back to bite you later. But what else could you be doing if you’re not spinning your wheels?

Don’t memorize what you can look up

“Never memorize something that you can look up” is a quote often attributed to Albert Einstein. And it happens to be pretty solid advice in most cases. No value is added by being able to recite facts from memory. Value comes from being able to piece together the facts to make something new. It’s one thing to know the syntax of a command or a language function. It’s something else entirely to know how to use it to get the desired result.

I recently had a conversation with a gentleman who was applying for a volunteer gig at a non-profit. The role involved doing some work with spreadsheets, and they had him sit down and implement a few features while they watched. At one point, he looked up how to perform a particular task. They ended up not accepting him.

I don’t think we have quite caught on to the idea of looking up instead of memorizing. As Seth Godin points out, ubiquitous lookup is a very new concept. The idea of being able to rattle off easily discoverable facts is still appealing to us. In some cases, that’s still desirable. I really want EMTs to know how to perform first aid without Googling it. Pilots should know what the various switches and buttons in the cockpit do. Programmers? Meh.

Anecdotally, the tech industry is ahead of the general population in terms of avoiding memorization. I have a hunch as to why that may be. Memorization comes from repetition, and in tech repetition is something we strive to avoid. If you’re repeating the same thing over and over, you’re doing something wrong. That’s not necessarily the case in other fields.

When I think of the things I have memorized, few of them are useful. I’ll probably never need to know that the McDonald’s restaurant in Georgetown, Indiana is store 12895. Or the one in Clarksville is 383. Or the one on Grantline Road is 12900. I remember IP addresses for defunct hosts that I haven’t worked on in eight-plus years. I don’t remember the argument order for Perl’s split function (it’s always the opposite of what I think it is), but that’s okay. When I need to split a string in Perl, I can look it up. It’s more important that I know when it’s appropriate to do that than to instantly recall the implementation details (and I suspect if I spent more time coding, I’d remember more syntax).

I hope the “don’t memorize” philosophy continues to take hold. For my part, I’ll never reject someone because they had to use Google. If anything, the ability to use search engines and other fact-finding tools is among the most important skills one can have.

Journalism and leaks

Over at Lawfare, Jack Goldsmith had a great article called “Journalism in the Doxing Era”. Professor Goldsmith examined the differences between data published by Wikileaks and The New York Times. I’m no journalist, but I am a journalish, and the thing that stood out to me is what makes the act of publication journalism.

Two attributes, in my mind, make the publishing of leaked or stolen information journalism. First, authentication. Responsible journalism requires presenting facts, not rumors. If documents are published, they had better be the real deal. It’s easy to fake correspondence that looks authentic, so verification has to come before publication.

The second attribute is editorial filtering. Once you’re left with true (or at least authentic) documents, what’s newsworthy? There’s an argument that everything should be published so the public can decide for themselves what they think is important. I’m sympathetic to that, but it’s also a little lazy. Journalists should not just be gatherers of information, but they should be curators of it. That means chucking out what’s not important in favor of what is.

Of course, importance is very context-sensitive, but some things are pretty clear. John Podesta’s risotto recipe? Not important (unless there’s a food blog that wants to run with it). The Clinton campaign receiving debate questions in advance? Important. (As an aside, the whole “but her emails” thing overall may prove to be one of the great tragedies of the 21st century. That doesn’t make this particular example unnewsworthy.)

An editorial filter does lend itself to bias, and an even greater perception of bias by those biased in the opposite direction. Nonetheless, most news consumers don’t have time to examine everything and draw their own informed conclusions. Journalists serve the public interest when they collect facts, but also when they curate them.

Maybe your tech conference needs less tech

My friend Ed runs a project called “Open Sourcing Mental Illness”, which seeks to change how the tech industry talks about mental health (to the extent we talk about it at all). Part of the work involves the publication of handbooks developed by mental health professionals, but a big part of it is Ed giving talks at conferences. Last month he shared some feedback on Twitter:

So I got feedback from a conf a while back where I did a keynote. A few people said they felt like it wasn’t right for a tech conf. It was the only keynote. Some felt it wasn’t appropriate for a programming conf. Time could’ve been spent on stuff that’d help career. Tonight a guy from a company that sponsored the conf said one of team members is going to seek help for anxiety about work bc of my talk. That’s why I do it. Maybe it didn’t mean much to you, but there are lots of hurting, scared people who need help. Ones you don’t see.

Cate Huston had similar feedback from a talk she gave in 2016:

the speaker kept talking about useless things like feelings

The tech industry as a whole, and some areas more than others, likes to imagine that it is as cool and rational as the computers it works with. Conferences should be full of pure technology. And yet we bemoan the fact that so many of our community are real jerks to work with.

I have a solution: maybe your tech conference needs less technology. After all, the only reason anyone pays us to do this stuff is because it (theoretically) solves problems for human beings. I’m biased, but I think the USENIX LISA conference does a great job of this. LISA has three core areas: architecture, engineering, and culture. You could look at it this way: designing, implementing, and making it so people will help you the next time around.

Culture is more than just sitting around asking “how does this make you feeeeeeeel?” It includes things like how to avoid burnout and how to train the next generation of practitioners. It also, of course, includes how to not be an insensitive jerk who inflicts harm on others with no regard for the impact they cause.

I enjoy good technical content, but I find that over the course of a multi-day conference I don’t retain very much of it. For a few brief hours in 2011, I understood SELinux and I was all set to get it going at home and work. Then I attended a dozen other sessions and by the time I got home, I forgot all of the details. My notes helped, but it wasn’t the same. On the other hand, the cultural talks tend to be the ones that stick with me. I might not remember the details, but the general principles are lasting and actionable.

Every conference is different, but I like having one-third of content be not-tech as a general starting point. We’re all humans participating in these communities, and it serves no one to pretend we aren’t.

My 2016 in review

Well 2016 is over. Looking back on the previous year seems to be the in thing to do around now, and it sure beats coming up with original content, so let’s take a look at the past year.

Between this blog, Opensource.com, and The Next Platform, I published 102 articles in 2016. That doesn’t count blog posts, conference papers, marketing materials, and other things I wrote for work. Writing has certainly claimed a central part of my life, and I like that.

In 2016, I got to see my articles in print (thanks to the Open Source Yearbook). I started getting paid to contribute (I was even recruited for the role, which was a great stroke to my ego). I presented talks at two conferences and chaired sessions at two others (including one where I was the co-chair of the Invited Talks). My writing has given me the opportunity to meet and befriend some really awesome people. And of course, it has helped raise my own profile.

Blog Fiasco

Blog Fiasco had what is probably its best year in 2016. I was able to keep to my Monday/Friday posting schedule for much of the year. Only in May — when I was traveling extensively — did I have an extended stale period. I only published 78 articles here compared to 99 in 2015, but I also did more writing outside of this blog. With just over 8,000 views in 2016, traffic is up by about 5%. For contrast, my Opensource.com article on a bill working its way through the New York Senate had more views than all of Blog Fiasco.

Top 10 articles in 2016

These are the top 10 Blog Fiasco articles in 2016:

  1. Solving the CUPS “hpcups failed” error
  2. Reading is a basic tool in the living of a good life
  3. When your HP PSC 1200 All-in-One won’t print
  4. Fedora 24 upgrade
  5. Accessing Taleo from Mac or Linux
  6. A wrinkle with writing your resume in Markdown
  7. elementary misses the point
  8. Hints for using HTCondor’s credd and condor_store_cred
  9. Book review: The Visible Ops Handbook
  10. What do you want in a manager?

Top articles published in 2016

Here are the top 10 Blog Fiasco articles that I published in 2016.

  1. Fedora 24 upgrade
  2. Hints for using HTCondor’s credd and condor_store_cred
  3. What do you want in a manager?
  4. Product review: Divoom AuraBox
  5. A culture of overwork
  6. Disappearing WiFi with rt2800pci
  7. mPING and charging for free labor
  8. What3Words as a password generator
  9. My new year’s resolution
  10. left-pad exposed the real problem

So 2017 then?

I’m pleased to see that a few of my troubleshooting articles have had such a long and healthy life. I’m not sure what it means that the article I published on December 30th was the ninth-most viewed article of the year, but it certainly says something. This blog has never really been for anyone’s benefit but my own, as evidenced by the near-zero effort I’ve put into publicizing articles. In part due to having other, audience-established outlets for writing, Blog Fiasco has become a bit of a dumping ground for opinions and articles that don’t really fit on “real” sites. I’m okay with that.

Will I put more effort into promoting content in 2017? We’ll see. I think I’d rather spend that time writing in places that already have visibility. The monthly “where have I been writing when I haven’t been writing here?” posts will make it easy to find my work that doesn’t end up here.

On a personal note

Outside of my writing, 2016 has been a year. Lots of famous people died. Closer to home, it was a year with a lot of ups and downs. My own perception is that it was more down than up, but I think 2017 is heading in the right direction again. I’ll let you know in early 2018.

Professionally, I’ve changed positions. I left an operations management (but really, operations doing-ment) role to do technical marketing and evangelism. It was an unexpected change, but a hard-to-pass-up opportunity. I don’t regret the decision, except that it has changed what I thought my career trajectory was, and I haven’t yet figured out if I want to curve back that way at some point or if I want to continue down this (or another) path. I know better than to make specific plans, but I take comfort in having a vague target in mind.

And then of course, there’s stuff going on in the world at large. I try to avoid politics on this blog, but I’ll take a moment to say that the next few years are shaping up to be “interesting”. I have a lot of concerns about social and environmental protections that may cease to exist. Nationalist movements in the U.S. and Europe are gaining steam. I know that even if things get as bad as some fear, society will eventually recover (depending on what happens with climate change, “eventually” could be pretty long), but I also know that for some people it will really suck.

Whatever 2017 brings, I wish you health, happiness, and success, dear readers.

My new year’s resolution

I’m not usually one for making resolutions for the coming year. I know myself well enough to know that my resolve will wane pretty quickly. (I may be lazy, but at least I admit it!) But for 2017, I have decided to make one resolution.

I resolve to read. 

Not to read more books, blogs, magazines, etc., though I would like to do that. My resolution involves what I share. 2016 had many lessons for us, one of which is that it’s far too easy to share something that reinforces our existing views, even if that something happens to be totally false. Or even if the article is factually correct, the headline could be way off.

So in 2017, I will not share articles that I have not read. No more sharing based on the headline or the opening paragraph. I can’t independently fact check every article I read, but I’ll do my best to validate claims that seem too wild – or too good – to be true.

Does this mean I won’t share as much? Almost certainly. But it also means that what I share will be higher quality. I’d like to think people read my writing and follow me on Twitter for quality information, not just my stunning good looks and fiery hot takes.

As you consider your 2017 resolutions, I urge you to please join me in adopting this one for your own.

Airlines race to the bottom

A race to the bottom is rarely an attractive concept, particularly in a submarine or an airplane. And yet the airline industry seems to be dead set on racing to the bottom. Case in point: United announced the addition of a new “Basic Economy” fare tier. This tier does not permit use of the overhead bins and does not assign seats until the day of departure.

The cynical (and perhaps correct) view is that this is an opportunity to raise prices on tickets people would actually want to buy while keeping the “as low as!” price the same. But it’s also an attempt to compete with budget airlines like Spirit and Frontier, according to an industry source. Being able to match the low fares is “absolutely non-negotiable.”

I don’t have the benefit of seeing the financial models for this, but from an outside perspective, this seems like a bad move. Not all customers are created equal, and it damages your brand to go after the wrong market. Some customers will buy based solely on price, and if that’s who you want to go after, do it. But someone buying solely on price probably won’t be that loyal, so the minute your competitor drops prices, you’ve lost them.

Itemizing everything enables the customer to pay for exactly what they want. It also gives the impression they’re being nickeled and dimed. It’s much easier to just have the price than to add up all the line items. I find it amusing that no-frills carrier Southwest is the holdout for free checked luggage. (As an aside, I’ll probably never fly Frontier again because I find the notion of paying $40 to check a single bag insulting.)

I’m also curious to see how this affects behavior. By adopting checked bag fees, airlines incentivize passengers to push the limits of carry-ons. This slows down the boarding and deplaning process. Will this Basic Economy tier get people to shove everything into their personal item that’s just barely wedged under the seat in front of them? Will it lead to upset customers who didn’t pay attention trying to use an overhead bin they’re not entitled to?

Most likely, we’ll grumble about it and then end up buying the cheapest ticket anyway. That seems to be the pattern, so I suppose it makes sense for airlines to follow the customer. But maybe there’s room for one or two airlines to buck that trend.

What counts more for community: labels or actions?

I was recently in an argument on Twitter (I know, I know). The summary is that there was disagreement on whether stated party affiliation or cast votes were more indicative of the state of the body politic. We didn’t arrive at a consensus, but it got me thinking about open source communities.

Communities are notoriously difficult to pin down. Where are the boundaries? Is a person a member of a community when they (or someone else) decides to apply that label to them? Are they a member when they make some overt participation effort? Is it a mix of both?

In general, I tend to think that if it looks like a duck and quacks like a duck, there’s a pretty good chance it’s a duck. That is to say a person is a member of a community if they participate in the community, even if they don’t self-assign the label. For example, if someone considers themselves a political independent but they vote for Democratic candidates 80% of the time, they’re probably a Democrat. Similarly, someone who frequently answers questions in an open source project’s IRC channel or mailing list is a member of the contributor community, even if they don’t think they are (perhaps because they’ve never contributed code).

This isn’t to say that communities shouldn’t welcome people willing to self-assign membership. Unless someone has behaved in a way to warrant exclusion, they should be welcomed and encouraged to become active participants. That doesn’t necessarily mean giving them full access, though. I still consider myself a contributor to Fedora Documentation, even though I haven’t really made a contribution in a while. I still have commit access to the repo, but if someone decided to suspend that, I’d understand.

There’s not a good answer here. How you define a community is largely context-dependent. But it’s worth considering how we define the boundary.

The AWS/VMWare partnership

Disclosures: My employer is an AWS partner. This post is solely my personal opinion and does not represent the opinion of my employer or AWS. I have no knowledge of this partnership beyond what has been publicly announced. I also own a small number of shares of Amazon stock.

Last week, Amazon Web Services (AWS) and VMWare announced a partnership that would make AWS the preferred cloud solution for VMWare. AWS will provide a separate set of hardware running VMWare’s software managed by VMWare staff. Customers can then provision a VMWare environment from that pool that looks the same as an internal data center.

As others have pointed out, this is essentially a colocation service that just happens to be run by Amazon. I share that view of it, but I don’t take the view that AWS blinked. It’s true that AWS has eschewed hybrid cloud in favor of pure cloud offerings, and they’ve done quite well with that strategy.

I don’t think the market particularly cares about purity, nor do I think the message will get muddled. Here’s how I see this deal: VMWare sees people moving stuff to the cloud and they know that the more that trend continues, the smaller their market becomes. Meanwhile AWS is printing money but is aware of the opportunity to print more. Microsoft Azure, despite having an easy answer for hybrid, doesn’t seem to be a real threat to AWS at the moment.

But I don’t think AWS leadership is stupid or complacent, and this deal represents a low-risk, high-reward opportunity for them. With this partnership, AWS now has an entry into organizations that have previously been cloud-averse. Organizations can dip their toes into “cloud” without having to re-tool (although this is not the best long-term strategy, as @cloud_opinion points out). As the organization becomes comfortable with the version of the cloud they’re using, it becomes easier for AWS sales reps to talk them into moving various parts to AWS proper.

Now I don’t mean to imply that AWS is a sheep in wolf’s clothing here. This deal seems mutually beneficial. VMWare is going to face a shrinking market over time. With this deal, they at least get to buy themselves some time. For AWS, it’s more of a long game, and they can put as much or as little into this partnership as they want. For both companies, it’s a good argument to prevent customers from switching to Microsoft’s offerings.

What will be most interesting is to see if Google Cloud, the other major infrastructure-as-a-service (IaaS) provider, will respond. Google’s strategy, up until about a year ago, seemed to be “we’re Google, of course people will use us”. That has worked fairly well for startups, but it has gained very little traction in the enterprise. Google can continue to be more technically-focused, but that will hinder their ability to get into major corporations (especially those outside of the tech industry).

I don’t see that there’s a natural fit at this point (though I also wouldn’t have expected AWS and VMWare to pair up, so what do I know?). One interesting option would be for Google to buy Red Hat (disclosure: I also own a few shares of Red Hat) and make OpenShift its hybrid solution. I don’t see that happening, though, as it doesn’t seem like the right move for either company.

The VMWare-on-AWS offering will not be generally available until sometime next year, so we have a little bit of time before we can see how it plays out.