Why I choose the licenses I do

In the discussion around Richard Stallman’s speech at Purdue and my subsequent blog post about it, there was a great focus on the philosophy of various licenses. On Twitter, the point was raised that the GPL restricts the freedom of those making derivative works, the implication being that it imposes selfish restrictions. As a response to that (and because I told “MissMorphine” that I would write on this), I thought it would be a good idea to write about my own license choices and the personal philosophy behind them. Never mind the fact that it’s five years late.

First, my general philosophy. I am a proponent of the free sharing of information, knowledge, and software. I’m also aware that producing these things requires effort and resources, so I’m sympathetic to people who prefer to receive compensation. For myself, I generally opt to make things available. In exchange, I ask that people who would build off my work not restrict the freedoms of others. In order to do that, I must restrict the freedom to restrict freedoms. This is the choice I make for my own work.

Open source licenses (and I use the term broadly here to include the Creative Commons licenses and similar) maximize freedom in one of two ways: they maximize freedom for the next person downstream or they maximize freedom at every level downstream. It shouldn’t be a surprise to learn that the former is often favored by developers and particularly by commercial entities, as it allows open source code to be used in closed products.

My default licenses are the GNU General Public License version 2 for code and Creative Commons Attribution-ShareAlike 4.0 for non-code. The gist (and to be clear, I’m hand-waving a lot of detail here) of these licenses is “do what you want with my work, but give me credit and let others have the same rights.” However, I’ve licensed work under different licenses when there’s a case for it (e.g. contributing to a project or ecosystem that has a different license).

The important thing to remember when choosing a license is to pick one that matches your goals.


Why can’t I develop software the right way?

It’s not because I’m dumb (or not just because I’m dumb).

Readers of this blog will know that I have no shortage of opinions. I know what I think is the Right Way™ to develop software, but I have a hard time doing it when I’m a solo contributor. I’ve never written unit tests for a project where I’m the only (or primary) contributor. I’m sloppy with my git branching and commits. I leave myself fewer code comments and write less documentation. This is true even when I expect others may see it.

A few months ago, I was at the regular coffee meetup of my local tech group. We were talking about this phenomenon. I blurted out something that, upon further reflection, I’m particularly fond of.

Testing is a cultural thing and you can’t develop a culture with one person.

Okay, maybe you can develop a culture with one person, but I can’t. One could argue that a one-person culture is called a “habit”, and I’m open to that argument. But I would counter that they’re not the same thing. A habit is “a settled tendency or usual manner of behavior”, according to my friends at Merriam-Webster. In other words, it’s something you do. A culture, in my view, is a collective set of habits, but also mores. Culture defines not just what is done, but what is right.

Culture relies on peer pressure (and boss pressure, and customer pressure, etc.) to enforce the norms of the group. Since much of the Right Way™ in software development (or any practice) is not the easy way, this cultural pressure helps group members develop the habit. When left alone, it’s too easy to take the short-term gains (e.g. not going to the effort of writing tests) even if that involves long-term pain (yay regressions!).

If I wrote code regularly at work, or as a part of an open source project that has already developed a culture of testing, I’m sure I’d pick up the habit. Until then, I’ll focus on leading thoughts and let “mostly works” be my standard for code.

“Oh crap, I have to manage”

The logical extension of what I wrote above is that code testing and other excellent-but-also-no-fun practices will not spontaneously develop in a group. Once the culture includes the practice in question, it will be (to a large degree) self-perpetuating. Until then, someone has to apply pressure to the group. This often falls to a technical lead who is probably not particularly fond of putting on the boss hat to say “hey, you need to write tests or this patch doesn’t get accepted”.

Too bad. Do it anyway. That’s one of the key features of project leadership: developing the culture so that the Right Way™ becomes the way things are done.

Climatune: correlating weather and Spotify playlists

I don’t remember how I stumbled upon it, but I recently discovered Climatune, a joint effort between Spotify and Accuweather that presents a playlist to match the current weather. According to Accuweather’s blog post on the topic, the playlists were developed based on an analysis of 85 million streams across 900 cities. This is an incredibly interesting project, even if the Lafayette playlists don’t seem to vary much.

What I like about projects such as Climatune is the reminder that we are still animals affected by our surroundings. When I worked at McDonald’s, we noticed anecdotally that sales of the Filet-o-Fish apparently increased on rainy days. I regret that I never ran daily sales reports to test this observation. I suppose in either case, there was an effect. Either customers ordered more, or we were more aware of the sales.

Correlating weather with other data is hardly a new concept. The most common example is probably comparing Chicago crime reports to the temperature. But researchers have investigated mood-weather correlation from Twitter posts. Several studies have examined the effects of weather on the stock market.

These sorts of studies are tricky, since it’s hard to control for all of the possible factors. But even if we can’t draw statistically sound conclusions, it’s fun to look at the connections. And if weather isn’t your thing, Spotify also has a site that makes a custom playlist that fits the cook time of your Thanksgiving turkey.

Your crappy UI could be lethal

The tech industry embraces (some might say “fetishizes”) the philosophy of “move fast and break things”. Mistakes are common and celebrated, and we don’t always consider the effects they have. Autonomous cars may help bring that to the forefront, but many dangerous effects already exist. This is a story of how a UI decision can lead to dangerous consequences.

A friend of mine has a daughter (we’ll call her “Edith”) who has diabetes. Like many people with the disease, she checks her blood sugar and takes an appropriate dose of insulin to regulate it. One night a few months ago, Edith checked her blood sugar before dinner. The meter read “HI” (over 600 mg/dL), so even though she felt fine, Edith gave herself 15 units of insulin.

As she ate, Edith began feeling shaky. She re-checked her sugar: 85 mg/dL. It was then that my friend realized “HI” was actually “81” and had Edith disconnect her insulin pump. Severe hypoglycemia can lead to a coma or worse. Had Edith been alone, misreading the meter could have resulted in getting the full dose of insulin from the pump, which could have caused a dangerously low blood sugar level.

How could this have been prevented? Using the word “high” instead of “hi” perhaps. Or any other unambiguous message. If it’s the “digital clock” style screen, have the elements race around the edges. Or put a plus sign and have it read “600+” with the plus sign blinking in a demonic color. Whatever. So long as it’s not easy to misread (especially if the user wears glasses but does not have them on, as an example).
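As an illustration of the point (a hypothetical sketch, not the meter’s actual firmware), an out-of-range reading can be rendered so that it simply cannot be confused with a number:

```python
def format_reading(mg_dl, max_reading=600):
    """Format a glucose reading so out-of-range values can't be
    mistaken for digits (e.g. "HI" misread as "81")."""
    if mg_dl > max_reading:
        # A full word plus a plus sign, never a two-letter code
        return f"{max_reading}+ HIGH"
    return str(mg_dl)
```

The specific strings are arbitrary; the design rule is that the error state shares no glyph shapes with a valid numeric reading.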

Unambiguous UIs are always good. When it comes to medical devices, unambiguous UIs are critical. If misunderstood, the consequences can be lethal.

P.S. Happy birthday, “Edith”!

A quick look at the Weather Research and Forecasting Innovation Act

When I first heard that Congress had passed a bill it called the “Weather Research and Forecasting Innovation Act”, I was a little concerned. The majority has a history of being hostile to science (and one former senator was hostile to the National Weather Service in particular), and the title of the bill just screams “legislative doublespeak”. But I’ve skimmed the bill and there doesn’t seem to be much to object to.

Much of the bill is of a “duh, we’re already working on that” nature; it requires the National Weather Service to conduct research to improve forecasts and warnings. One thing I liked is that it specifically called out communication of forecasts and warnings as an area of improvement. It requires the current system to be examined, with necessary changes made where the current system is unsatisfactory. The current watch/warning/advisory system leaves a lot to be desired.

This bill is probably most notable for what it doesn’t say. On the positive side, it does not prescribe specific metrics (e.g. “the average lead time for a tornado warning must be 60 minutes”). It seems clear that meteorological experts were consulted for this bill.

On the other hand, the only time “climate” is mentioned is when the full name of the COSMIC satellite program is given. There’s nothing in the bill that specifically precludes the NWS from conducting research related to climate change, but I couldn’t help reading into the stated focus on short-term and seasonal weather. Legislation isn’t written in a vacuum, and the plain fact is that the current administration and Congress aren’t big supporters of climate change research or mitigation.

The main area of concern for me is the budget. A few programs have specific dollar amounts assigned, but it’s not clear to me if those are new appropriations or a directive to use the existing budget to that purpose. Certainly the main budget will have an impact on how this bill, should it be signed, is implemented. Given the initial reports about the Trump administration’s first budget, I remain solidly pessimistic. But returning to the provisions of this specific bill, it requires a lot of reporting, much of which appears to be new. The reporting, and even the substantive efforts, could end up being an unfunded mandate.

I can’t predict what the outcome of this bill will be. It got bipartisan support and I haven’t heard of any major complaints from my friends within the Weather Service. That in itself is encouraging; it should at least do no harm. If backed with sufficient funding, this may lead to improved forecasts for a variety of weather hazards. This, of course, is the stated mission of the National Weather Service: to protect life and property.

The most dangerous part of storm chasing is the road

People who have never gone storm chasing don’t always believe me when I say it’s a very boring hobby. Hours of driving can lead to…steady rain. Or blue skies. Or any number of outcomes that were probably not worth the time and money invested. They see movies or “reality” TV shows and assume it’s constantly a dangerous, edge-of-your-seat thrill ride.

Well, it is dangerous, as we were reminded last week. Three chasers were killed in an automobile accident after one driver apparently ran a stop sign and hit another vehicle. I had never heard of the driver in question, so I can’t begin to speculate about his approach. Chances are he was a safe and conscientious driver most of the time. But it only takes one time. In the picture I saw that purports to be the fatal intersection, the stop sign is several feet away from the road. It’s easy to miss if something else grabs your attention.

Any chaser without a “close call” story is full of it (or just has a really bad memory). Distracted driving is dangerous, and chasing — especially near the storm — is an exercise in distracted driving. Maps, radar, radios, storm structure, cameras. All of these things compete for attention, but still you have to watch the road. Even with someone in the passenger seat, it can be hard to focus on the task at hand.

Truth be told, I’m surprised there haven’t been more deaths due to road accidents. The tornado isn’t the dangerous part.

Other writing in March 2017

Where have I been writing when I haven’t been writing here?

The Next Platform

I’m freelancing for The Next Platform as a contributing author. Here are the articles I wrote last month:

  • Solving HPC conflicts with containers – I finally understand the benefits that containers bring to HPC specifically.
  • Apache Kafka gives large-scale image processing a boost – Modern camera technology can produce images too large for a single machine to process in real time. This is where software solutions come in handy.
  • Peering through opaque HPC benchmarks – If Xzibit worked in the HPC field, he might be heard to say “I heard you like computers, so we modeled a computer with your computer so you can simulate your simulations.” But simulating the performance of HPC applications is more than just recursion for comedic effect; it provides a key mechanism for the study and prediction of application behavior under different scenarios.
  • Serving up serverless science – “Serverless” is a big deal in the general-purpose cloud, but can it work for scientific computing?
  • Increasing HPC utilization with meta-queues – HPC resources tend to favor very large jobs. One researcher found a way to get better treatment for small runs.


Opensource.com

Over on Opensource.com, we had our 6th consecutive million-page-view month. This is getting to be old news. I wrote the articles below.

Also, the 2016 Open Source Yearbook is now available. You can get a free PDF download now or buy the print version at cost. Or you can do both!

Cycle Computing

Meanwhile, I wrote or edited a few things for work, too:

When are you done testing?

On my way to the airport a few months ago, I happened to catch an interesting story on NPR. Tom’s of Maine was looking to remove fossil fuels from their deodorant. After developing what seemed to be a good formula in the lab, they sent it out to customers for real-world testing. When the testers reported back positively, the company released the new formula. That’s when they started getting complaints. It turns out that the new formula had problems in warm weather. All of the testers were in New England and tested it during the winter months.

The folks at Tom’s of Maine certainly thought they were giving their product a thorough test. That they didn’t was a failure of imagination. Perhaps the most valuable skill I’ve gained in my career is the ability to imagine more possible ways something can fail. Indeed, I consider that the hallmark of “senior” status. I don’t consider myself particularly excellent in this regard, just better than I was a decade ago. Much of the ability to imagine failures comes from getting burned by not anticipating what can go wrong.

That’s what makes quality assurance a challenging (and underrated) discipline. It’s more than just trying some things and seeing what breaks. Good QA first requires identifying the entire universe of possible failures and then designing tests to make sure the outcome in each of those cases is the desired one.

When are you done testing? Hopefully when you’ve exhausted the space of possible conditions. In reality, it’s closer to when you’ve exhausted the space of likely conditions that are worth handling. Excluding conditions you don’t care about (e.g. not testing your software on end-of-life operating systems because you’re not going to provide support for those platforms) is a great way to shrink the universe.
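One way to picture that shrinking universe (a toy sketch, with made-up condition names, not a real QA framework) is to enumerate the conditions you care about as a cross product and explicitly exclude the combinations you’ve ruled out of scope:

```python
from itertools import product

platforms = ["linux", "macos", "windows"]
locales = ["en_US", "de_DE"]
# Combinations declared out of scope, rather than silently untested
unsupported = {("windows", "de_DE")}

def cases():
    """Yield every in-scope combination of test conditions."""
    for combo in product(platforms, locales):
        if combo not in unsupported:
            yield combo

# Each yielded case would drive one concrete test run.
test_matrix = list(cases())
```

The point isn’t the mechanics; it’s that exclusions are written down, so “we don’t test that” is a decision rather than an accident.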

The irony of automation

Automation is touted – often rightly – as a savior. It saves time, it saves effort, it saves money, and it saves lives. Except when it doesn’t. A while back, I read a two-part post about how a mistake with an automated pharmacy system led to a 38x overdose. It’s not that the system itself made a mistake, but it enabled the medical professionals to make a mistake that they’d never have made in a pen-and-paper system.

This story has two ultimate lessons. First, modes are dangerous in user interfaces, because they are easy to overlook and can lead to incredibly different outcomes. In this story, had the dosage input always required either the total dosage or the dosage per patient weight, this would have never happened. Allowing either makes it easy to make a lethal mistake. Perhaps a better option would be to have an optional popup that calculates the dosage from the per-weight dosage and the patient weight. That retains the convenience of being able to prescribe the dosage either way, while making it explicitly obvious which way is being used.
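To make the idea concrete, here’s a minimal sketch (hypothetical names, not the actual pharmacy system) of forcing every dose entry to carry its unit, so total and per-weight dosing can never be silently conflated:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dose:
    """A dose that always carries its unit, so "mg" (total) and
    "mg/kg" (per patient weight) can't be mixed up by mode."""
    amount: float
    unit: str  # "mg" or "mg/kg"

def total_mg(dose, weight_kg):
    """Resolve a dose to total milligrams, refusing ambiguous input."""
    if dose.unit == "mg":
        return dose.amount
    if dose.unit == "mg/kg":
        return dose.amount * weight_kg
    raise ValueError(f"unknown dose unit: {dose.unit}")
```

Because the unit travels with the number instead of living in a UI mode, the ambiguity is resolved at the point of entry rather than at the point of harm.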

The second lesson is that it’s important for experts with specialized knowledge to apply that to their use of automation. When something doesn’t seem right, it’s easy to find ways to explain it away, especially if the automation is reliable. But “that doesn’t seem right” must remain a feeling we pay attention to.

Giving up is the better part of valor

There’s a lot to be said for sticking with a problem until it is solved. The tricky part can be knowing when it is solved enough. I’ve been known to chase down an issue until I can explain every aspect of it. The very existence of the problem is a personal insult that will not be satisfied as long as the problem continues to insist on being.

That approach has served me well over the years, even if it can be annoying sometimes. I’ve learned by chasing these problems to their ultimate resolution. Sometimes it even reveals conditions that can be solved before they become problems.

But as with anything, there’s a tradeoff to be made. The time it takes to run down a problem is time not spent elsewhere. It’s a matter of prioritization. Does having it 80% fixed do what you need? Can you just ignore the problem and move on?

A while back, I was trying to get a small script to build in our build system at work. For whatever reason, the build itself would work, but using the resulting executable to upload itself to Amazon S3 failed with an indecipherable (to me at least) Windows library error. It made no sense to me. This was a workflow I had used on a local virtual machine dozens of times. And it worked if I executed the build artifact by hand. Just not in the build script.

I spent probably a few hours working on solutions. But no matter what I tried, I made no progress. When I got to the point where I had exhausted all of the reasonable approaches I could think of, I implemented a workaround and moved on to something else.

It can be hard to know when to give up. Leaving a problem unsolved might come back to bite you later. But what else could you be doing if you’re not spinning your wheels?