SCaLE 17x

Last week, I attended the 17th annual Southern California Linux Expo (SCaLE 17x). SCaLE is a conference that I’ve wanted to go to for years, so I’m glad I finally made it. Located at the Pasadena Convention Center, it’s a short walk from nearby hotels, restaurants, and a huge independent bookstore. Plus, the weather in Southern California almost always beats Indiana’s — particularly in March.

Having done this a few times before, the SCaLE organizers know how to put on a good event. Code of Conduct information, including contacts, is prominently posted right as you walk in the door. Staff walk around in t-shirts that sport the WiFi information. The break between sessions is 30 minutes, which allows ample time to get from one talk to another without having to brush people aside if you meet them in the hallway. It was an incredibly well-run conference.

I ended up in the “mentoring” track most of the weekend, which I suppose indicates where I am at this point in my career. “Mentoring” may not be the right word, though. The talks in that room covered being a community organizer, developer advocacy, and a lot about mental health. Quite a bit about mental health, in fact. It’s probably a good thing that we’re discussing these topics more openly at conferences.

The talk that stuck with me the most, though, was one I saw on Sunday afternoon. Bradley Kuhn wondered “if open source isn’t sustainable, maybe free software is.” Bradley compared the budgets and the output of large corporate-backed foundations and smaller projects like phpMyAdmin. I’ll go deeper on that later, either when I recap the Open Source Leadership Summit or in a standalone post.

Bradley also used an “It’s a Wonderful Life” analogy, which is very much my kind of analogy. This may become a longer post at some point, but the general idea is that we have a lot of Sam Wainwrights in the world: people who are willing to throw money at a problem (perhaps with strings attached). Despite being well-meaning, they’re not actually doing that much to help. What we need is more George Baileys: people doing the small but critical work in their communities to help them thrive.

SCaLE was a terrific conference, and I’m looking forward to going back in the future. Especially now that I’ve learned my way around the food scene a little bit.

CopyleftConf was great, you should go next year

Two weeks ago, I was fortunate to attend the inaugural Copyleft Conference. It was held in Brussels, Belgium the day after FOSDEM. Since I was in town anyway, I figured I should just extend my trip by a day to attend this conference. I couldn’t be happier that I did.

Software licensing doesn’t get as much discussion at conferences as it probably should. And among the licensing talks that do happen, copyleft licenses specifically get only a portion of the attention. But with major projects like the Linux kernel using copyleft licenses — and the importance of copyleft principles to open source software generally — the Software Freedom Conservancy decided that a dedicated conference was in order.

I was impressed with how well-organized and well-attended the conference was for a first try. The venue was excellent, apart from some acoustic issues in the main room. The schedule was terrific: three rooms all day, each filled with talks from the world’s leading experts. I commented to a friend that if the building were to collapse, 80% of the world’s copyleft expertise would disappear.

For me, some of the excitement was just being around all of those people.

Molly deBlanc’s keynote was simultaneously inspiring and disturbing. She spoke of how software freedom matters to everyone, but how it matters to marginalized people in different ways. Ad networks can expose that someone at risk is seeking help. “Smart” homes can be used by domestic abusers to torment their victims. The transparency that free software brings isn’t just a nice-to-have, it can materially impact people’s lives.

The other session that was particularly interesting to me was Chris Lamb’s discussion of the Commons Clause. Chris was more focused on the response of the community to Redis Labs’ decision to adopt it than the Commons Clause itself. He viewed Redis Labs’ decision to adopt and subsequent refusal to abandon the Commons Clause as a failure of the copyleft community to make a compelling argument. Drawing on the work of Aristotle, Chris argued that we, as interested and knowledgeable parties, should have done a better job making our case. The question, of course, is who the “we” is that Chris is exhorting. This is a particularly key question for his advice to proactively address the concerns of companies.

Some of the other talks focused more directly on adapting to a new environment. Version 3 of the GNU General Public License was published in 2007. At the time, Amazon Web Services (as we currently know it) was just over a year old. The original iPhone was released on the same day. While the principles behind the GPLv3 haven’t changed, the reality of how we use software has changed dramatically. Van Lindberg’s talk on a new license he’s drafting for a client explored what copyleft looks like in 2019. And Alexios Zavras noted that the requirements to provide source code don’t necessarily apply as-written anymore.

In addition to meeting some new friends and idols, I was also able to spend some time with friends that I don’t get to see often enough. I’m already looking forward to CopyleftConf 2020.

What’s the future of Linux distributions?

“Distros don’t matter anymore” is a bold statement for someone who is paid to work on a Linux distro to make. Fortunately, I’m not making that statement. At least not exactly.

Distros still matter. But it’s fair to say that they matter in a different way than they did in the past. Like lava in a video game, abstractions slowly but inexorably move up the stack. For effectively the entirety of their existence, Linux distributions have focused on producing operating systems (OSes) with some userspace applications. But the operating system is changing.

For one, OS developers have been watching each other work and taking inspiration for improvement. Windows is not macOS is not Linux, but they all take what they see as the “best” features of others and try to incorporate them. And with things like Windows Subsystem for Linux, the lines are blurring.

Applications are helping in this regard, too. Not everything is written in C and C++ anymore. Many applications are being developed in languages like Python, Ruby, and Java, where the application developer mostly doesn’t have to care about the OS. Which means the user doesn’t either. And of course, so much of what the average user does on their computer runs out of the web browser these days. The vast majority of my daily computer usage can be done on any modern OS, including Android.
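To make that point concrete, here’s a tiny Python sketch (the file name and note text are invented for the example). Nothing in it mentions a specific operating system; the standard library papers over the differences in paths and home directories, which is exactly why neither the developer nor the user has to think much about what’s underneath.

```python
from pathlib import Path


def save_note(text: str) -> Path:
    """Append a note to a file in the user's home directory.

    Path.home() and the / operator resolve correctly on Linux, macOS,
    and Windows, so this code never has to ask which OS it's running on.
    """
    note_file = Path.home() / "notes.txt"
    with note_file.open("a", encoding="utf-8") as f:
        f.write(text + "\n")
    return note_file


if __name__ == "__main__":
    print(f"Wrote note to {save_note('hello from any OS')}")
```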

With the importance of the operating system itself diminishing, distros can choose to either remain unchanged and watch their importance diminish or they can evolve to add new relevance.

This is all background for many conversations and presentations I heard earlier this month at the FOSDEM conference in Brussels. The first day of FOSDEM I spent mostly in the Fedora booth. The second day I was working the distro dev room. Both days had a lot of conversations about how distros can stay relevant — not in those words, but certainly in spirit.

The main theme was the idea of changing how the OS is managed and updated. The idea of the OS state as a git tree is interesting. Fedora’s Silverblue desktop and openSUSE Kubic are two leading examples.
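To give a rough sense of what “OS state as a git tree” means, here’s a toy Python sketch. It only illustrates the content-addressing idea behind tools like OSTree (which Silverblue and Kubic build on), not how they’re actually implemented: every file is identified by the hash of its contents, a deployment is just a manifest of those hashes, an upgrade only needs to fetch the objects whose hashes changed, and a rollback means pointing back at the previous manifest.

```python
import hashlib
from pathlib import Path


def snapshot(root: Path) -> dict:
    """Build a content-addressed manifest: relative path -> SHA-256 of contents."""
    manifest = {}
    for path in sorted(root.rglob("*")):
        if path.is_file():
            manifest[str(path.relative_to(root))] = hashlib.sha256(
                path.read_bytes()
            ).hexdigest()
    return manifest


def changed_files(old: dict, new: dict) -> set:
    """Only paths whose content hash differs need to be fetched for an 'upgrade'."""
    return {path for path, digest in new.items() if old.get(path) != digest}


if __name__ == "__main__":
    # Snapshot the current directory twice as a stand-in; in a real system the
    # second manifest would describe the new OS release.
    before = snapshot(Path("."))
    after = snapshot(Path("."))
    print(f"{len(before)} files tracked, {len(changed_files(before, after))} changed")
```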

So is this the future of Linux distributions? I don’t know. What I do know is that distributions must change to keep up with the world. This change should be in a way that makes the distro more obviously valuable to users.

Can your bug tracker and support tickets be the same system?

I often see developers, both open source and proprietary, struggle with trying to use bug trackers as support tools (or sometimes support tools as bug trackers). I can see the appeal, since support issues often tie back to bugs and it’s simpler to have one thing than two. But the problem is that they’re really not the same thing, and I’m not sure there’s a tool that does both well.

In 2014 (which is when I originally added this idea to my to-do list according to Trello), Daniel Pocock wrote a blog post that addresses this issue. Daniel examined several different tools in this space and looked at trends and open questions.

My own opinions are colored by a few different things. First, I think about a former employer. The company originally used FogBugz for both bug tracking and customer support (via email). By the time I joined, the developers had largely moved off FogBugz for bug tracking, which left the support team using a tool designed primarily as a bug tracker. Since customers mostly interacted with us via email, it didn’t particularly matter what the system was.

On the other hand, because it was designed as a bug tracker, it lacked some of the features we wanted from a customer support tool. Customers couldn’t log in and view dashboards, so we had to manually build the reports they wanted and send them via email. And we couldn’t easily build a knowledge base into it, which reduced the ability for customers to get answers themselves more quickly. Shortly before I changed jobs, we began the process of moving to ZenDesk, which provided the features we needed.

The other experience that drove this was serving as a “bug concierge” on an open source project I used to be active in. Most of the user support happened via mailing list, and occasionally a legitimate bug would be discovered. The project’s Trac instance required an account, and accounts had to be created by the project team. Since I already had an account, I’d often file bugs on behalf of people. I also filed bugs in Fedora’s Bugzilla instance when the issue was with the Fedora package specifically.

What I took away from these experiences is that bug trackers that are useful to developers are rarely useful to end users. Developers (or their managers) benefit from having a lot of metadata that can be used to filter and report on issues. But a large number of fields to fill in can overwhelm users. They want to be able to say what’s wrong and be told how to fix it.

For a single tool to work as both a bug tracker and a ticket system, the developer-oriented metadata should probably be hidden from end users. But the better solution is probably two separate tools that integrate with each other.
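As a sketch of what “integrate with each other” could look like, here’s some hypothetical Python glue that escalates a support ticket into a GitHub issue. The repository name, token variable, and ticket fields are all invented for the example, but the issues API call is the standard one. The customer keeps working in the support tool, while developers get an ordinary issue carrying only the metadata they care about.

```python
import os

import requests

# Hypothetical escalation glue: when a support agent flags a ticket as a real
# bug, create a linked issue in the developers' GitHub repository. The repo
# name, token variable, and ticket fields are illustrative only.
GITHUB_REPO = "example-org/example-project"
GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]


def escalate_to_github(ticket: dict) -> str:
    """Create a GitHub issue from a support ticket and return the issue URL."""
    response = requests.post(
        f"https://api.github.com/repos/{GITHUB_REPO}/issues",
        headers={
            "Authorization": f"token {GITHUB_TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        json={
            "title": ticket["summary"],
            # Keep the support ticket ID in the body so the two systems stay
            # cross-referenced, but leave customer details in the support tool
            # where they belong.
            "body": f"Escalated from support ticket #{ticket['id']}.\n\n"
                    f"{ticket['description']}",
            "labels": ["from-support"],
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["html_url"]


if __name__ == "__main__":
    url = escalate_to_github(
        {"id": 1234, "summary": "Export fails on large files", "description": "Steps to reproduce..."}
    )
    print(f"Filed {url}")
```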

GitHub’s new status feature

Two weeks ago, GitHub added a new feature for all users: the ability to set a status. I’m in favor of this. First, it appeals to my AOL Instant Messenger nostalgia. Second, I think it provides a valuable context for open source projects. It allows maintainers to say “hey, I’m not going to be very responsive for a bit”. In theory, this should let people filing issues and pull requests not get so angry if they don’t get a quick response.

Jessie Frazelle described it as the “cure for open source guilt”.

I was surprised at the amount of blowback this got. (See, for example, the replies to Nat Friedman’s tweet.) Some of the responses are of the dumb “oh noes you’re turning GitHub into a social media platform. It should be about the code!” variety. To those people I say “fine, don’t use this feature.” Others raise a point about not wanting to advertise being on vacation.

I’m sympathetic to that. I’m generally pretty quiet about the house being empty on public or public-ish platforms; announcing an empty house is a good way to advertise yourself to vandals and thieves. To be honest, I’m more worried about something like Nextdoor, where the users are all local, than GitHub, where anyone who cares is probably a long way away. Nonetheless, it’s a valid concern, especially for people with a higher profile.

I agree with Peter that it’s not wise to set expectations for maintainers to share their private details. That said, I do think it’s helpful for maintainers to let their communities know what to expect from them. There are many reasons that someone might need to step away from their project for a week or several. A simple “I’m busy with other stuff and will check back in on February 30th” or something to that effect would accomplish the goal of setting community expectations without being too revelatory.

The success of this feature will rely on users making smart decisions about what they choose to reveal. That’s not always a great bet, but it does give people some control over the impact. The real question will be: how much do people respect it?

Inclusion is a necessary part of good coding

Too often I see comments like “some people would rather focus on inclusion than write good code.” Not only is that a false dichotomy, but it completely misrepresents the relationship between the two. Inclusion doesn’t come at the cost of good code, it’s a necessary part of good code.

We don’t write code for the sake of writing code. We write code for people to use it in some way. This means the code needs to work for those people. In order to do that, the people designing and implementing the technology need to consider different experiences. The best way to do that is to have people with different experiences on the team. As my 7th grade algebra teacher was fond of reminding us: garbage in, garbage out.

But it’s not like the tech industry has a history of bad decision making. Or soap dispensers not working for people with dark skin. Or identifying black people as gorillas. Or voice recognition not responding to female voices. What could go wrong with automatically labeling “suspicious” people?

I’ll grant that not all of these issues are with the code itself. In fact, a lot of it isn’t the code; it’s the inputs given to the code. So when I talk about “good coding” here, I’m being very loose with the wording, as a shorthand for the technology we produce in general. The point holds because the code doesn’t exist in a vacuum.

It’s not just about the outputs and real-world effects of what we make. There’s also the matter of wanting to build the best team. Inclusion opens you up to a broader base of talent that might otherwise self-select out.

Being inclusive takes effort. It sometimes requires self-examination and facing unpleasant truths. But it makes you a better person, and if you don’t care about that, it makes your technology better, too.

Picking communication tools for your community

Communication is key to the success of any project. The tools we use to communicate play a part in how effective our communication is. Recent discussions in Fedora and other projects have made me consider what tool selection looks like. Should Discourse replace mailing lists? Should Telegram replace IRC? I’m not going to answer those questions.

There’s no one right tool, just a set of considerations to think about in selecting communications tooling. Each community needs to arrive at a consensus about what works best for their workflow and culture, and keep in mind that the decision may attract some contributors while driving others away.

In this post, I’m going to broadly lump tools into two categories: synchronous and asynchronous. Many tools can be used for both to a decent approximation, but most will pretty obviously fall into one category or the other. Picking one tool to rule them all is a valid option, but be aware that it immediately favors one category of communication over the other. And keep in mind that for large projects, some sub-teams may choose different platforms. That’s fine so long as people who want to participate know where to look.

Considerations for all tools

Self-hosted or externally-hosted. Do you have the resources to maintain the tool? If you do, that’s a way to save money and maintain control, but it’s also time that your community members can’t spend working on whatever your community is doing. Externally-hosted tooling (either free or paid) might give you less flexibility, but it can also be more isolated from internal infrastructure outages.

Open source or proprietary. This is entirely a value judgement for your community. For some communities, anything that’s not open source is a non-starter. Others might not care at all one way or another. Most will fall somewhere in between.

Federated or centralized. Can the community connect their own tools together (as with email), or is it a centralized system (like most social media platforms)? The trend is definitely toward centralized systems these days, so you may have to work harder to find a federated system that meets your needs.

Public or private. Can outsiders see what you’re saying? For many open source projects, public visibility is important. But even in those communities, some conversations may need to take place in private or semi-private.

Archived or ephemeral. Do you want to be able to go back and see what was said last month, last year, or last decade? Some conversations aren’t worth keeping, but records of important decisions probably are. Does your tool allow you to meet your archival needs?

Considerations for synchronous tools

Sometimes you really need to talk to people in real time.

Mobile experience. It’s 2019. People do a lot on their phones, especially if their contribution to your community happens during their workday or if they travel frequently. What is the mobile experience like for the tools you’re evaluating? It’s not just a matter of whether clients exist, but what the whole experience is like. If they disconnect while on an airplane, do they lose all the messages that were sent in their absence?

Status and alerting. What happens if someone stays logged in and goes away for a little bit? Do they have the ability to suppress notifications? Is there any way to let others know “I’m away or busy, don’t expect an immediate reply”?

Audio, video, and screen sharing. Sometimes you need the high-bandwidth modes of communication in order to get your full message across (or just shortcut a lot of back-and-forth). Does the tool you’re looking at provide this? Is it usable for those who can’t participate due to bandwidth or other constraints?

Integrations. Can you display GIFs? The ability to speak entirely in animated images can be either a feature or a bug, depending on the community’s culture. But if it’s important one way or another, you’ll want to make sure your tool matches your needs. Of course, there are other integrations that might matter, too. Can your build system post alerts? Does the tool automatically recognize certain links and display them in a particular manner?
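As a small example of the build-system case, here’s a hedged Python sketch that posts a build result to a chat tool’s incoming webhook. The environment variable and the payload shape are assumptions; most chat platforms offer something along these lines, but the exact format depends on the tool your community picks.

```python
import json
import os
import urllib.request

# Hypothetical CI integration: post a build result into a chat channel via an
# incoming-webhook URL. The variable name and payload format are placeholders;
# adjust them for whichever chat tool your community uses.
WEBHOOK_URL = os.environ["CHAT_WEBHOOK_URL"]


def announce_build(project: str, passed: bool, build_url: str) -> None:
    """Send a one-line build status message to the chat webhook."""
    status = "passed" if passed else "FAILED"
    payload = {"text": f"Build for {project} {status}: {build_url}"}
    request = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        response.read()  # most webhook endpoints return a short acknowledgement


if __name__ == "__main__":
    announce_build("example-project", passed=True, build_url="https://ci.example.com/builds/123")
```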

Considerations for asynchronous tools

Of course, you’re not all going to be sitting at your computer at the same time. People go on vacation. They live in different time zones. They step away for 10 minutes to get a cup of coffee. Whatever the reason, you’ll need to communicate asynchronously sometimes.

Push or pull. Email is a push mechanism. Your message arrives in my inbox whether I’ve asked it to or not. Web fora are a pull mechanism. I have to go check them (yes, some forum tools provide an email interface). Which works better for your workflow and community? Pull mechanisms are easier to ignore when you want to step away for a little while, but they also mean you might forget to check when you do want to pay attention.

Is it a ticket system? I haven’t really talked about ticket systems/issue trackers because I don’t consider them a general communication tool. But for some projects, all the discussion that needs to happen happens in GitHub issues or another ticket tracker. If that works for you, there’s no point in adding a new tool to the mix.

NPM helps us learn important open source lessons

NPM is the gift that keeps on giving. Remember back when left-pad “broke the Internet”? This time, a package with two million weekly downloads started stealing cryptocurrency. As with the left-pad incident, NPM itself wasn’t the problem; the incident just exposed a more general issue: project maintainers don’t want to maintain their projects forever.

Dominic Tarr, the original developer of the event-stream package, started the project for fun. He got tired of maintaining it, someone offered to take it over, and he handed it off. It just turns out the new maintainer wanted to steal Bitcoin.

“You get literally nothing from maintaining a popular package,” Tarr wrote. In fact, the more popular your project becomes, the more it costs you. You have more expectations and responsibility put on you. Responsibility you didn’t ask for and probably don’t want. And all of that comes with no compensation. Paying maintainers is an obvious solution, but implementing that plan can be challenging. 

When someone doesn’t want to keep working on a project, they often hand it off to willing contributors who will take the lead. That works out most of the time, but sometimes it blows up spectacularly. I’m sure it happens in other ecosystems, too, but the “grab your dependencies as you go” nature of Node makes it really easy to bring these issues to light. I think we all owe NPM a big “thank you”.

I (will, pending approval) have a new employer (again)

Note: this is an entirely personal post and does not represent Red Hat or the Fedora Project in any way.

This is not a repeat from August 2017: my employer is about to be acquired. The news that IBM is spending $34 billion to acquire Red Hat came as a surprise to just about everyone. As you might expect, the reaction among my colleagues is widely varied. I’m still trying to come to terms with my own emotions about this.

Red Hat is not just an employer to me. I’ve been applying for various jobs at Red Hat over the last eight years or so. When I got hired earlier this year, I felt like I had finally achieved a significant professional goal. I’ve long admired the company and the people I know who work there. I saw Red Hat as a place where I could be happy for a very long time.

But I don’t have a crystal ball. So sometime in the second half of next year, I’ll be an IBM employee. Leadership at IBM and Red Hat have said the right things, and the stated plan is that Red Hat will continue to operate as an independent subsidiary. I have no reason to doubt that, but the specifics of the reality are still unknown. It’s a little bit scary.

It makes sense that we don’t have any specifics yet. The plans can’t really be formed until the folks who would work on them can be told. So almost everyone is just coming up to speed, and the next few months will start bringing some clarity. And even more has to wait until the deal actually closes.

My first reaction was “oh no, my health insurance is going to change again.” After having roughly five insurance plans in the last five years, the idea of updating my information with all of my providers yet again is — while not particularly difficult — kind of annoying. My second reaction was “couldn’t they have waited a few years so I could accumulate more stock?”

So what does this all mean? I really don’t know. Ben Thompson is not optimistic. John “maddog” Hall is taking a positive approach. But most importantly, my friend and patronus Robyn Bergeron is reassuring.

So for now, I’ll go about my day-to-day work. Fedora 29 was released on Tuesday. We’re hard at work on Fedora 30. In a few months, I’ll know more about what the future holds. In the meantime, I’m proud to be a Red Hatter and a member of the Fedora and Opensource.com communities. Here we go!

You are responsible for (thinking about) how people use your software

Earlier this week, Marketplace ran a story about Michael Osinski. You probably haven’t heard of Osinski, but he played a part in the financial crisis of 2008. Osinski wrote software that made it easier for banks to package loans into a tradeable security. These “mortgage-backed securities” played a major role in the collapse of the financial sector ten years ago.

It’s not fair to say that Osinski is responsible for the Great Recession. But it is fair to say he did not give sufficient consideration to how his software might be (mis)used. He told Marketplace’s Eliza Mills:

Most people realized that we wrote a good piece of software that we sold in the marketplace. How people use that software is … you know, you really can’t control that.

Osinski is right that he couldn’t control how people used the software he wrote. Whenever we release software to the world, it will get used how the user wants to use it — even if the license prohibits certain fields of endeavor. This could be innocuous misuse, the way graduate students design conference posters in PowerPoint or businesspeople use Excel for all conceivable tasks. But it could also be malicious misuse, the way Russian troll farms use social media to spread false news or sow discord.

So when we design software, we must consider how actual users — both benevolent and malign — will use it. To the degree we can, we should mitigate against abuse or at least provide users a way to defend themselves from it. We are long past the point where we can pretend technology is amoral.

In a vacuum, technological tools are amoral. But we don’t use technology in a vacuum. The moment we put it to use, it becomes a multiplier for both good and evil. If we want to make the world a better place, we cannot pretend it will happen on its own.