GitHub’s new status feature

Two weeks ago, GitHub added a new feature for all users: the ability to set a status. I’m in favor of this. First, it appeals to my AOL Instant Messenger nostalgia. Second, I think it provides valuable context for open source projects. It allows maintainers to say “hey, I’m not going to be very responsive for a bit”. In theory, this should keep people filing issues and pull requests from getting so angry when they don’t get a quick response.

Jessie Frazelle described it as the “cure for open source guilt”.

I was surprised at the amount of blowback this got. (See, for example, the replies to Nat Friedman’s tweet.) Some of the responses are of the dumb “oh noes you’re turning GitHub into a social media platform. It should be about the code!” variety. To those people I say “fine, don’t use this feature.” Others raise a point about not advertising that you’re on vacation.

I’m sympathetic to that. I’m generally pretty quiet about the house being empty on public or public-ish platforms. It’s a good way to advertise yourself to vandals and thieves. To be honest, I’m more worried about something like Nextdoor where the users are all local than GitHub where anyone who cares is probably a long way away. Nonetheless, it’s a valid concern, especially for people with a higher profile.

I agree with Peter that it’s not wise to set expectations for maintainers to share their private details. That said, I do think it’s helpful for maintainers to let their communities know what to expect from them. There are many reasons that someone might need to step away from their project for a week or several. A simple “I’m busy with other stuff and will check back in on February 30th” or something to that effect would accomplish the goal of setting community expectations without being too revelatory.
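
You don’t even have to click around the web interface to set a status like that. Here’s a minimal sketch using GitHub’s GraphQL API and its changeUserStatus mutation, assuming a personal access token with the appropriate scope is stored in GITHUB_TOKEN; the message and dates are made up:

# Hypothetical example: set a status with limited availability and an expiration
curl -s https://api.github.com/graphql \
  -H "Authorization: bearer $GITHUB_TOKEN" \
  -d '{"query": "mutation { changeUserStatus(input: {message: \"Away until March 1, expect slow responses\", limitedAvailability: true, expiresAt: \"2019-03-01T00:00:00Z\"}) { status { message expiresAt } } }"}'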

The success of this feature will rely on users making smart decisions about what they choose to reveal. That’s not always a great bet, but it does give people some control over the impact. The real question will be: how much do people respect it?

Inclusion is a necessary part of good coding

Too often I see comments like “some people would rather focus on inclusion than write good code.” Not only is that a false dichotomy, but it completely misrepresents the relationship between the two. Inclusion doesn’t come at the cost of good code; it’s a necessary part of good code.

We don’t write code for the sake of writing code. We write code for people to use it in some way. This means that the code needs to work for those people. In order to do that, the people designing and implementing the technology need to consider different experiences. The best way to do that is to have people with different experiences on the team. As my 7th grade algebra teacher was fond of reminding us: garbage in, garbage out.

But it’s not like the tech industry has a history of bad decision making. Or soap dispensers not working for dark-skinned people. Or identifying black people as gorillas. Or voice recognition not responding to female voices. What could go wrong with automatically labeling “suspicious” people?

I’ll grant that not all of these issues are with the code itself. In fact, a lot of it isn’t the code; it’s the inputs given to the code. So when I talk about “good coding” here, I’m being very loose with the wording, as a shorthand for the technology we produce in general. The point holds because the code doesn’t exist in a vacuum.

It’s not just about the outputs and real-world effects of what we make. There’s also the matter of wanting to build the best team. Inclusion opens you up to a broader base of talent that might otherwise self-select out.

Being inclusive takes effort. It sometimes requires self-examination and facing unpleasant truths. But it makes you a better person, and if you don’t care about that, it makes your technology better, too.

Picking communication tools for your community

Communication is key to the success of any project. The tools we use to communicate play a part in how effective our communication is. Recent discussions in Fedora and other projects have made me consider what tool selection looks like. Should Discourse replace mailing lists? Should Telegram replace IRC? I’m not going to answer those questions.

There’s no one right tool, just a set of considerations to think about in selecting communications tooling. Each community needs to arrive at a consensus about what works best for their workflow and culture, and keep in mind that the decision may attract some contributors while driving others away.

In this post, I’m going to broadly lump tools into two categories: synchronous and asynchronous. Many tools can be used for both to a decent approximation, but most will pretty obviously fall into one category or the other. Picking one tool to rule them all is a valid option, but be aware that it immediately favors one category of communication over the other. And keep in mind that for large projects, some sub-teams may choose different platforms. That’s fine so long as people who want to participate know where to look.

Considerations for all tools

Self-hosted or externally-hosted. Do you have the resources to maintain the tool? If you do, that’s a way to save money and maintain control, but it’s also time that your community members can’t spend working on whatever your community is doing. Externally-hosted tooling (either free or paid) might give you less flexibility, but it can also be more isolated from internal infrastructure outages.

Open source or proprietary. This is entirely a value judgement for your community. For some communities, anything that’s not open source is a non-starter. Others might not care at all one way or the other. Most will fall somewhere in between.

Federated or centralized. Can the community connect their own tools together (as with email), or is it a centralized system (like most social media platforms)? The trend is definitely toward centralized systems these days, so you may have to work harder to find a federated system that meets your needs.

Public or private. Can outsiders see what you’re saying? For many open source projects, public visibility is important. But even in those communities, some conversations may need to take place in private or semi-private.

Archived or ephemeral. Do you want to be able to go back and see what was said last month, last year, or last decade? Some conversations aren’t worth keeping, but records of important decisions probably are. Does your tool allow you to meet your archival needs?

Considerations for synchronous tools

Sometimes you really need to talk to people in real time.

Mobile experience. It’s 2019. People do a lot on their phones, especially if their contribution to your community happens during their workday or if they travel frequently. What is the mobile experience like for the tools you’re evaluating? It’s not just a matter of whether clients exist, but what the whole experience is like. If someone disconnects while on an airplane, do they lose all the messages that were sent in their absence?

Status and alerting. What happens if someone stays logged in and goes away for a little bit? Do they have the ability to suppress notifications? Is there any way to let others know “I’m away or busy, don’t expect an immediate reply”?

Audio, video, and screen sharing. Sometimes you need the high-bandwidth modes of communication in order to get your full message across (or just shortcut a lot of back-and-forth). Does the tool you’re looking at provide this? Is it usable for those who can’t participate due to bandwidth or other constraints?

Integrations. Can you display GIFs? The ability to speak entirely in animated images can be either a feature or a bug, depending on the community’s culture. But if it’s important one way or another, you’ll want to make sure your tool matches your needs. Of course, there are other integrations that might matter, too. Can your build system post alerts? Does the tool automatically recognize certain links and display them in a particular manner?
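
As a rough illustration, many chat platforms accept alerts from a build system through an incoming webhook. The URL and payload below are hypothetical; check your platform’s documentation for the exact format it expects.

# Hypothetical example: have CI post a build alert to a chat webhook
curl -s -X POST "$CHAT_WEBHOOK_URL" \
  -H "Content-Type: application/json" \
  -d '{"text": "CI: build #1234 failed on branch main"}'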

Considerations for asynchronous tools

Of course, you’re not all going to be sitting at your computer at the same time. People go on vacation. They live in different time zones. They step away for 10 minutes to get a cup of coffee. Whatever the reason, you’ll need to communicate asynchronously sometimes.

Push or pull. Email is a push mechanism. Your message arrives in my inbox whether I’ve asked it to or not. Web fora are a pull mechanism. I have to go check them (yes, some forum tools provide an email interface). Which works better for your workflow and community? Pull mechanisms are easier to ignore when you want to step away for a little while, but they also mean you might forget to check when you do want to pay attention.

Is it a ticket system? I haven’t really talked about ticket systems/issue trackers because I don’t consider them a general communication tool. But for some projects, all the discussion that needs to happen happens in GitHub issues or another ticket tracker. If that works for you, there’s no point in adding a new tool to the mix.

NPM helps us learn important open source lessons

NPM is the gift that keeps on giving. Remember back when left-pad “broke the Internet”? This time, a package with two million weekly downloads started stealing cryptocurrency. As with the left-pad incident, NPM itself wasn’t the problem; it just exposed a general one: project maintainers don’t want to maintain their projects forever.

Dominic Tarr, the original developer of the event-stream package, started the project for fun. He got tired of maintaining it, someone offered to take it over, and he handed it off. It just turns out the new maintainer wanted to steal Bitcoin.

“You get literally nothing from maintaining a popular package,” Tarr wrote. In fact, the more popular your project becomes, the more it costs you. You have more expectations and responsibility put on you. Responsibility you didn’t ask for and probably don’t want. And all of that comes with no compensation. Paying maintainers is an obvious solution, but implementing that plan can be challenging. 

When someone doesn’t want to keep working on a project, they often hand it off to willing contributors who will take the lead. That works out most of the time, but sometimes it blows up spectacularly. I’m sure it happens in other ecosystems, too, but the “grab your dependencies as you go” nature of Node makes it really easy to bring these issues to light. I think we all owe NPM a big “thank you”.

I (will, pending approval) have a new employer (again)

Note: this is an entirely personal post and does not represent Red Hat or the Fedora Project in any way.

This is not a repeat from August 2017: my employer is about to be acquired. The news that IBM is spending $34 billion to acquire Red Hat came as a surprise to just about everyone. As you might expect, the reaction among my colleagues varies widely. I’m still trying to come to terms with my own emotions about this.

Red Hat is not just an employer to me. I had been applying for various jobs at Red Hat over the last eight years or so. When I got hired earlier this year, I felt like I had finally achieved a significant professional goal. I’ve long admired the company and the people I know who work there. I saw Red Hat as a place that I could be happy for a very long time.

But I don’t have a crystal ball. So sometime in the second half of next year, I’ll be an IBM employee. Leadership at IBM and Red Hat have said the right things, and the stated plan is that Red Hat will continue to operate as an independent subsidiary. I have no reason to doubt that, but the specifics of the reality are still unknown. It’s a little bit scary.

It makes sense that we don’t have any specifics yet. The plans can’t really be formed until the folks who would work on them can be told. So almost everyone is just coming up to speed, and the next few months will start bringing some clarity. And even more has to wait until the deal actually closes.

My first reaction was “oh no, my health insurance is going to change again.” After having roughly five insurance plans in the last five years, the idea of updating my information with all of my providers yet again is — while not particularly difficult — kind of annoying. My second reaction was “couldn’t they have waited a few years so I could accumulate more stock?”

So what does this all mean? I really don’t know. Ben Thompson is not optimistic. John “maddog” Hall is taking a positive approach. But most importantly, my friend and patronus Robyn Bergeron is reassuring:

So for now, I’ll go about my day-to-day work. Fedora 29 released on Tuesday. We’re hard at work on Fedora 30. In a few months, I’ll know more about what the future holds. In the meantime, I’m proud to be a Red Hatter and a member of the Fedora and Opensource.com communities. Here we go!

You are responsible for (thinking about) how people use your software

Earlier this week, Marketplace ran a story about Michael Osinski. You probably haven’t heard of Osinski, but he played a role in the financial crisis of 2008. Osinski wrote software that made it easier for banks to package loans into a tradeable security. These “mortgage-backed securities” played a major role in the collapse of the financial sector ten years ago.

It’s not fair to say that Osinski is responsible for the Great Recession. But it is fair to say he did not give sufficient consideration to how his software might be (mis)used. He told Marketplace’s Eliza Mills:

Most people realized that we wrote a good piece of software that we sold in the marketplace. How people use that software is … you know, you really can’t control that.

Osinski is right that he couldn’t control how people used the software he wrote. Whenever we release software to the world, it will get used how the user wants to use it — even if the license prohibits certain fields of endeavor. This could be innocuous misuse, the way graduate students design conference posters in PowerPoint or businesspeople use Excel for all conceivable tasks. But it could also be malicious misuse, the way Russian troll farms use social media to spread false news or sow discord.

So when we design software, we must consider how actual users — both benevolent and malign — will use it. To the degree we can, we should mitigate abuse or at least provide users a way to defend themselves from it. We are long past the point where we can pretend technology is amoral.

In a vacuum, technological tools are amoral. But we don’t use technology in a vacuum. The moment we put it to use, it becomes a multiplier for both good and evil. If we want to make the world a better place, we cannot pretend it will happen on its own.

Linus’s awakening

It may be the biggest story in open source in 2018, a year that saw Microsoft purchase GitHub. Linus Torvalds replaced the Code of Conflict for the Linux kernel with a Code of Conduct. In a message on the Linux Kernel Mailing List (LKML), Torvalds explained that he was taking time off to examine the way he led the kernel development community.

Torvalds has taken a lot of flak for his style over the years, including on this blog. While he has done an excellent job shepherding the technical development of the Linux kernel, his community management has often — to put it mildly — left something to be desired. Abusive and insulting behavior is corrosive to a community, and Torvalds has spent the better part of the last three decades enabling and partaking in it.

But he has seen the light, it would seem. To an outside observer, this change is rather abrupt, but it is welcome. Reaction to his message has been mixed. Some, like my friend Jono Bacon, have advocated supporting Linus in his awakening. Others take a more cynical approach:

I understand Kelly’s position. It’s frustrating to push for a more welcoming and inclusive community, only to be met with insults, and then to watch everyone celebrate when someone finally comes around. Kelly and others who feel like her are absolutely justified in their position.

For myself, I like to think of it as a modern parable of the prodigal son. As tempting as it is to reject those who awaken late, a late awakening is better than none at all. If Linus fails to follow through, it would be right to excoriate him. But if he does follow through, it can only improve the community around one of the most important open source projects. And it will set an example for other projects to follow.

I spend a lot of time thinking about community, particularly since I joined Red Hat as the Fedora Program Manager a few months ago. Community members — especially those in a highly-visible role — have an obligation to model the kind of behavior the community needs. This sometimes means a patient explanation when an angry rant would feel better. It can be demanding and time-consuming work. But an open source project is more than just the code; it’s also the community. We make technology to serve the people, so if our communities are not healthy, we’re not doing our jobs.

FPgM report: 2018-30

Inspired by bex’s “Slice of cake” updates, I present to the community this report of what has happened in Fedora Program Management this week.

Schedule

  • REMINDER — Software string freeze is July 31.

Changes

Announced

Submitted to FESCo

Approved by FESCo

I am on PTO this week, so anything that doesn’t obviously pertain to submitted changes will be taken care of early next week.

FPgM report: 2018-29

Inspired by bex’s “Slice of cake” updates, I present to the community this report of what has happened in Fedora Program Management this week.

Schedule

  • REMINDER — Self-Contained Change submission deadline is July 24.
  • REMINDER — Software string freeze is July 31.

Changes

Announced

Submitted to FESCo

Approved by FESCo

I will be on PTO next week, but I will be checking in daily to shepherd last-minute change submissions.

Solved: ports on ThinkPad Thunderbolt dock don’t work with Fedora

I got a new ThinkPad X1 Carbon laptop for work. Of course I immediately installed Fedora 28 on it. Everything seemed to work just fine. But the laptop came with a ThinkPad Thunderbolt dock, and when I went to use it, I noticed the Ethernet port didn’t work. Then I noticed the USB ports didn’t work. But at least the HDMI port worked? (Full disclosure: I didn’t try the VGA port.)

It turns out the solution was really simple, but I didn’t find a simple explanation, so I’m putting one here. (Comment #17 of Red Hat Bugzilla #1367508 had the basic solution. I hope this post makes it a little easier to find.)

The dock uses Thunderbolt, which includes some security features. A package called bolt provides a management tool for this. Happily, it’s already in the Fedora 28 repo.

First, I installed it:

# dnf install bolt

Then I examined the connected device:

# boltctl list
● Lenovo ThinkPad Thunderbolt 3 Dock
├─ type: peripheral
├─ name: ThinkPad Thunderbolt 3 Dock
├─ vendor: Lenovo
├─ uuid: 00cd2054-ef95-0801-ffff-ffffffffffff
├─ status: connected
│ ├─ authflags: none
│ └─ connected: Fri 29 Jun 2018 03:13:10 PM UTC
└─ stored: no

Finally, I enrolled the device:

# boltctl enroll 00cd2054-ef95-0801-ffff-ffffffffffff
● Lenovo ThinkPad Thunderbolt 3 Dock
├─ type: peripheral
├─ name: ThinkPad Thunderbolt 3 Dock
├─ vendor: Lenovo
├─ uuid: 00cd2054-ef95-0801-ffff-ffffffffffff
├─ dbus path: /org/freedesktop/bolt/devices/00cd2054_ef95_0801_ffff_ffffffffffff
├─ status: authorized
│ ├─ authflags: none
│ ├─ parent: cf030000-0080-7f18-23d0-7d0ba8c14120
│ ├─ syspath: /sys/devices/pci0000:00/0000:00:1d.0/0000:05:00.0/0000:06:00.0/0000:07:00.0/domain0/0-0/0-1
│ ├─ authorized: Fri 29 Jun 2018 03:19:39 PM UTC
│ └─ connected: Fri 29 Jun 2018 03:13:10 PM UTC
└─ stored: yes
   ├─ when: Fri 29 Jun 2018 03:19:39 PM UTC
   ├─ policy: auto
   └─ key: no

After that, everything worked as expected. I’d like to thank the people who did the work to discover and implement the fix. I hope this post means a little less Googling for the next person.