Apache Software Foundation moves to GitHub

Last week, GitHub and the Apache Software Foundation (ASF) announced that ASF migrated their git repositories to GitHub. This caused a bit of a stir. It’s not every day that “the world’s largest open source foundation” moves to a proprietary hosting platform.

Free software purists expressed dismay. One person described it as “a really strange move In part because Apache’s key value add [was] that they provided freely available infrastructure.” GitHub, while it may be “free as in beer”, is definitely not “free as in freedom”. git itself is open source software, but GitHub’s “special sauce” is not.

For me, it’s not entirely surprising that ASF would make this move. I’ve always seen ASF as a more pragmatically-minded organization than, for example, the Free Software Foundation (FSF). I’d argue that the ecosystem benefits from having both ASF- and FSF-type organizations.

It’s not clear what savings ASF gets from this. Their blog post says they maintain their own mirrors, so there’s still some infrastructure involved. Of course, it’s probably smaller than running the full service, but by how much?

More than a reduced infrastructure footprint, I suspect the main benefit to the ASF is that it lowers the barrier to contribution. Like it or not, GitHub is the go-to place to find open source code. Mirroring to GitHub makes the code available, but you don’t get the benefits of integrating issues and pull requests (at least not trivially). Major contributors will do what it takes to adopt the tool, but drive-by contributions should be as easy as possible.

There’s also another angle, which probably didn’t drive the decision but brings a benefit nonetheless. Events like Hacktoberfest and 24 Pull Requests help motivate new contributors, but they’re based on GitHub repositories. Using GitHub as your primary forge means you’re accessible to the thousands of developers who participate in these events.

In a more ideal world, ASF would use a more open platform. In the present reality, this decision makes sense.

Releasing open source software is not immoral

Matt Stancliff recently made a bold statement on Twitter:

He made this comment in the context of the small amount of money the largest tech companies use to fund open source. If the five largest companies each contributed even a fraction of a percent of their annual revenue, open source projects would have two billion dollars of support. These projects are already subsidizing the large corporations, he argues, so they deserve some of the rewards.

This continues the recent trend of people being surprised that people will take free things and not pay for them. Developers who choose to release software under an open source license do so with the knowledge that someone else may use their software to make boatloads of money. Downstream users are under no obligation to remunerate or support upstreams in any way.

That said, I happen to think it’s the right thing to do. I contributed to Fedora as a volunteer for years as a way to “pay back” the community that gave me a free operating system. At a previous company, we made heavy use of an open source job scheduler/resource manager. We provided support on the community mailing lists and sponsored a reception at the annual conference. This was good marketing, of course, but it was also good community citizenship.

At any rate, if you want to make a moral judgment about open source, it’s not the release of open source software that’s the issue. The issue is parasitic consumption of open source software. I’m sure all of the large tech companies would say they support open source software, and they probably do in their own way. But not necessarily in the way that allows small-but-critical projects to thrive.

Toward a more moral ecosystem

Saying “releasing open source software has become immoral” is not helpful. Depriving large companies of open source would also deprive small companies and consumers. And it’s the large companies who could best survive the loss. Witness how MongoDB’s license change has Amazon using DocumentDB instead; meanwhile Linux distributions like Fedora are dropping MongoDB.

It’s an interesting argument, though, because normally when morality and software are in the mix, the position is that open source (or, more typically in this context, “free software”) is the moral imperative. That presents us with one possible solution: licensing your projects under a copyleft license (e.g. the GNU General Public License (GPL)). Copyleft-licensed software can still be used by large corporations to make boatloads of money, but at least it requires them to make source (including of derived works) available. With permissively-licensed software, you’re essentially saying “here’s my code, do whatever you want with it.” Of course people are going to take you up on that offer.

The Linux desktop is not in trouble

Writing for ZDNet earlier this month, Steven J. Vaughan-Nichols declared trouble for the Linux desktop. He’s wrong.

Or maybe not. Maybe we’re just looking at different parts of the elephant. sjvn’s core argument, if I may sum it up, is that fragmentation is holding back the Linux desktop. Linux can’t gain significant traction in the desktop market because there are just so many options. This appeals to computer nerds, but leads to confusion for general users who don’t want to care about whether they’re running GNOME or KDE Plasma or whatever.

Fragmentation

I’m sympathetic to that argument. When I was writing documentation for Fedora, we generally wrote instructions for GNOME, since that was the default desktop. Fedora users can also choose from spins of KDE Plasma, LXQt, and Xfce, and they can install other desktop environments. If someone installs KDE Plasma because that’s what their friend gave them, will they be able to follow the documentation? If not, will they get frustrated and move back to Windows or MacOS?

Even if they stick it out, there are two large players in the GUI toolkit world: GTK and Qt. You can use an app written in one in a desktop environment written in the other, but it doesn’t always look very good. And the configuration settings may not be consistent between apps, which is also frustrating.

Corporate indifference

Apart from that, sjvn also laments the lack of desktop effort from major Linux vendors:

True, the broad strokes of the Linux desktop are painted primarily by Canonical and Red Hat, but the desktop is far from their top priority. Instead, much of the nuts and bolts of the current generation of the Linux desktop is set by vendor-related communities: Red Hat, Fedora, SUSE’s openSUSE, and Canonical’s Ubuntu.

I would argue that this is the way it should be. As he notes in the preceding paragraph, the focus of revenue generation is on enterprise servers and cloud. There are two reasons for that: that’s where the customer money is and enterprises don’t want to innovate on their desktops.

I’ll leave the first part to someone else, but I think the “enterprises don’t want to innovate on their desktops” part is important. I’ve worked at and in support of some large organizations and in all cases, they didn’t want anything more from their desktops than “it allows our users to run their business applications in a reliable manner”. Combine this with the tendency of the enterprise to keep their upgrade cycles long and it makes no sense to keep desktop innovation in the enterprise product.

Community distributions are generally more focused on individuals or small organizations who may be more willing to accept disruptive change as the paradigm is moved forward. This is true beyond the desktop, too. Consider changes like the adoption of systemd or replacing yum with dnf: these also appeared in the community distributions first, but I didn’t see that used as a case for “enterprise Linux distributions are in trouble.”

What’s the answer?

Looking ahead, I’d love to see a foundation bring together the Linux desktop community and have them hammer out a common desktop for everyone. Yes, I know, I know. Many hardcore Linux users love having a variety of choices. But the world is not made up of hardcore desktop Linux users. For the million or so of us, there are hundreds of millions who want an easy-to-use desktop that’s not Windows, doesn’t require buying a Mac, and comes with broad software and hardware support.

Setting aside the XKCD #927 argument, I don’t know that this is an answer. Even if the major distros agreed to standardize on the same desktop (and with Ubuntu returning to GNOME, that’s now the case), that won’t stop effort on other desktops. If the corporate sponsors don’t invest any effort, the communities still will. People will use whatever is provided to them in the workplace, so presenting a single standard desktop to consumers will rely on the folks who make the community distributions to agree to that. It won’t happen.

But here’s the crux of my disagreement with this article. The facts are all correct, even if I disagree with the interpretation of some of them. The issue is that we’re not looking at the success of the Linux desktop in the same way.

If you define “Linux desktop” as “a desktop environment that runs the Linux kernel”, then ChromeOS is doing quite well, and will probably continue to grow (unless Google gets bored with it). In that case, the Linux desktop is not in trouble, it’s enjoying unprecedented success.

But when most people say “Linux desktop”, they think of a traditional desktop model. In this case, the threat to Linux desktops is the same as the threat to Windows and MacOS: desktops matter less these days. So much computing, particularly for consumers, happens in the web browser when done on a PC at all.

Rethinking the goal

This brings me back to my regular refrain: using a computer is a means, not an end. People don’t run a desktop environment to run a desktop environment, they run a desktop environment because it enables them to do the things they want to do. As those things are increasingly done on mobile or in the web browser, achieving dominant market share for desktops is no longer a meaningful goal (if, indeed, it ever was).

Many current Linux desktop users are (I guess) motivated at least in part by free software ideals. This is not a mainstream position. Consumers will need more practical reasons to choose any Linux desktop over the proprietary OS that was shipped by the computer’s manufacturer.

With that in mind, the answer isn’t standardization, it’s making the experience better. Fedora Silverblue and openSUSE Kubic are efforts in that direction. Using those as a base, with Flatpaks to distribute applications, the need for standardization at the desktop environment level decreases because the users are mostly interacting with the application level, one step above.

The usual disclaimer applies: I am a Red Hat employee who works on Fedora. The views in this post are my own and not necessarily the views of Red Hat, the Fedora Council, or anyone else. They may not even be my views by the time you read this.

Emoji in console output

Recently, my friend was talking about some output he got from running the minikube program. Each line included a leading emoji character. He was not thrilled, and I don’t think minikube used them well. But when used appropriately, emoji can add valuable context to the output.

root@test# minikube start
😄 minikube v1.0.0 on linux (amd64)
🤹 Downloading Kubernetes v1.14.0 images in the background ...
💡 Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.

💣 Unable to start VM: Error getting state for host: getting connection: looking up domain: virError(Code=42, Domain=10, Message='Domain not found: no domain with matching name 'minikube'')

😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new
root@test# minikube delete
🔥 Deleting "minikube" from kvm2 ...
💔 The "minikube" cluster has been deleted

I should say, in the interests of full disclosure, that I have written tools that include unhelpful emoji in the output. Some of the emoji are not helpful. The crying cat because it crashed? Doesn’t add anything. The broken heart when the cluster is deleted? I don’t have time for your guilt trips. But the light bulb for a tip and the bomb for a Big Bad Error help draw attention to what could be a wall of text.

Here’s what I see as some guiding ideas for using emoji in output:

  • Have a fallback, but not a stupid one. For the code above, there’s a fallback to ASCII. If you thought the emoji added no value, check out how un-valuable the fallback is. The fallback should probably be “print nothing and go straight to the text output”.
  • Don’t print emoji to logs. The console should be human-readable (and a well-placed emoji can help with drawing attention to the right places), but logs should be machine-readable (well, grep-readable). Log messages should be more structured anyway, so maybe it doesn’t really matter, but don’t rely on your user having an emoji keyboard available when they need to grep the logs.
  • Try to use unambiguous emoji. Pictograms are language-independent, which is nice, but if you’ve ever tried to communicate with A Youth via emoji, you know there’s a lot of room for nuance. Stick to well-understood characters and document them somewhere.
  • Use emoji to enhance context, not to replace text. Log parsing and screen readers are two reasons that you don’t want to get rid of text in favor of emoji. Use the emoji to draw attention to important messages and provide hints as to why they’re important, but make sure the messages still stand on their own.
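To make the fallback guideline concrete, here’s a minimal Python sketch of one way to do it. The helper names (`console_msg`, `log_msg`) are my own invention, not minikube’s: the idea is to print an emoji prefix only when the output stream looks like an interactive, UTF-8-capable terminal, fall back straight to the text otherwise, and keep log output emoji-free.

```python
import sys

def console_msg(emoji, text):
    """Print a message with an emoji prefix when the terminal can show it."""
    out = sys.stdout
    enc = (getattr(out, "encoding", "") or "").lower().replace("-", "")
    # Only emit the emoji on an interactive, UTF-8-capable terminal.
    if out.isatty() and enc == "utf8":
        print(f"{emoji}  {text}")
    else:
        # The fallback is the text itself -- no ASCII-art stand-in.
        print(text)

def log_msg(logfile, text):
    # Logs stay plain so they remain grep-friendly.
    logfile.write(text + "\n")

console_msg("\N{ELECTRIC LIGHT BULB}", "Tip: use '--help' to see all options.")
```

When stdout is a pipe (say, redirected to a file or piped to grep), the emoji disappears and only the message remains, which is exactly what you want from the “print nothing and go straight to the text” fallback.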

Where to file an issue?

Recently, the Fedora Engineering Steering Committee (FESCo) entertained a proposal to allow people to file issues in the repo where Fedora RPM spec files live. They ultimately rejected the proposal in favor of keeping those issues in Red Hat Bugzilla. I didn’t weigh in on that thread because I don’t have a set opinion one way or another, but it raised some interesting points.

First, I’d argue that Bugzilla is hostile for end users. There are a lot of fields, many of which aren’t meaningful to non-developers. It can be overwhelming. Then again, there probably aren’t too many end users filing bugs against spec files.

On the other hand, having multiple places to file bugs is hostile for users, too. “Where do I file this particular bug? I don’t know, I just want things to work!”

Having multiple places for bugs can be helpful to developers, so long as the bugs are filed in the right place. Spec file bugs generally make sense to be filed in the same place as the spec files. But they might make more sense elsewhere if they block another bug or fit into another workflow. And the odds of a bug being filed in the right place aren’t great to begin with.

This is a question for more than just Fedora though. Any project that has multiple pieces, particularly upstream dependencies, needs to think about how this will work. My take is that the place the user interfaces with the code is where the issue should be filed. It can then be passed upstream if appropriate, but the user shouldn’t have to chase the issue around. So if an issue manifests in my project, but the fault lies in upstream code, it’s my responsibility to get it fixed, not the user’s.

So now that I’ve typed all this out, I suppose I would argue that issues should be enabled on src.fedoraproject.org and that it’s the package maintainer’s responsibility to make the connection to Bugzilla where required.

Open Source Leadership Summit 2019

Ed. note: my employer is a member of the Linux Foundation. The views in this post are — as are all posts on this site — my personal views and do not represent my employer or any other organizations. You knew this already, but I thought it would be good to remind you.

Last week, after leaving SCaLE, I headed to Half Moon Bay, California for the Open Source Leadership Summit. It’s an invitation-only conference run by the Linux Foundation. This year it was held at the Ritz-Carlton Half Moon Bay, a very nice resort hotel with an ocean view. This was dramatically different from most tech conferences I’ve attended previously. That difference was the source of some internal struggle I wrestled with. I’ll get to that momentarily, but first my more general thoughts.

This is clearly not a typical tech conference. The number of technical sessions was pretty low, with a greater focus on topics like marketing, balancing corporate & community interests, mentoring, et cetera. Given that the target audience is leaders of corporate open source efforts, this makes sense. The talks I attended were good, with Jim Perrin’s “Damaging your project with management (and leadership)” as my clear favorite. The downside is that with 30-minute time slots, I didn’t feel like there was ever enough time to really get deep into any of the topics. That might be better for the state my attention span was in after over a week of travel and conferences, but it’s not great for getting a lot of value out of the talks.

The marketing panel that I was on was well-received. I thought we did well. Panels can be terrific or terrible and I’d like to think we were closer to the terrific end of the spectrum. All of the panelists had different, complementary experience, so we were able to give non-repetitive answers. And we all kept our answers pretty short so that the conversation could flow. The audience was into it, as well, which always helps. And I’m glad to say that “Jennifer”s outnumbered men.

So now for my internal struggle. I saw the conference referred to as “open source Davos” and it’s hard to disagree with that. Jessie Frazelle was unrestrained in her criticism:

Coming on the heels of Bradley Kuhn’s “It’s a Wonderful Life” analogy at SCaLE, this criticism really stuck out to me. Yes, I get paid well to work on an open source project, but I still live in a world where staying at a resort like the Ritz-Carlton Half Moon Bay leaves me wide-eyed. How many tech conference travels could we have funded with the budget for lunches? How much test gear could be provided for the cost of the evening socials? Why is it called “Open Source Leadership Summit” when the leaders of major open source projects aren’t invited to attend?

Well, the answer to that question is “this isn’t the sort of leadership we meant”. A friend said this is the sort of event that corporate execs and senior management attend. If you want to get your message to them, that’s what you have to do. I understand that argument, and the practical side of me gets it. The ideological side of me says “well then we’ll do it without them!”

The Linux Foundation and similar foundations are trade associations, not charities. They’re not obligated to act in the public good. Maybe they could stand to do a little more of that, obligated or no. But ultimately, they’re doing what trade associations do. They advance their corporate interests in the way they see fit. If we want to redirect them toward community benefit, maybe pitching talks that give the message we want them to hear is the approach to take. Or maybe that’s just what I’ll tell myself to justify going on a junket.

SCaLE 17x

Last week, I attended the 17th annual Southern California Linux Expo (SCaLE 17x). SCaLE is a conference that I’ve wanted to go to for years, so I’m glad I finally made it. Located at the Pasadena Convention Center, it’s a short walk from nearby hotels, restaurants, and a huge independent bookstore. Plus the weather in southern California almost always beats Indiana — particularly in March.

Having done this a few times before, the SCaLE organizers know how to put on a good event. Code of Conduct information, including contacts, is prominently posted right as you walk in the door. Staff walk around with t-shirts that sport the WiFi information. The break between sessions is 30 minutes, which allows ample time to get from one to another without having to brush people aside if you meet them in the hallway. It was an incredibly well-run conference.

I ended up in the “mentoring” track most of the weekend, which I suppose indicates where I am in this point of my career. “Mentoring” may not be the right word, though. The talks in that room covered being a community organizer, developer advocacy, and a lot about mental health. Quite a bit about mental health, in fact. It’s probably a good thing that we’re discussing these topics more openly at conferences.

The talk that stuck with me the most, though, was one I saw on Sunday afternoon. Bradley Kuhn wondered “if open source isn’t sustainable, maybe free software is.” Bradley compared the budgets and the output of large corporate-backed foundations and smaller projects like phpMyAdmin. I’ll go deeper on that later, either when I recap the Open Source Leadership Summit or in a standalone post.

Bradley also used an “It’s a Wonderful Life” analogy, which is very much my kind of analogy. This may become a longer post at some point, but the general idea is that we have a lot of Sam Wainwrights in the world: people who are willing to throw money at a problem (perhaps with strings attached). Despite being well-meaning, they’re not actually doing that much to help. What we need is more George Baileys: people doing the small but critical work in their communities to help them thrive.

SCaLE was a terrific conference, and I’m looking forward to going back in the future. Especially now that I’ve learned my way around the food scene a little bit.

CopyleftConf was great, you should go next year

Two weeks ago, I was fortunate to attend the inaugural Copyleft Conference. It was held in Brussels, Belgium the day after FOSDEM. Since I was in town anyway, I figured I should just extend my trip by a day to attend this conference. I couldn’t be happier that I did.

Software licensing doesn’t get as much discussion at conferences as it probably should. And among the talks that do happen, copyleft licenses specifically get only a portion of that. But with major projects like the Linux kernel using copyleft licenses — and the importance of copyleft principles to open source software generally — the Software Freedom Conservancy decided that a dedicated conference was in order.

I was impressed with how well-organized and well-attended the conference was for a first try. The venue was excellent, apart from some acoustic issues in the main room. The schedule was terrific: three rooms all day, each filled with talks from the world’s leading experts. I commented to a friend that if the building were to collapse, 80% of the world’s copyleft expertise would disappear.

For me, some of the excitement was just being around all of those people:

Molly deBlanc’s keynote was simultaneously inspiring and disturbing. She spoke of how software freedom matters to everyone, but how it matters to marginalized people in different ways. Ad networks can expose that someone at risk is seeking help. “Smart” homes can be used by domestic abusers to torment their victims. The transparency that free software brings isn’t just a nice-to-have, it can materially impact people’s lives.

The other session that was particularly interesting to me was Chris Lamb’s discussion of the Commons Clause. Chris was more focused on the response of the community to Redis Labs’ decision to adopt it than the Commons Clause itself. He viewed Redis Labs’ decision to adopt and subsequent refusal to abandon the Commons Clause as a failure of the copyleft community to make a compelling argument. Drawing on the work of Aristotle, Chris argued that we, as interested and knowledgeable parties, should have done a better job making our case. The question, of course, is who the “we” is that Chris is exhorting. This is a particularly key question for his advice to proactively address the concerns of companies.

Some of the other talks focused more directly on adapting to a new environment. Version 3 of the GNU General Public License was published in 2007. At the time, Amazon Web Services (as we currently know it) was just over a year old. The original iPhone was released on the same day. While the principles behind the GPLv3 haven’t changed, the reality of how we use software has changed dramatically. Van Lindberg’s talk on a new license he’s drafting for a client explored what copyleft looks like in 2019. And Alexios Zavras noted that the requirements to provide source code don’t necessarily apply as-written anymore.

In addition to meeting some new friends and idols, I was also able to spend some time with friends that I don’t get to see often enough. I’m already looking forward to CopyleftConf 2020.

What’s the future of Linux distributions?

“Distros don’t matter anymore” is a bold statement for someone who is paid to work on a Linux distro to make. Fortunately, I’m not making that statement. At least not exactly.

Distros still matter. But it’s fair to say that they matter in a different way than they did in the past. Like lava in a video game, abstractions slowly-but-inexorably move up the stack. For the entirety of their existence, effectively, Linux distributions have focused on producing operating systems (OSes) with some userspace applications. But the operating system is changing.

For one, OS developers have been watching each other work and taking inspiration for improvement. Windows is not macOS is not Linux, but they all take what they see as the “best” features of others and try to incorporate them. And with things like Windows Subsystem for Linux, the lines are blurring.

Applications are helping in this regard, too. Not everything is written in C and C++ anymore. Many applications are being developed in languages like Python, Ruby, and Java, where the application developer mostly doesn’t have to care about the OS. Which means the user doesn’t either. And of course, so much of what the average user does on their computer runs out of the web browser these days. The vast majority of my daily computer usage can be done on any modern OS, including Android.

With the importance of the operating system itself diminishing, distros can choose to either remain unchanged and watch their importance diminish or they can evolve to add new relevance.

This is all background for many conversations and presentations I heard earlier this month at the FOSDEM conference in Brussels. The first day of FOSDEM I spent mostly in the Fedora booth. The second day I was working the distro dev room. Both days had a lot of conversations about how distros can stay relevant — not in those words, but certainly in spirit.

The main theme was the idea of changing how the OS is managed and updated. The idea of the OS state as a git tree is interesting. Fedora’s Silverblue desktop and openSUSE Kubic are two leading examples.

So is this the future of Linux distributions? I don’t know. What I do know is that distributions must change to keep up with the world. This change should be in a way that makes the distro more obviously valuable to users.

Can your bug tracker and support tickets be the same system?

I often see developers, both open source and proprietary, struggle with trying to use bug trackers as support tools (or sometimes support tools as bug trackers). I can see the appeal, since support issues often tie back to bugs and it’s simpler to have one thing than two. But the problem is that they’re really not the same thing, and I’m not sure there’s a tool that does both well.

In 2014 (which is when I originally added this idea to my to-do list according to Trello), Daniel Pocock wrote a blog post that addresses this issue. Daniel examined several different tools in this space and looked at trends and open questions.

My own opinions are colored by a few different things. First, I think about a former employer. The company originally used FogBugz for both bug tracking and customer support (via email). By the time I joined, the developers had largely moved off FogBugz for bug tracking, leaving us using what was largely designed as a bug tracker for our customer support efforts. Since customers largely interacted via email, it didn’t particularly matter what the system was.

On the other hand, because it was designed as a bug tracker, it lacked some of the features we wanted from a customer support tool. Customers couldn’t log in and view dashboards, so we had to manually build the reports they wanted and send them via email. And we couldn’t easily build a knowledge base into it, which reduced the ability for customers to get answers themselves more quickly. Shortly before I changed jobs, we began the process of moving to ZenDesk, which provided the features we needed.

The other experience that drove this was serving as a “bug concierge” on an open source project I used to be active in. Most of the user support happened via mailing list, and occasionally a legitimate bug would be discovered. The project’s Trac instance required the project team to create accounts for users. Since I already had an account, I’d often file bugs on behalf of people. I also filed bugs in Fedora’s bugzilla instance when the issue was with the Fedora package specifically.

What I took away from these experiences is that bug trackers that are useful to developers are rarely useful to end users. Developers (or their managers) benefit from having a lot of metadata that can be used to filter and report on issues. But a large number of fields to fill in can overwhelm users. They want to be able to say what’s wrong and be told how to fix it.

For a single tool to work as both a bug tracker and a ticket system, the metadata should probably be visible only to developers. Even then, the better solution is probably separate tools that integrate with each other.