The Linux desktop is not in trouble

Writing for ZDNet earlier this month, Steven J. Vaughan-Nichols declared trouble for the Linux desktop. He’s wrong.

Or maybe not. Maybe we’re just looking at different parts of the elephant. sjvn’s core argument, if I may sum it up, is that fragmentation is holding back the Linux desktop. Linux can’t gain significant traction in the desktop market because there are just so many options. This appeals to computer nerds, but leads to confusion for general users who don’t want to care about whether they’re running GNOME or KDE Plasma or whatever.

Fragmentation

I’m sympathetic to that argument. When I was writing documentation for Fedora, we generally wrote instructions for GNOME, since that was the default desktop. But Fedora users can also choose spins of KDE Plasma, LXQt, or Xfce, or install other desktop environments. If someone installs KDE Plasma because that’s what their friend gave them, will they be able to follow the documentation? If not, will they get frustrated and move back to Windows or MacOS?

Even if they stick it out, there are two large players in the GUI toolkit world: GTK and Qt. You can use an app written in one in a desktop environment written in the other, but it doesn’t always look very good. And the configuration settings may not be consistent between apps, which is also frustrating.

Corporate indifference

Apart from that, sjvn also laments the lack of desktop effort from major Linux vendors:

True, the broad strokes of the Linux desktop are painted primarily by Canonical and Red Hat, but the desktop is far from their top priority. Instead, much of the nuts and bolts of the current generation of the Linux desktop is set by vendor-related communities: Red Hat, Fedora, SUSE’s openSUSE, and Canonical’s Ubuntu.

I would argue that this is the way it should be. As he notes in the preceding paragraph, the focus of revenue generation is on enterprise servers and cloud. There are two reasons for that: that’s where the customer money is and enterprises don’t want to innovate on their desktops.

I’ll leave the first part to someone else, but I think the “enterprises don’t want to innovate on their desktops” part is important. I’ve worked at and in support of some large organizations and in all cases, they didn’t want anything more from their desktops than “it allows our users to run their business applications in a reliable manner”. Combine this with the tendency of the enterprise to keep their upgrade cycles long and it makes no sense to keep desktop innovation in the enterprise product.

Community distributions are generally more focused on individuals or small organizations who may be more willing to accept disruptive change as the paradigm is moved forward. This is true beyond the desktop, too. Consider changes like the adoption of systemd or replacing yum with dnf: these also appeared in the community distributions first, but I didn’t see that used as a case for “enterprise Linux distributions are in trouble.”

What’s the answer?

Looking ahead, I’d love to see a foundation bring together the Linux desktop community and have them hammer out a common desktop for everyone. Yes, I know, I know. Many hardcore Linux users love having a variety of choices. The world is not made up of desktop Linux users. For the million or so of us, there are hundreds of millions who want an easy-to-use desktop that’s not Windows, doesn’t require buying a Mac, and comes with broad software and hardware support.

Setting aside the XKCD #927 argument, I don’t know that this is an answer. Even if the major distros agreed to standardize on the same desktop (and with Ubuntu returning to GNOME, that’s now the case), that won’t stop effort on other desktops. If the corporate sponsors don’t invest any effort, the communities still will. People will use whatever is provided to them in the workplace, so presenting a single standard desktop to consumers would rely on the folks who make the community distributions agreeing to it. It won’t happen.

But here’s the crux of my disagreement with this article. The facts are all correct, even if I disagree with the interpretation of some of them. The issue is that we’re not looking at the success of the Linux desktop in the same way.

If you define “Linux desktop” as “a desktop environment that runs the Linux kernel”, then ChromeOS is doing quite well, and will probably continue to grow (unless Google gets bored with it). In that case, the Linux desktop is not in trouble; it’s enjoying unprecedented success.

But when most people say “Linux desktop”, they think of a traditional desktop model. In this case, the threat to Linux desktops is the same as the threat to Windows and MacOS: desktops matter less these days. So much computing, particularly for consumers, happens in the web browser when done on a PC at all.

Rethinking the goal

This brings me back to my regular refrain: using a computer is a means, not an end. People don’t run a desktop environment to run a desktop environment, they run a desktop environment because it enables them to do the things they want to do. As those things are increasingly done on mobile or in the web browser, achieving dominant market share for desktops is no longer a meaningful goal (if, indeed, it ever was).

Many current Linux desktop users are (I guess), motivated at least in part by free software ideals. This is not a mainstream position. Consumers will need more practical reasons to choose any Linux desktop over the proprietary OS that was shipped by the computer’s manufacturer.

With that in mind, the answer isn’t standardization, it’s making the experience better. Fedora Silverblue and OpenSUSE Kubic are efforts in that direction. Using those as a base, with Flatpaks to distribute applications, the need for standardization at the desktop environment level decreases because the users are mostly interacting with the application level, one step above.

The usual disclaimer applies: I am a Red Hat employee who works on Fedora. The views in this post are my own and not necessarily the views of Red Hat, the Fedora Council, or anyone else. They may not even be my views by the time you read this.

Emoji in console output

Recently, my friend was talking about some output he got from running the minikube program. Each line included a leading emoji character. He was not thrilled, and I don’t think minikube used them well. But when used appropriately, emoji can add valuable context to the output.

root@test# minikube start
😄 minikube v1.0.0 on linux (amd64)
🤹 Downloading Kubernetes v1.14.0 images in the background ...
💡 Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.

💣 Unable to start VM: Error getting state for host: getting connection: looking up domain: virError(Code=42, Domain=10, Message='Domain not found: no domain with matching name 'minikube'')

😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new
root@test# minikube delete
🔥 Deleting "minikube" from kvm2 ...
💔 The "minikube" cluster has been deleted

I should say, in the interests of full disclosure, that I have written tools that include emoji in the output, and not always helpfully. Some of minikube’s emoji are not helpful either. The crying cat because it crashed? Doesn’t add anything. The broken heart when the cluster is deleted? I don’t have time for your guilt trips. But the light bulb for a tip and the bomb for a Big Bad Error help draw attention to what could be a wall of text.

Here’s what I see as some guiding ideas for using emoji in output:

  • Have a fallback, but not a stupid one. For the code above, there’s a fallback to ASCII. If you thought the emoji added no value, check out how un-valuable the fallback is. The fallback should probably be “print nothing and go straight to the text output”.
  • Don’t print emoji to logs. The console should be human-readable (and a well-placed emoji can help with drawing attention to the right places), but logs should be machine-readable (well, grep-readable). Log messages should be more structured anyway, so maybe it doesn’t really matter, but don’t rely on your user having an emoji keyboard available when they need to grep the logs.
  • Try to use unambiguous emoji. Pictograms are language-independent, which is nice, but if you’ve ever tried to communicate with A Youth via emoji, you know there’s a lot of room for nuance. Stick to well-understood characters and document them somewhere.
  • Use emoji to enhance context, not to replace text. Log parsing and screen readers are two reasons that you don’t want to get rid of text in favor of emoji. Use the emoji to draw attention to important messages and provide hints as to why they’re important, but make sure the messages still stand on their own.
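To make the fallback and log-friendliness points concrete, here’s a minimal Python sketch. The `format_line` helper, the level names, and the choice of prefixes are my own illustration, not minikube’s actual code:

```python
import sys

# Emoji prefixes for the message levels that benefit from them.
PREFIXES = {
    "tip": "\N{ELECTRIC LIGHT BULB}",  # light bulb draws the eye to hints
    "error": "\N{BOMB}",               # bomb flags a Big Bad Error
}

def format_line(level: str, message: str, stream=None) -> str:
    """Return a console line, adding an emoji prefix only when the stream
    is an interactive terminal whose encoding can represent it. The
    fallback is the bare text, not an ASCII stand-in, so the message
    stands on its own either way."""
    stream = stream or sys.stdout
    prefix = PREFIXES.get(level, "")
    use_emoji = bool(prefix) and getattr(stream, "isatty", lambda: False)()
    if use_emoji:
        encoding = getattr(stream, "encoding", None) or "ascii"
        try:
            prefix.encode(encoding)
        except (UnicodeEncodeError, LookupError):
            use_emoji = False
    return f"{prefix} {message}" if use_emoji else message

print(format_line("tip", "Use 'minikube start -p <name>' to create a new cluster."))
```

On an interactive terminal that can encode the character, the tip gets its light bulb; redirected to a file or piped into a log, the same call produces plain, grep-friendly text.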

Where to file an issue?

Recently, the Fedora Engineering Steering Committee (FESCo) entertained a proposal to allow people to file issues in the repo where Fedora RPM spec files live. They ultimately rejected the proposal in favor of keeping those issues in Red Hat Bugzilla. I didn’t weigh in on that thread because I don’t have a set opinion one way or another, but it raised some interesting points.

First, I’d argue that Bugzilla is hostile for end users. There are a lot of fields, many of which aren’t meaningful to non-developers. It can be overwhelming. Then again, there probably aren’t too many end users filing bugs against spec files.

On the other hand, having multiple places to file bugs is hostile for users, too. “Where do I file this particular bug? I don’t know, I just want things to work!”

Having multiple places for bugs can be helpful to developers, so long as the bugs are filed in the right place. Spec file bugs generally make sense to file in the same place as the spec files. But they might make more sense elsewhere if they block another bug or fit into another workflow. And the odds of a bug being filed in the right place aren’t great to begin with.

This is a question for more than just Fedora though. Any project that has multiple pieces, particularly upstream dependencies, needs to think about how this will work. My take is that the place the user interfaces with the code is where the issue should be filed. It can then be passed upstream if appropriate, but the user shouldn’t have to chase the issue around. So if an issue manifests in my project, but the fault lies in upstream code, it’s my responsibility to get it fixed, not the user’s.

So now that I’ve typed all this out, I suppose I would argue that issues should be enabled on src.fedoraproject.org and that it’s the package maintainer’s responsibility to make the connection to Bugzilla where required.

Open Source Leadership Summit 2019

Ed. note: my employer is a member of the Linux Foundation. The views in this post are — as are all posts on this site — my personal views and do not represent my employer or any other organizations. You knew this already, but I thought it would be good to remind you.

Last week, after leaving SCaLE, I headed to Half Moon Bay, California for the Open Source Leadership Summit. It’s an invitation-only conference run by the Linux Foundation. This year it was held at the Ritz-Carlton Half Moon Bay, a very nice resort hotel with an ocean view. This was dramatically different from most tech conferences I’ve attended previously. That difference was the source of some internal struggle I wrestled with. I’ll get to that momentarily, but first my more general thoughts.

This is clearly not a typical tech conference. The number of technical sessions was pretty low, with a greater focus on topics like marketing, balancing corporate & community interests, mentoring, et cetera. Given that the target audience is leaders of corporate open source efforts, this makes sense. The talks I attended were good, with Jim Perrin’s “Damaging your project with management (and leadership)” as my clear favorite. The downside is that with 30-minute time slots, I didn’t feel like there was ever enough time to really get deep into any of the topics. That may have suited the state my attention span was in after over a week of travel and conferences, but it’s not great for getting a lot of value out of the talks.

The marketing panel that I was on was well-received. I thought we did well. Panels can be terrific or terrible, and I’d like to think we were closer to the terrific end of the spectrum. All of the panelists had different, complementary experience, so we were able to give non-repetitive answers. And we all kept our answers pretty short so that the conversation could flow. The audience was into it, as well, which always helps. And I’m glad to say that panelists named “Jennifer” outnumbered men.

So now for my internal struggle. I saw the conference referred to as “open source Davos” and it’s hard to disagree with that. Jessie Frazelle was unrestrained in her criticism.

Coming on the heels of Bradley Kuhn’s “It’s a Wonderful Life” analogy at SCaLE, this criticism really stuck out to me. Yes, I get paid well to work on an open source project, but I still live in a world where staying at a resort like the Ritz-Carlton Half Moon Bay leaves me wide-eyed. How many tech conference trips could we have funded with the budget for lunches? How much test gear could be provided for the cost of the evening socials? Why is it called “Open Source Leadership Summit” when the leaders of major open source projects aren’t invited to attend?

Well, the answer to that question is “this isn’t the sort of leadership we meant”. A friend said this is the sort of event that corporate execs and senior management attend. If you want to get your message to them, that’s what you have to do. I understand that argument, and the practical side of me gets it. The ideological side of me says “well then we’ll do it without them!”

The Linux Foundation and similar foundations are trade associations, not charities. They’re not obligated to act in the public good. Maybe they could stand to do a little more of that, obligated or not. But ultimately, they’re doing what trade associations do: they advance their corporate interests in the way they see fit. If we want to redirect them toward community benefit, maybe pitching talks that give the message we want them to hear is the approach to take. Or maybe that’s just what I’ll tell myself to justify going on a junket.

SCaLE 17x

Last week, I attended the 17th annual Southern California Linux Expo (SCaLE 17x). SCaLE is a conference that I’ve wanted to go to for years, so I’m glad I finally made it. Located at the Pasadena Convention Center, it’s a short walk from nearby hotels, restaurants, and a huge independent bookstore. Plus the weather in southern California almost always beats Indiana — particularly in March.

Having done this a few times before, the SCaLE organizers know how to put on a good event. Code of Conduct information, including contacts, is prominently posted right as you walk in the door. Staff walk around with t-shirts that sport the WiFi information. The break between sessions is 30 minutes, which allows ample time to get from one to another without having to brush people aside if you meet them in the hallway. It was an incredibly well-run conference.

I ended up in the “mentoring” track most of the weekend, which I suppose indicates where I am in this point of my career. “Mentoring” may not be the right word, though. The talks in that room covered being a community organizer, developer advocacy, and a lot about mental health. Quite a bit about mental health, in fact. It’s probably a good thing that we’re discussing these topics more openly at conferences.

The talk that stuck with me the most, though, was one I saw on Sunday afternoon. Bradley Kuhn wondered “if open source isn’t sustainable, maybe free software is.” Bradley compared the budgets and the output of large corporate-backed foundations and smaller projects like phpMyAdmin. I’ll go deeper on that later, either when I recap the Open Source Leadership Summit or in a standalone post.

Bradley also used an “It’s a Wonderful Life” analogy, which is very much my kind of analogy. This may become a longer post at some point, but the general idea is that we have a lot of Sam Wainwrights in the world: people who are willing to throw money at a problem (perhaps with strings attached). Despite being well-meaning, they’re not actually doing that much to help. What we need is more George Baileys: people doing the small but critical work in their communities to help them thrive.

SCaLE was a terrific conference, and I’m looking forward to going back in the future. Especially now that I’ve learned my way around the food scene a little bit.

CopyleftConf was great, you should go next year

Two weeks ago, I was fortunate to attend the inaugural Copyleft Conference. It was held in Brussels, Belgium the day after FOSDEM. Since I was in town anyway, I figured I should just extend my trip by a day to attend this conference. I couldn’t be happier that I did.

Software licensing doesn’t get as much discussion at conferences as it probably should. And among the talks that do happen, copyleft licenses specifically get only a portion of that. But with major projects like the Linux kernel using copyleft licenses — and the importance of copyleft principles to open source software generally — the Software Freedom Conservancy decided that a dedicated conference was in order.

I was impressed with how well-organized and well-attended the conference was for a first try. The venue was excellent, apart from some acoustic issues in the main room. The schedule was terrific: three rooms all day, each filled with talks from the world’s leading experts. I commented to a friend that if the building were to collapse, 80% of the world’s copyleft expertise would disappear.

For me, some of the excitement was just being around all of those people.

Molly deBlanc’s keynote was simultaneously inspiring and disturbing. She spoke of how software freedom matters to everyone, but how it matters to marginalized people in different ways. Ad networks can expose that someone at risk is seeking help. “Smart” homes can be used by domestic abusers to torment their victims. The transparency that free software brings isn’t just a nice-to-have, it can materially impact people’s lives.

The other session that was particularly interesting to me was Chris Lamb’s discussion of the Commons Clause. Chris was more focused on the response of the community to Redis Labs’ decision to adopt it than the Commons Clause itself. He viewed Redis Labs’ decision to adopt and subsequent refusal to abandon the Commons Clause as a failure of the copyleft community to make a compelling argument. Drawing on the work of Aristotle, Chris argued that we, as interested and knowledgeable parties, should have done a better job making our case. The question, of course, is who the “we” is that Chris is exhorting. This is a particularly key question for his advice to proactively address the concerns of companies.

Some of the other talks focused more directly on adapting to a new environment. Version 3 of the GNU General Public License was published in 2007. At the time, Amazon Web Services (as we currently know it) was just over a year old. The original iPhone was released on the same day. While the principles behind the GPLv3 haven’t changed, the reality of how we use software has changed dramatically. Van Lindberg’s talk on a new license he’s drafting for a client explored what copyleft looks like in 2019. And Alexios Zavras noted that the requirements to provide source code don’t necessarily apply as-written anymore.

In addition to meeting some new friends and idols, I was also able to spend some time with friends that I don’t get to see often enough. I’m already looking forward to CopyleftConf 2020.

What’s the future of Linux distributions?

“Distros don’t matter anymore” is a bold statement to make for someone who is paid to work on a Linux distro. Fortunately, I’m not making that statement. At least not exactly.

Distros still matter. But it’s fair to say that they matter in a different way than they did in the past. Like lava in a video game, abstractions slowly-but-inexorably move up the stack. For the entirety of their existence, effectively, Linux distributions have focused on producing operating systems (OSes) with some userspace applications. But the operating system is changing.

For one, OS developers have been watching each other work and taking inspiration for improvement. Windows is not macOS is not Linux, but they all take what they see as the “best” features of others and try to incorporate them. And with things like Windows Subsystem for Linux, the lines are blurring.

Applications are helping in this regard, too. Not everything is written in C and C++ anymore. Many applications are being developed in languages like Python, Ruby, and Java, where the application developer mostly doesn’t have to care about the OS. Which means the user doesn’t either. And of course, so much of what the average user does on their computer runs out of the web browser these days. The vast majority of my daily computer usage can be done on any modern OS, including Android.

With the importance of the operating system itself diminishing, distros can choose to either remain unchanged and watch their importance diminish or they can evolve to add new relevance.

This is all background for many conversations and presentations I heard earlier this month at the FOSDEM conference in Brussels. The first day of FOSDEM I spent mostly in the Fedora booth. The second day I was working the distro dev room. Both days had a lot of conversations about how distros can stay relevant — not in those words, but certainly in spirit.

The main theme was the idea of changing how the OS is managed and updated. The idea of the OS state as a git tree is interesting. Fedora’s Silverblue desktop and openSUSE Kubic are two leading examples.

So is this the future of Linux distributions? I don’t know. What I do know is that distributions must change to keep up with the world. This change should be in a way that makes the distro more obviously valuable to users.

Can your bug tracker and support tickets be the same system?

I often see developers, both open source and proprietary, struggle with trying to use bug trackers as support tools (or sometimes support tools as bug trackers). I can see the appeal, since support issues often tie back to bugs and it’s simpler to have one thing than two. But the problem is that they’re really not the same thing, and I’m not sure there’s a tool that does both well.

In 2014 (which is when I originally added this idea to my to-do list according to Trello), Daniel Pocock wrote a blog post that addresses this issue. Daniel examined several different tools in this space and looked at trends and open questions.

My own opinions are colored by a few different things. First, I think about a former employer. The company originally used FogBugz for both bug tracking and customer support (via email). By the time I joined, the developers had largely moved off FogBugz for bug tracking, leaving us using a tool designed as a bug tracker for our customer support efforts. Since customers largely interacted via email, it didn’t particularly matter what the system was.

On the other hand, because it was designed as a bug tracker, it lacked some of the features we wanted from a customer support tool. Customers couldn’t log in and view dashboards, so we had to manually build the reports they wanted and send them via email. And we couldn’t easily build a knowledge base into it, which reduced the ability for customers to get answers themselves more quickly. Shortly before I changed jobs, we began the process of moving to ZenDesk, which provided the features we needed.

The other experience that drove this was serving as a “bug concierge” on an open source project I used to be active in. Most of the user support happened via mailing list, and occasionally a legitimate bug would be discovered. The project’s Trac instance required the project team to create an account. Since I already had an account, I’d often file bugs on behalf of people. I also filed bugs in Fedora’s bugzilla instance when the issue was with the Fedora package specifically.

What I took away from these experiences is that bug trackers that are useful to developers are rarely useful to end users. Developers (or their managers) benefit from having a lot of metadata that can be used to filter and report on issues. But a large number of fields to fill in can overwhelm users. They want to be able to say what’s wrong and be told how to fix it.

In order for a tool to work as both a bug tracker and ticket system, the developer-focused metadata should probably be visible only to developers. But the better solution is probably separate tools that integrate with each other.

GitHub’s new status feature

Two weeks ago, GitHub added a new feature for all users: the ability to set a status. I’m in favor of this. First, it appeals to my AOL Instant Messenger nostalgia. Second, I think it provides a valuable context for open source projects. It allows maintainers to say “hey, I’m not going to be very responsive for a bit”. In theory, this should let people filing issues and pull requests not get so angry if they don’t get a quick response.

Jessie Frazelle described it as the “cure for open source guilt”.

I was surprised at the amount of blowback this got. (See, for example, the replies to Nat Friedman’s tweet.) Some of the responses are of the dumb “oh noes, you’re turning GitHub into a social media platform. It should be about the code!” variety. To those people I say “fine, don’t use this feature.” Others raise a point about not advertising being on vacation.

I’m sympathetic to that. I’m generally pretty quiet about the house being empty on public or public-ish platforms. It’s a good way to advertise yourself to vandals and thieves. To be honest, I’m more worried about something like Nextdoor where the users are all local than GitHub where anyone who cares is probably a long way away. Nonetheless, it’s a valid concern, especially for people with a higher profile.

I agree with Peter that it’s not wise to set expectations for maintainers to share their private details. That said, I do think it’s helpful for maintainers to let their communities know what to expect from them. There are many reasons that someone might need to step away from their project for a week or several. A simple “I’m busy with other stuff and will check back in on February 30th” or something to that effect would accomplish the goal of setting community expectations without being too revelatory.

The success of this feature will rely on users making smart decisions about what they choose to reveal. That’s not always a great bet, but it does give people some control over the impact. The real question will be: how much do people respect it?

Inclusion is a necessary part of good coding

Too often I see comments like “some people would rather focus on inclusion than write good code.” Not only is that a false dichotomy, but it completely misrepresents the relationship between the two. Inclusion doesn’t come at the cost of good code, it’s a necessary part of good code.

We don’t write code for the sake of writing code. We write code for people to use it in some way. This means that the code needs to work for the people. In order to do that, the people designing and implementing the technology need to consider different experiences. The best way to do that is to have people with different experiences be on the team. As my 7th grade algebra teacher was fond of reminding us: garbage in, garbage out.

But it’s not like the tech industry has a history of bad decision making. Or soap dispensers not working with dark-skinned people. Or identifying black people as gorillas. Or voice recognition not responding to female voices. What could go wrong with automatically labeling “suspicious” people?

I’ll grant that not all of these issues are with the code itself. In fact a lot of it isn’t the code, it’s the inputs given to the code. So when I talk about “good coding” here, I’m being very loose with the wording as a shorthand for the technology we produce in general. The point holds because the code doesn’t exist in a vacuum.

It’s not just about the outputs and real-world effects of what we make. There’s also the matter of wanting to build the best team. Inclusion opens you up to a broader base of talent that might otherwise self-select out.

Being inclusive takes effort. It sometimes requires self-examination and facing unpleasant truths. But it makes you a better person and if you don’t care about that, it makes your technology better, too.