Linux distros should be opinionated

Last week, the upstream project for a package I maintain was discussing whether or not to enable autosave in the default configuration. I said that if the project doesn’t, I may consider making that the default in the Fedora package. Another commenter asked “is it a good idea to have different default settings per packaging ? (ubuntu/fedora/windows)”

My take? Absolutely yes. As I said in the post on “rolling stable” distros, a Linux distribution is more than an assortment of packages; it is a cohesive whole. This necessarily requires changes to upstream defaults.

Changes to enable a functional, cohesive whole are necessary, of course. But there’s more to it than “it works”; there’s also “it works the way we think it should.” A Linux distribution targets a certain audience (or audiences), and distribution maintainers have to make choices so that the distro meets that audience’s needs. They are not mindless build systems.

Of course, opinions do have a cost. If a particular piece of software works differently from one distro to another, users get confused. Documentation may be wrong, sometimes harmfully so. Upstream developers may have trouble debugging issues if they are not familiar with the distro’s changes.

Thus, opinions should be implemented judiciously. But when a maintainer has given a change due thought, they should make it.

What do “rolling release” and “stable” mean in the context of operating systems?

In a recent post on his blog, Chris Siebenmann wrote about his experience with Fedora upgrades and how, because of some of the non-standard things he does, upgrades are painful for him. At the end, he said “What I really want is a rolling release of ‘stable’ Fedora, with no big bangs of major releases, but this will probably never exist.”

I’m sympathetic to that position. Despite the fact that developers have worked to improve the ease of upgrades over the years, they are inherently risky. But what would a stable rolling release look like?

“Arch!” you say. That’s not wrong, but it also misses the point. What people generally want is new stuff, so long as it doesn’t cause surprises. Rolling releases don’t prevent surprises; they spread them out. With Fedora’s policy, for example, major changes (should) happen while the release is being developed. Once it’s out, you get bugfixes and minor enhancements, but no big changes. You get the stability.

On the other hand, you can run Fedora Rawhide, which gets you the new stuff as soon as it’s available, but you don’t know when the big changes will come. And sometimes, the changes (big and little) are broken. It can be nice because you get the newness quickly. And the major changes (in theory) don’t all come at once.

Rate of change versus total change

For some people, it’s the distribution of change, not the total amount of change, that makes rolling releases compelling. And in most cases, the changes aren’t that dramatic. When updates are loosely coupled or totally independent, the timing doesn’t matter. The average user won’t even notice the vast majority of them.

But what happens when a really monumental change comes in? Switching the init system, for example, is kind of a big deal. In this case, you generally want the integration that most distributions provide. It’s not just that you get an assortment of packages from your distribution, it’s that you get a set of packages that work together. This is a fundamental feature for a Linux distribution (excepting those where do-it-yourself is the point).

Applying it to Fedora

An alternate phrasing of what I understand Chris to want is “release-quality packages made available when they’re ready, not on the release schedule.” That’s perfectly reasonable. And in general, that’s what Fedora wants Rawhide to be. It’s something we’re working on, particularly with the ability to gate Rawhide updates.

But part of why we have defined releases is to ensure the desired stability. The QA team and other testers put a lot of effort into automated and manual tests of releases. It’s hard to test against the release criteria when the target keeps shifting. It’s hard to make the distribution a cohesive whole instead of a collection of packages.

What Chris asks for isn’t wrong or unreasonable. But it’s also a difficult task to undertake and sustain. This is one area where ostree-based variants like Fedora CoreOS (for servers/cloud), Silverblue (for desktops), and IoT (for edge devices) bring a lot of benefit. The big changes can be easily rolled back if there are problems.
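On those systems, rollback is a built-in operation rather than a recovery project. A sketch of what that looks like with rpm-ostree (the client Silverblue and friends use), run as root; I’m going from the documented commands, so treat this as illustrative:

```shell
rpm-ostree status      # show the booted deployment and the previous one
rpm-ostree rollback    # make the previous deployment the default boot entry
systemctl reboot       # boot back into the pre-update state
```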

Removing unmaintained packages from an installed system

Earlier this week, Miroslav Suchý proposed removing retired packages as part of the Fedora upgrade process (editor’s note: the proposal was withdrawn after community feedback). As it stands right now, if a package is removed in a subsequent release, it will stick around. For example, I have 34 packages on my work laptop from Fedora 28 (the version I first installed on it) through Fedora 31. The community has been discussing this, with no clear consensus.
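For the curious, this is what dnf list extras reports: installed packages that no longer exist in any enabled repository. Conceptually it’s just a set difference, sketched here with comm and some hypothetical package lists:

```shell
# Hypothetical package lists: what's installed vs. what the repos still carry.
printf 'bar\nfoo\nold-tool\n' > installed.txt
printf 'bar\nfoo\n' > available.txt

# comm needs sorted input; lines only in installed.txt are the
# "retired but still installed" packages.
comm -23 installed.txt available.txt
```

On my laptop, that difference is those 34 leftover packages.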

I’m writing this post to explore my own thoughts. It represents my opinions as Ben Cotton: Fedora user and contributor, not as Ben Cotton: Fedora Program Manager.

What does it mean for a package to be “maintained”?

This question is the heart of the discussion. In theory, a maintained package means that there’s someone who can apply security and other bug fixes, update to new releases, etc. In practice, that’s not always the case. Anyone who has had a bug closed due to the end-of-life policy will attest to that.

The practical result is that as long as the package continues to compile, it may live on for a long time after the maintainer has given up on it. This doesn’t mean that it will get updates, it just means that no one has had a reason to remove it from the distribution.

On the other hand, the mere fact that a package has been dropped from the distribution doesn’t mean that something is wrong with it. If upstream hasn’t made any changes, the “unmaintained” version is just as functional as a maintained version would be.

What is the role of a Linux distribution?

Why do Linux distributions exist? After all, people could just download the software and build it themselves. But that’s asking a lot of most people. Even among those who have sufficient technical knowledge to compile all of the different packages in different languages with different quirks, few have the time or desire to do so.

So a distribution is, in part, a sharing of labor. By dividing the work, we reduce our own burden and democratize access.

A distribution is also a curated collection. It’s the set of software that the contributors say is worth using, configured in the “right way”. Sure, there are a dozen or so web browsers in the Fedora repos, but that’s not the entirety of web browsers that exist. Just as an art museum may have several similar paintings, a distribution might have several similar packages. But they’re all there for a reason.

To remove or not to remove?

The question of whether to remove unmaintained packages then becomes a balance between the shared labor and the curation aspects of a distribution.

The shared labor perspective supports not removing packages. If the package is uninstalled at update, then someone who relies on that package now has to download and build it themselves. It may also cause user confusion if something that previously worked suddenly stops, or if a package that exists on an upgraded system can’t be installed on a new one.

On the other hand, the curation perspective supports removing the package. Although there’s no guarantee that a maintained package will get updates, there is a guarantee that an unmaintained package won’t. Removing obsolete packages at upgrade also means that the upgraded system more closely resembles a freshly-installed system.

There’s no right answer. Both options are reasonable extensions of fundamental purposes of a distribution. Both have obvious benefits and drawbacks.

Pick a side, Benjamin

If I have to pick a side, I’m inclined to side with the “remove the packages” argument. But we have to make sure we’re clearly communicating what is happening to the user. We should also offer an easy opt-out for users who want to say “I know what you’re trying to do here, but keep these packages anyway.”

The Linux desktop is not in trouble

Writing for ZDNet earlier this month, Steven J. Vaughan-Nichols declared trouble for the Linux desktop. He’s wrong.

Or maybe not. Maybe we’re just looking at different parts of the elephant. sjvn’s core argument, if I may sum it up, is that fragmentation is holding back the Linux desktop. Linux can’t gain significant traction in the desktop market because there are just so many options. This appeals to computer nerds, but leads to confusion for general users who don’t want to care about whether they’re running GNOME or KDE Plasma or whatever.

Fragmentation

I’m sympathetic to that argument. When I was writing documentation for Fedora, we generally wrote instructions for GNOME, since that was the default desktop. Fedora users can also choose from KDE Plasma, LXQt, and Xfce spins, and can install other desktop environments besides. If someone installs KDE Plasma because that’s what their friend gave them, will they be able to follow the documentation? If not, will they get frustrated and move back to Windows or macOS?

Even if they stick it out, there are two large players in the GUI toolkit world: GTK and Qt. You can use an app written in one in a desktop environment written in the other, but it doesn’t always look very good. And the configuration settings may not be consistent between apps, which is also frustrating.

Corporate indifference

Apart from that, sjvn also laments the lack of desktop effort from major Linux vendors:

True, the broad strokes of the Linux desktop are painted primarily by Canonical and Red Hat, but the desktop is far from their top priority. Instead, much of the nuts and bolts of the current generation of the Linux desktop is set by vendor-related communities: Red Hat, Fedora, SUSE’s openSUSE, and Canonical’s Ubuntu.

I would argue that this is the way it should be. As he notes in the preceding paragraph, the focus of revenue generation is on enterprise servers and cloud. There are two reasons for that: that’s where the customer money is and enterprises don’t want to innovate on their desktops.

I’ll leave the first part to someone else, but I think the “enterprises don’t want to innovate on their desktops” part is important. I’ve worked at and in support of some large organizations and in all cases, they didn’t want anything more from their desktops than “it allows our users to run their business applications in a reliable manner”. Combine this with the tendency of the enterprise to keep their upgrade cycles long and it makes no sense to keep desktop innovation in the enterprise product.

Community distributions are generally more focused on individuals or small organizations who may be more willing to accept disruptive change as the paradigm is moved forward. This is true beyond the desktop, too. Consider changes like the adoption of systemd or replacing yum with dnf: these also appeared in the community distributions first, but I didn’t see that used as a case for “enterprise Linux distributions are in trouble.”

What’s the answer?

Looking ahead, I’d love to see a foundation bring together the Linux desktop community and have them hammer out a common desktop for everyone. Yes, I know, I know. Many hardcore Linux users love having a variety of choices. But the world is not made up of desktop Linux users. For the million or so of us, there are hundreds of millions who want an easy-to-use desktop that’s not Windows, doesn’t require buying a Mac, and comes with broad software and hardware support.

Setting aside the XKCD #927 argument, I don’t know that this is an answer. Even if the major distros agreed to standardize on the same desktop (and with Ubuntu returning to GNOME, that’s now the case), that won’t stop effort on other desktops. If the corporate sponsors don’t invest any effort, the communities still will. People will use whatever is provided to them in the workplace, so presenting a single standard desktop to consumers will rely on the folks who make the community distributions to agree to that. It won’t happen.

But here’s the crux of my disagreement with this article. The facts are all correct, even if I disagree with the interpretation of some of them. The issue is that we’re not looking at the success of the Linux desktop in the same way.

If you define “Linux desktop” as “a desktop environment that runs the Linux kernel”, then ChromeOS is doing quite well, and will probably continue to grow (unless Google gets bored with it). In that case, the Linux desktop is not in trouble, it’s enjoying unprecedented success.

But when most people say “Linux desktop”, they think of a traditional desktop model. In this case, the threat to Linux desktops is the same as the threat to Windows and macOS: desktops matter less these days. So much computing, particularly for consumers, happens in the web browser when it’s done on a PC at all.

Rethinking the goal

This brings me back to my regular refrain: using a computer is a means, not an end. People don’t run a desktop environment to run a desktop environment, they run a desktop environment because it enables them to do the things they want to do. As those things are increasingly done on mobile or in the web browser, achieving dominant market share for desktops is no longer a meaningful goal (if, indeed, it ever was).

Many current Linux desktop users are (I guess) motivated at least in part by free software ideals. This is not a mainstream position. Consumers will need more practical reasons to choose any Linux desktop over the proprietary OS that was shipped by the computer’s manufacturer.

With that in mind, the answer isn’t standardization; it’s making the experience better. Fedora Silverblue and openSUSE Kubic are efforts in that direction. Using those as a base, with Flatpaks to distribute applications, the need for standardization at the desktop environment level decreases because users are mostly interacting with the application level, one step above.

The usual disclaimer applies: I am a Red Hat employee who works on Fedora. The views in this post are my own and not necessarily the views of Red Hat, the Fedora Council, or anyone else. They may not even be my views by the time you read this.

What’s the future of Linux distributions?

“Distros don’t matter anymore” is a bold statement for someone who is paid to work on a Linux distro to make. Fortunately, I’m not making that statement. At least not exactly.

Distros still matter. But it’s fair to say that they matter in a different way than they did in the past. Like lava in a video game, abstractions slowly but inexorably move up the stack. For effectively the entirety of their existence, Linux distributions have focused on producing operating systems (OSes) with some userspace applications. But the operating system is changing.

For one, OS developers have been watching each other work and taking inspiration for improvement. Windows is not macOS is not Linux, but they all take what they see as the “best” features of others and try to incorporate them. And with things like Windows Subsystem for Linux, the lines are blurring.

Applications are helping in this regard, too. Not everything is written in C and C++ anymore. Many applications are being developed in languages like Python, Ruby, and Java, where the application developer mostly doesn’t have to care about the OS. Which means the user doesn’t either. And of course, so much of what the average user does on their computer runs out of the web browser these days. The vast majority of my daily computer usage can be done on any modern OS, including Android.

With the importance of the operating system itself diminishing, distros can choose to either remain unchanged and watch their importance diminish or they can evolve to add new relevance.

This is all background for many conversations and presentations I heard earlier this month at the FOSDEM conference in Brussels. The first day of FOSDEM I spent mostly in the Fedora booth. The second day I was working the distro dev room. Both days had a lot of conversations about how distros can stay relevant — not in those words, but certainly in spirit.

The main theme was the idea of changing how the OS is managed and updated. The idea of the OS state as a git tree is interesting. Fedora’s Silverblue desktop and openSUSE Kubic are two leading examples.

So is this the future of Linux distributions? I don’t know. What I do know is that distributions must change to keep up with the world. This change should be in a way that makes the distro more obviously valuable to users.

Linus’s awakening

It may be the biggest story in open source in 2018, a year that saw Microsoft purchase GitHub. Linus Torvalds replaced the Code of Conflict for the Linux kernel with a Code of Conduct. In a message on the Linux Kernel Mailing List (LKML), Torvalds explained that he was taking time off to examine the way he led the kernel development community.

Torvalds has taken a lot of flak for his style over the years, including on this blog. While he has done an excellent job shepherding the technical development of the Linux kernel, his community management has often — to put it mildly — left something to be desired. Abusive and insulting behavior is corrosive to a community, and Torvalds has spent the better part of the last three decades enabling and partaking in it.

But he has seen the light, it would seem. To an outside observer, this change is rather abrupt, but it is welcome. Reaction to his message has been mixed. Some, like my friend Jono Bacon, have advocated supporting Linus in his awakening. Others take a more cynical approach:

I understand Kelly’s position. It’s frustrating to push for a more welcoming and inclusive community, be met with insults for it, and then watch everyone celebrate when someone finally comes around. Kelly and others who feel like her are absolutely justified in their position.

For myself, I like to think of it as a modern parable of the prodigal son. As tempting as it is to reject those who awaken late, a late awakening is better than none at all. If Linus fails to follow through, it would be right to excoriate him. But if he does follow through, it can only improve the community around one of the most important open source projects. And it will set an example for other projects to follow.

I spend a lot of time thinking about community, particularly since I joined Red Hat as the Fedora Program Manager a few months ago. Community members — especially those in a highly-visible role — have an obligation to model the kind of behavior the community needs. This sometimes means a patient explanation when an angry rant would feel better. It can be demanding and time-consuming work. But an open source project is more than just the code; it’s also the community. We make technology to serve the people, so if our communities are not healthy, we’re not doing our jobs.

Installing Lubuntu on a Dell Mini 9

My six year old daughter has shown interest in computers. In 2016, we bought a Kano for her and she loves it. So I decided she might like to have her own laptop. We happened to have a Dell Mini 9 from 2011 or so that we’re not using anymore. I figured that would be a good Christmas present for her.

Selecting the OS

The laptop still had the Ubuntu install that shipped with it. I could have kept using that, but I wanted to start with a clean install. I use Fedora on our other machines, so I wanted to try that. Unfortunately, Fedora decided to drop 32-bit support since the community effort was not enough to sustain it.

I tried installing Kubuntu, a KDE version of Ubuntu. However, the “continue” button in the installer’s prepare step would never become active. Some posts on AskUbuntu suggested annoying it into submission with repeated clicks, or not pre-downloading updates. Neither of these options worked.

After a time, I gave up on Kubuntu. Given the relatively low power of the laptop, I figured KDE Plasma might be too heavy anyway. So I decided to try Lubuntu, an Ubuntu variant that uses LXDE.

Installing Lubuntu

With Lubuntu, I was able to proceed all the way through the installer. I still had the “continue” button issue, but so long as I didn’t select the download updates option, it worked. Great success! But when it rebooted after the install, the display was very, very wrong. It was not properly scaled and the text was impossible to read. Fortunately, I was not the first person to have this problem, and someone else had a solution: setting the resolution in the GRUB configuration.

I had not edited a GRUB configuration in a long time. In fact, the last time I did, GRUB 2 was still in the future, so I had to find instructions. Once again, AskUbuntu had the answer. I already knew what I needed to add; I had just forgotten how to update the configuration appropriately.
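For anyone hitting the same scaled-wrong display, the fix amounts to pinning the video mode in /etc/default/grub and regenerating the config. A sketch; the 1024x600 mode matches the Mini 9’s panel, so adjust it for other hardware:

```shell
# /etc/default/grub
GRUB_GFXMODE=1024x600        # mode GRUB itself uses
GRUB_GFXPAYLOAD_LINUX=keep   # tell the kernel to keep that mode at boot

# then regenerate the config (Ubuntu-family command):
sudo update-grub
```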

Up until this point, I had been using the wired Ethernet connection, but I wanted my daughter to be able to use the Wi-Fi network. So I had to install the Wi-Fi drivers for the card. Lastly, I disabled IPv6 (which I have since done at the router). Happily, the webcam and audio worked with no effort.

What I didn’t do

Because I hate myself, I still haven’t set up Ansible to manage the basics of the configuration across the four Linux machines we use at home. I had to manually create the users. Since my daughter is just beginning to explore computers, I didn’t have a lot of software I needed to install. The web browser and office suite are already there, and that’s all she needs at the moment. This summer we’ll get into programming.

All done

I really enjoyed doing this installation, despite the frustrations I had with the Kubuntu installer. When I got my new ASUS laptop a few months ago, everything worked out of the box. There was no challenge. This at least provided a little bit of an adventure.

I’m pleasantly surprised how well it runs, too. My also-very-old HP laptop, which has much better specs on paper, is much more sluggish. Even switching from KDE to LXDE on it doesn’t help much. But the Mini 9 works like a charm, and it’s a good size for a six year old’s hands.

After only a few weeks, the Wi-Fi card suddenly failed. I bought a working system on eBay for about $35 and pulled the Wi-Fi card out of that. I figure as old as the laptop is, I’ll want replacement parts again at some point. But so far, it is serving her very well.

HP laptop keyboard won’t type on Linux

Here’s another story from my “WTF, computer?!” files (and also my “oh I’m dumb” files).

As I regularly do, I recently updated my Fedora machines. This includes the crappy HP 2000-2b30DX Notebook PC that I bought as a refurb in 2013. After dnf finished, I rebooted the laptop and put it away. Then while I was at a conference last week, my wife sent me a text telling me that she couldn’t type on it.

When I got home I took a look. Sure enough, the keyboard didn’t type. But it was weirder than that. I could type in the decryption password for the hard drive at the beginning of the boot process. And when I attached a wireless keyboard, I could type. Knowing the hardware worked, I dropped to runlevel 3. The built-in keyboard worked then.

I tried applying the latest updates, but that didn’t help. Some internet searching led me to Freedesktop.org bug 103561. Running dnf downgrade libinput and rebooting gave me a working keyboard again. The bug is closed as NOTABUG, since the maintainers say it’s an issue in the kernel, which is fixed in the 4.13 kernel release. So I checked to see if Fedora 27, which was released last week, includes the 4.13 kernel. It does, and so does Fedora 26.

That’s when I realized I still had the kernel package excluded from dnf updates on that machine because of a previous issue where a kernel update caused the boot process to hang while/after loading the initrd. I removed the exclusion, updated the kernel, and re-updated libinput. After a reboot, the keyboard still worked. But if you’re using a kernel version from 4.9 to 4.12, libinput 1.9, and an HP device, your keyboard may not work. Update to kernel 4.13 or downgrade libinput. (Or replace your hardware; I would not recommend the HP 2000 Notebook. It is not good.)
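If you’re stuck on an affected kernel, the downgrade-and-hold approach looks roughly like this. The versionlock plugin is my assumption of the usual way to pin a package in dnf, so double-check it on your release:

```shell
sudo dnf downgrade libinput                    # back to the pre-1.9 build
sudo dnf install 'dnf-command(versionlock)'    # plugin that can pin versions
sudo dnf versionlock add libinput              # keep updates from re-breaking it
```

Remember to remove the lock once you’re on a fixed kernel, or you’ll forget about it like I forgot about my kernel exclusion.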

Using the ASUS ZenBook for Fedora

I recently decided that I’d had enough of the refurbished laptop I bought four years ago. It’s big and heavy and slow and sometimes the fan doesn’t work. I wanted something more portable and powerful enough that I could smoothly scroll the web browser. After looking around for good Linux laptops, I settled on the ASUS ZenBook.

Installation

The laptop came with Windows 10 installed, but that’s not really my jam. I decided to boot off a Fedora 26 KDE live image first just to make sure everything worked before committing to installing. Desktop Linux has made a lot of progress over the years, but you never know which hardware might not be supported. As it turns out, that wasn’t a problem. WiFi, Bluetooth, webcam, speakers, etc all worked out of the box.

It’s almost disappointing in a sense. There used to be some challenge in getting things working, but now it’s just install and go. This is great overall, of course, because it means Linux is more accessible to new users and it’s less crap I have to deal with when I just want my damn computer to work. But there’s still a little bit of the nostalgia for the days when configuring X11 by hand was something you had to do.

Use

I’ve had the laptop for a little over a month now. I haven’t put it through quite the workout I’d hoped to, but I feel like I’ve used it enough to have an opinion at this point. Overall, I really like it. The main problem I have is that the trackpad has a middle-click, which is actually pretty nice except for when I accidentally use it. I’ve closed many a browser tab because I didn’t move my thumb far enough over. That’s probably something I can disable in the settings, but I’d rather learn my way around it.

The Bluetooth has been flaky transferring files to and from my phone, but audio is… well, I’ve never found Bluetooth audio to be particularly great, but it works as well as anything else.

One other bit of trouble I’ve had is with my home WiFi. I bought a range extender so that I can use WiFi on the back deck, and set it to use the same SSID as the main router. The directions said you can do this, but that it might cause problems. With this laptop, the WiFi connection becomes unusable after a short period of time. Turning off the range extender fixes it, and I’ve had no other problems on other networks, so I guess I know what I have to do.

One thing that really stood out to me is carrying it around in a backpack. This thing is light. I had a few brief moments of panic thinking I had left it behind. I’ve held lighter laptops, but this is a good weight. But don’t worry about the lightness; it still has plenty of electrons to give it good battery life.

Around the same time I bought this, I got a new MacBook Pro for work. When it comes to typing, I like the keyboard on the ZenBook way better than the new MacBook keyboards.

Recommendation

If you’re looking for a lightweight Linux laptop that can handle general development and desktop applications, the ASUS ZenBook is a great choice. Shameless commercialism: If you’re going to buy one, maybe use this here affiliate link? Or don’t. I won’t judge you.

Disappearing WiFi with rt2800pci

I recently did a routine package update on my Fedora 24 laptop. I’ve had the laptop for three years and have been running various Fedorae the whole time, so I didn’t think much of it. So it came as some surprise to me when after rebooting I could no longer connect to my WiFi network. In fact, there was no indication that any wireless networks were even available.

Since the update included a new kernel, I thought that might be the issue. Rebooting into the old kernel seemed to fix it (more on that later!), so I filed a bug, excluded kernel packages from future updates, and moved on.
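For reference, excluding the kernel can be done per invocation or persistently; this is the shape of it as I remember it:

```shell
# One-off: skip kernel packages for a single update
sudo dnf upgrade --exclude='kernel*'

# Persistent: add to the [main] section of /etc/dnf/dnf.conf
#   exclude=kernel*
```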

But a few days later, I rebooted and my WiFi was gone again. The kernel hadn’t updated, so what could it be? I spent a lot of time flailing around until I found a “solution”: a four-year-old forum post said don’t reboot. Booting from powered off, or suspending and resuming the laptop, will cause the wireless to work again.

And it turns out, that “fixed” it for me. A few other posts seemed to suggest power management issues in the rt2800pci driver. I guess that’s what’s going on here, though I can’t figure out why I’m suddenly seeing it after so long. Seems like a weird failure mode for failing hardware.

Here’s what dmesg and the systemd journal reported:

Aug 01 14:54:24 localhost.localdomain kernel: ieee80211 phy0: rt2800_wait_wpdma_ready: Error - WPDMA TX/RX busy [0x00000068]
Aug 01 14:54:24 localhost.localdomain kernel: ieee80211 phy0: rt2800pci_set_device_state: Error - Device failed to enter state 4 (-5)

Hopefully, this post saves someone else a little bit of time in trying to figure out what’s going on.