The Linux desktop is not in trouble

Writing for ZDNet earlier this month, Steven J. Vaughan-Nichols declared trouble for the Linux desktop. He’s wrong.

Or maybe not. Maybe we’re just looking at different parts of the elephant. sjvn’s core argument, if I may sum it up, is that fragmentation is holding back the Linux desktop. Linux can’t gain significant traction in the desktop market because there are just so many options. This appeals to computer nerds, but leads to confusion for general users who don’t want to care about whether they’re running GNOME or KDE Plasma or whatever.

Fragmentation

I’m sympathetic to that argument. When I was writing documentation for Fedora, we generally wrote instructions for GNOME, since that was the default desktop. Fedora users can also choose from KDE Plasma, LXQt, and Xfce spins, and can install other desktop environments besides. If someone installs KDE Plasma because that’s what their friend gave them, will they be able to follow the documentation? If not, will they get frustrated and move back to Windows or macOS?

Even if they stick it out, there are two large players in the GUI toolkit world: GTK and Qt. You can use an app written in one in a desktop environment written in the other, but it doesn’t always look very good. And the configuration settings may not be consistent between apps, which is also frustrating.

Corporate indifference

Apart from that, sjvn also laments the lack of desktop effort from major Linux vendors:

True, the broad strokes of the Linux desktop are painted primarily by Canonical and Red Hat, but the desktop is far from their top priority. Instead, much of the nuts and bolts of the current generation of the Linux desktop is set by vendor-related communities: Red Hat, Fedora, SUSE’s openSUSE, and Canonical’s Ubuntu.

I would argue that this is the way it should be. As he notes in the preceding paragraph, the focus of revenue generation is on enterprise servers and cloud. There are two reasons for that: that’s where the customer money is and enterprises don’t want to innovate on their desktops.

I’ll leave the first part to someone else, but I think the “enterprises don’t want to innovate on their desktops” part is important. I’ve worked at and in support of some large organizations and in all cases, they didn’t want anything more from their desktops than “it allows our users to run their business applications in a reliable manner”. Combine this with the tendency of the enterprise to keep their upgrade cycles long and it makes no sense to keep desktop innovation in the enterprise product.

Community distributions are generally more focused on individuals or small organizations who may be more willing to accept disruptive change as the paradigm is moved forward. This is true beyond the desktop, too. Consider changes like the adoption of systemd or replacing yum with dnf: these also appeared in the community distributions first, but I didn’t see that used as a case for “enterprise Linux distributions are in trouble.”

What’s the answer?

Looking ahead, I’d love to see a foundation bring together the Linux desktop community and have them hammer out a common desktop for everyone. Yes, I know, I know. Many hardcore Linux users love to have a variety of choices. The world is not made up of desktop Linux users. For the million or so of us, there are hundreds of millions who want an easy-to-use desktop that’s not Windows, doesn’t require buying a Mac, and comes with broad software and hardware support.

Setting aside the XKCD #927 argument, I don’t know that this is an answer. Even if the major distros agreed to standardize on the same desktop (and with Ubuntu returning to GNOME, that’s now the case), that won’t stop effort on other desktops. If the corporate sponsors don’t invest any effort, the communities still will. People will use whatever is provided to them in the workplace, so presenting a single standard desktop to consumers will rely on the folks who make the community distributions to agree to that. It won’t happen.

But here’s the crux of my disagreement with this article. The facts are all correct, even if I disagree with the interpretation of some of them. The issue is that we’re not looking at the success of the Linux desktop in the same way.

If you define “Linux desktop” as “a desktop environment that runs the Linux kernel”, then ChromeOS is doing quite well, and will probably continue to grow (unless Google gets bored with it). In that case, the Linux desktop is not in trouble, it’s enjoying unprecedented success.

But when most people say “Linux desktop”, they think of a traditional desktop model. In this case, the threat to Linux desktops is the same as the threat to Windows and macOS: desktops matter less these days. So much computing, particularly for consumers, happens in the web browser when done on a PC at all.

Rethinking the goal

This brings me back to my regular refrain: using a computer is a means, not an end. People don’t run a desktop environment to run a desktop environment, they run a desktop environment because it enables them to do the things they want to do. As those things are increasingly done on mobile or in the web browser, achieving dominant market share for desktops is no longer a meaningful goal (if, indeed, it ever was).

Many current Linux desktop users are, I’d guess, motivated at least in part by free software ideals. This is not a mainstream position. Consumers will need more practical reasons to choose any Linux desktop over the proprietary OS that was shipped by the computer’s manufacturer.

With that in mind, the answer isn’t standardization, it’s making the experience better. Fedora Silverblue and openSUSE Kubic are efforts in that direction. Using those as a base, with Flatpaks to distribute applications, the need for standardization at the desktop environment level decreases because users are mostly interacting with the application level, one step above.

The usual disclaimer applies: I am a Red Hat employee who works on Fedora. The views in this post are my own and not necessarily the views of Red Hat, the Fedora Council, or anyone else. They may not even be my views by the time you read this.

What’s the future of Linux distributions?

“Distros don’t matter anymore” is a bold statement for someone who is paid to work on a Linux distro to make. Fortunately, I’m not making that statement. At least not exactly.

Distros still matter. But it’s fair to say that they matter in a different way than they did in the past. Like lava in a video game, abstractions slowly-but-inexorably move up the stack. For the entirety of their existence, effectively, Linux distributions have focused on producing operating systems (OSes) with some userspace applications. But the operating system is changing.

For one, OS developers have been watching each other work and taking inspiration for improvement. Windows is not macOS is not Linux, but they all take what they see as the “best” features of others and try to incorporate them. And with things like Windows Subsystem for Linux, the lines are blurring.

Applications are helping in this regard, too. Not everything is written in C and C++ anymore. Many applications are being developed in languages like Python, Ruby, and Java, where the application developer mostly doesn’t have to care about the OS. Which means the user doesn’t either. And of course, so much of what the average user does on their computer runs out of the web browser these days. The vast majority of my daily computer usage can be done on any modern OS, including Android.

With the importance of the operating system itself diminishing, distros can choose to either remain unchanged and watch their importance diminish or they can evolve to add new relevance.

This is all background for many conversations and presentations I heard earlier this month at the FOSDEM conference in Brussels. The first day of FOSDEM I spent mostly in the Fedora booth. The second day I was working the distro dev room. Both days had a lot of conversations about how distros can stay relevant — not in those words, but certainly in spirit.

The main theme was the idea of changing how the OS is managed and updated. The idea of the OS state as a git tree is interesting. Fedora’s Silverblue desktop and openSUSE Kubic are two leading examples.

So is this the future of Linux distributions? I don’t know. What I do know is that distributions must change to keep up with the world. This change should be in a way that makes the distro more obviously valuable to users.

Linus’s awakening

It may be the biggest story in open source in 2018, a year that saw Microsoft purchase GitHub. Linus Torvalds replaced the Code of Conflict for the Linux kernel with a Code of Conduct. In a message on the Linux Kernel Mailing List (LKML), Torvalds explained that he was taking time off to examine the way he led the kernel development community.

Torvalds has taken a lot of flak for his style over the years, including on this blog. While he has done an excellent job shepherding the technical development of the Linux kernel, his community management has often — to put it mildly — left something to be desired. Abusive and insulting behavior is corrosive to a community, and Torvalds has spent the better part of the last three decades enabling and partaking in it.

But he has seen the light, it would seem. To an outside observer, this change is rather abrupt, but it is welcome. Reaction to his message has been mixed. Some, like my friend Jono Bacon, have advocated supporting Linus in his awakening. Others take a more cynical approach:

I understand Kelly’s position. It’s frustrating to push for a more welcoming and inclusive community only to be met with insults and then when someone finally comes around to have everyone celebrate. Kelly and others who feel like her are absolutely justified in their position.

For myself, I like to think of it as a modern parable of the prodigal son. As tempting as it is to reject those who awaken late, it is better than them not waking at all. If Linus fails to follow through, it would be right to excoriate him. But if he does follow through, it can only improve the community around one of the most important open source projects. And it will set an example for other projects to follow.

I spend a lot of time thinking about community, particularly since I joined Red Hat as the Fedora Program Manager a few months ago. Community members — especially those in a highly-visible role — have an obligation to model the kind of behavior the community needs. This sometimes means a patient explanation when an angry rant would feel better. It can be demanding and time-consuming work. But an open source project is more than just the code; it’s also the community. We make technology to serve the people, so if our communities are not healthy, we’re not doing our jobs.

Installing Lubuntu on a Dell Mini 9

My six-year-old daughter has shown interest in computers. In 2016, we bought a Kano for her and she loves it. So I decided she might like to have her own laptop. We happened to have a Dell Mini 9 from 2011 or so that we weren’t using anymore. I figured that would be a good Christmas present for her.

Selecting the OS

The laptop still had the Ubuntu install that shipped with it. I could have kept using that, but I wanted to start with a clean install. I use Fedora on our other machines, so I wanted to try that. Unfortunately, Fedora decided to drop 32-bit support since the community effort was not enough to sustain it.

I tried installing Kubuntu, a KDE version of Ubuntu. However, the “continue” button in the installer’s prepare step would not switch to active. Some posts on AskUbuntu suggested annoying it into submission or not pre-downloading updates. Neither of these options worked.

After a time, I gave up on Kubuntu. Given the relatively low power of the laptop, I figured KDE Plasma might be too heavy anyway. So I decided to try Lubuntu, an Ubuntu variant that uses LXDE.

Installing Lubuntu

With Lubuntu, I was able to proceed all the way through the installer. I still had the “continue” button issue, but so long as I didn’t select the download updates option, it worked. Great success! But when it rebooted after the install, the display was very, very wrong. It was not properly scaled and the text was impossible to read. Fortunately, I was not the first person to have this problem, and someone else had a solution: setting the resolution in the GRUB configuration.

I had not edited a GRUB configuration in a long time. In fact, the last time I did, GRUB2 was still in the future, so I had to find instructions. Once again, AskUbuntu had the answer. I already knew what I needed to add; I had just forgotten how to update the configuration appropriately.
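For the curious, the fix amounts to a couple of lines in /etc/default/grub (which is sourced as shell) followed by regenerating the config with `sudo update-grub` on Ubuntu and its derivatives. This is a sketch from memory — 1024x600 is the Mini 9’s panel; substitute your own display’s resolution:

```shell
# /etc/default/grub -- pin GRUB's framebuffer resolution, and have the
# kernel keep it ("keep") instead of probing its own mode at boot.
GRUB_GFXMODE=1024x600
GRUB_GFXPAYLOAD_LINUX=keep
```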

Up until this point, I had been using the wired Ethernet connection, but I wanted my daughter to be able to use the Wi-Fi network. So I had to install the Wi-Fi drivers for the card. Lastly, I disabled IPv6 (which I have since done at the router). Happily, the webcam and audio worked with no effort.
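For reference, one common way to disable IPv6 is a sysctl drop-in; this is a sketch, and I don’t recall exactly which method I used on this machine, but the keys themselves are standard:

```
# /etc/sysctl.d/40-disable-ipv6.conf -- turn off IPv6 on all interfaces.
# Applied at boot, or immediately with: sudo sysctl --system
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
```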

What I didn’t do

Because I hate myself, I still haven’t set up Ansible to manage the basics of the configuration across the four Linux machines we use at home. I had to manually create the users. Since my daughter is just beginning to explore computers, I didn’t have a lot of software I needed to install. The web browser and office suite are already there, and that’s all she needs at the moment. This summer we’ll get into programming.

All done

I really enjoyed doing this installation, despite the frustrations I had with the Kubuntu installer. When I got my new ASUS laptop a few months ago, everything worked out of the box. There was no challenge. This at least provided a little bit of an adventure.

I’m pleasantly surprised how well it runs, too. My also-very-old HP laptop, which has much better specs on paper, is much more sluggish. Even switching from KDE to LXDE on it doesn’t help much. But the Mini 9 works like a charm, and it’s a good size for a six year old’s hands.

After only a few weeks, the Wi-Fi card suddenly failed. I bought a working system on eBay for about $35 and pulled the Wi-Fi card out of that. I figure as old as the laptop is, I’ll want replacement parts again at some point. But so far, it is serving her very well.

HP laptop keyboard won’t type on Linux

Here’s another story from my “WTF, computer?!” files (and also my “oh I’m dumb” files).

As I regularly do, I recently updated my Fedora machines. This includes the crappy HP 2000-2b30DX Notebook PC that I bought as a refurb in 2013. After dnf finished, I rebooted the laptop and put it away. Then while I was at a conference last week, my wife sent me a text telling me that she couldn’t type on it.

When I got home I took a look. Sure enough, the keyboard wouldn’t type. But it was weirder than that. I could type in the decryption password for the hard drive at the beginning of the boot process. And when I attached a wireless keyboard, I could type. Knowing the hardware worked, I dropped to runlevel 3. The built-in keyboard worked then.

I tried applying the latest updates, but that didn’t help. Some internet searching led me to Freedesktop.org bug 103561. Running dnf downgrade libinput and rebooting gave me a working keyboard again. The bug is closed as NOTABUG, since the maintainers say it’s an issue in the kernel, which is fixed in the 4.13 kernel release. So I checked to see if Fedora 27, which was released last week, includes the 4.13 kernel. It does, and so does Fedora 26.

That’s when I realized I still had the kernel package excluded from dnf updates on that machine because of a previous issue where a kernel update caused the boot process to hang while/after loading the initrd. I removed the exclusion, updated the kernel, and re-updated libinput. After a reboot, the keyboard still worked. But if you’re using a kernel version from 4.9 to 4.12, libinput 1.9, and an HP device, your keyboard may not work. Update to kernel 4.13 or downgrade libinput (or replace your hardware. I would not recommend the HP 2000 Notebook. It is not good.)
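If you’ve done the same thing to yourself, the exclusion is just a line in /etc/dnf/dnf.conf. A sketch of what mine looked like — the glob is my reconstruction, and the other settings are typical Fedora defaults:

```
[main]
gpgcheck=1
installonly_limit=3
# The line I'd forgotten about -- it silently pins every kernel package:
exclude=kernel*
```

Deleting (or commenting out) that line lets dnf see kernel updates again.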

Using the ASUS ZenBook for Fedora

I recently decided that I’d had enough of the refurbished laptop I bought four years ago. It’s big and heavy and slow and sometimes the fan doesn’t work. I wanted something more portable and powerful enough that I could smoothly scroll the web browser. After looking around for good Linux laptops, I settled on the ASUS ZenBook.

Installation

The laptop came with Windows 10 installed, but that’s not really my jam. I decided to boot off a Fedora 26 KDE live image first just to make sure everything worked before committing to installing. Desktop Linux has made a lot of progress over the years, but you never know which hardware might not be supported. As it turns out, that wasn’t a problem. WiFi, Bluetooth, webcam, speakers, etc. all worked out of the box.

It’s almost disappointing in a sense. There used to be some challenge in getting things working, but now it’s just install and go. This is great overall, of course, because it means Linux is more accessible to new users and it’s less crap I have to deal with when I just want my damn computer to work. But there’s still a little bit of nostalgia for the days when configuring X11 by hand was something you had to do.

Use

I’ve had the laptop for a little over a month now. I haven’t put it through quite the workout I’d hoped to, but I feel like I’ve used it enough to have an opinion at this point. Overall, I really like it. The main problem I have is that the trackpad has a middle-click, which is actually pretty nice except for when I accidentally use it. I’ve closed many a browser tab because I didn’t move my thumb far enough over. That’s probably something I can disable in the settings, but I’d rather learn my way around it.

The Bluetooth has been flaky transferring files to and from my phone, but audio is…well, I’ve never found Bluetooth audio to be particularly great, but it works as well as anything else.

One other bit of trouble I’ve had is with my home WiFi. I bought a range extender so that I can use WiFi on the back deck, and set it to use the same SSID as the main router. The directions said you can do this, but it might cause problems. With this laptop, the WiFi connection becomes unusable after a short period of time. Turning off the range extender fixes it, and I’ve had no other problems on other networks, so I guess I know what I have to do.

One thing that really stood out to me is carrying it around in a backpack. This thing is light. I had a few brief moments of panic thinking I had left it behind. I’ve held lighter laptops, but this is a good weight. But don’t worry about the lightness: it still has plenty of electrons for a good battery life.

Around the same time I bought this, I got a new MacBook Pro for work. When it comes to typing, I like the keyboard on the ZenBook way better than the new MacBook keyboards.

Recommendation

If you’re looking for a lightweight Linux laptop that can handle general development and desktop applications, the ASUS ZenBook is a great choice. Shameless commercialism: If you’re going to buy one, maybe use this here affiliate link? Or don’t. I won’t judge you.

Disappearing WiFi with rt2800pci

I recently did a routine package update on my Fedora 24 laptop. I’ve had the laptop for three years and have been running various Fedorae the whole time, so I didn’t think much of it. So it came as some surprise to me when after rebooting I could no longer connect to my WiFi network. In fact, there was no indication that any wireless networks were even available.

Since the update included a new kernel, I thought that might be the issue. Rebooting into the old kernel seemed to fix it (more on that later!), so I filed a bug, excluded kernel packages from future updates, and moved on.

But a few days later, I rebooted and my WiFi was gone again. The kernel hadn’t updated, so what could it be? I spent a lot of time flailing around until I found a “solution”. A four-year-old forum post said: don’t reboot. Booting from a full power-off, or suspending and resuming the laptop, will cause the wireless to work again.

And it turns out, that “fixed” it for me. A few other posts seemed to suggest power management issues in the rt2800pci driver. I guess that’s what’s going on here, though I can’t figure out why I’m suddenly seeing it after so long. Seems like a weird failure mode for failing hardware.

Here’s what dmesg and the systemd journal reported:

Aug 01 14:54:24 localhost.localdomain kernel: ieee80211 phy0: rt2800_wait_wpdma_ready: Error - WPDMA TX/RX busy [0x00000068]
Aug 01 14:54:24 localhost.localdomain kernel: ieee80211 phy0: rt2800pci_set_device_state: Error - Device failed to enter state 4 (-5)

Hopefully, this post saves someone else a little bit of time in trying to figure out what’s going on.

Linux and Microsoft: a “deal with the devil”?

When Microsoft and the Linux Foundation announced that Azure certification will require passing a Linux exam, it caused a great disturbance in the Force. The FOSS Force, specifically. In a column, editor-in-chief Christine Hall called the partnership a “deal with the devil.” In a news roundup, Larry Cafiero said “[r]ather than throw the Microsoft that is treading water a life preserver, I still think throwing it an anchor would be more fitting.” Larry is a personal friend of mine, and he and Hall have both been covering open source since before I got my first computer. I can’t just dismiss their opinions out of hand.

Open source enthusiasts have every right to be leery of Microsoft. Former CEO Steve Ballmer famously said Linux is “a cancer” and the company was openly hostile to the Linux project specifically and open source generally for many years. And yet, Microsoft seems to be sincere in its efforts to participate in open source projects (even if it’s still a little bit two-left-footed).

Hall said Microsoft loves Linux “because [Microsoft] can sell it”. So what? Even Red Hat loves being able to sell Linux. Azure CTO Mark Russinovich told the audience at All Things Open this year, “if we don’t support Linux and open source in our cloud then we’ll be a Windows only cloud, and that would not be practical.” Yes, it’s absolutely in Microsoft’s self-interest to play nicely with the open source world. While the Year of the Linux on the Desktop is always just out of reach, Linux is firmly entrenched in the enterprise.

Microsoft may have (as of this writing) roughly 29 times the market capitalization of Red Hat, but it’s obvious that open source has “won”. And yet, elements of the community are stuck in the scrappy underdog mindset. If we want to pretend that we’re a meritocracy, we have to be willing to allow our former enemies to become…if not friends, then at least collaborators. If Microsoft is willing to play by the rules, then let’s let them.

Forget what Hall wrote earlier this month. Let’s go with what she said in October: “However, it might be time to tone down the anti-Microsoft rhetoric a bit and give them a little breathing room. If we give them enough rope, we can see if they hang themselves, or if they use it to strengthen their ties with the open source community.”

Be careful with shell redirection

Continuing on Friday’s theme of “Ben writes about some Linux basics”, I wanted to share a story of a bug I fixed recently. Our internal documentation server at work had been a little flaky. File copies from the build server would sometimes fail and the web server was being really slow. When I logged in, I noticed the root volume was full.

A full disk is a thing that happens sometimes, especially on small volumes, so I went off in search of the culprit. It turns out that the dead.letter file in root’s home directory was large (several gigabytes, if I recall). For a couple of years, the cron job that runs every 5 minutes to update the documentation page had been trying to send email, which failed since the MTA wasn’t configured.

Why was all of this output being sent via email? Because at some point someone had set up the cron job with redirection like so:

2>&1  > /dev/null

Let’s take a step back for a moment and explain what that means. There are two main output streams for command line programs: standard output (a.k.a. “STDOUT”) and standard error (a.k.a. “STDERR”). The former is generally regular output, whereas the latter is for the “important stuff,” like error messages. By default when you run a command in the terminal, they both go to your terminal so you can see them. But you might not always want to see them, so you might redirect to a file or to /dev/null.

Back to our woe-beset server. At first glance, you might say “okay, so both STDOUT (1) and STDERR (2) are being sent to /dev/null”. And you would be wrong. STDERR is being sent to wherever STDOUT is being sent, which at the time is still the terminal (or the cron output email), and then STDOUT is being redirected to /dev/null. So what was in place was effectively the same as:

> /dev/null

Changing the order of the redirection to:

> /dev/null 2>&1

kept the dead.letter file from slowly strangling the life out of the disk.
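Here’s a small demonstration you can run in bash to see the difference. The noisy function stands in for the chatty cron job, and command substitution plays the role of the terminal (or the cron email) by capturing whatever still reaches stdout:

```shell
# noisy() stands in for a chatty cron job: one line to stdout, one to stderr.
noisy() {
    echo "out"
    echo "err" >&2
}

# Wrong order: when 2>&1 is processed, stdout still points at the capture,
# so stderr is duplicated there. Only afterwards is stdout itself sent to
# /dev/null -- the error message escapes.
leaked=$(noisy 2>&1 > /dev/null)

# Right order: stdout goes to /dev/null first, then stderr is pointed at
# wherever stdout now goes. Both streams are discarded.
silent=$(noisy > /dev/null 2>&1)

echo "wrong order captured: '$leaked'"   # captures "err"
echo "right order captured: '$silent'"   # captures nothing
```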

 

I find find(1) to be useful

I recently shared Tom Limoncelli’s excellent critique of the BSD find(1) man page in the documentation channel at work. One of my coworkers responded with “that’s why I just use mlocate”, and that made me very sad. Sure, mlocate is a great tool if you know there’s a file somewhere that has a particular name (assuming it was created before the last time updatedb was run), but that’s about the best you can do.

There are plenty of examples on how to use find out there, but I haven’t written a “here’s a basic thing about Linux” post in a while, so I’ll add to the pile. find takes, at a minimum, a path to find things in. For example:

find /

will find (and print) every file on the system. Probably not all that useful. You can change the path argument to narrow things down a bit, but that’s still probably not all that useful to you. So let’s throw in some additional arguments to constrain it. Maybe you want to find all the JPEG files in your home directory?

find ~ -name '*jpg'

But wait! What if some of them have an uppercase extension?

find ~ -iname '*jpg'

Aw, but I bet some of the pictures have an extension of .jpeg because 8.3 is so 1985. Well, we can combine them in a slightly ugly fashion:

find ~ \( -iname '*jpeg' -o -iname '*jpg' \)

Oh, but you have some directories that end in jpg? (Why you named a directory “bucketofjpg” instead of “pictures” is beyond me.) We can modify it to only look for files!

find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f

Or maybe you’d just like to find those directories so you can rename them later:

find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type d

It turns out you’ve been taking a lot of pictures lately, so let’s narrow this down to ones whose status has changed in the last week.

find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -ctime -7

You can do time filters based on file status change time (ctime), modification time (mtime), or access time (atime). These are in days, so if you want finer-grained control, you can express it in minutes instead (cmin, mmin, and amin, respectively). Unless you know exactly the time you want, you’ll probably prefix the number with a + (more than) or a - (less than). The time arguments are probably the ones I use most often.
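The minute-granularity versions work the same way. Here’s a quick sketch using GNU touch’s -d option to fabricate timestamps (the /tmp paths are just for the demonstration):

```shell
# Create two files with different modification times.
mkdir -p /tmp/find-demo
touch /tmp/find-demo/fresh
touch -d '2 hours ago' /tmp/find-demo/stale   # GNU touch

# Modified less than 10 minutes ago: matches only "fresh".
find /tmp/find-demo -type f -mmin -10

# Modified more than 60 minutes ago: matches only "stale".
find /tmp/find-demo -type f -mmin +60
```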

Maybe you’re running out of disk space, so you want to find all of the gigantic (let’s define that as greater than 1 gigabyte) files in the log directory:

find /var/log -size +1G

Or maybe you want to find all the files owned by bcotton in /data:

find /data -user bcotton

You can also look for files based on permissions. Perhaps you want to find all of the world-readable files in your home directory to make sure you’re not oversharing.

find ~ -perm -o=r

So far, all we’ve done is print the file paths, which is useful, but sometimes you want to do more. find has a few built-in actions (like -delete), but its true power comes in giving input for other commands to act on. In the simplest case, you can pipe the output to something like xargs. There’s also the -exec action, which allows you to execute more complicated actions against the output. For example, if you wanted to get the md5sum of all of your Python scripts:

find ~ -type f -name '*.py' -exec md5sum {} \;

(Yes, you could pipe to xargs here, too, but that’s not the point.) Note the \; at the end. That’s very important.

Warning! You can really cause a world of hurt if you’re not careful with the output of find. Files that contain spaces or other special characters might cause unexpected behavior when passed to another command. Be very careful. One way to mitigate your risk is to use -ok instead of -exec. This prompts you before executing each line (but it might get tedious if you have a lot of lines to process). The -ls action escapes special characters, so that might be useful when piping to another program.
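Another way to stay safe when piping to another program: GNU find’s -print0 emits NUL-terminated names, and xargs -0 consumes them, so names with spaces or newlines arrive intact (the /tmp paths here are just for the demonstration):

```shell
# A filename with a space would be split into two arguments by a plain
# find | xargs pipeline; NUL separators keep it whole, since NUL can
# never appear inside a filename.
mkdir -p /tmp/xargs-demo
touch "/tmp/xargs-demo/has space.py" /tmp/xargs-demo/plain.py

find /tmp/xargs-demo -type f -name '*.py' -print0 | xargs -0 md5sum
```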

This post only begins to scratch the surface of what find can do. Combining tests with boolean logic can give you incredible flexibility to find exactly the files you’re looking for. Have any favorite find expressions? Share them in the comments!