Installing Lubuntu on a Dell Mini 9

My six year old daughter has shown interest in computers. In 2016, we bought a Kano for her and she loves it. So I decided she might like to have her own laptop. We happened to have a Dell Mini 9 from 2011 or so that we weren't using anymore. I figured that would be a good Christmas present for her.

Selecting the OS

The laptop still had the Ubuntu install that shipped with it. I could have kept using that, but I wanted to start with a clean install. I use Fedora on our other machines, so I wanted to try that. Unfortunately, Fedora decided to drop 32-bit support since the community effort was not enough to sustain it.

I tried installing Kubuntu, a KDE version of Ubuntu. However, the “continue” button in the installer’s prepare step would never become active. Some posts on AskUbuntu suggested annoying it into submission or not pre-downloading updates. Neither of these options worked.

After a time, I gave up on Kubuntu. Given the relatively low power of the laptop, I figured KDE Plasma might be too heavy anyway. So I decided to try Lubuntu, an Ubuntu variant that uses LXDE.

Installing Lubuntu

With Lubuntu, I was able to proceed all the way through the installer. I still had the “continue” button issue, but so long as I didn’t select the download updates option, it worked. Great success! But when it rebooted after the install, the display was very, very wrong. It was not properly scaled and the text was impossible to read. Fortunately, I was not the first person to have this problem, and someone else had a solution: setting the resolution in the GRUB configuration.

I had not edited a GRUB configuration in a long time. In fact, the last time I did, GRUB 2 was still in the future, so I had to find instructions. Once again, AskUbuntu had the answer. I already knew what I needed to add; I had just forgotten how to update the configuration appropriately.
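For anyone chasing the same problem, the fix looks something like this on a GRUB 2 system (a sketch rather than my exact lines; the Mini 9's panel is 1024x600). In /etc/default/grub:

GRUB_GFXMODE=1024x600
GRUB_GFXPAYLOAD_LINUX=keep

Then regenerate the configuration and reboot:

sudo update-grub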

Up until this point, I had been using the wired Ethernet connection, but I wanted my daughter to be able to use the Wi-Fi network. So I had to install the Wi-Fi drivers for the card. Lastly, I disabled IPv6 (which I have since done at the router). Happily, the webcam and audio worked with no effort.
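For the record, the Wi-Fi and IPv6 steps amounted to something like the following. This is a sketch: the driver package assumes the Broadcom card these Minis usually shipped with, and the sysctl file name is my own invention.

sudo apt install bcmwl-kernel-source    # proprietary Broadcom wireless driver

echo 'net.ipv6.conf.all.disable_ipv6 = 1' | sudo tee /etc/sysctl.d/40-disable-ipv6.conf
sudo sysctl --system                    # reload sysctl settings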

What I didn’t do

Because I hate myself, I still haven’t set up Ansible to manage the basics of the configuration across the four Linux machines we use at home. I had to manually create the users. Since my daughter is just beginning to explore computers, I didn’t have a lot of software I needed to install. The web browser and office suite are already there, and that’s all she needs at the moment. This summer we’ll get into programming.

All done

I really enjoyed doing this installation, despite the frustrations I had with the Kubuntu installer. When I got my new ASUS laptop a few months ago, everything worked out of the box. There was no challenge. This at least provided a little bit of an adventure.

I’m pleasantly surprised how well it runs, too. My also-very-old HP laptop, which has much better specs on paper, is much more sluggish. Even switching from KDE to LXDE on it doesn’t help much. But the Mini 9 works like a charm, and it’s a good size for a six year old’s hands.

After only a few weeks, the Wi-Fi card suddenly failed. I bought a working system on eBay for about $35 and pulled the Wi-Fi card out of that. I figure as old as the laptop is, I’ll want replacement parts again at some point. But so far, it is serving her very well.

HP laptop keyboard won’t type on Linux

Here’s another story from my “WTF, computer?!” files (and also my “oh I’m dumb” files).

As I regularly do, I recently updated my Fedora machines. This includes the crappy HP 2000-2b30DX Notebook PC that I bought as a refurb in 2013. After dnf finished, I rebooted the laptop and put it away. Then while I was at a conference last week, my wife sent me a text telling me that she couldn’t type on it.

When I got home I took a look. Sure enough, the keyboard didn’t key. But it was weirder than that. I could type in the decryption password for the hard drive at the beginning of the boot process. And when I attached a wireless keyboard, I could type. Knowing the hardware worked, I dropped to runlevel 3. The built-in keyboard worked there, too.
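For the systemd generation: “dropping to runlevel 3” these days really means switching to the multi-user target, so the equivalent looks something like:

sudo systemctl isolate multi-user.target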

I tried applying the latest updates, but that didn’t help. Some internet searching led me to Freedesktop.org bug 103561. Running dnf downgrade libinput and rebooting gave me a working keyboard again. The bug is closed as NOTABUG, since the maintainers say it’s an issue in the kernel, one that was fixed in the 4.13 release. So I checked to see if Fedora 27, which was released last week, includes the 4.13 kernel. It does, and so does Fedora 26.

That’s when I realized I still had the kernel package excluded from dnf updates on that machine because of a previous issue where a kernel update caused the boot process to hang while (or just after) loading the initrd. I removed the exclusion, updated the kernel, and re-updated libinput. After a reboot, the keyboard still worked. But if you’re running a kernel between 4.9 and 4.12 with libinput 1.9 on an HP device, your keyboard may not work. Update to kernel 4.13, downgrade libinput, or replace your hardware. (I would not recommend the HP 2000 Notebook. It is not good.)
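If you’ve painted yourself into the same corner, the cleanup looked roughly like this on my machine (assuming, as in my case, the exclusion lives in /etc/dnf/dnf.conf; yours may be written differently):

# in /etc/dnf/dnf.conf, delete or comment out the old line:
# exclude=kernel*

sudo dnf upgrade kernel libinput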

Using the ASUS ZenBook for Fedora

I recently decided that I’d had enough of the refurbished laptop I bought four years ago. It’s big and heavy and slow and sometimes the fan doesn’t work. I wanted something more portable and powerful enough that I could smoothly scroll the web browser. After looking around for good Linux laptops, I settled on the ASUS ZenBook.

Installation

The laptop came with Windows 10 installed, but that’s not really my jam. I decided to boot off a Fedora 26 KDE live image first, just to make sure everything worked before committing to an install. Desktop Linux has made a lot of progress over the years, but you never know which hardware might not be supported. As it turns out, that wasn’t a problem. WiFi, Bluetooth, webcam, speakers, etc. all worked out of the box.

It’s almost disappointing in a sense. There used to be some challenge in getting things working, but now it’s just install and go. This is great overall, of course, because it means Linux is more accessible to new users and it’s less crap I have to deal with when I just want my damn computer to work. But there’s still a little bit of nostalgia for the days when configuring X11 by hand was something you had to do.

Use

I’ve had the laptop for a little over a month now. I haven’t put it through quite the workout I’d hoped to, but I feel like I’ve used it enough to have an opinion at this point. Overall, I really like it. The main problem I have is that the trackpad has a middle-click, which is actually pretty nice except for when I accidentally use it. I’ve closed many a browser tab because I didn’t move my thumb far enough over. That’s probably something I can disable in the settings, but I’d rather learn my way around it.
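If you do want to turn it off under X11, something like this should work (the device name here is made up; check xinput list for yours):

xinput list                                   # find the touchpad's name or ID
xinput set-button-map "ASUS Touchpad" 1 0 3   # map the middle button to nothing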

The Bluetooth has been flaky transferring files to and from my phone, but audio is…well, I’ve never found Bluetooth audio to be particularly great, but it works as well as anything else.

One other bit of trouble I’ve had is with my home WiFi. I bought a range extender so that I can use WiFi on the back deck, and set it to use the same SSID as the main router. The directions said you can do this, but that it might cause problems. With this laptop, the WiFi connection becomes unusable after a short period of time. Turning off the range extender fixes it, and I’ve had no problems on other networks, so I guess I know what I have to do.

One thing that really stood out to me is carrying it around in a backpack. This thing is light. I had a few brief moments of panic thinking I had left it behind. I’ve held lighter laptops, but this is a good weight. And don’t let the lightness worry you: it still holds plenty of electrons for a good battery life.

Around the same time I bought this, I got a new MacBook Pro for work. When it comes to typing, I like the keyboard on the ZenBook way better than the new MacBook keyboards.

Recommendation

If you’re looking for a lightweight Linux laptop that can handle general development and desktop applications, the ASUS ZenBook is a great choice. Shameless commercialism: If you’re going to buy one, maybe use this here affiliate link? Or don’t. I won’t judge you.

Disappearing WiFi with rt2800pci

I recently did a routine package update on my Fedora 24 laptop. I’ve had the laptop for three years and have been running various Fedorae the whole time, so I didn’t think much of it. So it came as some surprise to me when after rebooting I could no longer connect to my WiFi network. In fact, there was no indication that any wireless networks were even available.

Since the update included a new kernel, I thought that might be the issue. Rebooting into the old kernel seemed to fix it (more on that later!), so I filed a bug, excluded kernel packages from future updates, and moved on.

But a few days later, I rebooted and my WiFi was gone again. The kernel hadn’t updated, so what could it be? I spent a lot of time flailing around until I found a “solution”: a four-year-old forum post that said don’t reboot. Booting from powered off, or suspending and resuming the laptop, will bring the wireless back.

And it turns out, that “fixed” it for me. A few other posts seemed to suggest power management issues in the rt2800pci driver. I guess that’s what’s going on here, though I can’t figure out why I’m suddenly seeing it after so long. Seems like a weird failure mode for failing hardware.
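If you want to test the power management theory on your own hardware, you can turn off Wi-Fi power saving until the next boot (the interface name will vary; check ip link for yours):

iw dev wlp2s0 set power_save off
iw dev wlp2s0 get power_save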

Here’s what dmesg and the systemd journal reported:

Aug 01 14:54:24 localhost.localdomain kernel: ieee80211 phy0: rt2800_wait_wpdma_ready: Error - WPDMA TX/RX busy [0x00000068]
Aug 01 14:54:24 localhost.localdomain kernel: ieee80211 phy0: rt2800pci_set_device_state: Error - Device failed to enter state 4 (-5)

Hopefully, this post saves someone else a little bit of time in trying to figure out what’s going on.

Linux and Microsoft: a “deal with the devil”?

When Microsoft and the Linux Foundation announced that Azure certification will require passing a Linux exam, it caused a great disturbance in the Force. The FOSS Force, specifically. In a column, editor-in-chief Christine Hall called the partnership a “deal with the devil.” In a news roundup, Larry Cafiero said “[r]ather than throw the Microsoft that is treading water a life preserver, I still think throwing it an anchor would be more fitting.” Larry is a personal friend of mine, and he and Hall have both been covering open source since before I got my first computer. I can’t just dismiss their opinions out of hand.

Open source enthusiasts have every right to be leery of Microsoft. Former CEO Steve Ballmer famously said Linux is “a cancer” and the company was openly hostile to the Linux project specifically and open source generally for many years. And yet, Microsoft seems to be sincere in its efforts to participate in open source projects (even if it’s still a little bit two-left-footed).

Hall said Microsoft loves Linux “because [Microsoft] can sell it”. So what? Even Red Hat loves being able to sell Linux. Azure CTO Mark Russinovich told the audience at All Things Open this year, “if we don’t support Linux and open source in our cloud then we’ll be a Windows only cloud, and that would not be practical.” Yes, it’s absolutely in Microsoft’s self-interest to play nicely with the open source world. While the Year of Linux on the Desktop is always just out of reach, Linux is firmly entrenched in the enterprise.

Microsoft may have (as of this writing) roughly 29 times the market capitalization of Red Hat, but it’s obvious that open source has “won”. And yet, elements of the community are stuck in the scrappy underdog mindset. If we want to pretend that we’re a meritocracy, we have to be willing to allow our former enemies to become…if not friends, then at least collaborators. If Microsoft is willing to play by the rules, then let’s let them.

Forget what Hall wrote earlier this month. Let’s go with what she said in October: “However, it might be time to tone down the anti-Microsoft rhetoric a bit and give them a little breathing room. If we give them enough rope, we can see if they hang themselves, or if they use it to strengthen their ties with the open source community.”

Be careful with shell redirection

Continuing on Friday’s theme of “Ben writes about some Linux basics”, I wanted to share a story of a bug I fixed recently. Our internal documentation server at work had been a little flaky. File copies from the build server would sometimes fail and the web server was being really slow. When I logged in, I noticed the root volume was full.

A full disk is a thing that happens sometimes, especially on small volumes, so I went off in search of the culprit. It turns out that the dead.letter file in root’s home directory was large (several gigabytes, if I recall). For a couple of years, the cron job that runs every 5 minutes to update the documentation page had been trying to send email, which failed since the MTA wasn’t configured.

Why was all of this output being sent via email? Because at some point someone set up the cron job with redirection like so:

2>&1  > /dev/null

Let’s take a step back for a moment and explain what that means. There are two main output streams for command line programs: standard output (a.k.a. “STDOUT”) and standard error (a.k.a. “STDERR”). The former is generally regular output, whereas the latter is for the “important stuff,” like error messages. By default when you run a command in the terminal, they both go to your terminal so you can see them. But you might not always want to see them, so you might redirect to a file or to /dev/null.

Back to our woe-beset server. At first glance, you might say “okay, so both STDOUT (1) and STDERR (2) are being sent to /dev/null”. And you would be wrong. STDERR is redirected to wherever STDOUT points at that moment, which is still the terminal (or, for a cron job, the email cron sends), and only then is STDOUT redirected to /dev/null. So what was in place was effectively the same as:

> /dev/null

Changing the order of the redirection to:

> /dev/null 2>&1

kept the dead.letter file from slowly strangling the life out of the disk.
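You can see the difference for yourself with a quick demo (nothing from the actual cron job, just a compound command that writes to both streams):

{ echo out; echo err >&2; } 2>&1 > /dev/null    # prints "err"
{ echo out; echo err >&2; } > /dev/null 2>&1    # prints nothing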


I find find(1) to be useful

I recently shared Tom Limoncelli’s excellent critique of the BSD find(1) man page in the documentation channel at work. One of my coworkers responded with “that’s why I just use mlocate”, and that made me very sad. Sure, mlocate is a great tool if you know there’s a file somewhere that has a particular name (assuming it was created before the last time updatedb was run), but that’s about the best you can do.

There are plenty of examples on how to use find out there, but I haven’t written a “here’s a basic thing about Linux” post in a while, so I’ll add to the pile. find takes, at a minimum, a path to find things in. For example:

find /

will find (and print) every file on the system. Probably not all that useful. You can change the path argument to narrow things down a bit, but that’s still probably not all that useful to you. So let’s throw in some additional arguments to constrain it. Maybe you want to find all the JPEG files in your home directory?

find ~ -name '*jpg'

But wait! What if some of them have an uppercase extension?

find ~ -iname '*jpg'

Aw, but I bet some of the pictures have an extension of .jpeg because 8.3 is so 1985. Well, we can combine them in a slightly ugly fashion:

find ~ \( -iname '*jpeg' -o -iname '*jpg' \)

Oh, but you have some directories that end in jpg? (Why you named a directory “bucketofjpg” instead of “pictures” is beyond me.) We can modify the command to look only for files!

find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f

Or maybe you’d just like to find those directories so you can rename them later:

find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type d

It turns out you’ve been taking a lot of pictures lately, so let’s narrow this down to ones whose status has changed in the last week.

find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -ctime -7

You can do time filters based on file status change time (ctime), modification time (mtime), or access time (atime). These are in days, so if you want finer-grained control, you can express it in minutes instead (cmin, mmin, and amin, respectively). Unless you know exactly the time you want, you’ll probably prefix the number with a + (more than) or - (less than). The time arguments are probably the ones I use most often.
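For example, to find files whose status changed in the last 30 minutes:

find ~ -cmin -30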

Maybe you’re running out of disk space, so you want to find all of the gigantic (let’s define that as greater than 1 gigabyte) files in the log directory:

find /var/log -size +1G

Or maybe you want to find all the files owned by bcotton in /data:

find /data -user bcotton

You can also look for files based on permissions. Perhaps you want to find all of the world-readable files in your home directory to make sure you’re not oversharing.

find ~ -perm -o=r

So far, all we’ve done is print the file paths, which is useful, but sometimes you want to do more. find has a few built-in actions (like -delete), but its true power comes in providing input for other commands to act on. In the simplest case, you can pipe the output to something like xargs. There’s also the -exec action, which allows you to execute more complicated actions against the output. For example, if you wanted to get the md5sum of all of your Python scripts:

find ~ -type f -name '*.py' -exec md5sum {} \;

(Yes, you could pipe to xargs here, too, but that’s not the point.) Note the \; at the end. That’s very important.

Warning! You can really cause a world of hurt if you’re not careful with the output of find. Files that contain spaces or other special characters might cause unexpected behavior when passed to another command. Be very careful. One way to mitigate your risk is to use -ok instead of -exec. This prompts you before executing each line (but it might get tedious if you have a lot of lines to process). The -ls action escapes special characters, so that might be useful when piping to another program.
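If you have GNU find and xargs, the safer variant of the earlier md5sum example separates file names with NULs instead of whitespace, so spaces in names can’t bite you:

find ~ -type f -name '*.py' -print0 | xargs -0 md5sum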

This post only begins to scratch the surface of what find can do. Combining tests with boolean logic can give you incredible flexibility to find exactly the files you’re looking for. Have any favorite find expressions? Share them in the comments!


Debian “drops” the Linux Standard Base

LWN recently reported on a decision by the Debian community to drop most support for the Linux Standard Base (LSB). The LSB is an attempt to define a standard for compatibility across Linux distributions. Even binaries should JustWork™ on multiple distributions. At work, I take advantage of this: for many packages we use the same binaries across CentOS, Ubuntu, and SLES.

I can’t blame the Debian maintainers for not wanting to continue putting in the effort. The LSB is a large specification, and very few applications have been officially LSB certified. In addition, the LSB’s selection of RPM as the package manager puts the spec at odds with Debian anyway.

Debian’s unwillingness to put effort into keeping up with the LSB doesn’t necessarily mean that it will suddenly become incompatible with other distributions. Debian plans to continue complying with the Filesystem Hierarchy Standard, a subset of the LSB that defines what files and directories go where. I suspect this is the key standard for many people who work across distributions anyway.

In the short term, this seems like a non-story. In the longer term, I wonder what will become of the Linux ecosystem. Running a single distribution is herding cats on the best of days. Coordinating standards across multiple distributions, even with common upstreams, is madness. Among the major distributions, there are basically two camps: Debian/Ubuntu and Fedora/RHEL (and RHEL-alikes). They’ve managed not to drift too far apart, though I thought systemd would start that process.

To many, “Linux” (as an OS, not a kernel) is a single entity. Others don’t even realize that Ubuntu and Fedora are in any way related. While reality is (sort of) closer to the former currently, I wonder if we’ll get to a point where it’s closer to the latter. Standards are important but are useful only to the degree that they are complied with. Linux has avoided the competing standards problem so far, but will that remain the case?

elementary misses the point

A recent post on the elementary blog about how they ask for payment on download created a bit of a stir this week. One particular sentence struck a nerve (it has since been removed from the post): “We want users to understand that they’re pretty much cheating the system when they choose not to pay for software.”

No, they aren’t. I understand that people want to get paid for their work. It’s only natural. Especially when you’d really like that work to be what puts food on the table and not something you do after you work a full week for someone else. I certainly don’t begrudge developers asking for money. I don’t even begrudge requiring payment before being able to download the software. The developers are absolutely right when they say “elementary is under no obligation to release our compiled operating system for free download.”

Getting paid for developing open source software is not antithetical to open source or free (libre) software principles. Neither the OSI’s Open Source Definition nor the Free Software Foundation’s Free Software Definition necessarily preclude a developer from charging for works. That most software that’s free-as-in-freedom is also free-as-in-beer is true, but irrelevant. Even elementary touts the gratis nature of their work on the front page (talk about mixed messages):

100% free, both in terms of pricing and licensing. But you’re a cheater if you take the free option.

Simply put, the developers cannot choose to offer their work for free and then get mad when people take them up on the offer. Worse, they cannot alienate their community by calling them cheaters. Of the money elementary receives, how much of it goes upstream to the Linux Foundation, the FSF, and the numerous other projects that make elementary possible? Surely they wouldn’t be so hypocritical as to take the work of others for free?

An open source project is more than just coders. It’s more than just coders and funders. A truly healthy project of any appreciable size will have people who contribute in various ways: writing documentation; providing support on mailing lists, fora, etc.; triaging bug reports; filing bug reports; doing design; marketing (including word-of-mouth). This work is important to the project, too, and should be considered an in-kind form of payment.

It’s up to each project to decide what they want in return for the work put in. But it’s up to each project to accept that people will take from all of the choices that are available. If that includes “I get it for free”, then the right answer is to find ways for those people to become a part of the community and contribute how they can.

On Linus Torvalds and communities

This week, the Internet was ablaze with reactions to comments made by Linus Torvalds at Linux.conf.au. Unsurprisingly, Torvalds defended the tone he employs on the Linux kernel mailing list, where he pulls no punches. “I’m not a nice person, and I don’t care about you. I care about the technology and the kernel—that’s what’s important to me,” he said (as reported by Ars Technica). He later said “all that [diversity] stuff is just details and not really important.”

The reactions were mixed. Some were upset at the fact that an influential figure like Torvalds didn’t take the opportunity to address what they see as a major issue in the Linux community. Others dismissed those who were upset by pointing to the technical quality of Linux, cultural differences, etc.

I don’t subscribe to the LKML, so the posts I’ve seen are generally ones shared to point out a specific event (whether a behavior or a technical discussion), and I don’t claim to have a good sense of what that particular mailing list is like. Torvalds and the Linux community have developed a great technical product, but the community needs work.

Speaking to open source communities in general, too many people use the impersonal nature of email to mistake rudeness for directness. Direct and honest technical criticisms are a vital part of any collaborative development. Insults and viciousness are not. Some people thrive in (or at least tolerate) those kinds of environments, but they are incredibly off-putting to everyone else, particularly newcomers.

Open source communities, like any community, need to be welcoming to new members. This allows for the infusion of new ideas and new perspectives: some of which will be obnoxiously naive, some of which will be positively transformative. The naive posts of newcomers can be taxing when you’ve seen the same thing hundreds of times, but everyone has to learn somewhere. One solution is to have a team armed with pre-written responses to the most common questions, which spares veterans from firing off frustrated replies.

Not being a jerk doesn’t just mean tolerating noobs, though. Communities should have an established code of conduct which addresses both annoying and mean actors. When the code of conduct is repeatedly breached, the violator needs to be nudged in the right direction. When a community is welcoming and actively works to remain that way, it thrives. That’s how it can get the diversity of ideas and grow the technical competency that Linus Torvalds so desires.