LISA Conference wrap-up

After a one-year hiatus, I returned to the LISA Conference as a member of the blog team. It was great to see old friends and make new ones. Continuing the theme from last year, the blog was less about daily summaries and more about telling stories. This was a lot more rewarding, but it was also more work. All told, I wrote 2822 words, which is less than normal, but I’d like to think the quality is better.

People stories

  • Alice Goldfuss — This year was Alice’s first LISA trip and first time presenting to a large conference. The reaction to her talk was overwhelmingly positive, and I’m sad I missed it.
  • Kyle Neumann — Kyle is another first-time attendee and loved his experience. He also gave me a lot of good ideas for how to make the first-timer experience better.
  • Jamie Riedesel — A long-time friend of this blog is recognized for contributions to the professional community.

Conference program

  • Government for better or for worse — The Wednesday keynote was delivered by the head of the US Digital Service and the Thursday keynote by a principal technologist at the ACLU. They provided contrasting perspectives on government.
  • The mini-tutorial experiment — Mini-tutorials are now interspersed with the Wednesday-through-Friday conference program instead of being offered as separate half- and full-day sessions.
  • Monday — Before I got into the groove of telling stories, I wrote what was basically a summary of my day.

Vendor articles

  • Midfin — This company just exited stealth and has an interesting product for making internal datacenters more nimble.
  • Xirrus — They donated equipment and engineering effort for the WiFi network.
  • JumpCloud — This company provides cloud-based Directory-as-a-Service, something I’ve been looking for at work.

Recent posts

In lieu of original content, here are a few articles I’ve recently written:

Be careful with shell redirection

Continuing on Friday’s theme of “Ben writes about some Linux basics”, I wanted to share a story of a bug I fixed recently. Our internal documentation server at work had been a little flaky. File copies from the build server would sometimes fail and the web server was being really slow. When I logged in, I noticed the root volume was full.

A full disk is a thing that happens sometimes, especially on small volumes, so I went off in search of the culprit. It turns out that the dead.letter file in root’s home directory was large (several gigabytes, if I recall). For a couple of years, the cron job that runs every 5 minutes to update the documentation page had been trying to send its output as email, which failed since the MTA wasn’t configured, and every failed message got appended to dead.letter.

Why was all of this output trying to be sent via email? Because at some point someone set up the cron job with redirection like so:

2>&1  > /dev/null
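In crontab form, that would have looked something like this (the script path is made up for illustration):

# Runs every 5 minutes; the intent was to silence all of the output.
*/5 * * * * /usr/local/bin/update-docs 2>&1 > /dev/null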

Let’s take a step back for a moment and explain what that means. There are two main output streams for command line programs: standard output (a.k.a. “STDOUT”) and standard error (a.k.a. “STDERR”). The former is generally regular output, whereas the latter is for the “important stuff,” like error messages. By default when you run a command in the terminal, they both go to your terminal so you can see them. But you might not always want to see them, so you might redirect to a file or to /dev/null.
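The basic syntax looks like this (some_command is a stand-in for whatever you’re running):

some_command > output.log                 # STDOUT to a file; STDERR still reaches the terminal
some_command 2> errors.log                # STDERR to a file; STDOUT still reaches the terminal
some_command > /dev/null 2> errors.log    # discard STDOUT, keep the errors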

Back to our woe-beset server. At first glance, you might say “okay, so both STDOUT (1) and STDERR (2) are being sent to /dev/null”. And you would be wrong. STDERR is being sent to wherever STDOUT is being sent, which at the time is still the terminal (or the cron output email), and then STDOUT is being redirected to /dev/null. So what was in place was effectively the same as:

> /dev/null

Changing the order of the redirection to:

> /dev/null 2>&1

kept the dead.letter file from slowly strangling the life out of the disk.
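If you want to see the difference for yourself, any command that writes to STDERR makes a handy demonstration:

ls /nonexistent 2>&1 > /dev/null    # the error message still appears
ls /nonexistent > /dev/null 2>&1    # silence: both streams end up in /dev/null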


I find find(1) to be useful

I recently shared Tom Limoncelli’s excellent critique of the BSD find(1) man page in the documentation channel at work. One of my coworkers responded with “that’s why I just use mlocate”, and that made me very sad. Sure, mlocate is a great tool if you know there’s a file somewhere that has a particular name (assuming it was created before the last time updatedb was run), but that’s about the best you can do.

There are plenty of examples on how to use find out there, but I haven’t written a “here’s a basic thing about Linux” post in a while, so I’ll add to the pile. find takes, at a minimum, a path to find things in. For example:

find /

will find (and print) every file on the system. Probably not all that useful. You can change the path argument to narrow things down a bit, but that only gets you so far. So let’s throw in some additional arguments to constrain it. Maybe you want to find all the JPEG files in your home directory?

find ~ -name '*jpg'

But wait! What if some of them have an uppercase extension?

find ~ -iname '*jpg'

Aw, but I bet some of the pictures have an extension of .jpeg because 8.3 is so 1985. Well, we can combine them in a slightly ugly fashion:

find ~ \( -iname '*jpeg' -o -iname '*jpg' \)

Oh, but you have some directories that end in jpg? (Why you named a directory “bucketofjpg” instead of “pictures” is beyond me.) We can modify it to only look for files!

find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f

Or maybe you’d just like to find those directories so you can rename them later:

find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type d

It turns out you’ve been taking a lot of pictures lately, so let’s narrow this down to ones whose status has changed in the last week.

find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -ctime -7

You can do time filters based on file status change time (ctime), modification time (mtime), or access time (atime). These are in days, so if you want finer-grained control, you can express it in minutes instead (cmin, mmin, and amin, respectively). Unless you know exactly the time you want, you’ll probably prefix the number with a + (more than) or - (less than). The time arguments are probably the ones I use most often.
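For example, to narrow the photo hunt above to files modified in the last half hour:

find ~ \( -iname '*jpeg' -o -iname '*jpg' \) -type f -mmin -30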

Maybe you’re running out of disk space, so you want to find all of the gigantic (let’s define that as greater than 1 gigabyte) files in the log directory:

find /var/log -size +1G

Or maybe you want to find all the files owned by bcotton in /data:

find /data -user bcotton

You can also look for files based on permissions. Perhaps you want to find all of the world-readable files in your home directory to make sure you’re not oversharing.

find ~ -perm -o=r

So far, all we’ve done is print the file paths, which is useful, but sometimes you want to do more. find has a few built-in actions (like -delete), but its true power comes in giving input for other commands to act on. In the simplest case, you can pipe the output to something like xargs. There’s also the -exec action, which allows you to execute more complicated actions against the output. For example, if you wanted to get the md5sum of all of your Python scripts:

find ~ -type f -name '*.py' -exec md5sum {} \;

(Yes, you could pipe to xargs here, too, but that’s not the point.) Note the \; at the end: it marks the end of the command -exec runs, and the backslash keeps the shell from interpreting the semicolon itself. That’s very important.

Warning! You can really cause a world of hurt if you’re not careful with the output of find. Files that contain spaces or other special characters might cause unexpected behavior when passed to another command. Be very careful. One way to mitigate your risk is to use -ok instead of -exec. This prompts you before executing each command (which might get tedious if you have a lot of files to process). The -ls action escapes special characters, so that might be useful when piping to another program.
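Another mitigation, if your find and xargs support it (the GNU and BSD versions both do, as far as I know), is to have find separate the paths with null characters and tell xargs to expect that:

find ~ -type f -name '*.py' -print0 | xargs -0 md5sum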

This post only begins to scratch the surface of what find can do. Combining tests with boolean logic can give you incredible flexibility to find exactly the files you’re looking for. Have any favorite find expressions? Share them in the comments!



Upgrading to Fedora 23 and some meaningless torrent stats

Since Fedora 23 was released yesterday, I went ahead and upgraded my desktop over lunch. The process was mostly painless. I followed the instructions for using dnf in Fedora Magazine, but hit a small snag: a few packages were blocked on unmet requirements. So I removed an old kernel-devel package and gstreamer-plugins-ugly. But I still got this:

package kf5-kdesu-5.15.0-2.fc23.x86_64 requires kf5-filesystem >= 5.15.0, but none of the providers can be installed.

That’s not great, because you can’t remove that package without also removing KDE Plasma. Taking the --best off of the dnf invocation fixed it, without any weird upgrade issues (the --best option supposedly cancels the download if a package can’t be upgraded, but everything seems good after the fact).
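If memory serves, the invocation went something like this (a sketch, not a quote of the Fedora Magazine instructions, so the exact flags may differ):

sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=23 --best    # dropping --best let the transaction resolve
sudo dnf system-upgrade reboot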

Since I don’t have any great tales of technical prowess to share, I thought I’d comment on the torrents. Measuring usage of an open source operating system is a really tricky thing, so I thought I might see what the torrents tell us. Keep in mind that torrents are probably a terrible way of measuring popularity, too. I’m just going to assume that most people who torrent ISOs are only torrenting the ones they actually use (instead of me, where I torrent several just to be a good citizen).

Here are my seeding ratios for Fedora 22:

Flavor        i686    x86_64
KDE           16.1    32.9
Security      8.02    13.6
Workstation   24.8    31.2
Server        10.3    15.0

The “ratio ratio,” as I call it, is the x86_64 seeding ratio divided by the i686 one (for KDE, that’s 32.9 / 16.1 ≈ 2.04):

Flavor        x86_64:i686
KDE           2.04
Security      1.70
Workstation   1.26
Server        1.46

So what does all of this tell us? Apart from “absolutely nothing!”, it says that KDE users install on x86_64 way more than on i686. Workstation is still really popular on 32-bit machines and overall. The first 32 hours of seeding for Fedora 23 show similar patterns. Yay?

Learning by mashing buttons

I’ve become somewhat of a Slack expert at work (or at least that’s the perception). On the technical side, we’ve been using it for over a year, and you’d think everyone would have a pretty similar degree of familiarity. That turns out not to be the case.

I don’t think I’m particularly smart, but I tend to be pretty good at knowing how to configure various applications. This isn’t some skill honed by years of meticulous study. On the contrary, I learn my way around by smashing keys until something interesting happens.

One of the first things I do when I get a new device or application is to go poking around in all of the menus looking for fun settings to change. When my family got our first computer, I nearly bricked it a few times playing with different settings. I’m much more careful now, but I’m still unwilling to leave settings unexplored.

Occasionally I’ll be reminded that not everyone does this, and it confuses me. Where’s the fun if you don’t play with all the settings to see what happens?

I’m a boothtrovert

Some people are introverts. Some are extroverts. I, apparently, am a boothtrovert. Last week, I attended the All Things Open conference in Raleigh. As part of the Community Moderator team, I took a few shifts in the site’s booth. And wow did I enjoy it!

I’m an equal mix of introvert and extrovert, depending on the day, but something about working the booth really got me going. It helped that I had nothing to sell. All I had to do was talk to people about the site: what they like about it (if they read it), and how maybe they should consider contributing. I’m not sure any of them actually will, but there were definitely a few people who began to light up when I explained that I went from thinking “I have nothing to contribute” to having submitted 20 articles in the space of just a few months.

But my favorite part happened during one of the book signing sessions. We were giving away signed copies of Jason van Gumster’s Blender for Dummies, and the stock we had was quickly exhausted. One guy came up to the table and looked pretty sad when he found out there were none left. So I asked his name and then brought him over to where Jason was standing. I introduced him to Jason and stepped out of the way to let them talk. They conversed for probably ten minutes or so. It wasn’t the same as getting a free and autographed book, but he certainly seemed less sad.

I’m not sure I could ever give sales pitches, but I certainly enjoy the interaction and conversation of booth work (interestingly, I mostly don’t like visiting booths at conferences, in part because I know it will result in having to ignore sales solicitations for the next three months). It looks like I’ll have another opportunity soon, but this time for work. I’ll be interested to see how booth-for-work compares to booth-for-volunteer.

Debian “drops” the Linux Standard Base

LWN recently reported on a decision by the Debian community to drop most support for the Linux Standard Base (LSB). The LSB is an attempt to define a standard for compatibility across Linux distributions. Even binaries should JustWork™ on multiple distributions. At work, I take advantage of this: for many packages we use the same binaries across CentOS, Ubuntu, and SLES.

I can’t blame the Debian maintainers for not wanting to continue putting in the effort. The LSB is a large set of standards, and very few applications have ever been officially LSB certified. In addition, the LSB’s selection of RPM as the package manager puts the spec at odds with Debian anyway.

Debian’s unwillingness to put effort into keeping up with the LSB doesn’t necessarily mean that it will suddenly become incompatible with other distributions. Debian plans to continue complying with the Filesystem Hierarchy Standard, a subset of the LSB that defines what files and directories go where. I suspect this is the key standard for many people who work across distributions anyway.

In the short term, this seems like a non-story. In the longer term, I wonder what will become of the Linux ecosystem. Running a single distribution is herding cats on the best of days. Coordinating standards across multiple distributions, even with common upstreams, is madness. Among the major distributions, there are basically two camps: Debian/Ubuntu and Fedora/RHEL (and RHEL-alikes). They’ve managed not to drift too far apart, though I thought systemd would start that process.

To many, “Linux” (as an OS, not a kernel) is a single entity. Others don’t even realize that Ubuntu and Fedora are in any way related. While reality is (sort of) closer to the former currently, I wonder if we’ll get to a point where it’s closer to the latter. Standards are important but are useful only to the degree that they are complied with. Linux has avoided the competing standards problem so far, but will that remain the case?

Managing an IT team when you’re not technical…or even when you are

I came across a great article by Alison Green called “5 secrets to managing an IT team when you’re not a technical person”. I don’t disagree with anything she said, but I think she sells it a little bit short. The leadership failures that I’ve seen in my career are rarely because the person wasn’t technical enough. If anything, too-technical leaders are a bigger problem.

I’ve known more than one technical leader who focuses too much on the technology, and not the business case for the technology. When a new shiny object comes along, they chase it, leaving projects core to the business to languish. This not only causes short-term harm, but it can lead to longer-term failures to keep up with changes.

As an industry, we often promote good technical people into management without any consideration of their management abilities. Having worked with a wide variety of managers, I can say that I much prefer the non-technical-but-good-at-management managers to the very-good-technically-but-completely-hopeless-as-managers managers. The best managers I’ve worked for have been both, but that’s not as common as one would hope.

In the meantime, all managers of technical groups should read those five secrets and understand them. The best success occurs when the business and technology are working together.


Hacktoberfest

I’m a little late to the game, but over the weekend I heard about Hacktoberfest. Sponsored by DigitalOcean in partnership with GitHub, Hacktoberfest is intended to get people to make contributions to open source projects. While I’ve made contributions before, the prospect of a free t-shirt that I don’t need was enough to get me to submit three pull requests on Saturday.

Wouldn’t it be great if I submitted a pull request every week? Then I looked at my todo list and my calendar and walked it back. I think a pull request (or direct commit to a project I have access to) per month is a reasonable goal. I’ve been meaning to make more contributions to projects for a while, so this may be just the motivation I need.

I need to come up with a catchy name, but I’ll use this blog as a record of what I contribute. In the meantime, if you haven’t signed up for Hacktoberfest, go do that. Happy contributing!