Moving the website to Lektor

Years ago, I moved all of funnelfiasco.com (except the blog, which runs on WordPress) from artisanally hand-crafted HTML to a static site generator. At the time, I chose a project called “blatter”, which used jinja2 templates to generate the site. This gave me the opportunity to change basic information across the whole site at once. That’s not something I do often, but it’s a pain when I do.

Unfortunately, blatter was apparently quietly abandoned by the developer. This wasn’t really a problem until Python 2 reached end of life. Fedora (reasonably) retired much of the Python 2 ecosystem. I tried to port it to Python 3, but ran into a few problems. And frankly, the idea of taking on the maintenance burden for a project that hadn’t been updated in years was not at all appealing. So I went looking for something else.

I wanted to find something that used jinja2 in order to minimize the amount of work involved. I also wanted something focused on websites, not blogs specifically. It seems like so many platforms today are blog-first. That’s fine; it’s just not what I want. After some searching and a little bit of trial and error, I ended up selecting Lektor.

The good

Lektor is written (primarily) in Python 3 and uses jinja2 templates, so it hit my most important points. It has a command to run a local webserver for testing. In addition, you can set up multiple server configurations for deployment. So I can have the content sync to my local web server to verify it and then deploy that to my “production” webserver. Builds are destructive, but the deploys are not, which means I don’t have to shoe-horn everything into Lektor.
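
To give a rough idea of what that workflow looks like from the command line, here’s a minimal sketch. The server names are placeholders for whatever is defined in the project’s .lektorproject file, not anything Lektor provides by default.

    # Preview the site locally while editing
    lektor server

    # Build the site, then push it to a named server from the .lektorproject file
    lektor build
    lektor deploy staging      # sync to my local web server for verification
    lektor deploy production   # then push to the "production" webserver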

Another great feature is the ability to programmatically generate thumbnails of images. I’ve made a little bit of use of that so far. In the future, especially if I ever go storm chasing again, I can see myself using that feature a lot more.

Lektor optionally supports writing the page content in markdown. I haven’t done this much since I was migrating pre-written content. I expect new content will be much markdownier. Markdown isn’t flexible enough for a lot of web purposes, but it covers some use cases well. Why write HTML when it’s not needed?

Lektor uses databags to provide input data to templates. I do this with JSON files. Complex operations are a lot easier with those than with the embedded Python data structures that Blatter supported.

If I were interested in translating my site into multiple languages, Lektor has good support for that (including changing URLs). It also has a built-in admin and editing console, which is not something I use, but I can see the appeal.

The bad

Unlike Blatter, Lektor puts contents and templates in separate files. This makes it a little more difficult to special-case a specific site.

It also has a “one directory, one file” paradigm. Directories can have “attachments”, which can include HTML files, but they won’t get processed, so they need to stand alone. This is not such an issue if you’re starting from scratch. Since I’m not, it was more of a headache. You can override the page’s slug, but that also makes certain assumptions.

For the Forecast Discussion Hall of Fame, I wanted to keep URLs as-is. That site has been linked to from a lot of places, and I’d hate to break those inbound links. Writing an htaccess file to redirect to the new URLs didn’t sound ideal either. I ended up writing a one-line patch that passed the argument I needed to the python-slugify library. I tried to do it the right way so that it would be configurable, but that was beyond my skill.

The big downside is that development has ground to a halt. It’s not abandoned, but the development activity happens in spurts. Right now Lektor does what I need it to do, but I worry that at some point I’ll have to make a switch again. I’d like to contribute more upstream, but my skills are not advanced enough for that.

GitHub should stand up to the RIAA over youtube-dl

Earlier this week, GitHub took down the repository for the youtube-dl project. This came in response to a request from the RIAA—the recording industry’s lobbying and harassment body. youtube-dl is a tool for downloading videos. The RIAA argued that this violates the anticircumvention protections of the Digital Millennium Copyright Act (DMCA). While GitHub taking down the repository and its forks is true to the principle of minimizing corporate risk, it’s the wrong choice.

Microsoft—currently the world’s second-most valuable company with a market capitalization of $1.64 trillion—owns GitHub. If anyone is in a position to fight back on this, it’s Microsoft. Microsoft’s lawyers should have a one word answer to the RIAA’s request: “no”. (full disclosure: I own a small number of shares of Microsoft)

The procedural argument

The first reason to tell the RIAA where to stick it is procedural. The RIAA isn’t arguing that youtube-dl is infringing its copyrights or circumventing its protections. It is arguing that youtube-dl circumvents YouTube’s protections. So even if that’s true, it’s YouTube’s problem, not the RIAA’s.

The factual argument

I have some sympathy for the anticircumvention argument. I’m not familiar with the specifics of how youtube-dl works, but it’s at least possible that youtube-dl circumvents YouTube’s copy protection. This would be a reasonable basis for YouTube to take action. Again, YouTube, not the RIAA.

I have less sympathy for the infringement argument. youtube-dl doesn’t induce infringement any more than a web browser or screen recorder does. There are a variety of uses for youtube-dl that are not infringing. Foremost is the fact that some YouTube videos are under a license that explicitly allows sharing and remixing. Archivers use it to archive content. Some people with time-variable Internet billing use it to download videos overnight.

So, yes, youtube-dl can be used to infringe the RIAA’s copyrights. It can also be used for non-infringing purposes. The code itself does not infringe. There’s nothing about it that gives the RIAA a justification to take it down.

youtube-dl isn’t the whole story

youtube-dl provides a focal point, but there’s more to it. Copyright law is now used to suppress creative works instead of promoting them. The DMCA, in particular, favors large rightsholders over smaller developers and creators. It essentially forces sites to act on a “guilty until proven innocent” model. Companies in a position to push back have an obligation to do so. Microsoft has become a supporter of open source; now it’s time to show they mean it.

We should also consider the risks of consolidation. git is a decentralized system. GitHub has essentially centralized it. Sure, many competitors exist, but GitHub has become the default place to host open source code projects. The fact that GitHub’s code is proprietary is immaterial to this point. A FOSS service would pose the same risk if it became the centralized service.

I saw a quote in this discussion (which I can’t find now) that said “code is free, infrastructure is not.” And while projects self-hosting their code repository, issue tracker, etc. may be philosophically appealing, that’s not realistic. Software-as-a-Service has lowered the barrier for starting projects, which is a good thing. But it doesn’t come without risk, as we are now seeing.

I don’t know what the right answer is here. I know the answer won’t be easy. But both this specific case and the general issues it highlights are important for us to think about.

Linux distros should be opinionated

Last week, the upstream project for a package I maintain was discussing whether or not to enable autosave in the default configuration. I said if the project doesn’t, I may consider making that the default in the Fedora package. Another commenter said “is it a good idea to have different default settings per packaging ? (ubuntu/fedora/windows)”

My take? Absolutely yes. As I said in the post on “rolling stable” distros, a Linux distribution is more than an assortment of packages; it is a cohesive whole. This necessarily requires changes to upstream defaults.

Changes to enable a functional, cohesive whole are necessary, of course. But there’s more to it than “it works”; there’s also “it works the way we think it should.” A Linux distribution targets a certain audience (or audiences). Distribution maintainers have to make choices so that the distro meets that audience’s needs. They are not mindless build systems.

Of course, opinions do have a cost. If a particular piece of software works differently from one distro to another, users get confused. Documentation may be wrong, sometimes harmfully so. Upstream developers may have trouble debugging issues if they are not familiar with the distro’s changes.

Thus, opinions should be implemented judiciously. But when a maintainer has given a change due thought, they should make it.

What do “rolling release” and “stable” mean in the context of operating systems?

In a recent post on his blog, Chris Siebenmann wrote about his experience with Fedora upgrades and how, because of some of the non-standard things he does, upgrades are painful for him. At the end, he said “What I really want is a rolling release of ‘stable’ Fedora, with no big bangs of major releases, but this will probably never exist.”

I’m sympathetic to that position. Despite the fact that developers have worked to improve the ease of upgrades over the years, they are inherently risky. But what would a stable rolling release look like?

“Arch!” you say. That’s not wrong, but it also misses the point. What people generally want is new stuff, so long as it doesn’t cause surprises. Rolling releases don’t prevent that; they spread it out. With Fedora’s policy, for example, major changes (should) happen as the release is being developed. Once it’s out, you get bugfixes and minor enhancements, but no big changes. You get the stability.

On the other hand, you can run Fedora Rawhide, which gets you the new stuff as soon as it’s available, but you don’t know when the big changes will come. And sometimes, the changes (big and little) are broken. It can be nice because you get the newness quickly. And the major changes (in theory) don’t all come at once.

Rate of change versus total change

For some people, it’s the distribution of change, not the total amount of change that makes rolling releases compelling. And in most cases, the changes aren’t that dramatic. When updates are loosely-coupled or totally independent, the timing doesn’t matter. The average user won’t even notice the vast majority of them.

But what happens when a really monumental change comes in? Switching the init system, for example, is kind of a big deal. In this case, you generally want the integration that most distributions provide. It’s not just that you get an assortment of packages from your distribution, it’s that you get a set of packages that work together. This is a fundamental feature for a Linux distribution (excepting those where do-it-yourself is the point).

Applying it to Fedora

An alternate phrasing of what I understand Chris to want is “release-quality packages made available when they’re ready, not on the release schedule.” That’s perfectly reasonable. And in general, that’s what Fedora wants Rawhide to be. It’s something we’re working on, particularly with the ability to gate Rawhide updates.

But part of why we have defined releases is to ensure the desired stability. The QA team and other testers put a lot of effort into automated and manual tests of releases. It’s hard to test against the release criteria when the target keeps shifting. It’s hard to make the distribution a cohesive whole instead of a collection of packages.

What Chris asks for isn’t wrong or unreasonable. But it’s also a difficult task to undertake and sustain. This is one area where ostree-based variants like Fedora CoreOS (for servers/cloud), Silverblue (for desktops), and IoT (for edge devices) bring a lot of benefit. The big changes can be easily rolled back if there are problems.

How I broke KDE Plasma by changing my shell (and also writing a bad script)

My friends, I’d like to tell you the story of how I spent Monday morning. I had a one-on-one with my manager and a team coffee break to start the day. Since the weather was so nice, I thought I’d take my laptop and my coffee out to the deck. But when I tried to log in to my laptop, all I had was the mouse cursor. Oh no!

I did my meeting with my manager on my phone and then got to work trying to figure out what went wrong. I saw some errors in the journal, but it wasn’t clear to me what was wrong.

Aug 31 09:23:00 fpgm akonadi_control[5155]: org.kde.pim.akonadicontrol: ProcessControl: Application '/usr/bin/akonadi_googlecalendar_resource' returned with exit code 253 (Unknown error)
Aug 31 09:23:00 fpgm akonadi_googlecalendar_resource[6249]: QObject::connect: No such signal QDBusAbstractInterface::resumingFromSuspend()
Aug 31 09:23:00 fpgm akonadiserver[5159]: org.kde.pim.akonadiserver: New notification connection (registered as Akonadi::Server::NotificationSubscriber(0x7f4d9c010140) )
Aug 31 09:23:00 fpgm akonadi_googlecalendar_resource[6249]: Icon theme "breeze" not found.
Aug 31 09:23:00 fpgm akonadiserver[5159]: org.kde.pim.akonadiserver: Subscriber Akonadi::Server::NotificationSubscriber(0x7f4d9c010140) identified as "AgentBaseChangeRecorder - 94433180309520"
Aug 31 09:23:01 fpgm akonadi_googlecalendar_resource[6249]: kf5.kservice.services: KMimeTypeTrader: couldn't find service type "KParts/ReadOnlyPart"
                                                           Please ensure that the .desktop file for it is installed; then run kbuildsycoca5.

What broke

Before starting the weekend, I had updated all of the packages, as I normally do. But none of the updated packages seemed relevant. I hadn’t done any weird customization. As “pino|work” on IRC and I tried to work through it, I remembered that I had added a startup script to set the XDG_DATA_DIRS environment variable in the hopes of getting installed flatpaks to show up in the menu. (Hold on to this thought; it becomes important again later.)

I moved that script out of the way and got things cleaned up (by removing the plasma-org.kde.plasma.desktop-appletsrc and plasmashellrc files). Looking at the script, I realized it had a syntax error: a stray single quote had ended up in the line that set XDG_DATA_DIRS. Yay! That’s easy enough to fix.

Why it broke

Except it was still broken. It was broken because the script referred to XDG_DATA_DIRS, but that variable was undefined. Why didn’t it inherit a value? Ohhhhh, because fish doesn’t read the /etc/profile.d directory.

So remember how I did this in order to get Flatpaks to show up in my start menu? I could have sworn they did at some point. It turns out that I was right. The flatpak package installs the scripts into /etc/profile.d, which fish doesn’t read. So when I switched my shell from Bash to fish a while ago, those scripts never ran at login.
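
For reference, the snippet flatpak drops into /etc/profile.d does roughly the following. This is a simplified sketch from memory, not the exact contents of the file:

    # /etc/profile.d/flatpak.sh (simplified sketch)
    # Add the flatpak export directories so installed apps' .desktop files
    # can be found by the menu
    export XDG_DATA_DIRS="${XDG_DATA_DIRS:-/usr/local/share:/usr/share}:/var/lib/flatpak/exports/share:$HOME/.local/share/flatpak/exports/share"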

How I “fixed” it

To fix my problem, I could have written scripts that work with fish. Instead, I decided to take the easy route and change my shell back to bash. But in order to keep using fish, I set Konsole to launch fish instead of bash. Since I only ever do a graphical login on my desktop, that’s no big deal, and it avoids a lot of headache.
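
Concretely, the switch back looked something like this (the Konsole part is a profile setting rather than a command, so it’s shown as a comment):

    # Set my login shell back to bash
    chsh -s /bin/bash

    # In Konsole: Settings -> Edit Current Profile -> Command: /usr/bin/fish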

The bummer of it all is that I lost some of the configuration I had in the files I deleted. But apparently the failed logins made it far enough to modify the files in a way that Plasma doesn’t like. At any rate, I didn’t do much customization, so I didn’t lose much either.

Removing unmaintained packages from an installed system

Earlier this week, Miroslav Suchý proposed removing retired packages as part of the Fedora upgrade process (editor’s note: the proposal was withdrawn after community feedback). As it stands right now, if a package is removed in a subsequent release, it will stick around on upgraded systems. For example, I have 34 packages on my work laptop from Fedora 28 (the version I first installed on it) through Fedora 31. The community has been discussing this, with no clear consensus.
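
If you’re curious what that looks like on your own system, dnf can list installed packages that no longer exist in any enabled repository. The grep below is just a rough illustration for spotting old dist tags, not an authoritative check:

    # Installed packages that aren't available from any enabled repo
    dnf list extras

    # A rougher view: installed packages still carrying an old Fedora dist tag
    rpm -qa | grep -E '\.fc(28|29|30)\.'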

I’m writing this post to explore my own thoughts. It represents my opinions as Ben Cotton: Fedora user and contributor, not as Ben Cotton: Fedora Program Manager.

What does it mean for a package to be “maintained”?

This question is the heart of the discussion. In theory, a maintained package means that there’s someone who can apply security and other bug fixes, update to new releases, etc. In practice, that’s not always the case. Anyone who has had a bug closed due to the end-of-life policy will attest to that.

The practical result is that as long as the package continues to compile, it may live on for a long time after the maintainer has given up on it. This doesn’t mean that it will get updates, it just means that no one has had a reason to remove it from the distribution.

On the other hand, the mere fact that a package has been dropped from the distribution doesn’t mean that something is wrong with it. If upstream hasn’t made any changes, the “unmaintained” version is just as functional as a maintained version would be.

What is the role of a Linux distribution?

Why do Linux distributions exist? After all, people could just download the software and build it themselves. But that’s asking a lot of most people. Even among those who have sufficient technical knowledge to compile all of the different packages in different languages with different quirks, few have the time or desire to do so.

So a distribution is, in part, a sharing of labor. By dividing the work, we reduce our own burden and democratize access.

A distribution is also a curated collection. It’s the set of software that the contributors say is worth using, configured in the “right way”. Sure, there are a dozen or so web browsers in the Fedora repos, but that’s not the entirety of web browsers that exist. Just as an art museum may have several similar paintings, a distribution might have several similar packages. But they’re all there for a reason.

To remove or not to remove?

The question of whether to remove unmaintained packages then becomes a balance between the shared labor and the curation aspects of a distribution.

The shared labor perspective supports not removing packages. If the package is uninstalled at update, then someone who relies on that package now has to download and build it themselves. It may also cause user confusion if something that previously worked suddenly stops, or if a package that exists on an upgraded system can’t be installed on a new one.

On the other hand, the curation perspective supports removing the package. Although there’s no guarantee that a maintained package will get updates, there is a guarantee that an unmaintained package won’t. Removing obsolete packages at upgrade also means that the upgraded system more closely resembles a freshly-installed system.

There’s no right answer. Both options are reasonable extensions of fundamental purposes of a distribution. Both have obvious benefits and drawbacks.

Pick a side, Benjamin

If I have to pick a side, I’m inclined to side with the “remove the packages” argument. But we have to make sure we’re clearly communicating what is happening to the user. We should also offer an easy opt-out for users who want to say “I know what you’re trying to do here, but keep these packages anyway.”

Cherrytree updates in COPR

For Fedora 31 users, I have updated the cherrytree package in my COPR to the latest upstream release (0.39.2). For Fedora 32 and rawhide users…well, there’s a problem. As you may know, Python 2 has reached end of life. And that means most of Python 2 is gone in Fedora 32. I tried to build the dependency chain in COPR, but the yaks kept getting hairier and hairier. Instead, I’ve packaged the C++ rewrite as cherrytree-future.

cherrytree-future is available for Fedora 31, Fedora 32, and rawhide. I have packages for x86_64 and aarch64 for all three versions and for armhfp on Fedora 31 and 32 (the rawhide builder was out of disk space, oops!).

Because cherrytree-future is still pre-release, I intentionally did not have the package obsolete cherrytree. If you’re upgrading from Fedora 31 to Fedora 32, you will first have to remove cherrytree and then install cherrytree-future.
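
In practice, that’s just the following (assuming you’ve already enabled the COPR repository that provides cherrytree-future):

    sudo dnf remove cherrytree
    sudo dnf install cherrytree-future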

I have been using cherrytree-future for the last day and it’s working well for me so far. If you encounter any problems with the package (e.g. a missing dependency), please file an issue on my GitHub repo. If you encounter problems with the program itself, file the bug upstream.

Once upstream cuts an official release of the rewrite, I’ll work on getting it into the official repos.

[solved] Can’t log in to KDE on Fedora 31

Earlier today, I ran dnf update on my laptop, as I do regularly. After rebooting, I couldn’t log in. When I typed in my user name and password, it almost immediately returned to the login screen. Running startx from the command line failed, too. I spent an hour or two trying to diagnose the problem. There were a lot of distracting messages in the xorg log.

The problem turned out to be that the startkde command was no longer on my machine. It seems upgrading from version 5.16 to 5.17 of the plasma-workspace package removes startkde in favor of startplasma-x11. Creating a symlink fixed it as a workaround.

This is reported as bug #1785826, and I’m sure Rex and the rest of the Fedora KDE team will have a suitable fix out soon. In the meantime, creating a symlink appears to be the best way to fix it.
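
For the record, the workaround is a single symlink so that anything looking for startkde finds the new command instead (this assumes both commands live in /usr/bin):

    sudo ln -s /usr/bin/startplasma-x11 /usr/bin/startkde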

Why the symlink works

When an X session starts, it looks in a few different places to see what should be run. One of those places is /etc/X11/xinit/Xclients. This file checks for a preferred desktop environment. If one isn’t specified, it works through a list trying to find one that works. It does this by looking for the specific desktop environment’s executable.

Since startkde no longer exists, it had no way of checking for KDE Plasma. I don’t have any other desktop environments installed on this machine, so there was no other desktop environment to fall back to. I suspect if GNOME were installed, it would have logged me into GNOME instead, at least when running startx.

So another fix would be to replace instances of startkde with startplasma-x11 in the Xclients file (similarly if you have that file in your home directory). However, this leaves anything else that might check for the existence of startkde in the lurch. (I don’t know if anything does).
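
If you prefer that route, something along these lines would do it. Treat this as a sketch rather than a tested fix, and keep the backup it creates:

    sudo sed -i.bak 's/startkde/startplasma-x11/g' /etc/X11/xinit/Xclients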

There are probably more options for fixing this out there; it is very much not my area of expertise. I’d have to say that this was the most frustrating issue I’ve had to debug in a long time, in part because it took me a while to even know where the problem was. The fact that moving my ~/.kde directory didn’t result in a new one being created told me that the failure happened pretty early in the process.

What distractions did I see?

In trying to diagnose the issue, I got distracted by a variety of error messages:

  • xf86EnableIOPorts: failed to set IOPL for I/O (Operation not permitted)
  • /dev/fb0: permission denied
  • gkr-pam: unable to locate daemon control file
  • pam_kwallet5: couldn't open file

Book review: People Powered

Jono Bacon knows something about communities. He wrote the book on it, in fact. And now he has written another book. People Powered is a guide for how companies can create and curate communities.

I often see companies try to start what they call “communities”. In reality, they are ways for the company to get free labor that provide no real benefit to the participants. But it doesn’t have to be that way. A community that doesn’t benefit the sponsoring company is not likely to continue receiving sponsorship. But if there’s no benefit to the community members, the community will not thrive. Only when everyone involved gets value from the community will it be vibrant.

A community mission is different than your business vision, but tightly wound around it.

All too often, books like this prescribe the One True Way™. Bacon does not do that. He fills the book with many things the reader should do, but he also makes it clear that there are many right ways to run a community, just as there are many wrong ways.

People Powered is a starting point, not an answer. As I was reading it, I thought “this is a good set of recipes”. Further on, Bacon used the same metaphor. Curse you, Jono! But it’s an apt metaphor. The book presents advice and knowledge based on Bacon’s 20 years of community management. But each community has specific needs, so the reader is encouraged to selectively apply the most relevant parts. And in the tradition of open source, plans should be iterative and evolve to meet the changing needs of communities. As with any good recipe, the book provides a starting point; the cook makes adjustments to taste.

If I could sum up People Powered in two words, I would pick “be intentional.” Given two more words, I’d add “be selective.” People are often tempted to do all the things, to be all things to all people. And while that may be in the future of a community, getting started requires a more specific focus on what will (and more importantly, what won’t) be done.

People Powered is full of practical advice (including a lot of calls-to-action to find resources on jonobacon.com). But it also contains more philosophical views. Bacon is not a psychologist, but he has made a study of psychology and sociology over the years. This informs the theoretical explanations behind his practical steps. It also guides the conceptual models for communities that he lays out over the course of the book. And to prove that it’s a Jono Bacon book, it includes a few references to behavioral economics and several to Iron Maiden.

I really enjoyed this book. Some of it was obvious to me, given my community leadership experience (admittedly, I’m not the target audience), but I still got a lot of value from it. Chapter 9 (Cyberspace and Meatspace: Better Together) particularly spoke to me in light of some conversations I’ve had at work recently. People Powered is an excellent book for anyone who is currently leading or planning to lead a community as part of a corporate effort.

People Powered (affiliate link) is published by HarperCollins Leadership and was released yesterday.

Disclosures: 1. I received a pre-release digital review copy of People Powered. I received no other consideration for this post (unless you purchased it from the affiliate link above). 2. Jono Bacon is a personal friend, but I would tell him if his book was awful.

New to Fedora: z

Earlier this month, I attended Chris Waldon’s session “Terminal Velocity: Work faster in your shell” at All Things Open. He covered several interesting tools, one of which is a project called z. z is a smarter version of the cd command. It keeps track of the directories you change to and uses a combination of frequency and recency (“frecency”) to make an educated guess about where you want to go.
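
Usage looks roughly like this once z.sh is sourced from your shell startup file. The path below is just an example; check where the Fedora package actually installs it.

    # Source z (example path; adjust to wherever the package puts z.sh)
    . /usr/share/z/z.sh

    cd ~/src/fedora-docs     # z learns directories as you cd around
    cd ~/projects/website
    z fedora                 # later, jump to the best "frecency" match for "fedora"
    z -l web                 # list matching candidates instead of jumping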

I find this really appealing because I often forget where in the file system I put a directory. And z is written as a shell script, so it’s easy to package and use.

z is now packaged and submitted to rawhide, with updates pending for F31 and F30.