What does it mean for a Linux distribution to be “fresh”?

I recently had a discussion with Luboš Kocman of openSUSE about how distros can monitor their “freshness”. In other words: how close is a distro to upstream? From our perspectives, it’s helpful to know which packages are significantly behind their upstreams. These packages represent areas that might need attention, whether that’s a gentle nudge to the maintainer or an effort to recruit additional volunteers from the community.

The challenge is that freshness can mean different things. The Repology project monitors a large number of distributions and upstreams and reports on their status. But simply comparing the upstream version number to the packaged version number ignores a lot of very important context.
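To make that naive comparison concrete, here’s a minimal sketch that pulls a project’s entries from Repology’s public API and prints the newest known upstream version next to what one repository ships. The endpoint, the field names, and the “fedora_rawhide” repository identifier reflect my reading of Repology’s API documentation and may not be exact, so treat this as an illustration rather than a supported tool. Note that it captures exactly the version-number-only view that the next paragraph argues is insufficient.

```python
# Illustrative sketch only. Assumes the Repology API at
# https://repology.org/api/v1/project/<name> returns a JSON list of package
# entries with "repo", "version", and "status" fields, and that
# "fedora_rawhide" is a valid repository identifier. Verify against the
# current API docs before relying on any of this.
import requests

def freshness_report(project: str, repo: str = "fedora_rawhide") -> None:
    url = f"https://repology.org/api/v1/project/{project}"
    entries = requests.get(url, timeout=30).json()

    # Version(s) Repology currently flags as the newest upstream release.
    newest = [e["version"] for e in entries if e.get("status") == "newest"]
    # Version(s) packaged in the repository we care about.
    packaged = [e["version"] for e in entries if e.get("repo") == repo]

    print(f"{project}: upstream {newest}, {repo} ships {packaged}")

freshness_report("curl")
```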

Updating to the latest upstream version as soon as it comes out is the most obvious definition of “fresh”, but it’s not always the best. Rolling releases (and their users) probably want that. In Fedora, the policy is to avoid “major updates” within a release. Many other release-oriented distributions have a similar policy, with varying degrees of “major”. Enterprise distributions add another wrinkle: they’ll backport security fixes (and sometimes key features), so the difference in version number doesn’t necessarily tell you what’s missing.

Of course, the upstream’s version number doesn’t necessarily tell you much. Semantic versioning is great, but not everyone uses it. And not everyone who uses it uses it well. If a distribution has version 1.4 and upstream released 1.5, is that a lack of freshness or an intentional decision to avoid mid-release compatibility changes?

I don’t have a good answer. This is a hard problem to solve. Something like Repology may be the best we can do with reasonable effort. But I’d love to have a more accurate view of how fresh Fedora packages are within the bounds of policy.

Applying the Potter Stewart rule to release blockers

I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description, and perhaps I could never succeed in intelligibly doing so. But I know it when I see it.

Justice Potter Stewart in Jacobellis v. Ohio

Potter Stewart was talking about hard-core pornography when he wrote “I know it when I see it”, but the principle also applies to release blockers. These are bugs that are so bad that you can’t release your software until they are fixed. Most bugs do not fall into this category. It would be nice to fix them, of course, but they’re not world-stopping.

Making a predictable process

For the most part, you want your release blockers to be defined by specific criteria. It’s even better if automated testing can use the criteria, but that’s not always possible. The point is that you want the process to be predictable.

A predictable blocker process provides value and clarity for everyone involved. Developers know what the rules are so they can take care to fix (or avoid!) potential blockers early. Testers know what tests to prioritize. Users know that, while the software might do something annoying, it won’t turn the hard drive into a pile of melted metal, for example. And everyone knows that the release won’t be delayed indefinitely.

Having a predictable blocker process means not only developing the criteria, but sticking to the criteria. If a bug doesn’t violate a criterion, you can’t call it a blocker. Of course, it’s impossible to come up with every possible reason why you might want to block a release, so sometimes you have to add a new criterion.

Breaking the predictable process

This came up in last week’s Go/No-Go meeting for Fedora Linux 34 Beta. We were considering a bug that caused a delay of up to two minutes before gnome-initial-setup started. I argued that this should be a blocker because of the negative reputational impact, despite the fact that it does not break any specific release criterion. I didn’t want first-time users (or worse: reviewers) to turn away from Fedora Linux because of this bug.

QA wizard Adam Williamson argued against calling it a blocker without developing a specific criterion it violates. To do otherwise, he said, makes a mockery of the process. I understand his position, but I disagree. While there are a variety of reasons to have release blockers, I see the preservation of reputation as the most important. In other words, the point of the process is to prevent the release of software so bad that it drives away current and potential users.

Admittedly, that is an entirely subjective opinion. I expect disagreement on it. But if you accept my premise and acknowledge that you can’t pre-write criteria to catch every possible issue, then it follows that squishy “I know it when I see it” rules are sometimes okay.

Can’t you just make that a criterion?

The best blocking criteria are objective. This aids automated testing and (mostly) avoids arguments about interpretation. But that’s not always possible. Even saying “feature X works” is open to argument over what constitutes “working”.

The challenge lies in how to incorporate “this bug is bad even though there’s no specific rule” without making the process unpredictable. In this case, Adam’s position makes a lot of sense. It’s much easier to write rules to address specific issues and apply them retroactively. Of course, doing that in a go/no-go meeting is perhaps straining the word “predictable” a bit, too.

So what happened?

In the case of this specific bug, we had an escape hatch. The release was likely to be declared no-go for other reasons, so we didn’t need to come to a decision either way. With a candidate fix available, we could just pull that in as a freeze exception and write a new criterion for the next time.

Because of this, I decided not to push my argument. I declined to propose a criterion in-meeting because I wanted to take some time to think about what the right approach is. I also wanted to spend some time thinking about the blocker process holistically. This gives me a blog post to publish (hi!) and some content for a project I’ll be announcing in the near future. In the meantime, I’ve proposed a change to the criteria.