The right of disattribution

While discussing the ttyp0 font license, Richard Fontana and I had a disagreement about its suitability for Fedora. My reasoning for putting it on the “good” list was taking shape as I wrote. Now that I’ve had some time to give it more thought, I want to share a more coherent (I hope) argument. The short version: authors have a fundamental right to require disattribution.

What is disattribution?

Disattribution is a word I invented because the dictionary has no antonym for attribution. Attribution, in the context of open works, means saying who authored the work you’re building on. For example, this post is under the Creative Commons Attribution-ShareAlike 4.0 license. That means you can use and remix it, provided you credit me (Attribution) and also let others use and remix your remix (ShareAlike). On the other hand, disattribution would say something like “you can use and remix this work, but don’t put my name on it.”

Why disattribution?

There are two related reasons an author might want to require disattribution. The first is that either the original work or potential derivatives are embarrassing. Here’s an example: in 8th grade, my friend wrote a few lines of a song about the destruction of Pompeii. He told me that I could write the rest of it on the condition that I not tell anyone he had anything to do with it.

The other reason is more like brand protection. Or perhaps avoiding market confusion. This isn’t necessarily due to embarrassment. Open source maintainers are often overworked. Getting bugs and support requests from a derivative project because the user is confused is a situation worth avoiding.

Licenses that require attribution are uncontroversial. If we can embrace the right of authors to require attribution, we can embrace the right of authors to require disattribution.

Why not disattribution?

Richard’s concerns seemed less philosophical and more practical. Open source licenses are generally concerned with copyright law. Disattribution, particularly for the second reason above, is closer to trademark law. But licenses are the tool we have available; don’t be surprised when we ask them to do more than they should.

Perhaps the bigger concern is the constraint it places on derivative works. The ttyp0 license requires not using “UW” as the foundry name. Richard’s concern was that two-letter names are too short. I don’t agree. There are plenty of ways to name a project that avoid one specific word. Even in this specific case, a name like “nuwave” would presumably be acceptable, despite containing “uw”, because it’s an unrelated word.

Excluding a specific word is fine. A requirement that excludes many words, or imposes some other unreasonable constraint, would be the only reason I’d reject such a license.

Other writing: November 2021

What have I been writing when I haven’t been writing here?

Now you see me

  • Compiler s01e08 (podcast) — I talk about contributing technical documentation in open source projects and why you (yes, you!) should contribute.

Stuff I wrote


Stuff I curated


Using variables in Smartsheet task names

I use Smartsheet to generate the Fedora Linux release schedules. I generally copy the previous release’s schedule forward and update the target release dates. But then I have to search for the release number (and the next release number, the previous release number, and the previous-previous release number) to update them. Find and replace is a thing, but I don’t want to do it blindly.

But last week, I figured out a trick to use variables in the task names. This way when I copy a new schedule, I just have to update the number once and all of the numbers are updated automatically.

First you have to create a field in the Sheet Summary view. I called it “Release” and set it to be of the Text/Number type. I put the release number in there.

Then in the task name, I can use that field. What tripped me up at first was that I was trying to do variable substitution like you might do in the Bash shell. But really, what you need to do is string concatenation. So I’d use

="Fedora Linux " + Release# + " release"

This results in “Fedora Linux 37 release” when Release is set to 37. To get the next release, you do math on the variable:

="Fedora Linux " + (Release# + 1) + " release"

This results in “Fedora Linux 38 release” when Release is set to 37. This might be obvious to people who use Smartsheet deeply, but for me, it was a fun discovery. It saves me literally minutes of work every three years.
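Presumably the same concatenation trick works in the other direction, too. I haven’t needed it in a task name yet, but a formula like this should give the previous release:

="Fedora Linux " + (Release# - 1) + " release"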

You can do real HPC in the cloud

The latest Top 500 list came out earlier this week. I’m generally indifferent to the Top 500 these days (in part because China has two exaflop systems that it didn’t submit benchmarks for). But for better or worse, it’s still an important measure for many HPC practitioners. And that’s why the fact that Microsoft Azure cracked the top 10 is such a big deal.

For years, I heard that the public cloud can’t be used for “real” HPC. Sure, you can do throughput workloads, or small MPI jobs as a code test, but once it’s time to do the production workload, it has to be bare metal. This claim has always been wrong. With a public cloud cluster as the 10th most powerful supercomputer* in the world, there’s no question that it can be done.

So the question becomes: should you do “real” HPC in the cloud? For whatever “real” means. There are cases where buying hardware and running it makes sense. There are cases where the flexibility of infrastructure-as-a-service wins. The answer has always been—and always will be—run the workload on the infrastructure that best fits the needs. To dismiss cloud for all use cases is petty gatekeeping.

I congratulate my friends at Azure for their work in making this happen. I couldn’t be happier for them. Most of the world’s HPC happens in small datacenters, not the large HPC centers that tend to dominate the Top 500. The better public cloud providers can serve the majority of the market, the better it is for us all.

Book review: The Address Book

How did your street get its name? When did we start numbering buildings? What does it mean to have an address—or to not have one? If any of these questions are interesting to you, you’ll appreciate The Address Book: What Street Addresses Reveal About Identity, Race, Wealth, and Power by Deirdre Mask.

I first heard about this book on the podcast “Every Little Thing”. Mask was a guest on a recent episode and shared the story of a project to name roads in rural West Virginia. This story was relevant to a memory I had long forgotten. Although I grew up on a named road, we didn’t have a numbered address until 911 service came to the area when I was in early elementary school. Prior to that, addresses were just box numbers on rural routes.

But newly-named and newly-numbered roads are not unique to the US. Mask explores how roads were named and renamed in different places over the centuries. Naming, of course, is an expression of power so names and numbers reflect the power at the time. Even today, there are millions of people who don’t have addresses, which increasingly cuts them off from what we understand as modern society.

I’d love a book of trivia about road names. The Address Book is not that. But it’s a fascinating look at the deeper meaning behind the act of naming.

Zillow’s failure isn’t AI, it’s hubris

Zillow’s recent exit from the house-flipping arena was big news. In business news, the plummeting stock price and looming massive layoff made headlines. In tech circles, the talk was about artificial intelligence, and how Zillow’s algorithms failed them. And while I love me some AI criticism, I don’t think that’s what’s at play here.

Other so-called “iBuyers” haven’t suffered the same fate as Zillow. In fact, they vastly out-performed Zillow from the reporting I heard. Now maybe the competitors aren’t as AI-reliant as Zillow and that’s why. But I think a more likely cause is one we see time and time again: smart people believing themselves too much.

Being smart isn’t a singular value. Domain and context play big roles. And yet we often see people who are very smart speak confidently on topics they know nothing about. (And yes, this post may be an example of that. I’d counter that this post isn’t really about Zillow, it’s about over-confidence, a subject I have a lot of experience with.) Zillow is really good at being a search engine for houses. It’s okay at estimating the value of houses. But that doesn’t necessarily translate to being good at flipping houses.

I’m sure there are ways the algorithm failed, too. But as in many cases, it’s not a problem with AI as a technology, but how the AI is used. The lesson here, as in every AI failure, should be that we have to be a lot more careful with the decisions we trust to computers.

Other writing: October 2021

What have I been writing when I haven’t been writing here?

Stuff I wrote



Stuff I curated


Other writing: September 2021

What have I been writing when I haven’t been writing here?

Stuff I wrote


Open Organization

Stuff I curated


Indiana COVID-19 update: 1 October 2021

Well, here we are again. Indiana’s numbers have been consistently trending downward. I feel comfortable saying we’ve passed the delta peak. The “good” news is that given recent infections—plus people who have been vaccinated—winter will be less peak and more plateau. Let’s look at some graphs from my dashboard.

Cases, hospitalizations, and deaths

The rate of change in cases stopped increasing in early August. By early September, they were dropping week-over-week. With school starting and then Labor Day, there was a potential for a big increase. Thankfully, we did not see that. Even more thankfully, kids 5–11 may be able to get their first shots before October.

Week-over-week (blue) and week-over-two-week (red) differences in new COVID-19 cases

Hospitalizations have been falling steadily in the past few weeks. If the trend holds, we may be below 2,000 by Monday. That would be the first time since August 23. ICU beds and ventilator usage peaked around September 13. Interestingly, the ventilator usage percentage then was higher than during the worst part of last winter. I’m not sure if that’s due to a reduction in capacity or what.

Day-over-day (blue) and week-over-week (red) changes in hospitalizations

Deaths have also peaked. As of right now, it appears that the peak was September 15. However, the lag in reports seems to have increased, so it’s possible that date will shift forward a bit. In any case, the precipitous drop has become less precipitous. The peak daily death toll is near what we saw in spring 2020. I shudder to think how bad things would have been had the delta variant arrived pre-vaccine.

Daily COVID-19 deaths on a logarithmic scale.

The future

The Institute for Health Metrics and Evaluation (IHME) model has varied a lot in the last month or so. The September 1 model run seems to have captured the increase the best, although it predicted a stronger and later peak than what actually occurred. Although the state hasn’t made any changes, I’ve observed more people wearing masks and the state has seen an increase in vaccination. The earlier models took a pessimistic view of behavior, which may explain the difference.


The state has changed to updating its dashboard at 5pm instead of noon. This is ostensibly to allow more time for quality control and to catch missing data. Cynically, I think it’s because they’d been hours late regularly and decided to lean into it. The updates have been less reliable, too.

Given that the briefings are now being done irregularly and without the governor present, I must stick with my conclusion that he has abdicated any claim of leadership. The state seems to have no desire to give a damn about COVID-19.

Plagiarism in music

Last week I read an LA Times article about allegations of plagiarism leveled at Olivia Rodrigo. Rodrigo is a very talented artist (“good 4 u” gets stuck in my head for days at a time), but is she a thief? I haven’t heard the songs mentioned in the article, so I can’t say in this specific case.

But in the general sense, my bar for “plagiarism” in music is pretty high. The entirety of popular music is based on artists incorporating things they’ve heard before to varying degrees. Rob Paravonian’s Pachelbel rant is a great demonstration. I’ll grant that “Canon in D” has long entered the public domain. But imagine if musicians had to wait a century to reuse a few bars of music.

My personal view—which may or may not match copyright law—is that unless it’s taking audience from the previous song/artist, it’s fine. This is similar to one of the factors in fair use. As a concrete example, Vanilla Ice’s “Ice, Ice Baby” definitely takes from Queen & David Bowie’s “Under Pressure”. And that’s fine. The existence of “Ice, Ice Baby” hasn’t stopped anyone from listening to “Under Pressure”.

Cultural works, particularly in music and Internet discourse, rely inextricably on remixing. We should embrace a very permissive paradigm.