I updated my Indiana COVID-19 dashboard with the latest numbers. It continues to look bad. Hospitalizations are up 15% in the past week. The new daily case record set yesterday is 30% higher than the record set a week ago. We had two days in the last four with 20+ deaths (and bear in mind that the recent daily numbers tend to rise rather significantly in the days that follow).
Most alarming is the latest forecast from the Institute for Health Metrics and Evaluation (IHME). Their 10/15 model forecast is now on the dashboard, and it continues to show a big upswing in fatalities through November and December. The models have pretty consistently underpredicted the death count lately, so the big increase in the last two runs is extra worrisome.
In order to get a better sense of the past and possible future, I plotted the observed deaths with each of the model runs I have in the spreadsheet.
While the early September runs were a little hot, none of them really captured the increase we’ve seen over the past few weeks. The last two runs (10/9 and 10/15) are the first two to fully consider the move to Stage 5, I believe. And it’s clear that the forecast is not looking kindly on that.
Indeed, the Governor’s move to Stage 5 looks worse and worse with each passing day. The state health commissioner announced earlier this week that she and her family tested positive.
I just updated my Indiana COVID-19 dashboard with today’s numbers. They are not pretty. The state set a record for new cases for the third consecutive day. Today’s increase was “only” 6.5%, which is an improvement on the 24% increase that yesterday’s new record represented.
Positive tests or positive individuals?
The number of tests administered is on the rise, but we’re testing far fewer individuals. In fact, we’re testing about 33% fewer people a day than we did at the peak in late August. That the state is focusing on the total positivity rate (5.2% over the last 7 days) as opposed to the rate of positive individuals (9.3% over the last 7 days) strikes me as deceptive.
I attribute the disproportionate increase in tests (compared to people tested) to school systems, at least in part. I know of teachers who have had to take several COVID-19 tests in the past two months in order to return to work after any illness that shares a symptom with COVID-19. While I applaud the schools for taking this seriously, it does lead to some misleading numbers.
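To make the distinction concrete, here’s a toy calculation with made-up numbers (not the state’s actual figures, just round values in the same ballpark):

```python
# Made-up illustrative numbers, not Indiana's actual data.
tests = 20000        # total tests administered in a period
individuals = 11000  # unique people tested (some tested repeatedly)
positives = 1040     # positive results

print(f"total positivity: {positives / tests:.1%}")
print(f"unique-individual positivity: {positives / individuals:.1%}")
```

Same number of positives, but the repeat tests inflate the denominator of the first rate and make things look rosier than they are.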
Deaths and hospitalizations
On Thursday, Indiana hit 20 daily COVID-19 deaths again. Most recently, this mark was tallied on September 26 (21 deaths). The last time before that was June 14. We have not had a day with single-digit deaths since September 21. The only stretch longer than that is the 68 days from March 28 through June 5.
Hospitalizations are up dramatically as well, as I mentioned in the last update. The current levels haven’t been seen since late May. Hospitalizations yesterday were 42% higher than on September 9 and 30% higher than two weeks prior.
The Institute for Health Metrics and Evaluation (IHME) released a new model run late last night. I have added it to the dashboard as IHME 10/9 and hidden the IHME 9/11 lines for readability. A few days in, this run seems to be overestimating Indiana deaths. This is a welcome relief, since the last month’s worth of runs have pretty consistently run too low. Given that deaths tend to be reported over the course of several days, the model may end up being more accurate after all. IHME has not published the updated briefing yet, so they may have more to say about the changes in this week’s run.
IHME’s forecast assumes that states will re-implement restrictions when conditions deteriorate to a certain point. Assuming that is accurate, we’re looking at restrictions coming back in mid-to-late November. Under that scenario, the forecast calls for a peak of 66 deaths per day in early December (with a range of 35-105). That would exceed our April peak by 32%.
However, given Governor Holcomb’s decision to move to Stage 5 in the face of materially unimproved circumstances, I don’t know if we can depend on that. If we do nothing, or further ease the few restrictions left, the model suggests we could be losing over a hundred Hoosiers a day in late December.
I haven’t made any changes to my Indiana COVID-19 plots since the last update, but I wanted to comment on some of the trends. The Governor announced a week ago today that the state would move to Stage 5 of our response. The reduced restrictions took effect on Saturday.
It’s far too early to draw any causal conclusions. Nonetheless, I find it interesting that fate seems to be saying “I’ll show you!” In the last six days, the trend in daily deaths is upward. Saturday, Sunday, and Monday have averaged an increase of 7 deaths over the prior week (although the two-week comparisons are much noisier). The week-over-week change in cases is riding a five-day positive run. This is the first stretch longer than three days since early August. Normally the new case count varies wildly in both directions, so it’s unusual to see a consistent run like this.
The state’s dashboard hints at an upward trend in the positive test rate again. Hospitalizations are up 16% (135 patients) in the past week. This trend has continued fairly steadily for the past week and a half.
What concerns me most is the model verification. The last few weeks of IHME forecasts were initially running a bit high, but in the last few days, they’re now under-predicting the daily death counts. This could suggest that the bad scenario predicted for December will be worse than forecast. It also may not. This is a short window, so we’ll have to see how trends hold.
As I said at the beginning, these bad trends in Indiana’s data cannot be tied to the move to Stage 5. But it does suggest that it was a bad decision. As my friend Renee wrote today, it’s less that things have improved and more that we’ve just grown accustomed to things being bad.
Well it took approximately forever, but I finally got around to updating the back end tech for my website (post coming soon!). That means I can catch up on a backlog of Forecast Discussion Hall of Fame nominees.
I have added four new entries to the Social Media Foyer: tweets from NWS offices in Atlanta, Paducah, and Pittsburgh, as well as a Facebook post from Nashville. Plus: a Public Information Statement that includes a map, a Santa Claus watch, forecasters in Juneau telling you to go outside, a sweet discussion with no tricks, an “I’m only doing this because I have to”, a not-so-super Mario, and monster wrestling. Because you’ve all been so patient, there’s also a new entry in the ISIXTYFIVE section where he comes to the conclusion that he sucks.
Some of these have been sitting in my inbox since January 2018, so I apologize for being slow to add them. You get what you pay for sometimes.
Like many of you, COVID-19 has weighed heavily on me in 2020. Part of the weight is the uncertainty of it all. While we seem to have a reasonable knowledge now of how to minimize spread and avoid fatality (not that we necessarily are doing these things. Wear your damn masks, people), that was not the case in the beginning. And while I’m not a virologist or an epidemiologist, I find having a sense of the numbers helps my unease. So early on, I started keeping track of some basic stats for Indiana COVID-19 deaths in a Google spreadsheet. You can take a look at it now. Below, I explain some of the history and some observations.
At first, I tracked the deaths by day of report. This led to a noticeable pattern: deaths dropped on Sunday and Monday, since the previous day was a weekend. I assume hospitals were slower to report to the local health departments, which were in turn slower to report to the state. To address this, I also had a plot that ignored weekends. For both of these, I had a seven-day moving average to smooth out individual bumps in the data. This made it easier to spot trends.
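As a sketch, the smoothing is just a trailing window average (made-up counts here, and I’m rounding for readability):

```python
# Made-up daily death counts, oldest first.
deaths = [12, 9, 14, 8, 11, 10, 13, 15, 11, 16]

# Trailing 7-day moving average, defined once a full week of data exists.
avg7 = [round(sum(deaths[i - 6:i + 1]) / 7, 1) for i in range(6, len(deaths))]
print(avg7)  # [11.0, 11.4, 11.7, 12.0]
```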
After a while, though, I realized that the deaths reported on any given day could represent deaths that occurred on many days. Realizing this, I cleared out the old data and went through each day on the Indiana COVID-19 dashboard. The state makes it easy to see when past days have new deaths added, so keeping that up to date is straightforward. I plotted the daily deaths on linear and log scales with 7-day moving averages. Those first two graphs have remained basically unchanged since.
It’s also worth noting that the state’s dashboard has improved dramatically since the early days. This includes a moving average for all of the reported metrics.
Even without relying on day-of-report for tracking deaths, there seemed to be a rough periodicity to the daily death counts. I won’t try to come up with an explanation. But it was clear that comparing day-to-day didn’t necessarily give an accurate picture. So I started tracking week-over-week and week-over-two-week death counts. This, I figured, gives a better picture of the trend. If the differences are consistently negative, that means we’re heading in the right direction. If the differences are consistently positive, that’s a bad sign.
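In spreadsheet terms, this is just a lagged difference. A minimal sketch with made-up numbers:

```python
# Made-up daily death counts covering two weeks, oldest first.
deaths = [12, 9, 14, 8, 11, 10, 13,   # week 1
          15, 11, 16, 9, 14, 12, 17]  # week 2

# Week-over-week difference: today's count minus the count 7 days earlier.
# Consistently positive values mean things are getting worse.
wow = [deaths[i] - deaths[i - 7] for i in range(7, len(deaths))]
print(wow)  # [3, 2, 2, 1, 3, 2, 4]
```

In this toy example every difference is positive, which is exactly the bad sign described above.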
After a while, I decided to start tracking cases in the same way. The state’s dashboard makes this more difficult. The graphs don’t indicate when dates have changed, although in daily checks I’ve routinely observed changes of 5-10 cases as far back as 2-3 weeks. The state does make data available via downloadable spreadsheet, so I’ve started using that instead. It’s just less convenient (especially on a weekend when I am sometimes doing it from my phone).
Most recently (as in the last two days), I’ve started tracking the Institute for Health Metrics and Evaluation’s (IHME) forecasts. I’d checked their website pretty regularly in the beginning, but now that we’ve reached a sort of terribleness equilibrium, I haven’t. But given the model trends that are suggesting a really terrible Christmas for a lot of people, I thought it would be worth paying attention to.
George E.P. Box said “all models are wrong, but some are useful”. You don’t earn a meteorology degree without learning this lesson. So in order to see how useful the models are, I’m comparing their forecast to the actual deaths.
This is where it gets pretty squishy. To de-noise the data a little bit, I’m comparing the models to a three-day average. I picked that because that’s what IHME says they use to smooth the observed data on their website. But their smoothed numbers don’t quite match mine, so I don’t really know.
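Roughly, the comparison looks like this. The numbers are made up, and the centered 3-day average is my assumption about what IHME means by smoothing:

```python
# Made-up numbers for illustration; not actual IHME output.
observed = [10, 14, 9, 12, 16, 11, 13]   # daily observed deaths
forecast = [11, 12, 12, 13, 13, 14, 14]  # model forecast for the same days

# Centered 3-day average of the observations to knock down day-to-day noise.
smoothed = [sum(observed[i - 1:i + 2]) / 3 for i in range(1, len(observed) - 1)]

# Forecast error against the smoothed series; positive means the model ran high.
errors = [round(f - s, 1) for f, s in zip(forecast[1:-1], smoothed)]
print(errors)  # [1.0, 0.3, 0.7, 0.0, 0.7]
```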
At any rate, IHME seems to update about once every week or so. That graph would get messy pretty quickly. My plan is to keep the four most recent model runs and the first run in prior months just to get a feel for how much the model forecasts are improving. I haven’t gone back to add historical model runs beyond the few I’ve currently added. I may end up doing that at some point, but probably not. I’m not particularly interested in whether or not a model from April correctly predicted December. I care if last week’s forecast looks like it has a good handle on things.
Indiana’s daily death rate has been remarkably consistent over time. With the exception of early August, when we saw a bump, we’ve averaged around 9 deaths per day since late June. This is better than the quick increases we saw in April, when the daily increases were twice what the daily totals are now. But considering that early IHME model runs had the rate going to zero in May (if I recall correctly), 10 a day is pretty disheartening.
Hospitals and local officials are a little slow to report deaths. It’s not uncommon for a day’s count to double from the initial report in the days following. It’s gotten to the point where I generally don’t enter a day’s deaths until the next day in order to not skew the end of the graph.
The week-over-week differences in new cases are surprisingly volatile. Just a few days ago, the one-week comparison swung from +359 on 14 September to -91 on 15 September. The two-week comparison went from -376 on 9 September to +445 on 10 September. Just looking at the graph, the volatility has seemingly worsened over time.
I try to update the spreadsheet every day, generally in the early afternoon, as the state dashboard updates at noon. At the moment, I don’t have any plans to make significant changes to what I track or how I graph it. If I do, I’ll post here. I have briefly considered writing some tooling to parse and plot all of the input data, but the spreadsheet works well enough for now. I have plenty of other things to occupy my time.
Last week, the upstream project for a package I maintain was discussing whether or not to enable autosave in the default configuration. I said if the project doesn’t, I may consider making that the default in the Fedora package. Another commenter said “is it a good idea to have different default settings per packaging ? (ubuntu/fedora/windows)”
My take? Absolutely yes. As I said in the post on “rolling stable” distros, a Linux distribution is more than an assortment of packages; it is a cohesive whole. This necessarily requires changes to upstream defaults.
Changes to enable a functional, cohesive whole are necessary, of course. But there’s more than “it works”, there’s “it works the way we think it should.” A Linux distribution targets a certain audience (or audiences). Distribution maintainers have to make choices to make the distro meet that audience’s needs. They are not mindless build systems.
Of course, opinions do have a cost. If a particular piece of software works differently from one distro to another, users get confused. Documentation may be wrong, sometimes harmfully so. Upstream developers may have trouble debugging issues if they are not familiar with the distro’s changes.
Thus, opinions should be implemented judiciously. But when a maintainer has given a change due thought, they should make it.
In a recent post on his blog, Chris Siebenmann wrote about his experience with Fedora upgrades and how, because of some of the non-standard things he does, upgrades are painful for him. At the end, he said “What I really want is a rolling release of ‘stable’ Fedora, with no big bangs of major releases, but this will probably never exist.”
I’m sympathetic to that position. Despite the fact that developers have worked to improve the ease of upgrades over the years, they are inherently risky. But what would a stable rolling release look like?
“Arch!” you say. That’s not wrong, but it also misses the point. What people generally want is new stuff so long as it doesn’t cause surprises. Rolling releases don’t prevent surprises; they spread them out. With Fedora’s policy, for example, major changes (should) happen as the release is being developed. Once it’s out, you get bugfixes and minor enhancements, but no big changes. You get the stability.
On the other hand, you can run Fedora Rawhide, which gets you the new stuff as soon as it’s available, but you don’t know when the big changes will come. And sometimes, the changes (big and little) are broken. It can be nice because you get the newness quickly. And the major changes (in theory) don’t all come at once.
Rate of change versus total change
For some people, it’s the distribution of change, not the total amount of change, that makes rolling releases compelling. And in most cases, the changes aren’t that dramatic. When updates are loosely coupled or totally independent, the timing doesn’t matter. The average user won’t even notice the vast majority of them.
But what happens when a really monumental change comes in? Switching the init system, for example, is kind of a big deal. In this case, you generally want the integration that most distributions provide. It’s not just that you get an assortment of packages from your distribution, it’s that you get a set of packages that work together. This is a fundamental feature for a Linux distribution (excepting those where do-it-yourself is the point).
Applying it to Fedora
An alternate phrasing of what I understand Chris to want is “release-quality packages made available when they’re ready, not on the release schedule.” That’s perfectly reasonable. And in general, that’s what Fedora wants Rawhide to be. It’s something we’re working on, particularly with the ability to gate Rawhide updates.
But part of why we have defined releases is to ensure the desired stability. The QA team and other testers put a lot of effort into automated and manual tests of releases. It’s hard to test against the release criteria when the target keeps shifting. It’s hard to make the distribution a cohesive whole instead of a collection of packages.
What Chris asks for isn’t wrong or unreasonable. But it’s also a difficult task to undertake and sustain. This is one area where ostree-based variants like Fedora CoreOS (for servers/cloud), Silverblue (for desktops), and IoT (for edge devices) bring a lot of benefit. The big changes can be easily rolled back if there are problems.
My friends, I’d like to tell you the story of how I spent Monday morning. I had a one-on-one with my manager and a team coffee break to start the day. Since the weather was so nice, I thought I’d take my laptop and my coffee out to the deck. But when I tried to log in to my laptop, all I had was the mouse cursor. Oh no!
I did my meeting with my manager on my phone and then got to work trying to figure out what went wrong. I saw some errors in the journal, but it wasn’t clear to me what was wrong.
Aug 31 09:23:00 fpgm akonadi_control: org.kde.pim.akonadicontrol: ProcessControl: Application '/usr/bin/akonadi_googlecalendar_resource' returned with exit code 253 (Unknown error)
Aug 31 09:23:00 fpgm akonadi_googlecalendar_resource: QObject::connect: No such signal QDBusAbstractInterface::resumingFromSuspend()
Aug 31 09:23:00 fpgm akonadiserver: org.kde.pim.akonadiserver: New notification connection (registered as Akonadi::Server::NotificationSubscriber(0x7f4d9c0
Aug 31 09:23:00 fpgm akonadi_googlecalendar_resource: Icon theme "breeze" not found.
Aug 31 09:23:00 fpgm akonadiserver: org.kde.pim.akonadiserver: Subscriber Akonadi::Server::NotificationSubscriber(0x7f4d9c010140) identified as "AgentBaseChangeRecorder - 94433180309520"
Aug 31 09:23:01 fpgm akonadi_googlecalendar_resource: kf5.kservice.services: KMimeTypeTrader: couldn't find service type "KParts/ReadOnlyPart".
Please ensure that the .desktop file for it is installed; then run kbuildsycoca5.
Before starting the weekend, I had updated all of the packages, as I normally did. But none of the updated packages seemed relevant. I hadn’t done any weird customization. As “pino|work” in IRC and I tried to work through it, I remembered that I had added a startup script to set the XDG_DATA_DIRS environment variable in the hopes of getting installed flatpaks to show up in the menu. (Hold on to this thought, it becomes important again later.)
I moved the script out of the way and cleaned things up by removing the plasma-org.kde.plasma.desktop-appletsrc and plasmashellrc files. Looking at the script, I realized I had a syntax error: a stray single quote had ended up in the line that sets XDG_DATA_DIRS. Yay! That’s easy enough to fix.
Why it broke
Except it was still broken. It was broken because I referred to XDG_DATA_DIRS but it was undefined. Why didn’t it inherit it? Ohhhhh because fish doesn’t use the /etc/profile.d directory.
So remember how I did this in order to get Flatpaks to show up in my start menu? I could have sworn they did at some point. It turns out that I was right. The flatpak package installs the scripts into /etc/profile.d, which fish doesn’t read. So when I switched my shell from Bash to fish a while ago, those scripts never ran at login.
How I “fixed” it
To fix my problem, I could have written scripts that work with fish. Instead, I decided to take the easy route and change my shell back to bash. But in order to keep using fish, I set Konsole to launch fish instead of bash. Since I only ever do a graphical login on my desktop, that’s no big deal, and it avoids a lot of headache.
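For the record, a fish-native fix might have looked something like this. This is an untested sketch: the export paths are the usual Flatpak locations on Fedora, and the file name is arbitrary.

```fish
# ~/.config/fish/conf.d/flatpak.fish -- hypothetical; I went back to bash instead.
# Give XDG_DATA_DIRS its spec default if it's unset, then append the Flatpak
# export directories so exported .desktop files show up in application menus.
set -q XDG_DATA_DIRS; or set XDG_DATA_DIRS "/usr/local/share:/usr/share"
set -gx XDG_DATA_DIRS "$XDG_DATA_DIRS:$HOME/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share"
```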
The bummer of it all is that I lost some of the configuration I had in the files I deleted. But apparently the failed logins made it far enough to modify the files in a way that Plasma doesn’t like. At any rate, I didn’t do much customization, so I didn’t lose much either.