Like many of you, I’ve felt the weight of COVID-19 in 2020. Part of the weight is the uncertainty of it all. While we now seem to have a reasonable knowledge of how to minimize spread and avoid fatalities (not that we’re necessarily doing these things. Wear your damn masks, people), that was not the case in the beginning. And while I’m not a virologist or an epidemiologist, I find that having a sense of the numbers helps with my unease. So early on, I started keeping track of some basic stats for Indiana COVID-19 deaths in a Google spreadsheet. You can take a look at it now. Below, I explain some of the history and some observations.
Initial work
At first, I tracked deaths by day of report. This led to a noticeable pattern: deaths dropped on Sundays and Mondays, when the preceding day was a weekend. I assume hospitals were slower to report to the local health departments, which were in turn slower to report to the state. To address this, I also had a plot that ignored weekends. For both of these, I used a seven-day moving average to smooth out individual bumps in the data, which made it easier to spot trends.
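The smoothing itself is simple. Here’s a minimal pandas sketch of what it looks like, with hypothetical file and column names standing in for the spreadsheet:

```python
import pandas as pd

# A minimal sketch of the smoothing, assuming the data is exported as
# a CSV with "date" and "deaths" columns (the file and column names
# here are hypothetical stand-ins for the spreadsheet).
df = pd.read_csv("indiana_deaths.csv", parse_dates=["date"])
df = df.sort_values("date").set_index("date")

# Seven-day moving average to smooth out individual bumps.
df["deaths_7day_avg"] = df["deaths"].rolling(window=7).mean()

print(df.tail())
```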
After a while, though, I realized that the deaths reported on any given day could represent deaths that occurred across many days. So I cleared out the old data and went through each day on the Indiana COVID-19 dashboard instead. The state makes it easy to see when past days have new deaths added, so it’s easy to keep that up to date. I plotted the daily deaths on linear and log scales with 7-day moving averages. Those first two graphs have remained basically unchanged since.
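For the curious, those two views look roughly like this (a sketch of my own, not the spreadsheet’s actual charts; same hypothetical file and column names as above):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Plot daily deaths plus the 7-day moving average on linear and log
# scales. File and column names are hypothetical stand-ins.
df = pd.read_csv("indiana_deaths.csv", parse_dates=["date"]).set_index("date")
df["avg7"] = df["deaths"].rolling(window=7).mean()

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, scale in zip(axes, ("linear", "log")):
    ax.plot(df.index, df["deaths"], alpha=0.4, label="daily deaths")
    ax.plot(df.index, df["avg7"], label="7-day average")
    ax.set_yscale(scale)
    ax.set_title(f"{scale} scale")
    ax.legend()
fig.autofmt_xdate()
plt.tight_layout()
plt.show()
```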

It’s also worth noting that the state’s dashboard has improved dramatically since the early days. This includes a moving average for all of the reported metrics.
Spotting trends
Even without relying on day-of-report for tracking deaths, there seemed to be a rough periodicity to the daily death counts. I won’t try to come up with an explanation, but it was clear that comparing day to day didn’t necessarily give an accurate picture. So I started tracking week-over-week and week-over-two-week death counts. This, I figured, gives a better picture of the trend: if the differences are consistently negative, we’re heading in the right direction; if they’re consistently positive, that’s a bad sign.
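My reading of that comparison (the spreadsheet may compute it differently) is each day’s trailing seven-day total against the totals ending one and two weeks earlier, roughly:

```python
import pandas as pd

# A hedged sketch of the week-over-week and week-over-two-week
# comparisons; this is my interpretation, not necessarily the
# spreadsheet's exact formula. File and column names are hypothetical.
df = pd.read_csv("indiana_deaths.csv", parse_dates=["date"]).set_index("date")

weekly = df["deaths"].rolling(window=7).sum()  # trailing 7-day totals
df["week_over_week"] = weekly - weekly.shift(7)
df["week_over_two_weeks"] = weekly - weekly.shift(14)

# Consistently negative differences: heading in the right direction.
# Consistently positive differences: a bad sign.
print(df[["week_over_week", "week_over_two_weeks"]].tail())
```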

After a while, I decided to start tracking cases the same way. The state’s dashboard makes this more difficult: the graphs don’t indicate when past dates have changed, although in daily checks I’ve routinely observed changes of 5-10 cases as far back as 2-3 weeks. The state does make the data available as a downloadable spreadsheet, so I’ve started using that instead. It’s just less convenient (especially on a weekend, when I’m sometimes doing it from my phone).
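One way to spot those retroactive changes, sketched under the assumption that each day’s download is saved as its own CSV (all names hypothetical, and the real export surely differs):

```python
import pandas as pd

# Diff today's download of the state spreadsheet against yesterday's
# saved copy to find dates whose case counts were revised.
# File and column names are hypothetical; the real export may differ.
old = pd.read_csv("cases_2020-09-14.csv", parse_dates=["date"]).set_index("date")
new = pd.read_csv("cases_2020-09-15.csv", parse_dates=["date"]).set_index("date")

changes = new["cases"] - old["cases"].reindex(new.index, fill_value=0)
revised = changes[changes != 0]
print(revised)  # e.g. a date 2-3 weeks back gaining 5-10 cases
```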
Model verification
Most recently (as in the last two days), I’ve started tracking the Institute for Health Metrics and Evaluation (IHME) forecasts. I’d checked their website pretty regularly in the beginning, but now that we’ve reached a sort of terribleness equilibrium, I haven’t. But given that the model trends suggest a really terrible Christmas for a lot of people, I thought it would be worth paying attention to again.
George E.P. Box said “all models are wrong, but some are useful”. You don’t earn a meteorology degree without learning this lesson. So to see how useful the models are, I’m comparing their forecasts to the actual deaths.
This is where it gets pretty squishy. To de-noise the data a little, I’m comparing the models to a three-day average of the observed deaths. I picked that because it’s what IHME says they use to smooth the observed data on their website. But their smoothed numbers don’t quite match mine, so I don’t really know.
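The comparison itself looks something like this; the IHME export format here is invented, and whether the three-day window should be centered or trailing is exactly the ambiguity just mentioned:

```python
import pandas as pd

# Compare an IHME model run against a three-day average of observed
# deaths. File and column names are hypothetical; a centered window
# is assumed here, and IHME's actual smoothing may differ (which
# could explain why their numbers don't quite match mine).
obs = pd.read_csv("indiana_deaths.csv", parse_dates=["date"]).set_index("date")
fc = pd.read_csv("ihme_run_2020-09-11.csv", parse_dates=["date"]).set_index("date")

smoothed = obs["deaths"].rolling(window=3, center=True).mean()
comparison = pd.DataFrame({
    "observed_3day": smoothed,
    "forecast": fc["mean_deaths"],
})
comparison["error"] = comparison["forecast"] - comparison["observed_3day"]
print(comparison.dropna().tail())
```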
At any rate, IHME seems to update about once a week. That graph would get messy pretty quickly, so my plan is to keep the four most recent model runs plus the first run from each prior month, just to get a feel for how much the model forecasts are improving. I haven’t gone back to add historical model runs beyond the few I’ve currently added. I may end up doing that at some point, but probably not. I’m not particularly interested in whether a model from April correctly predicted December; I care whether last week’s forecast looks like it has a good handle on things.
Observations
Indiana’s daily death rate has been remarkably consistent over time. With the exception of a bump in early August, we’ve averaged around 9 deaths per day since late June. This is better than the rapid increases we saw in April, when the daily increases were twice what the daily totals are now. But considering that early IHME model runs had the rate going to zero in May (if I recall correctly), 10 a day is pretty disheartening.
Hospitals and local officials are a little slow to report deaths. It’s not uncommon for a day’s count to double from the initial report over the following days. It’s gotten to the point where I generally don’t enter a day’s deaths until the next day, so as not to skew the end of the graph.
The week-over-week differences in new cases are surprisingly volatile. As recently as a few days ago, the one-week comparison swung from +359 on 14 September to -91 on 15 September. The two-week comparison went from -376 on 9 September to +445 on 10 September. Just looking at the graph, the volatility seems to have worsened over time.
The future
I try to update the spreadsheet every day, generally in the early afternoon, since the state dashboard updates at noon. At the moment, I don’t have any plans to make significant changes to what I track or how I graph it; if I do, I’ll post here. I’ve briefly considered writing some tooling to parse, graph, and plot all of the input data, but the spreadsheet works well enough for now. I have plenty of other things to occupy my time.