I’m not going to sit here and try to come up with new ways to say “this is bad”. Not a lot has changed since the last update: the numbers are all chugging along on trend. There is a change to my dashboard, however.
I realized today that I had been pulling the wrong data from the IHME models. I had been entering the “best case” scenario that includes universal mask wearing and the like. What I should have been pulling from was the reference model. This results in higher predicted values.
The overall impact isn’t that great. The scenarios don’t really diverge for a while, so for the most part the model error graph is unchanged. The future, particularly late November and into December, is where you notice a difference. The recent models still under-predict the deaths, but the general trend matches well.
IHME said in their latest update that they didn’t make many changes to the model for the latest run. It’s essentially the same as last week but with more recent data. Unsurprisingly, it’s pretty close to the previous run. Both of those have a lower peak than forecasts from September, but still 50% higher than the spring peak.
It’s worth noting that IHME’s model assumes that states will re-introduce restrictions at a certain point. I’m having a hard time seeing that happening in Indiana—at least not as quickly as IHME’s assumption would have it. I wonder how much of Governor Holcomb’s refusal to even entertain the idea of moving back a phase or several has to do with the election in a week and a half. After the election, he’ll either be a lame duck or he’ll be into his last term (sort of). That takes away much of the political risk.
I updated my Indiana COVID-19 dashboard with the latest numbers. It continues to look bad. Hospitalizations are up 15% in the past week. The new daily case record set yesterday is 30% higher than the record set a week ago. We had two days in the last four with 20+ deaths (and bear in mind that the recent daily numbers tend to rise rather significantly in the days that follow).
Most alarming is the latest forecast from the Institute for Health Metrics and Evaluation (IHME). Their 10/15 model forecast is now on the dashboard, and it continues to show a big upswing in fatalities through November and December. The models have been pretty consistent with underpredicting the death count lately, so the big increase in the last two runs is extra worrisome.
In order to get a better sense of the past and possible future, I plotted the observed deaths with each of the model runs I have in the spreadsheet.
While the early September runs were a little hot, none of them really captured the increase we’ve seen over the past few weeks. The last two runs (10/9 and 10/15) are the first two to fully consider the move to Stage 5, I believe. And it’s clear that the forecast is not looking kindly on that.
Indeed, the Governor’s move to Stage 5 looks worse and worse with each passing day. The state health commissioner announced earlier this week that she and her family tested positive.
I just updated my Indiana COVID-19 dashboard with today’s numbers. They are not pretty. The state set a record for new cases for the third consecutive day. Today’s increase was “only” 6.5%, which is an improvement on the 24% increase that yesterday’s new record represented.
Positive tests or positive individuals?
The number of tests administered is on the rise, but we’re testing far fewer individuals. In fact, we’re testing about 33% fewer people a day than we did at the peak in late August. That the state is focusing on the total positivity rate (5.2% over the last 7 days) as opposed to the rate of positive individuals (9.3% over the last 7 days) strikes me as deceptive.
I attribute the disproportionate increase in tests (compared to people tested) to school systems, at least in part. I know of teachers who have had to take several COVID-19 tests in the past two months in order to return to work after any illness that shares a symptom with COVID-19. While I applaud the schools for taking this seriously, it does lead to some misleading numbers.
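To make the gap between the two rates concrete, here’s a minimal sketch with made-up testing volumes (only the 5.2% and 9.3% rates come from the state’s dashboard). When the same people are tested repeatedly, the all-tests denominator grows while the count of newly positive individuals does not, so the all-tests rate understates how common positives are among the people actually tested.

```python
# Hypothetical week of testing. Repeat tests (e.g. teachers re-testing after
# every cold) inflate the all-tests denominator without adding new positives.
total_tests = 20000
positive_tests = 1040     # 5.2% of all tests administered
unique_people = 10000
positive_people = 930     # 9.3% of distinct people tested

all_tests_rate = positive_tests / total_tests    # the rate the state reports
unique_rate = positive_people / unique_people    # the rate per person tested
```

Same underlying outbreak, but the headline number is a little more than half the per-person number.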
Deaths and hospitalizations
On Thursday, Indiana hit 20 daily COVID-19 deaths again. Most recently, this mark was tallied on September 26 (21 deaths). The last time before that was June 14th. We have not had a day with single-digit deaths since September 21. The only stretch longer than that is the 68 days from March 28 through June 5.
Hospitalizations are up dramatically as well, as I mentioned in the last update. The current levels haven’t been seen since late May. Hospitalizations yesterday were 42% higher than on September 9 and 30% higher than two weeks prior.
The Institute for Health Metrics and Evaluation (IHME) released a new model run late last night. I have added that to the dashboard as IHME 10/9 and hidden the IHME 9/11 lines for readability. A few days in, and this run seems to be over-estimating Indiana deaths so far. This is a welcome relief, since the last month’s worth of runs have been pretty consistently running too low. Given that deaths tend to be reported over the course of several days, the model may end up being more accurate after all. IHME has not published the updated briefing yet, so they may have more to say about the changes in this week’s run.
IHME’s forecast assumes that states will re-implement restrictions when conditions deteriorate to a certain point. Assuming that is accurate, we’re looking at restrictions coming back in mid-to-late November. Under that scenario, the forecast calls for a peak of 66 deaths per day in early December (with a range of 35-105). That would exceed our April peak by 32%.
However, given Governor Holcomb’s decision to move to Stage 5 in the face of materially unimproved circumstances, I don’t know if we can depend on that. If we do nothing, or further ease the few restrictions left, the model suggests we could be losing over a hundred Hoosiers a day in late December.
I haven’t made any changes to my Indiana COVID-19 plots since the last update, but I wanted to comment on some of the trends. The Governor announced a week ago today that the state would move to Stage 5 of our response. The reduced restrictions took effect on Saturday.
It’s far too early to draw any causal conclusions. Nonetheless, I find it interesting that fate seems to be saying “I’ll show you!” In the last six days, the trend in daily deaths is upward. Saturday, Sunday, and Monday have averaged an increase of 7 deaths over the prior week (although the two-week comparisons are much noisier). The week-over-week change in cases is riding a five-day positive run, the first stretch longer than three days since early August. Normally the new case count varies wildly in both directions, so a stable run like this is unusual.
The state’s dashboard hints at an upward trend in the positive test rate again. Hospitalizations are up 16% (135 patients) in the past week. This trend has continued fairly steadily for the past week and a half.
What concerns me most is the model verification. The last few weeks of IHME forecasts were initially running a bit high, but in the last few days, they’re now under-predicting the daily death counts. This could suggest that the bad scenario predicted for December will be worse than forecast. It also may not. This is a short window, so we’ll have to see how trends hold.
As I said at the beginning, these bad trends in Indiana’s data cannot be tied to the move to Stage 5. But it does suggest that it was a bad decision. As my friend Renee wrote today, it’s less that things have improved and more that we’ve just grown accustomed to things being bad.
Like many of you, COVID-19 has weighed heavily on me in 2020. Part of the weight is the uncertainty of it all. While we seem to have a reasonable knowledge now of how to minimize spread and avoid fatality (not that we necessarily are doing these things. Wear your damn masks, people), that was not the case in the beginning. And while I’m not a virologist or an epidemiologist, I find having a sense of the numbers helps my unease. So early on, I started keeping track of some basic stats for Indiana COVID-19 deaths in a Google spreadsheet. You can take a look at it now. Below, I explain some of the history and some observations.
At first, I tracked the deaths by day of report. This led to a noticeable pattern. Deaths dropped Sunday and Monday, since the previous day was a weekend. I assume hospitals were slower to report to the local health departments who were in turn slower to report to the states. To address this I also had a plot that ignored weekends. For both of these, I had a seven-day moving average to smooth out individual bumps in the data. This made it easier to spot trends.
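That smoothing step can be sketched in a few lines of Python. The counts below are made up for illustration, not pulled from the spreadsheet.

```python
# Hypothetical daily death counts, oldest first.
daily_deaths = [12, 9, 15, 8, 11, 14, 10, 13, 7, 16]

def moving_average(values, window=7):
    """Trailing moving average; the first value appears once a full
    window of data is available."""
    return [sum(values[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(values))]

smoothed = moving_average(daily_deaths)
```

A spreadsheet does the same thing with an `AVERAGE` over a sliding range; the point is just that each plotted value blends a full week, which washes out the weekend reporting dips.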
After a while, though, I realized that the deaths reported on any given day could represent deaths that occurred on many days. Realizing this, I cleared out the old data and went through each day on the Indiana COVID-19 dashboard. The state makes it easy to see when past days have new deaths added, so it’s easy to keep that up to date. I plotted the daily deaths on linear and log scales with 7-day moving averages. Those first two graphs have remained basically unchanged since.
It’s also worth noting that the state’s dashboard has improved dramatically since the early days. This includes a moving average for all of the reported metrics.
Even without relying on day-of-report for tracking deaths, there seemed to be a rough periodicity to the daily death counts. I won’t try to come up with an explanation. But it was clear that comparing day-to-day didn’t necessarily give an accurate picture. So I started tracking week-over-week and week-over-two-week death counts. This, I figured, gives a better picture of the trend. If the differences are consistently negative, that means we’re heading in the right direction. If the differences are consistently positive, that’s a bad sign.
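As a sketch, the week-over-week comparison amounts to subtracting the count from the same weekday one (or two) weeks earlier, which sidesteps the weekly periodicity. The numbers here are invented for illustration.

```python
# Hypothetical daily death counts, oldest first.
deaths = [10, 12, 9, 14, 11, 8, 13, 15, 9, 12, 16, 10, 11, 14, 18, 12]

def week_over_week(values, lag=7):
    """Difference between each day and the same weekday `lag` days earlier.
    Positive values mean things are getting worse."""
    return [values[i] - values[i - lag] for i in range(lag, len(values))]

wow = week_over_week(deaths)            # vs. one week prior
wo2w = week_over_week(deaths, lag=14)   # vs. two weeks prior
```

Comparing like weekdays means a consistently positive series is a real trend rather than a reporting artifact.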
After a while, I decided to start tracking cases in the same way. The state’s dashboard makes this more difficult. The graphs don’t indicate when dates have changed, although in daily checks I’ve routinely observed changes of 5-10 cases as far back as 2-3 weeks. The state does make data available via downloadable spreadsheet, so I’ve started using that instead. It’s just less convenient (especially on a weekend when I am sometimes doing it from my phone).
Most recently (as in the last two days), I’ve started tracking the Institute for Health Metrics and Evaluation’s (IHME) forecasts. I’d checked their website pretty regularly in the beginning, but now that we’ve reached a sort of terribleness equilibrium, I haven’t. But given the model trends that are suggesting a really terrible Christmas for a lot of people, I thought it would be worth paying attention to.
George E.P. Box said “all models are wrong, but some are useful”. You don’t earn a meteorology degree without learning this lesson. So in order to see how useful the models are, I’m comparing their forecast to the actual deaths.
This is where it gets pretty squishy. To de-noise the data a little bit, I’m comparing the models to a three-day average. I picked that because that’s what IHME says they use to smooth the observed data on their website. But their smoothed numbers don’t quite match mine, so I don’t really know.
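For what it’s worth, here’s roughly what that verification step looks like in Python. The numbers are made up, and whether the three-day window should be centered or trailing is my assumption, since IHME doesn’t spell it out.

```python
# Hypothetical observed deaths and a model's forecast for the same days.
observed = [10, 14, 9, 12, 16, 11, 13]
forecast = [11, 12, 12, 13, 13, 14, 14]

def smooth3(values):
    """Centered three-day average (my guess at IHME's smoothing),
    which drops the first and last day."""
    return [sum(values[i - 1 : i + 2]) / 3 for i in range(1, len(values) - 1)]

# Model error: forecast minus smoothed observation.
# Positive = the model ran hot; negative = it under-predicted.
errors = [f - o for f, o in zip(forecast[1:-1], smooth3(observed))]
```

The model error graph on the dashboard is essentially this series, one line per model run.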
At any rate, IHME seems to update about once every week or so. That graph would get messy pretty quickly. My plan is to keep the four most recent model runs and the first run in prior months just to get a feel for how much the model forecasts are improving. I haven’t gone back to add historical model runs beyond the few I’ve currently added. I may end up doing that at some point, but probably not. I’m not particularly interested in whether or not a model from April correctly predicted December. I care if last week’s forecast looks like it has a good handle on things.
Indiana’s daily death rate has been remarkably consistent over time. With the exception of a bump in early August, we’ve averaged around 9 deaths per day since late June. This is better than the quick increases we saw in April, when the daily increases alone were twice what the daily totals are now. But considering that early IHME model runs had the rate going to zero in May (if I recall correctly), 10 a day is pretty disheartening.
Hospitals and local officials are a little slow to report deaths. It’s not uncommon for a day’s count to double from the initial report in the days following. It’s gotten to the point where I generally don’t enter a day’s deaths until the next day in order to not skew the end of the graph.
The week-over-week differences in new cases are surprisingly volatile. As recently as a few days ago, there was a swing from +359 on 14 September to -91 on 15 September in the one-week comparisons. The two-week comparison went from -376 on 9 September to +445 on 10 September. Just looking at the graph, the volatility has seemingly worsened over time.
I try to update the spreadsheet every day. Generally in the early afternoon, as the state dashboard updates at noon. At the moment, I don’t have any plans to make significant changes to what I track or how I graph it. If I do, I’ll post here. I have briefly considered writing some tooling to graph, parse, and plot all of the input data, but the spreadsheet works well enough for now. I have plenty of other things to occupy my time.
Many news outlets, including my local paper, recently carried an AP story about a report issued by The Education Trust. In the report, we learn that one out of every four people who take the U.S. military’s entrance exam fail. The report and article use these findings to indict the education system in the United States. Unfortunately, it is more of an indictment of the authors. While the Armed Services Vocational Aptitude Battery (ASVAB) is required at “hundreds” of high schools, it is by no means ubiquitous. The sample, then, is not random, but largely self-selecting. Consider also that the ASVAB is not required for officer-track students (i.e. service academies and ROTC), but only for enlisted personnel. When I first read the article, I immediately realized that the conclusion wasn’t justified.
It wasn’t until I did some further research that I realized exactly how wrong the authors were. As it turns out, ASVAB scores are given as percentiles. In other words, to get into the Army, you need not get 31% of the questions correct, you need to score better than 31% of the other test takers. This means that the military automatically rejects the lowest scores, no matter how good or bad they may be on an absolute scale. The military grants waivers for low scores in certain situations, which is why only a quarter of test takers fail.
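A toy example makes the percentile point clear. This is not the actual ASVAB/AFQT scoring algorithm, just an illustration that a percentile cutoff fails a roughly fixed share of test takers no matter how well everyone does in absolute terms.

```python
# Made-up raw scores for a small group of test takers.
raw_scores = [55, 72, 48, 90, 63, 81, 39, 67, 74, 58]

def percentile(score, population):
    """Percent of the population scoring strictly below `score`."""
    below = sum(1 for s in population if s < score)
    return 100 * below / len(population)

cutoff = 31  # Army enlistment requires roughly the 31st percentile
passers = [s for s in raw_scores if percentile(s, raw_scores) >= cutoff]
```

Double every raw score and the exact same people pass and fail, because the scale is relative. That is the property the report’s authors missed.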
So the news here is that 25% of students fail an exam designed for them to fail. In other news, water is wet. On second thought, maybe this is an indictment of the education system, but not in the way suggested. An elementary understanding of statistics immediately calls into question the credibility of the study. One paragraph of a Wikipedia article ruins the starting point of the article. The education system may have flaws, but the only flaws exposed by this article are the lack of statistical understanding and simple research ability possessed by The Education Trust and AP writers Christine Armario and Dorie Turner.
Disclaimer: In the year or more since this blog started, I’ve made a concerted effort to avoid political discussion. I have political opinions, some of them rather strong, but there are plenty of other places on the Internet where one can find barely-knowledgeable idiots ranting on about politics. I’ve got other things that I’d rather talk about with my ones of readers. With that in mind, today’s post isn’t intended to be a discussion of the political aspects of school policy, but just a look at what I consider to be interesting numbers. You can draw whatever conclusions you like from it.
I am a member of the local newspaper’s community advisory board. Once a month, the self-selected group sits down with the Executive Editor and the Managing Editor and we discuss various topics that help keep the newsroom connected with the community. A few months ago, as the state legislature was negotiating the budget this year, the topic turned to education. I knew that anything “for the childrens!” was likely to involve emotion and drama from all sides of the argument. Arming myself with factual information would not only help me discuss the matter logically, but would give me enough to decide what my opinion even was.
What I did was not a rigorous analysis; it took me only an hour and involved just a few bits of data. Using the state’s website, I found various statistics on public school districts in Lafayette and the surrounding areas. The first step was defining success. Success needs to be quantifiable to be useful, but for some reason, the state does not have a metric labeled “success.” As proxies for the elusive “success” number, I used graduation rate, the percentage of graduates who go to college, and the pass rate for the ISTEP+ exam.
For the contributors to success, I tried to anticipate what would be commonly argued. Since cutting school funding is a political sin, I looked at the dollars spent per student. The teacher-to-student ratio is often used to indicate the quality of a particular school, so that data was added in. Conservatives may argue that the school systems are over-burdened with administrators, so I looked at the administrator-to-student ratio. Liberals might suggest that poor and minority students are set up to fail, so I took a look at the percentage of minority enrollment and used the percentage of students receiving free or reduced lunches as a proxy for income. Having forgotten most of what I learned in my “Elementary Statistical Methods” class, I couldn’t do any impressive analysis. What I did instead was to plot each factor against each measure of success.
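For anyone curious what plotting each factor against each measure boils down to numerically, here’s a minimal R-squared sketch. The district numbers below are invented; the real values came from the state’s site, and `r_squared` is just a hypothetical helper for a simple linear fit.

```python
# Invented numbers for five districts, for illustration only.
spending = [8100, 9200, 10500, 11800, 12500]   # dollars per student
grad_rate = [88.0, 91.5, 86.2, 90.1, 87.4]     # percent graduating

def r_squared(x, y):
    """Coefficient of determination for a simple linear fit of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

r2 = r_squared(spending, grad_rate)  # near zero means no real relationship
```

An R-squared near 1 means the factor explains most of the variation in the success metric; near 0 means it explains essentially none.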
Dollars per student
For the school districts I examined, total spending per student ranged from $8,100 to $12,500. It is interesting to note that there was statistically no effect of spending on the graduation rate or the ISTEP+ pass rate. Spending and college enrollment rate were weakly related, but the relationship was negative. That is to say that the more money spent per student, the smaller the percentage who went on to college. It is important to note, of course, that correlation does not imply causation. From the data, we cannot tell if spending more per student is likely to decrease those going to college, or if fewer students going to college means a district gets more funding to try to improve that metric. Either way, you can’t tell how successful a school is by how much money it spends per student.
Teachers per student
The argument often put forth is that small class sizes lead to more individual attention, which allows each child to learn better. That makes sense. From my friends and relatives in education, I can say with confidence that larger class sizes hasten teacher frustration. However, the data suggests that the educational success of a school district is improved by having fewer teachers per student. Once again, two of the three pairs were meaningless — ISTEP+ and graduation were not statistically linked to the number of teachers. The college rate did show a very weak relationship, but as the number of teachers increased, the percentage of students going to college dropped. This makes sense in light of the spending, since having more teachers results in a higher cost.
Administrators per student
An increased number of administrators also brings a higher cost, but with arguably less benefit. The numbers show no benefit, at least as far as our “success” metrics are concerned, to having more administrators per student. No doubt there are arguments both for and against a higher administrator-to-student ratio, and either approach might produce successful outcomes of other kinds, just not the ones measured here.
Minority enrollment

It was not clear how the Indiana Department of Education defines “minority,” which makes drawing conclusions from the data a bit more difficult. Fortunately, for ISTEP+ and college attendance, there is no statistically significant relationship, so there are no conclusions to draw. There is a weak relationship suggesting that as minority enrollment increases, the percentage of students graduating high school decreases.
Family income

I saved family income for last, because it alone had truly meaningful results. As I said earlier, income data for each school district was not readily available. Instead, I had to use the percentage of students on free or reduced-price lunch assistance as a proxy. The higher the percentage, the poorer the district. The range for this metric is from 14% (West Lafayette) to 66% (Lafayette and Frankfort). It is interesting to note that Lafayette and Frankfort schools also have the highest percentage of minorities. There’s only a weak relationship indicating poorer students are less likely to go to college, perhaps in part because of Indiana’s 21st Century Scholars program. However, there’s a moderately strong relationship to suggest that wealthier students are more likely to pass ISTEP+ and to graduate high school.
So what is the secret to a successful school? Don’t have poor students. As I said above, this is not a rigorous analysis, but it is notable that our income proxy is the only factor that affected the success metrics picked. I won’t speculate on an explanation. Here are some R-squared values for those who are stat geeks: