Is it time to replace the Saffir-Simpson scale?

Short answer: yes. Long answer: I’ll let Cliff Mass explain it. But as the 2018 Atlantic hurricane season draws to a close today, I’m more convinced than ever that the Saffir-Simpson scale does us no good.

The categories simply don’t mean much to the average person. The scale is based solely on sustained wind speed, which is only one part of a hurricane’s destructive power, and perhaps not even the most important part. Storm surge, rainfall, and wind gusts are all significant contributors to the harm hurricanes cause. Of course, coastal conditions, population density, and building quality factor into the ultimate impact, too. Particularly inland, a slow-moving but weaker storm can cause more damage (due to flooding) than a stronger storm that spends less time over the area.

Ultimately, as I’ve written in the past, it’s not the meteorology the public cares about. They want to know what the impact will be and what they should do about it. This means de-emphasizing wind speeds and focusing more on impacts. To their credit, NOAA agencies have put more emphasis on impacts in the last few years, but the weather industry as a whole needs to do a better job of embracing that shift. It requires a cultural change in the public, too, which may take a generation to settle in.

But there’s no time like the present to start preparing for that day. And maybe it’s time to drop the distinction between tropical storm and hurricane watches and warnings, too.

“We had no warning!” gets old after a while

“We had no warning” may be the most uttered phrase in weatherdom. It seems like every time there is a significant weather event, that sentence pops up. It’s sometimes true, but often it’s true-ish. It gets trotted out most often for severe thunderstorms (particularly tornadoes) and frequently translates to “I wasn’t paying attention to the warning”.

On Wednesday, the Washington Post’s Capital Weather Gang ran a story about the city manager of Adak, Alaska crying “no warning!” after an intense cyclone brought winds in excess of 120 miles per hour. This is a case of true-ish at best. More than 24 hours before winds reached warning criteria, the National Weather Service Forecast Office in Anchorage issued a high wind watch mentioning gusts up to 85 MPH. Sixteen hours before winds reached warning criteria, a high wind warning was issued and the gust forecast was increased to 95 MPH.

The actual wind speed topped out above 120 MPH, as I said, and the onset was about three hours earlier than forecast. They didn’t nail it, but it’s not exactly “not nearly close to being anywhere accurate.” The difference between 95 and 120 MPH is nothing to scoff at (recall that the kinetic energy increases with the square of speed), but I’m not sure there’s much more you could do to protect your house from 120 MPH winds that you wouldn’t do for 95 MPH.
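If you want a feel for how big that gap really is, here’s a quick back-of-the-envelope calculation. It's just an illustration using the numbers above, not anything from the forecast discussion itself:

```python
# Wind kinetic energy (and wind load) scales with the square of speed,
# so the ratio of energies depends only on the ratio of speeds.
forecast_mph = 95   # gust forecast in the high wind warning
observed_mph = 120  # roughly what Adak actually saw

ratio = (observed_mph / forecast_mph) ** 2
print(f"A {observed_mph} MPH wind carries about {ratio:.1f}x "
      f"the kinetic energy of a {forecast_mph} MPH wind")
# -> about 1.6x
```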

I like the Meteorologist in Charge’s reaction. It was effectively “what more do you want?” The warning was out, so if the city manager truly didn’t get it, the question is “Why not?” It’s important for the NWS to make sure that the products they issue are well disseminated and well understood. It’s also important for public officials to have a way to receive them. “We had no warning” should be reserved for cases where it’s more than just true-ish.

Tornado warning false alarm rates

FiveThirtyEight recently ran a post about the false alarm rate of tornado warnings. Tornado warnings fail to verify (i.e., no tornado occurs) approximately 75% of the time, a number that has held steady for years. This comes as no surprise to meteorologists, and probably not to the general public either. What’s disappointing about the article is that it doesn’t address the reason the false alarm rate hasn’t improved: it’s not a priority.
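For anyone who wants the bookkeeping behind that 75% figure, the standard verification math is simple. Here’s a minimal sketch in Python; the counts are made up purely for illustration and aren’t from the article:

```python
# Contingency-table counts for tornado warnings (illustrative only):
hits = 25          # warned storms that produced a tornado
false_alarms = 75  # warned storms that did not
misses = 5         # tornadoes with no warning in effect

# False alarm ratio: the fraction of warnings that never verify.
far = false_alarms / (hits + false_alarms)

# Probability of detection: the fraction of tornadoes that were warned.
pod = hits / (hits + misses)

print(f"FAR = {far:.0%}, POD = {pod:.0%}")  # FAR = 75%, POD = 83%
```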

The ideal case, of course, is a false alarm rate of zero. While the article quotes the reasoning (“you would rather have a warning out there and have it miss than have an event and not have one out there”), it doesn’t explain why that reasoning leads to a high false alarm rate.

The first reason is that an emphasis on maximizing detection means that in questionable scenarios, forecasters will lean toward issuing a warning rather than holding off. I’ve been in an office when an unwarned tornado was reported. The forecasters are not happy about that. They take the National Weather Service mission of protecting life and property seriously. The potential loss of life from a missed event outweighs the impact of a false alarm (inconvenience and lost productivity).

After I inadvertently published this post when I meant to save the draft, a friend commented that the “ideal FAR is actually non-zero if you want lead time.” This leads to the second reason an emphasis on detection increases the false alarm rate. Issuing a tornado warning seconds before the tornado hits is of limited utility. People in the warned area need time to move to safety. The article does point out that lead time has increased steadily over the past few decades. But the more lead time you want, the earlier you have to issue the warning, and the more likely it is that a warned storm never produces a tornado. Tornadoes are exceptional events.

There’s a balance between detection rate and lead time on one side and false alarm rate on the other. Like a seesaw, pushing down on one side raises the other: the harder you push for detection and lead time, the higher the false alarm rate climbs. Prudent policy focuses first on detection and then on lead time, so the false alarm rate has to suffer. Improvements in technology and science will hopefully move the fulcrum so that we can lower the false alarm rate without giving up too much lead time or probability of detection.
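To make the seesaw concrete, here’s a toy example; the storm probabilities and outcomes are entirely made up. If a forecaster warns whenever the chance of a tornado crosses some threshold, lowering that threshold buys detection at the cost of more false alarms:

```python
# Toy model of the detection / false-alarm trade-off.
# Each pair is (estimated chance a storm produces a tornado, whether it did).
storms = [
    (0.9, True), (0.8, True), (0.7, False), (0.6, True), (0.5, False),
    (0.4, False), (0.3, True), (0.2, False), (0.1, False), (0.05, False),
]

def verify(threshold):
    """Warn on every storm at or above the threshold, then score the results."""
    hits = sum(1 for p, tor in storms if p >= threshold and tor)
    false_alarms = sum(1 for p, tor in storms if p >= threshold and not tor)
    misses = sum(1 for p, tor in storms if p < threshold and tor)
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return pod, far

for threshold in (0.8, 0.5, 0.3):
    pod, far = verify(threshold)
    print(f"warn at {threshold:.0%} confidence: POD = {pod:.0%}, FAR = {far:.0%}")

# Lowering the threshold raises detection (and, in practice, lead time),
# but the false alarm rate climbs right along with it.
```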