Forecast Discussion Hall of Fame featured in The Atlantic

On Friday, The Atlantic published an article about National Weather Service forecast discussions and why they are…the way they are. The article prominently featured several entries in the Forecast Discussion Hall of Fame and mentioned yours truly by name. After years of carefully curating the best forecast discussions, my hard work is finally paying off. Time to quit my job and bask in the glory!

Okay, so maybe not. It’s a pretty cool thing to happen, though. If this blog has gained any new followers thanks to that article, welcome!

While snowfall records were falling over the weekend, FunnelFiasco records were falling, too. I took a look at the site stats for weather.funnelfiasco.com: as of Saturday evening, just the weather subdomain had nearly 14,000 hits from about 2,700 unique visitors in January, almost all on Friday and Saturday. That’s over six months’ worth of traffic for the subdomain, and about half a month’s worth for all of FunnelFiasco.

January traffic by day for weather.funnelfiasco.com through the evening of January 23.

Let’s look at some meaningless statistics. The two largest hosts were both .noaa.gov addresses, which doesn’t surprise me; I have to figure the article stirred up some interest in the halls of the National Weather Service. A caltech.edu address was 18th, which does surprise me. I guess my Purdue friends don’t read The Atlantic. The leading operating system was Windows, with iOS, Linux, and OS X following. iOS accounted for 23% of January weather.funnelfiasco.com traffic; it’s normally 1.9% of total funnelfiasco.com traffic.

mPING and charging for free labor

In 2012, NOAA’s National Severe Storms Laboratory (NSSL) partnered with the Cooperative Institute for Mesoscale Meteorological Studies (CIMMS) at the University of Oklahoma to collect crowdsourced precipitation type data. The “meteorological Phenomena Identification Near the Ground” project (almost always referred to as “mPING”) allows smartphone users to easily and anonymously report precipitation type.

This information can be very valuable to operational forecasters (it’s often hard to tell whether rain or snow is falling at a particular location unless someone tells you) and to researchers working to improve radar algorithms. In the slightly more than three years since mPING launched, nearly a million reports have been received, which suggests the public (or at least the members of the public who know about it) finds it worthwhile to contribute, too.

Which brings us to last week’s announcement that access to the data is no longer free. Apparently the discretionary funding from the NSSL has expired, so it’s moving to OU-funded infrastructure. This means that the University will try to get what money it can for the data. A variety of licenses are available, depending on what level of access is desired.

API access to submit reports will remain free, so it will not cost anyone to contribute a report. But instead of volunteering effort (however minimal) to a public project, mPING reporters are now effectively unpaid labor for the University of Oklahoma. Sources within NOAA tell me that the forecast offices and national centers will continue to receive free access to real-time data, which is good, but not the point. OU thinks this data has value, so why should people provide it for free?
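
To be clear about what stays free: submitting a report is just a small HTTP request. Here’s a rough sketch of what one could look like in Python; the endpoint, token scheme, and field names are hypothetical placeholders, not mPING’s actual API:

    import requests  # third-party HTTP library

    # Hypothetical payload; the real field names may differ.
    report = {
        "obtime": "2016-01-25T18:30:00Z",  # observation time, UTC
        "description": "Snow",             # precipitation type
        "latitude": 40.42,
        "longitude": -86.91,
    }

    resp = requests.post(
        "https://example.org/api/v1/reports",  # placeholder endpoint
        json=report,
        headers={"Authorization": "Token YOUR-API-TOKEN"},
    )
    resp.raise_for_status()  # fail loudly if the report was rejected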

I actually wonder whether it has monetary value. It certainly has utility, but I don’t see many places being willing to pay for it. One well-known TV meteorologist has already said he will stop using mPING data on air because purchasing a license is not an efficient use of his limited financial resources. Highway departments may be the most likely to find a license worthwhile, since near-real-time precipitation type information could prove very useful for dispatching salt trucks and plows. Still, the general effect seems likely to put many people off reporting, diminishing the value of what may be an unsellable product in the first place.

I get that baby’s gotta eat. I spent years working at a large research university, including supporting systems that distributed meteorological data. I understand the reality, but that doesn’t mean I have to like it. I’m pessimistic about what these changes mean for mPING, and about the poor example they set for citizen science generally. I hope I’m wrong.

Rating snow storms

It may be January, but with the relatively warm December we had, I’m not ready to start thinking about snow (spoiler alert: I’m never ready to think about snow). But snow is bound to happen at some point, and the Weather Channel will be sure to name the storm. Humans like to assign numbers to things. We have ratings for tornadoes, we have ratings for hurricanes, but we don’t have ratings for snow storms. Or do we?

Paul Kocin and Louis Uccellini developed the Northeast Snowfall Impact Scale (NESIS) in 2004. NESIS considers snow depth, areal coverage, and the affected population. That last input makes it nearly unique among meteorological metrics. Meteorological phenomena are often considered without regard for population (for example, the National Weather Service will issue a tornado warning even if no one lives in the affected area). Tornadoes are rated based on damage (not wind speed!), which serves as a rough proxy for population, but not an exact one.
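
As I understand the published formula, a storm’s footprint area and affected population at each of several snowfall thresholds are normalized by sample means and summed. Here’s a minimal sketch in Python; the threshold values follow the paper as I recall them, and the normalization constants are left as inputs rather than the published values:

    # Snowfall thresholds (inches) summed over in the scale.
    THRESHOLDS = (4, 10, 20, 30)

    def nesis_score(area, population, mean_area, mean_population):
        """area and population are dicts keyed by threshold (inches),
        giving the area and population covered by at least that much
        snow; the means are the historical normalizers."""
        score = 0.0
        for t in THRESHOLDS:
            score += area[t] / mean_area
            score += population[t] / mean_population
        return score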

NESIS doesn’t seem to be widely used, and it’s almost certainly unknown outside of the weather community. Maybe that’s because we don’t see snow storms as disasters the way we do tornadoes and hurricanes? It would be nice to see it catch on, though.

“We had no warning!” gets old after a while

“We had no warning” may be the most uttered phrase in weatherdom. It seems like every time there is a significant weather event, that sentence pops up. It’s sometimes true, but often it’s true-ish. It gets trotted out most often for severe thunderstorms (particularly tornadoes) and frequently translates to “I wasn’t paying attention to the warning”.

On Wednesday, the Washington Post’s Capital Weather Gang ran a story about the city manager of Adak, Alaska crying “no warning!” after an intense cyclone brought winds in excess of 120 miles per hour. This is a case of true-ish at best. More than 24 hours before winds reached warning criteria, the National Weather Service Forecast Office in Anchorage issued a high wind watch mentioning wind gusts up to 85 MPH. Sixteen hours before the winds reached warning criteria, a high wind warning was issued and the gust forecast was increased to 95 MPH.

The actual wind speed topped out above 120 MPH, as I said, and the onset was about three hours earlier than forecast. They didn’t nail it, but it’s not exactly “not nearly close to being anywhere accurate.” The difference between 95 and 120 MPH is nothing to scoff at (recall that the kinetic energy increases with the square of speed), but I’m not sure there’s much more you could do to protect your house from 120 MPH winds that you wouldn’t do for 95 MPH.
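
To put a number on that: kinetic energy scales with the square of the speed, so (120/95)^2 ≈ 1.6, meaning the observed gusts carried roughly 60 percent more energy than the forecast 95 MPH gusts would have.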

I like the Meteorologist in Charge’s reaction. It was effectively “what more do you want?” The warning was out, so if the city manager truly didn’t get it, the question is “Why not?” It’s important for the NWS to make sure that the products they issue are well disseminated and well understood. It’s also important for public officials to have a way to receive them. “We had no warning” should be reserved for cases where it’s more than just true-ish.

Reporting severe weather via social media

It feels weird writing a post about severe weather in mid-December, but here we are. Over the weekend, storm chaser Dick McGowan tried to report a tornado to the NWS office in Amarillo, Texas. His report was dismissed with “There is no storm where you are located. This is NOT a valid report.” The only problem was that there was a tornado.

Weather Twitter was awash in discussion of the exchange on Saturday night. A lot of it was critical, but some was cautionary. The latter is where I want to focus. If you follow me on Twitter, it will not surprise you to hear that I’m a big fan of social media. And I think it’s been beneficial to severe weather operations. Not only does it make public reporting easier, but it allows forecasters to directly reach the public with visually-rich information in a way not previously possible.

But social media has limitations, too. Facebook’s algorithms make it nearly useless for disseminating time-sensitive information (e.g. warnings), and the selective filtering means that a large portion of the audience won’t get the message anyway. Twitter is much better for real-time posting, but is severely constrained by the 140-character limit. In both cases, NWS meteorologists are experts on weather, not social media (though there are efforts to improve social media training for forecasters), and there’s not necessarily going to be someone keeping a close eye on incoming social media.

I don’t know all of the details of Saturday night’s event. From one picture I saw, the storm in question looked pretty weak on radar. There were also several possible places Dick could have been looking, and it doesn’t appear he made clear which direction that was. At the root, this is a failure to communicate.

As I said above, I’m a big fan of social media. If I need to get in touch with someone, social media is my first choice. I frequently make low-priority weather reports to the NWS via Twitter. For high-priority reports (basically anything that meets severe criteria or that presents an immediate threat to life), I still prefer to make a phone call. Phone calls are less parallelizable, but they’re lower-latency and higher-bandwidth than Tweets. The ability for a forecaster to ask for a clarification and get an answer quickly is critical.

If you do make a severe weather report via Twitter, I strongly encourage enabling location on the Tweet. An accurate location can make a big difference. As with all miscommunications, we must strive to be clear in how we talk to others, particularly in textual form.

Hot take: The case against astronomical seasons

Let’s start by coming to an agreement: seasons, as we know them, are complete fabrications. They’re an attempt by our brains to neatly package broad, variable trends in weather conditions. Any attempt to define distinct boundaries is a futile effort.

So now that we agree that this is all a waste of time, let me tell you why the way we traditionally demarcate seasons is wrong. Traditionally, seasons are based on the length of the day. As you learned in elementary school science class, the earth’s axis is tilted with respect to the plane of our orbit. The days get shorter and colder when we are tilted away from the sun (in the northern hemisphere, that happens to be when we’re closer to the sun).

The shortest day of the year is the day of the winter solstice (this year, the winter solstice occurs at 0448 UTC on December 22, or just before midnight on the 21st in the Eastern time zone). For the next six months, the days get progressively longer. In June, the summer solstice occurs and the days start getting shorter again.
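
If you want to play with the geometry yourself, the standard sunrise equation gives the geometric day length from latitude and the sun’s declination. A quick Python sketch (ignoring atmospheric refraction, which adds a few minutes):

    import math

    def day_length_hours(latitude_deg, declination_deg):
        """Day length from the sunrise equation:
        cos(H) = -tan(latitude) * tan(declination), where H is the
        hour angle of sunrise; day length is 2H at 15 degrees/hour."""
        lat = math.radians(latitude_deg)
        dec = math.radians(declination_deg)
        cos_h = -math.tan(lat) * math.tan(dec)
        if cos_h >= 1.0:
            return 0.0   # polar night: the sun never rises
        if cos_h <= -1.0:
            return 24.0  # midnight sun: the sun never sets
        return 2.0 * math.degrees(math.acos(cos_h)) / 15.0

    # At 40N on the winter solstice (declination about -23.44 degrees):
    print(day_length_hours(40.0, -23.44))  # roughly 9.2 hours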

The winter solstice in particular has long been an event of spiritual meaning. Ancient cultures celebrated the return of the sun. Many Christmas traditions (and perhaps the date itself) come from Roman and other pagan celebrations.

“But what does this have to do with weather?” you ask. Absolutely nothing, which is my point. The change in insolation (incoming solar radiation) is the driving force behind the weather seasons we experience, but it’s well out of phase with the astronomical boundaries. The shortest day may occur in December, but the coldest days tend to be in January or February. Similarly, the hottest days are in late July and August, even though the longest day is in June.

Since the late eighteenth century, meteorologists have used the first of December as the start of winter (the first of March is the beginning of spring, the first of June is the beginning of summer, and the first of September is the beginning of fall). This is considerably more convenient for calculations and the fuzziness doesn’t matter because seasons are a lie anyway.
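
The convenience is easy to see: mapping a date to its meteorological season is a trivial lookup, while the astronomical version requires knowing the solstice and equinox times for each year. In Python, for instance:

    def met_season(month):
        """Meteorological season (northern hemisphere) for a month number."""
        if month in (12, 1, 2):
            return "winter"
        if month in (3, 4, 5):
            return "spring"
        if month in (6, 7, 8):
            return "summer"
        return "fall"  # September, October, November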

Over the years, I’ve come to favor the meteorological definition of the seasons more and more. At least it’s based on the weather. I’ll leave the solstices and equinoxes to the astronomers.

Why the Sunshine app won’t change weather prediction

With $2 million in funding behind it, the Sunshine app hit the iOS App Store on Wednesday. Sunshine promises to disrupt weather forecasting by using crowd-sourced data and providing custom point forecasts. Sadly, that promise will fall flat.

First, I’m pretty wary of weather companies that don’t have a meteorologist on staff. If Sunshine has one, they’re doing a good job of hiding that fact. It’s not that amateurs can’t be good forecasters, but the atmosphere is more complicated than it is often given credit for. The Sunshine team seems to know just enough to say things that sound reasonable but aren’t really. For example, this quote from CEO Katerina Stroponiati:

The more users we have, with phones offering up sensor data and users submitting weather reports, the more accurate we will get. Like an almanac.

Except that almanacs aren’t accurate. Then there’s this quote from their first Medium post:

The reason weather forecasts are inaccurate and imprecise is because traditional weather companies use satellites that can only see the big picture while weather stations are few and far between.

That’s fairly accurate (though it oversimplifies), but they point to a particularly noteworthy busted blizzard forecast as an example of the inaccuracy of traditional forecasts. Snowfall can be impacted greatly by small differences, but blizzards are fairly large-scale systems, and I’m skeptical that Sunshine would have done any better, especially considering that it has no “experience” outside of the Bay Area.

It sounds like Sunshine’s approach is basically a statistical model. That is a valid and often valuable forecast tool, but it has its limits. Sunshine claims a 36% improvement over “weather incumbents” in its trial period (where’s the published study?), but that involved only 200 testers in the San Francisco area. While definite microclimates exist in that region, it’s not exactly known for wild changes in weather. I doubt such an improvement could be sustained across a wider area.

Sunshine relies on crowdsourced reports and the pressure sensor in new iPhones to collect data. Unlike many other parameters, reasonably accurate pressure measurements are not sensitive to placement. A large, dense network of pressure sensors would be of considerable benefit to forecasters, provided the data is made available. However, wind, temperature, and humidity measurements — both at the surface and aloft — are important as well. This is particularly true for severe weather events.

Crowdsourcing weather observations is nothing new. Projects like CoCoRaHS and mPING have been collecting weather data from the general public for years. The Weather Underground app has crowdsourced observations, and Weather Underground — along with other sites like WeatherBug — has developed a network of privately-owned weather observation stations across the country. The challenge, as it will be with Sunshine’s reports, lies in quality assurance and making the data available to the numerical weather prediction models.
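
To give a flavor of the quality-assurance problem, here’s a minimal sketch of one common approach: a gross range check plus a comparison against nearby reports. The thresholds are illustrative, not from any operational system, and a real implementation would first reduce each reading to sea level pressure before comparing neighbors:

    from statistics import median

    def passes_qc(pressure_hpa, neighbors_hpa,
                  lo=870.0, hi=1085.0, max_dev=5.0):
        """Accept a crowdsourced pressure report only if it is
        physically plausible and agrees with nearby reports."""
        if not lo <= pressure_hpa <= hi:
            return False  # outside roughly the recorded global extremes
        if neighbors_hpa and abs(pressure_hpa - median(neighbors_hpa)) > max_dev:
            return False  # disagrees with the local consensus
        return True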

I hope Sunshine does well. I hope it makes a valuable contribution to the science of weather forecasting. I hope it gets people asking their Congressional delegation why we can’t fund denser surface and upper-air observations. I just don’t expect it will have much of an impact on its own.

Hurricane Joaquin forecast contest begins

Hey! The tropics have awoken and there’s a not-unreasonable chance that the newly-upgraded Hurricane Joaquin will make landfall. Here’s your chance to test your forecast skill: http://funnelfiasco.com/cgi-bin/hurricane.cgi?cmd=view&year=2015&name=joaquin

Submit your forecast by 00 UTC on October 2 (8 PM EDT Thursday). If Joaquin does not make landfall, we’ll just pretend like this never happened. For previous forecast game results, see http://weather.funnelfiasco.com/tropical/game/

New entries in the Forecast Discussion Hall of Fame

Radar estimates of wind speed aren’t always the most reliable for a variety of reasons, but on Friday, forecasters in Memphis, TN opted to believe the radar instead of the surface observation. Maybe because the Millington station was reporting a 343 MPH wind gust.

NWS forecasters are public servants dedicated to preserving life and property. It should come as no surprise that they are sometimes moved by uncontrollable bursts of patriotism. Chris Hattings in Riverton, WY felt very Jeffersonian on Independence Day.

Both of these discussions have been added to the Forecast Discussion Hall of Fame.