Climatune: correlating weather and Spotify playlists

I don’t remember how I stumbled upon it, but I recently discovered Climatune, a joint effort between Spotify and Accuweather that presents a playlist to match the current weather. According to Accuweather’s blog post on the topic, the playlists were developed based on an analysis of 85 million streams across 900 cities. This is an incredibly interesting project, even if the Lafayette playlists don’t seem to vary much.

What I like about projects such as Climatune is the reminder that we are still animals affected by our surroundings. When I worked at McDonald’s, we noticed anecdotally that sales of the Filet-o-Fish apparently increased on rainy days. I regret that I never ran daily sales reports to test this observation. I suppose in either case, there was an effect. Either customers ordered more, or we were more aware of the sales.

Correlating weather with other data is hardly a new concept. The most common example is probably comparing Chicago crime reports to the temperature. But researchers have investigated mood-weather correlation from Twitter posts. Several studies have examined the effects of weather on the stock market.

These sorts of studies can be hard, since it’s hard to control for all of the possible factors. But even if we can’t draw statistically sound conclusions, it’s fun to look at the connections. And if weather isn’t your thing, Spotify also has a site that makes a custom playlist that fits the cook time of your Thanksgiving turkey.

A quick look at the Weather Research and Forecasting Innovation Act

When I first heard that Congress had passed a bill it called the “Weather Research and Forecasting Innovation Act”, I was a little concerned. The majority has a history of being hostile to science (and one former senator was hostile to the National Weather Service in particular), and the title of the bill just screams “legislative doublespeak”. But I’ve skimmed the bill and there doesn’t seem to be much to object to.

Much of the bill is of a “duh, we’re already working on that” nature; it requires the National Weather Service to conduct research to improve forecasts and warnings. One thing I liked is that it specifically called out communication of forecasts and warnings as an area of improvement. It requires the current system to be examined, with necessary changes made where the current system is unsatisfactory. The current watch/warning/advisory system leaves a lot to be desired.

This bill is probably most notable for what it doesn’t say. On the positive side, it does not prescribe specific metrics (e.g. “the average lead time for a tornado warning must be 60 minutes”). It seems clear that meteorological experts were consulted for this bill.

On the other hand, the only time “climate” is mentioned is when the full name of the COSMIC satellite program is given. There’s nothing in the bill that specifically precludes the NWS from conducting research related to climate change, but I couldn’t help reading into the stated focus on short-term and seasonal weather. Legislation isn’t written in a vacuum, and the plain fact is that the current administration and Congress aren’t big supporters of climate change research or mitigation.

The main area of concern for me is the budget. A few programs have specific dollar amounts assigned, but it’s not clear to me if those are new appropriations or a directive to use the existing budget to that purpose. Certainly the main budget will have an impact on how this bill, should it be signed, is implemented. Given the initial reports about the Trump administration’s first budget, I remain solidly pessimistic. But returning to the provisions of this specific bill, it requires a lot of reporting, much of which appears to be new. The reporting, and even the substantive efforts, could end up being an unfunded mandate.

I can’t predict what the outcome of this bill will be. It got bipartisan support and I haven’t heard of any major complaints from my friends within the Weather Service. That in itself is encouraging; it should at least do no harm. If backed with sufficient funding, this may lead to improved forecasts for a variety of weather hazards. This, of course, is the stated mission of the National Weather Service: to protect life and property.

The most dangerous part of storm chasing is the road

People who have never gone storm chasing don’t always believe me when I say it’s a very boring hobby. Hours of driving can lead to…steady rain. Or blue skies. Or any number of outcomes that were probably not worth the time and money invested. They see movies or “reality” TV shows and assume it’s constantly a dangerous, edge-of-your-seat thrill ride.

Well it is dangerous, as we were reminded last week. Three chasers were killed in an automobile accident after one driver apparently ran a stop sign and hit another vehicle. I had never heard of the driver in question, so I can’t begin to speculate about his approach. Chances are he was a safe and conscientious driver most of the time. But it only takes one time. In a picture I saw that purports to show the fatal intersection, the stop sign is several feet away from the road. It’s easy to miss if something else grabs your attention.

Any chaser without a “close call” story is full of it (or just has a really bad memory). Distracted driving is dangerous, and chasing — especially near the storm — is an exercise in distracted driving. Maps, radar, radios, storm structure, cameras. All of these things compete for attention, but still you have to watch the road. Even with someone in the passenger seat, it can be hard to focus on the task at hand.

Truth be told, I’m surprised there haven’t been more deaths due to road accidents. The tornado isn’t the dangerous part.

January was warm, but in a weird way

Did last month feel warm to you? January was unusually warm in central Indiana, but in a subtle way. The highest temperature recorded at the Purdue University Airport (KLAF) was 63°F — just four degrees warmer than last year. And the lowest temperature — -4°F — was a degree colder than last year. But what if last year was warm, too?

Let’s compare January 2017 to the climate normals (1981-2010). The average daily high was 5.4 degrees above normal, and the average daily low was 5.9 degrees above normal. The average temperature for the month was 5.6 degrees above normal. That’s noticeably warm, but not necessarily outrageous.

What makes January 2017 stand out is the extended stretch of warmer temperatures. More specifically, how it just wasn’t freezing for much of the month. January 2017 was right on target for the number of days with a low below zero (3), but it had five fewer days than normal with a low below 32°F. In fact, Lafayette set a new record for consecutive hours in January with a temperature above freezing.

Streaks of January hourly temperatures at or above freezing at KLAF. Chart from Iowa Environmental Mesonet.

Since records began at KLAF in 1972, only two years have had an above-freezing streak of 10 days or longer. In 1995, we had a 10-day streak; this year it was 10.9 days. It just didn’t get cold for the second half of the month. Four of the last 10 Januaries had at least 5 days with a high temperature above 50°F. Four of the last 10 years also had at least one January day above 60°F. This year’s two such days are bested only by the three in 2008.

So if you were thinking to yourself “wow, January was really warm” but the high temperatures didn’t look all that off, rest assured that it was. It just forgot to January.


Weather forecast accuracy is improving

ForecastWatch recently issued a report on the accuracy of weather forecasts from 2010 through June 2016 (PDF here). While many readers will focus on who was more accurate, what stood out to me was how forecast accuracy has improved. Meteorologists have long “enjoyed” a reputation for inaccuracy — often more due to perception than fact. But those in the know are aware that skill is increasing.

Forecast accuracy over time

ForecastWatch’s U.S. analysis shows a clear — if small — improvement in the average accuracy since 2010.

Average U.S. forecast accuracy from 2010 – June 2016.

The chart above shows the average for all of the forecast sources ForecastWatch analyzed. To be frank, World Weather Online is a stinker, and brings the averages down by a considerable margin. Examining the best and worst forecasts shows more interesting results.

Best and worst U.S. forecast accuracy from 2010 – June 2016.

Forecasts get less skillful with increasing lead time, thanks to subtle inaccuracies in the initial conditions (see also: butterfly effect). That’s obvious in both graphs. What this second chart shows is that the best 6-9 day forecast is now roughly as skillful as the worst 3-5 day forecast was in 2010. And the best 3-5 day forecast is in the middle of the 1-3 day forecast skill from just a few years ago.

Forecasts are definitely improving. This is due in part to better modeling — both more powerful computers and also the ability to ingest more data. Research and improved tooling helps as well.

Forecasts still bust, of course, and forecasters hate bad forecasts as much as the public does. As I write this, forecasters in North Carolina are dealing with an inaccurate snow forecast (winter weather forecasting sucks due to reasons I explained in a previous post). Missed forecasts can cost money and lives, so it’s good to see a trend of improvement.

Forecast accuracy in your city

The ForecastWatch report breaks down by broad regions: United States, Europe, and Asia/Pacific. But weather is variable on much smaller scales. The ForecastAdvisor tool compares forecasts at the local level, giving you the ability to see who does the best for your city. As of early January 2017, AccuWeather had the most accurate forecasts for Lafayette, Indiana, but they only place fourth when considering the past year.

Measuring HVAC efficiency with degree days

I finally turned my furnace on for the winter this weekend. As I thought about all the money I’ve saved keeping it off for several extra weeks, I was reminded of a discussion I had with a friend earlier this year. He was comparing his electricity bill to the previous year to see how much his new air conditioning unit was saving. Of course, there are a lot of ways to arrive at the wrong answer.

Let’s look at some of the things that affect the dollar amount of a utility bill:

  • The price per unit. Natural gas, propane, and electricity all have costs that vary based on a variety of factors: fuel cost, regulatory requirements, taxes, etc.
  • Weather. Obviously extreme weather will affect how much is consumed, which then affects the final bill.
  • Billing period. At least for my bills, the length of the billing period can vary by one or two days. This is probably due to working days (particularly holidays), and the fact that months are not of equal length.
  • Usage patterns. If you were out of town for a week in October this year but not last year, your overall usage will probably be down this year. Similarly, adding, removing, or replacing appliances can have an effect separate from your HVAC system. And if you switch from working at an office to working from home, you’ll probably see an increase in utility usage.

So how can you see if your new furnace, air conditioner, fancy thermostat, or whatever has made a difference? It’s going to be hard to account for some of the factors above (particularly usage patterns), but the best way is to look at your usage per degree day.

What’s a degree day? It’s a measure of the amount of heating or cooling required. The simplest measure of a heating degree day is to subtract the daily average temperature from 65. For example, if the average temperature for a given day is 55, then you would record 10 heating degree days. Similarly, to calculate cooling degree days, you would subtract 65 from the average temperature. So on a day with a 75 degree average temperature, you would record 10 cooling degree days. The lowest measure of degree days is 0; you would not record negative values. (Note that “average” here means the mathematical average of the high and low, not the climatological normal. If you have more detailed temperature data, you can calculate on an hourly or similar basis to get a more accurate value.)
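The arithmetic above is simple enough to sketch in a few lines of Python. This is just the high/low average method described in the paragraph; the function name and the 65°F default are my own choices:

```python
def degree_days(high_f, low_f, base_f=65.0):
    """Heating and cooling degree days for one day, using the simple
    high/low average method: average the daily high and low, then take
    the (non-negative) distance from the 65-degree base."""
    avg = (high_f + low_f) / 2.0
    heating = max(0.0, base_f - avg)  # how far the average fell below the base
    cooling = max(0.0, avg - base_f)  # how far the average rose above the base
    return heating, cooling

# A day with a high of 60 and a low of 50 averages 55, which is
# 10 heating degree days and 0 cooling degree days.
print(degree_days(60, 50))  # (10.0, 0.0)
```

Note that both values floor at zero, matching the rule that you never record negative degree days.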

Utility companies will sometimes include the heating or cooling degree days for the billing period in your bill. If not, you can get values online. Divide your usage for the month (e.g. the kilowatt-hours) by the heating or cooling degree days to get a value you can compare to other bills. If your usage patterns are roughly the same, you’ll now be able to compare year to year.
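As a sketch of that comparison (the usage and degree-day figures below are invented purely for illustration):

```python
def usage_per_degree_day(usage_kwh, degree_days):
    """Normalize a billing period's energy use by its heating (or
    cooling) degree days, so bills from periods with different
    weather can be compared on a weather-adjusted basis."""
    if degree_days <= 0:
        raise ValueError("need a positive degree-day total to normalize")
    return usage_kwh / degree_days

# Hypothetical December bills, before and after a furnace upgrade:
before = usage_per_degree_day(1200, 800)  # 1.5 kWh per heating degree day
after = usage_per_degree_day(1050, 875)   # 1.2 kWh per heating degree day
print(before, after)
```

The second winter used less energy per degree day even though the weather demanded more heating, which is the kind of signal a raw dollar-to-dollar bill comparison would hide.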

Hurricanes doing laps

As I write this Thursday night, Hurricane Matthew is approaching the east coast of Florida. By the time this post goes live, Matthew will have just made landfall (or made its closest approach to the Florida coast). Hundreds have been killed in Haiti, according to officials there, and I haven’t heard of any updates from Cuba or the Bahamas, both of which were hit fairly hard.

But even as the immediate concerns for Florida, Georgia, and the Carolinas are the primary focus, there’s another thought in the minds of meteorologists: a second round.

National Hurricane Center forecast graphic for Hurricane Matthew.

If the forecast holds and Matthew loops back around to strike the Bahamas and Florida again, it could exacerbate already devastating damage. It is expected to weaken, so the threat will be more for rain than wind, but with existing widespread damage, it could be significant.

Such an event is not unprecedented, but it is rare. Edouard and Kyle, both in 2002, did loops over open water, but did not strike the same area twice. Hurricane Esther struck Cape Cod twice in 1961.

From what I’ve been able to find, it looks like 1994’s Hurricane Gordon is the closest analog, but it’s not great. Gordon snaked through the Florida Straits and moved onshore near Fort Myers. The second landfall was near the location of the “seafall” on the Atlantic coast. Gordon’s peak strength was a low-end category 1, not the category 3 or 4 that Matthew will be at landfall (or closest approach).

Matthew is already making its place in history as the strongest storm on record to impact the northeastern Florida coast. Next week, we’ll find out how much gets tacked on.

The Local Storm Report product still has value

Several tornadoes hit central Indiana last month. During the event, a tornado warning was issued for Indianapolis. I saw several local media people tweeting that police had reported a tornado but no Local Storm Report (LSR) had been issued by the National Weather Service. I thought that was rather odd, and mentioned this incongruity in a tweet. It didn’t seem right to me that a tornado could be reported in the 14th-largest city in the United States but have no LSR issued.

Several people replied to tell me that the police report was included in the text of the warning. I did not take kindly to that. While including such information in a warning is great, that’s not what I was after. I specifically wanted an LSR. I was asked if it’s still a relevant product, so here’s this post.

The Local Storm Report is still a distinctly useful product because it has a defined format. While most people do not consume LSRs directly, the rigid format allows it to be used in a variety of useful ways. For example, a media outlet can parse the incoming LSRs and use the coordinates and type to make a map for TV or web viewing. This can help the audience better understand the type and location of a threat.
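As a rough sketch of why a defined format matters, here is how a downstream consumer might pull the coordinates and event type out of a report line to drive a map. The pipe-delimited layout below is a simplified stand-in of my own invention, not the actual fixed-column LSR product format:

```python
from typing import NamedTuple

class StormReport(NamedTuple):
    event: str
    lat: float
    lon: float
    location: str

def parse_report(line: str) -> StormReport:
    """Parse one report line in a simplified, hypothetical
    pipe-delimited layout: EVENT|LAT|LON|LOCATION."""
    event, lat, lon, location = line.split("|")
    return StormReport(event.strip(), float(lat), float(lon), location.strip())

report = parse_report("TORNADO|39.77|-86.16|INDIANAPOLIS")
# report.lat and report.lon can now be dropped onto a map,
# and report.event can drive the choice of icon.
```

The point isn’t this particular layout; it’s that any rigid, documented format lets software consume reports reliably, which a free-form tweet cannot.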

Additionally, they’re helpful for downstream experts (other forecast offices, emergency managers, etc.) to know what a storm has produced. I often watch the LSRs issued by the Lincoln, IL or Chicago offices when severe weather is approaching my area to see the ground truth to go along with the warning. Knowing that a storm has (or hasn’t) produced what the warning advertised can be very helpful in formulating a response to an approaching weather threat.

Apart from the warnings, timely and frequent LSR issuance is one of the most valuable functions of a National Weather Service office during a severe weather event.

But what about social media?

I’m glad you asked. Someone suggested that social media is a better avenue for communicating storm reports, in part because a picture is worth a thousand words. I agree to a point. Seeing a picture of the tornado heading for you is more powerful than words or a radar image. In that sense, social media is better.

But Facebook is awful for real-time information. Twitter is limited in the amount of detail you can include and has a relatively small audience. Plus, social media is hard to automatically parse to reuse the data, unless every forecaster tweets in a prescribed format.

The ideal scenario would be to tie social media into the process of issuing LSRs. As an LSR is generated, the forecaster would have the option of posting the information to the office’s social media accounts (perhaps with a link to the LSR). If we’re granting wishes, the posting process would also allow for the inclusion of external images.

Until that day comes, I’m going to keep looking to LSRs for verification during severe weather events. And I’ll keep being disappointed when they’re not issued. 

New entry in the Forecast Discussion Hall of Fame

Most entries in the Forecast Discussion Hall of Fame earn the honor with consistent excellence throughout the entire work. As Hurricane Hermine approached the Florida coast earlier this week, forecasters at the Tallahassee forecast office were focused on the effects of that storm. The fire weather discussion contained a single word, and that’s what landed it as the most recent entry.

It’s worth noting, too, that several subsequent updates to the Area Forecast Discussion left the fire weather section unchanged. I’m glad to see Southern Region Headquarters did not immediately rain bureaucratic hell upon the office. I’m not sure that would be the case in other regions.

Long range heat wave forecasts

What if I could tell you today that we’d have a major heat wave on June 11? A recently-published study could make that possible. Researchers analyzing heat waves over several decades have found a signal that improves the reliability of long-range heat forecasts. Will it be useful for forecasting specific days? I have my doubts, but we’ll see. They apparently plan to use it quasi-operationally this summer.

The more likely scenario is that it will help improve probabilistic forecasts on a multi-day-to-month scale, for example the one-month and three-month outlooks issued by the Climate Prediction Center. There’s clear value in knowing departure from normal conditions over the course of the next few months, particularly for agricultural concerns but also for civic planning. I’m not sure I see much value in knowing now that June 11 will be oppressively hot as opposed to June 4.

While this study got a fair amount of coverage in the weather press, I don’t see that it will have much of an impact on the general public for a while. In fact, if it results in gradual improvement to long-range probabilistic forecasts, the public will probably never notice the impact, even if it turns out to be substantial over the course of several years.