We all like to gripe about how often the TV weather-guy is wrong. Most of us know that once you get past about a 3-day forecast, the predictions are about as accurate as throwing a dart at the weather map, blindfolded.
I came across an interesting little story: blogger Randy Olson's review of Nate Silver's book, The Signal and the Noise. It's a study of how good (and bad) predictions are, and why the task of prediction is far harder than most of us give it credit for.
Here's what caught my eye: Silver was able to compare forecasts from the National Weather Service, the Weather Channel, and local TV stations against what actually happened. In a perfect world, when the forecast says a 50% chance of rain, it actually rains on half of those days. That's the definition of a perfectly calibrated forecast.
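Calibration is easy to check in code: group the days by the probability that was forecast, then see how often it actually rained in each group. Here's a minimal sketch; the forecast and outcome numbers below are made up for illustration, not Silver's data.

```python
# Sketch of forecast calibration with hypothetical data.
from collections import defaultdict

def calibration(forecasts, outcomes):
    """For each forecast probability, compute how often it actually rained."""
    buckets = defaultdict(list)
    for prob, rained in zip(forecasts, outcomes):
        buckets[prob].append(rained)
    return {prob: sum(obs) / len(obs) for prob, obs in sorted(buckets.items())}

# Hypothetical record: ten days with a 50% forecast, ten with a 100% forecast.
forecasts = [0.5] * 10 + [1.0] * 10
outcomes = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0] + [1] * 7 + [0] * 3

print(calibration(forecasts, outcomes))
# A perfectly calibrated forecaster maps 0.5 -> 0.5 and 1.0 -> 1.0.
# Here the "100%" days verify only 70% of the time: a wet-bias pattern.
```

A run over real forecast archives would work the same way, just with many more probability buckets.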
Silver found that all of the forecasting services were fairly close; the National Weather Service and the Weather Channel were within just a few percentage points of perfect. You’d think that the local TV guys would be right in line with them, seeing as they base most of their forecasts on the NWS data. You’d think that—but you’d be wrong.
The local TV news consistently predicted more rain than actually fell. All they had to do was parrot the NWS data, and they would have been within a few percentage points of perfect. Instead, they were frequently way off. When the TV forecast called for a 100% chance of rain, do you know how often it actually rained? Somewhere around 68% of the time!
Again, all they had to do to be nearly perfect was copy and paste the NWS report. Why were they so far off?
Because you and I, through the pressure of ratings, make them that way.
Silver explained this phenomenon as the "wet bias": TV reporters forecast rain more often than it actually happens. It's a simple incentive. If they tell us it's going to rain and it doesn't, we feel lucky and forget about it. If they tell us it isn't going to rain, and it rains on our parade, we're furious and we don't forget! As viewers we tend to remember only the bad misses (a kind of selection bias). So the weatherman's incentive is to over-predict rain: it won't hurt him, and it could help him.
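That incentive can be sketched as a tiny expected-cost calculation. If viewers punish a surprise soaking far more than a false alarm, the "best" call for the weatherman is to forecast rain at a probability well below 50%. The cost numbers here are hypothetical, just to show the asymmetry at work.

```python
# Toy model of the wet-bias incentive (costs are hypothetical).
# A false alarm (forecast rain, stays dry) annoys viewers a little;
# a surprise soaking (forecast dry, it rains) infuriates them.
COST_FALSE_ALARM = 1.0
COST_SURPRISE_RAIN = 5.0

def expected_cost(say_rain, p_rain):
    """Expected viewer backlash for a given call, given the true rain chance."""
    if say_rain:
        return (1 - p_rain) * COST_FALSE_ALARM
    return p_rain * COST_SURPRISE_RAIN

def best_call(p_rain):
    """The call that minimizes expected backlash."""
    if expected_cost(True, p_rain) < expected_cost(False, p_rain):
        return "rain"
    return "dry"

# With these costs it pays to call "rain" whenever the true chance
# exceeds 1/6, not 1/2; e.g. a 30% day already gets a rain forecast.
print(best_call(0.3))  # -> rain
```

The threshold falls out of the algebra: calling rain beats calling dry whenever (1 - p) x 1.0 < p x 5.0, i.e. p > 1/6. The lopsided costs, not the weather, set the bias.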
So what’s the lesson about human nature?
We evaluate the performance of others based on their effect on us.
Here’s what we don’t tend to evaluate others based on: accuracy, effort, intention, feelings, or pretty much anything else.
When someone cuts me off in traffic, he's an inconsiderate maniac, a menace to society who should be taken off the road. When I do it, I evaluate myself differently: it was an accident, and I'm only human!
When they get my order wrong at the McDonald's drive-thru, I don't think about my being unclear, the sound equipment making their job difficult, poor training, or the possibility that they're near the end of a double shift. I just assume they don't care and want to ruin my lunch.
I’m not always the most charitable observer, am I? Sometimes that causes other people to change their behavior…and not generally for the better.
The weatherman knows that, and so he covers his bases by fudging the numbers, but most people in life don’t have that simple recourse.
This week, when you get angry at somebody, stop and think: am I judging them like I judge the weatherman? Maybe I could cut the people around me a little more slack than I usually do.
What do you think?
In the meantime, I’ll be over at weather.gov…and thinking about what Jesus said. “Judge not…”