# FiveThirtyEight Assess their Accuracy

Gold Member
This is a podcast, which can be found here. It is essentially an interview with Nate Silver, the stat head behind FiveThirtyEight.com.
They have made thousands of predictions on politics and sports (which they publish) and consider their competition to be betting sites and other modelers. Nate used to play professional poker.

What I found interesting was how they calibrate their statistical models by comparing predictions against outcomes.
Because they have so many results (large numbers of predictions paired with outcomes; sports is good for this), they can use statistical analysis to measure how accurate their predictions have been.
From that point, they try to improve their model.
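The comparison described above can be sketched in a few lines: bin the forecasts by stated probability and check whether the observed frequency in each bin matches it. The data here is made up for illustration, not taken from FiveThirtyEight.

```python
from collections import defaultdict

def calibration_table(forecasts):
    """Bin (probability, outcome) pairs by predicted probability
    (rounded to the nearest 10%) and report, per bin, the observed
    frequency of the event and the number of forecasts."""
    bins = defaultdict(list)
    for prob, occurred in forecasts:
        bins[round(prob, 1)].append(occurred)
    return {b: (sum(outcomes) / len(outcomes), len(outcomes))
            for b, outcomes in sorted(bins.items())}

# Toy data: ten 70% calls and ten 30% calls from a perfectly
# calibrated forecaster.
forecasts = [(0.7, True)] * 7 + [(0.7, False)] * 3 \
          + [(0.3, True)] * 3 + [(0.3, False)] * 7
print(calibration_table(forecasts))  # {0.3: (0.3, 10), 0.7: (0.7, 10)}
```

If the forecaster is well calibrated, each bin's observed frequency should sit close to the bin's stated probability once enough forecasts accumulate.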

BWV
Thanks, I'll have to listen to the podcast. If this is all they are benchmarking themselves against:
https://projects.fivethirtyeight.com/checking-our-work/presidential-elections/
then it's incomplete. The interesting questions are how FiveThirtyEight compares to prediction markets and, now that they are widely followed, their impact on prediction markets. One has to assume that FiveThirtyEight's predictions are incorporated and discounted in prediction markets like PredictIt, making FiveThirtyEight just one piece of information that the market price incorporates.

The other interesting issue is the impact of FiveThirtyEight's predictions on actual events. If FiveThirtyEight says candidate X has a better chance of winning, that information is now taken seriously by donors, party insiders, and voters.

BillTre
Kind of a self-fulfilling prophecy scenario?

Gold Member
Here is the written article, published last week by FiveThirtyEight, on their backtesting of their predictions: https://fivethirtyeight.com/features/when-we-say-70-percent-it-really-means-70-percent/

It discusses two main measures by which they evaluate their predictions. First, are the predictions well calibrated? For all events to which they assigned a 70% probability, do seven out of ten occur while the remaining three fail to occur? Second, they examine the skill of their predictions. For example, predicting that each of the 68 teams in the NCAA men's basketball tournament has a 1/68 chance of winning the tournament produces a well-calibrated forecast, but not a particularly skillful one.

Here are some independent evaluations/commentary on FiveThirtyEight's methods:

BWV
FiveThirtyEight does not beat prediction markets