FiveThirtyEight Assesses Its Accuracy

  • Context: Undergrad
  • Thread starter: BillTre
  • Tags: Accuracy
SUMMARY

The discussion centers on the accuracy of predictions made by FiveThirtyEight, particularly through an interview with Nate Silver, the founder. FiveThirtyEight uses statistical models and calibrates its predictions by comparing them against actual outcomes, especially in sports and politics. They evaluate their predictions on calibration and skill, checking that events occur at roughly the frequencies their assigned probabilities imply. Additionally, the impact of their predictions on prediction markets, such as Predictit, is highlighted, suggesting that their forecasts influence market behavior and decision-making among stakeholders.

PREREQUISITES
  • Understanding of statistical calibration in predictive modeling
  • Familiarity with prediction markets, specifically Predictit
  • Knowledge of FiveThirtyEight's forecasting methods and metrics
  • Basic concepts of probability and statistical skill evaluation
NEXT STEPS
  • Research FiveThirtyEight's calibration techniques in detail
  • Explore the impact of prediction markets on political forecasting
  • Study Nate Silver's book, "The Signal and the Noise"
  • Analyze independent evaluations of FiveThirtyEight's forecasting accuracy
USEFUL FOR

This discussion is beneficial for data analysts, political scientists, statisticians, and anyone interested in the methodologies behind predictive modeling and its implications in real-world scenarios.

BillTre (Science Advisor, Gold Member)
This is a podcast, and it can be found here. Basically, it's an interview with Nate Silver, the stat head behind FiveThirtyEight.com.
They have made thousands of predictions on politics and sports (which they publish) and consider their competition to be betting sites and other modelers. Nate used to play professional poker.

What I found interesting was how they go about calibrating their statistical models by comparing predictions against actual outcomes.
Because they have so many results (large numbers of predictions paired with outcomes; sports is especially good for this), they can statistically analyze how accurate their predictions have been.
From that point, they try to improve their model.
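
Just to make the idea concrete, here's a minimal sketch of that kind of calibration check in Python, with made-up data. This is not FiveThirtyEight's actual code, just the general technique: bin forecasts by predicted probability and compare each bin's average prediction to the observed frequency of the event.

```python
import numpy as np

def calibration_table(probs, outcomes, n_bins=10):
    """Bin predicted probabilities and compare each bin's average
    forecast to the observed frequency of the event."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (probs >= lo) & (probs < hi)
        if mask.sum() == 0:
            continue
        rows.append((lo, hi, int(mask.sum()),
                     probs[mask].mean(),      # what the model said
                     outcomes[mask].mean()))  # what actually happened
    return rows

# Toy example: 10,000 fake forecasts from a roughly calibrated model.
rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.95, size=10_000)
y = rng.random(10_000) < p  # outcomes occur at the forecast rate
for lo, hi, n, mean_p, freq in calibration_table(p, y):
    print(f"[{lo:.1f}, {hi:.1f})  n={n:5d}  forecast={mean_p:.3f}  observed={freq:.3f}")
```

For a well-calibrated model, the "forecast" and "observed" columns track each other closely in every bin.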
 
Thanks, I'll have to listen to the podcast. If this is all they are benchmarking themselves against
https://projects.fivethirtyeight.com/checking-our-work/presidential-elections/
then it's incomplete. The interesting questions are how FiveThirtyEight compares to prediction markets and, now that they are widely followed, what impact they have on those markets. One has to assume that FiveThirtyEight's predictions are already incorporated and discounted in prediction markets like Predictit, making them just one piece of information among many that the market price reflects.

The other interesting issue is the impact of FiveThirtyEight's predictions on actual events. If FiveThirtyEight says candidate X has a better chance of winning, that information is now taken seriously by donors, party insiders, and voters.
 
Kind of a self-fulfilling prophecy scenario?
 
Here is the written article, published last week by FiveThirtyEight, on their backtesting of their predictions: https://fivethirtyeight.com/features/when-we-say-70-percent-it-really-means-70-percent/

It discusses two main measures by which they evaluate their predictions. First, are the predictions well calibrated? That is, for all events to which they assigned a 70% probability, do about 7 out of 10 of those events occur while the remaining 3 fail to occur? Second, they examine the skill of their predictions. For example, predicting that each of the 68 teams in the NCAA men's basketball tournament has a 1/68 chance of winning produces a well-calibrated forecast, but not a particularly skillful one.
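For anyone who wants to play with the calibration-vs-skill distinction, here's a small illustration using the Brier score (mean squared error of probabilistic forecasts; lower is better). FiveThirtyEight reports its own skill measure, so treat this as a generic stand-in, and the 40% number in the sharper forecast is just something I made up:

```python
import numpy as np

def brier(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1
    outcomes (lower is better)."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return np.mean((probs - outcomes) ** 2)

n_teams = 68
winner = 0  # index of the team that actually won
outcomes = np.zeros(n_teams)
outcomes[winner] = 1.0

# "No skill" forecast: every team gets 1/68. Calibrated in aggregate
# (exactly 1 of the 68 "team X wins" events occurs) but uninformative.
uniform = np.full(n_teams, 1 / n_teams)

# A sharper (hypothetical) forecast: 40% on the eventual winner,
# with the remaining 60% spread evenly over the field.
sharp = np.full(n_teams, 0.6 / (n_teams - 1))
sharp[winner] = 0.4

print(f"Brier, uniform forecast: {brier(uniform, outcomes):.5f}")
print(f"Brier, sharper forecast: {brier(sharp, outcomes):.5f}")
```

The sharper forecast scores much better, provided, of course, that it leaned toward the right team.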

Here are some independent evaluations/commentary on FiveThirtyEight's methods:
https://www.buzzfeednews.com/article/jsvine/2016-election-forecast-grades
https://towardsdatascience.com/why-...lver-vs-nassim-taleb-twitter-war-a581dce1f5fc
 
FiveThirtyEight does not beat prediction markets

 
Here is another article from FiveThirtyEight on the same issue, updated with results from the 2019 NCAA men's basketball tournament.
They also have some graphs of the skill, uncertainty, and resolution of their forecasts.
The graphs also compare their political and sports forecasts.
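
Skill, uncertainty, and resolution are the terms in the classic Murphy decomposition of the Brier score (Brier = reliability − resolution + uncertainty), so I assume their graphs are showing something along those lines. Here's a rough sketch of computing the decomposition on binned forecasts, with made-up data again; note the identity is exact only when all forecasts in a bin share the same value, so the binned version is an approximation:

```python
import numpy as np

def murphy_decomposition(probs, outcomes, n_bins=10):
    """Decompose the Brier score as reliability - resolution + uncertainty
    (Murphy 1973), binning forecasts by predicted probability."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    n = len(probs)
    base_rate = outcomes.mean()
    uncertainty = base_rate * (1 - base_rate)  # variance of the outcomes

    bins = np.clip((probs * n_bins).astype(int), 0, n_bins - 1)
    reliability = resolution = 0.0
    for b in range(n_bins):
        mask = bins == b
        n_b = mask.sum()
        if n_b == 0:
            continue
        mean_p = probs[mask].mean()   # average forecast in the bin
        freq = outcomes[mask].mean()  # observed frequency in the bin
        reliability += n_b / n * (mean_p - freq) ** 2    # calibration error
        resolution += n_b / n * (freq - base_rate) ** 2  # sharpness payoff
    return reliability, resolution, uncertainty

rng = np.random.default_rng(1)
p = rng.uniform(0, 1, 20_000)
y = (rng.random(20_000) < p).astype(float)
rel, res, unc = murphy_decomposition(p, y)
print(f"reliability={rel:.4f} resolution={res:.4f} uncertainty={unc:.4f}")
print(f"Brier (rel - res + unc) = {rel - res + unc:.4f}")
print(f"Brier (direct)          = {np.mean((p - y) ** 2):.4f}")
```

Low reliability (calibration error) and high resolution are what you want; uncertainty is a property of the events themselves, not of the forecaster.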
 
I have heard Nate's book The Signal and the Noise is good. I found a used copy for $5.
 
