BWV
I find the whole idea of modelling real-world events involving human beings an interesting intersection of math, science and bull####. The idea of assigning a probability to a complex event like an election seems like scientism at its worst. Someone who has studied past elections saying ‘my educated guess is that candidate X will win’ is honest, but sounds imprecise. There is a quantitative fallacy at work: people seem to give more weight to ‘my model predicts a 91.3% chance that candidate X will win’, but the probability is BS.
there is a Wiki article on this:
https://en.wikipedia.org/wiki/McNamara_fallacy

The McNamara fallacy (also known as the quantitative fallacy[1]), named for Robert McNamara, the US Secretary of Defense from 1961 to 1968, involves making a decision based solely on quantitative observations (or metrics) and ignoring all others. The reason given is often that these other observations cannot be proven.
The fallacy refers to McNamara's belief as to what led the United States to defeat in the Vietnam War—specifically, his quantification of success in the war (e.g., in terms of enemy body count), ignoring other variables.[2]

The first step is to measure whatever can be easily measured. This is OK as far as it goes. The second step is to disregard that which can't be easily measured or to give it an arbitrary quantitative value. This is artificial and misleading. The third step is to presume that what can't be measured easily really isn't important. This is blindness. The fourth step is to say that what can't be easily measured really doesn't exist. This is suicide.
— Daniel Yankelovich, "Corporate Priorities: A continuing study of the new demands on business" (1972).
anyway, this guy said Trump had a 91% chance of winning, with a model that had predicted correctly ‘25 of 27 times’ (of course, the model has only existed for the past few elections)
https://spectator.org/helmut-norpot...ZLfNFYsrM7UstbwriwHb5RmBNxpZ6ThoJCjImg-7UOn2w

Most political prognosticators delay their final predictions for a presidential race until the morning of Election Day. In 2016, for example, Nate Silver of FiveThirtyEight and Nate Cohn of the New York Times released their final projections during the wee hours of November 8. One forecaster, however, routinely publishes his predictions six months in advance. Professor Helmut Norpoth of Stony Brook University in New York issued his last word on the Trump–Clinton contest 246 days before the voters went to the polls. In March 2016, Norpoth confidently predicted that Donald Trump would be elected president. He not only contradicted Silver, Cohn, and countless other experts, he had the effrontery to be right.
Professor Norpoth just as confidently predicts that Trump will trounce Joe Biden in November. Specifically, he gives the president a 91 percent chance of winning the election with an unambiguous 362-176 Electoral College margin. If this seems implausible, considering the avalanche of polls that portend Trump’s imminent political demise, it will seem less so when Norpoth’s track record is taken into account. The 2016 election was by no means his first foray into political projections. His model has correctly predicted five of the last six presidential elections, and projected the correct result in all but two of the last 27 contests when pre-election data were fed into his “Primary model.”

Helmut Norpoth’s “Primary model” has been right 25 of 27 times, and it predicts that, in 2020, President Trump will defeat Joe Biden 362-176 in the Electoral College.
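To make the small-sample point concrete: a track record of 25 correct out of 27 sounds impressive, but 27 trials is a tiny sample, and the uncertainty around the underlying hit rate is wide. This is an illustrative sketch (not anything from the articles above) using a standard Wilson score confidence interval for a binomial proportion:

```python
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a binomial proportion.

    z = 1.96 corresponds to a 95% interval.
    """
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return center - margin, center + margin

# A model that was right 25 of 27 times:
lo, hi = wilson_interval(25, 27)
print(f"95% CI for the model's hit rate: [{lo:.2f}, {hi:.2f}]")
# The interval spans roughly 0.77 to 0.98 -- far too wide to support
# a precise-sounding claim like "91% chance".
```

And that interval is generous: it treats 27 elections as independent draws from a stable process, which presidential elections almost certainly are not.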