News: Who Will Win the Elections? Predictions and Analysis

  • Thread starter: rootX

Summary
The discussion centers around predictions for the upcoming elections, with participants expressing uncertainty about the outcomes between Obama and Romney. Many believe the race is extremely close, with some polls showing Romney slightly ahead, while others indicate Obama has the advantage due to endorsements and debate performance. The impact of Hurricane Sandy on voter sentiment is debated, with some arguing it has complicated Obama's chances, while others note his effective response to the crisis. The electoral map suggests that Obama has a clearer path to victory, needing fewer swing states compared to Romney, who must secure multiple key states to win. Overall, the consensus leans towards an Obama victory, despite the tight race.

Who will win the elections?

  • Total voters: 63
  • #91
Pythagorean said:
Anybody want to take bets? Loser has to post "I was wrong, I let my desire to see Romney win interfere with my ability to accurately predict the outcome of the presidential election."

;p
Will others post "I was wrong" if Romney actually wins? :-p
 
  • #92
Pythagorean said:
Anybody want to take bets? Loser has to post "I was wrong, I let my desire to see Romney win interfere with my ability to accurately predict the outcome of the presidential election."

;p
Even if Romney wins? That wouldn't make much sense!
 
  • #93
Angry Citizen said:
That doesn't match Gallup's website. Can you explain that?

I'm also not sure that that is an acceptable source.
Aggregate polling has a margin of error somewhere in the neighborhood of 1%.
Source, please. My understanding is that by nature aggregators can't have a calculable margin for error. That doesn't make it zero (or 1%), it just makes it uncalculable.
It is not a statistical tie. Quit that.
Even if we assume your logic to be true, RealClearPolitics, an aggregator, shows an aggregate differential of 0.2%, which contradicts your assertion. So please tell me on what basis you claim it is not a statistical tie.
 
Last edited:
  • #94
hint:

;p means "teeheehee"
 
  • #95
My understanding is that by nature aggregators can't have a calculable margin for error. That doesn't make it zero (or 1%), it just makes it uncalculable.

This isn't true - I do a lot of sample aggregation for work (not in political polls, but in lots of other contexts), and there are plenty of methods to estimate the error/uncertainty in the aggregate. It's really bread-and-butter stats. Now, the uncertainty might depend on the assumptions you make, so it's not necessarily cut and dried: Sam Wang's uncertainties are very different from what you see in some similar models.

Poll aggregation models RELY on the fact that an aggregate of state polls has a lower uncertainty than any single poll. The other improvement is that polling state-by-state gives you a much better picture of the electoral college than the popular vote (considering you can win an election and lose the popular vote).
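
For concreteness, here is a minimal Python sketch of the idea - pooling independent polls with inverse-variance weights - using made-up poll numbers; it is not any particular aggregator's actual method:

[CODE=python]
import math

# Hypothetical polls: (candidate's lead in points, reported 95% margin of error).
# All numbers are made up for illustration.
polls = [(+1.0, 3.5), (-0.5, 4.0), (+2.0, 3.0), (+0.5, 4.5)]

# Convert each 95% margin of error to a standard error, then pool the estimates
# with inverse-variance weights (the textbook way to combine independent estimates).
weights = [1.0 / (moe / 1.96) ** 2 for _, moe in polls]
pooled_lead = sum(w * lead for (lead, _), w in zip(polls, weights)) / sum(weights)
pooled_moe = 1.96 * math.sqrt(1.0 / sum(weights))

print(f"pooled lead: {pooled_lead:+.2f} points")
print(f"pooled 95% MoE: {pooled_moe:.2f} points "
      f"(best single poll: {min(moe for _, moe in polls):.1f})")
[/CODE]

The pooled margin of error comes out well below the best individual poll's, which is the whole point of aggregating.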
 
  • #96
rootX said:
Will others post "I was wrong" if Romney actually wins? :-p

I have several thousand riding on the outcome, spread through a few election markets (it's not quite a straight bet, because I used the odds spread to hedge a bit).

I will also post here in this thread "I was wrong" if Romney wins the election - if it doesn't cost money, it's hardly a bet.
 
  • #97
CAC1001 said:
free-market capitalism

Can you please tell me what you mean by free-market capitalism?

My definition of free-market capitalism is an economic system where no governance exists, i.e., the concept of a free market means one doesn't believe there should be a government.
 
  • #98
ParticleGrl said:
This isn't true - I do a lot of sample aggregation for work (not in political polls, but in lots of other contexts), and there are plenty of methods to estimate the error/uncertainty in the aggregate. It's really bread-and-butter stats. Now, the uncertainty might depend on the assumptions you make, so it's not necessarily cut and dried.
Clearly, if you have two identical polls, each with a certain margin of error, you should be able to combine them into one big poll with a lower margin of error. But that requires that the polls actually be identical, and I'm not sure they are. I'm having some trouble finding how RealClearPolitics does its aggregation, but Nate Silver provides a little insight:
The nine national polls included in the RealClearPolitics average as of Sunday evening, for instance, contained about 12,000 interviews between them. (Collectively, they showed Mr. Obama ahead by a trivial margin of 0.2 percentage points.)

The margin of sampling error on a 12,000-person sample is larger than you might think: about plus or minus 2 percentage points in measuring the difference between the two major candidates. So the national polls reflected in the RealClearPolitics average could in fact be consistent with anything from a two-point lead for Mr. Romney to one of about the same margin for Mr. Obama.
http://fivethirtyeight.blogs.nytime...but-obama-remains-electoral-college-favorite/

This at least gives us a lower limit on the error. The best it could possibly be is 2% if all the polls were conducted identically, which, of course, they were not.
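
As a back-of-the-envelope check of that roughly 2-point figure, here is a short Python sketch treating the 12,000 aggregated interviews as a single simple random sample (which, as Silver notes, they are not):

[CODE=python]
import math

# Rough check of the "plus or minus 2 percentage points" quoted above, treating
# the 12,000 aggregated interviews as one simple random sample (which they are not).
n = 12_000
p = 0.5                                  # worst case for the variance in a two-way race
se_share = math.sqrt(p * (1 - p) / n)    # standard error of one candidate's share
se_gap = 2 * se_share                    # the Obama-minus-Romney gap moves twice as much
moe_gap = 1.96 * se_gap                  # 95% margin of error on the gap

print(f"MoE on the gap: +/- {100 * moe_gap:.1f} points")   # about +/- 1.8, i.e. roughly 2
[/CODE]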
 
  • #99
Gary Johnson.
 
  • #100
SixNein said:
Can you please tell me what you mean by free-market capitalism?

My definition of free-market capitalism is an economic system where no governance exists, i.e., the concept of a free market means one doesn't believe there should be a government.

Never heard of it defined that way. Maybe you are confusing the ideal of a completely free-market economy with an actual free-market economy?

Generally, a free-market economy is an economic system where the means of production are privately owned and where decisions about how to distribute scarce resources are determined by the forces of the market, via supply and demand, as opposed to the government. A free market requires a government to enforce things like the rule of law, protection of private property, regulation, and so forth, and thus will also require taxation.
 
  • #101
russ_watters said:
Clearly, if you have two identical polls, each with a certain margin of error, you should be able to combine them into one big poll with a lower margin of error. But that requires that the polls actually be identical, and I'm not sure they are. I'm having some trouble finding how RealClearPolitics does its aggregation, but Nate Silver provides a little insight: http://fivethirtyeight.blogs.nytime...but-obama-remains-electoral-college-favorite/

This at least gives us a lower limit on the error. The best it could possibly be is 2% if all the polls were conducted identically, which, of course, they were not.

Err... I don't know every branch of statistics, but every time I've combined data sets, the combined data set does not have a lower error margin than either of its constituents. The errors combine in a Pythagorean fashion (in quadrature), so the result always ends up larger than the largest of the constituent error margins.

For example, MOEs of 2 and 3 would give an aggregated MOE of ##\sqrt{2^2 + 3^2} \approx 3.6##.
 
  • #102
Pythagorean said:
Err... I don't know every branch of statistics, but every time I've combined data sets, the combined data set does not have a lower error margin than either of its constituents. The errors combine in a Pythagorean fashion (in quadrature), so the result always ends up larger than the largest of the constituent error margins.

I think you are confusing the variance (which does behave as above) with the error margin (i.e. the expected error expressed as a percentage of the sample size).

For n Bernoulli trials, the variance is ##np(1-p)##, which clearly increases as n increases. But the expected error is proportional to ##\sqrt{n}## (i.e. the standard deviation), and expressed as a percentage of the sample size it is proportional to ##1/\sqrt{n}##.
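
A quick numerical illustration of that distinction (a Python sketch, nothing polling-specific):

[CODE=python]
import math

# The variance of the count of successes, n*p*(1-p), grows with n, but the
# standard deviation expressed as a fraction of n shrinks like 1/sqrt(n).
p = 0.5
for n in (100, 10_000, 1_000_000):
    var_count = n * p * (1 - p)
    sd_count = math.sqrt(var_count)
    print(f"n={n:>9}: variance={var_count:>9.0f}  sd={sd_count:>7.1f}  sd/n={sd_count / n:.4f}")
[/CODE]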
 
  • #103
It was explicitly error aggregation.

Here is a PowerPoint from the State Data Center at www.census.gov that uses the same convention:

www.census.gov/sdc/trent.ppt
 
Last edited by a moderator:
  • #104
Though I do agree that they use the variance in the above examples (as I was taught to). Margin of error is defined elsewhere with the 1/sqrt(n); within a single study you would surely use the 1/sqrt(n).

But for aggregation I'm not so sure. I remember learning that errors from different data sets should compound, not cancel - but that's from classes, not real research experience.
 
  • #106
I think the race is a coin toss right now. However, things have moved much better for Obama over the last week or two. Obama has been an extremely lucky politician I think, in that the media has covered for him regarding this whole Benghazi issue, and then Hurricane Sandy cut into the momentum Mitt Romney had.
 
  • #107
Pythagorean said:
Though I do agree that they use the variance in the above examples (as I was taught to). Margin of error is defined elsewhere with the 1/sqrt(n); within a single study you would surely use the 1/sqrt(n).

But for aggregation I'm not so sure. I remember learning that errors from different data sets should compound, not cancel - but that's from classes, not real research experience.

I'll pass on the terminology (even if I could remember what it was when I last took a stats course, it's probably changed since then!) and stick to a common sense example.

Suppose you have a fair coin and toss it twice. The maximum "sampling error" in the number of heads is ##\pm 1##, but you have a 50% chance of getting a 50% sampling error - i.e. getting 0% or 100% heads when the "true value" is 50%.

On the other hand if you toss it 1,000,000 times, the number of heads you get will probably be several hundred away from the expected value of 500,000, but a percentage sampling error from getting 501,000 heads instead of 500,000 is only 0.1%.
 
  • #108
Certainly. My hesitation was that, in my data-aggregation experience, the two samples were collected with different methodologies and the errors weren't just a sample-size issue; there were other measurement errors associated with the data sets.

An additional curiosity: do all polls assume a uniform distribution?
 
Last edited:
  • #109
Pythagorean said:
An additional curiosity: do all polls assume a uniform distribution?
Of what? I suspect the answer is no...
 
  • #110
It's always of whatever you're measuring. So for a penny, we assume a uniform distribution of heads and tails. Here it would be Romnians and Obamians. I don't know what else you could do besides assume uniform, or rather, how you would motivate it.
 
  • #111
Pythagorean said:
It's always of whatever you're measuring. So for a penny, we assume a uniform distribution of heads and tails. Here it would be Romnians and Obamians. I don't know what else you could do besides assume uniform, or rather, how you would motivate it.
Still not following. A "uniform distribution" doesn't compute to me - it implies a 50/50 split between Romney supporters and Obama supporters, which would always give a 50/50 poll result, wouldn't it? But let me explain in more detail:

The polls do correct for the demographics of the sample, including the fraction of Democrats and Republicans sampled, women and men, whites and minorities, etc. - though not all polls will correct for all demographics or by the same amount. That is, you don't correct the sample to make it 50/50 Democrats vs. Republicans unless you think an equal number of registered Democrats and Republicans will vote. That's where a lot of voodoo creeps into the polls: how do you figure out before the election what the split will be between registered Democrats and Republicans?

In addition, response rates for polls are very low, which can make for an additional source of error not quantified (and not quantifiable). In other words, a poll that says it has an error of 3% is really saying that of the 9% of people who respond to polls, it has an error margin of 3%. But what about the other 91% of people who wouldn't even answer the poll? How would they vote? We don't know, but using the poll assumes that they would answer the same way. That's a pretty big assumption.
http://web.mit.edu/newsoffice/2012/explained-margin-of-error-polls-1031.html
 
  • #112
The BBC seems to be giving both candidates 50-50 chances of winning in its opinion pieces and other articles. It has been critical of Romney as well as of Obama.
 
  • #113
In addition, response rates for polls are very low, which can make for an additional source of error not quantified (and not quantifiable).

Yes, absolutely - but this is true of the individual polls as much as of the aggregates. We still expect aggregates of polls to reduce the quantifiable sampling errors. If all the state polls are biased (because the people not responding are planning to vote radically differently), then it's garbage in -> garbage out, of course.
 
  • #114
It's always of whatever you're measuring. So for a penny, we assume a uniform distribution of heads and tails. Here it would be Romnians and Obamians. I don't know what else you could do besides assume uniform, or rather, how you would motivate it.

I think you are confused. The goal of polling is to figure out what the underlying distribution looks like- if pollsters just assumed the underlying distribution were uniform, then what's the point of the poll?

Imagine someone gives you a penny and says "this might be biased, I don't know". What's your methodology? You flip the coin a bunch of times and see how it comes out. You know you only have two outcomes, so the underlying distribution of the coin is some sort of binomial distribution (http://en.wikipedia.org/wiki/Binomial_distribution).

Now, you flip the coin N times, and you ask "what binomial distributions are consistent with the results I have?" And that sets your estimate for the probability of getting heads with your (potentially) biased coin.

As to your question about combining samples reducing error - if you flip the coin 100 times, do you think you'll have a better or worse estimate than if you flip the coin 1,000 times? 10,000 times?
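
A minimal simulation of exactly that procedure - the assumed true bias and the normal-approximation interval are mine, just for illustration:

[CODE=python]
import math
import random

# Flip a (possibly biased) coin n times and ask which values of p are consistent
# with the observed count, using a simple normal-approximation 95% interval.
random.seed(0)
true_p = 0.52   # assumed "true" bias, made up for illustration

for n in (100, 1_000, 10_000):
    heads = sum(random.random() < true_p for _ in range(n))
    p_hat = heads / n
    half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
    print(f"n={n:>6}: estimate {p_hat:.3f} +/- {half_width:.3f}")
[/CODE]

More flips shrink the interval, which is the "better estimate" being asked about.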

As to non-sampling errors, it's hard to make a model with any kind of random error that won't average out when you aggregate the polls. However, systematic errors that are present across polls won't cancel out - though they won't grow by including more polls either. (Imagine that 3/4 of the population of a state only spoke Lortax, an alien language, and that everyone who spoke Lortax was planning to vote not for Romney or Obama, but for Kang. By not conducting polls in Lortax, every poll would be systematically biased against Kang.)
 
  • #115
ParticleGrl said:
I think you are confused. The goal of polling is to figure out what the underlying distribution looks like- if pollsters just assumed the underlying distribution were uniform, then what's the point of the poll?

Imagine someone gives you a penny and says "this might be biased, I don't know". What's your methodology? You flip the coin a bunch of times and see how it comes out. You know you only have two outcomes, so the underlying distribution of the coin is some sort of binomial distribution (http://en.wikipedia.org/wiki/Binomial_distribution).

Now, you flip the coin N times, and you ask "what binomial distributions are consistent with the results I have?" And that sets your estimate for the probability of getting heads with your (potentially) biased coin.

As to your question about combining samples reducing error - if you flip the coin 100 times, do you think you'll have a better or worse estimate than if you flip the coin 1,000 times? 10,000 times?

As to non-sampling errors, it's hard to make a model with any kind of random error that won't average out when you aggregate the polls. However, systematic errors that are present across polls won't cancel out - though they won't grow by including more polls either. (Imagine that 3/4 of the population of a state only spoke Lortax, an alien language, and that everyone who spoke Lortax was planning to vote not for Romney or Obama, but for Kang. By not conducting polls in Lortax, every poll would be systematically biased against Kang.)


I think my cognitive problem was that the absolute error does actually grow; it's the relative error that doesn't. So while the error in the number of people grows, the percentage error shrinks.

So what kind of windowing average do they use for the distribution? A loaded coin would have a consistent bias that the average would converge on, but for a signal that can't be modeled with an equation, you'd have to window over the last N elections or something. And what about Republicans who vote Obama, or Democrats who vote Romney? Is there a chain of Bayesian functions for the different demographics?

Feels like a lot of black magic to me, though that's consistent with me being confused I suppose.
 
  • #116
Each (weekly) copy of The Economist is purchased by about 750,000 Americans, whereas each (daily) copy of The New York Times is purchased by fewer than one million readers. So The Economist has a rather large relative readership.
 
  • #117
CAC1001 said:
The U.S.'s was very high in the 1950s because of the tax increases that had been passed under the very left-leaning FDR.
The top federal rate stayed at 91% until 1964, with one Republican President and two Republican-controlled Congresses during this period. I don't know enough about the history of this period to say whether there were efforts underway to lower rates. Maybe one reason the rate wasn't lowered was that, despite arguments to the contrary, it wasn't tanking the economy? In fact, the GDP growth rate and unemployment were both pretty good, at about 5% in 1964.

Edit: This is a redundant argument that I just noticed has mostly been addressed in a previous post. Feel free to ignore.
 
Last edited:
  • #118
So what kind of windowing average do they use for the distribution? A loaded coin would have a consistent bias that the average would converge on. But for a signal that can't be modeled with an equation, you'd have to window the last N elections or something? And what about Reps that vote Obama or Dems that vote Romney? Is there a chain of Bayesian functions for the different demographics?

I think you are overthinking things. The assumption made is that on any given day x% of the population will vote Democrat, y% will vote Republican, and z% won't have decided yet.

Now, if I could call every single person in a state and ask them how they are going to vote, I'd know x, y, and z with certainty. I can't do this, so instead I call a few thousand people at random and ask them. Then I ask: what values for x are consistent with the answers I got when I called? So instead of a probability of getting heads when I flip a coin, I have a probability of getting "I'm voting for whoever" when I call a random person on the phone.

This is the simplest kind of polling, and it requires nothing from the last few elections as an input. It's just attempting to suss out what's going on right now.

Now, the simplest version of correcting for demographics goes like this: let's say I call 1000 people and I find out that 82% are voting for X and 18% for Y (no undecideds). However, I realize that I've gotten unlucky in my sampling and 90% of the people I've called are women. Further, the way women and men are voting is very different - 90% of the women are voting for X, but only 10% of the men!

Here I might be tempted to correct for demographics. I'd say: OK, I have estimates of how men and women vote, but my overall sample is not representative, so I'm going to go ahead and say that in the actual population only 50% are voting for X (0.50*0.10 for the men + 0.50*0.90 for the women). Now, this correction DOES affect our sampling error - I leave it to the interested reader to figure out how to set it, but the why is simple: we have a much smaller sample of men, so a larger error in this case.
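
Spelled out as a small Python sketch, with the same made-up numbers (900 of 1000 respondents are women, 90% of the women and 10% of the men back X, and the population is assumed to be split 50/50):

[CODE=python]
# Post-stratification: reweight each group from its share of the sample
# to its (assumed) share of the population.
sample = {"women": {"n": 900, "for_x": 0.90},
          "men":   {"n": 100, "for_x": 0.10}}
population_share = {"women": 0.50, "men": 0.50}

total = sum(g["n"] for g in sample.values())
raw = sum(g["n"] * g["for_x"] for g in sample.values()) / total
weighted = sum(population_share[k] * sample[k]["for_x"] for k in sample)

print(f"raw support for X:      {raw:.0%}")       # 82%, skewed by the unbalanced sample
print(f"weighted support for X: {weighted:.0%}")  # 50%, after reweighting
[/CODE]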

A more difficult question, I imagine, is what to do if your sample comes back 40% Democrat, 30% Republican, 30% independent, or something of that nature. The problem is that Democrats are very likely to vote Obama and Republicans very likely to vote Romney, so correcting a demographic imbalance can end up making your poll worthless - if you always correct to nearly 50/50 Dem/Rep, the vote is always going to come out nearly 50/50. Also, people probably switch between a party identification and independent on a whim. If you are a Republican living in San Francisco or NYC and expect Democrats to dominate, you might be more likely to identify as independent so you don't feel like you were on the losing side. Same thing for Democrats in Houston, etc.

Now, when we aggregate polls we have to be careful, because people's opinions change over time, so we might only want to aggregate the most recent polls, or use some model to try to predict how people's opinions change, etc. We can, in principle, make it as complicated as we like, but the underlying idea is very simple.
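
One simple possibility for the recency part - my own toy example, not any particular model's method - is to decay each poll's weight exponentially with its age:

[CODE=python]
# Hypothetical polls: (candidate's lead in points, age of the poll in days).
polls = [(+1.5, 1), (+0.5, 3), (-1.0, 7), (+2.0, 12)]
half_life_days = 5.0   # assumed: a poll's weight halves every 5 days

weights = [0.5 ** (age / half_life_days) for _, age in polls]
recency_weighted_lead = sum(w * lead for (lead, _), w in zip(polls, weights)) / sum(weights)

print(f"recency-weighted lead: {recency_weighted_lead:+.2f} points")
[/CODE]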
 
Last edited:
  • #119
Ah, then just weighting for demographics. I have even more issues trying to divide all humans up between males and females. You're right, I'm overthinking this, but it's hard to see the simplified explanation as being very representative. I'm not arguing that it isn't sufficient; it just strains my intuition.

Anyway, thanks for the discussion, I'm going to shut up now.
 
  • #120
I'm almost hoping that the election results in a Romney-Biden administration (http://news.yahoo.com/romney-biden-administration-could-happen-223736689--abc-news-politics.html). Then, perhaps, the Democrats and Republicans would finally talk to each other.
 
