Agent Smith said:
There's a difference between confidence and probability and I don't know what that is.
Say you've got a jar with 1,000,000 red and blue balls in it and you want to estimate how many red balls there are (analogous to a naive random-sampled poll). You draw 100 balls, of which ten are red. You can generate an estimate of the number of red balls and 95% confidence limits. I repeat the experiment, but in my case ninety are red. Again I can generate an estimate of the number of red balls and 95% confidence limits. Assuming we're both playing fairly, at least one of us has got really (un)lucky in our drawing, but (assuming there are at least ninety balls of each colour in the jar) neither result is impossible, and the freakishness of (one of) the results won't become obvious until we compare notes. Your set of limits and mine won't overlap, so if we were to interpret them as "95% probability that the true number of red balls is in these limits" we've got 190% probability accounted for. So they can't be probabilities - at least not naively as I'm stating it.
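To make the arithmetic concrete, here's a minimal sketch of those two draws. The interval choice (a normal-approximation Wald interval) and the scaling up to the jar are my assumptions purely for illustration, not part of the original setup:

```python
import math

N_BALLS = 1_000_000   # total balls in the jar
N_DRAWN = 100         # balls drawn in each experiment

def wald_ci(red_drawn, n=N_DRAWN, z=1.96):
    """Estimate and 95% Wald interval for the red count, scaled to the jar."""
    p_hat = red_drawn / n
    half = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat * N_BALLS, (p_hat - half) * N_BALLS, (p_hat + half) * N_BALLS

for red in (10, 90):                       # your draw and mine
    est, lo, hi = wald_ci(red)
    print(f"{red}/{N_DRAWN} red: estimate {est:,.0f}, 95% CI ({lo:,.0f}, {hi:,.0f})")
# 10/100 red: estimate 100,000, 95% CI (41,200, 158,800)
# 90/100 red: estimate 900,000, 95% CI (841,200, 958,800)  -- no overlap
```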
As @Dale says, in the frequentist interpretation of confidence intervals all we're saying is that if we repeated the experiment many times, 95% of the computed confidence intervals would contain the actual number of red balls. In the Bayesian interpretation, I'm saying that there is a 95% chance that the real value lies within my calculated range *given what my data is saying about it*, and you're saying the same *given what your data says*. We can't just add two conditional probabilities, so we can't get to 190% with the italicised caveats properly stated.
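The frequentist reading is easy to check by simulation: fix a true proportion of red balls (0.3 below, an arbitrary choice for the demo), repeat the 100-ball draw many times, and count how often the computed 95% interval captures that true value.

```python
import math
import random

N_DRAWN = 100                      # balls drawn in each simulated experiment
TRUE_P, TRIALS = 0.3, 10_000       # assumed true proportion; number of repeats

covered = 0
for _ in range(TRIALS):
    red = sum(random.random() < TRUE_P for _ in range(N_DRAWN))
    p_hat = red / N_DRAWN
    half = 1.96 * math.sqrt(p_hat * (1 - p_hat) / N_DRAWN)
    covered += (p_hat - half) <= TRUE_P <= (p_hat + half)

print(f"{covered / TRIALS:.1%} of intervals contained the true value")  # roughly 95%
```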
We can, of course, pool our data and come up with a combined answer, or do some post hoc meta-analysis of our separate results. I believe the second is what a lot of political commentators do to combine polls from different sources to make election predictions, with all sorts of weightings and offsets to allow for their beliefs about biases or inadequacies in the input polls.
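As a rough sketch of the pooling option (not of the weighted meta-analysis the commentators do), we could simply treat the two draws as one sample of 200 balls, of which 100 were red; whether pooling is sensible after seeing such discordant results is a separate question.

```python
import math

N_BALLS = 1_000_000
red, drawn = 10 + 90, 100 + 100    # pooled counts from both draws

p_hat = red / drawn
half = 1.96 * math.sqrt(p_hat * (1 - p_hat) / drawn)
print(f"Pooled estimate: {p_hat * N_BALLS:,.0f} red balls, "
      f"95% CI ({(p_hat - half) * N_BALLS:,.0f}, {(p_hat + half) * N_BALLS:,.0f})")
# Pooled estimate: 500,000 red balls, 95% CI roughly (430,700, 569,300)
```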