Discrete distribution taking only non-negative integer values

AI Thread Summary
The discussion focuses on understanding the proof related to discrete distributions in probability theory, specifically how summations are manipulated. Participants clarify that the first line of the proof expresses the relationship between probabilities and inequalities, where P(X ≥ i) is expanded into a summation. They also explain that interchanging the order of summation is valid under certain conditions, emphasizing the importance of rigor when dealing with infinite sums. Additionally, there is a suggestion to start from the expected value definition for clarity, though it may complicate readability. Overall, the conversation highlights common confusions in probability notation and the nuances of mathematical proofs.
rickywaldron
I can't seem to wrap my head around the types of sums used in probability theory, and here is a classic example. Section 6.1 of this article:
http://en.wikipedia.org/wiki/Expect...ution_taking_only_non-negative_integer_values

The first line of the proof: what is going on here? I know how summation works, but I can't see the relation between the LHS and the RHS.

Then, in the last step, I can't see how the second (inner) summation goes away and is just replaced by a j.
I always get confused by this notation but when I understand it intuitively I am much more comfortable.

Thanks
 
In the first line of the proof, they are expanding
$$P(X \ge i) = \sum_{j = i}^\infty P(X = j),$$
which is basically just writing out what the inequality sign means (if X is greater than or equal to i, then it is equal to i, or i + 1, or i + 2, etc.).
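Substituting this into the identity being proved, the first line turns into a double sum (a sketch of the step, not a verbatim quote of the article):
$$\sum_{i=1}^\infty P(X \ge i) = \sum_{i=1}^\infty \sum_{j=i}^\infty P(X = j).$$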

Then they interchange the order of summation and note that the summand no longer depends on the inner summation variable i, so they can use
$$\sum_{i = 1}^j 1 = 1 + 1 + \cdots + 1 \ (j \text{ times}) = j.$$
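Written out, the remaining steps are then (again a sketch of the standard argument):
$$\sum_{i=1}^\infty \sum_{j=i}^\infty P(X = j) = \sum_{j=1}^\infty \sum_{i=1}^{j} P(X = j) = \sum_{j=1}^\infty P(X = j) \sum_{i=1}^{j} 1 = \sum_{j=1}^\infty j \, P(X = j) = E[X].$$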
 
Thanks, really good response
 
In fact, looking at the proof, I would probably put it the other way around, starting from the definition ##E[X] = \sum_j j \, P(X = j)##. This perhaps makes the proof a bit less readable, and the tricks described above seem to come out of thin air even more than they do now, but that is quite typical in mathematical proofs, I think.
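The key trick when going in that direction is to write the factor j itself as a sum of j ones (a sketch of what such a reversed proof might look like, not an exact write-up from the thread):
$$E[X] = \sum_{j=1}^\infty j \, P(X = j) = \sum_{j=1}^\infty \left( \sum_{i=1}^{j} 1 \right) P(X = j) = \sum_{j=1}^\infty \sum_{i=1}^{j} P(X = j),$$
and then interchanging the order of summation recovers ##\sum_{i=1}^\infty \sum_{j=i}^\infty P(X = j) = \sum_{i=1}^\infty P(X \ge i)##.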

Also, if you want to be very rigorous: when dealing with infinite sums it is generally not allowed to interchange the order of summation without additional restrictions on the summand. So when handing this in as an exercise for a class, you would want to elaborate a bit on that, I guess.
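For what it's worth, in this particular proof the interchange is justified: all the terms ##P(X = j)## are non-negative, and for non-negative summands a double series may always be summed in either order (Tonelli's theorem for series), with both orders giving the same value, possibly ##+\infty##:
$$a_{ij} \ge 0 \;\Rightarrow\; \sum_{i=1}^\infty \sum_{j=1}^\infty a_{ij} = \sum_{j=1}^\infty \sum_{i=1}^\infty a_{ij}.$$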
 
@CompuChip Could you please explain how the order of summation was interchanged? I can't see how the upper limit of i changed from infinity to j. Thanks.
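One way to see where the new limits come from: both double sums run over exactly the same set of index pairs, namely all ##(i, j)## with ##1 \le i \le j < \infty##. For a fixed i, j runs from i to infinity; for a fixed j, i runs from 1 to j. Hence
$$\sum_{i=1}^\infty \sum_{j=i}^\infty P(X = j) = \sum_{1 \le i \le j < \infty} P(X = j) = \sum_{j=1}^\infty \sum_{i=1}^{j} P(X = j).$$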
 