Tosh5457 said: Edit: I have to think more about this, I'll post later.
mathman said: Assuming all the X's are independent, it would be better to use the characteristic function (the Fourier transform of the distribution). Then the characteristic function for Y is the Nth power of the characteristic function for X. To get the distribution function for Y, take the inverse transform.
Tosh5457 said: Isn't that only for continuous random variables (and not discrete)?
Anyway, shouldn't it be easy by convolution? I think that I'm only missing something very basic...
You can do it for discrete random variables. The density function is a linear combination of Dirac delta functions, so the characteristic function is a linear combination of exponential functions.
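That identity can be checked numerically. Below is a minimal sketch (the outcome values and probabilities are made up for illustration, echoing the thread's "-1 plus some prizes" setup): the characteristic function of Y, computed from a brute-force pmf of the sum, is compared against the Nth power of the characteristic function of X.

```python
import numpy as np
from itertools import product

# Hypothetical discrete distribution; the support need not be integers.
values = np.array([-1.0, 2.5, 7.3])   # outcomes: -1 (loss), w1, w2
probs  = np.array([0.5, 0.3, 0.2])    # their probabilities, summing to 1

def char_fn(t):
    """phi_X(t) = sum_k p_k exp(i t x_k): a linear combination of exponentials."""
    return np.sum(probs * np.exp(1j * t * values))

N = 3  # Y = X1 + X2 + X3, independent copies of X

# Brute-force pmf of Y by enumerating all N-tuples of outcomes.
pmf_Y = {}
for combo in product(range(len(values)), repeat=N):
    y = round(sum(values[i] for i in combo), 9)  # merge float duplicates
    pmf_Y[y] = pmf_Y.get(y, 0.0) + np.prod([probs[i] for i in combo])

# Independence implies phi_Y(t) = phi_X(t)**N.
t = 0.7
phi_from_pmf = sum(p * np.exp(1j * t * y) for y, p in pmf_Y.items())
print(abs(phi_from_pmf - char_fn(t) ** N))  # should be ~0
```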
Tosh5457 said: I'm not comfortable with that; I never even studied the Dirac delta function.
About the convolution, I still don't understand some things...
1st problem:
Definition of convolution for discrete r.v:
[tex]f_{Z}(z)=\sum_{x=-\infty}^{x=+\infty}f_{Y}(z-x)f_{X}(x)[/tex]
where x is an integer.
The problem here is that [tex]f_{X}(x)[/tex] may always be 0 in my example (except for x = -1), since there may be no integer prizes... And if I defined the r.v. X slightly differently (not assigning that -1 value), that function would always be 0 and there would be no probability mass function for Z? Something's not right here...
Tosh5457 said: Ok, I'll write everything again with the pmf included, so everything will be in the same post.
Definition of convolution of pmf's for discrete random variables X and Y:
[tex]f_{Z}(z)=\sum_{x=-\infty}^{x=+\infty}f_{Y}(z-x)f_{X}(x)[/tex]
where x is an integer.
PMF of X and Y (they have the same distribution):
[tex]f_{X}(x)=\begin{cases}a_{1}, & x=-1\\ a_{2}, & x=w_{1}\\ a_{3}, & x=w_{2}\\ \vdots \\ a_{n}, & x=w_{n-1}\end{cases}[/tex]
where w1, w2, ..., wn-1 are real constants.
Problem:
In the series, [tex]f_{X}(x)[/tex] may always be 0 except at x = -1, since w1, w2, ..., wn-1 in general won't be integers.
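One way around the integer-index issue (a sketch of mine, not something stated in the thread): for a discrete r.v. whose support is an arbitrary finite set, run the convolution sum over the support of X rather than over all integers. Representing a pmf as a value-to-probability dictionary makes this direct; the prize values below are hypothetical.

```python
# Convolution of two finite pmfs, summing over the support of X instead of
# over all integers, so non-integer outcomes pose no problem.
def convolve_pmfs(fX, fY):
    fZ = {}
    for x, px in fX.items():
        for y, py in fY.items():
            z = round(x + y, 9)          # merge floating-point duplicates
            fZ[z] = fZ.get(z, 0.0) + px * py
    return fZ

# Hypothetical pmf: lose 1 with prob 0.5, win w1 = 2.5 or w2 = 7.3 otherwise.
fX = {-1.0: 0.5, 2.5: 0.3, 7.3: 0.2}
fZ = convolve_pmfs(fX, fX)               # Z = X1 + X2, independent copies
print(sum(fZ.values()))                  # ~1.0 (up to float rounding)
```

Summing over the support is the same series as before: the terms where f_X(x) = 0 simply contribute nothing, so skipping them changes nothing.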
chiro said: You're dealing with defining the probabilities for Z. If X and Y take the same values with the same corresponding probabilities (remember, you said that X and Y have the same distribution, which implies the same a1, w1, w2, etc.), then you can find the probability distribution using the probabilities first and create the mapping to values afterwards.
Are you wondering about the mapping procedure if your values that are mapped to probabilities are not integers?
Tosh5457 said: Consider the random variable Y, which is the sum of N random variables X:
[tex]Y=X+X+...+X=NX[/tex]
[tex]Y(y)=X(x)+X(x)+...+X(x)=NX(x)[/tex]
But I don't know how to define y... This is why I can't use the convolution theorem, because I didn't really understand this part.
Tosh5457 said: Yes, that's my problem.
The variable y represents a particular value of the random variable Y. It's just like saying "Let W be the random variable that represents the value of the face on a roll of a fair die." W(w) would be the particular event that the face value was w (e.g. W(2) says the face that came up was 2).
To find the probability of a particular value of Y, such as y = $40, you must sum the probabilities of all combinations of values xi for the Xi which add up to $40. So you consider all possible values of the xi that meet that requirement. That is essentially what "convolving" random variables means.
Tosh5457 said: Thanks for the replies.
Sum the probabilities? Don't you mean multiply? If I sum the probabilities, and if I understood right, I'll get probabilities higher than 1 in some cases.
For example, Y = w1 + w1 + w1 + ... + w1 = N*w1 is a possibility for Y. If I sum the probabilities of X=w1 N times I can easily get a number higher than 1.
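The distinction the thread is circling: probabilities multiply within one combination (because the draws are independent) and are summed across the different combinations that hit the same target, so the result never exceeds 1. A few lines make the structure explicit; the prize values and N below are made up.

```python
from itertools import product

# Hypothetical prizes (in dollars) and their probabilities.
fX = {10: 0.5, 20: 0.3, 40: 0.2}
N = 2                                 # Y = X1 + X2

def prob_Y_equals(y):
    """P(Y = y): product of probabilities WITHIN each combination,
    summed ACROSS every combination of outcomes that adds up to y."""
    total = 0.0
    for combo in product(fX, repeat=N):
        if sum(combo) == y:
            p = 1.0
            for x in combo:
                p *= fX[x]            # multiply within a combination
            total += p                # sum across combinations
    return total

print(prob_Y_equals(40))  # only (20, 20) qualifies: 0.3 * 0.3 = 0.09
```

Summing 0.3 + 0.3 instead of multiplying is what produces the "higher than 1" numbers in the worry above.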
A discrete random variable is a type of random variable that can only take on a finite or countably infinite set of values. These values are often, though not necessarily, whole numbers, and are typically the result of counting a certain event or phenomenon.
To calculate the distribution of the sum of two discrete random variables, you first need to determine the possible values of the sum by adding all possible combinations of values from the two variables. Then, you can calculate the probability of each sum by multiplying the probabilities of the corresponding values from each variable and adding up the products over every combination that produces the same total (this is the discrete convolution). Finally, you can create a table or graph to display the distribution of the sum.
The distribution of a single discrete random variable shows the probability of each possible value that the variable can take on. On the other hand, the distribution of the sum of two discrete random variables shows the probability of each possible sum that can result from adding values from the two variables.
Yes, the distribution of the sum of discrete random variables can often be approximated by a continuous distribution, such as the normal distribution, when the number of variables is large enough and the probabilities are not too close to 0 or 1. This is known as the Central Limit Theorem.
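A quick Monte Carlo sketch of that approximation (the distribution, N, and sample count are chosen arbitrarily): the sum of many iid copies of a small discrete r.v. has mean and spread matching the normal fit, and roughly half its mass below the mean, as the Central Limit Theorem predicts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete r.v.: values and probabilities are made up.
values = np.array([0.0, 1.0, 3.0])
probs  = np.array([0.6, 0.3, 0.1])
N = 300                                   # number of summed copies

# 20,000 samples of Y = X1 + ... + XN.
samples = rng.choice(values, size=(20_000, N), p=probs).sum(axis=1)

# Theoretical mean and standard deviation of the sum.
mu = N * (values @ probs)
sigma = np.sqrt(N * ((values**2) @ probs - (values @ probs) ** 2))

print(samples.mean(), mu)                 # close to each other
print(samples.std(), sigma)               # close to each other
print(np.mean(samples <= mu))             # close to 0.5, as for a normal
```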
The distribution of the sum of discrete random variables is used in many real-world applications, such as in finance, engineering, and statistics. For example, it can be used to model the total sales of a company, the amount of rainfall in a certain area, or the duration of a phone call. By understanding the distribution of the sum, scientists and researchers can make predictions and informed decisions in various fields.