What is the Expected Number of Retransmissions?

In summary, the thread discusses error-free transmission over a communication channel where each transmission succeeds with probability p = 0.9, and retransmissions are initiated until a correct transmission occurs. The probability that no retransmissions are required is 0.9, the probability of exactly two retransmissions is (0.1)^2(0.9) = 0.009, the expected number of retransmissions is 1/9 ≈ 0.1111, and the variance is 10/81 ≈ 0.123457. The relevant distribution is the geometric, i.e. the r = 1 case of the negative binomial distribution (a sum of independent, identically distributed geometrics).
  • #1
lina29

Homework Statement


Assume that the probability of error-free transmission of a message over a communication channel is p=0.9. If a message is not transmitted correctly, a retransmission is initiated. This procedure is repeated until a correct transmission occurs. Such a channel is often called a feedback channel. Assuming that successive transmissions are independent,

A- What is the probability that no retransmissions are required? (Note: Number of retransmissions = Number of transmissions - 1.)
B- What is the probability that exactly two retransmissions are required?
C- What is the expected number of retransmissions?
D- What is the variance of the number of retransmissions?


Homework Equations


A-.9
C-.1


The Attempt at a Solution


B- (1-p)(1-p) = (.1)(.1) = .01, which was wrong
D- Std(X) = sqrt(np(1-p)) = .1, and the variance is the std squared, so Var(X) = .01, which was also wrong

What did I miss?
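A quick Monte Carlo sketch of this setup (Python; the helper name `simulate_retransmissions` is invented for illustration) gives numerical estimates to check the four answers against:

```python
import random

def simulate_retransmissions(p_success=0.9):
    """Send one message over the feedback channel; return the number of retransmissions."""
    retries = 0
    while random.random() > p_success:  # transmission fails with probability 1 - p_success
        retries += 1
    return retries

samples = [simulate_retransmissions() for _ in range(1_000_000)]
n = len(samples)
mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / n

print("P(no retransmissions) ~", samples.count(0) / n)  # should be near 0.9
print("P(exactly two)        ~", samples.count(2) / n)  # should be near 0.009
print("E[retransmissions]    ~", mean)                  # should be near 1/9
print("Var[retransmissions]  ~", var)                   # should be near 10/81
```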
 
  • #2
Well, for B you need to send the message three times. The first two have to fail and the third one has to succeed.
 
  • #3
so (.1)(.1)(.9)=.009?
What would I do for D?
 
  • #4
lina29 said:
so (.1)(.1)(.9)=.009?
What would I do for D?

Where did you get your formula for Std? What's n? I don't think your mean is correct either. How did you get that?
 
  • #5
I got the formula from my notes for the binomial distribution. However, if it's wrong I can use another formula. I don't know n, but the expected value is .1 (which was right) and it is also equal to np. Is there another approach I should take?
 
  • #6
lina29 said:
I got the formula from my notes for the binomial distribution. However, if it's wrong I can use another formula. I don't know n, but the expected value is .1 (which was right) and it is also equal to np. Is there another approach I should take?

I think this is actually called a Negative Binomial Distribution. It's like a binomial distribution but the only events that are allowed are a series of failures followed by a success. Anything in your notes on that?
 
  • #7
nope. How would I do that?
 
  • #8
lina29 said:
nope. How would I do that?

Either look it up or try and work it out from scratch. For example, I get that the expected number of retransmissions is 0*(.9)+1*(.1)*.9+2*(.1)^2*(.9)+3*(.1)^3*(.9)+...
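As a sketch of that sum (Python; truncating the series once the terms are negligible), the partial sums settle on 1/9:

```python
p, q = 0.9, 0.1
# E[retransmissions] = sum over n >= 0 of n * P(n retransmissions), with P(n) = q^n * p
expected = sum(n * q**n * p for n in range(200))  # terms beyond n = 200 are negligible
print(expected)  # 0.11111... = 1/9
```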
 
  • #9
I looked it up and it said I would need to know the number of trials and the number of successes. How did you find that? Also, this would give the expected number. How would I go from there to the variance?
 
  • #10
lina29 said:
I looked it up and it said I would need to know the number of trials and the number of successes. How did you find that? Also, this would give the expected number. How would I go from there to the variance?

If n is the number of retransmissions and p(n) is the probability of n retransmissions, you find the expected number by summing n*p(n). You did p(2) in exercise B. You should be able to find that case in the sum. You would find the variance by summing n^2*p(n) and then subtracting the expected number squared. If this is a formula-based course and they haven't taught you how to sum things like this, then I think you are missing some formulas.
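A short numerical sketch of this recipe (Python; the truncation point N is an assumption, chosen so the geometric tail is negligible):

```python
p, q = 0.9, 0.1

def pmf(n):
    """P(exactly n retransmissions): n failures, then one success."""
    return q**n * p

N = 200  # truncation point; q^N is astronomically small here
mean = sum(n * pmf(n) for n in range(N))
second_moment = sum(n**2 * pmf(n) for n in range(N))
print(mean)                     # 1/9 = 0.1111...
print(second_moment - mean**2)  # 10/81 = 0.123456790...
```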
 
  • #11
This isn't a formula-based course, so we only learn basic formulas. Since I got the expected number previously as .1, I got n = 1/9 and then from there I found the variance to be .001111. Would that be right?
 
  • #12
lina29 said:
This isn't a formula-based course, so we only learn basic formulas. Since I got the expected number previously as .1, I got n = 1/9 and then from there I found the variance to be .001111. Would that be right?

I think there is something messed up with this question. I get the expected number to be 1/9 = 0.11111... There's something wrong if it's saying .1 is correct. If you want to look at some formulas, go to http://en.wikipedia.org/wiki/Negative_binomial_distribution
There are formulas for the mean and variance on the right. You want to use r = 1 (since only one success is required) and p = .1. What do you get?
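Plugging r = 1 and p = .1 into those closed forms (a sketch using exact fractions; rp/(1-p) for the mean and rp/(1-p)^2 for the variance, in the parameterization used in this thread):

```python
from fractions import Fraction

p = Fraction(1, 10)  # the "p" of the negative binomial formulas quoted in the thread
r = 1                # only one success is required
mean = r * p / (1 - p)         # rp/(1-p)
variance = r * p / (1 - p)**2  # rp/(1-p)^2
print(mean, variance)          # 1/9 and 10/81
```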
 
  • #13
It might be saying .1 is correct since 1/9 is .1 repeating. I got the variance to be .123456 using the formula pr/((1-p)^2).
 
  • #14
lina29 said:
It might be saying .1 is correct since 1/9 is .1 repeating. I got the variance to be .123456 using the formula pr/((1-p)^2).

Well, I got 10/81. But that's .123456 to six decimal places. So yes, it's kind of funny those formulae weren't in your notes.
 
  • #15
thank you so much!
 
  • #16
Dick said:
I think this is actually called a Negative Binomial Distribution. It's like a binomial distribution but the only events that are allowed are a series of failures followed by a success. Anything in your notes on that?

Your answer is misleading: the distribution is the Geometric, not the negative binomial (although, of course, the geometric is a special case of the negative binomial!). Geometric = number of trials until the first success (variant: number before the first success). Negative binomial = sum of independent, identically-distributed geometrics. It is called the negative binomial because its distribution involves binomial coefficients with appropriate negative arguments.

RGV
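A concrete check of this relationship (a sketch using scipy's conventions, where `geom` counts trials up to the first success and `nbinom` counts failures before the r-th success):

```python
from scipy import stats

p = 0.9  # probability a transmission succeeds
for k in range(5):
    nb = stats.nbinom.pmf(k, 1, p)  # P(k failures before the 1st success)
    geo = stats.geom.pmf(k + 1, p)  # P(first success on trial k + 1)
    print(k, nb, geo)               # both columns equal (1 - p)^k * p
```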
 
  • #17
Ok, I give up. I'm not an expert at this, but isn't the negative binomial a distribution where you count failures with constant probability until a fixed number of successes (or vice versa)? I just summed the series by hand and checked that they matched the numbers quoted for the negative binomial, to the best of my understanding. I didn't need to invoke any negative numbers in the binomial coefficients because I didn't do it that way. What do you think is wrong? Doesn't the geometric just tell you about the number of successes versus the number of failures, without any specific stopping condition except the total number of trials? If not, I'll stop trying to answer questions like this.
 
  • #18
Dick said:
Ok, I give up. I'm not an expert at this, but isn't the negative binomial a distribution where you count failures with constant probability until a fixed number of successes (or vice versa)? I just summed the series by hand and checked that they matched the numbers quoted for the negative binomial, to the best of my understanding. I didn't need to invoke any negative numbers in the binomial coefficients because I didn't do it that way. What do you think is wrong? Doesn't the geometric just tell you about the number of successes versus the number of failures, without any specific stopping condition except the total number of trials? If not, I'll stop trying to answer questions like this.

The geometric distribution is the number of trials until the first success (or first failure) in Bernoulli trials. So, for example, asking for the probability that the number of tests until failure be at least n is the same as asking for the probability that the geometric is > n-1. A negative binomial is the sum of geometrics, so would be used, for example, when asking about the number of trials needed until detection of the 4th failure: that would be the sum of 4 geometrics. The geometric distribution with parameter p is [itex]P\{X=k\} = p q^{k-1} \text{ for } k = 1, 2, \ldots,[/itex] where q = 1-p. A sum of n geometric random variables has distribution
[tex]P\{X=k\}= (-1)^{k-n} {-n\choose k-n} p^n q^{k-n} = {k-1 \choose k-n} p^n q^{k-n}, \; k = n, n+1, \ldots. [/tex] This does involve the "negative binomial" C(-n,k-n).

RGV
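A numerical sketch of that identity (the helper `gbinom` is an ad-hoc generalized binomial coefficient, allowing a negative upper argument):

```python
from math import comb, prod
from fractions import Fraction

def gbinom(a, j):
    """Generalized binomial coefficient C(a, j) for integer a, possibly negative."""
    return Fraction(prod(a - i for i in range(j)), prod(range(1, j + 1)))

n = 4  # number of geometrics being summed
for k in range(n, n + 6):
    lhs = (-1) ** (k - n) * gbinom(-n, k - n)  # (-1)^(k-n) C(-n, k-n)
    rhs = comb(k - 1, k - n)                   # C(k-1, k-n)
    print(k, lhs, rhs)                         # the two columns agree for every k
```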
 
  • #19
Ray Vickson said:
The geometric distribution is the number of trials until the first success (or first failure) in Bernoulli trials. So, for example, asking for the probability that the number of tests until failure be at least n is the same as asking for the probability that the geometric is > n-1. A negative binomial is the sum of geometrics, so would be used, for example, when asking about the number of trials needed until detection of the 4th failure: that would be the sum of 4 geometrics. The geometric distribution with parameter p is [itex]P\{X=k\} = p q^{k-1} \text{ for } k = 1, 2, \ldots,[/itex] where q = 1-p. A sum of n geometric random variables has distribution
[tex]P\{X=k\}= (-1)^{k-n} {-n\choose k-n} p^n q^{k-n} = \frac{n(n+1) \cdots (k-1)}{(k-n)!} p^n q^{k-n}, \; k = n, n+1, \ldots. [/tex] This does involve the "negative binomial" C(-n,k-n).

RGV

Ok, so I just picked an overly general case? I wasn't actually sure what it was called and picked the first distribution I found that seemed to fit. Thanks!
 
  • #20
Looked all okay to me. I am stuck on proving why the variance is pr/(1-p)^2.

A. 0.9
B. 0.009
C. 1/9
D. 10/81

No, I guess D is the other one.
 
  • #21
MarcoD said:
Looked all okay to me. I am stuck on proving why the variance is pr/(1-p)^2.

A. 0.9
B. 0.009
C. 1/9
D. 10/81

Ray Vickson is right that I should have called it a geometric distribution. You can also use the r=1 case of the negative binomial, but that makes it seem more complicated than it needs to be. Did you prove the expectation value is p/(1-p)?
 
  • #22
Dick said:
Ray Vickson is right that I should have called it a geometric distribution. You can also use the r=1 case of the negative binomial, but that makes it seem more complicated than it needs to be. Did you prove the expectation value is p/(1-p)?

Okay, it's this one.

Well, 'prove' is a big word. I tried something and it worked; it's a bit sloppy. I am now more interested in why the variance would also have a simple proof.
 
  • #23
MarcoD said:
Looked all okay to me. I am stuck on proving why the variance is pr/(1-p)^2.

A. 0.9
B. 0.009
C. 1/9
D. 10/81

No, I guess D is the other one.

If you can get the variance for r = 1, you can get it for general r, using the fact that the variance of a sum of independent random variables is the sum of their variances. For the case r = 1 you can use Var(X) = E(X^2) - (EX)^2, or use a probability generating function.

BTW: you need to be careful: there are essentially two versions of the Geometric distribution, one for the number of Bernoulli trials up to the first success, and the other for the number of Bernoulli trials *before* the first success. The USUAL one is the first one: X = number of trials up to the first success, with distribution [itex] P\{X=k\} = p q^{k-1}, \; k=1,2,\ldots,[/itex] whose mean is [itex] EX = 1/p[/itex] and variance is [itex]VX = q/p^2\; (q = 1-p)[/itex]. The other (rarer) one is for Y = number of trials before the first success, with distribution [itex] P\{Y=k\} = p q^k,\; k = 0, 1, \ldots,[/itex] whose mean is [itex]EY = q/p[/itex] and variance is [itex]VY = q/p^2 = VX.[/itex]

The easiest way to get the variance in either case is to use the probability generating function (PGF): the PGF of X is [itex]G(z) = E[z^X] = pz/(1-qz), \; (q = 1-p), [/itex] from which we can get EX as [itex]G'(1)[/itex] and VX as [itex] VX = G''(1) + EX - (EX)^2.[/itex]

RGV
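A sketch of that computation with sympy (symbolic differentiation of the PGF; the notation follows the post above):

```python
import sympy as sp

z, p = sp.symbols('z p', positive=True)
q = 1 - p
G = p * z / (1 - q * z)        # PGF of X = number of trials up to the first success
EX = sp.diff(G, z).subs(z, 1)  # G'(1)
VX = sp.diff(G, z, 2).subs(z, 1) + EX - EX**2
print(sp.simplify(EX))         # 1/p
print(sp.simplify(VX))         # (1 - p)/p**2, i.e. q/p^2
```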
 
  • #25
MarcoD said:
Okay, it's this one.

Well, 'prove' is a big word. I tried something and that worked; it's a bit sloppy. I am now more interested why the variance would also have a simple proof.

'Proving' it would involve summing some infinite series. The Wikipedia article does the example of finding the expectation [itex]E[Y][/itex]. The first step to computing the variance would be finding [itex]E[Y^2][/itex].
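For completeness, a sketch of those sums (using the thread's convention that Y is the number of retransmissions, with q = 1 - p):

[tex]E[Y] = \sum_{k=0}^{\infty} k\, p q^k = pq \sum_{k=1}^{\infty} k q^{k-1} = \frac{pq}{(1-q)^2} = \frac{q}{p}, \qquad E[Y^2] = \sum_{k=0}^{\infty} k^2 p q^k = \frac{pq(1+q)}{(1-q)^3} = \frac{q(1+q)}{p^2},[/tex]
[tex]\operatorname{Var}(Y) = E[Y^2] - (E[Y])^2 = \frac{q(1+q)}{p^2} - \frac{q^2}{p^2} = \frac{q}{p^2}.[/tex]
With p = 0.9 and q = 0.1 this gives Var(Y) = 0.1/0.81 = 10/81, matching the value found above.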
 

