Coding theory - binary symmetric channel

coverband
Hi

Let us suppose we transmit the binary digit '1'. The probability of receiving it incorrectly (i.e., receiving '0') is p, so the probability of receiving '1' is 1-p. Now suppose we send a codeword of length n. The probability that the whole codeword is received correctly is (1-p)^n.
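As a quick sanity check (my own sketch, with an illustrative crossover probability p = 0.1 and n = 3), we can enumerate all 2^n error patterns of the channel: each pattern's probability is p raised to the number of flipped digits times (1-p) raised to the number of correct digits. The pattern with no flips has probability (1-p)^n, and all patterns together must sum to 1.

```python
from itertools import product

p = 0.1   # example crossover probability (illustrative value)
n = 3     # codeword length

# Each error pattern is a tuple of 0s (digit correct) and 1s (digit flipped).
# Its probability is p^(number of flips) * (1-p)^(number of correct digits).
total = 0.0
for pattern in product([0, 1], repeat=n):
    k = sum(pattern)                      # number of flipped digits
    total += p**k * (1 - p)**(n - k)

print(total)               # all patterns together have probability 1
print((1 - p)**n)          # probability of the all-correct pattern, here 0.9^3 = 0.729
```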

Now I don't understand this next statement: The probability that one error will occur in a specified position is p(1-p)^(n-1).

Taking an example, let's say we transmit the code 000. The probability of receiving this code without error is (1-p)^3 [fine] + 3p(1-p)^2 [What!?]
 
"The probability that one error will occur in a specified position is p(1-p)^(n-1). "

This means that, for example, the probability that an error occurs in the first digit while digits 2 through n are all received correctly is p(1-p)^(n-1). The position must be specified in advance for this formula to hold. In your 000 example, the single error could fall in any of the 3 positions, so the probability of exactly one error somewhere is 3p(1-p)^2; adding (1-p)^3 gives the probability of at most one error, which is what your book is computing.
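To check this numerically (again my own sketch with illustrative values p = 0.1, n = 3): the probability of an error in one specified position is p(1-p)^(n-1), and summing the probabilities of all patterns with exactly one flip gives n times that value.

```python
from itertools import product

p = 0.1   # example crossover probability (illustrative value)
n = 3     # codeword length

# Exactly one pattern has the error in a specified position (say, position 1)
# and all other digits correct; its probability is p * (1-p)^(n-1).
prob_specified = p * (1 - p)**(n - 1)

# Probability of exactly one error in ANY position: sum over the n patterns
# that have a single flipped digit.
prob_any_single = sum(
    p**sum(pat) * (1 - p)**(n - sum(pat))
    for pat in product([0, 1], repeat=n)
    if sum(pat) == 1
)

print(prob_specified)          # ~0.081 for p = 0.1, n = 3
print(prob_any_single)         # ~0.243, i.e. 3 * 0.081
```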
 
