Probability Question about Error Detection

AI Thread Summary
The discussion revolves around the effectiveness of a double parity check system compared to a single parity check in error detection within binary strings. The original question challenges the TA's solution, which states the probability of failure in detecting errors with two parity bits is 1 - p^2, suggesting it is greater than the failure probability of a single parity check. Participants clarify that while the double parity system can detect more errors, the increased length of the bitstream also raises the likelihood of undetected errors. The conversation emphasizes understanding the distinction between the probability of detection failure and the actual number of errors detected. Ultimately, the double parity check system is acknowledged to be more robust in error detection despite the increased chance of failure due to the longer checks.
AngelofMusic
This probability question came up in our networking tutorial today, and I'm not entirely happy with the TA's solution, so I was hoping someone here could help out.

Background: A single parity bit is added to every n bits of a binary string of length 2n. In our class, we choose the parity bit so that there is always an even number of 1's in the resulting n+1 bits. So in this case, we add two parity bits, one per n-bit block, each set to 0 or 1 so that its block has an even number of 1's. A single parity check can only detect an odd number of errors; if an even number of bits are flipped, the check fails to detect it.
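
For concreteness, here's how such a parity bit could be computed for one block (a minimal Python sketch; the function name and sample data are my own):

Code:
def even_parity_bit(bits):
    """Bit that makes the total number of 1s in block + parity even."""
    return sum(bits) % 2

# [1, 0, 1, 1] has three 1s, so the parity bit is 1 and the transmitted
# block [1, 0, 1, 1, 1] has an even number (four) of 1s.
print(even_parity_bit([1, 0, 1, 1]))   # -> 1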

Question: Given p = the probability that a single parity bit successfully detects an error within n bits, what is the probability of the new scheme (with two parity bits added to 2n bits) failing to detect an error?

TA's Solution: P[failure in 2n] = 1 - p^2

I don't think this makes any sense, because 1 - p^2 = (1+p)(1-p) \geq (1-p), which means this calculated probability of failure is greater than the probability of failure of the single parity check. And to me, logically, that doesn't make much sense in context.

My own thoughts: If we divide the 2n bits into two parts consisting of n bits each, then we can have these kinds of errors.

Let (# of errors in first n bits, # of errors in second n bits). Then we have the following cases (a quick brute-force check follows the list):
(even # errors, even # errors) --> undetectable
(even # errors, odd # errors) --> detectable
(odd # errors, even # errors) --> detectable
(odd # errors, odd # errors) --> detectable
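
To sanity-check that table, here's a quick brute-force sketch in Python (the function names and sample data are my own; only the number of flipped bits matters for parity, and the (0 errors, 0 errors) row is the trivial no-error case):

Code:
def even_parity_bit(bits):
    """Parity bit that makes the total number of 1s (data + parity) even."""
    return sum(bits) % 2

def goes_undetected(block, num_flips):
    """True if flipping num_flips bits slips past the block's parity check."""
    sent = even_parity_bit(block)
    corrupted = block[:]
    for i in range(num_flips):
        corrupted[i] ^= 1                 # flip bit i
    return even_parity_bit(corrupted) == sent

block = [1, 0, 1, 1]                      # arbitrary sample data
for e1 in range(len(block) + 1):
    for e2 in range(len(block) + 1):
        missed = goes_undetected(block, e1) and goes_undetected(block, e2)
        print(f"({e1} errors, {e2} errors) -> "
              + ("undetectable" if missed else "detectable"))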

And this is sort of where I become unsure of what to do. I keep thinking something along the lines of: of all the undetectable errors under a single parity check (the first 3 cases), the two parity bits fail on only one of them. So maybe P[failure in 2n] = 1/3 * P[failure of single-bit parity check] = (1/3)(1-p).

I don't think that's right, though. Can anyone help nudge me in the right direction? This question has been driving me insane for a week and I was hoping we could solve it in tutorial, but I still don't understand. It seems that it shouldn't be all that difficult, and I'm probably just missing something simple!
 
AngelofMusic said:
I don't think this makes any sense, because 1 - p^2 = (1+p)(1-p) \geq (1-p), which means this calculated probability of failure is greater than the probability of failure of the single parity check. And to me, logically, that doesn't make much sense in context.

Okay, logically to me, that statement makes sense.

If the probability of the check succeeding on n bits is p, then the probability of it succeeding on 2n bits is p^2, assuming the two n-bit blocks succeed or fail independently.

The probability of not doing it right in 2n bits is then 1 - p^2, which is GREATER than the probability of not doing it right for n bits. The longer your bitstream, or the longer you keep running your parity check, the greater the chance that at some point something is bound to go wrong. Does that not make sense?
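
To see this numerically, here's a quick Monte Carlo sketch (my own illustration, treating the two n-bit checks as independent events that each succeed with probability p; p = 0.9 is just an example value):

Code:
import random

p = 0.9                 # assumed per-block success probability (illustrative)
trials = 100_000

# A 2n-bit transmission "does it right" only if BOTH block checks succeed.
failures = sum(
    1 for _ in range(trials)
    if not (random.random() < p and random.random() < p)
)
print(failures / trials)      # empirically ~ 1 - p**2 = 0.19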
 
Yeah, on some levels, that does sort of make sense. But it's not really calculating the probability of error, but the probability of the parity check failing to detect an error.

I keep thinking that this double parity check (not sure if this is the correct terminology) should be able to detect *more* errors than the simple single parity check. Am I wrong?
 
AngelofMusic said:
I keep thinking that this double parity check (not sure if this is the correct terminology) should be able to detect *more* errors than the simple single parity check. Am I wrong?

It WILL detect more errors! I think you're a bit confused by the probability concept here. The 1 - p^2 is the probability that the check will fail, not the number of errors it fails to detect. You're quite right that the two-parity-check system will detect more errors than any single check, or at least as many, since its detections are summed over both blocks. But let's look at it this way:

You have a chance of 10% to fail the exam.
Your friend has a chance of 10% to fail the exam.
1000 other people in your class each have a chance of 10% of failing the exam.

Now intuitively, if you took the exam alone, there would be only a 10 percent chance of failing, right? That's pretty low; I'd bet on you not failing. But if the entire group of 1002 students took the exam, do you believe me when I say someone is going to fail? Apply the same logic to the parity checker.
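
Putting numbers on the analogy (a quick sketch, assuming the students fail independently):

Code:
p_fail = 0.1
students = 1002

# Probability that every single student passes:
all_pass = (1 - p_fail) ** students
print(all_pass)               # ~ 1.4e-46
print(1 - all_pass)           # P(at least one fails): effectively 1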
 
Ah, okay. So in terms of number of errors, the 2 parity check system will detect more of them. But because there are more errors, a greater percentage of them will go undetected?
 
Something like that. The checking period is prolonged: there's more to check, so yes, there will be more errors, and if you keep doing something long enough, you're more likely to fail eventually. I think you've got the hang of it =D
 
Thank you so much for your help! :D
 
Oh, you're welcome. Glad to be of help =)
 
