Probability Question about Error Detection

Discussion Overview

The discussion revolves around a probability question related to error detection in networking, specifically focusing on the effectiveness of a double parity check system compared to a single parity check. Participants explore the implications of adding two parity bits to a binary string and how this affects the detection of errors in the data transmission process.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants question the TA's solution that the probability of failure in the double parity check system is given by P[failure in 2n] = 1 - p^2, arguing that it seems illogical for this probability to exceed that of the single parity check.
  • One participant suggests that if the probability of succeeding in n bits is p, then the probability of succeeding in 2n bits is p^2, leading to a higher probability of failure in the longer bitstream.
  • Another participant emphasizes that the discussion is about the probability of the parity check failing to detect an error, not just the probability of error itself.
  • Some participants assert that the double parity check should be able to detect more errors than the single parity check, although they express confusion over the probability concepts involved.
  • A participant illustrates the concept using an analogy about exam failure rates, suggesting that while individual checks may have low failure rates, the cumulative effect in a larger system increases the likelihood of undetected errors.
  • There is a recognition that while the double parity check system can detect more errors, the increased number of checks may also lead to a higher percentage of undetected errors.

Areas of Agreement / Disagreement

Participants express differing views on the implications of the double parity check system, with some agreeing that it should detect more errors while others remain uncertain about the relationship between the number of checks and the probability of failure to detect errors. The discussion does not reach a consensus on the correct interpretation of the probabilities involved.

Contextual Notes

Participants highlight the complexity of calculating probabilities in the context of error detection, noting that assumptions about independence and the nature of errors may not be fully addressed. The discussion reflects various interpretations of how the addition of parity bits influences error detection capabilities.

AngelofMusic
This probability question came up in our networking tutorial today, and I'm not entirely happy with the TA's solution, so I was hoping someone here could help out.

Background: A single parity bit is added to every n bits in a binary string of length 2n. In our class, we use even parity: the parity bit is chosen so that there is always an even number of 1's in the n+1 bits. So in this case, we add two bits, one per block of n bits, each set to 0 or 1 so that its block has an even number of 1's. A single parity check can only detect an odd number of errors; if an even number of bits are flipped, it fails to detect the error.

Question: Given p = the probability that the single parity check successfully detects an error within n bits, what is the probability that the new scheme (with two parity bits added to 2n bits) fails to detect an error?

TA's Solution: P[failure in 2n] = 1 - p^2

I don't think this makes any sense, because 1 - p^2 = (1 + p)(1 - p) ≥ (1 - p), which means this calculated probability of failure is at least as large as the failure probability of the single parity check. And to me, logically, that doesn't make much sense in context.

My own thoughts: If we divide the 2n bits into two parts of n bits each, then we can have these kinds of errors (a small enumeration script after the list double-checks them):

Let (error in first n bits, error in second n bits). Then we have:
(even # errors, even # errors) --> undetectable
(even # errors, odd # errors) --> detectable
(odd # errors, even # errors) --> detectable
(odd # errors, odd # errors) --> detectable
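Just to sanity-check that table, here's a quick brute-force enumeration (my own sketch, for a hypothetical n = 4; it only models flips in the data bits and ignores flips of the parity bits themselves):

```python
# Brute-force check of the case table above for a small block size.
# Assumption (mine): an "error" is any nonzero pattern of bit flips
# across the 2n data bits; parity-bit flips are ignored for simplicity.
from itertools import product

n = 4  # hypothetical block size

detected = undetected = 0
for flips in product([0, 1], repeat=2 * n):
    if sum(flips) == 0:
        continue  # no error at all, nothing to detect
    odd_first = sum(flips[:n]) % 2 == 1   # first block's check fires
    odd_second = sum(flips[n:]) % 2 == 1  # second block's check fires
    if odd_first or odd_second:
        detected += 1    # (even, odd), (odd, even), (odd, odd)
    else:
        undetected += 1  # (even, even): the only undetectable case

print(detected, undetected)  # prints: 192 63
```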

And this is sort of where I become unsure of what to do. I keep thinking something along the lines of: of all the undetectable errors under a single parity check (the first 3 cases), the two parity bits fail on only one of them. So maybe P[failure in 2n] = (1/3) × P[failure of the single parity check] = (1/3)(1 - p)?

I don't think that's right, though. Can anyone help nudge me in the right direction? This question has been driving me insane for a week and I was hoping we could solve it in tutorial, but I still don't understand. It seems that it shouldn't be all that difficult, and I'm probably just missing something simple!
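In case it helps, here's the kind of quick simulation I'd use to check any proposed formula (my own sketch: each of the 2n data bits flips independently with some small probability q, and parity-bit flips are again ignored):

```python
import random

# Monte Carlo estimate of P[undetected | some error occurred] under a
# hypothetical model: each of the 2n data bits flips independently
# with probability q. This is my model, not necessarily the TA's.
random.seed(0)
n, q, trials = 8, 0.05, 200_000

errored = undetected = 0
for _ in range(trials):
    flips1 = sum(random.random() < q for _ in range(n))  # errors in first half
    flips2 = sum(random.random() < q for _ in range(n))  # errors in second half
    if flips1 + flips2 == 0:
        continue  # no error occurred in this trial
    errored += 1
    if flips1 % 2 == 0 and flips2 % 2 == 0:
        undetected += 1  # both parity checks pass: failure to detect

print(undetected / errored)  # compare against 1 - p^2 and (1/3)(1 - p)
```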
 
AngelofMusic said:
I don't think this makes any sense, because 1 - p^2 = (1 + p)(1 - p) ≥ (1 - p), which means this calculated probability of failure is at least as large as the failure probability of the single parity check. And to me, logically, that doesn't make much sense in context.

Okay, logically to me, that statement makes sense.

If the probability of the check succeeding over n bits is p, then (assuming the two halves succeed or fail independently) the probability of succeeding over 2n bits is p^2.

The probability of not doing it right over 2n bits is then 1 - p^2, which is GREATER than the probability of not doing it right over n bits. The longer your bitstream, or the longer you keep running your parity check, the greater the chance that at some point something goes wrong. Does that not make sense?
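For a concrete number (hypothetical p = 0.9, independence assumed):

```python
# Compare the failure probabilities of the single and double schemes
# for a hypothetical per-block detection probability p.
p = 0.9

single_failure = 1 - p       # one parity check over n bits
double_failure = 1 - p ** 2  # both checks over 2n bits must succeed

print(f"{single_failure:.2f} vs {double_failure:.2f}")  # 0.10 vs 0.19
```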
 
Yeah, on some level that does sort of make sense. But what we're calculating isn't the probability of an error occurring; it's the probability of the parity check failing to detect an error.

I keep thinking that this double parity check (not sure if this is the correct terminology) should be able to detect *more* errors than the simple single parity check. Am I wrong?
 
AngelofMusic said:
I keep thinking that this double parity check (not sure if this is the correct terminology) should be able to detect *more* errors than the simple single parity check. Am I wrong?

It WILL detect more errors! I think you're a bit confused by the probability concept here. The 1 - p^2 is the probability that it will fail, not the number of errors it fails to detect. You're quite right that the two-parity-check system will detect more errors than any single check, or at least as many, since its detections combine those of the two individual checks. But let's look at it this way:

You have a chance of 10% to fail the exam.
Your friend has a chance of 10% to fail the exam.
1000 other people in your class each have a chance of 10% of failing the exam.

Now intuitively, if you took the exam alone, there would be only a 10 percent chance of failing, right? That's pretty low; I'd bet on you not failing. But if the entire group of 1002 students takes the exam, do you believe me when I say someone is going to fail? Now apply the same reasoning to the parity checker.
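Putting the analogy in numbers (same made-up 10% figure, independence assumed):

```python
# Chance that at least one of 1002 students fails, if each passes
# independently with probability 0.9 (the analogy's hypothetical numbers).
p_pass = 0.9
students = 1002

p_all_pass = p_pass ** students
print(p_all_pass)      # ~1e-46: essentially impossible that everyone passes
print(1 - p_all_pass)  # 1.0 to double precision: someone will fail
```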
 
Ah, okay. So in terms of the number of errors, the two-parity-check system will detect more of them. But because there are more errors, a greater percentage of them will go undetected?
 
Something like that. The checking period is prolonged. There's more to check, so yes, there will be more errors, and if you keep doing something long enough, you're more likely to fail at some point. I think you've got the hang of it =D
 
Thank you so much for your help! :D
 
Oh, you're welcome. Glad to be of help =)
 
