Probability Question about Error Detection

In summary, the thread discusses a probability question about adding two parity bits to a binary string of length 2n. The TA's solution states that the probability of failure over 2n bits is 1 - p^2, which is greater than the probability of failure over n bits. The original poster's own approach divides the 2n bits into two halves and considers the different types of errors. The conclusion is that the double parity check system will detect more errors, but because there is more data to check, there is a higher chance that something goes undetected.
  • #1
AngelofMusic
This probability question came up in our networking tutorial today, and I'm not entirely happy with the TA's solution, so I was hoping someone here could help out.

Background: A single parity bit is added to every n bits in a binary string of length 2n. In our class, the parity bit is chosen so that there is always an even number of 1's in the resulting n+1 bits. So in this case we add two bits, each being 0 or 1 so that its block of n bits plus the parity bit contains an even number of 1's. A single parity check can only detect an odd number of errors; if an even number of bits are flipped, it fails to detect anything.
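
For concreteness, here's a minimal Python sketch of the even-parity scheme as I understand it (the function names are just mine for illustration):

[code]
def parity_bit(bits):
    # Even-parity bit for a list of 0/1 values:
    # 0 if the number of 1's is already even, 1 otherwise.
    return sum(bits) % 2

def passes_check(block):
    # The check passes iff the data bits plus the parity bit
    # contain an even number of 1's.
    return sum(block) % 2 == 0

data = [1, 0, 1, 1]                  # n = 4 data bits with three 1's
block = data + [parity_bit(data)]    # parity bit = 1, so four 1's in total
assert passes_check(block)

# Flip an even number of bits: the check still passes, so the error is missed.
block[0] ^= 1
block[1] ^= 1
assert passes_check(block)
[/code]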

Question: Given p = the probability that the single parity bit succeeds in detecting an error within n bits, what is the probability that the new scheme (with the two parity bits added to 2n bits) fails to detect an error?

TA's Solution: P[failure in 2n] = [tex]1 - p^2[/tex]

I don't think this makes any sense, because [tex]1-p^2 = (1+p)(1-p) \geq (1-p)[/tex], which means that this calculated probability of failure is greater than the probability of failure of the single parity check. To me, that doesn't make much sense in this context.

My own thoughts: If we divide the 2n bits into two parts of n bits each... then we can have these kinds of errors:

Let (error in first n bits, error in second n bits). Then we have:
(even # errors, even # errors) --> undetectable
(even # errors, odd # errors) --> detectable
(odd # errors, even # errors) --> detectable
(odd # errors, odd # errors) --> detectable

And this is sort of where I become unsure of what to do. I keep thinking something along the lines of: of all the undetectable errors with a single parity check (the first 3 cases), the two parity bits only fail on one of them. So maybe P[failure in 2n] = (1/3) * P[failure of the single-bit parity check] = (1/3)(1-p)

I don't think that's right, though. Can anyone help nudge me in the right direction? This question has been driving me insane for a week and I was hoping we could solve it in tutorial, but I still don't understand. It seems that it shouldn't be all that difficult, and I'm probably just missing something simple!
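
(As a sanity check on the case table above, here's a small brute-force Python sketch; the setup is just my own reading of the scheme, with one even-parity check per n-bit half.)

[code]
from itertools import product

n = 3  # half-block size, small enough to enumerate every error pattern

def detected(errors):
    # errors is a tuple of 0/1 flips over the 2n data bits.
    # Each half's parity check fires iff that half has an odd number of flips.
    return sum(errors[:n]) % 2 == 1 or sum(errors[n:]) % 2 == 1

undetected = [e for e in product([0, 1], repeat=2 * n)
              if any(e) and not detected(e)]

# Every undetected pattern has an even number of flips in each half,
# i.e. only the (even, even) row of the table slips through.
assert all(sum(e[:n]) % 2 == 0 and sum(e[n:]) % 2 == 0 for e in undetected)
print(len(undetected), "of", 2 ** (2 * n) - 1, "nonzero error patterns go undetected")
[/code]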
 
  • #2
AngelofMusic said:
I don't think this makes any sense, because [tex]1-p^2 = (1+p)(1-p) \geq (1-p)[/tex], which means that this calculated probability of failure is greater than the probability of failure of the single parity check. To me, that doesn't make much sense in this context.

Okay, logically to me, that statement makes sense.

If the probability of passing the check over n bits is [tex]p[/tex],
then the probability of passing over 2n bits is [tex]p^2[/tex] (assuming the two blocks are independent).

The probability of not doing it right over 2n bits is then [tex] 1 - p^2[/tex], which is GREATER than the probability of not doing it right over n bits. The longer your bitstream, or the longer you keep running your parity check, the greater the chance that at some point something is bound to go wrong. Does that not make sense?
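
A quick numeric check of that argument (assuming, as above, that the two n-bit checks succeed independently, each with probability p):

[code]
for p in (0.99, 0.9, 0.5):
    fail_n  = 1 - p       # chance the single n-bit check goes wrong
    fail_2n = 1 - p ** 2  # chance at least one of the two checks goes wrong
    print(f"p={p}: failure over n bits = {fail_n:.4f}, over 2n bits = {fail_2n:.4f}")
# e.g. p = 0.9 gives 0.1000 vs 0.1900: the longer stream fails more often.
[/code]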
 
  • #3
Yeah, on some level that does sort of make sense. But the question isn't really asking for the probability of an error occurring; it's asking for the probability of the parity check failing to detect an error.

I keep thinking that this double parity check (not sure if this is the correct terminology) should be able to detect *more* errors than the simple single parity check. Am I wrong?
 
  • #4
AngelofMusic said:
I keep thinking that this double parity check (not sure if this is the correct terminology) should be able to detect *more* errors than the simple single parity check. Am I wrong?

It WILL detect more errors! I think you're a bit confused by the probability concept here. The [tex] 1 - p^2[/tex] is the probability that the scheme will fail, not the number of errors it does not detect. You're right that the two parity checks combined will detect at least as many errors as any single check, since their detections add up. But let's look at it this way:

You have a chance of 10% to fail the exam.
Your friend has a chance of 10% to fail the exam.
1000 other people in your class each have a chance of 10% of failing the exam.

Now intuitively, if you took the exam alone, there would be only a 10 percent chance of failing, right? That's pretty low; I'd bet on you not failing. But if the entire group of 1002 students took the exam, do you believe me when I say someone is going to fail? Now apply that to the parity checker.
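
For what it's worth, the analogy checks out numerically (assuming the 1002 failure chances are independent):

[code]
students = 1002
p_pass = 0.9  # each student passes with probability 0.9
p_someone_fails = 1 - p_pass ** students
print(f"P[at least one of {students} students fails] = {p_someone_fails}")
# prints 1.0 (the true value differs from 1 by only about 1e-46)
[/code]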
 
  • #5
Ah, okay. So in terms of the number of errors, the two-parity-check system will detect more of them. But because there is more to check and hence more errors, there's a greater chance that some error will go undetected?
 
  • #6
Something like that. The checking period is longer and there's more to check, so yes, there will be more errors, and if you keep doing something long enough, you're more likely to fail at some point. I think you've got the hang of it =D
 
  • #7
Thank you so much for your help! :D
 
  • #8
Oh, you're welcome. Glad to be of help =)
 

1. What is the purpose of error detection in probability?

The purpose of error detection in probability is to identify and minimize errors in data or calculations. This helps to ensure the accuracy and reliability of results and conclusions drawn from a study or experiment.

2. How is error detection used in probability studies?

Error detection is used in probability studies by implementing various techniques and methods to identify and correct errors. This can include statistical tests, cross-checking data with multiple sources, and identifying outliers or inconsistencies in the data.

3. What are some common types of errors in probability studies?

Some common types of errors in probability studies include measurement errors, sampling errors, and calculation errors. These can occur due to human error, faulty equipment, or limitations in the study design.

4. How can errors in probability affect the results of a study?

Errors in probability can significantly impact the results of a study by skewing data and leading to incorrect conclusions. This can undermine the validity and reliability of the study and its findings.

5. What steps can be taken to prevent errors in probability studies?

To prevent errors in probability studies, researchers should carefully plan and design their studies, use reliable and accurate methods for data collection and analysis, and implement error detection and correction techniques. It is also important to thoroughly review and validate the results before drawing conclusions.
