What is the true value given successive sub-unity sensor readings?

In summary: using Bayes' Rule we can calculate the probability that the true value is less than one given a sensor reading that is less than one. This probability is 0.98 for a single reading, and it increases toward 1 as the number of consistent sub-unity readings grows.
  • #1
carllacan
We have a sensor that measures a certain value that lies in the range (0, 3). The sensor is not perfect, and sometimes it fails. When this happens it outputs a value under 1, regardless of the actual value. The failure probability is 0.01.

Supposing the sensor outputs a value under 1, what is the probability that the actual value is below 1? Supposing that we repeat the measurement N times and always obtain a value under 1, what is the probability that the actual value is below 1?

My attempt at a solution:

Call a the actual value, and s the measured value. Then [itex]P(a<1) = \frac{1}{3}[/itex] and [itex]P(a>1) = \frac{2}{3}[/itex]. Also [itex]P(s<1|a<1) = 1[/itex], [itex]P(s<1|a>1) = 0.01[/itex], and so [itex]P(s<1) = P(s<1|a<1)P(a<1) + P(s<1|a>1)P(a>1) = 1 \cdot \frac{1}{3}+0.01 \cdot \frac{2}{3} = 0.34[/itex].

Using Bayes' Theorem, [itex]P(a<1|s<1) = \frac{P(s<1|a<1)P(a<1)}{P(s<1)} = \frac{1/3}{0.34} \approx 0.98[/itex].
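As a sanity check, this arithmetic can be reproduced in a few lines of Python (a minimal sketch of the calculation above; the variable names are illustrative, not from the thread):

[code]
# Single-reading Bayes update for the sensor problem above.
p_a_low = 1.0 / 3.0      # prior P(a < 1), uniform on (0, 3)
p_s_given_low = 1.0      # P(s < 1 | a < 1): working or failed, the reading is < 1
p_s_given_high = 0.01    # P(s < 1 | a > 1): only a failure reads < 1

# Total probability of a sub-unity reading, then Bayes' theorem.
p_s = p_s_given_low * p_a_low + p_s_given_high * (1.0 - p_a_low)  # 0.34
posterior = p_s_given_low * p_a_low / p_s

print(posterior)  # 0.98039..., i.e. the 0.98 above
[/code]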

When we get to the N-measurements case I start having problems. Intuitively I think that measurement failures are independent of each other, that is, a faulty first measurement does not influence the probability of a faulty second measurement, and so on. Then the answer would simply be [itex]P(a<1| s_{1:N} < 1) = 0.98^N[/itex]. It seems misleadingly simple, though.

Could you help me out?
 
  • #2
You need to distinguish between the event "s<1" ([itex]S_{i}[/itex]) and the event "failure" ([itex]F_{i}[/itex]). The [itex]F_{i}[/itex] may be independent, but the [itex]S_{i}[/itex] are certainly not, since they measure the value. Your result would imply that a repeated measurement of s<1 actually decreases the chance of a<1, which is absurd. Let A be the event "a<1". Then

[tex]
\begin{align*}
P(A|S_{1}\wedge{}S_{2}) ={}& P(A|S_{1}\wedge{}S_{2}\wedge{}F_{1}\wedge{}F_{2})\,P(F_{1}\wedge{}F_{2}) \\
&+ P(A|S_{1}\wedge{}S_{2}\wedge{}F_{1}\wedge\urcorner{}F_{2})\,P(F_{1}\wedge\urcorner{}F_{2}) \\
&+ P(A|S_{1}\wedge{}S_{2}\wedge\urcorner{}F_{1}\wedge{}F_{2})\,P(\urcorner{}F_{1}\wedge{}F_{2}) \\
&+ P(A|S_{1}\wedge{}S_{2}\wedge\urcorner{}F_{1}\wedge\urcorner{}F_{2})\,P(\urcorner{}F_{1}\wedge\urcorner{}F_{2}) \\
={}& \frac{1}{30000}+\frac{9999}{10000}=\frac{29998}{30000}
\end{align*}
[/tex]

Now, that's more like it. The more often you measure "s<1" the more
unlikely [itex]\urcorner{}A[/itex] becomes.
 
  • #3
stlukits said:
You need to distinguish between the event "s<1" ([itex]S_{i}[/itex]) and the event "failure" ([itex]F_{i}[/itex]). [...] The more often you measure "s<1" the more unlikely [itex]\urcorner{}A[/itex] becomes.

I understand your solution, but don't you have to use Bayes' Theorem another time, "on top" of the first result? This comes from an introduction to recursive Bayesian filters, so I "feel" like there should be an iterative procedure. Or am I wrong?
 
  • #4
No iteration needed.

[tex]
\begin{equation}
P(A|S_{1:N})=\frac{1}{3}\cdot(0.01)^{N}+\frac{100^{N}-1}{100^{N}}
\end{equation}
[/tex]

I guess this is an iteration of sorts, which you can prove by induction using the Total Probability Theorem:

http://mathworld.wolfram.com/TotalProbabilityTheorem.html
 
  • #5
stlukits said:
No iteration needed. [...]

I understand that there is no need, but this is the foundation for more complicated problems, namely problems in which we have a probability distribution over many values and reality is influenced by probabilistic actions. Or, mathematically, we have [itex]p(z_t| x_t ) [/itex] and [itex]p(x_t| u_t )[/itex].

In the case of this problem:

[itex]p(z_t| x_t ) [/itex] becomes

[tex]
p(z_t|x_t) =
\begin{cases}
1 & \text{if } x_t < 1 \text{ and } z_t < 1 \\
0 & \text{if } x_t < 1 \text{ and } z_t > 1 \\
0.01 & \text{if } x_t > 1 \text{ and } z_t < 1 \\
0.99 & \text{if } x_t > 1 \text{ and } z_t > 1
\end{cases}
[/tex]

And [itex]p(x_t| u_t )[/itex] is just [itex]p(x)[/itex].

Later on these distributions will be far more complicated, so I need to understand the general method, applied to this problem, in order to better understand more advanced things.
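To make the iterative procedure concrete, here is a minimal recursive-update sketch in Python (my own illustration, not from the thread). It assumes one fixed true value and independent failures, with the likelihoods from this problem:

[code]
# Minimal recursive-Bayes sketch: the posterior after each reading
# becomes the prior for the next reading.
def update(prior_low, reading_below_one):
    """One Bayes update of P(x < 1) given a single sensor reading."""
    if reading_below_one:
        lik_low, lik_high = 1.0, 0.01   # P(z<1 | x<1), P(z<1 | x>1)
    else:
        lik_low, lik_high = 0.0, 0.99   # P(z>1 | x<1), P(z>1 | x>1)
    evidence = lik_low * prior_low + lik_high * (1.0 - prior_low)
    return lik_low * prior_low / evidence

belief = 1.0 / 3.0                      # prior P(x < 1)
for _ in range(2):                      # two readings below one
    belief = update(belief, True)
print(belief)                           # 0.99980... = 5000/5001
[/code]

For two sub-unity readings this gives 5000/5001 ≈ 0.9998, the same as applying Bayes' rule to both readings at once.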
 
  • #6
carllacan said:
Supposing the sensor outputs a value under 1, what is the probability that the actual value is below 1?
When we get to the N-measurements case I start having problems. Intuitively I think that measurement failures are independent of each other, that is, a faulty first measurement does not influence the probability of a faulty second measurement,
As stlukits says, even if the probabilities of failures are independent, the probabilities of the true values being less than one may not be. Although the problem is stated as though there is one identical true value for all the trials, when you multiply the Bayesian probabilities, you are assuming the true values are independent for all measurements. If there is one true value, your calculation must change.

P(true value < 1 | all N measurements < 1) = P( (true value < 1) and (all N measurements < 1))/P(all N measurements < 1)

This is still a direct application of Bayes' Rule.
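Spelled out under the model from post #1 (one fixed true value, failures independent with probability 0.01), this direct application works out to the following (my own algebra, following the rule just quoted):

[tex]
P(\text{true value} < 1 \mid \text{all } N \text{ measurements} < 1)
= \frac{1\cdot\frac{1}{3}}{1\cdot\frac{1}{3} + (0.01)^{N}\cdot\frac{2}{3}}
= \frac{1}{1 + 2\,(0.01)^{N}}
[/tex]

which gives 0.98 for N = 1 and 5000/5001 ≈ 0.9998 for N = 2.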
 
  • #7
stlukits was correct in that your original answer is wrong. A long string of readings of values less than one is strongly indicative that the true value is indeed less than one; it is the probability that the true value is greater than one that should be decreasing. After that, I don't know what stlukits was calculating; his result is incorrect.

I'll look at the sample space as the set cross product of (true value ≥ 1, true value < 1) and (sensor read true, sensor failed):
  • The true value is greater than one and the sensor read true,
  • The true value is greater than one and the sensor failed,
  • The true value is less than one and the sensor read true,
  • The true value is less than one and the sensor failed.
Note that a sensor reading of less than one excludes the first outcome: the only way to have a true value greater than one and a sensor reading less than one is if the sensor failed. This suggests looking at the opposite of the problem you were asked about: what is the probability that the true value is greater than one given a sensor reading that was less than one? Try using Bayes' Law to calculate this probability. The probability that the true value is less than one is one minus this calculated probability.

To calculate the probability that the true value is greater than one given two readings that are both less than one, you simply change the prior from your default prior (based on the principle of indifference) to the probability calculated above for a true value greater than one given a single sub-unity reading. Similarly, for the probability that the true value is greater than one given n readings all less than one, you change the prior to the posterior obtained after n-1 successive sub-unity readings.
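Worked through for two readings with the numbers from post #1 (my arithmetic, not the original poster's):

[tex]
P(a>1 \mid s_{1}<1) = \frac{0.01\cdot\frac{2}{3}}{0.34} = \frac{1}{51},
\qquad
P(a>1 \mid s_{1}<1, s_{2}<1) = \frac{0.01\cdot\frac{1}{51}}{1\cdot\frac{50}{51} + 0.01\cdot\frac{1}{51}} = \frac{1}{5001}
[/tex]

so P(true value < 1 | two sub-unity readings) = 5000/5001, consistent with the direct N-reading application above.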
 

1. What is a Repeated Bayesian filter?

A Repeated Bayesian filter is a statistical algorithm that uses Bayesian inference to estimate the probability of a certain event occurring based on previous observations. It is often used in machine learning and artificial intelligence to make predictions and decisions.

2. How does a Repeated Bayesian filter work?

A Repeated Bayesian filter works by continuously updating the probability of an event occurring based on new observations. It uses Bayes' theorem to calculate the posterior probability, which is then used as the prior probability for the next calculation. This process is repeated multiple times, hence the name "repeated" Bayesian filter.
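As a rough illustration of this update cycle (a generic sketch, not tied to any particular library; the hypothesis names are made up), the loop below keeps a discrete belief over several hypotheses and re-normalizes it after each observation:

[code]
# Generic posterior-becomes-prior loop over a discrete set of hypotheses.
def bayes_step(belief, likelihood):
    """belief: hypothesis -> prior prob; likelihood: hypothesis -> P(obs | hypothesis)."""
    unnorm = {h: belief[h] * likelihood[h] for h in belief}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

belief = {"low": 1 / 3, "high": 2 / 3}              # prior over hypotheses
for _ in range(3):                                  # three identical observations
    belief = bayes_step(belief, {"low": 1.0, "high": 0.01})
print(belief)   # the posterior sharpens toward "low" with each repetition
[/code]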

3. What are the advantages of using a Repeated Bayesian filter?

One of the main advantages of using a Repeated Bayesian filter is its ability to handle uncertain and noisy data. It can also adapt to changing environments and update its predictions accordingly. Additionally, it can incorporate new information as it becomes available, making it a flexible and dynamic algorithm.

4. In what applications is a Repeated Bayesian filter commonly used?

A Repeated Bayesian filter is commonly used in applications that require prediction or decision-making based on uncertain data. This includes speech recognition, natural language processing, computer vision, and financial forecasting. It is also used in robotics, autonomous vehicles, and other areas of artificial intelligence.

5. Are there any limitations to using a Repeated Bayesian filter?

One limitation of a Repeated Bayesian filter is that it assumes the data follows a specific probability distribution. If the data does not conform to this assumption, the filter may provide inaccurate results. Additionally, as the number of observations increases, the computational complexity of the algorithm also increases, which can be a limitation for real-time applications.
