
Repeated Bayesian filter

  1. Feb 18, 2014 #1
We have a sensor that measures a value that lies in the range (0, 3). The sensor is not perfect, and sometimes it fails. When this happens it outputs a value under 1, regardless of the actual value. The failure probability is 0.01.

    Supposing the sensor outputs a value under 1, what is the probability that the actual value is below 1? Supposing we repeat the measurement N times and always obtain a value under 1, what is the probability that the actual value is below 1?

    My attempt at a solution:

Call a the actual value and s the measured value. Then [itex]P(a<1) = \frac{1}{3}[/itex] and [itex]P(a>1) = \frac{2}{3}[/itex]. Also [itex]P(s<1|a<1) = 1[/itex], [itex]P(s<1|a>1) = 0.01[/itex], and so [itex]P(s<1) = P(s<1|a<1)\,P(a<1) + P(s<1|a>1)\,P(a>1) = 1 \cdot \frac{1}{3} + 0.01 \cdot \frac{2}{3} = 0.34[/itex].

Using Bayes' Theorem: [itex]P(a<1|s<1) = \frac{P(s<1|a<1)P(a<1)}{P(s<1)} = \frac{1/3}{0.34} = 0.98[/itex]
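Just to double-check my arithmetic, here is the same computation in Python (exact fractions; the variable names are my own):

```python
from fractions import Fraction

# Prior: the actual value a is uniform on (0, 3)
p_below = Fraction(1, 3)           # P(a < 1)
p_above = Fraction(2, 3)           # P(a > 1)

# Likelihoods of a reading s < 1
like_below = Fraction(1)           # P(s < 1 | a < 1) = 1
like_above = Fraction(1, 100)      # P(s < 1 | a > 1) = failure probability

# Total probability of a reading below 1
p_s = like_below * p_below + like_above * p_above   # = 17/50 = 0.34

# Bayes' theorem
posterior = like_below * p_below / p_s              # = 50/51
print(float(p_s), float(posterior))   # 0.34 0.9803921568627451
```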

When we get to the N measurements case I start having problems. Intuitively I think that measurement failures are independent of each other; that is, a faulty first measurement does not influence the probability of a faulty second measurement, and so on. Then the answer would simply be [itex]P(a<1| s_{1:N} < 1) = 0.98^N[/itex]. That seems misleadingly simple, though.

    Could you help me out?
     
    Last edited: Feb 18, 2014
  3. Feb 18, 2014 #2
    You need to distinguish between the event "s<1" ([itex]S_{i}[/itex]) and the event "failure" ([itex]F_{i}[/itex]). The [itex]F_{i}[/itex] may be independent, but the [itex]S_{i}[/itex] are certainly not, since they measure the value. Your result would imply that a repeated measurement of s<1 actually decreases the chance of a<1, which is absurd. Let A be the event "a<1". Then

[tex]
    \begin{align}
    P(A\mid S_{1}\wedge S_{2}) = {} & P(A\mid S_{1}\wedge S_{2}\wedge F_{1}\wedge F_{2})\,P(F_{1}\wedge F_{2}) \\
    & + P(A\mid S_{1}\wedge S_{2}\wedge F_{1}\wedge\neg F_{2})\,P(F_{1}\wedge\neg F_{2}) \\
    & + P(A\mid S_{1}\wedge S_{2}\wedge\neg F_{1}\wedge F_{2})\,P(\neg F_{1}\wedge F_{2}) \\
    & + P(A\mid S_{1}\wedge S_{2}\wedge\neg F_{1}\wedge\neg F_{2})\,P(\neg F_{1}\wedge\neg F_{2}) \\
    = {} & \frac{1}{30000}+\frac{9999}{10000}=\frac{29998}{30000}
    \end{align}
    [/tex]

Now, that's more like it: the more often you measure "s<1", the more unlikely [itex]\neg A[/itex] becomes.
     
    Last edited: Feb 18, 2014
  4. Feb 18, 2014 #3
I understand your solution, but don't you have to apply Bayes' Theorem another time, "on top" of the first result? This comes from an introduction to recursive Bayesian filters, so I feel there should be an iterative procedure. Or am I wrong?
     
  5. Feb 18, 2014 #4
    No iteration needed.

    [tex]
    \begin{equation}
P(A|S_{1:N})=\frac{1}{3}\cdot(0.01)^{N}+\frac{100^{N}-1}{100^{N}}
    \end{equation}
    [/tex]

I guess this is an iteration of sorts, which you can prove by induction using the Total Probability Theorem:

    http://mathworld.wolfram.com/TotalProbabilityTheorem.html
     
    Last edited: Feb 18, 2014
  6. Feb 18, 2014 #5
I understand that there is no need, but this is the foundation for more complicated problems; namely, problems in which we have a probability distribution over many values and reality is influenced by probabilistic actions. Or, mathematically, we have [itex]p(z_t| x_t ) [/itex] and [itex]p(x_t| u_t )[/itex].

    In the case of this problem:

[itex]p(z_t| x_t ) [/itex] becomes:
    if [itex]x_t < 1[/itex]: [itex]p(z_t|x_t) = 1[/itex] for [itex]z_t < 1[/itex] and [itex]0[/itex] for [itex]z_t > 1[/itex]
    if [itex]x_t > 1[/itex]: [itex]p(z_t|x_t) = 0.01[/itex] for [itex]z_t < 1[/itex] and [itex]0.99[/itex] for [itex]z_t > 1[/itex]

    And [itex]p(x_t| u_t )[/itex] is just [itex]p(x)[/itex].

Later on these distributions will be far more complicated, so I need to understand the general method, applied to this problem, in order to better understand more advanced things.
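For concreteness, the measurement model above can be written as a small Python function (my own naming; just a sketch that treats each side of 1 as a single event):

```python
def likelihood(z_below_1: bool, x_below_1: bool, fail_prob: float = 0.01) -> float:
    """P(reading side of 1 | true side of 1), i.e. p(z_t | x_t) for this sensor."""
    if x_below_1:
        # True value below 1: the reading is below 1 whether or not the sensor fails
        return 1.0 if z_below_1 else 0.0
    # True value above 1: a sub-unity reading happens only on failure
    return fail_prob if z_below_1 else 1.0 - fail_prob
```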
     
  7. Feb 18, 2014 #6

    FactChecker

    Science Advisor
    Gold Member

As stlukits says, even if the sensor failures are independent, the events that the measured values are less than one may not be. Although the problem is stated as though there is one identical true value for all the trials, when you multiply the Bayesian probabilities you are assuming the true values are independent across measurements. If there is one true value, your calculation must change.

    P(true value < 1 | all N measurements < 1) = P( (true value < 1) and (all N measurements < 1))/P(all N measurements < 1)

    This is still a direct application of Bayes' Rule.
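A quick Monte Carlo sketch (hypothetical Python, with one fixed true value shared by both readings per trial) makes the dependence visible: P(S1 and S2) comes out far larger than P(S1)P(S2):

```python
import random

random.seed(0)
trials = 1_000_000
n1 = n2 = n12 = 0
for _ in range(trials):
    a = random.uniform(0.0, 3.0)               # one true value, shared by both readings
    s1 = (random.random() < 0.01) or (a < 1)   # a failure forces a reading below 1
    s2 = (random.random() < 0.01) or (a < 1)
    n1 += s1
    n2 += s2
    n12 += s1 and s2

p1, p2, p12 = n1 / trials, n2 / trials, n12 / trials
print(p12, p1 * p2)   # joint is about 0.333, product is about 0.116
```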
     
  8. Feb 19, 2014 #7

    D H

    Staff Emeritus
    Science Advisor

    stlukits was correct in that your original answer is wrong. A long string of readings of values less than one is strongly indicative that the true value is indeed less than one. It is the probability that the true value is greater than one that should be decreasing. After that, I don't know what stlukits was calculating. His result is incorrect.


    I'll look at the sample space as the set cross product of (true value ≥ 1, true value <1) and (sensor read true, sensor failed):
    • The true value is greater than one and the sensor read true,
    • The true value is greater than one and the sensor failed,
    • The true value is less than one and the sensor read true,
    • The true value is less than one and the sensor failed.
Note that a sensor reading of less than one excludes the first outcome. The only way to have a true value greater than one and a sensor reading less than one is if the sensor failed. This suggests looking at the opposite of the problem you were asked: what is the probability that the true value is greater than one, given a sensor reading less than one? Try using Bayes' Law to calculate this probability. The probability that the true value is less than one is then one minus this calculated probability.

To calculate the probability that the true value is greater than one given two readings that are both less than one, you simply change the prior from your default prior (based on the principle of indifference) to the probability you calculated above for a true value greater than one given a single sub-unity reading. Similarly, for the probability that the true value is greater than one given n readings all less than one, you change the prior to the posterior after n-1 successive sub-unity readings.
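That prior-update scheme can be sketched in Python (the function name and structure are my own, the numbers are the ones from this thread):

```python
def p_below_given_n_readings(n: int, prior_below: float = 1/3,
                             fail_prob: float = 0.01) -> float:
    """P(true value < 1 | n sensor readings, all < 1), obtained by
    feeding each posterior back in as the next prior."""
    p = prior_below
    for _ in range(n):
        num = 1.0 * p                    # P(reading < 1 | true < 1) * prior
        den = num + fail_prob * (1 - p)  # + P(reading < 1 | true > 1) * (1 - prior)
        p = num / den                    # posterior becomes the next prior
    return p

print(p_below_given_n_readings(1))   # about 0.9804, matching the single-reading case
```

Each pass through the loop is exactly one application of Bayes' theorem with the updated prior, so repeated sub-unity readings drive the probability toward one rather than toward zero.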
     