Kalman, White Noise, Sensor Specification, Discretization?

Hare
Hi.

I have a few questions about sensor specifications, their use in a Kalman filter, and the simulation of gyroscope/accelerometer output.

Abbreviations used:
d - discrete
c - continuous

Q1:
From the book Aided Navigation by Farrell (you don't need the book to understand the question).

Section 7.2 Methodology: Detailed Example:
Here
\sigma_1, \sigma_{bu} and \sigma_{by} are root PSDs of continuous white noise.
\sigma_2 is the std. deviation of discrete white noise.

Qc = diag([\sigma_{bu}^2 \sigma_{by}^2 \sigma_1^2 ])
Qd = function of Qc and sampling time

Rd = \sigma_2^2

This I understand, but not section 7.4.
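
To make it concrete, this is roughly how I picture section 7.2 in Matlab/Octave. This is just my own sketch with made-up numbers, and I'm using the simple first-order approximation Qd ≈ Qc*T (the book computes Qd from an integral involving the state transition matrix):

T = 0.01;                                        % sample period [s]
sigma_bu = 1e-4;                                 % root PSD of one bias driving noise (made-up value)
sigma_by = 1e-4;                                 % root PSD of the other bias driving noise (made-up value)
sigma_1  = 1e-2;                                 % root PSD of the continuous white noise (made-up value)
sigma_2  = 0.5;                                  % std. dev. of the discrete measurement noise (made-up value)

Qc = diag([sigma_bu^2, sigma_by^2, sigma_1^2]);  % continuous-time PSD matrix
Qd = Qc*T;                                       % first-order approximation of the discretization over one sample
Rd = sigma_2^2;                                  % discrete measurement covariance, used as-is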

Section 7.4 An Alternative Approach:
Same sigma values as above, plus:
\sigma_3 is the root PSD of continuous white noise.

Qc = diag([\sigma_{bu}^2 \sigma_{by}^2 \sigma_3^2])
Qd = function of Qc and sampling time

Rd = \sigma_1^2
and later
Rd = diag([\sigma_2^2 \sigma_1^2])

a)
Can you just use a \sigma in Rd regardless of whether it is the std. deviation of discrete white noise (\sigma_2) or the root PSD of continuous white noise (\sigma_1)?
b)
Shouldn't \sigma_1 be "discretized" or something?


Q2:
If you get the noise specification for a sensor from the PSD method, the Allan variance method, or a data sheet, it is for continuous white noise. Is this correct?

Q3:
How do you simulate the output of a sensor in e.g. Matlab/Simulink if you have the noise specification in continuous time?
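
My current guess is just a sketch, assuming the specification is a root PSD \sigma_c (units e.g. (deg/s)/sqrt(Hz)), so that the equivalent discrete sample has variance \sigma_c^2 / T:

T = 0.01;                          % sample period [s]
sigma_c = 0.01;                    % root PSD from the data sheet (assumed value)
N = 1000;                          % number of samples to generate
w = (sigma_c/sqrt(T))*randn(N,1);  % discrete noise samples with variance sigma_c^2/T
omega_true = zeros(N,1);           % e.g. a gyro at rest
omega_meas = omega_true + w;       % simulated sensor output

Is that the right way to do it?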

Q4:
When we discretize a stochastic linear system, we get:
Qd = f(Qc, A, T)
Rd = Rc
(http://en.wikipedia.org/wiki/Discretization)
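(Spelled out, the relation I mean is, as far as I can tell, Qd = \int_0^T e^{A\tau} Qc e^{A^T \tau} d\tau, with Rd = Rc stated alongside it.)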
I don't understand why Rd = Rc?

Q5:
In the book Optimal State Estimation: Kalman, H∞, and Nonlinear Approaches (Dan Simon, 2006), section 8.1 "Discrete-Time and Continuous-Time White Noise", it says:

"... discrete-time white noise with covariance Qd in a system with a sample period of T, is equivalent to continuous-time white noise with covariance Qc*\delta (t) (*\delta (t): dirac delta function), where Qc = Qd / T."

"... Rd = Rc / T ... This establishes the equivalence between white measurement noise in discrete time and continuous time. The effects of white measurement noise in discrete time and continuous time are the same if
v(k) ~ (0,Rd)
v(t) ~ (0,Rc)
"
How does this relate to my other questions? It seems to say that Rd \neq Rc, which contradicts the Rd = Rc relation in Q4.
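
As a quick sanity check (my own numerical experiment, not from either book), if I approximate continuous measurement noise with PSD Rc on a fine grid dt and average it over each sample period T, the averaged measurement noise comes out with variance close to Rc/T:

Rc = 4;  dt = 1e-4;  T = 0.01;  n = round(T/dt);
M = 20000;                           % number of sample periods to simulate
v_fine = sqrt(Rc/dt)*randn(n, M);    % fine-grid approximation of continuous white noise
v_d = mean(v_fine, 1);               % one averaged measurement per sample period T
var(v_d)                             % close to Rc/T = 400, not Rc = 4

So it looks like Rd = Rc/T rather than Rd = Rc, and I don't see how to reconcile that with Q4.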


Best regards,
Jonas
 
Any experts out there?
 
I'm not an expert in your topic, but I'd find it interesting to chat about it until one comes along.


Hare said:
Hi.

I have a few questions about sensor specifications, their use in a Kalman filter, and the simulation of gyroscope/accelerometer output.

Abbreviations used:
d - discrete
c - continuous

Q1:
From the book Aided Navigation by Farrell (you don't need the book to understand the question).

Maybe a navigation engineer doesn't need the book! But this isn't the engineering section of the forum.

Section 7.2 Methodology: Detailed Example:
Here
\sigma_1, \sigma_{bu} and \sigma_{by} are root PSDs of continuous white noise.
\sigma_2 is the std. deviation of discrete white noise.

Qc = diag([\sigma_{bu}^2 \sigma_{by}^2 \sigma_1^2 ])
Qd = function of Qc and sampling time

Rd = \sigma_2^2

This I understand, but not section 7.4.

I don't understand from that description what equations relate those quantities to each other.


Section 7.4 An Alternative Approach:
Same sigma values as above, plus:
\sigma_3 is the root PSD of continuous white noise.

Qc = diag([\sigma_{bu}^2 \sigma_{by}^2 \sigma_3^2])
Qd = function of Qc and sampling time

Rd = \sigma_1^2
and later
Rd = diag([\sigma_2^2 \sigma_1^2])

a)
Can you just use a \sigma in Rd regardless of whether it is the std. deviation of discrete white noise (\sigma_2) or the root PSD of continuous white noise (\sigma_1)?
b)
Shouldn't \sigma_1 be "discretized" or something?

Again, I don't know the equations that you are using.


Q2:
If you get the noise specification for a sensor from the PSD method, the Allan variance method, or a data sheet, it is for continuous white noise. Is this correct?

For us non-engineers, give a link to a specific data sheet where the sensor noise is given. Perhaps we can learn what the specification means from an example.

Q3:
How do you simulate the output of a sensor in e.g. Matlab/Simulink if you have the noise specification in continuous time?

Unfortunately, I don't own Matlab or simulink. I can run the free software Octave which is similar.

Q4:
When we discretize a stochastic linear system, we get:
Qd = f(Qc, A, T)
Rd = Rc
(http://en.wikipedia.org/wiki/Discretization)
I don't understand why Rd = Rc?

Are you saying that something in that article asserts Rd = Rc, or are you asking about material from Farrell's book?

Q5:
In the book Optimal State Estimation: Kalman, H∞, and Nonlinear Approaches (Dan Simon, 2006), section 8.1 "Discrete-Time and Continuous-Time White Noise", it says:

"... discrete-time white noise with covariance Qd in a system with a sample period of T, is equivalent to continuous-time white noise with covariance Qc*\delta (t) (*\delta (t): dirac delta function), where Qc = Qd / T."

"... Rd = Rc / T ... This establishes the equivalence between white measurement noise in discrete time and continuous time. The effects of white measurement noise in discrete time and continuous time are the same if
v(k) ~ (0,Rd)
v(t) ~ (0,Rc)
"
How does this relate to my other questions? It seems to say that Rd \neq Rc, which contradicts the Rd = Rc relation in Q4.

I don't know if you think about white noise the way electrical engineers often do, as a type of "power spectrum". I haven't learned to think of it that way.

My intuitive understanding of white noise goes like this. Suppose you wanted to simulate white noise at 1-second time intervals by drawing a random number from some distribution. You do this and produce a graph. Then someone (say your boss) wants you to simulate "the same" noise but at a time step of 1/10 of a second. If you draw numbers from the same distribution at each 1/10 of a second, you get another jumpy graph. Maybe you think it looks OK. But suppose your boss is using the white noise in some other function that adds it up, like the simple sum of all the white noise jumps you provide. He probably won't think you did a good job, because over a time interval of a given size, say 60 seconds, the graph of his function will look more jumpy with your 1/10th-second noise than it did with the 1-second noise.

Suppose you try to fix this by scaling the numbers you draw. For the 1/10-second step, you draw random numbers from the same distribution you were using and then divide them by 10. This won't please the boss either; the graph of his function won't look jumpy enough. (Intuitively, this is because the jump he gets over a 1-second interval is now the average of 10 draws at 1/10-second intervals, which has a smaller variation than one draw per second.)

The way to please your boss is to scale the numbers so their variance is 1/10 of the variance of the original distribution, i.e. divide the draws by sqrt(10). Then, according to the law for the variance of a sum of independent random variables, the jump he gets over a 1-second interval is the sum of 10 terms, each with 1/10 of the variance he had with the 1-second draws, so the total variance is unchanged. He sees the same variability over a given time interval as he did with your 1-second noise.
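
In Octave (or Matlab) terms, a quick sketch of what I mean: accumulate the noise over 60 seconds with the two step sizes, over many runs, and compare the spread of the end values. (Variable names and numbers are just for illustration.)

M = 5000;  sigma = 1;  T_total = 60;
% 1-second steps: 60 draws of N(0, sigma^2) per run
end1 = sum(sigma*randn(T_total, M), 1);
% 0.1-second steps: 600 draws per run, each scaled so its variance is sigma^2/10
end2 = sum((sigma/sqrt(10))*randn(10*T_total, M), 1);
[var(end1), var(end2)]   % both come out close to sigma^2 * 60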

If I want to talk about the variance of white noise, in the sense of a process that takes place in continuous time rather than at discrete steps, I think of the units of measure as being something-squared per unit time. Only when both units are established (e.g. feet and seconds) is the variance of the noise well defined.

I can't tell what the texts you presented are saying about Rc = Rd. An empirical measurement of noise over an arbitrary time step can be re-expressed per unit time step, e.g. 2.4 ft^2 / 2 s = 1.2 ft^2 / 1 s. But I don't know if that is what is being asserted.
 