Calculation of the error function.

Discussion Overview

The discussion revolves around calculating the error function in the context of signals and random processes. Participants explore the relationship between two signals, X(t) and G(t), and their combined effect through a low-pass filter, focusing on the expectation of the squared difference between the original and filtered signals.

Discussion Character

  • Technical explanation
  • Mathematical reasoning

Main Points Raised

  • One participant presents the random process Y(t) as a product of two signals, X(t) and G(t), and seeks to calculate the expectation of the squared difference between X(t) and Y(t).
  • Another participant suggests a potential simplification of the error function, proposing that it can be expressed in terms of the expectations of X and G.
  • A later post corrects the notation used in the previous calculations and clarifies that the focus is on the expectation of the squared difference between X(t) and a new variable Z(t), which is defined as the output of the filter applied to Y(t).

Areas of Agreement / Disagreement

Participants appear to have differing levels of expertise regarding the calculations involved, and there is no consensus on the correct approach to calculating the expectation of the squared difference. The discussion remains unresolved with respect to the final calculation.

Contextual Notes

There are indications of confusion regarding notation and the application of the law of total expectation, which may affect the clarity of the discussion. The assumptions about the independence of the random variables and the properties of the signals are also relevant but not fully explored.

MathematicalPhysicist
I have the following two signals:

X(t) and G(t), and a random process Y(t)=G(t)X(t), where X(t) and G(t) are wide-sense stationary with expectation values E(X)=0 and E(G)=1.

Now, it's also given that ##G(t)=\cos(3t+\psi)## where ##\psi## is uniformly distributed on the interval ##(0,2\pi]## and is statistically independent of X(t).

The signal X(t) is passed through a low-pass filter, given in the frequency domain as ##H(\Omega)=1## for ##|\Omega| \leq 4\pi## and zero otherwise.

I am given that ##Y(\Omega)=X(\Omega)H(\Omega)##, and I want to calculate:

##\epsilon = E((X(t)-Y(t))^2)##

I guess I can work in the frequency domain, but I also need to use the law of total expectation (http://en.wikipedia.org/wiki/Law_of_total_expectation).

But I am not sure how exactly to set up the conditioning. Thanks in advance.
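
One way to set up the conditioning (a sketch under the assumptions above, not a full solution): since ##\psi## is statistically independent of ##X(t)##, the law of total expectation gives

$$\epsilon = E\big[(X(t)-Y(t))^2\big] = E_{\psi}\Big[E\big[(X(t)-Y(t))^2 \,\big|\, \psi\big]\Big],$$

and conditioned on ##\psi##, the factor ##G(t)=\cos(3t+\psi)## is deterministic, so the inner expectation is taken over ##X(t)## alone.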
 
For LaTeX, wrap your expressions in either $$ (for displayed LaTeX) or ## (for inline LaTeX) on each end. The $$ pair is essentially equivalent to [ tex ] and the ## pair to [ itex ] (all without the extra spaces inside the brackets).
 
Hmmm... I am not an expert, but isn't it

##\epsilon = E((X(t)-Y(t))^2) = E((X-GX)^2) = E(X^2(1-G)^2) = E(X^2)\,E((1-G)^2),##

using the statistical independence of ##X## and ##G##?
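
As a sanity check of the factorization step ##E(X^2(1-G)^2) = E(X^2)\,E((1-G)^2)##, here is a quick Monte Carlo sketch. The distribution of ##X## is not given in the thread, so a standard normal is assumed purely for illustration; note that with ##\psi## uniform on ##(0,2\pi]##, the moments of ##G=\cos(3t+\psi)## work out to ##E(G)=0## and ##E(G^2)=1/2##, so this checks only the independence factorization, not the thread's stated ##E(G)=1##:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000

# Illustrative assumptions (not given in the thread): at a fixed t,
# X ~ N(0, 1), and psi ~ Uniform(0, 2*pi], independent of X.
t = 0.7
X = rng.standard_normal(N)
psi = rng.uniform(0.0, 2 * np.pi, N)
G = np.cos(3 * t + psi)

# Direct Monte Carlo estimate of E[(X - G X)^2] = E[X^2 (1 - G)^2]
eps_direct = np.mean((X - G * X) ** 2)

# Factorized form, valid because X and G are independent
eps_factored = np.mean(X ** 2) * np.mean((1 - G) ** 2)

print(eps_direct, eps_factored)
```

The two estimates agree (both near ##1 \cdot (1 - 2\cdot 0 + 1/2) = 1.5## under these assumptions), and in particular the error is not zero even though ##E(X)=0##.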
 
Sorry, that was an abuse of notation on my part. I should have written:
$$Z(\Omega)=Y(\Omega)H(\Omega)$$

And I am looking for $$E((X(t)-Z(t))^2)$$

I was tired yesterday evening after a long exam that day.
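
For a rough numerical handle on ##E((X(t)-Z(t))^2)## with ##Z## the low-pass-filtered version of ##Y##, here is a discretized sketch; the sampling rate, duration, and the white-noise stand-in for ##X(t)## are illustrative assumptions, not part of the problem:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical discretization, purely for illustration:
fs = 100.0              # sampling rate (Hz), assumed
T = 200.0               # duration (s), assumed
t = np.arange(0.0, T, 1.0 / fs)
n = t.size

# X(t): zero-mean white-noise stand-in (E[X] = 0); one draw of psi
X = rng.standard_normal(n)
psi = rng.uniform(0.0, 2 * np.pi)
G = np.cos(3 * t + psi)
Y = G * X

# Ideal low-pass filter H(Omega) = 1 for |Omega| <= 4*pi, else 0,
# applied in the frequency domain: Z(Omega) = Y(Omega) H(Omega)
Omega = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)
H = (np.abs(Omega) <= 4 * np.pi).astype(float)
Z = np.real(np.fft.ifft(np.fft.fft(Y) * H))

# Time-averaged squared error for this realization
eps_hat = np.mean((X - Z) ** 2)
print(eps_hat)
```

This only estimates the time-averaged squared error for a single realization of ##\psi##; averaging over many independent draws of ##\psi## would approximate the ensemble expectation.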
 
