Recent content by JohanL

  1. Calculating the expected value of the square of an integral of Brownian Motion

     Maybe. But I don't know how to apply it in this case. Edit: solved it, thank you!
  2. Calculating the expected value of the square of an integral of Brownian Motion

     I found the exercise and solution online. They don't say anything about T. I'm guessing it's just the upper limit of integration and not a stopping time, if you say it contradicts the other equations. Sorry about that. It seems I can't edit the post now?
  3. Calculating the expected value of the square of an integral of Brownian Motion

     For a standard one-dimensional Brownian motion W(t), calculate: $$E\bigg[\Big(\frac{1}{T}\int\limits_0^TW_t\, dt\Big)^2\bigg]$$ I can't figure out how the middle term simplifies. $$ \mathsf E\left(\int_0^T W_t\mathrm dt\right)^2 = \mathsf E\left[T^2W_T^2\right] - 2T\mathsf E\left[W_T\int_0^T...
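
     A standard route, assuming T is a fixed deterministic time rather than a stopping time, uses Fubini and ##E[W_s W_u] = \min(s, u)## (a sketch of the usual argument, not necessarily the quoted solution's):
     $$\mathsf E\left[\left(\frac{1}{T}\int_0^T W_t\,\mathrm dt\right)^2\right] = \frac{1}{T^2}\int_0^T\!\!\int_0^T \mathsf E[W_s W_u]\,\mathrm ds\,\mathrm du = \frac{1}{T^2}\int_0^T\!\!\int_0^T \min(s, u)\,\mathrm ds\,\mathrm du = \frac{1}{T^2}\cdot\frac{T^3}{3} = \frac{T}{3}.$$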
  4. Autocorrelation function of a Wiener process & Poisson process

     I had only seen the tower law used, and defined, in connection with martingales; the law of total expectation I have of course used, but only in what the wiki page calls the special case. I did not know there was a more general version. Thank you!
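
     For reference, the general statement presumably being referred to is the tower property
     $$\mathsf E\big[\,\mathsf E[X \mid \mathcal G] \mid \mathcal H\,\big] = \mathsf E[X \mid \mathcal H] \qquad \text{whenever } \mathcal H \subseteq \mathcal G,$$
     with the unconditional law of total expectation ##\mathsf E[\mathsf E[X \mid Y]] = \mathsf E[X]## as the special case of a trivial ##\mathcal H##.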
  5. Random process involving CDF and PDF of standard normal

     Homework Statement Let $$ \Phi(x)=\int_{-\infty}^{x} \frac{1} { \sqrt{2\pi} } e^{-y^2 /2} dy $$ and $$ \phi(x)=\Phi^\prime(x)=\frac{1} { \sqrt{2\pi} } e^{-x^2 /2} $$ be the standard normal (zero-mean and unit variance) cumulative probability distribution function and the standard normal...
  6. Autocorrelation function of a Wiener process & Poisson process

     Homework Statement / The Attempt at a Solution: Can anyone possibly explain steps 3 and 4 in this solution?
  7. Limit of a continuous time Markov chain

     Big thank you, pizzasky! That I have understood from the very beginning. I think I was very clear in explaining what I did not understand: ##E(X(s)X(t)) = (\mu^{(s)})_{1} p_{11}(t)## (That it really should have been E(X(s)X(t+s)) didn't make things easier.) And pizzasky's use of the definition...
  8. Inequality involving probability of stationary zero-mean Gaussian

     I don't know how he justifies it or if it's correct. I found it online; it's his solution to an old exam task. I think it's safest if I post a picture of it:
  9. Limit of a continuous time Markov chain

     Thanks. Now it's only these two left to understand :) ##E(X(s)X(t)) = (\mu^{(s)})_{1} p_{11}(t)## and ##R_X(s,s+t) = p_0(s) p_1(s) P_{01}(t) + p_1(s)^2 P_{11}(t)##. What are the definitions, theorems, equations... that lead to these expressions? E.g. how is ##E(X(s)X(t))## defined in this setting? It's...
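
     One plausible reading, assuming ##X(t)## takes values in {0, 1}: the product ##X(s)X(s+t)## equals 1 exactly when both factors are 1, so
     $$E[X(s)X(s+t)] = \sum_{i,j \in \{0,1\}} i\, j\, P(X(s)=i,\ X(s+t)=j) = P(X(s)=1)\, P_{11}(t) = p_1(s)\, P_{11}(t),$$
     where ##p_1(s)## is the probability of being in state 1 at time ##s## and ##P_{11}(t)## is the 1-to-1 transition probability over an interval of length ##t##.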
  10. Inequality involving probability of stationary zero-mean Gaussian

     I'm probably misunderstanding, but... Maybe I wasn't clear. I quoted the lecturer's solution: $$P((X(1) > x) \cap (X(2) > x)) = P(X(1) > x) = P(X(2) > x) = 1 - \Phi(x)$$ when ##\rho = 1## (so that X(1) and X(2) are perfectly positively correlated), giving equality in the desired inequality ...
  11. Limit of a continuous time Markov chain

     Thanks! I am getting there slowly! I think I am with you this far: "If ##P(t) = \exp{( t G)}## (which can be obtained by solving the DE system ##P'(t) = G P(t)## or ##P'(t) = P(t) G##), then for any time ##\tau## the state probability distribution row vector is ##p(\tau) = p(0) P(\tau)##. Thus...
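
     For the two-state generator in this thread, solving ##P'(t) = P(t)G## should give, explicitly,
     $$P(t) = \frac{1}{\alpha+\beta}\begin{pmatrix} \beta + \alpha e^{-(\alpha+\beta)t} & \alpha - \alpha e^{-(\alpha+\beta)t} \\ \beta - \beta e^{-(\alpha+\beta)t} & \alpha + \beta e^{-(\alpha+\beta)t} \end{pmatrix},$$
     so every row converges to the stationary distribution ##\big(\tfrac{\beta}{\alpha+\beta},\ \tfrac{\alpha}{\alpha+\beta}\big)## as ##t \to \infty##.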
  12. Inequality involving probability of stationary zero-mean Gaussian

     Thanks all. Yeah, I think you got it right with 2. It was probably an unclear problem statement, and maybe my bad LaTeX, that made you all a little confused. I found out that the rest of the solution should be "$$P((X(1) > x) \cap (X(2) > x)) = P(X(1) > x) = P(X(2) > x) = 1 - \Phi(x)$$ when ##\rho = 1##...
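
     A plausible justification of that step: with ##R_X(0) = 1## and ##\rho = 1##,
     $$E\big[(X(2) - X(1))^2\big] = R_X(0) - 2R_X(1) + R_X(0) = 2 - 2\rho = 0,$$
     so ##X(2) = X(1)## almost surely, the two events ##\{X(1) > x\}## and ##\{X(2) > x\}## coincide, and the inequality holds with equality.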
  13. Limit of a continuous time Markov chain

     Homework Statement Calculate the limit $$\lim_{s,t\to\infty} R_X(s, s+t) = \lim_{s,t\to\infty} E(X(s)X(s+t))$$ for a continuous time Markov chain $$(X(t);\ t \ge 0)$$ with state space S and generator G given by $$S = (0, 1)$$ $$ G= \begin{pmatrix} -\alpha & \alpha \\ \beta & -\beta\...
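
     Assuming the states are 0 and 1, so that ##E(X(s)X(s+t)) = P(X(s)=1,\ X(s+t)=1) = p_1(s) P_{11}(t)##, one would expect both factors to converge to the stationary probability of state 1, giving
     $$\lim_{s,t\to\infty} R_X(s, s+t) = \left(\frac{\alpha}{\alpha+\beta}\right)^2.$$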
  14. Inequality involving probability of stationary zero-mean Gaussian

     Homework Statement Let $$(X(n),\ n \in [1, 2])$$ be a stationary zero-mean Gaussian process with autocorrelation function $$R_X(0) = 1;\quad R_X(\pm 1) = \rho$$ for a constant ##\rho \in [-1, 1]##. Show that for each ##x \in \mathbb{R}## it holds that $$\max_{n\in[1,2]} P(X(n) > x) \le P\Big(\max_{n\in[1,2]} X(n) > x\Big)$$ Are there any...
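
     One standard argument (which does not even use Gaussianity): for each fixed ##n##, the event ##\{X(n) > x\}## is contained in ##\{\max_{n} X(n) > x\}##, so
     $$P(X(n) > x) \le P\Big(\max_{n \in [1,2]} X(n) > x\Big) \quad \text{for each } n,$$
     and taking the maximum over ##n## on the left-hand side gives the claimed inequality.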