Search results

  1. Weighted Moving Average of Cubic

    I THINK I may have got it. I basically looked at what we have and what we need. In order for us to get back a_0, for example, we need: \frac{1}{2L+1-\frac{I_2^{2}}{I_4}}\sum_{i=-L}^{i=L}a_0(1-i^2\frac{I_2}{I_4}) = a_0. Well, let's multiply through...
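
    Carrying the multiplication through (this completing step is mine, not part of the quoted post): \sum_{i=-L}^{i=L}\left(1-i^2\frac{I_2}{I_4}\right) = (2L+1) - \frac{I_2}{I_4}\sum_{i=-L}^{i=L}i^2 = 2L+1-\frac{I_2^{2}}{I_4}, which exactly cancels the prefactor, leaving a_0.
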
  2. Weighted Moving Average of Cubic

    I mean, one way to write what we have is \frac{1}{2L+1-I_2^{2}/I_4} \sum_{i=-L}^{i=L} (1-i^2\frac{I_2}{I_4}) [a_0 + a_1 (t+i) + a_2 (t+i)^2 + a_3 (t+i)^3]
  3. Weighted Moving Average of Cubic

    I'm unsure how you rearranged the weights to get B_i = A - Bi^2; would you mind clarifying what A and B are?
  4. Weighted Moving Average of Cubic

    Sorry, it just follows the same pattern as I_2: I_4=\sum_{i=-L}^{i=L} i^{4}
  5. Weighted Moving Average of Cubic

    L is arbitrary. For any L, this should be true.
  6. Weighted Moving Average of Cubic

    1. Show that applying a second-order weighted moving average to a cubic polynomial will not change anything. Our polynomial is X_t = a_0 + a_1t + a_2t^2 + a_3t^3. The second-order weighted moving average is \sum_{i=-L}^{i=L} B_iX_{t+i}, where B_i=(1-i^2I_2/I_4)/(2L+1-I_2^{2}/I_4), where...
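
    As a numerical sanity check of the whole claim (my own sketch, not from the thread; numpy assumed, the function name is mine):

        import numpy as np

        def weighted_ma(a, t, L):
            """Apply the second-order weighted moving average with the B_i above to the cubic X."""
            i = np.arange(-L, L + 1, dtype=float)
            I2, I4 = (i**2).sum(), (i**4).sum()
            B = (1 - i**2 * I2 / I4) / (2 * L + 1 - I2**2 / I4)
            X = a[0] + a[1] * (t + i) + a[2] * (t + i)**2 + a[3] * (t + i)**3
            return (B * X).sum()

        a = (1.0, -2.0, 0.5, 3.0)   # a0, a1, a2, a3 (arbitrary test values)
        for L in (2, 5, 20):
            t = 1.7
            X_t = a[0] + a[1] * t + a[2] * t**2 + a[3] * t**3
            print(L, np.isclose(weighted_ma(a, t, L), X_t))   # True for every L
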
  7. Estimate p from a sample of two Binomial Distributions

    Ahh yes, I thought about that also. It's nice to see I wasn't off-base in considering it that way. Why does it not make statistical sense to use the L I suggested (which is based on the fact that X+Y~B(12,p))?
  8. Estimate p from a sample of two Binomial Distributions

    \frac{dL}{dp} = 8p^7(1-p)^4 - 4p^8(1-p)^3 = 0, so p^7(1-p)^3[8(1-p)-4p] = 0, and ignoring p=0, p=1: 8-8p-4p=0, i.e. 8=12p \Rightarrow p=8/12=2/3. Did I miss something? Note: I screwed up my fraction simplification previously, if that's what you meant.
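
    The same computation done symbolically (my own check; sympy assumed):

        import sympy as sp

        p = sp.symbols('p')
        L = p**8 * (1 - p)**4                 # likelihood: 8 successes in 12 trials
        critical = sp.solve(sp.diff(L, p), p)
        print(critical)                       # roots 0, 2/3, 1; discarding the endpoints leaves p = 2/3
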
  9. Estimate p from a sample of two Binomial Distributions

    1. Suppose X~B(5,p) and Y~B(7,p), independent of X. Sampling once from each population gives x=3, y=5. What is the best (minimum-variance unbiased) estimate of p? Homework Equations P(X=x)=\binom{n}{x}p^x(1-p)^{n-x} The Attempt at a Solution My idea is that Maximum Likelihood estimators are...
  10. Covariance - Bernoulli Distribution

    Yes, cov(X,Y) = E(XY)-E(X)E(Y). Moreover, E(XY) = E(XY|X=0)P(X=0) + E(XY|X=1)P(X=1). Now, find P(X=0), P(X=1) (this should be easy). But you are asking about how to find E(Y|X=1)? Well, we have a formula for f(y|X=1), don't we? It's a horizontal line at 1 from 0 to 1. Then, the expected value...
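
    For this particular problem the pieces come out to (my completion of the hint, using f(y|x=1) = 1 on 0<y<1 from the problem statement): E(XY|X=0) = 0, E(XY|X=1) = E(Y|X=1) = \int_0^1 y\,dy = \frac{1}{2}, so E(XY) = 0\cdot(1-p) + \frac{1}{2}\cdot p = \frac{p}{2}.
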
  11. Covariance - Bernoulli Distribution

    Yes, you should get an answer in terms of p. Just use the E(XY) formula above and recall the formula for cov(X,Y).
  12. Covariance - Bernoulli Distribution

    E(XY)=E(XY|X=0)P(X=0) + E(XY|X=1)P(X=1)
  13. Maximum Likelihood Estimator + Prior

    Well, based on the graph of \pi^{n_1}(1-\pi)^{n_2} with several different n_1 and n_2 values plugged in, it looks like the best choice would be \pi=n_1/n when 1/2≤n_1/n≤1; otherwise we choose \pi=1/2, since we usually look at the corner points (1/2 and 1).
  14. Maximum Likelihood Estimator + Prior

    What do you mean by fail? Intuitively, \pi_{ML}=\frac{n_1}{n} would "fail" in the case that \frac{n_1}{n} < 1/2. But I'm not sure what our solution must be then, if it fails.
  15. Maximum Likelihood Estimator + Prior

    1. Suppose that X~B(1,\pi). We sample n times and find n_1 ones and n_2=n-n_1 zeros. a) What is the ML estimator of \pi? b) What is the ML estimator of \pi given 1/2≤\pi≤1? c) What is the probability \pi is greater than 1/2? d) Find the Bayesian estimator of \pi under quadratic loss with this prior. 2. The attempt...
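
    Part (b) can be illustrated numerically (my own sketch; numpy assumed, the function name and sample sizes are made-up examples): the restricted ML estimator is n_1/n when n_1/n ≥ 1/2, and 1/2 otherwise.

        import numpy as np

        def restricted_mle(n1, n):
            """Maximize pi^n1 * (1-pi)^(n-n1) over the restricted range [1/2, 1]."""
            grid = np.linspace(0.5, 1.0, 100001)[:-1]   # drop pi=1 to avoid log(0)
            loglik = n1 * np.log(grid) + (n - n1) * np.log1p(-grid)
            return grid[np.argmax(loglik)]

        print(restricted_mle(8, 12))   # ≈ 2/3: interior maximum at n1/n
        print(restricted_mle(3, 12))   # = 1/2: likelihood is decreasing on [1/2, 1]
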
  16. Maximum Likelihood Estimators

    Not so far, though I'll be talking to a graduate TA about it.
  17. Maximum Likelihood Estimators

    Homework Equations L(x,p) = \prod_{i=1}^{n} \text{pdf}, l = \sum_{i=1}^{n} \log(\text{pdf}). Then solve \frac{dl}{dp}=0 for p (the parameter we are seeking to estimate). The Attempt at a Solution I know how to do this when we are given a pdf, but I'm confused about how to do this when we have a sample.
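
    In code, the recipe looks like this for a concrete sample (a sketch under my own assumptions: a Bernoulli pdf, made-up data, and scipy for the maximization; none of it is from the thread):

        import numpy as np
        from scipy.optimize import minimize_scalar

        sample = np.array([1, 0, 1, 1, 0, 1, 1, 0])   # hypothetical Bernoulli(p) sample

        def neg_loglik(p):
            # l = sum_i log pdf(x_i; p); minimize -l to maximize the likelihood
            return -np.sum(sample * np.log(p) + (1 - sample) * np.log(1 - p))

        res = minimize_scalar(neg_loglik, bounds=(1e-6, 1 - 1e-6), method='bounded')
        print(res.x, sample.mean())   # both ≈ 0.625: for Bernoulli the MLE is the sample mean
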
  18. Trouble taking a derivative

    That's fine. Then, remember: -(y^2-y+1)^{-2} = -\frac{1}{(y^2-y+1)^2}. (An expression raised to the -2 power doesn't equal 1/\sqrt{\text{expression}}.)
  19. Trouble taking a derivative

    Remember: \frac{d}{dx}\frac{f(x)}{g(x)} = \frac{g(x)f'(x) - f(x)g'(x)}{g(x)^2}
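
    A symbolic check of the rule (my sketch with sympy; the particular f and g are hypothetical, chosen to match the (y^2-y+1)^{-2} in the reply above):

        import sympy as sp

        y = sp.symbols('y')
        f, g = sp.Integer(1), y**2 - y + 1
        by_rule = (g * sp.diff(f, y) - f * sp.diff(g, y)) / g**2
        print(sp.simplify(by_rule - sp.diff(f / g, y)))   # 0: the quotient rule agrees with direct differentiation
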
  20. Find P(X+Y<1) in a different way

    1. Let the pdf of X,Y be f(x,y) = x^2 + \frac{xy}{3}, 0<x<1, 0<y<2. Find P(X+Y<1) two ways: a) P(X+Y<1) = P(X<1-Y); b) let U = X + Y, V=X, find the joint distribution of (U,V), then the marginal distribution of U. The Attempt at a Solution a) P(X<1-Y) = ? P(X<1-Y) = \int_0^1...
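
    The setup can be checked numerically before doing it by hand (my own sketch; scipy assumed):

        from scipy.integrate import dblquad

        f = lambda y, x: x**2 + x * y / 3        # dblquad integrates the first argument (y) on the inside

        total, _ = dblquad(f, 0, 1, lambda x: 0, lambda x: 2)       # density integrates to 1
        prob, _ = dblquad(f, 0, 1, lambda x: 0, lambda x: 1 - x)    # P(X+Y<1): y runs from 0 to 1-x
        print(total, prob)                       # 1.0 and 7/72 ≈ 0.0972; both methods should match this
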
  21. Calculating Variances of Functions of Sample Mean

    Solved. Just had to remember the delta method.
  22. Calculating Variances of Functions of Sample Mean

    1. Essentially what I'm trying to do is find the asymptotic distributions for a) Y^2, b) 1/Y, and c) e^Y, where Y = the sample mean of an iid random sample of size n. E(X) = u; V(X) = \sigma^2. Homework Equations a) Y^2=Y\cdot Y, which converges in probability to u^2, V(Y\cdot Y)=\sigma^4 + 2\sigma^2u^2 So...
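
    The delta method mentioned in the previous post, stated for completeness (standard result, my wording, not quoted from the thread): \sqrt{n}\,(g(Y) - g(u)) \xrightarrow{d} N(0, [g'(u)]^2\sigma^2), so the asymptotic variances are 4u^2\sigma^2/n for Y^2, \sigma^2/(nu^4) for 1/Y, and e^{2u}\sigma^2/n for e^Y.
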
  23. Conditional Variances

    That is the way we are instructed to "name" it in class. f(x) is the marginal density function obtained from f(x,y). f_X(x) is equivalent, but 99.9% of the time the Professor uses f(x). I'm still stuck on whether we should be integrating with respect to y or x (use dy or dx). Intuitively, dy makes more...
  24. Conditional Variances

    Well, my thinking was that the solution for V(Y|X) is not dependent on the value of y; thus we would only need to use the marginal distribution f(x) = \int_{-\infty}^{\infty} f(x,y)dy. Even though V(Y|X) contains no y, should we still use the joint pdf? Moreover, I started thinking that we should be...
  25. Conditional Variances

    1. Given f(x,y) = 2, 0<x<y<1, show V(Y) = E(V(Y|X)) + V(E(Y|X)). Homework Equations I've found V(Y|X) = \frac{(1-x)^2}{12} and E(Y|X) = \frac{x+1}{2}. The Attempt at a Solution So, E(V(Y|X))=E\left(\frac{(1-x)^2}{12}\right) = \int_0^y \frac{(1-x)^2}{12}f(x)dx, correct?
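
    Both sides can be verified symbolically (my own sketch with sympy; f(x) = 2(1-x) and f(y) = 2y are the marginals for this density):

        import sympy as sp

        x, y = sp.symbols('x y')
        fx, fy = 2 * (1 - x), 2 * y                   # marginals of f(x,y) = 2 on 0<x<y<1

        E_condvar = sp.integrate((1 - x)**2 / 12 * fx, (x, 0, 1))            # E[V(Y|X)]
        m = sp.integrate((x + 1) / 2 * fx, (x, 0, 1))                        # E[E(Y|X)] = E(Y)
        V_condmean = sp.integrate(((x + 1) / 2)**2 * fx, (x, 0, 1)) - m**2   # V[E(Y|X)]

        VY = sp.integrate(y**2 * fy, (y, 0, 1)) - sp.integrate(y * fy, (y, 0, 1))**2
        print(sp.simplify(E_condvar + V_condmean), sp.simplify(VY))          # both 1/18
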
  26. Covariance - Bernoulli Distribution

    Doh, that was easy. It also makes a lot of sense. Both give the same answer, so it's nice to see I did my summations correctly. Thanks a lot!
  27. Covariance - Bernoulli Distribution

    Hmm, I thought about that, but it seemed difficult to calculate E(XY|X=0), for example. That is probably an oversight of mine, though. But cov(X,Y) = \frac{p(p-1)}{2} is correct?
  28. Covariance - Bernoulli Distribution

    This is where I am now: f(y|x=0) = 1/2 for 0<y<2 and f(y|x=1) = 1 for 0<y<1 → uniform distributions → E(Y|x=0) = 1; E(Y|x=1) = 1/2. Also, E(Y) = E(Y|x=0)P(x=0) + E(Y|x=1)P(x=1) = 1 - p/2. So, P(x=0,y) = (1-p)/3 for any y=0,1,2; P(x=1,y) = 1/2 for y=0,1 and 0 for y=2. Then, cov(x,y) = \sum_{y=0}^2...
  29. Covariance - Bernoulli Distribution

    So are you saying Y is g(X)? But we know f(y|x=0) and f(y|x=1); do we know g(1), g(0)? I tried thinking of it as discrete, but couldn't we just use f_Y(y) = \int f_{X,Y}(x,y)dx = \int f_{Y}(y|X=x)f_X(x)dx, since we would get \int_0^2 \frac{1}{2}\cdot 1\cdot(1-p)dx + \int_0^1 (1\cdot p)dx, since f(x) =...
  30. Covariance - Bernoulli Distribution

    Ya, I thought so. Well, E(Y) = ∫y f(y)dy = ∫y(∫f(x,y)dx)dy, or = ∫y f(x,y) f(x|y)dy. But I can't see how to use f(y|x=0) and f(y|x=1). We do know that f(x) = p^x(1-p)^{1-x}.
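
    The step being asked about can be written out with the law of total probability, since X only takes the values 0 and 1 (this spelling-out is mine): f_Y(y) = f(y|x=0)P(X=0) + f(y|x=1)P(X=1), which here is \frac{1}{2}(1-p) on 0<y<2 plus an extra p on 0<y<1, and then E(Y) = \int_0^2 y f_Y(y)\,dy = (1-p) + \frac{p}{2} = 1 - \frac{p}{2}.
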
  31. Covariance - Bernoulli Distribution

    1. Consider the random variables X,Y, where X~B(1,p) and f(y|x=0) = 1/2 for 0<y<2, f(y|x=1) = 1 for 0<y<1. Find cov(X,Y). Homework Equations Cov(X,Y) = E(XY) - E(X)E(Y) = E[(X-E(X))(Y-E(Y))], E(XY)=E[X E(Y|X)] The Attempt at a Solution E(X) = p (known since it's Bernoulli, can also...
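
    The closed form cov(X,Y) = p(p-1)/2 reached elsewhere in the thread can be checked by simulation (my own sketch; numpy assumed, p = 0.3 is an arbitrary test value):

        import numpy as np

        rng = np.random.default_rng(0)
        p, n = 0.3, 10**6
        X = rng.binomial(1, p, size=n)
        # Y|X=0 ~ Uniform(0,2) (density 1/2); Y|X=1 ~ Uniform(0,1) (density 1)
        Y = np.where(X == 0, rng.uniform(0, 2, size=n), rng.uniform(0, 1, size=n))
        print(np.cov(X, Y)[0, 1], p * (p - 1) / 2)   # both ≈ -0.105
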
  32. Conditional Expectation

    Just kidding, I worked it out, thanks.
  33. Conditional Expectation

    That will give us 2(1-x), so f(y|x) = 1/(1-x). I'm confused how this will give (1+x)/2 for E(Y|x).
  34. Conditional Expectation

    The only other way I can think of doing f(x) would be to integrate from 0 to 1 instead. f(x) is defined as the integral of the joint pdf with respect to y. So, we could get \int_x^1 2\,dy?
  35. Conditional Expectation

    1. Let the joint pdf be f(x,y) = 2; 0<x<y<1. Find E(Y|x) and E(X|y). Homework Equations E(Y|x) = \int y\,f(y|x)dy, f(y|x) = f(x,y)/f(x). The Attempt at a Solution f(x) = \int_0^y 2\,dy = 2y, f(y|x) = f(x,y)/f(x) = 1/(2y), E(Y|x) = \int_x^1 \frac{y}{2y}\,dy = \int_x^1 \frac{1}{2}\,dy...
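
    The corrected computation (f(x) = 2(1-x), so f(y|x) = 1/(1-x) and E(Y|x) = (1+x)/2, as worked out above) can be checked symbolically; this sketch is mine, sympy assumed:

        import sympy as sp

        x, y = sp.symbols('x y', positive=True)
        fx = sp.integrate(2, (y, x, 1))            # marginal of X: integrate the joint over x<y<1
        f_y_given_x = 2 / fx                       # f(y|x) = f(x,y)/f(x)
        E_Y_given_x = sp.integrate(y * f_y_given_x, (y, x, 1))
        print(sp.simplify(E_Y_given_x))            # x/2 + 1/2, i.e. (1+x)/2
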
  36. Integral + Complex Conjugate

    Homework Statement Show that the following = 0: \int_{-\infty}^{+\infty} i\,\overline{\frac{d}{dx}\!\left(\sin(x)\frac{du}{dx}\right)}\cdot u \,\mathrm{d}x + \int_{-\infty}^{+\infty} \overline{u}\cdot\frac{d}{dx}\!\left(\sin(x)\frac{du}{dx}\right) \mathrm{d}x, where \overline{u} = complex conjugate of u and \cdot is the dot product. 2. Work so far...
  37. Does diagonalizable imply symmetric?

    Darn... well, I guess I'll have to find another way to show the PDE is well-posed.
  38. Does diagonalizable imply symmetric?

    In order to prove my PDE system is well-posed, I need to show that if a matrix is diagonalizable and has only real eigenvalues, then it's symmetric. Homework Equations I've found theorems that relate orthogonally diagonalizable and symmetric matrices, but is that sufficient? The...
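
    For what it's worth, the implication is false, which is presumably what the "Darn" in the previous post concedes; a concrete counterexample (mine, not from the thread; numpy assumed):

        import numpy as np

        # Distinct real eigenvalues (1 and 2), so A is diagonalizable with an
        # all-real spectrum, yet A is plainly not symmetric.
        A = np.array([[1.0, 1.0],
                      [0.0, 2.0]])
        print(np.linalg.eigvals(A))     # real eigenvalues 1 and 2
        print(np.allclose(A, A.T))      # False: not symmetric
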
  39. Spectral Radius: Matrix

    My bad, I didn't mean to be confusing :/ Thank you so much!
  40. Spectral Radius: Matrix

    Perhaps I'm confused about the induced ∞-norm vs. entry-wise, sorry :/ We haven't discussed entry-wise vs. induced. The point I will be making is that since the row sum of any given row is 1, the ∞-norm = 1 → spectral radius ≤ 1. From there, show that we can construct an eigenvector that, for...
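
    That plan, carried out numerically (my own sketch; numpy assumed, the random row-stochastic matrix is just an example):

        import numpy as np

        rng = np.random.default_rng(1)
        M = rng.random((5, 5))
        A = M / M.sum(axis=1, keepdims=True)       # nonnegative entries, every row sums to 1

        inf_norm = np.abs(A).sum(axis=1).max()     # induced ∞-norm = max absolute row sum
        rho = np.abs(np.linalg.eigvals(A)).max()   # spectral radius
        print(inf_norm, rho)                       # both 1.0: rho ≤ ||A||_∞, with equality here
        print(np.allclose(A @ np.ones(5), np.ones(5)))   # True: the all-ones vector has eigenvalue 1
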
  41. Spectral Radius: Matrix

    Sorry, I should clarify: I don't use the fact that each element is <= 1, but it's obvious from the row sum + non-negative requirements.
  42. Spectral Radius: Matrix

    I wasn't really using the property that all entries are less than 1. I was using the fact that row sum = 1 and non-negative entries together force all entries to be <= 1 in this case.
  43. Spectral Radius: Matrix

    Yes, but that matrix doesn't fit the classification given to us. The sum of any given row must be = 1 by the problem statement.
  44. Spectral Radius: Matrix

    Ya sorry, I ran 10 miles a little bit ago, I'm a little weary haha. As for induced: I think you're right that it only works for induced norms. But the idea should still work here, since ||.||_∞ is an induced norm and that's the norm we care about for this problem.
  45. Spectral Radius: Matrix

    Sorry, I must have thought of that wrong: if we find a \lambda>1, then \varsigma(A)>1. But idk how you would find such an eigenvalue lol, so that doesn't make sense. Hm... well, if a row sums to 1, then (matrix row)·(vector of all 1's) = 1, correct? Then Ax = \lambda x...
  46. Spectral Radius: Matrix

    Hmm, interesting. I suppose that does make sense! I'm still not sure exactly how to proceed. From \sum_{j=1}^n 1\cdot a_{ij}=1, we know that 1 · (sum of any row) = 1. Even if we find such an eigenvector that gives us an eigenvalue of 1 for all matrices, how does that help us? After all, what's...
  47. Spectral Radius: Matrix

    Hmm, I'll have to think about that and the hint. In the meantime: http://en.wikipedia.org/wiki/Matrix_norm Scroll down to where it says "Any induced norm satisfies the inequality." There is some justification. It seems very important when you want to look at convergence and successive powers...
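
    The inequality in question has a one-line justification (standard argument, my wording): if Av = \lambda v with v \neq 0, then for any induced norm |\lambda|\,\|v\| = \|Av\| \leq \|A\|\,\|v\|, so |\lambda| \leq \|A\|; maximizing over the eigenvalues gives \rho(A) \leq \|A\|.
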