# Search results

1. ### Weighted Moving Average of Cubic

I THINK I may have got it. I basically looked at what we had, and what we need. In order for us to get back a0, for example, we need: \frac{1}{2L+1-\frac{I_2^{2}}{I_4}}\sum_{i=-L}^{i=L}a_0(1-i^2\frac{I_2}{I_4}) = a_0 Well, let's multiply through...
2. ### Weighted Moving Average of Cubic

I mean, one way to write what we have is \frac{1}{2L+1-I_2^{2}/I_4} \sum_{i=-L}^{i=L} (1-i^2\frac{I_2}{I_4}) [a_0 + a_1 (t+i) + a_2 (t+i)^2 + a_3 (t+i)^3]
3. ### Weighted Moving Average of Cubic

I'm unsure how you rearranged the weights to get B_i = A - B i^2; would you mind clarifying what A and B are?
4. ### Weighted Moving Average of Cubic

Sorry, it just follows the same pattern as I2: I_4=\sum_{i=-L}^{i=L} i^{4}
5. ### Weighted Moving Average of Cubic

L is any arbitrary number. For any L, this should be true.
6. ### Weighted Moving Average of Cubic

1. Show that applying a second-order weighted moving average to a cubic polynomial will not change anything. X_t = a_0 + a_1t + a_2t^2 + a_3t^3 is our polynomial Second-order weighted moving average: \sum_{i=-L}^{i=L} B_iX_{t+i} where B_i=(1-i^2I_2/I_4)/(2L+1-I_2^{2}/I_4) where...
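The claim in this thread can be checked numerically. A quick sketch (mine, not from the thread): build the weights B_i for an assumed L = 3, apply them to an arbitrary cubic, and confirm the cubic comes back unchanged.

```python
# Numeric check (a sketch, with an assumed L = 3 and assumed coefficients):
# the weighted moving average with B_i = (1 - i^2 I2/I4)/(2L+1 - I2^2/I4)
# should reproduce a cubic X_t = a0 + a1 t + a2 t^2 + a3 t^3 exactly.
L = 3
I2 = sum(i**2 for i in range(-L, L + 1))
I4 = sum(i**4 for i in range(-L, L + 1))
denom = (2 * L + 1) - I2**2 / I4
B = {i: (1 - i**2 * I2 / I4) / denom for i in range(-L, L + 1)}

a0, a1, a2, a3 = 2.0, -1.5, 0.5, 0.25
X = lambda t: a0 + a1 * t + a2 * t**2 + a3 * t**3

for t in range(-5, 6):
    smoothed = sum(B[i] * X(t + i) for i in range(-L, L + 1))
    assert abs(smoothed - X(t)) < 1e-9  # the cubic passes through unchanged
```

The check works because the odd moments of a symmetric window vanish, the weights sum to 1, and the second moment of the weights is zero by construction.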
7. ### Estimate p from sample of two Binomial Distributions

Ahh yes, I thought about that also. It's nice to see I wasn't off-base by considering it that way. Why does it not make statistical sense to use the L I suggested (which is based on the fact that X+Y~B(12,p))?
8. ### Estimate p from sample of two Binomial Distributions

\frac{dL}{dp} = 8p^7(1-p)^4 - 4p^8(1-p)^3=0 \Rightarrow p^7(1-p)^3[8(1-p)-4p]=0 \Rightarrow 8-8p-4p=0 (ignoring p=0, p=1) \Rightarrow 8=12p \Rightarrow p=8/12=2/3 Did I miss something? Note: I screwed up my fraction simplification previously, if that's what you meant.
9. ### Estimate p from sample of two Binomial Distributions

1. Suppose X~B(5,p) and Y~B(7,p), independent of X. Sampling once from each population gives x=3, y=5. What is the best (minimum-variance unbiased) estimate of p? Homework Equations P(X=x)=\binom{n}{x}p^x(1-p)^{n-x} The Attempt at a Solution My idea is that Maximum Likelihood estimators are...
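A quick numeric sketch (mine, not from the thread): with x = 3 successes out of 5 and y = 5 out of 7, the joint likelihood is proportional to p^8(1-p)^4, and a grid maximization should land at p = 8/12 = 2/3, matching the derivative calculation in this thread.

```python
import math

# Grid-maximize the combined log-likelihood 8 log p + 4 log(1-p);
# the peak should sit at p = 8/12 = 2/3.
def log_lik(p):
    return 8 * math.log(p) + 4 * math.log(1 - p)

best = max((k / 10000 for k in range(1, 10000)), key=log_lik)
assert abs(best - 2 / 3) < 1e-3
```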
10. ### Covariance - Bernoulli Distribution

Yes, cov(X,Y) = E(XY)-E(X)E(Y). Moreover, E(XY) = E(XY|X=0)P(X=0) + E(XY|X=1)P(X=1). Now, find P(X=0), P(X=1) (this should be easy). But you are asking how to find E(Y|X=1)? Well, we have a formula for f(Y|X=1), don't we? It's a horizontal line at height 1 from 0 to 1. Then, the expected value...
11. ### Covariance - Bernoulli Distribution

Yes, you should get an answer in terms of p. Just use the E(XY) formula above and recall the formula for cov(X,Y).
12. ### Covariance - Bernoulli Distribution

E(XY)=E(XY|X=0)P(X=0) + E(XY|X=1)P(X=1)
13. ### Covariance - Bernoulli Distribution

What do you mean?
14. ### Maximum Likelihood Estimator + Prior

Well, based on the graph of \pi^{n_1}(1-\pi)^{n_2} with several different n_1 and n_2 values plugged in, the best choice would be \pi=n_1/n when 1/2≤n_1/n≤1; otherwise we choose \pi=1/2, since we usually look at the corner points (1/2 and 1)
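The claim above can be checked numerically; a sketch (mine, with assumed counts): grid-maximize \pi^{n_1}(1-\pi)^{n_2} over [1/2, 1) and confirm the maximizer is n_1/n when n_1/n ≥ 1/2 and the corner 1/2 otherwise.

```python
import math

# Constrained ML sketch: maximize the log-likelihood on the grid [1/2, 1).
# (Grid stops short of 1 so log(1 - p) stays finite when n2 > 0.)
def constrained_mle(n1, n2):
    grid = [0.5 + 0.5 * k / 100000 for k in range(100000)]
    return max(grid, key=lambda p: n1 * math.log(p) + n2 * math.log(1 - p))

assert abs(constrained_mle(7, 3) - 0.7) < 1e-4   # n1/n = 0.7 >= 1/2
assert abs(constrained_mle(3, 7) - 0.5) < 1e-4   # n1/n < 1/2 -> corner 1/2
```

In the second case the log-likelihood is decreasing on the whole interval (its unconstrained peak is at 0.3), so the boundary point 1/2 wins, as argued above.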
15. ### Maximum Likelihood Estimator + Prior

What do you mean by fail? Intuitively, \pi_{ML}=\frac{n_1}{n} would "fail" in the case that \frac{n_1}{n} < 1/2. But I'm not sure what our solution must be then, if it fails.
16. ### Maximum Likelihood Estimator + Prior

1. Suppose that X~B(1,π). We sample n times and find n_1 ones and n_2 = n - n_1 zeros. a) What is the ML estimator of π? b) What is the ML estimator of π given 1/2≤π≤1? c) What is the probability π is greater than 1/2? d) Find the Bayesian estimator of π under quadratic loss with this prior 2. The attempt...
17. ### Maximum Likelihood Estimators

Not so far, though I'll be talking to a graduate TA about it.
18. ### Maximum Likelihood Estimators

Homework Equations L(x,p) = \prod_{i=1}^n \text{pdf}; l = \sum_{i=1}^n \log(\text{pdf}). Then solve \frac{dl}{dp}=0 for p (the parameter we are seeking to estimate). The Attempt at a Solution I know how to do this when we are given a pdf, but I'm confused about how to do this when we have a sample.
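A sketch of the recipe above applied to a concrete sample (my own example, assuming a Bernoulli(p) pdf): l(p) = n_1 log p + n_0 log(1-p), and solving dl/dp = 0 gives p-hat = sample mean; a grid maximization agrees.

```python
import math

# Assumed illustrative sample; n1 ones and n0 zeros.
sample = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
n1 = sum(sample)
n0 = len(sample) - n1

# Maximize l(p) = n1 log p + n0 log(1-p) over a grid in (0, 1).
grid = [k / 10000 for k in range(1, 10000)]
p_hat = max(grid, key=lambda p: n1 * math.log(p) + n0 * math.log(1 - p))
assert abs(p_hat - n1 / len(sample)) < 1e-3  # MLE = sample mean
```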

19. ### Maximum Likelihood Estimators

Resolved.
20. ### Trouble taking a derivative

That's fine. Then, remember: -(y^2-y+1)^{-2} = -\frac{1}{(y^2-y+1)^2} (an expression to the -2 power equals 1 over its square, not 1/sqrt(expression))
21. ### Trouble taking a derivative

Remember: \frac{d}{dx}\frac{f(x)}{g(x)} = \frac{g(x)f'(x) - f(x)g'(x)}{g(x)^2}
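A quick numeric sanity check of the quotient rule (my sketch, using the 1/(y^2-y+1) expression from this thread): the rule with f = 1 gives -g'/g^2, which a central difference confirms.

```python
# f(y) = 1 / (y^2 - y + 1); the quotient rule gives
# f'(y) = -(2y - 1) / (y^2 - y + 1)^2.
y = 0.7
h = 1e-6
f = lambda t: 1.0 / (t * t - t + 1.0)
numeric = (f(y + h) - f(y - h)) / (2 * h)   # central difference
exact = -(2 * y - 1) / (y * y - y + 1) ** 2
assert abs(numeric - exact) < 1e-6
```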
22. ### Find P(X+Y<1) in a different way

1. Let the pdf of X,Y be f(x,y) = x^2 + \frac{xy}{3}, 0<x<1, 0<y<2 Find P(X+Y<1) two ways: a) P(X+Y<1) = P(X<1-Y) b) Let U = X + Y, V=X, and finding the joint distribution of (U,V), then the marginal distribution of U. The Attempt at a Solution a) P(X<1-Y) = ? P(x<1-y) = \int_0^1...
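For part (a), a brute-force numeric sketch (mine, not from the thread) of P(X+Y<1): integrate f(x,y) = x^2 + xy/3 over the triangle x + y < 1 by a midpoint rule. If I've done the iterated integral by hand correctly, the answer is 7/72.

```python
# Midpoint-rule double integral of f(x,y) = x^2 + xy/3 over
# {0<x<1, 0<y<2, x+y<1}; hand integration suggests 7/72.
n = 1000
hx, hy = 1.0 / n, 2.0 / n
total = 0.0
for i in range(n):
    x = (i + 0.5) * hx
    for j in range(n):
        y = (j + 0.5) * hy
        if x + y < 1:
            total += (x * x + x * y / 3) * hx * hy
assert abs(total - 7 / 72) < 5e-3
```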
23. ### Calculating Variances of Functions of Sample Mean

Solved. Just had to remember the delta method.
24. ### Calculating Variances of Functions of Sample Mean

1. Essentially what I'm trying to do is find the asymptotic distributions for a) Y^2, b) 1/Y, and c) e^Y, where Y = sample mean of an iid random sample of size n, E(X) = u, V(X) = \sigma^2. Homework Equations a) Y^2=Y\cdot Y, which converges in probability to u^2; V(Y\cdot Y)=\sigma^4 + 2\sigma^2u^2. So...
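The delta method mentioned above says V(g(Y)) ≈ g'(u)^2 σ^2/n for the sample mean Y. A simulation sketch (mine, with assumed values u = 2, σ = 1, n = 400) for case (a), g(y) = y^2:

```python
import random

# Delta method for g(Ybar) = Ybar^2: asymptotic variance (2u)^2 * sigma^2 / n.
random.seed(1)
mu, sigma, n, reps = 2.0, 1.0, 400, 5000
vals = []
for _ in range(reps):
    ybar = sum(random.gauss(mu, sigma) for _ in range(n)) / n
    vals.append(ybar ** 2)                       # g(Ybar)
m = sum(vals) / reps
var_hat = sum((v - m) ** 2 for v in vals) / reps  # simulated variance
delta = (2 * mu) ** 2 * sigma ** 2 / n            # g'(mu)^2 * sigma^2 / n
assert abs(var_hat - delta) / delta < 0.15
```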
25. ### Conditional Variances

That is the way we are instructed to "name" it in class: f(x) is the marginal density obtained from the joint f(x,y). f_X(x) is equivalent, but 99.9% of the time the Professor uses f(x). I'm still stuck on whether we should be integrating with respect to y or x (use dy or dx). Intuitively, dy makes more...
26. ### Conditional Variances

Well, my thinking was that the solution for V(Y|X) is not dependent on the value of y, thus we would only need to use the marginal dist f(x) = \int_{-\infty}^{\infty} f(x,y)dy Even though V(Y|X) contains no y, should we still use the joint pdf? Moreover, I started thinking that we should be...
27. ### Conditional Variances

1. Given f(x,y) = 2, 0<x<y<1, show V(Y) = E(V(Y|X)) + V(E(Y|X)). Homework Equations I've found V(Y|X) = \frac{(1-x)^2}{12} and E(Y|X) = \frac{x+1}{2} The Attempt at a Solution So, E(V(Y|X))=E\left(\frac{(1-x)^2}{12}\right) = \int_0^y \frac{(1-x)^2}{12}f(x)dx, correct?
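A numeric sketch (mine) of the decomposition using the pieces found above, assuming the marginal works out to f_X(x) = 2(1-x) (from integrating the joint over y ∈ (x,1)); the direct marginal f_Y(y) = 2y gives V(Y) = 1/2 - (2/3)^2 = 1/18 for comparison.

```python
# Check V(Y) = E(V(Y|X)) + V(E(Y|X)) by midpoint-rule quadrature,
# with V(Y|X) = (1-x)^2/12, E(Y|X) = (x+1)/2, f_X(x) = 2(1-x).
n = 10000
h = 1.0 / n
xs = [(k + 0.5) * h for k in range(n)]
fx = lambda x: 2 * (1 - x)
E_condvar = sum((1 - x) ** 2 / 12 * fx(x) for x in xs) * h
E_condE = sum((x + 1) / 2 * fx(x) for x in xs) * h
E_condE2 = sum(((x + 1) / 2) ** 2 * fx(x) for x in xs) * h
V_condE = E_condE2 - E_condE ** 2
# direct calculation: f_Y(y) = 2y on (0,1) -> V(Y) = 1/18
assert abs((E_condvar + V_condE) - 1 / 18) < 1e-6
```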
28. ### Covariance - Bernoulli Distribution

Doh, that was easy. It also makes a lot of sense. Both give the same answer, so it's nice to see I did my summations correctly. Thanks a lot!
29. ### Covariance - Bernoulli Distribution

Hmm, I thought about that but when I was thinking about it it seemed difficult to calculate E(XY|X=0) for example. That is probably an oversight of mine, though. But, cov(x,y) = \frac{p(p-1)}{2} is correct?
30. ### Covariance - Bernoulli Distribution

This is where I am now: f(y|x=0) = 1/2 for 0<y<2 and f(y|x=1) = 1 for 0<y<1, i.e. uniform distributions, so E(Y|x=0) = 1; E(Y|x=1) = 1/2. Also, E(Y) = E(Y|x=0)P(x=0) + E(Y|x=1)P(x=1) = 1 - p/2. So, P(x=0,y) = (1-p)/3 for any y=0,1,2; P(x=1,y) = 1/2 for y=0,1 and 0 for y=2. Then, cov(x,y) = \sum_{y=0}^2...
31. ### Covariance - Bernoulli Distribution

So are you saying Y is g(X)? But we know f(y|x=0) and f(y|x=1); do we know g(1), g(0)? I tried thinking of it as discrete, but couldn't we just use f_Y(y) = \int f_{X,Y}(x,y)dx =\int f_{Y}(y|X=x)f_X(x)dx, since we would get \int_0^2 1/2*1*(1-p)dx + \int_0^1 (1*p)dx, since f(x) =...
32. ### Covariance - Bernoulli Distribution

Ya, I thought so. Well, E(Y) = ∫y f(y)dy = ∫y(∫f(x,y)dx)dy, or = ∫y f(x,y)f(x|y)dy. But I can't see how to use f(y|x=0) and f(y|x=1). We do know that f(x) = p^x(1-p)^{1-x}
33. ### Covariance - Bernoulli Distribution

1. Consider the random variables X,Y where X~B(1,p) and f(y|x=0) = 1/2, 0<y<2; f(y|x=1) = 1, 0<y<1. Find cov(x,y). Homework Equations Cov(x,y) = E(XY) - E(X)E(Y) = E[(x-E(x))(y-E(y))] E(XY)=E[XE(Y|X)] The Attempt at a Solution E(X) = p (known since it's Bernoulli, can also...
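A Monte Carlo sketch (mine, with an assumed value p = 0.3): simulate X ~ Bernoulli(p), draw Y | X=0 ~ Uniform(0,2) and Y | X=1 ~ Uniform(0,1), and compare the sample covariance with the value p(p-1)/2 discussed earlier in this thread.

```python
import random

# Simulate the mixed discrete/continuous pair and estimate cov(X, Y).
random.seed(0)
p, N = 0.3, 200_000
sx = sy = sxy = 0.0
for _ in range(N):
    x = 1.0 if random.random() < p else 0.0
    y = random.uniform(0, 1) if x else random.uniform(0, 2)
    sx += x; sy += y; sxy += x * y
cov = sxy / N - (sx / N) * (sy / N)
assert abs(cov - p * (p - 1) / 2) < 0.005   # p(p-1)/2 = -0.105 here
```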
34. ### Conditional Expectation

Just kidding I worked it out, thanks.
35. ### Conditional Expectation

That will give us 2(1-x), so f(y|x) = 1/(1-x). I'm confused how this will give (1+x)/2 for E(Y|x)
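A sketch (mine) of the step being asked about: with f(y|x) = 1/(1-x) on (x, 1), the conditional mean is E(Y|x) = \int_x^1 y/(1-x)\,dy = (1-x^2)/(2(1-x)) = (1+x)/2. A midpoint-rule integration at an assumed x = 0.4 agrees:

```python
# Integrate y / (1-x) over y in (x, 1) and compare with (1+x)/2.
x = 0.4
n = 100000
h = (1 - x) / n
E = sum((x + (k + 0.5) * h) / (1 - x) for k in range(n)) * h
assert abs(E - (1 + x) / 2) < 1e-9   # (1+x)/2 = 0.7 here
```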
36. ### Conditional Expectation

The only other way I can think of doing f(x) would be to integrate from 0 to 1 instead. f(x) is defined as the integral of the joint pdf with respect to y. So, we would get \int 2\,dy from x to 1?
37. ### Conditional Expectation

1. Let the joint pdf be f(x,y) = 2 ; 0<x<y<1 ; 0<y<1 Find E(Y|x) and E(X|y) Homework Equations E(Y|x) = \int Y*f(y|x)dy f(y|x) = f(x,y) / f(x) The Attempt at a Solution f(x) = \int 2dy from 0 to y = 2y f(y|x) = f(x,y)/f(x) = 1/2y E(Y|x) = \int Y/2Y dy from x to 1 = \int 1/2 dy from x to 1...

38. ### Integral + Complex Conjugate

Bump.
39. ### Integral + Complex Conjugate

Homework Statement Show that the following = 0: \int_{-\infty}^{+\infty} \! i*\overline{\left(\frac{d}{dx}\left(\sin(x)\frac{du}{dx}\right)\right)}*u \, \mathrm{d} x + \int_{-\infty}^{+\infty} \! \overline{u}*\left(\frac{d}{dx}\left(\sin(x)\frac{du}{dx}\right)\right) \, \mathrm{d} x where \overline{u} = complex conjugate of u and * is the dot product. 2. Work so far...
40. ### Does diagonalizable imply symmetric?

Darn.. well I guess I need to find another way to show the PDE is well-posed.
41. ### Does diagonalizable imply symmetric?

In order to prove my PDE system is well-posed, I need to show that if a matrix is diagonalizable and has only real eigenvalues, then it's symmetric. Homework Equations I've found theorems that relate orthogonally diagonalizable and symmetric matrices, but is that sufficient? The...
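For what it's worth, the claim as stated has counterexamples (my sketch, consistent with the "Darn" above): a matrix can be diagonalizable with only real eigenvalues and still not be symmetric.

```python
# A = [[1, 1], [0, 2]] is upper triangular, so its eigenvalues are the
# diagonal entries 1 and 2: distinct and real, hence A is diagonalizable.
# Yet A is clearly not symmetric.
A = [[1.0, 1.0], [0.0, 2.0]]
eigs = sorted([A[0][0], A[1][1]])
assert eigs == [1.0, 2.0]       # distinct real eigenvalues -> diagonalizable
assert A[0][1] != A[1][0]       # not symmetric
```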

My bad, I didn't mean to be confusing :/ Thank you so much!

Perhaps I'm confused about the induced ∞ norm vs. the entry-wise one, sorry :/ We haven't discussed entry-wise vs. induced. The point I will be making is that since the row sum of any given row is 1, the ∞ norm = 1, so the spectral radius ≤ 1. From there, show that we can construct an eigenvector that, for...
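A sketch (mine, under the assumptions stated in this thread: non-negative entries, each row summing to 1): the induced ∞ norm is the maximum absolute row sum, here 1, so the spectral radius is at most 1; and the all-ones vector is an eigenvector with eigenvalue exactly 1.

```python
# An assumed example matrix with non-negative entries and row sums of 1.
A = [[0.2, 0.5, 0.3],
     [0.1, 0.8, 0.1],
     [0.4, 0.4, 0.2]]
inf_norm = max(sum(abs(a) for a in row) for row in A)  # induced infinity norm
assert abs(inf_norm - 1.0) < 1e-9                      # so spectral radius <= 1
ones = [1.0, 1.0, 1.0]
Av = [sum(A[i][j] * ones[j] for j in range(3)) for i in range(3)]
assert all(abs(v - 1.0) < 1e-9 for v in Av)            # eigenvalue 1 attained
```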

Sorry, I should clarify: I don't use the fact that each element is <= 1 directly; it follows from the row-sum and non-negativity requirements.

I wasn't really using the property that all entries are less than 1. I was using the fact that row sum = 1 and non-negative entries together force all entries to be <= 1 in this case.

Yes, but that matrix doesn't fit the classification given to us. The sum of any given row must be = 1 by the problem statement.