I THINK I may have got it. I basically looked at what we had, and what we need.
In order for us to get back a0, for example, we need:
\frac{1}{2L+1-\frac{I_2^{2}}{I_4}}\sum_{i=-L}^{i=L}a_0(1-i^2\frac{I_2}{I_4}) = a_0
Well, let's multiply through...
I mean, one way to write what we have is
\frac{1}{2L+1-I_2^{2}/I_4} \sum_{i=-L}^{i=L} (1-i^2\frac{I_2}{I_4}) [a_0 + a_1 (t+i) + a_2 (t+i)^2 + a_3 (t+i)^3]
1. Show that applying a second-order weighted moving average to a cubic polynomial leaves the polynomial unchanged.
X_t = a_0 + a_1t + a_2t^2 + a_3t^3 is our polynomial
Second-order weighted moving average: \sum_{i=-L}^{i=L} B_iX_{t+i}
where B_i=(1-i^2I_2/I_4)/(2L+1-I_2^{2}/I_4)
where...
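A quick numerical check may help here (a sketch in Python; it assumes I_2 = \sum_{i=-L}^{L} i^2 and I_4 = \sum_{i=-L}^{L} i^4, which is exactly what makes the B_i sum to 1):
[code]
# Sanity check (sketch): the weights B_i = (1 - i^2*I2/I4)/(2L+1 - I2^2/I4)
# should reproduce any cubic exactly, assuming I2 = sum(i^2), I4 = sum(i^4).
L = 3
idx = range(-L, L + 1)
I2 = sum(i**2 for i in idx)
I4 = sum(i**4 for i in idx)
B = [(1 - i**2 * I2 / I4) / (2*L + 1 - I2**2 / I4) for i in idx]

def X(t):
    # an arbitrary cubic a0 + a1*t + a2*t^2 + a3*t^3
    return 2.0 - 1.0*t + 0.5*t**2 + 0.25*t**3

t = 5.0
print(sum(b * X(t + i) for b, i in zip(B, idx)), X(t))  # the two should match
[/code]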
Ahh yes, I thought about that also. It's nice to see I wasn't off-base by considering that way.
Why does it not make statistical sense to use the L I suggested (which is based on the fact that X+Y~B(12,p))?
\frac{dL}{dp} = 8p^7(1-p)^4 - 4p^8(1-p)^3=0
p^7(1-p)^3[8(1-p)-4p]=0
8-8p-4p=0, ignoring the roots p=0 and p=1
8=12p \Rightarrow p=8/12=2/3
Did I miss something?
Note: I screwed up my fraction simplification previously if that's what you meant.
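For what it's worth, a quick grid search confirms that the likelihood p^8(1-p)^4 (from x + y = 8 successes in 12 trials) peaks at 2/3 (a sketch in plain Python):
[code]
# Sketch: locate the maximizer of the combined likelihood p^8 * (1-p)^4.
ps = [k / 1000 for k in range(1, 1000)]
lik = [p**8 * (1 - p)**4 for p in ps]
print(ps[lik.index(max(lik))])  # prints 0.667, i.e. p = 2/3
[/code]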
1. Suppose X~B(5,p) and Y~B(7,p), independent of X. Sampling once from each population gives x=3, y=5. What is the best (minimum-variance unbiased) estimate of p?
Homework Equations
P(X=x)=\binom{n}{x}p^x(1-p)^{n-x}
The Attempt at a Solution
My idea is that Maximum Likelihood estimators are...
Yes, cov(X,Y) = E(XY)-E(X)E(Y).
Moreover, E(XY) = E(XY|X=0)P(X=0) + E(XY|X=1)P(X=1)
Now, find P(X=0), P(X=1) (this should be easy).
But, you are asking how to find E(Y|X=1)? Well, we have a formula for f(y|X=1), don't we? It's a horizontal line at height 1 from 0 to 1, i.e. a uniform density. Then, the expected value...
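Concretely (a sketch using the densities given in the problem, and the fact that XY = 0 whenever X = 0):
E(Y|X=1) = \int_0^1 y\cdot 1 \, dy = \frac{1}{2}
E(XY) = 0\cdot P(X=0) + E(Y|X=1)\cdot P(X=1) = \frac{p}{2}
which, combined with E(X) = p and E(Y) = 1 - p/2, gives cov(X,Y) = \frac{p}{2} - p\left(1-\frac{p}{2}\right) = \frac{p(p-1)}{2}.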
Well, based on the graph of \pi^{n_1}(1-\pi)^{n_2} with several different n_1 and n_2 values plugged in, the best choice would be \pi=n_1/n when 1/2≤n_1/n≤1; otherwise we choose \pi=1/2, since we usually look at the corner points (1/2 and 1).
What do you mean fail?
Intuitively, \pi_{ML}=\frac{n_1}{n} would "fail" in the case that \frac{n_1}{n} < 1/2.
But, I'm not sure what our solution must be then if it fails.
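One way to make the boundary reasoning precise (a sketch of the standard constrained-ML argument):
\frac{d}{d\pi}\log L(\pi) = \frac{n_1}{\pi} - \frac{n_2}{1-\pi} < 0 \quad \text{for all } \pi > \frac{n_1}{n},
so when \frac{n_1}{n} < \frac{1}{2} the likelihood is strictly decreasing on [1/2, 1] and the constrained maximum sits at the boundary, giving \pi_{ML} = \max\left(\frac{1}{2}, \frac{n_1}{n}\right).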
1. Suppose that X~B(1,\pi). We sample n times and find n_1 ones and n_2 = n - n_1 zeros.
a) What is the ML estimator of \pi?
b) What is the ML estimator of \pi given 1/2≤\pi≤1?
c) What is the probability \pi is greater than 1/2?
d) Find the Bayesian estimator of \pi under quadratic loss with this prior
2. The attempt...
Homework Equations
L(x,p) = \prod_{i=1}^n f(x_i; p)
\ell = \sum_{i=1}^n \log f(x_i; p)
Then solve \frac{d\ell}{dp}=0 for p (the parameter we are seeking to estimate)
The Attempt at a Solution
I know how to do this when we are given a pdf, but I'm confused how to do this when we have a sample.
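A sketch of how the sample enters, assuming the x_i are independent B(1,\pi) draws (so each contributes the pmf \pi^{x_i}(1-\pi)^{1-x_i}):
L(\pi) = \prod_{i=1}^n \pi^{x_i}(1-\pi)^{1-x_i} = \pi^{n_1}(1-\pi)^{n_2}
\ell(\pi) = n_1\log\pi + n_2\log(1-\pi), \quad \frac{d\ell}{d\pi} = \frac{n_1}{\pi} - \frac{n_2}{1-\pi} = 0 \;\Rightarrow\; \pi_{ML} = \frac{n_1}{n}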
1. Let the pdf of X,Y be f(x,y) = x^2 + \frac{xy}{3}, 0<x<1, 0<y<2
Find P(X+Y<1) two ways:
a) P(X+Y<1) = P(X<1-Y)
b) Let U = X + Y and V = X, find the joint distribution of (U,V), and then the marginal distribution of U.
The Attempt at a Solution
a) P(X<1-Y) = ?
P(x<1-y) = \int_0^1...
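As a numerical cross-check of part (a) (a sketch; it assumes the relevant region is 0 < x < 1 with 0 < y < 1 - x):
[code]
# Sketch: P(X + Y < 1) for f(x,y) = x^2 + x*y/3 on 0 < x < 1, 0 < y < 2.
from scipy.integrate import dblquad

# dblquad integrates func(y, x); the y-limits may depend on x,
# and here y runs from 0 to 1 - x so that x + y < 1.
p, err = dblquad(lambda y, x: x**2 + x*y/3, 0, 1, 0, lambda x: 1 - x)
print(p)  # ~0.0972, i.e. 7/72
[/code]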
1. Essentially what I'm trying to do is find the asymptotic distributions for
a) Y^2
b) 1/Y and
c) e^Y, where
Y = the sample mean of an iid random sample of size n.
E(X) = u; V(X) = σ^2
Homework Equations
a) Y^2=Y*Y which converges in probability to u^2,
V(Y*Y)=\sigma^4 + 2\sigma^2u^2
So...
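For what it's worth, the usual route here is the delta method (a sketch; \bar Y denotes the sample mean):
By the CLT, \sqrt{n}(\bar Y - u) \xrightarrow{d} N(0,\sigma^2), and for smooth g with g'(u)\neq 0 the delta method gives
\sqrt{n}\left(g(\bar Y) - g(u)\right) \xrightarrow{d} N\left(0, [g'(u)]^2\sigma^2\right),
so g(y)=y^2 yields asymptotic variance 4u^2\sigma^2, g(y)=1/y yields \sigma^2/u^4, and g(y)=e^y yields e^{2u}\sigma^2.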
That is the way we are instructed to "name" it in class: f(x) is the marginal density obtained by integrating the joint f(x,y).
f_X(x) is equivalent notation, but 99.9% of the time the Professor uses f(x).
I'm still stuck on whether we should be integrating with respect to y or x (use dy or dx).
Intuitively, dy makes more...
Well, my thinking was that the solution for V(Y|X) is not dependent on the value of y, thus we would only need to use the marginal dist f(x) = \int_{-\infty}^{\infty} f(x,y)dy
Even though V(Y|X) contains no y, should we still use the joint pdf?
Moreover, I started thinking that we should be...
1. Given f(x,y) = 2, 0<x<y<1, show V(Y) = E(V(Y|X)) + V(E(Y|X))
Homework Equations
I've found V(Y|X) = \frac{(1-x)^2}{12} and E(Y|X) = \frac{x+1}{2}
The Attempt at a Solution
So, E(V(Y|X))=E(\frac{(1-x)^2}{12}) = \int_0^y \frac{(1-x)^2}{12}f(x)dx, correct?
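For reference, a sketch of how the pieces fit, assuming the marginal is f(x) = \int_x^1 2\,dy = 2(1-x) on 0 < x < 1 (so the outer integral should run over x from 0 to 1, not 0 to y):
E(V(Y|X)) = \int_0^1 \frac{(1-x)^2}{12}\cdot 2(1-x)\,dx = \int_0^1 \frac{(1-x)^3}{6}\,dx = \frac{1}{24}
V(E(Y|X)) = V\!\left(\frac{X+1}{2}\right) = \frac{V(X)}{4} = \frac{1}{4}\cdot\frac{1}{18} = \frac{1}{72}
and with f_Y(y) = 2y one gets V(Y) = \frac{1}{2} - \frac{4}{9} = \frac{1}{18} = \frac{1}{24} + \frac{1}{72}, as required.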
Hmm, I thought about that but when I was thinking about it it seemed difficult to calculate E(XY|X=0) for example. That is probably an oversight of mine, though.
But, cov(x,y) = \frac{p(p-1)}{2} is correct?
This is where I am now:
f(y|x=0) = 1/2 for 0<y<2 and f(y|x=1) = 1 for 0<y<1 --> uniform distributions --> E(Y|x=0) = 1; E(Y|x=1) = 1/2
Also, E(Y) = E(Y|x=0)P(x=0) + E(Y|x=1)P(x=1) = 1-p/2
So, P(x=0,y) = (1-p)/3 for any y=0,1,2
P(x=1,y) = 1/2 for y=0,1 and 0 for y=2
Then,
cov(x,y) = \sum_{y=0}^2...
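Setting aside the discrete-vs-continuous question for a moment, a quick simulation makes it easy to check the claimed cov(x,y) = p(p-1)/2 (a sketch; it draws Y as uniform given X, per the problem's conditional densities):
[code]
# Monte Carlo check (sketch): X ~ Bernoulli(p); Y|X=0 ~ U(0,2), Y|X=1 ~ U(0,1).
# The claim under test is cov(X, Y) = p(p-1)/2.
import random

p, n = 0.3, 200_000
xs, ys = [], []
for _ in range(n):
    x = 1 if random.random() < p else 0
    y = random.uniform(0, 1) if x == 1 else random.uniform(0, 2)
    xs.append(x)
    ys.append(y)

mx, my = sum(xs) / n, sum(ys) / n
cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
print(cov, p * (p - 1) / 2)  # both ~ -0.105 for p = 0.3
[/code]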
So are you saying Y is g(X)? We know f(y|x=0) and f(y|x=1), but do we know g(1), g(0)?
I tried thinking of it as discrete, but couldn't we just use
f_Y(y) = \int f_{X,Y}(x,y)dx =\int f_{Y}(y|X=x)f_X(x)dx
since we would get \int_0^2 1/2*1*(1-p)dx + \int_0^1 (1*p)dx since f(x) =...
Ya I thought so.
Well, E(Y) = \int y\,f(y)\,dy = \int y \left(\int f(x,y)\,dx\right) dy, or equivalently = \int y\,\frac{f(x,y)}{f(x|y)}\,dy
But, I can't see how to use f(y|x=0) and f(y|x=1)
We do know that f(x) = p^x(1-p)^{1-x}
1. Consider the random variables X,Y where X~B(1,p) and
f(y|x=0) = 1/2, 0<y<2
f(y|x=1) = 1, 0<y<1
Find cov(x,y)
Homework Equations
Cov(X,Y) = E(XY) - E(X)E(Y) = E[(X-E(X))(Y-E(Y))]
E(XY)=E[XE(Y|X)]
The Attempt at a Solution
E(X) = p (known since it's Bernoulli, can also...
The only other way I can think of doing f(x) would be to integrate from 0 to 1 instead. f(x) is defined as the integral of the joint pdf in terms of y.
So, we could get integral(2dy) from x to 1?
1. Let the joint pdf be f(x,y) = 2 ; 0<x<y<1 ; 0<y<1
Find E(Y|x) and E(X|y)
Homework Equations
E(Y|x) = \int y\,f(y|x)\,dy
f(y|x) = f(x,y) / f(x)
The Attempt at a Solution
f(x) = \int_0^y 2\,dy = 2y
f(y|x) = f(x,y)/f(x) = 1/(2y)
E(Y|x) = \int_x^1 \frac{y}{2y}\,dy = \int_x^1 \frac{1}{2}\,dy...
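For comparison, a sketch of where the corrected marginal leads (using f(x) = \int_x^1 2\,dy = 2(1-x), as in the follow-up above):
f(x) = \int_x^1 2\,dy = 2(1-x) \;\Rightarrow\; f(y|x) = \frac{2}{2(1-x)} = \frac{1}{1-x}, \quad x<y<1
E(Y|x) = \int_x^1 \frac{y}{1-x}\,dy = \frac{1+x}{2}
and symmetrically f(y) = \int_0^y 2\,dx = 2y gives f(x|y) = \frac{1}{y}, so E(X|y) = \int_0^y \frac{x}{y}\,dx = \frac{y}{2}.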
Homework Statement
Show that the following = 0:
\int_{-\infty}^{+\infty} \overline{i\,\frac{d}{dx}\!\left(\sin(x)\,\frac{du}{dx}\right)} \cdot u \, \mathrm{d}x + \int_{-\infty}^{+\infty} \overline{u} \cdot i\,\frac{d}{dx}\!\left(\sin(x)\,\frac{du}{dx}\right) \mathrm{d}x, where \overline{u} is the complex conjugate of u and \cdot is the dot product.
2. Work so far...
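One possible route (a sketch, assuming u and du/dx decay fast enough at \pm\infty for the boundary terms to vanish): integrate by parts once in each term.
\int_{-\infty}^{+\infty} \overline{u}\,\frac{d}{dx}\!\left(\sin(x)\frac{du}{dx}\right) dx = -\int_{-\infty}^{+\infty} \sin(x)\left|\frac{du}{dx}\right|^2 dx,
which is real; the first integral is \overline{i} = -i times the conjugate of this same real quantity, while the second carries the factor i, so the two terms cancel.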
In order to prove my PDE system is well-posed, I need to show that if a matrix is diagonalizable and has only real eigenvalues, then it's symmetric.
Homework Equations
I've found theorems that relate orthogonally diagonalizable and symmetric matrices, but is that sufficient?
The...
Perhaps I'm confused about induced ∞ norm vs. entry-wise, sorry :/
We haven't discussed entry-wise vs induced.
The point I will be making is that since the row-sum of any given row is 1, then the ∞ norm = 1 --> spectral radius ≤ 1.
From there, show that we can construct an eigenvector that, for...
I wasn't really using the property that all entries are less than 1. I was using the fact that a row sum of 1 together with non-negative entries forces every entry to be ≤ 1 in this case.
Ya sorry, I ran 10 miles a little bit ago, I'm a little weary haha.
As for induced: I think you're right that it only works for induced norms. But the idea should still work here, since ||·||_∞ is an induced norm and that's the norm we care about for this problem.
Sorry, I must have thought of that wrong: if we find a \lambda>1, then \varsigma(A)>1.
But, idk how you would find such an eigenvalue lol so that doesn't make sense.
Hm.. well if every row sums to 1, then (matrix row)·(vector) = 1 when the vector is all 1's, correct?
Then, Ax = \lambda x...
Hmm, interesting. I suppose that does make sense!
I'm still not sure exactly how to proceed.
From \sum_{j=1}^n a_{ij}\cdot 1 = 1, we know that A applied to the all-ones vector gives 1 in every entry.
Even if we find such a eigenvector that gives us an eigenvalue of 1 for all matrices, how does that help us?
After all, what's...
Hmm, I'll have to think about that and the hint.
In the meantime: http://en.wikipedia.org/wiki/Matrix_norm
Scroll down to where it says "Any induced norm satisfies the inequality."
There is some justification. It seems very important when you want to look at convergence and successive powers...
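To make the row-sum argument concrete, here's a small check (a sketch with a random row-stochastic matrix; numpy assumed):
[code]
# Sketch: non-negative entries with every row summing to 1 means
# (i) the all-ones vector is an eigenvector with eigenvalue 1, and
# (ii) the induced infinity norm (max absolute row sum) equals 1,
# so the spectral radius is squeezed: 1 <= rho(A) <= ||A||_inf = 1.
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((4, 4))
A /= A.sum(axis=1, keepdims=True)        # normalize rows to sum to 1

print(np.allclose(A @ np.ones(4), np.ones(4)))   # True: A*1 = 1
print(np.abs(A).sum(axis=1).max())               # infinity norm: 1.0
print(np.abs(np.linalg.eigvals(A)).max())        # spectral radius: 1.0
[/code]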