Functional analysis, projection operators

Fredrik

Homework Statement



I want to understand the proof of Proposition 7.1 in Conway. The theorem says that if ##\{P_i \mid i\in I\}## is a family of projection operators on a Hilbert space ##H##, and ##P_i## is orthogonal to ##P_j## when ##i\neq j##, then for every ##x\in H##,

$$\sum_{i\in I}P_ix=Px,$$

where ##P## is the projection operator for the closed subspace spanned by the subspaces ##P_i(H)##.
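
To convince myself of what the statement says, here's a finite-dimensional sanity check (just a numpy sketch with toy subspaces of my own choosing, not anything from Conway): two orthogonal coordinate subspaces of ##\mathbb{R}^5##, whose projections should sum to the projection onto their combined span.

```python
# Hypothetical toy example: P1, P2 project onto orthogonal coordinate
# subspaces of R^5; their (finite) sum should agree with the projection P
# onto the closed span of the two subspaces.
import numpy as np

def proj(basis):
    """Orthogonal projection onto the column span of `basis`."""
    q, _ = np.linalg.qr(basis)
    return q @ q.T

e = np.eye(5)
P1 = proj(e[:, [0, 1]])      # M_1 = span{e1, e2}
P2 = proj(e[:, [2]])         # M_2 = span{e3}, orthogonal to M_1
P  = proj(e[:, [0, 1, 2]])   # projection onto the span of M_1 and M_2

x = np.random.randn(5)
print(np.allclose(P1 @ x + P2 @ x, P @ x))   # True
```

(In the infinite-dimensional case the whole point of the proposition is the convergence of the net, which a toy example like this can't show.)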

Homework Equations



Suppose that ##P_M## and ##P_N## are the projection operators for closed subspaces ##M## and ##N## respectively. Then

(a) ##P_M+P_N## is the projection operator for ##M\oplus N## if and only if ##M## and ##N## are orthogonal.
(b) ##P_MP_N## is the projection operator for ##M\cap N## if and only if ##[P_M,P_N]=0##.
(c) ##P_M-P_N## is the projection operator for ##M\cap N^\perp## if and only if ##N\subset M##.
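
Here's a quick numerical illustration of (a) that I find helpful (the concrete subspaces are my own toy choice, not from the book): with orthogonal ##M## and ##N## the sum ##P_M+P_N## is again a projection, while tilting ##N## so it's no longer orthogonal to ##M## destroys idempotence.

```python
# Toy illustration of (a) in R^3: the sum of two orthogonal projections is
# again a projection exactly when their ranges are orthogonal.
import numpy as np

def proj(basis):
    q, _ = np.linalg.qr(basis)
    return q @ q.T

def is_projection(A, tol=1e-10):
    """An orthogonal projection is idempotent and self-adjoint."""
    return np.allclose(A @ A, A, atol=tol) and np.allclose(A, A.T, atol=tol)

e1, e2, e3 = np.eye(3)
PM = proj(e1.reshape(3, 1))                # M  = span{e1}
PN = proj(e2.reshape(3, 1))                # N  = span{e2}, orthogonal to M
PN_tilted = proj((e1 + e2).reshape(3, 1))  # N' = span{e1 + e2}, not orthogonal to M

print(is_projection(PM + PN))         # True:  M ⟂ N
print(is_projection(PM + PN_tilted))  # False: M not ⟂ N'
```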

The Attempt at a Solution



For each finite ##F\subset I##, define

$$S_F=\sum_{i\in F}P_ix.$$

The map ##F\mapsto S_F## is a net, and I need to show that it converges to ##Px##. So given ##\varepsilon>0##, I want to find a finite ##F_0\subset I## such that

$$F\geq F_0\Rightarrow \|S_F-Px\|<\varepsilon.$$

I think I've shown that the norm on the right is equal to

$$\|P_{M_{I-F}}x\|,$$

where ##M_{I-F}## is the closed subspace spanned by the vectors in the ##P_i(H)## with ##i\in I-F##. But then what? I still don't see how to pick an ##F## that makes the above as small as I want.
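
For what it's worth, the identity I think I've shown does check out numerically in a finite-dimensional toy example (subspaces of my own choosing, with ##I=\{1,2,3\}## and ##F=\{1,2\}##; in finite dimensions there's nothing to prove, so this is only a sanity check):

```python
# Check that ||S_F - Px|| = ||P_{M_{I-F}} x|| in a toy example with
# I = {1, 2, 3} and F = {1, 2}.
import numpy as np

def proj(basis):
    q, _ = np.linalg.qr(basis)
    return q @ q.T

e = np.eye(6)
P1, P2, P3 = proj(e[:, [0, 1]]), proj(e[:, [2]]), proj(e[:, [3, 4]])
P = proj(e[:, [0, 1, 2, 3, 4]])      # projection onto the span of M_1, M_2, M_3

x = np.random.randn(6)
S_F = P1 @ x + P2 @ x                # F = {1, 2}
P_rest = P3                          # here M_{I-F} = M_3
print(np.isclose(np.linalg.norm(S_F - P @ x), np.linalg.norm(P_rest @ x)))  # True
```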
 
All right, I think I finally found it. First, let's fix some notation: let ##M_i=P_i(H)## and ##M=P(H)##. Let's focus on this question first:

For ##\epsilon>0##, find a finite set ##F_0\subseteq I## such that

$$\Big\|Px-\sum_{i\in F_0}P_ix\Big\|<\epsilon.$$

We know that ##Px\in M##, and we know that ##M=\overline{\operatorname{span}\{M_i \mid i\in F_0\}}##. Thus there exist a finite set ##F_0## and vectors ##m_i\in M_i## for ##i\in F_0## such that ##\|Px-\sum_{i\in F_0}m_i\|<\epsilon##.

Now, do the following:

$$\Big\|Px-\sum_{i\in F_0}P_ix\Big\|\leq \Big\|Px-\sum_{i\in F_0}m_i\Big\|+\Big\|\sum_{i\in F_0}m_i-\sum_{i\in F_0}P_ix\Big\|$$

I'll let you continue from here... It is of course obvious that you should try to find an upper bound for ##\|m_i-P_ix\|##...
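
If it helps, here's a quick numerical check of one identity that gives such a bound (a toy finite-dimensional sketch with subspaces of my own choosing, not necessarily the shortest route): since ##M_i\subseteq M## and ##P_im_j=0## for ##j\neq i##, applying ##P_i## to ##Px-\sum_j m_j## gives exactly ##P_ix-m_i##, so ##\|P_ix-m_i\|\leq\|Px-\sum_j m_j\|##.

```python
# Toy check of the identity P_i(Px - sum_j m_j) = P_i x - m_i when M_i ⊆ M
# and the m_j come from mutually orthogonal subspaces.
import numpy as np

def proj(basis):
    q, _ = np.linalg.qr(basis)
    return q @ q.T

e = np.eye(5)
P1, P2 = proj(e[:, [0, 1]]), proj(e[:, [2]])
P = proj(e[:, [0, 1, 2]])             # M = closed span of M_1 and M_2

x = np.random.randn(5)
m1 = P1 @ np.random.randn(5)          # some m_1 in M_1
m2 = P2 @ np.random.randn(5)          # some m_2 in M_2
lhs = P1 @ (P @ x - (m1 + m2))
print(np.allclose(lhs, P1 @ x - m1))  # True
```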
 
micromass said:
We know that ##Px\in M##, and we know that ##M=\overline{\operatorname{span}\{M_i \mid i\in F_0\}}##. Thus there exist a finite set ##F_0## and vectors ##m_i\in M_i## for ##i\in F_0## such that ##\|Px-\sum_{i\in F_0}m_i\|<\epsilon##.
Aha. (I assume the first ##\in F_0## should be ##\in I##.) ##Px## is in the closure of the set of linear combinations of members of the ##M_i##, and that means that every open ball around ##Px## contains one of those linear combinations. Very clever. (Nothing like this occurred to me when I was thinking about this problem.)

micromass said:
$$\Big\|Px-\sum_{i\in F_0}P_ix\Big\|\leq \Big\|Px-\sum_{i\in F_0}m_i\Big\|+\Big\|\sum_{i\in F_0}m_i-\sum_{i\in F_0}P_ix\Big\|$$

I'll let you continue from here... It is of course obvious that you should try to find an upper bound for ##\|m_i-P_ix\|##...
I actually have to go to bed, but I'll continue tomorrow. Thank you very much.
 
Fredrik said:
Aha. (I assume the first ##\in F_0## should be ##\in I##.)

Yes, of course. I reread my post 10 times to check for typos, and there's still one left. I suck at typing things correctly...
 
I'm so dumb I wasn't able to see how to use the other thing you hinted at, but I found a way to solve the problem (using your first hint). Perhaps not the most elegant solution. I decided to express ##Px## as a sum of vectors that belong to the ##M_i## plus a "remainder" that's orthogonal to the other terms.

$$\varepsilon^2>\Big\|\sum_{i\in F_0}m_i-Px\Big\|^2=\Big\|\sum_{i\in F_0}(m_i-P_i x)-(Px)^\perp_{F_0}\Big\|^2=\sum_{i\in F_0}\|m_i-P_ix\|^2+\|(Px)^\perp_{F_0}\|^2\geq \|(Px)^\perp_{F_0}\|^2$$

Then I realized that what I called ##(Px)^\perp_{F_0}## here is what I called ##P_{M_{I-F}}x## in my first post, with ##F=F_0##. So

$$\Big\|\sum_{i\in F_0}P_ix-Px\Big\|=\|P_{M_{I-F_0}}x\|=\|(Px)^\perp_{F_0}\|<\varepsilon$$

And then the rest wasn't too hard. If ##F\geq F_0##, then ##I-F\subset I-F_0## and

$$\Big\|\sum_{i\in F}P_ix-Px\Big\|=\|P_{M_{I-F}}x\|=\|P_{M_{I-F}}P_{M_{I-F_0}}x\|\leq\|P_{M_{I-F_0}}x\|<\varepsilon$$
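
As a last sanity check, I verified the facts used in that step numerically (again just a toy finite-dimensional sketch with subspaces I made up): if ##M'\subseteq M''## then ##P_{M'}P_{M''}=P_{M'}##, and ##\|P_{M'}y\|\leq\|y\|## for every ##y##, which is what lets the bound for ##F_0## carry over to every finite ##F\geq F_0##.

```python
# Toy check: if M' ⊆ M'' then P_{M'} P_{M''} = P_{M'}, and ||P_{M'} y|| ≤ ||y||.
import numpy as np

def proj(basis):
    q, _ = np.linalg.qr(basis)
    return q @ q.T

e = np.eye(5)
P_small = proj(e[:, [0]])        # M'  = span{e1}
P_big   = proj(e[:, [0, 1, 2]])  # M'' = span{e1, e2, e3}, which contains M'

y = np.random.randn(5)
print(np.allclose(P_small @ P_big, P_small))                      # True
print(np.linalg.norm(P_small @ y) <= np.linalg.norm(y) + 1e-12)   # True
```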

Thanks again for the help. If your solution differs significantly from mine, I would be interested in seeing it.
 
Yes, this was basically what I had in mind. My solution is just a bit longer and less elegant :smile:
 