
Real and complex vector spaces

  1. Mar 12, 2015 #1

    dyn


    I've just encountered the following theorem: if T is a linear operator on a complex vector space V and ##\langle v, Tv\rangle = 0## for all v in V, then T = 0.

    But the theorem doesn't hold in a real 2-D vector space, where T could be the operator that rotates every vector by 90 degrees. My question is: the set of complex numbers includes the set of real numbers, so if a theorem holds in a complex vector space, how can it fail to hold in a real vector space?
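
    A quick numerical check of the counterexample (a sketch using numpy): ##\langle v, Tv\rangle## vanishes for every real v even though T is not the zero operator.

    import numpy as np

    # 90 degree (counterclockwise) rotation of the plane
    T = np.array([[0., -1.],
                  [1.,  0.]])

    rng = np.random.default_rng(0)
    for _ in range(3):
        v = rng.standard_normal(2)    # a random real vector
        # <v, Tv> = v1*(-v2) + v2*v1, which cancels exactly
        print(np.dot(v, T @ v))       # prints 0.0 every time, although T != 0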
     
  3. Mar 12, 2015 #2

    cgk

    Science Advisor

    I think you might want to reconsider the proof of that "theorem". It effectively says that a non-zero linear operator cannot possibly map a vector to a vector which is orthogonal to it. Your 2D counterexample holds for complex vector spaces as well.
     
  4. Mar 12, 2015 #3

    ShayanJ

    Gold Member

    Here is the proof. It's for self-adjoint operators. I can't find what's wrong with the proof in the real-vector-space part, but I'm sure the complex-vector-space part is not correct and only proves that if ## (Tv,v)=0 ## for all v, then ## (Tx,y) ## is real for all x and y.

    EDIT: With an addition, the complex-vector-space part gives T=0 too. Strange!

    The operator that projects a vector onto the subspace orthogonal to a given v is ## T=\mathbb I - |v\rangle \langle v | ## (written for a normalized v that is part of an orthonormal basis; not sure how to write it in general). It seems to me it's self-adjoint (actually Hermitian). But if I'm wrong, and it's not Hermitian, then there is no contradiction.
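
    A quick numerical check (a sketch, taking a normalized v with real entries in a 3-dimensional space): the projector is indeed Hermitian, but ##\langle x, Px\rangle## does not vanish for every x, so there is no contradiction with the theorem.

    import numpy as np

    v = np.array([1.0, 0.0, 0.0])            # a normalized vector
    P = np.eye(3) - np.outer(v, v.conj())    # P = I - |v><v|

    print(np.allclose(P, P.conj().T))        # True: P is Hermitian (self-adjoint)

    x = np.array([1.0, 1.0, 0.0])            # neither parallel nor orthogonal to v
    print(np.vdot(x, P @ x))                 # <x, Px> = |x|^2 - |<v,x>|^2 = 1.0, not 0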
     
    Last edited: Mar 12, 2015
  5. Mar 13, 2015 #4

    cgk

    Science Advisor

    I'd like to retract my previous statement. The theorem is fine for self-adjoint operators (but the OP did not say anything about self-adjointness). The given counterexample (a unitary rotation which rotates every vector into a vector orthogonal to it) is not self-adjoint.

    However, in complex vector spaces (let's assume finite dimensions for a bit), the theorem also holds beyond self-adjoint operators, namely at least for all "normal" operators (these include both self-adjoint and unitary operators; the 2D rotation is unitary). Note that any normal operator over a finite-dimensional complex vector space admits a spectral decomposition [tex]T=\sum_i \lambda_i |e_i\rangle\langle e_i|[/tex], where the eigenvectors [itex]e_i[/itex] are orthonormal. Since [itex]\langle e_i, Te_i\rangle = \lambda_i[/itex], the condition [itex]\langle v, Tv\rangle = 0[/itex] for all v forces every eigenvalue to be zero, and hence the operator itself to be zero.

    So what are the eigenvectors with non-zero eigenvalues in the complex 2D rotation case? Let's ask a program:
    import numpy as np
    from scipy import linalg as la

    # A = matrix of a 90 degree 2x2 rotation
    A = np.array([[0., -1.], [1., 0.]])
    # compute eigenvalues (ew) and eigenvectors (ev)
    ew, ev = la.eig(A)

    print("A:\n", A)
    print("ew:\n", ew)
    print("ev[:,0]:\n", ev[:,0])
    print("ev[:,1]:\n", ev[:,1])

    # reconstruct A from its spectral decomposition; this should be a null matrix.
    print("ev diag(ew) ev.H - A:\n", np.dot(ev, np.dot(np.diag(ew), ev.conj().T)) - A)

    # test first eigenvector
    v = ev[:,0]
    print("v:\n%s" % v)
    print("<v, Av>:\n", np.dot(v.conj().T, np.dot(A, v)))


    This gives:
    /tmp> python wh.py
    A:
    [[ 0. -1.]
    [ 1. 0.]]
    ew:
    [ 0.+1.j 0.-1.j]
    ev[:,0]:
    [ 0.70710678+0.j 0.00000000-0.70710678j]
    ev[:,1]:
    [ 0.70710678-0.j 0.00000000+0.70710678j]
    ev diag(ew) ev.H - A:
    [[ 0.00000000e+00+0.j 2.22044605e-16+0.j]
    [ -2.22044605e-16+0.j -0.00000000e+00+0.j]]
    v:
    [ 0.70710678+0.j 0.00000000-0.70710678j]
    <v, Av>:
    1j

    So, it turns out that there are vectors which a 2D rotation maps to multiples of themselves. If a and b are orthonormal vectors, with the rotation taking a to b and b to -a, then [itex](a - ib)/\sqrt 2[/itex] is one of them: [itex]T(a - ib) = b + ia = i(a - ib)[/itex], which matches the program's output. And, back to the OP: this eigenvector cannot be represented in R^2, but it can in C^2.
     
  6. Mar 13, 2015 #5

    cgk

    Science Advisor

    Btw: what I meant by "a non-zero linear operator cannot possibly map a vector to a vector which is orthogonal to it" was an operator of the form T = |a><b|, where a and b are orthogonal. Such operators do not afford a spectral decomposition (they are nilpotent: T^2 = 0). On closer inspection, though, they do not have <v, Tv> = 0 for all v either: taking v = a + b gives <v, Tv> = |a|^2 |b|^2, which is non-zero. So they are consistent with the theorem after all.
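
    A quick check of this (a sketch, with a and b the first two standard basis vectors):

    import numpy as np

    a = np.array([1.0, 0.0])       # |a>
    b = np.array([0.0, 1.0])       # |b>, orthogonal to a
    T = np.outer(a, b.conj())      # T = |a><b|

    print(np.allclose(T @ T, 0))   # True: T is nilpotent, no spectral decomposition
    v = a + b
    print(np.vdot(v, T @ v))       # <v, Tv> = 1.0, not 0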
     
  7. Mar 13, 2015 #6

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    I like the following approach to that theorem. Let T be an arbitrary self-adjoint linear operator. First we prove the statement

    (a) If ##\langle x,Ty\rangle=0## for all x,y, then T=0.

    The proof is simply to choose x=Ty: then ##\langle Ty,Ty\rangle=\|Ty\|^2=0##, so Ty=0 for every y.

    Now we can use (a) to prove the statement

    (b) If ##\langle x,Tx\rangle=0## for all x, then T=0.

    I'm not going to type all the details, but the proof goes roughly like this: Let x,y be arbitrary. The assumption implies that ##\langle x+y,T(x+y)\rangle=0## and ##\langle x+iy,T(x+iy)\rangle=0##. These results imply that the real and imaginary parts of ##\langle x,Ty\rangle## are 0. This implies that the antecedent of the implication in (a) holds. So (a) tells us that T=0.
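
    As a sanity check, those two expansions can be verified numerically (a sketch with a random Hermitian matrix; np.vdot conjugates its first argument, matching ##\langle\cdot,\cdot\rangle##):

    import numpy as np

    rng = np.random.default_rng(1)
    M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
    T = M + M.conj().T                  # a random self-adjoint matrix

    def q(x):                           # the quadratic form q(x) = <x, Tx>
        return np.vdot(x, T @ x)

    x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
    y = rng.standard_normal(3) + 1j * rng.standard_normal(3)

    # q(x+y) - q(x) - q(y) = 2 Re<x,Ty>, and q(x+iy) - q(x) - q(y) = -2 Im<x,Ty>
    # (note q(iy) = q(y), so the second combination uses q(y) as well)
    re = (q(x + y) - q(x) - q(y)).real / 2
    im = -(q(x + 1j * y) - q(x) - q(y)).real / 2
    print(np.isclose(re + 1j * im, np.vdot(x, T @ y)))   # True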

    The theorem holds for real vector spaces as well. In that case, the preferred term is "symmetric" rather than "self-adjoint". The counterexample in post #1 fails because a rotation by 90° isn't symmetric.
     
    Last edited: Mar 13, 2015
  8. Mar 13, 2015 #7

    Fredrik

    Staff Emeritus
    Science Advisor
    Gold Member

    Because the set of complex numbers has nice properties that the set of real numbers doesn't have. For example, every polynomial with coefficients in ##\mathbb C## has a root in ##\mathbb C## (the fundamental theorem of algebra), but not every polynomial with coefficients in ##\mathbb R## has a root in ##\mathbb R##.

    Since an eigenvalue of a linear operator A on an n-dimensional space is a root (in the field of scalars) of the nth-degree characteristic polynomial ##\det(A-\lambda I)##, this means that every linear operator on a non-trivial finite-dimensional complex vector space has an eigenvalue. This argument doesn't prove that every linear operator on a real vector space has an eigenvalue, because the characteristic polynomial may not have roots in ##\mathbb R##.

    This time your counterexample works. A rotation by 90° on ##\mathbb R^2## doesn't have eigenvalues. The matrix of such a rotation (counterclockwise) is
    $$\begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix},$$ so the characteristic polynomial is
    $$\begin{vmatrix}-\lambda & -1\\ 1 & -\lambda\end{vmatrix} =\lambda^2+1,$$ which has no roots in ##\mathbb R##.
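
    A quick numerical illustration (a sketch; the roots of ##\lambda^2+1## and the eigenvalues of the rotation matrix are both ##\pm i##):

    import numpy as np

    A = np.array([[0., -1.],
                  [1.,  0.]])          # 90 degree rotation

    print(np.roots([1., 0., 1.]))      # roots of lambda^2 + 1: [0.+1.j, 0.-1.j], none real
    print(np.linalg.eigvals(A))        # the same values: no real eigenvalues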
     
  9. Mar 17, 2015 #8
    There is a useful formula called the polarization identity that gives a simple proof of this theorem. Namely, if ##T## is an operator on a complex inner product space ##H## and ##\mathbf x, \mathbf y\in H##, then $$(T\mathbf x, \mathbf y) = \frac14 \sum_{\alpha: \alpha^4=1} \alpha (T(\mathbf x + \alpha \mathbf y), \mathbf x + \alpha \mathbf y);$$ here ##\alpha## runs over the 4 values ##1##, ##-1##, ##i##, ##-i##. The proof is just a straightforward calculation.
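
    The identity is easy to sanity-check numerically (a sketch; note that ##T## need not be self-adjoint here, and the inner product below is linear in its first argument, matching the ##(T\mathbf x, \mathbf y)## notation):

    import numpy as np

    rng = np.random.default_rng(2)
    T = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))  # arbitrary operator
    x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
    y = rng.standard_normal(3) + 1j * rng.standard_normal(3)

    def ip(u, v):              # (u, v): linear in u, conjugate-linear in v
        return np.vdot(v, u)   # np.vdot conjugates its first argument

    lhs = ip(T @ x, y)
    rhs = 0.25 * sum(a * ip(T @ (x + a * y), x + a * y) for a in (1, -1, 1j, -1j))
    print(np.isclose(lhs, rhs))   # True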

    This polarization identity immediately implies that if ##(T\mathbf x, \mathbf x)=0## for all ##\mathbf x##, then ##(T\mathbf x, \mathbf y) = 0## for all ##\mathbf x, \mathbf y\in H##, and so ##T=0##.

    Another immediate application of the polarization identity is a theorem saying that if for an operator ##T## in a complex inner product space ##(T\mathbf x , \mathbf x)## is real for all ##\mathbf x##, then ##T## is self-adjoint, ##T=T^*##.

    There is also a version of this identity in real spaces, but it only works for symmetric operators; namely, for symmetric ##T## we have $$ (T\mathbf x,\mathbf y) = \frac14 \Bigl( (T(\mathbf x + \mathbf y), \mathbf x + \mathbf y) - (T(\mathbf x - \mathbf y), \mathbf x -\mathbf y) \Bigr);$$ formally it looks like the complex polarization identity, but only the terms with ##\alpha=\pm1## are taken.
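
    Likewise for the real version (a sketch with a random symmetric matrix and the ordinary dot product):

    import numpy as np

    rng = np.random.default_rng(3)
    M = rng.standard_normal((3, 3))
    T = M + M.T                        # a random symmetric matrix
    x = rng.standard_normal(3)
    y = rng.standard_normal(3)

    lhs = np.dot(T @ x, y)
    rhs = 0.25 * (np.dot(T @ (x + y), x + y) - np.dot(T @ (x - y), x - y))
    print(np.isclose(lhs, rhs))        # True; this needs T to be symmetric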

    Now to the original question:
    Because real numbers form a proper subset of complex numbers, any complex space is a real space as well (if we can multiply vectors by complex numbers, we of course can multiply by reals), but not the other way around. To get a complex space from a real one we need to perform the so-called complexification (think about allowing complex coordinates in ##\mathbb R^n## to get ##\mathbb C^n##). The complexification of a real space is strictly bigger, so the condition ##(T\mathbf x, \mathbf x)=0## for all ##\mathbf x## in the complexification is stronger than the same condition in the original real space.
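
    A sketch of this last point: for the 90° rotation, the condition ##(T\mathbf x, \mathbf x) = 0## holds for every vector of the real space ##\mathbb R^2##, but fails for a vector that only exists in the complexification ##\mathbb C^2##.

    import numpy as np

    T = np.array([[0., -1.],
                  [1.,  0.]])                     # 90 degree rotation

    x_real = np.array([3.0, 4.0])                 # a vector of the real space
    print(np.vdot(x_real, T @ x_real))            # 0.0: the condition holds on R^2

    x_cplx = np.array([1.0, -1.0j]) / np.sqrt(2)  # exists only in the complexification
    print(np.vdot(x_cplx, T @ x_cplx))            # approximately 1j: fails on C^2, so T != 0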
     
  10. Mar 18, 2015 #9

    dyn


    The last post sums up my confusion: "any complex space is a real space as well". Let's say I am working in a complex vector space and I get the condition < Tx , x > = 0. How would I know whether that implies T = 0, or whether the vectors have turned out to be real, in which case T does not have to be zero?
     
  11. Mar 18, 2015 #10
    In a complex space the condition ##(Tx,x)=0## for all ##x## always implies that ##T=0##; that is a theorem. And what do you mean by "the vectors have turned out to be real"? What does that mean in an abstract inner product space?

    Or do you simply consider the space ##\mathbb C^n##, and by "real" vector you mean the vector with real coordinates? In this case the condition ##(Tx,x)=0## for all ##x\in \mathbb C^n## implies that ##T=0##, but the weaker condition ##(Tx,x)=0## for all vectors ##x## with real coordinates does not imply that ##T=0##.
     
  12. Mar 18, 2015 #11

    dyn


    I meant in a complex vector space where the vectors only had real coordinates. So if I had < Tx , x > = 0, I wouldn't know if T had to be zero unless I checked whether the vectors only had real coordinates?
     
  13. Mar 18, 2015 #12
    Do you mean that all vectors only have real coordinates? That is impossible in a complex space: if x is in the space, then so is ix.
     
  14. Mar 18, 2015 #13

    dyn


    I think I'm just confusing myself now. Thanks for your help.
     
  15. Mar 19, 2015 #14

    cgk

    Science Advisor

    Dear dyn, Fredrik, and Hawkeye18: Thanks for this thread and your posts. I normally consider myself a reasonably competent user of linear algebra, but dyn's question managed to completely confuse me and led my intuition astray. It is a great question. It is funny to see how much can still be learned from a problem that boils down to a 2x2 matrix. And I was very happy with both answers #7 and #8. Both are great ways of looking at this.

    That's all. Thanks!
     