Real and complex vector spaces

In summary, a real vector space is a set of vectors that can be added together and multiplied by real scalars, while a complex vector space allows multiplication by complex scalars; vector addition and scalar multiplication are defined analogously in both. A basis of a complex vector space is a set of linearly independent vectors whose linear combinations represent every vector in the space. Complex vector spaces can be represented geometrically and are significant in physics for representing physical quantities and the states of quantum systems.
  • #1
dyn
I've just encountered the following theorem: if T is a linear operator on a complex vector space V and

##\langle v, Tv \rangle = 0## for all v in V, then T = 0.

But the theorem doesn't hold in a real 2-D vector space, where the operator could be the one that rotates every vector by 90 degrees. My question is: the set of complex numbers includes the set of real numbers, so if a theorem holds in a complex vector space, how can it fail in a real vector space?
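For concreteness, here is a quick symbolic check of the counterexample (a sketch, assuming the standard dot product on R^2, with the rotation taking (x, y) to (-y, x)):
import sympy as sp

x, y = sp.symbols('x y', real=True)
v = sp.Matrix([x, y])
T = sp.Matrix([[0, -1], [1, 0]])  # rotation by 90 degrees

# <v, Tv> with the standard dot product: x*(-y) + y*x = 0
print(sp.simplify(v.dot(T * v)))  # prints 0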
 
  • #2
I think you might want to reconsider the proof for that "theorem". It effectively says that a non-zero linear operator cannot possibly map a vector to a vector orthogonal to it. Your 2D counterexample holds for complex vector spaces as well.
 
  • #3
Here is the proof. It's for self-adjoint operators. I can't find what's wrong with the real vector space part of the proof, but I'm sure the complex vector space part is not correct: it only proves that if ## (Tv,v)=0 ## for all v, then ## (Tx,y) ## is real for all x and y.

EDIT: With an addition, the complex vector space part will give T=0 too. Strange!

The operator that rotates a vector into the subspace orthogonal to it is ## T=\mathbb I - |v\rangle \langle v | ## (it's written in a basis in which v itself is one of the orthonormal basis vectors; I'm not sure how to write it in general). It seems to me it's self-adjoint (actually Hermitian). But if I'm wrong, and it's not Hermitian, then there is no contradiction.
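For what it's worth, a quick numeric check of this operator (a sketch; v here is an assumed random unit vector in C^3):
import numpy as np

rng = np.random.default_rng(1)
v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
v /= np.linalg.norm(v)
T = np.eye(3) - np.outer(v, v.conj())  # T = I - |v><v|

print(np.allclose(T, T.conj().T))  # True: T is Hermitian
print(abs(np.vdot(v, T @ v)))      # 0: T annihilates v itself
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
print(abs(np.vdot(x, T @ x)))      # generally nonzero
So T is Hermitian, but ## (Tx,x) ## vanishes only for multiples of v, not for all x, so there is no conflict with the theorem.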
 
  • #4
I'd like to retract my previous statement. The theorem is fine for self-adjoint operators (but the OP did not say anything about self-adjointness). The given counterexample (a unitary rotation which rotates every vector into a vector orthogonal to it) is not self-adjoint.

However, in complex vector spaces (let's assume finite-dimensional for a bit), the theorem also holds beyond self-adjoint operators, namely at least for all "normal" operators (including both self-adjoint and unitary operators; the 2D rotation is unitary). Note that any "normal" operator over a finite-dimensional complex vector space allows for a spectral decomposition [tex]T=\sum_i \lambda_i |e_i\rangle\langle e_i|,[/tex] where the eigenvectors [itex]e_i[/itex] are orthonormal. Thus, unless all the eigenvalues are zero (and thus the operator is zero), there will be some eigenvector [itex]e_i[/itex] which is not mapped to zero, and for it [itex]\langle e_i, T e_i\rangle = \lambda_i \neq 0[/itex].

So what are the non-zero eigenvectors in the complex 2D rotation case? Let's ask a program:
import numpy as np
from scipy import linalg as la

# A = matrix of a 90 degree 2x2 rotation
A = np.array([[0., -1.], [1., 0.]])
# compute eigenvalues (ew) and eigenvectors (ev)
ew, ev = la.eig(A)

print("A:\n", A)
print("ew:\n", ew)
print("ev[:,0]:\n", ev[:, 0])
print("ev[:,1]:\n", ev[:, 1])

# this should be a null matrix.
print("ev diag(ew) ev.H - A:\n", np.dot(ev, np.dot(np.diag(ew), ev.conj().T)) - A)

# test first eigenvector
v = ev[:, 0]
print("v:\n%s" % v)
print("<v, Av>:\n", np.dot(v.conj().T, np.dot(A, v)))
This gives:
/tmp> python3 wh.py
A:
[[ 0. -1.]
[ 1. 0.]]
ew:
[ 0.+1.j 0.-1.j]
ev[:,0]:
[ 0.70710678+0.j 0.00000000-0.70710678j]
ev[:,1]:
[ 0.70710678-0.j 0.00000000+0.70710678j]
ev diag(ew) ev.H - A:
[[ 0.00000000e+00+0.j 2.22044605e-16+0.j]
[ -2.22044605e-16+0.j -0.00000000e+00+0.j]]
v:
[ 0.70710678+0.j 0.00000000-0.70710678j]
<v, Av>:
1j

So it turns out that there are vectors which a 2D rotation maps to multiples of themselves. If a and b are orthonormal real vectors, then (a - i b)/√2 is one of them (for a and b the standard basis vectors this is exactly ev[:,0] above, an eigenvector with eigenvalue i). And, back to the OP: this vector cannot be represented in R^2, but it can in C^2.
 
  • #5
Btw: What I meant by "a non-zero linear operator cannot possibly map a vector to a vector which is orthogonal to it" was an operator of the form T = |a><b|, where a and b are orthogonal. Such operators do not afford a spectral decomposition (they are not normal). But they are not counterexamples to the theorem either: <v, Tv> = 0 whenever v is a multiple of a or of b, but taking v = a + b gives <v, Tv> = <v,a><b,v> = 1, which is non-zero.
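A two-line check of the v = a + b observation (a sketch using the standard basis vectors of C^2 as a and b):
import numpy as np

a = np.array([1., 0.])
b = np.array([0., 1.])
T = np.outer(a, b.conj())  # T = |a><b|

v = a + b
print(np.vdot(v, T @ v))   # 1.0, not 0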
 
  • #6
I like the following approach to that theorem. Let T be an arbitrary self-adjoint linear operator. First we prove the statement

(a) If ##\langle x,Ty\rangle=0## for all x,y, then T=0.

The proof is simply to choose x=Ty, which gives ##\|Ty\|^2=0## for all y.

Now we can use (a) to prove the statement

(b) If ##\langle x,Tx\rangle=0## for all x, then T=0.

I'm not going to type all the details, but the proof goes roughly like this: let x, y be arbitrary. The assumption implies that ##\langle x+y,T(x+y)\rangle=0## and ##\langle x+iy,T(x+iy)\rangle=0##. Expanding these, and using ##\langle x,Tx\rangle=\langle y,Ty\rangle=0##, gives ##\langle x,Ty\rangle+\langle y,Tx\rangle=0## and ##\langle x,Ty\rangle-\langle y,Tx\rangle=0##, so ##\langle x,Ty\rangle=0## for all x and y. This is the antecedent of the implication in (a), so (a) tells us that T=0.
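The expansion step is easy to check numerically; here is a sketch with a random (not necessarily self-adjoint) complex matrix T, writing the inner product as conjugate-linear in its first slot:
import numpy as np

rng = np.random.default_rng(2)
n = 3
T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

ip = lambda u, w: np.vdot(u, w)  # <u, w>, conjugate-linear in u

# expanding <x+y, T(x+y)> leaves <x,Ty> + <y,Tx> beyond the diagonal terms
lhs1 = ip(x + y, T @ (x + y)) - ip(x, T @ x) - ip(y, T @ y)
print(np.isclose(lhs1, ip(x, T @ y) + ip(y, T @ x)))  # True

# expanding <x+iy, T(x+iy)> leaves i*(<x,Ty> - <y,Tx>)
lhs2 = ip(x + 1j * y, T @ (x + 1j * y)) - ip(x, T @ x) - ip(y, T @ y)
print(np.isclose(lhs2, 1j * (ip(x, T @ y) - ip(y, T @ x))))  # True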

The theorem holds for real vector spaces as well. In that case, the preferred term is "symmetric" rather than "self-adjoint". The counterexample in post #1 fails because a rotation by 90° isn't symmetric.
 
  • #7
dyn said:
My question is: the set of complex numbers includes the set of real numbers, so if a theorem holds in a complex vector space, how can it fail in a real vector space?
Because the set of complex numbers has nice properties that the set of real numbers doesn't have. For example, every polynomial with coefficients in ##\mathbb C## has a root in ##\mathbb C## (the fundamental theorem of algebra), but not every polynomial with coefficients in ##\mathbb R## has a root in ##\mathbb R##.

Since an eigenvalue of a linear operator A is a root (in the field of scalars) of the nth-degree characteristic polynomial ##\det(A-\lambda I)## (where n is the dimension of the space), this means that every linear operator on a finite-dimensional complex vector space has an eigenvalue. This argument doesn't prove that every linear operator on a real vector space has an eigenvalue, because the characteristic polynomial may not have roots in ##\mathbb R##.

This time your counterexample works. A rotation by 90° on ##\mathbb R^2## doesn't have eigenvalues. The matrix of such a rotation (counterclockwise) is
$$\begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix},$$ so the characteristic polynomial is
$$\begin{vmatrix}-\lambda & -1\\ 1 & -\lambda\end{vmatrix} =\lambda^2+1,$$ which has no roots in ##\mathbb R##.
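This computation is quick to verify symbolically (a sketch using sympy):
import sympy as sp

lam = sp.symbols('lambda')
A = sp.Matrix([[0, -1], [1, 0]])  # 90 degree rotation
p = (A - lam * sp.eye(2)).det()

print(sp.expand(p))      # lambda**2 + 1
print(sp.solve(p, lam))  # [-I, I]: no real roots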
 
  • #8
There is a useful formula called the polarization identity, which gives a simple proof of this theorem. Namely, if ##T## is an operator in a complex inner product space ##H## and ##\mathbf x, \mathbf y\in H##, then $$(T\mathbf x, \mathbf y) = \frac14 \sum_{\alpha: \alpha^4=1} \alpha (T(\mathbf x + \alpha \mathbf y), \mathbf x + \alpha \mathbf y);$$ here ##\alpha## takes 4 values: ##1##, ##-1##, ##i##, ##-i##. The proof is a straightforward calculation.

This polarization identity immediately implies that if ##(T\mathbf x, \mathbf x)=0## for all ##\mathbf x##, then ##(T\mathbf x, \mathbf y)=0## for all ##\mathbf x, \mathbf y\in H## and so ##T=0##.

Another immediate application of the polarization identity is a theorem saying that if for an operator ##T## in a complex inner product space ##(T\mathbf x , \mathbf x)## is real for all ##\mathbf x##, then ##T## is self-adjoint, ##T=T^*##.

There is also a version of this identity in the real space, but it only works for symmetric operators: for symmetric ##T## we have $$ (T\mathbf x,\mathbf y) = \frac14 \Bigl( (T(\mathbf x + \mathbf y), \mathbf x + \mathbf y) - (T(\mathbf x - \mathbf y), \mathbf x -\mathbf y) \Bigr);$$ formally it looks like the complex polarization identity, but only the terms with ##\alpha=\pm1## are kept.
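Both identities are easy to verify numerically; here is a sketch for the complex one, with a random (non-self-adjoint) ##T## and the inner product taken linear in the first argument, as above:
import numpy as np

rng = np.random.default_rng(0)
n = 4
T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = rng.standard_normal(n) + 1j * rng.standard_normal(n)

ip = lambda u, w: np.vdot(w, u)  # (u, w), linear in the first argument

lhs = ip(T @ x, y)
rhs = 0.25 * sum(a * ip(T @ (x + a * y), x + a * y) for a in (1, -1, 1j, -1j))
print(abs(lhs - rhs))  # ~1e-15, i.e. zero up to rounding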

Now to the original question:
dyn said:
My question is: the set of complex numbers includes the set of real numbers, so if a theorem holds in a complex vector space, how can it fail in a real vector space?

Because real numbers form a proper subset of complex numbers, any complex space is a real space as well (if we can multiply vectors by complex numbers, we can of course multiply by reals), but not the other way around. To get a complex space from a real one we need to perform the so-called complexification (think of allowing complex coordinates in ##\mathbb R^n## to get ##\mathbb C^n##). The complexification of a real space is strictly bigger, so the condition ##(T\mathbf x, \mathbf x)=0## for all ##\mathbf x## in the complexification is stronger than the same condition in the original real space.
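To see this concretely with the 90° rotation (a numeric sketch; np.vdot conjugates its first argument, which here agrees with ##(A\mathbf x, \mathbf x)##):
import numpy as np

A = np.array([[0., -1.], [1., 0.]])  # 90 degree rotation
rng = np.random.default_rng(3)

# for vectors with real coordinates, (Ax, x) is always 0 ...
x = rng.standard_normal(2)
print(np.vdot(x, A @ x))  # 0.0

# ... but the complexification contains vectors for which it is not
z = np.array([1., 1j]) / np.sqrt(2)
print(np.vdot(z, A @ z))  # -1j, nonzero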
 
  • #9
The last post sums up my confusion: "any complex space is a real space as well". Let's say I am working in a complex vector space and I get the condition < Tx, x > = 0. How would I know whether that implies T = 0, or whether the vectors have turned out to be real, in which case T does not have to be zero?
 
  • #10
dyn said:
The last post sums up my confusion: "any complex space is a real space as well". Let's say I am working in a complex vector space and I get the condition < Tx, x > = 0. How would I know whether that implies T = 0, or whether the vectors have turned out to be real, in which case T does not have to be zero?

In a complex space the condition ##(Tx,x)=0## for all ##x## always implies that ##T=0##; that is the theorem. And what do you mean by "the vectors have turned out to be real"? What does that mean in an abstract inner product space?

Or do you simply consider the space ##\mathbb C^n##, and by a "real" vector you mean a vector with real coordinates? In this case the condition ##(Tx,x)=0## for all ##x\in \mathbb C^n## implies that ##T=0##, but the weaker condition ##(Tx,x)=0## for all vectors ##x## with real coordinates does not imply that ##T=0##.
 
  • #11
I meant in a complex vector space where the vectors only had real coordinates. So if I had < Tx, x > = 0, I wouldn't know whether T had to be zero unless I checked whether the vectors only had real coordinates?
 
  • #12
dyn said:
I meant in a complex vector space where the vectors only had real coordinates.

Do you mean that all vectors only have real coordinates? That is impossible in a complex space.
 
  • #13
I think I'm just confusing myself now. Thanks for your help.
 
  • #14
Dear dyn, Fredrik, and Hawkeye18: Thanks for this thread and your posts. I normally consider myself a reasonably competent user of linear algebra, but dyn's question managed to completely confuse me and lead my intuition astray. It is a great question. It is funny to see that a lot can still be learned from a problem which can be boiled down to a 2x2 matrix. And I was very happy with both answers #7 and #8. Both are great ways of looking at this.

That's all. Thanks!
 

1. What is the difference between a real vector space and a complex vector space?

A real vector space is a set of vectors that can be added together and multiplied by real numbers. A complex vector space, on the other hand, allows vectors to be multiplied by complex numbers, which have both a real and an imaginary component. This makes complex vector spaces more general: every complex vector space can also be viewed as a real vector space, but not the other way around.

2. How are vector addition and scalar multiplication defined in a real vector space?

In a real vector space, vector addition is defined as the operation of adding two vectors together component-wise. Scalar multiplication involves multiplying a vector by a real number, which scales the length of the vector (and reverses its direction if the number is negative).
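For example, in R^2:
import numpy as np

u = np.array([1.0, 2.0])
v = np.array([3.0, -1.0])

print(u + v)    # component-wise addition: [4. 1.]
print(2.5 * u)  # scalar multiplication: [2.5 5. ]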

3. What is a basis in a complex vector space?

A basis is a set of linearly independent vectors that can be used to represent any vector in the space as a linear combination. In a complex vector space, the coefficients in these linear combinations are complex numbers.

4. Can a complex vector space be represented geometrically?

Yes, a complex vector space can be represented geometrically, in a way similar to a real vector space. Each complex coordinate has a real and an imaginary part and can be drawn as a point in a plane, so the complex line C itself looks like the real plane R^2; in higher dimensions the geometric picture becomes harder to visualize.

5. What is the significance of complex vector spaces in physics?

In physics, complex vector spaces are used, for example, in the phasor representation of oscillating quantities such as electric and magnetic fields. Above all, they are used in quantum mechanics, where the state of a quantum system is a vector in a complex vector space and can have both real and imaginary components.
