Invariant vectors/eigenvectors of R(., v)v

  • Thread starter Sajet
In summary, the conversation discusses the proof of two statements. The first shows that for a unit vector v in the tangent space of complex projective space (respectively quaternionic projective space), R(w,v)v = w for all vectors w orthogonal to the complex (respectively quaternionic) line through v. The second asks to prove that R(iv,v)v = 4iv in the case of CP^n, and that R(w,v)v = 4w for all w in the intersection of the quaternionic line vH with the orthogonal complement of the real line Rv, in the case of HP^n. The conversation then discusses the beginning of the proof, which uses the fact that R(.,v)v is a self-adjoint endomorphism.
  • #1
Sajet
I'm afraid I need help again...

First, these two things are shown:

1) Let [itex]v \in T_{\bar p}\mathbb{CP}^n, ||v|| = 1[/itex]. Then: [itex]R(w, v)v = w \forall w \in (\mathbb Cv)^\perp[/itex]

2) Let [itex]v \in T_{\bar p}\mathbb{HP}^n, ||v|| = 1[/itex]. Then: [itex]R(w, v)v = w \forall w \in (v\mathbb H)^\perp[/itex]

Afterwards the following is supposed to be proven:

a) [itex]R(iv, v)v = 4iv[/itex] (in the case of [itex]\mathbb{CP}^n[/itex])
b) [itex]R(w, v)v = 4w \forall w \in (\mathbb Rv)^\perp\cap(v \mathbb H)[/itex] (in the case of [itex]\mathbb{HP}^n[/itex])

Unfortunately, I don't understand the very beginning of the following proof:

"It is already clear that [itex]iv[/itex] is an eigenvector of [itex]R(., v)v[/itex] (meaning [itex]R(iv, v)v = \kappa iv[/itex] for some [itex]\kappa[/itex])"

I've been on this since yesterday but I don't see why this is the case. Does it somehow follow from 1)?

In b) it is basically the same thing (I think), but the lecture notes are a little more elaborate there - so maybe this helps. They read:

"We have already shown that [itex](vH)\cap(\mathbb Rv)^\perp[/itex] is an invariant subspace of the endomorphism [itex]R(., v)v[/itex]. Let [itex]w \in (vH)\cap (\mathbb Rv)^\perp[/itex] be an eigenvector."

Do these two statements immediately follow from 1) and 2)? I mean 1) basically shows:

[itex]R(., v)v|_{(\mathbb Cv)^\perp} = id_{(\mathbb Cv)^\perp}[/itex]

But I can't make the connection to [itex]R(iv, v)v = \kappa iv[/itex]...
 
  • #2
Do you have an electronic version of the document you're reading?
 
  • #3
I do, but it is in German.
 
  • #4
I don't understand either, sorry. :(
 
  • #5
Thank you anyway :)
 
  • #6
I might have an idea. At a different point the following theorem is introduced:

For every [itex]\bar p \in \mathbb{CP}^n[/itex] the tangent space [itex]T_{\bar p} \mathbb{CP}^n[/itex] carries the structure of a complex vector space. For [itex]\iota \in U(n+1)[/itex] we have [itex]\bar \iota_*(\lambda v) = \lambda\bar \iota_*(v)[/itex] for all [itex]\lambda \in \mathbb C, v \in T\mathbb{CP}^n[/itex]. ([itex]\bar \iota[/itex] is the induced map on [itex]\mathbb{CP}^n[/itex]).

Then there is a similar statement about [itex]\mathbb{HP}^n[/itex], namely [itex]\iota \in Sp(n+1) \Rightarrow \bar \iota_*(v\mathbb H) = \bar \iota_*(v)\mathbb H[/itex].

Sp(n+1) and U(n+1) are defined as the quaternionic (respectively complex) matrices A fulfilling [itex]AA^* = I[/itex], where [itex]A^*[/itex] is the conjugate transpose.

If the map [itex]B(w) := R(w, v)v[/itex] were in [itex]Sp(n+1)[/itex] or [itex]U(n+1)[/itex], then the above theorem might be what the proof is referring to. I figured out that B is a self-adjoint endomorphism, so [itex]B = B^*[/itex] (right?). But that does not imply [itex]BB^* = I[/itex]; for that we would need [itex]B = B^{-1}[/itex], i.e. [itex]R(R(w, v)v, v)v = w[/itex]. I've worked on this for the last couple of hours, but I think this is not even true...
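
As a sanity check that self-adjointness really doesn't force [itex]BB^* = I[/itex] (just a toy example, not from the notes): the symmetric matrix [itex]B = \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix}[/itex] satisfies [itex]B = B^*[/itex], but [itex]BB^* = \begin{pmatrix} 1 & 0 \\ 0 & 16 \end{pmatrix} \neq I[/itex]. And if a) is true, then 4 is an eigenvalue of [itex]R(., v)v[/itex], so it cannot satisfy [itex]BB^* = I[/itex] anyway.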
 
  • #7
Okay, I've got it now. Not that clear imo (at least for me it wasn't) but this is the explanation for anybody who cares:

[itex]R(., v)v[/itex] is a self-adjoint endomorphism. Therefore the tangent space has an orthonormal basis of eigenvectors and all eigenvalues are real. It follows that iv must be an eigenvector.

Then basically the same applies in the case of [itex]\mathbb{HP}^n[/itex].
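
Spelled out a little more (my own filling-in of the gap, so treat it as a sketch): write [itex]B(w) := R(w, v)v[/itex]. Then [itex]B(v) = R(v, v)v = 0[/itex], and by 1), [itex]B(w) = w[/itex] for every [itex]w \in (\mathbb Cv)^\perp[/itex]. Since B is self-adjoint, [itex]\langle B(iv), v\rangle = \langle iv, B(v)\rangle = 0[/itex] and [itex]\langle B(iv), w\rangle = \langle iv, B(w)\rangle = \langle iv, w\rangle = 0[/itex] for all [itex]w \in (\mathbb Cv)^\perp[/itex]. So [itex]B(iv)[/itex] is orthogonal to [itex]\mathbb Rv \oplus (\mathbb Cv)^\perp[/itex], which forces [itex]B(iv) \in \mathbb R \cdot iv[/itex], i.e. [itex]R(iv, v)v = \kappa iv[/itex] for some real [itex]\kappa[/itex]. The same computation with [itex](v\mathbb H)^\perp[/itex] in place of [itex](\mathbb Cv)^\perp[/itex] shows that [itex](v\mathbb H)\cap(\mathbb Rv)^\perp[/itex] is invariant under B in the [itex]\mathbb{HP}^n[/itex] case, and the restriction of a self-adjoint map to an invariant subspace is again self-adjoint, so that subspace has an orthonormal basis of eigenvectors.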
 

1. What are invariant vectors/eigenvectors of R(., v)v?

Eigenvectors of R(., v)v are nonzero vectors whose direction is preserved by the linear map R(., v)v: such a vector w is sent to a scalar multiple λw of itself. An invariant vector is the special case λ = 1, where the vector itself is left unchanged by the transformation.
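
For a concrete illustration from the thread itself: statements 1) and a) above say exactly that every [itex]w \in (\mathbb Cv)^\perp[/itex] is an eigenvector of [itex]R(., v)v[/itex] with eigenvalue 1 (hence an invariant vector), while [itex]iv[/itex] is an eigenvector with eigenvalue 4, since [itex]R(iv, v)v = 4iv[/itex].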

2. How are invariant vectors/eigenvectors of R(., v)v useful in science?

Invariant vectors/eigenvectors of R(., v)v are useful in various scientific fields, such as physics, engineering, and computer science. Decomposing a problem along eigenvectors often simplifies a complex system, for example by diagonalizing the operator so that it acts by simple scaling in each eigendirection. They also provide insight into the behavior of systems and can help identify important patterns and relationships.

3. How do you find invariant vectors/eigenvectors of R(., v)v?

To find invariant vectors/eigenvectors of R(., v)v, you need to solve the equation R(x, v)v = λx, where λ is a scalar called the eigenvalue and x is the eigenvector. In practice you first find the eigenvalues as the roots of the characteristic polynomial det(B − λI) of a matrix B representing R(., v)v, and then solve the linear system (B − λI)x = 0 for each root λ.
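
As a small generic worked example (any symmetric 2×2 matrix would do; this one is not taken from the thread): for [itex]B = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}[/itex] the characteristic polynomial is [itex]\det(B - \lambda I) = (2-\lambda)^2 - 1 = (\lambda - 1)(\lambda - 3)[/itex], so the eigenvalues are 1 and 3. Solving [itex](B - \lambda I)x = 0[/itex] gives the eigenvectors [itex](1, -1)^T[/itex] for [itex]\lambda = 1[/itex] and [itex](1, 1)^T[/itex] for [itex]\lambda = 3[/itex], which are orthogonal, as expected for a self-adjoint map.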

4. Can a matrix have more than one invariant vector/eigenvector of R(., v)v?

Yes, a matrix can have multiple eigenvectors. In fact, any nonzero scalar multiple of an eigenvector is again an eigenvector with the same eigenvalue, so there are always infinitely many eigenvectors; what is bounded is the number of linearly independent ones, which for an n × n matrix is at most n. Each eigenvector corresponds to exactly one eigenvalue, although several independent eigenvectors may share the same eigenvalue.
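
Statement 1) above gives an extreme example of this: R(., v)v acts as the identity on the whole subspace [itex](\mathbb Cv)^\perp[/itex], so every nonzero vector in that subspace is an eigenvector with the same eigenvalue 1, and the eigenspace for the eigenvalue 1 contains all of [itex](\mathbb Cv)^\perp[/itex].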

5. What is the significance of the eigenvalues of R(., v)v?

The eigenvalues of R(., v)v provide information about the behavior of the linear transformation: along the corresponding eigendirection, the map stretches (|λ| > 1), compresses (|λ| < 1), or flips (λ < 0) the vector. In physics, eigenvalues can represent physical quantities such as energy or angular momentum. In engineering, they can indicate stability or instability in a system.
