Proving Isometries: A Step-By-Step Guide

Doradus
Hello,

I'm trying to prove the following statements, but I'm stuck.

Let ##V=\mathbb{R}^n## be equipped with the standard inner product and the standard basis S.
Let ##W \subseteq V## be a subspace of V and let ##W^\bot## be its orthogonal complement.

a) Show that there is exactly one linear map ##\Phi:V \rightarrow V## with ##\Phi|_W=id_W## and ##\Phi|_{W^\bot}=-id_{W^\bot}##.

b) Show that V has an orthonormal basis B consisting of eigenvectors of ##\Phi##, and give ##D_{BB}(\Phi)##.

c) Show that ##D_{BS}(id_V)## and ##D_{SS}(\Phi)## are orthogonal matrices.

For a) I have the following incomplete derivation:

Let ##a_1,\dots,a_n## be an orthonormal basis of W and let ##b_1,\dots,b_n## be an orthonormal basis of ##W^\bot##.
Then ##\Phi## is defined by ##\Phi: a_i \mapsto a_i,\; b_j \mapsto -b_j## with ##1\le i\le n## and ##1\le j\le n##. We can see that the ##a_i## and ##b_j## are eigenvectors of ##\Phi##.

And now I'm stuck. I'm sure I have seen a proof along these lines somewhere, but I don't remember where. Is this even a good starting point, or a dead end?
Well, I'm not very good at mathematical proofs.
But maybe someone can help me with the next step, or someone has another idea for how to prove this.
Thanks in advance.
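As a concrete sanity check of the intended map, here is a minimal numpy sketch with a hypothetical choice of W in ##\mathbb{R}^3## (the specific vectors are just for illustration, not part of the problem):

```python
import numpy as np

# Hypothetical example: W = span{a1, a2}, W^perp = span{b1} in R^3.
a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([0.0, 1.0, 0.0])
b1 = np.array([0.0, 0.0, 1.0])

# Phi sends each a_i to a_i and each b_j to -b_j.
# Equivalently, Phi = P_W - P_{W^perp} = 2 P_W - I, with P_W the projection onto W.
P_W = np.outer(a1, a1) + np.outer(a2, a2)
Phi = 2 * P_W - np.eye(3)

print(Phi @ a1)  # fixed by Phi
print(Phi @ b1)  # negated by Phi
```

This just confirms numerically that the map you wrote down behaves as intended on the two subspaces.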
 
I am somewhat unfamiliar with your notation.
Could you please provide a bit more detail about what is meant by:
##\Phi |_W,\ id_W,\ D_{BB},\ D_{BS}, \text{ and } D_{SS}##

One thing I notice right off is that you define two n-dimensional basis sets -- one spanning W and the other spanning ##W^\perp##. With n vectors you would already span all of V, so the two sets cannot both have n elements.
Let ##\{ a_i\}_{i=1}^n## be a basis set for V, ordered in such a way that ##\exists k,## such that ## \{ a_i\}_{i=1}^k## is a basis set for ##W## and ## \{ a_i\}_{i=k+1}^n## is a basis set for ##W^\perp##.
 
##\Phi |_W## means ##\Phi## restricted to ##W##.
##id_W## is the identity function on W, ##w \mapsto w##.
##D_{BB}##: matrix with respect to basis B.
##D_{SS}##: matrix with respect to basis S.
##D_{BS}##: I am not sure. :-)

Well, since I'm not sure what ##D_{BS}## means, I think c) is not that important. I'm more interested in a) and b).
 
I see, thanks for the explanation.
For the first one, assume there are two such linear maps and then show that they must be equal. Since a linear map is uniquely determined by its matrix representation, showing that the two matrix representations must be the same should work.
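One way to make the uniqueness argument concrete (a sketch, using the orthogonal decomposition of V):

```latex
% Every v in V decomposes uniquely as v = w + w' with w in W, w' in W^perp.
% Linearity plus the two conditions then force the value of Phi:
\Phi(v) = \Phi(w + w') = \Phi(w) + \Phi(w') = w - w'.
% The right-hand side depends only on v, so any two linear maps
% satisfying both conditions agree on all of V.
```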
##D_{BB} ## is a matrix that takes an input from basis B and gives an output in basis B.
##D_{SS} ## is a matrix that takes an input from basis S and gives an output in basis S.
Then, ##D_{BS} ## should be a matrix that takes an input from basis B and gives an output in basis S.
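If it helps, here is a small numpy illustration of that reading of ##D_{BS}##, with a hypothetical orthonormal basis B of ##\mathbb{R}^2## (the vectors are made up for illustration):

```python
import numpy as np

# Hypothetical orthonormal basis B of R^2, each vector written in S-coordinates.
b1 = np.array([1.0, 1.0]) / np.sqrt(2)
b2 = np.array([1.0, -1.0]) / np.sqrt(2)

# D_BS takes B-coordinates in and gives S-coordinates out,
# so its columns are simply the B-vectors expressed in S.
D_BS = np.column_stack([b1, b2])

# The vector with B-coordinates (1, 0) is b1 itself, expressed in S:
print(D_BS @ np.array([1.0, 0.0]))
```

Because B is orthonormal, the columns of ##D_{BS}## are orthonormal, which is exactly why the change-of-basis matrix in part c) comes out orthogonal.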

Look at a simple example: let ##V = \mathbb{R}^3##, so ##S = \{ \hat x, \hat y, \hat z\}##, let ##W## be the xy-plane, and ##W^\perp## the span of ##\hat z##.
The matrix representation is ##[\Phi ]_{SS} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{bmatrix}##.
In your question 2, you are asked to give the matrix representation in the eigenbasis B...which should be pretty similar.
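To tie this to part c), one can also check numerically that the example matrix is orthogonal and read off its eigenstructure (a sketch using the matrix above):

```python
import numpy as np

# [Phi]_SS from the R^3 example: identity on the xy-plane, -1 on the z-axis.
Phi = np.diag([1.0, 1.0, -1.0])

# c) Phi is orthogonal: Phi^T Phi = I (reflections preserve the inner product).
print(np.allclose(Phi.T @ Phi, np.eye(3)))

# b) Eigenvalues are +1 (multiplicity dim W) and -1 (multiplicity dim W^perp);
# here S is already an eigenbasis, so D_BB(Phi) is this same diagonal matrix.
print(np.linalg.eigvalsh(Phi))
```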
 