Orthogonal Transformation in Euclidean Space

Sudharaka
Hi everyone, :)

Here's a question I encountered recently, along with my answer. Let me know if you see any mistakes; I would really appreciate any comments, shorter methods, etc. :)

Problem:

Let \(u,\,v\) be two vectors in a Euclidean space \(V\) such that \(|u|=|v|\). Prove that there is an orthogonal transformation \(f:\, V\rightarrow V\) such that \(v=f(u)\).

Solution:

We assume that \(u\) and \(v\) are nonzero; otherwise the result holds trivially.

Let \(B\) denote the associated symmetric bilinear form of the Euclidean space. Let us define the linear transformation \(f\) as,

\[f(x)=\begin{cases}x&\mbox{if}&x\neq u\\v&\mbox{if}&x=u\end{cases}\]

It's clear that \(B(f(x),\,f(y))=B(x,\,y)\) whenever \(x,\,y\neq u\). Also, \(B(f(u),\,f(u))=B(v,\,v)\), and since \(|v|=|u|\) implies \(B(v,\,v)=B(u,\,u)\), we have \(B(f(u),\,f(u))=B(u,\,u)\).

It remains to show that, \(B(f(x),\,f(u))=B(x,\,u)\) for \(x\neq u\).

\[B(f(v+u),\,f(v+u))=B(f(v),\,f(v))+2B(f(v),\,f(u))+B(f(u),\,f(u))\]

Also since \(v+u\neq u\),

\[B(f(v+u),\,f(v+u))=B(v+u,\,v+u)=B(v,\,v)+2B(v,\,u)+B(u,\,u)\]

Using the above two results and the fact that \(B(u,\,u)=B(v,\,v)\) we get,

\[B(f(v),\,f(u))=B(v,\,u)\]

Now consider \(B(f(x+v),\,f(x+u))\).

Case I: \(x+v\neq u\)

\[B(f(x+v),\,f(x+u))=B(f(x),\,f(x))+B(f(x),\,f(u))+B(f(v),\,f(x))+B(f(v),\,f(u))\]

Also,

\[B(f(x+v),\,f(x+u))=B(x+v,\,x+u)=B(x,\,x)+B(x,\,u)+B(x,\,v)+B(v,\,u)\]

Using the above two results and the fact that \(B(f(v),\,f(u))=B(v,\,u)\) we get,

\[B(f(x),\,f(u))=B(x,\,u)\]

Case II: \(x+v=u\)

\[B(x,\,v)=B(u-v,\,v)=B(u,\,v)-B(v,\,v)\]
\[B(x,\,u)=B(u-v,\,u)=B(u,\,u)-B(v,\,u)\]
Therefore, \[B(x,\,u)=-B(x,\,v)~~~~~~(1)\]

\[B(f(x),\,f(u))=B(f(u-v),\,f(u))=B(f(u),\,f(u))-B(f(v),\,f(u))=B(v,\,v)-B(v,\,v)=0\]

Then, since \(B(f(x),\,f(u))=B(x,\,v)=0\), by (1) we get \(B(x,\,u)=0\).

\[\therefore B(f(x),\,f(u))=B(x,\,u)\]
 
Sudharaka said:
Let us define the linear transformation \(f\) as,

\[f(x)=\begin{cases}x&\mbox{if}&x\neq u\\v&\mbox{if}&x=u\end{cases}\]
The problem with this is that the map $f$ can never be linear (unless $u=v$).

It may help to think in terms of a simple example. In the space $V=\mathbb{R}^2$, let $u=(1,0)$ and $v=(0,1)$. The only orthogonal transformations taking $u$ to $v$ are a rotation of the whole space through a right angle, or a reflection of the whole space in the line $y=x$. Either way, the transformation has to shift just about every vector in the space. The map that just takes $u$ to $v$ and leaves everything else fixed is not linear, and certainly not orthogonal.
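
To see the failure of linearity concretely, here is a minimal numerical sketch (Python with numpy; `naive_f` is just an illustrative name for the map defined above):

```python
# Check that the map fixing everything except u cannot be additive.
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

def naive_f(x):
    """Send u to v and fix every other vector (the proposed map)."""
    return v if np.allclose(x, u) else x

w = np.array([0.0, 1.0])         # any vector other than u
lhs = naive_f(u + w)             # u + w != u, so this is just u + w
rhs = naive_f(u) + naive_f(w)    # v + w
print(lhs, rhs)                  # [1. 1.] vs. [0. 2.]: additivity fails
```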

To prove this result, you need to construct a linear map $f$ taking $u$ to $v$. The way to do that is to define $f$ on an orthonormal basis for $V$ and then extend it by linearity to a map on the whole of $V$. Start by constructing an orthonormal basis $\{e_1,e_2,\ldots,e_n\}$ such that $e_1$ is a multiple of $u$. Then do the same for $v$, showing that there is an orthonormal basis $\{g_1,g_2,\ldots,g_n\}$ such that $g_1$ is a multiple of $v$. You can then define $f$ by $f(e_k) = g_k$ for $1\leqslant k\leqslant n$.

It should then be straightforward to check that the map $f$ is orthogonal.
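
For concreteness, here is a minimal computational sketch of that construction (Python with numpy, assuming the standard dot product; the helper `orthonormal_basis_from` is an illustrative name), applied to the $u=(1,0)$, $v=(0,1)$ example above:

```python
# Build orthonormal bases starting with u/|u| and v/|v|, then map one
# basis onto the other. Sketch only, for the standard dot product.
import numpy as np

def orthonormal_basis_from(x):
    """Gram-Schmidt applied to [x, e_1, ..., e_n], dropping dependent
    vectors, so the first basis vector is x/|x|."""
    basis = []
    for w in [x] + list(np.eye(x.size)):
        for b in basis:
            w = w - np.dot(w, b) * b
        if np.linalg.norm(w) > 1e-10:
            basis.append(w / np.linalg.norm(w))
    return np.column_stack(basis)      # columns: an orthonormal basis

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
E = orthonormal_basis_from(u)          # first column is u/|u|
G = orthonormal_basis_from(v)          # first column is v/|v|
T = G @ E.T                            # sends e_k to g_k, hence u to v

print(np.allclose(T @ u, v))           # True
print(np.allclose(T.T @ T, np.eye(2))) # True: T is orthogonal
```

In this case $T$ comes out as the reflection in the line $y=x$, one of the two orthogonal maps mentioned above.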
 
Thanks so much, Opalg, for the informative reply. I think I am getting the idea. First we choose an orthonormal basis \(\{e_1,e_2,\ldots,e_n\}\) such that \(e_1\) is a multiple of \(u\). Then, if we rotate this basis so as to align \(e_1\) with \(v\), we get the basis \(\{g_1,g_2,\ldots,g_n\}\). Since \(|v|=|u|\), the new basis has \(g_1\) a multiple of \(v\). Am I correct? Or is there a more formal way of doing this? :)
 
I think what Opalg is getting at is this:

If you have a linear map that takes a basis to a basis, it is certainly invertible.

For $e_1$, we can always choose $e_1 = u/|u|$, and use something like Gram-Schmidt to turn any basis extension we create into an orthogonal basis:

$\{e_1,\dots,e_n\}$.

The same process is then used to create the 2nd basis:

$\{g_1,\dots,g_n\}$, where $g_1 = v/|v|$.

We then DEFINE, for any $x \in V$:

$T(x) = T(c_1e_1 + \cdots + c_ne_n) = c_1g_1 + \cdots + c_ng_n$.

Note that $T(u) = T(|u|e_1) = |u|T(e_1) = |v|T(e_1)$ (since $|u| = |v|$)

$= |v|g_1 = |v|(v/|v|) = v$.

Now proving orthogonality is a bit of a mess to write explicitly, but the idea is this:

Since both bases are ORTHOGONAL (we can actually insist on orthonormal by scaling the basis vectors to unit vectors), we have:

$B(e_i,e_i) = B(g_i,g_i) = 1$
$B(e_i,e_j) = B(g_i,g_j) = 0,\ i \neq j$.

So if:

$x = c_1e_1 + \cdots + c_ne_n$
$y = d_1e_1 + \cdots + d_ne_n$, then:

$B(x,y) = B(c_1e_1 + \cdots + c_ne_n,\,d_1e_1 + \cdots + d_ne_n)$

$\displaystyle = \sum_{i,j} c_id_jB(e_i,e_j) = \sum_i c_id_i$

by the bilinearity of $B$ and the orthogonality of our basis.

Similarly, evaluating $B(T(x),T(y))$ gives the same answer, and there you go.
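
A quick numerical spot-check of that last claim (a sketch, assuming numpy, with the standard dot product playing the role of $B$):

```python
# A map sending one orthonormal basis to another preserves B(x,y) = x.y.
import numpy as np

rng = np.random.default_rng(0)
n = 4
E, _ = np.linalg.qr(rng.standard_normal((n, n)))  # columns: basis e_i
G, _ = np.linalg.qr(rng.standard_normal((n, n)))  # columns: basis g_i
T = G @ E.T                                       # T(e_k) = g_k

x = rng.standard_normal(n)
y = rng.standard_normal(n)
print(np.isclose((T @ x) @ (T @ y), x @ y))       # True
```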
 
Forgive the double-post, but I thought I would give an explicit example for $\Bbb R^3$.

For our symmetric Euclidean bilinear form, I will use the standard dot product. It is possible to use so-called "weighted" inner products, but they just add needlessly complicated calculations to the scenario.

For our first vector, we will take $u = (2,0,0)$. For the second, we will take $v = (1,\sqrt{2},1)$, which I think are perfectly reasonable choices.

For our first basis, the usual $\{(1,0,0),(0,1,0),(0,0,1)\}$ will do quite nicely. The second basis is a bit of a pain to come up with; we start with the unit vector:

$g_1 = (\frac{1}{2},\frac{\sqrt{2}}{2},\frac{1}{2})$.

To get a basis, we'll just add in $(0,1,0)$ and $(0,0,1)$ and apply Gram-Schmidt:

First, we calculate:

$(0,1,0) - \frac{(\frac{1}{2},\frac{\sqrt{2}}{2},\frac{1}{2})\cdot(0,1,0)}{(\frac{1}{2},\frac{\sqrt{2}}{2}, \frac{1}{2})\cdot(\frac{1}{2},\frac{\sqrt{2}}{2}, \frac{1}{2})}(\frac{1}{2},\frac{\sqrt{2}}{2},\frac{1}{2})$

$= (\frac{-\sqrt{2}}{4},\frac{1}{2},\frac{-\sqrt{2}}{4})$

and normalizing this gives us:

$g_2 = (\frac{-1}{2},\frac{\sqrt{2}}{2},\frac{-1}{2})$

Finally, we calculate:

$(0,0,1) - \frac{(\frac{1}{2},\frac{\sqrt{2}}{2},\frac{1}{2})\cdot(0,0,1)}{(\frac{1}{2},\frac{\sqrt{2}}{2}, \frac{1}{2})\cdot(\frac{1}{2},\frac{\sqrt{2}}{2}, \frac{1}{2})}(\frac{1}{2},\frac{\sqrt{2}}{2},\frac{1}{2}) - \frac{(\frac{-1}{2},\frac{\sqrt{2}}{2},\frac{-1}{2})\cdot(0,0,1)}{(\frac{-1}{2},\frac{\sqrt{2}}{2},\frac{-1}{2})\cdot(\frac{-1}{2},\frac{\sqrt{2}}{2},\frac{-1}{2})}(\frac{-1}{2},\frac{\sqrt{2}}{2},\frac{-1}{2})$

$= (\frac{-1}{2},0,\frac{1}{2})$ which upon normalization gives us:

$g_3 = (\frac{-\sqrt{2}}{2},0,\frac{\sqrt{2}}{2})$.

It is clear, then, that the orthogonal linear mapping we are looking for is given by the matrix (relative to the standard basis for $\Bbb R^3$):

$[T] = \begin{bmatrix}\frac{1}{2}&\frac{-1}{2}&\frac{-\sqrt{2}}{2}\\ \frac{\sqrt{2}}{2}&\frac{\sqrt{2}}{2}&0\\ \frac{1}{2}&\frac{-1}{2}&\frac{\sqrt{2}}{2} \end{bmatrix}$

which obviously (heh!) has determinant 1, and is orthogonal, and moreover:

$T(u) = T(2,0,0) = (1,\sqrt{2},1) = v$.
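
For anyone who wants to double-check the arithmetic, a short numerical verification of this matrix (a sketch, assuming numpy):

```python
# Verify: [T] is orthogonal, has determinant 1, and sends u to v.
import numpy as np

s = np.sqrt(2)
T = np.array([[1/2, -1/2, -s/2],
              [s/2,  s/2,  0.0],
              [1/2, -1/2,  s/2]])

print(np.allclose(T.T @ T, np.eye(3)))       # True: columns orthonormal
print(np.isclose(np.linalg.det(T), 1.0))     # True: determinant 1
print(T @ np.array([2.0, 0.0, 0.0]))         # [1.  1.41421356  1.] = v
```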
 
Hi Deveno,

Thanks very much for both of your posts. After reading them I understood almost everything that is required to solve the problem. Now I think I should read more about the Gram-Schmidt process. :)
 