Linear Transformation to Shrink/Expand Along a Given Direction

birulami
Assuming that shrinking/expanding in a given direction is a linear transformation in R^3, what would be the matrix to perform it?

To be more precise, given a vector

e=\left(\begin{array}{c}e_1\\e_2\\e_3\end{array}\right)

with a length of 1, i.e. ||e||=1, and a factor \lambda, I am looking for a matrix A such that for every vector x the vector y=A\cdot x has a projection on e that is longer than the projection of x by the factor \lambda, while all components orthogonal to e are kept unchanged.

I came up with a matrix A that contains squares and products of the e_i and, worse, would contain complex numbers for \lambda<1. I expected something simpler. Any ideas?

Thanks,
Harald.
 
A fairly standard way of writing a linear transformation as a matrix is to see what it does to each of your basis vectors: write the result of applying the transformation to \vec{i} as a linear combination of the basis vectors, and those coefficients form the first column, etc. In this case, applying this linear transformation to \vec{i} gives the projection of \vec{i} on \vec{e} multiplied by \lambda.

In particular, since [e_1, e_2, e_3] has length 1, the projection of \vec{i} on it is [e_1^2, e_1e_2, e_1e_3], the projection of \vec{j} is [e_1e_2, e_2^2, e_2e_3], and the projection of \vec{k} is [e_1e_3, e_2e_3, e_3^2]. The matrix is
\left[\begin{array}{ccc} \lambda e_1^2 & \lambda e_1e_2 & \lambda e_1e_3 \\ \lambda e_1e_2 & \lambda e_2^2 & \lambda e_2e_3 \\ \lambda e_1e_3 & \lambda e_2e_3 & \lambda e_3^2 \end{array}\right]

If you have only squares and products, as my formula does, I don't see how you could possibly get complex numbers!
 
Something seems to be missing in your matrix. Try e=(1,0,0). The matrix then has just \lambda in the top left corner and zeros elsewhere. Now apply it to the vector (1,1,1): the result is (\lambda,0,0), so the components orthogonal to e are not kept but killed. :-(
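A quick numeric check of this counterexample (a sketch, not from the thread itself; NumPy assumed):

```python
import numpy as np

# With e = (1, 0, 0), the matrix lam * e e^T has lam in the top-left
# corner and zeros elsewhere, so it only keeps the scaled projection.
lam = 2.0
e = np.array([1.0, 0.0, 0.0])
A = lam * np.outer(e, e)

y = A @ np.array([1.0, 1.0, 1.0])
# The components orthogonal to e come out as zero, i.e. they are killed:
assert np.allclose(y, [lam, 0.0, 0.0])
```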

Harald.
 
You're right- I just calculated the projection onto [e_1, e_2, e_3]. I'll try again. The vector [1, 0, 0] has projection [e_1^2, e_1e_2, e_1e_3] onto [e_1, e_2, e_3] and orthogonal component [1-e_1^2, -e_1e_2, -e_1e_3], so your transformation maps [1, 0, 0] to \lambda[e_1^2, e_1e_2, e_1e_3] + [1-e_1^2, -e_1e_2, -e_1e_3] = [(\lambda-1)e_1^2+1, (\lambda-1)e_1e_2, (\lambda-1)e_1e_3], and similarly for [0, 1, 0] and [0, 0, 1].

Unless I have made another silly error (quite possible) the matrix is:
\left[\begin{array}{ccc} (\lambda-1)e_1^2+1 & (\lambda-1)e_1e_2 & (\lambda-1)e_1e_3 \\ (\lambda-1)e_1e_2 & (\lambda-1)e_2^2+1 & (\lambda-1)e_2e_3 \\ (\lambda-1)e_1e_3 & (\lambda-1)e_2e_3 & (\lambda-1)e_3^2+1 \end{array}\right]
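This matrix is A = (\lambda-1)\,e e^T + I, and the two defining properties can be verified numerically (a sketch with an arbitrary unit vector and test vector of my choosing; NumPy assumed):

```python
import numpy as np

# Build A = (lam - 1) * e e^T + I for a unit vector e.
lam = 2.5
e = np.array([1.0, 2.0, 2.0]) / 3.0          # ||e|| = 1

A = (lam - 1.0) * np.outer(e, e) + np.eye(3)

x = np.array([0.3, -1.2, 0.7])               # arbitrary test vector
y = A @ x

# The projection of y on e is lam times the projection of x on e:
assert np.isclose(e @ y, lam * (e @ x))

# The component orthogonal to e is unchanged:
assert np.allclose(y - (e @ y) * e, x - (e @ x) * e)
```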
 
Great, thanks. I thought I made a mistake, because I hoped for something simpler.

The reason why I was talking about complex numbers was that I started from a vector k which, in your notation, would be k=\sqrt{\lambda-1}\cdot e. This vector formally combines the direction e and the shrink/expand factor \lambda. Obviously the square root will go complex for \lambda<1. But of course the complex numbers disappear again in the matrix itself.

The matrix can be written as k\cdot k^T+1_{diag}, i.e. k k^T plus the identity matrix --- Not that this tells me anything interesting, though :-)

Seasons Greetings,
Harald.
 