Finding a Linear Transform for Contraction of Vectors

MikeLizzi
Hi folks,

This is my first post here. I hope this is the right forum for this question.

I am trying to come up with a linear transform that will take as input a vector (x, y, z) and output a vector that is scaled in the direction of another vector.

For example:
Suppose I have the corners of a square defined by the four vectors
(4, 4, 0)
(-4, 4, 0)
(-4, -4, 0)
(4, -4, 0)

I want to scale those vectors by 50% in the direction specified by the vector < 1, 1, 0 >

I want to end up with the four vectors
(2, 2, 0)
(-4, 4, 0)
(-2, -2, 0)
(4, -4, 0)

The initial square has been “squashed” by 50% in the northeast/southwest direction.
Can anybody come up with a transform for that?
 
So you want a matrix of the form
\begin{pmatrix}a &amp; b &amp; c \\ d &amp; e &amp; f \\ g &amp; h &amp; i\end{pmatrix}
such that
\begin{pmatrix}a &amp; b &amp; c \\ d &amp; e &amp; f \\ g &amp; h &amp; i\end{pmatrix}\begin{pmatrix}4 \\ 4 \\ 0\end{pmatrix}= \begin{pmatrix}2 \\ 2 \\ 0\end{pmatrix}
That gives the three equations 4a+ 4b= 2, 4d+ 4e= 2, and 4g+ 4h= 0.

You also want
\begin{pmatrix}a &amp; b &amp; c \\ d &amp; e &amp; f \\ g &amp; h &amp; i\end{pmatrix}\begin{pmatrix}-4 \\ 4 \\ 0\end{pmatrix}= \begin{pmatrix}-4 \\ 4 \\ 0\end{pmatrix}
That gives the three equations -4a+ 4b= -4, -4d+ 4e= 4, -4g+ 4h= 0.

Doing the same with the other two points will give you a total of twelve equations for the nine values a, b, c, d, e, f, g, h, and i. But I think you will find the last three redundant: once you have the first three points, the requirement that the figure be a parallelogram fixes the fourth.
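
For what it's worth, here is a minimal sketch of setting that system up and solving it numerically, assuming NumPy (the variable names are mine, not from the post). One thing to notice: because every corner has z = 0, the third column of the matrix is never constrained by these equations, so a least-squares solve just returns the minimum-norm choice (zeros) there; a transform that leaves the z-direction alone would have a 1 in the bottom-right entry instead.

```python
import numpy as np

corners = np.array([[ 4,  4, 0],
                    [-4,  4, 0],
                    [-4, -4, 0],
                    [ 4, -4, 0]], dtype=float)
targets = np.array([[ 2,  2, 0],
                    [-4,  4, 0],
                    [-2, -2, 0],
                    [ 4, -4, 0]], dtype=float)

# Each corner contributes three equations; the unknowns are the nine
# entries a..i of the matrix, stored row-major.
rows, rhs = [], []
for p, q in zip(corners, targets):
    for r in range(3):                 # one equation per output component
        coeff = np.zeros(9)
        coeff[3 * r:3 * r + 3] = p     # selects row r of the unknown matrix
        rows.append(coeff)
        rhs.append(q[r])

entries, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
M = entries.reshape(3, 3)
print(M)                # [[0.75 -0.25 0], [-0.25 0.75 0], [0 0 0]] (min-norm)
print(M @ corners.T)    # columns reproduce the four target corners
```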
 
Thank you HallsofIvy.

But I’m looking for a formula that will convert any vector. I will be using this formula in a computer program that will contract any shape by any amount in any direction.
This is my current strategy:

For any given point (x, y, z), direction vector (vx, vy, vz), and scale g:
Rotate the point so that the direction vector lines up with the x-axis.
That means two separate rotations for a point in 3D.
Then multiply the x value by the scale.
Then back out the two rotations in reverse order.

I built the following:
Matrix A rotates the point about the Z-axis
Matrix B rotates the point about the X-axis
Matrix C scales only the x dimension.
Matrix D is the inverse of B
Matrix E is the inverse of A

So if P (x, y, z) is the input and P’ (x’, y’, z’) is the output, I have
P’ = [E][D][C][B][A]P
Unfortunately, the strategy is not working.
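
In case a concrete version helps with debugging, here is a minimal sketch of that rotate / scale / un-rotate plan, assuming NumPy (rot_x, rot_z, and contract_along are names I made up). One caveat worth flagging: a rotation about Z applied first can only land the direction on the x-axis if the direction already lies in the xy-plane, so this sketch rotates about X first (to zero the z-component of the direction) and then about Z (to zero the y-component), scales x, and backs both rotations out in reverse order.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def contract_along(v, g):
    vx, vy, vz = (float(t) for t in v)
    Rx = rot_x(-np.arctan2(vz, vy))       # kill the z-component of v
    w = Rx @ np.array([vx, vy, vz])
    Rz = rot_z(-np.arctan2(w[1], w[0]))   # then kill the y-component
    S = np.diag([g, 1.0, 1.0])            # scale only along x
    # Apply Rx, then Rz, scale, then back out the rotations in reverse order.
    return Rx.T @ Rz.T @ S @ Rz @ Rx

T = contract_along((1, 1, 0), 0.5)
print(T @ np.array([4, 4, 0]))            # ~ [2. 2. 0.]
print(T @ np.array([-4, 4, 0]))           # ~ [-4. 4. 0.]
```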
 
So you want a matrix T that scales by a factor of g along the vector v, while leaving everything orthogonal to v fixed. That is, Tv = gv, and Tu = u for all u orthogonal to v. You now have the eigenvalues and eigenspaces of T. Since the eigenspaces are orthogonal, to find T all you need is an orthogonal matrix Q such that Qv is on the first coordinate (x) axis; then you'll have T = Q^T D Q, where D = diag(g, 1, ..., 1) is the diagonal matrix of eigenvalues.

Observe that since your eigenvalues are real and eigenspaces are orthogonal, T will be a symmetric matrix.
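
In case it helps to see this concretely, here is a minimal sketch of that construction assuming NumPy (the helper name contraction_matrix is mine): complete v/|v| to an orthonormal basis with a QR factorization, take Q to be that basis written as rows, set D = diag(g, 1, 1), and form T = Q^T D Q.

```python
import numpy as np

def contraction_matrix(v, g):
    v = np.asarray(v, dtype=float)
    u = v / np.linalg.norm(v)
    # Complete u to an orthonormal basis: QR of a matrix whose first column
    # is u gives an orthogonal factor whose first column spans the same line
    # as u and whose other columns span the plane orthogonal to u.
    basis, _ = np.linalg.qr(np.column_stack([u, np.eye(3)[:, 1:]]))
    Q = basis.T                         # rows: the u-direction, then its orthogonal complement
    D = np.diag([g, 1.0, 1.0])          # eigenvalues g, 1, 1
    return Q.T @ D @ Q

T = contraction_matrix((1, 1, 0), 0.5)
print(np.round(T, 3))                   # symmetric, as expected
print(T @ np.array([4, 4, 0]))          # -> [2. 2. 0.]
print(T @ np.array([-4, 4, 0]))         # -> [-4. 4. 0.]
```

If I've multiplied it out correctly, Q^T D Q also collapses to the closed form T = I + (g - 1) v vᵀ / (v·v), which may be the single formula the program needs.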
 
adriank said:
So you want a matrix T that scales by a factor of g along the vector v, while leaving everything orthogonal to v fixed. That is, Tv = gv, and Tu = u for all u orthogonal to v. You now have the eigenvalues and eigenspaces of T. Since the eigenspaces are orthogonal, to find T all you need is an orthogonal matrix Q such that Qv is on the first coordinate (x) axis; then you'll have T = Q^T D Q, where D = diag(g, 1, ..., 1) is the diagonal matrix of eigenvalues.

Observe that since your eigenvalues are real and eigenspaces are orthogonal, T will be a symmetric matrix.

This sounds great, adriank, but what does Q look like? I'm thinking it has to be a transform that rotates v into the x-axis. And the inverse of Q would rotate it back. I got something that works when u and v are two dimensional vectors.

But my three dimensional attempt fails.
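
If it helps, here is one concrete way to write Q down in 3D, assuming NumPy (the names are mine): take the first row of Q to be v normalized, the second row to be the normalized cross product of a coordinate axis (one not parallel to v) with v, and the third row to be the cross product of the first two. Then Q is orthogonal, Qv lies on the positive x-axis, and Qᵀ rotates it back.

```python
import numpy as np

def axis_frame(v):
    """Orthogonal Q (rows orthonormal) with Q @ v on the positive x-axis."""
    v = np.asarray(v, dtype=float)
    r1 = v / np.linalg.norm(v)
    helper = np.eye(3)[np.argmin(np.abs(r1))]   # axis least aligned with v
    r2 = np.cross(helper, r1)
    r2 /= np.linalg.norm(r2)
    r3 = np.cross(r1, r2)
    return np.vstack([r1, r2, r3])

v = np.array([1.0, 1.0, 0.0])
Q = axis_frame(v)
print(Q @ v)                                    # -> [1.41421356 0. 0.]

# Same contraction as before: scale by 0.5 along v, leave the rest alone.
T = Q.T @ np.diag([0.5, 1.0, 1.0]) @ Q
print(T @ np.array([4.0, 4.0, 0.0]))            # -> [2. 2. 0.]
```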
 
I think I got it!

My problem had more to do with the syntax of the programming language.

Thank you all.
 