Regarding the Form of an Automorphism


Homework Help Overview

The discussion revolves around proving that a function \( f : \mathbb{R}^n \rightarrow \mathbb{R}^n \) is an automorphism if and only if it can be expressed in a specific linear form involving constants. The participants explore the implications of this form and the properties required for \( f \) to be classified as an automorphism.

Discussion Character

  • Mixed

Approaches and Questions Raised

  • Participants find the forward direction of the statement straightforward to prove, while the converse presents challenges. They consider specific cases, such as \( n = 1 \) and \( n = 2 \), and explore the preservation of structure through linear combinations of vectors. The use of standard basis vectors is suggested as a potential approach.

Discussion Status

There is an ongoing exploration of the properties of \( f \) and the necessary conditions for it to be an automorphism. Some participants have provided hints and guidance regarding the use of basis vectors and the implications of linear transformations. The discussion includes questioning the completeness of the initial definition and the existence of non-trivial linear transformations that are not automorphisms.

Contextual Notes

Participants note that the original definition of \( f \) does not guarantee it will be an automorphism, raising concerns about cases where the function could be trivial. The conversation also touches on the nature of linear transformations and their injectivity and surjectivity as critical factors in determining whether they are automorphisms.

e(ho0n3
[SOLVED] Regarding the Form of an Automorphism

Homework Statement
I want to prove the following:

\( f : \mathbb{R}^n \rightarrow \mathbb{R}^n \) is an automorphism if and only if \( f \) has the form

\[ \begin{pmatrix}x_1 \\ \vdots \\ x_n \end{pmatrix} \mapsto \begin{pmatrix} g_1(x_1, \ldots, x_n) \\ \vdots \\ g_n(x_1, \ldots, x_n) \end{pmatrix} \]

where

\[ g_i(x_1, \ldots, x_n) = a_{i1}x_1 + \ldots + a_{in}x_n \]

and the \( a_{ij} \) are constants.

The attempt at a solution
Proving that f is an automorphism given that it has the latter form is easy. Proving the converse is hard.

It took me a while to prove the case \( n = 1 \): for \( x \ne 0 \), \( f(1) = f(x/x) = (1/x)f(x) \), so \( f(x) = f(1)x \). For \( x = 0 \), \( f(0) = f(0 \cdot 0) = 0 \cdot f(0) = f(1) \cdot 0 \).
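The \( n = 1 \) argument can be sanity-checked with a quick numerical sketch: any map satisfying \( f(ru) = rf(u) \) is determined by its value at 1. The particular map below (multiplication by 3) is just a made-up example, not from the thread:

```python
# Sketch of the n = 1 case: a map satisfying f(r*u) = r*f(u)
# must equal multiplication by f(1).

def f(x):
    # Hypothetical example of a linear map on R: multiplication by 3.
    return 3.0 * x

slope = f(1.0)  # the map is determined by its value at 1
for x in [0.0, -2.0, 0.5, 7.0]:
    assert f(x) == slope * x  # f(x) = f(1) * x
```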

I was thinking about the \( n = 2 \) case. I can't employ the same method of proof as before, since I'm dealing with vectors now. I'm thinking that the preservation of structure, i.e. \( f(u + v) = f(u) + f(v) \) and \( f(ru) = rf(u) \), must in some fashion force \( f \) to have the form described in the statement. However, I don't see how this could possibly be.
 
Try using the standard basis vectors for R^n.
 
In what way? I thought about u and f(u) as linear combinations of the basis vectors and the structure preserving properties. I didn't obtain further insight into the problem.

Perhaps another hint is in order.
 
Let \( \{e_1, \ldots, e_n\} \) be the standard basis of \( \mathbb{R}^n \), and let \( u_1 = f(e_1), \ldots, u_n = f(e_n) \). Then given \( x \in \mathbb{R}^n \), with \( x = x_1e_1 + \ldots + x_ne_n \), we have \( f(x) = x_1u_1 + \ldots + x_nu_n \).
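The hint can be made concrete with a short numpy sketch: treat a linear \( f \) as a black box, read off \( u_i = f(e_i) \), and check that the matrix with those columns reproduces \( f \). The matrix `A_true` below is an arbitrary invented example:

```python
import numpy as np

n = 3
A_true = np.array([[2.0, 1.0, 0.0],
                   [0.0, 1.0, 3.0],
                   [1.0, 0.0, 1.0]])

def f(x):
    # A linear map on R^n, treated here as a black box.
    return A_true @ x

# The columns of A are the images u_i = f(e_i) of the standard basis vectors.
A = np.column_stack([f(e) for e in np.eye(n)])

# For any x, f(x) = x_1 u_1 + ... + x_n u_n = A x.
x = np.array([1.0, -2.0, 4.0])
assert np.allclose(f(x), A @ x)
```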
 
Ah, I see now. Man, I feel stupid for not being able to see that. Thanks a lot.
 
e(ho0n3 said:
Proving that f is an automorphism given that it has the latter form is easy.

I wrote that thinking that it would be easy but as it turns out, it isn't. More specifically, showing that f preserves structure is easy:

Let

\[ \vec{x} = \begin{pmatrix}x_1 \\ \vdots \\ x_n\end{pmatrix} \]

and

\[ \vec{y} = \begin{pmatrix}y_1 \\ \vdots \\ y_n\end{pmatrix} \]

Let a and b be two scalars.

\begin{align*}
f(a\vec{x} + b\vec{y}) &= f\left(\begin{pmatrix}ax_1 + by_1 \\ \vdots \\ ax_n + by_n\end{pmatrix}\right)
= \begin{pmatrix} g_1(ax_1 + by_1, \ldots, ax_n + by_n) \\ \vdots \\ g_n(ax_1 + by_1, \ldots, ax_n + by_n) \end{pmatrix}
= \begin{pmatrix} ag_1(x_1, \ldots, x_n) + bg_1(y_1, \ldots, y_n) \\ \vdots \\ ag_n(x_1, \ldots, x_n) + bg_n(y_1, \ldots, y_n) \end{pmatrix} \\
&= a\begin{pmatrix} g_1(x_1, \ldots, x_n) \\ \vdots \\ g_n(x_1, \ldots, x_n) \end{pmatrix} + b\begin{pmatrix} g_1(y_1, \ldots, y_n) \\ \vdots \\ g_n(y_1, \ldots, y_n) \end{pmatrix}
= af(\vec{x}) + bf(\vec{y})
\end{align*}

Note that I've employed the fact that each \( g_i \) is a homomorphism, i.e. linear.

However, showing that f is bijective is hard:

To demonstrate that \( f \) is injective, suppose \( f(\vec{x}) = f(\vec{y}) \). Then \( f(\vec{x}) - f(\vec{y}) = \vec{0} \), which by the above implies that \( f(\vec{x} - \vec{y}) = \vec{0} \). Now \( \vec{x} - \vec{y} \) could very well be \( \vec{0} \), since \( f(\vec{0}) = \vec{0} \). However, there may be another vector \( \vec{v} \ne \vec{0} \) such that \( f(\vec{v}) = \vec{0} \).

Another thing to notice is that \( f(\vec{x} - \vec{y}) = \vec{0} \) implies that

\[ a_{i1}(x_1 - y_1) + \ldots + a_{in}(x_n - y_n) = 0 \]

for \( i = 1, \ldots, n \). This yields a homogeneous system of linear equations. Obviously, \( x_j = y_j \) is one solution. I don't know how to show that this is the only solution though.

Showing that \( f \) is surjective is just as difficult: given a vector \( \vec{y} \) in the codomain, what vector \( \vec{x} \) satisfies \( \vec{y} = f(\vec{x}) \)? I can write \( \vec{y} \) as a linear combination of the standard basis vectors, but then I would need to determine which vectors the basis vectors are the images of.
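Both difficulties can be illustrated numerically (a minimal sketch; the matrix below is an arbitrary example, not from the thread): the homogeneous system \( A\vec{v} = \vec{0} \) has only the trivial solution exactly when \( \det A \ne 0 \), and in that case every \( \vec{y} \) is hit by \( \vec{x} = A^{-1}\vec{y} \).

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])   # an example coefficient matrix (a_ij)

# Injectivity: f(x - y) = 0 is the homogeneous system A v = 0,
# which has only the trivial solution exactly when det(A) != 0.
assert abs(np.linalg.det(A)) > 1e-12

# Surjectivity: for any target y, x = A^{-1} y satisfies A x = y.
y = np.array([5.0, 6.0])
x = np.linalg.solve(A, y)
assert np.allclose(A @ x, y)
```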
 
I wanted to comment on this earlier, but I assumed you just forgot to post that there were some conditions on the \( g_i \). As it stands, the definition of \( f \) you're given does not guarantee that it will be an automorphism. For example, if \( g_1 = g_2 = \ldots = g_n = 0 \), then the \( g_i \) certainly satisfy the requirements, and \( f(x) = 0 \) for all \( x \).
 
You're right.

I'm trying to think of a non-trivial map that is a linear transformation but is not an automorphism. So far I can't think of any. Perhaps all non-trivial linear transformations are automorphisms?
 
e(ho0n3 said:
You're right.

I'm trying to think of a non-trivial map that is a linear transformation but is not an automorphism. So far I can't think of any. Perhaps all non-trivial linear transformations are automorphisms?

Any non-injective or non-surjective endomorphism is not an automorphism. For example, any projection onto a nontrivial proper subspace, e.g. project R^2 onto the x-axis.
 
Dang it. Why didn't I think of that? If \( f \) is given by

\[ \binom{x}{y} \mapsto \binom{x}{0} \]

then \( f \) is a non-trivial linear transformation that is not injective and hence not an automorphism.
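The projection example can be checked directly in numpy (a minimal sketch): two distinct inputs share the same image, so the map is not injective and thus not an automorphism.

```python
import numpy as np

# Projection of R^2 onto the x-axis: (x, y) |-> (x, 0).
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])

u = np.array([1.0, 2.0])
v = np.array([1.0, 5.0])
assert not np.allclose(u, v)
assert np.allclose(P @ u, P @ v)        # distinct inputs, same image: not injective
assert abs(np.linalg.det(P)) < 1e-12    # singular, so not an automorphism
```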

I will mark this thread as solved. Thanks everyone.
 
