[Linear Algebra] Help with Linear Transformations part 2

The discussion focuses on proving that the sets S and A are subspaces of the vector space V, where S consists of vectors invariant under the linear transformation j and A consists of vectors that are negated by j. It is established that S and A intersect only at the zero vector, which is crucial for demonstrating that V can be expressed as the direct sum of S and A. The conversation also highlights the challenge of applying this understanding to relate symmetric and skew-symmetric matrices through a suitable linear transformation j. Additionally, references to external resources are provided to clarify the concepts and methodologies involved. The overall goal is to solidify the understanding of linear transformations and their implications in linear algebra.
iJake

Homework Statement

(a) Let ##V## be an ##\mathbb R##-vector space and ##j : V \rightarrow V## a linear transformation such that ##j \circ j = id_V##. Now, let

##S = \{v \in V : j(v) = v\}## and ##A = \{v \in V : j(v) = -v\}##

Prove that ##S## and ##A## are subspaces and that ##V = S \oplus A##.

(b) Deduce from part (a) the decomposition of square matrices as the direct sum of the symmetric and skew-symmetric matrices (by finding a convenient linear transformation ##j##).

[apologies if that last part is a bit weird sounding, I'm translating from Spanish]

Homework Equations



The Attempt at a Solution



a) The proof that ##S## and ##A## are subspaces is fairly trivial, so I don't include it. Showing that ##V = S \oplus A## is trickier. I can see that ##S \cap A = \{0\}##, but how do I formalize that, and how does it lead to proving that ##V## is the direct sum of ##S## and ##A##?
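
For intuition I've been picturing a small example of my own (not from the problem): in ##\mathbb R^2##, take ##j(x, y) = (y, x)##, which satisfies ##j \circ j = id##. Then ##S## is the line ##y = x##, ##A## is the line ##y = -x##, and every vector visibly splits across the two lines:

$$(x, y) = \left( \frac{x+y}{2}, \frac{x+y}{2} \right) + \left( \frac{x-y}{2}, \frac{y-x}{2} \right).$$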

b) is also confusing me. I found this and it looks remarkably similar to my problem, but I do not know how to apply it here.

Thank you Physics Forums for any help.
 
I apologize for taking so long to reply to this thread! I was away.

I'm still finding this one a bit tricky. At first I took an arbitrary ##u \in S## and ##w \in A##, but I don't think that was getting me anywhere. Now I've tried setting

##u = j(v) + v##, hoping ##u \in A##, so that ##u = -v + v = 0##,
##w = j(v) - v##, hoping ##w \in S##, so that ##w = v - v = 0##,

which gives

##v = u + w = (j(v) + v) + (j(v) - v) = 0 + 0 = 0.##

Is this the idea I'm meant to apply? I'm dubious, since I don't think I've really used the fact that ##j \circ j = id_V##.
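
(One computation I can do that genuinely uses it: ##j(j(v) + v) = (j \circ j)(v) + j(v) = v + j(v)##, so ##j(v) + v## is fixed by ##j## and lies in ##S##, not in ##A## as I hoped above; likewise ##j(j(v) - v) = v - j(v) = -(j(v) - v)##, so ##j(v) - v \in A##. Perhaps I have the memberships backwards.)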

I suppose once I'm sure about the methodology for the first part, I'll be better equipped to deduce the relationship between symmetric and skew-symmetric matrices. What does the problem mean when it says to find a convenient linear map ##j##?

Thanks again for taking the time to assist me.
 
Check out the relatively prime decomposition theorem on page 26 of these notes:

http://alpha.math.uga.edu/%7Eroy/4050sum08.pdf

It says that since your map satisfies the polynomial ##X^2 - 1 = (X - 1)(X + 1)##, your space is a direct sum of subspaces on which your map satisfies either ##X - 1## or ##X + 1##.

It is not entirely trivial, since the proof uses the Euclidean algorithm, as I recall.
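
Concretely, a sketch of how that unwinds in your case: the Euclidean algorithm gives ##\tfrac{1}{2}(X + 1) - \tfrac{1}{2}(X - 1) = 1##, and applying both sides to a vector ##v## yields

$$v = \frac{v + j(v)}{2} + \frac{v - j(v)}{2},$$

where ##j \circ j = id_V## is exactly what you need to check that the first summand lies in ##S## and the second in ##A##.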

Or look at the equivalent decomposition lemma at the beginning of chapter 4 of these notes:

http://alpha.math.uga.edu/%7Eroy/laprimexp.pdf
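
For part (b), here is a quick numerical sanity check of the resulting decomposition, a sketch assuming the convenient ##j## is the transpose map ##j(M) = M^T## (which satisfies ##j \circ j = id##; this choice isn't spelled out above):

Code:
import numpy as np

# Assumption: the "convenient" j of part (b) is the transpose map
# j(M) = M^T on n x n real matrices, so that j(j(M)) = M.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))

sym = (M + M.T) / 2    # candidate element of S: fixed by j (symmetric)
skew = (M - M.T) / 2   # candidate element of A: negated by j (skew-symmetric)

assert np.allclose(sym, sym.T)      # j(sym) = sym
assert np.allclose(skew, -skew.T)   # j(skew) = -skew
assert np.allclose(M, sym + skew)   # M = sym + skew, as in part (a)
print("M = symmetric part + skew-symmetric part")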
 
