Proof regarding transpose mapping

In summary, the task is to prove that if u is not in the image of a linear mapping T from a vector space V to a vector space U, then U decomposes as a direct sum of the image of T, the span of u, and a subspace L.
  • #1
Adgorn

Homework Statement


Suppose T:V→U is linear and u ∈ U. Prove that u ∈ Im T or that there exists ##\phi## ∈ V* such that ##T^t(\phi) = 0## and ##\phi(u)=1##.

Homework Equations


N/A

The Attempt at a Solution


Let ##\phi## ∈ Ker ##T^t##; then ##T^t(\phi)(v)=\phi(T(v))=0## for all ##T(v)## ∈ Im T. So obviously if u ∈ Im T then ##\phi(u)=0##. I now need to prove that if u ∉ Im T, then there exists a linear functional satisfying the above criteria, and this is where I'm stuck. I don't know which mapping to define that would satisfy the criteria.

Any help would be appreciated,
 
  • #2
There seems to be something wrong with how this problem is written. If ##\phi\in V^*## then the domain of ##\phi## is ##V##, which, from the problem specification, does not necessarily have any intersection with ##U##, so ##\phi(u)## is undefined.

There must be some missing information, or some implicit assumptions, which need to be brought out into the open.
 
  • #3
andrewkirk said:
There seems to be something wrong with how this problem is written. If ##\phi\in V^*## then the domain of ##\phi## is ##V##, which, from the problem specification, does not necessarily have any intersection with ##U##, so ##\phi(u)## is undefined.

There must be some missing information, or some implicit assumptions, which need to be brought out into the open.
Ah, yes, I forgot to mention that. I just assumed it meant ##\phi \in U^*## and not ##V^*##. Damn book's full of mistakes.
 
  • #4
Some more info would be helpful. What are V and U? Vector spaces, or modules? If vector spaces, are they finite dimensional? Do they both have inner products? What about norms?

Also, what, exactly, is ##T^T## (which sometimes is written above as ##T^t##)? It looks like it's supposed to be some sort of inverse pushforward.
 
  • #5
andrewkirk said:
Some more info would be helpful. What are V and U? Vector spaces, or modules? If vector spaces, are they finite dimensional? Do they both have inner products? What about norms?

Also, what, exactly, is ##T^T## (which sometimes is written above as ##T^t##)? It looks like it's supposed to be some sort of inverse pushforward.
V and U are vector spaces; the dimension is not specified (I quoted the question word for word), which means I cannot use dimension arguments, since the spaces may be finite- or infinite-dimensional. Inner products are not mentioned and are probably irrelevant here, as the chapter this question comes from focuses mostly on linear functionals; the same goes for norms.

##T^t:U^*\to V^*## is the transpose mapping of T. If ##\phi## is a linear functional in U*, then ##T^t(\phi)=\phi \circ T##, thus ##T^t(\phi)(v)=\phi(T(v))##.
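In finite dimensions the transpose mapping corresponds to the matrix transpose: if T is represented by a matrix A, then ##T^t(\phi)## is represented by ##A^T## acting on the coordinate vector of ##\phi##. A quick numerical sanity check (the matrices and vectors here are my own illustration, not from the book):

```python
import numpy as np

# Represent T: R^2 -> R^3 by a 3x2 matrix A, a functional phi in (R^3)*
# by a vector, and T^t(phi) = phi o T by A.T @ phi.
A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, -1.0]])
phi = np.array([2.0, -1.0, 0.5])  # phi in (R^3)*
v = np.array([4.0, 7.0])          # v in R^2

lhs = (A.T @ phi) @ v   # T^t(phi)(v)
rhs = phi @ (A @ v)     # phi(T(v))
assert np.isclose(lhs, rhs)
```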
 
  • #7
Adgorn said:
Suppose T:V→U is linear and u ∈ U. Prove that u ∈ Im T or...

This is the same as proving that if ##u\notin \mathrm{Im}\,T## then...
Prove that the space ##U## is decomposed as follows ##U=\mathrm{Im}\,T\oplus\mathrm{span}\,\{u\}\oplus L.##
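The hint can be unpacked as follows (my own sketch, not from the thread; the basis extension invokes the axiom of choice in the infinite-dimensional case):

```latex
% Sketch: constructing \phi when u \notin \mathrm{Im}\,T.
% Since u \notin \mathrm{Im}\,T, we have \mathrm{Im}\,T \cap \mathrm{span}\{u\} = \{0\}.
% Extend a basis of \mathrm{Im}\,T \oplus \mathrm{span}\{u\} to a basis of U;
% let L be the span of the added vectors, so
U = \mathrm{Im}\,T \oplus \mathrm{span}\,\{u\} \oplus L .
% Define \phi \in U^* on this decomposition by
\phi(w + \alpha u + \ell) = \alpha \qquad (w \in \mathrm{Im}\,T,\ \ell \in L).
% Then \phi vanishes on \mathrm{Im}\,T, hence
T^t(\phi) = \phi \circ T = 0, \qquad \phi(u) = 1 .
```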
 

1. What is the definition of transpose mapping?

The transpose is the operation that interchanges the rows and columns of a matrix or vector: entry (i, j) of the transpose equals entry (j, i) of the original.
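As a concrete illustration (a minimal NumPy sketch, not part of the original answer):

```python
import numpy as np

# Transpose flips rows and columns: entry (i, j) of A.T is entry (j, i) of A.
A = np.array([[1, 2, 3],
              [4, 5, 6]])   # shape (2, 3)
At = A.T                    # shape (3, 2)
assert At.shape == (3, 2)
assert At[1, 0] == A[0, 1]
```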

2. How is the transpose mapping denoted?

The transpose mapping is denoted by a superscript "T" after the matrix or vector, as in ##A^T##. Alternatively, it can also be denoted ##A'##.

3. What are the properties of the transpose mapping?

The transpose mapping has several properties, including:

  • ##(A^T)^T = A## (the transpose of a transpose is the original matrix)
  • ##(A + B)^T = A^T + B^T## (the transpose of a sum is the sum of the transposes)
  • ##(kA)^T = kA^T## (the transpose of a scalar multiple is the scalar multiple of the transpose)
  • ##(AB)^T = B^TA^T## (the transpose of a product is the product of the transposes in reverse order)
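These four identities can be checked numerically; a short NumPy sketch (the random matrices are my own choice):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 2))
k = 2.5

assert np.allclose(A.T.T, A)                # (A^T)^T = A
assert np.allclose((A + B).T, A.T + B.T)    # (A + B)^T = A^T + B^T
assert np.allclose((k * A).T, k * A.T)      # (kA)^T = k A^T
assert np.allclose((A @ C).T, C.T @ A.T)    # (AC)^T = C^T A^T
```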

4. How is the transpose mapping used in linear algebra?

The transpose mapping is used in linear algebra to define concepts such as orthogonal matrices (##Q^TQ = I##), symmetric matrices (##A^T = A##), and skew-symmetric matrices (##A^T = -A##), and it appears in computations such as forming normal equations for least-squares problems and expressing inner products.

5. What is the relationship between transpose mapping and the dot product?

The transpose mapping is closely related to the dot product of two vectors. In fact, for column vectors the dot product can be written as the transpose of one vector multiplied by the other: ##u \cdot v = u^T v##. This relationship is important in many applications, such as calculating projections and determining the angle between two vectors.
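A small NumPy check of this identity for column vectors (the example vectors are my own):

```python
import numpy as np

u = np.array([[1.0], [2.0], [3.0]])   # column vector, shape (3, 1)
v = np.array([[4.0], [0.0], [-1.0]])  # column vector, shape (3, 1)

# u^T v is a 1x1 matrix; its single entry is the dot product.
dot = (u.T @ v).item()
assert dot == np.dot(u.ravel(), v.ravel())
assert dot == 1.0
```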
