Linear Operators: False for Non-Finite Dimensional Vector Spaces


Discussion Overview

The discussion revolves around the properties of linear operators in finite versus infinite dimensional vector spaces, specifically addressing the conditions under which an operator can be invertible. Participants explore examples and counterexamples to illustrate these concepts.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant states that if T is a linear operator on a finite dimensional vector space V and TU = I, then T is invertible with U as its inverse.
  • Another participant suggests that to find a counterexample in infinite dimensional spaces, it suffices to find a surjective but not injective operator, citing the backwards shift on the space of square summable sequences, ℓ².
  • A different participant expresses unfamiliarity with ℓ² and asks for examples in more familiar spaces, such as the vector space of real numbers over rationals or polynomials over any field.
  • One participant proposes a specific example using the space of polynomials, defining a linear operator T that is surjective but not injective.
  • Another participant adds to the previous example by defining a second operator U, demonstrating that TU = I while UT ≠ I.
  • One participant connects the discussion to a general method of finding injective but not surjective functions, relating it to mappings in other mathematical contexts.
  • A new question is introduced regarding the relationship between a linear operator T and the subspace W spanned by the column vectors of its matrix representation, discussing implications of linear independence and dependence.
  • Another participant clarifies that if the columns of the matrix are not independent, T does not have an inverse, and thus W cannot span the entire space F^n.

Areas of Agreement / Disagreement

Participants express differing views on the implications of linear independence and the nature of the examples provided. While some examples are agreed upon, the overall discussion remains unresolved regarding the broader implications of these properties in infinite dimensional spaces.

Contextual Notes

Participants note the importance of definitions and the specific contexts of the spaces being discussed, highlighting that results may vary based on dimensionality and the nature of the operators involved.

sihag
Let T be a linear operator on a finite dimensional vector space V, over the field F.
Suppose TU = I, where U is another linear operator on V, and I is the Identity operator.
It can of course be shown that T is invertible and that the inverse of T is nothing but U itself.
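For reference, the standard finite dimensional argument can be sketched as follows (this is the usual rank–nullity proof, not spelled out in the thread):

```latex
TU = I \;\Rightarrow\; v = T(Uv) \ \text{for all } v \in V \;\Rightarrow\; T \ \text{is surjective}.
\dim V = \dim\ker T + \dim\operatorname{im} T \quad \text{(rank--nullity)},
\text{so } \dim\operatorname{im} T = \dim V \;\Rightarrow\; \ker T = \{0\} \;\Rightarrow\; T \ \text{is injective, hence invertible}.
T^{-1}(TU) = T^{-1}I \;\Rightarrow\; U = T^{-1}.
```

The finiteness of dim V is used exactly once: to conclude injectivity from surjectivity. That is the step that fails in infinite dimensions.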

What I want is an explicit example showing that the above is false if V is not finite dimensional.

Thank You.
 
It suffices to find an operator T that is surjective but not injective.

A standard example of this phenomenon is the backwards shift on [itex]\ell^2[/itex], the space of square summable sequences: T(x_1, x_2, ...) = (x_2, x_3, ...). I'll let you verify this.
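The defining properties are easy to check concretely. Below is a small sketch (not from the thread) using finite tuples as stand-ins for the leading entries of sequences in [itex]\ell^2[/itex], with U denoting the forward shift:

```python
# Sketch: the backward shift T and forward shift U, acting on finite
# truncations of sequences (a stand-in for genuine elements of l^2).

def T(x):
    """Backward shift: T(x1, x2, x3, ...) = (x2, x3, ...)."""
    return x[1:]

def U(x):
    """Forward shift: U(x1, x2, ...) = (0, x1, x2, ...)."""
    return (0,) + x

x = (1, 2, 3, 4)

# TU = I: shifting forward then backward recovers x.
print(T(U(x)) == x)                    # True

# UT != I: the first coordinate is lost.
print(U(T(x)))                         # (0, 2, 3, 4), not x

# T is not injective: two distinct inputs with the same image.
print(T((1, 2, 3)) == T((5, 2, 3)))    # True
```

Surjectivity of T is exactly the statement that U is a right inverse; injectivity fails because the first coordinate is discarded.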
 
This space is new to me!
Could you work me through this example?
Can't we show something on the vector space of the reals R over the rationals Q, or maybe even on the vector space of all polynomials over any field?

Thank you.
 
We don't actually need the space to be [itex]\ell^2[/itex]; that's just the space this example usually comes up in.

Let's reduce it to a more familiar space: F[x], the space of polynomials over F. Define [itex]T : F[x] \to F[x][/itex] by [itex]T(a_0 + a_1x + \cdots + a_nx^n) = a_1 + a_2x + \cdots + a_nx^{n-1}[/itex]. I'll let you verify that this is linear and surjective, but not injective.
 
To add to what morphism just said, let [itex]U(a_0 + a_1x + \cdots + a_nx^n) = a_0x + a_1x^2 + \cdots + a_nx^{n+1}[/itex] (that is, U is multiplication by x), and it should be clear that TU = I but UT is not equal to I!
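As a quick sanity check, here is a sketch with polynomials represented as coefficient lists [a0, a1, ..., an] (a convention introduced here, not in the thread):

```python
# Sketch: polynomials over F as coefficient lists; [] is the zero polynomial.

def T(p):
    """T(a0 + a1 x + ... + an x^n) = a1 + a2 x + ... + an x^(n-1): drop a0."""
    return p[1:]

def U(p):
    """U(p) = x * p: shift every coefficient up one degree."""
    return [0] + p

p = [5, 1, 2]          # 5 + x + 2x^2

print(T(U(p)) == p)    # True: TU = I
print(U(T(p)))         # [0, 1, 2], i.e. x + 2x^2, not p: UT != I
```

The constant term a0 is exactly the information that T destroys and that U cannot restore, which is why U is only a right inverse.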
 
You've (hopefully) already seen this example, in another context. The construction the others have described precisely corresponds to finding a (set) function from the set of natural numbers to itself that is injective but not surjective.

This is actually a quite general method useful in many contexts -- all you have to do is find a way to relate natural numbers to the kind of structure you're studying.
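To make the correspondence concrete, here is a sketch (again using the coefficient-list convention, which is an assumption of this illustration) showing that multiplication by x is the successor map n ↦ n + 1 read through the monomial basis x^n:

```python
# Sketch: the successor map on the naturals, transported to F[x] via the
# basis of monomials x^n. succ is injective but not surjective (nothing
# maps to 0), and U realizes the same map on basis vectors: U(x^n) = x^(n+1).

def succ(n):
    return n + 1

def U(p):
    """Multiplication by x on a coefficient list."""
    return [0] + p

def monomial(n):
    """Coefficient list of x^n: a single 1 in position n."""
    return [0] * n + [1]

# The two pictures agree on every basis vector checked.
print(all(U(monomial(n)) == monomial(succ(n)) for n in range(5)))  # True
```

Any injective-but-not-surjective self-map of the naturals yields, by the same transport, a one-sidedly invertible operator on F[x].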
 
hehe
Finally got it!
Yes, I've used similar mappings in ring theory!
 
Well I've come across another question.
It's from Hoffman and Kunze's text on linear algebra.
Let T be a linear operator on F^n, let A be the matrix of T in the standard ordered basis for F^n, and let W be the subspace of F^n spanned by the column vectors of A. What does W have to do with T?

W is simply the range of T, right? In case the columns of A are linearly independent, they form a basis for F^n and so W would be F^n itself.
In this case the transformation T is necessarily injective, since it preserves linear independence.
In case the columns of A are linearly dependent, W might or might not span F^n. Either way, the columns of A would not form a basis.

Is there something I've missed here ?
 
If the columns are not independent, then T does not have an inverse and its range is not all of F^n. Since W is the range of T, it cannot span F^n. The "might" part of "might or might not" is incorrect.
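A minimal numerical sketch of this point over F = R (a hypothetical 2×2 example, not from the text):

```python
# Sketch: dependent columns => T is not invertible, and the column space W
# is a proper subspace of F^2.

A = [[1, 2],
     [2, 4]]           # second column = 2 * (first column)

# A 2x2 matrix is invertible iff its determinant is nonzero.
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
print(det)             # 0: T has no inverse

def apply(A, x):
    """Matrix-vector product: T(x) = Ax."""
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

# Every image vector is a multiple of (1, 2): W is the line y2 = 2*y1,
# so e.g. (1, 0) is not in W and W != F^2.
y = apply(A, [3, -1])
print(y[1] == 2 * y[0])    # True
```

The same determinant-zero criterion is what "columns not independent" means computationally: the rank of A, hence the dimension of W, drops below n.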
 
