Can Linear Transformations Occur Between Infinite and Finite Dimensions?

ajayguhan
I know that every linear transformation from R^n to R^m can be represented in matrix form.


What about a transformation from an

1. infinite-dimensional space to an infinite-dimensional space
2. finite-dimensional space to an infinite-dimensional space
3. infinite-dimensional space to a finite-dimensional space

Can they be represented in matrix form?

And a prior question: do linear transformations even exist from infinite to infinite dimensions, from finite to infinite dimensions, and vice versa?


Any help appreciated.

Thanks in advance.
 
Algebraic version:

If your definition of a linear transformation is just a linear map between two real vector spaces, T:V\to W, then it depends on how you define a matrix.
- One definition of a matrix is as an element of \mathbb R^{I\times J}, where I,J are arbitrary sets. Fix M=(m_{i,j})_{i\in I, j\in J}\in \mathbb R^{I\times J}... For any i\in I, we can think of the row i of M as (m_{i,j})_{j\in J}. For any j\in J, we can think of the column j of M as (m_{i,j})_{i\in I}.
- If this is how you define a matrix, then there's a sensible way of representing T as a matrix. Fix a basis \{v_j\}_{j\in J}\subseteq V and a basis \{w_i\}_{i\in I}\subseteq W. Here, a basis for the vector space means a subset such that every element of the vector space can be represented as a unique linear combination of finitely many elements of the subset. (Sometimes, this is called a Hamel basis.)
- Then define the matrix M\in \mathbb R^{I\times J} (in which every column has finitely many non-zero entries) by letting column j\in J satisfy T(v_j)=\sum_{i\in I} m_{i,j} w_i. (In this sum, just don't count entries with m_{i,j}=0, and then it's a finite sum). By definition of a basis, there's a unique way to do this (once the basis has been fixed).

- By doing the same thing in reverse, fixing bases (v_j)_{j\in J} and (w_i)_{i\in I}, every M\in \mathbb R^{I\times J} with each column having finitely many nonzero entries will then induce a unique linear transformation T:V\to W.
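A minimal finite-dimensional sketch of this column construction in Python (the map T and the bases are my own hypothetical example, with the standard bases playing the roles of \{v_j\} and \{w_i\}): column j of M holds the coordinates of T(v_j) in the basis of W.

```python
import numpy as np

# Hypothetical example: T : R^2 -> R^3, T(x, y) = (x + y, 2x, 3y).
def T(v):
    x, y = v
    return np.array([x + y, 2 * x, 3 * y], dtype=float)

# Standard basis {v_1, v_2} for V = R^2; we use the standard basis
# of W = R^3, so T(v_j) already is its own coordinate vector.
basis_V = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

# Column j of M is the coordinate vector of T(v_j) in the basis of W.
M = np.column_stack([T(v) for v in basis_V])

v = np.array([2.0, -1.0])
assert np.allclose(M @ v, T(v))  # the matrix acts the same as T
```

The infinite-dimensional case is the same construction, except that I and J may be infinite index sets and each column is only required to have finitely many nonzero entries.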
 
Analytic:

If you have a notion of convergence, and you want to allow for infinite sums, things become subtler, and matrices become a little more useful. [The construction in the above post is some form of matrix representation, but it's not really useful in any way.]

One example: There's a very nice way of using matrices (with countably many rows/columns) to describe continuous linear transformations between separable Hilbert spaces. E.g. between l^2 and l^2, or between \mathbb R^n and l^2. This is a totally different construction than the one I gave above, and it illuminates some senses in which l^2 behaves very similarly to \mathbb R^n. That's why it's many people's favourite example of an infinite-dimensional vector space.
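One way to see this concretely is to truncate the countably infinite matrix of a bounded operator on l^2 to an N x N block. The particular operator below (a diagonal operator with entries 1/(i+1)) is my own hypothetical example, chosen because it is continuous on l^2 with operator norm 1:

```python
import numpy as np

# Truncate the countable matrix of a bounded operator on l^2
# to its top-left N x N block. Hypothetical example: the diagonal
# operator (A x)_i = x_i / (i + 1), which has operator norm 1.
N = 1000
A = np.diag(1.0 / (np.arange(N) + 1.0))

# A square-summable sequence, truncated: x_i = 1 / (i + 1).
x = 1.0 / (np.arange(N) + 1.0)

y = A @ x
assert np.allclose(y, x ** 2)  # (A x)_i = 1 / (i + 1)^2
```

For a continuous operator, these finite truncations converge (in the appropriate sense) to the action of the full operator, which is the sense in which l^2 behaves like a limit of the spaces \mathbb R^n.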
 
So we can represent a linear transformation from (1) infinite to infinite dimension, (2) finite to infinite dimension, or (3) infinite to finite dimension in terms of a matrix if a notion of convergence (Cauchy sequences) exists in the infinite-dimensional space; if not, we can't represent it in matrix form. Correct?
 
Transformations from and to infinite-dimensional spaces are typically represented as integrals of the form \int K(x, t)f(t)\,dt, where "f(t)" gives, for each t, the components of the "vector".
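Numerically, discretizing such an integral operator turns the kernel K into an ordinary matrix and the integral into a matrix-vector product. The kernel K(x, t) = x t and the test function below are my own hypothetical example; the Riemann-sum discretization is one simple choice among many:

```python
import numpy as np

# Discretize (Tf)(x) = ∫_0^1 K(x, t) f(t) dt on a grid of n points:
# the kernel becomes an n x n matrix, the integral a matrix product.
# Hypothetical kernel: K(x, t) = x * t.
n = 200
t = np.linspace(0.0, 1.0, n)
dt = t[1] - t[0]

K = np.outer(t, t)   # K[i, j] = x_i * t_j
f = t ** 2           # sample f(t) = t^2 on the grid

Tf = K @ f * dt      # Riemann-sum approximation of the integral

# Exact answer: (Tf)(x) = x * ∫_0^1 t^3 dt = x / 4.
assert np.allclose(Tf, t / 4.0, atol=1e-2)
```

Refining the grid shrinks the error, which is the sense in which such integral operators are "infinite matrices" with a continuous index.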

(Infinite dimensional vector spaces are typically not dealt with in "Linear Algebra", which is often defined as "the theory of finite dimensional vector spaces", but in "Functional Analysis".)
 
Can we transform an infinite-dimensional space to a finite-dimensional one, and vice versa?
 