Vector Transformation in \mathbb{R}^n and \mathbb{R}^m with Separable Components

nille40
Hi! I'm in serious need of some help.

I am supposed to show that a transformation \mathcal{A}: \mathbb{R}^n \rightarrow \mathbb{R}^m can be factored as \mathcal{A} = i \circ \mathcal{B} \circ p, where

  • p is the projection onto the (orthogonal) complement of the kernel of \mathcal{A};

  • \mathcal{B} is an invertible transformation from the complement of the kernel to the image of \mathcal{A};

  • i is the inclusion of the image in \mathbb{R}^m.

I hardly know where to start! I would really like some help. I asked this question before, in a different topic, but got a response I didn't understand. I posted a follow-up, but got no response on that.

Thanks in advance,
Nille
 
Let K be the kernel of \mathcal{A}. Then \mathbb{R}^n is K direct sum K*, where we'll use * to denote the complementary vector space.

Let p be the map with p(k) = 0 if k is in K, and p(x) = x for x in K*, extended linearly. This means that any vector in \mathbb{R}^n can be written as x + k for x in K* and k in K, and then

p(x + k) = x.


This is your projection.

Notice that for all v in \mathbb{R}^n we have \mathcal{A}(p(v)) = \mathcal{A}(v), since \mathcal{A} vanishes on K.

The inclusion is the dual construction:

Let I be the image of \mathcal{A}. This is a subspace of \mathbb{R}^m. Pick a complementary subspace I*, so that \mathbb{R}^m = I direct sum I*.

Then there is a natural map from I to I direct sum I*, just the inclusion of the vector; call this map i.

Finally, let \mathcal{B} be the restriction of \mathcal{A} to K*; it is invertible as a map from K* onto I, and the map i \circ \mathcal{B} \circ p is the same as \mathcal{A}.


This is just the first isomorphism theorem glued together with a choice of complements.
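As a concrete sanity check, here is a minimal sketch of the factorization in plain Python, using a hypothetical map A(x, y, z) = (x, y) chosen so that the kernel and its complement are easy to read off:

```python
# Sketch of A = i ∘ B ∘ p for the hypothetical map A(x, y, z) = (x, y)
# from R^3 to R^2.
# ker A = span{(0, 0, 1)}; its orthogonal complement K* = span{(1,0,0), (0,1,0)}.

def A(v):
    """The original map R^3 -> R^2."""
    x, y, z = v
    return (x, y)

def p(v):
    """Projection of R^3 onto K*, the orthogonal complement of ker A."""
    x, y, z = v
    return (x, y, 0)

def B(v):
    """Restriction of A to K*; invertible onto the image (here all of R^2)."""
    x, y, z = v  # z is always 0 on K*
    return (x, y)

def B_inv(w):
    """Inverse of B, mapping the image back into K*."""
    x, y = w
    return (x, y, 0)

def i(w):
    """Inclusion of the image of A into R^2 (here the image is all of R^2)."""
    return w

v = (3, -1, 7)
assert i(B(p(v))) == A(v)      # the factorization A = i ∘ B ∘ p
assert B_inv(B(p(v))) == p(v)  # B is invertible on the complement
```

The point is only to see the three pieces separately; for a general matrix you would compute an orthonormal basis of the null space to build p.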
 



Hi Nille,

I'd be happy to help you with this problem. Let's break it down step by step.

First, let's define our transformation \mathcal{A}: \mathbb{R}^n \rightarrow \mathbb{R}^m. This means that \mathcal{A} takes in a vector in \mathbb{R}^n and outputs a vector in \mathbb{R}^m. So we can represent \mathcal{A} as a matrix A with m rows and n columns.

Next, let's define the kernel of \mathcal{A}. The kernel of a transformation is the set of all vectors that get mapped to the zero vector in the output space. In other words, it's the set of all x \in \mathbb{R}^n such that \mathcal{A}(x) = 0.

Now, let's define the complement of the kernel. Note that this is not simply the set of vectors outside the kernel: it is a subspace of \mathbb{R}^n that intersects the kernel only at 0 and, together with the kernel, spans all of \mathbb{R}^n. The standard choice is the orthogonal complement: the set of all x \in \mathbb{R}^n that are orthogonal to every vector in the kernel of \mathcal{A}.

The projection on the complement of the kernel of \mathcal{A} is a transformation p that takes in a vector x \in \mathbb{R}^n and outputs a vector p(x) \in \mathbb{R}^n, where p(x) is the orthogonal projection of x onto the complement of the kernel: the closest vector to x that lies in that complement. We can represent this as a matrix P with n rows and n columns; P is idempotent (P^2 = P) and sends every kernel vector to 0.
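To make the projection concrete, here is a small sketch in plain Python, using a hypothetical 2×2 matrix A = [[1, 1], [2, 2]] whose kernel is the line spanned by (1, -1), so the orthogonal complement is the line spanned by (1, 1):

```python
# A = [[1, 1], [2, 2]] maps R^2 -> R^2; ker A = span{(1, -1)}.
# Orthogonal projection onto span{(1, 1)} has matrix P = [[0.5, 0.5], [0.5, 0.5]].

def matmul(M, N):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(M[r][k] * N[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def apply(M, v):
    """Apply a 2x2 matrix to a vector (pair)."""
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

A = [[1, 1], [2, 2]]
P = [[0.5, 0.5], [0.5, 0.5]]

assert matmul(P, P) == P                # P is idempotent: P^2 = P
assert apply(P, (1, -1)) == (0.0, 0.0)  # P kills the kernel direction
assert matmul(A, P) == A                # A∘P = A: A ignores the kernel component
```

The last assertion is the key property: composing A with the projection changes nothing, because A already sends the kernel component to zero.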

Now, let's define \mathcal{B}. This is the restriction of \mathcal{A} to the complement of the kernel: it takes in a vector x in the complement and outputs \mathcal{B}(x) = \mathcal{A}(x) \in \mathbb{R}^m. Because the complement contains no nonzero kernel vectors, \mathcal{B} is injective, and its image is exactly the image of \mathcal{A}, so \mathcal{B} is invertible as a map from the complement onto the image. Finally, i is the inclusion of the image of \mathcal{A} into \mathbb{R}^m, and putting the pieces together gives i(\mathcal{B}(p(x))) = \mathcal{A}(x) for every x \in \mathbb{R}^n.
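Here is a sketch of why the restriction \mathcal{B} is invertible, in plain Python with a hypothetical example (A = [[1, 1], [2, 2]], kernel spanned by (1, -1), complement spanned by (1, 1)):

```python
# On the complement span{(1, 1)}, write vectors as t * (1, 1).
# A sends (t, t) to (2t, 4t) = 2t * (1, 2), so in the bases
# u = (1, 1) for the complement and w = (1, 2) for the image,
# B is just multiplication by 2 -- clearly invertible.

def B(t):
    """B in coordinates: coefficient of u = (1, 1) -> coefficient of w = (1, 2)."""
    return 2 * t

def B_inv(s):
    """Inverse of B in the same coordinates."""
    return s / 2

# Round trip: B_inv undoes B for every coordinate t.
for t in (-3, 0, 1, 2.5):
    assert B_inv(B(t)) == t

# Consistency with A itself: A(t*u) equals B(t)*w.
t = 4
v = (t * 1, t * 1)                       # a vector in the complement
Av = (v[0] + v[1], 2 * v[0] + 2 * v[1])  # apply A = [[1, 1], [2, 2]]
assert Av == (B(t) * 1, B(t) * 2)        # matches B(t) * w with w = (1, 2)
```

In general, once you fix bases for the complement and the image, \mathcal{B} becomes a square matrix of full rank, which is exactly what invertibility means.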

 