Bases of vector spaces and change of basis

"Don't panic!"
Hi all,

Just doing a bit of personal study on vector spaces and wanted to clear up my understanding of the following. This is my description of what I'm trying to understand; is it along the right lines? (Apologies in advance: I am a physicist, not a pure mathematician, so there are most probably errors in the mathematical formalism I've used below.)

Given an ##n##-dimensional vector space ##V## and an ordered basis ##E = \left[\mathbf{v}_i\right]_{i=1,2,\ldots,n}##, one can represent any vector ##\mathbf{v} \in V## with respect to this basis as a linear combination
$$\mathbf{v} = \sum_{i=1}^n a_i \mathbf{v}_i,$$
where the ##a_i## are the components of ##\mathbf{v}## with respect to the basis vectors ##\mathbf{v}_i##.

We can define an isomorphism between ##V## and ##\mathbb{R}^n##, ##f : V \to \mathbb{R}^n##, such that, with respect to the basis ##E##,
$$\sum_{i=1}^n a_i \mathbf{v}_i \;\longmapsto\; \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}.$$
Denoting the image of ##\mathbf{v}## under this isomorphism by
$$\left[\mathbf{v}\right]_E = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix},$$
we observe that
$$\left[\mathbf{v} + \mathbf{w}\right]_E = \left[\mathbf{v}\right]_E + \left[\mathbf{w}\right]_E \quad \forall\, \mathbf{v}, \mathbf{w} \in V$$
and
$$\left[c\mathbf{v}\right]_E = c\left[\mathbf{v}\right]_E \quad \forall\, \mathbf{v} \in V \ \text{and} \ c \in F,$$
where ##F## is the underlying scalar field of ##V##.

Given this, we have
$$\left[\mathbf{v}\right]_E = \left[\sum_{i=1}^n a_i \mathbf{v}_i\right]_E = \sum_{i=1}^n a_i \left[\mathbf{v}_i\right]_E = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}.$$
Since this holds for arbitrary components ##a_i## (equivalently, since ##\mathbf{v}_i = \sum_j \delta_{ij}\mathbf{v}_j##), it follows that
$$\left[\mathbf{v}_i\right]_E = \begin{pmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{pmatrix},$$
where the only non-zero component is the ##i##-th, with a value of unity. Hence, using such an isomorphism, one can make the basis vectors of a given ##n##-dimensional ordered basis resemble the standard basis of ##\mathbb{R}^n##. This is true for all such ordered bases of a given vector space ##V##.
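To sanity-check the above I put together a quick NumPy sketch (my own toy example: the particular basis is arbitrary, and the coordinate map is computed by solving ##B\mathbf{a} = \mathbf{v}## with the basis vectors as the columns of ##B##):

```python
import numpy as np

# An arbitrary (nonstandard) ordered basis E = (v1, v2, v3) of R^3,
# stored as the columns of B; any invertible matrix would do.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

def coords(v, B):
    """Return [v]_E, the coordinates of v in the basis given by the
    columns of B, obtained by solving B a = v for a."""
    return np.linalg.solve(B, v)

v = np.array([2.0, 3.0, 5.0])
a = coords(v, B)
print(np.allclose(B @ a, v))   # True: v = sum_i a_i v_i

# Each basis vector is mapped to the corresponding standard basis vector:
for i in range(3):
    print(coords(B[:, i], B))  # prints e_i (up to floating-point rounding)
```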

I am currently working my way through Nadir Jeevanjee's book "An Introduction to Tensors & Group Theory for Physicists", in which he mentions that "given an ordered basis, it is always possible to represent the basis vectors, in the basis that they define, such that they resemble the standard basis" (or words to that effect), and I'm just trying to make sense of the notion. (Please ignore the "change of basis" part of the title. I initially got a little overenthusiastic and subsequently realised that discussing both in one thread might be a little too long-winded.)
 
All of that looks good to me. I guess I could nitpick some unusual notation. You denoted the ordered basis by ##[\mathbf v_i]_{i=1,\dots,n}##. I think ##(\mathbf v_i)_{i=1}^n## or ##\langle \mathbf v_i\rangle_{i=1}^n## is more appropriate, since (ordered) n-tuples are usually written as ##(x_1,\dots,x_n)## or ##\langle x_1,\dots,x_n\rangle##.

You may be interested in the post I wrote about matrix representations of linear operators: https://www.physicsforums.com/showthread.php?t=694922. Your post explains how to use an ordered basis to represent vectors as matrices. My post explains how to use two ordered bases to represent linear transformations as matrices.
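In case a concrete calculation helps, here is a rough numerical sketch of that idea (my own toy bases and map, not taken from that thread): with the basis vectors of ##E## and ##F## as columns of matrices, the matrix of ##T## relative to ##E## (domain) and ##F## (codomain) has ##[T\mathbf{v}_j]_F## as its ##j##-th column, i.e. it is ##F^{-1}TE##.

```python
import numpy as np

# Ordered bases E (domain) and F (codomain), basis vectors as columns.
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])
F = np.array([[2.0, 0.0],
              [0.0, 3.0]])

# A linear map T : R^2 -> R^2, written in the standard basis (a rotation).
T = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# Matrix of T relative to E and F: column j is [T v_j]_F, i.e. F^{-1} T E.
T_FE = np.linalg.solve(F, T @ E)

# Check the defining property: [T v]_F = T_FE [v]_E for any v.
v = np.array([4.0, 7.0])
print(np.allclose(np.linalg.solve(F, T @ v),
                  T_FE @ np.linalg.solve(E, v)))  # True
```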
 
Thanks Fredrik, really appreciate you having a look over it and the advice. Cheers for the link, very informative :)
 