Vector Space Basis: Standard or Odd?

In short: does every vector space have a "standard" basis in the sense in which it is usually defined, i.e., the set {(0,1), (1,0)} for R^2? Another example is the standard basis for P_3, which is the set {1, t, t^2}. But for more abstract or unusual vector spaces, such as the space of linear transformations (automorphisms?), what would the standard basis be?
 
No, not every vector space has a "standard" basis, because there are many vector spaces no one has ever looked at! There are, after all, infinitely many vector spaces! A "standard" basis is simply a basis that has been declared "standard".
 
This makes sense. But a related question: what, then, would be a basis for the space of linear transformations from R^2 to R^2? Any 2x2 matrix? Or perhaps four arbitrary 2x2 matrices?
 
The space of linear transformations L(U,V), where U and V are finite-dimensional linear spaces of dimensions m and n respectively, is itself a linear space, of dimension mn; its "standard" basis is the set of matrices ##E_{kl}##, defined by:

$$[E_{kl}]_{ij} = \delta_{ki}\,\delta_{lj}$$

These bases are called "standard" because they are built using only the unit (1) of the scalar field; therefore, given a representation of a vector relative to this basis, its coordinates are, in a sense, immediate.

Regarding the general question: every vector space, finite- or infinite-dimensional, does indeed have a basis of this type, called a Hamel basis, and vector spaces over the same field with the same dimension are isomorphic, so they are completely classified by their dimension. Of course, in infinite-dimensional spaces a Hamel basis generally cannot be exhibited explicitly (its existence relies on the axiom of choice); in finite dimensions it coincides with the usual canonical (or "standard") basis.
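
As a concrete illustration (my addition, not from the thread): a minimal numpy sketch that builds these ##E_{kl}## and checks that the coordinates of a matrix relative to this basis really are just its entries. The helper name `standard_basis` is invented for the example.

```python
import numpy as np

def standard_basis(n, m):
    # Builds the nm matrices E_kl with [E_kl]_ij = delta_ki * delta_lj:
    # a single 1 in row k, column l, zeros elsewhere.
    basis = {}
    for k in range(n):
        for l in range(m):
            E = np.zeros((n, m))
            E[k, l] = 1.0
            basis[(k, l)] = E
    return basis

# The coordinates of A relative to this basis are "immediate": they are
# exactly the entries A[k, l].
A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = standard_basis(2, 2)
reconstructed = sum(A[k, l] * E for (k, l), E in B.items())
assert np.allclose(A, reconstructed)
```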
 
The "standard basis" for the vector space of 2 by 2 matrices (while not every vector space has a "standard" basis, simple one like this do) consists of the four matrices
\begin{bmatrix}1 & 0 \\ 0 & 0\end{bmatrix}
\begin{bmatrix}0 & 1 \\ 0 & 0\end{bmatrix}
\begin{bmatrix}0 & 0 \\ 1 & 0\end{bmatrix}
and
\begin{bmatrix}0 & 0 \\ 0 & 1\end{bmatrix}

so that any matrix can be written as
$$\begin{bmatrix}a & b \\ c & d\end{bmatrix}= a\begin{bmatrix}1 & 0 \\ 0 & 0\end{bmatrix}+ b\begin{bmatrix}0 & 1 \\ 0 & 0\end{bmatrix}+ c\begin{bmatrix}0 & 0 \\ 1 & 0\end{bmatrix}+ d\begin{bmatrix}0 & 0 \\ 0 & 1\end{bmatrix}.$$
 