Do column 'vectors' need a basis?

  • Context: High School
  • Thread starter: etotheipi
  • Tags: Basis, Column Vectors

Discussion Overview

The discussion revolves around the nature of column vectors in relation to their representation in different bases within vector spaces. Participants explore whether column vectors can exist independently of a defined basis and how transformations between bases affect their representation. The scope includes theoretical considerations and conceptual clarifications regarding vector representation and basis transformations.

Discussion Character

  • Exploratory, Technical explanation, Conceptual clarification, Debate/contested

Main Points Raised

  • Some participants argue that column vectors can represent different forms of the same vector depending on the basis used, suggesting that the entries in a column vector are inherently tied to the underlying basis.
  • Others contend that the transformation of vectors between bases indicates that the column structures themselves do not represent the vector directly, but are instead tuples of numbers with no apparent basis unless one is specified.
  • A participant provides examples illustrating that different representations of a vector in different bases yield different coordinate values, yet they still represent the same geometric entity.
  • There is a discussion about the notation used to express the equality of vectors in different bases, with some participants questioning the implications of the equality sign in this context.
  • Some participants express confusion about the specific structure of the transformation equations and whether the column vectors can be equated to the original vector.

Areas of Agreement / Disagreement

Participants do not reach a consensus. There are competing views on whether column vectors can exist without a defined basis and how transformations affect their representation. The discussion remains unresolved regarding the implications of these differing perspectives.

Contextual Notes

Limitations include the dependence on definitions of vector representation and the ambiguity in notation used for expressing equality between vectors in different bases. The discussion highlights the need for clarity in the context of transformations and representations.

etotheipi
Consider the transformation of the components of a vector ##\vec{v}## from an orthonormal coordinate system with a basis ##\{\vec{e}_1, \vec{e}_2, \vec{e}_3 \}## to another with a basis ##\{\vec{e}'_1, \vec{e}'_2, \vec{e}'_3 \}##

The transformation equation for the components of ##\vec{v}## looks something like$$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} = \begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}
$$Now since ##\vec{v} = v_i \vec{e}_i = v'_i\vec{e}'_i##, and the matrix is not the identity, the structures ##\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix}## and ##\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}## in the above expression can't be ##\vec{v}##. Instead, they just appear to be 3-tuples of numbers with no apparent basis.

Similarly, in different contexts we sometimes put vectors in such a column structure, like ##\begin{pmatrix}\vec{e}_1\\\vec{e}_2\\\vec{e}_3\end{pmatrix}## e.g. when looking at how the basis transforms.

It appears to me then that something like ##\begin{pmatrix}a\\b\\c\end{pmatrix}## can represent the raw tuple ##(a,b,c)##, or an expression with certain basis vectors like ##a\vec{e}_1 + b\vec{e}_2 + c\vec{e}_3## , depending on the context. I wondered if someone could clarify whether this is along the right lines?
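The transformation above is easy to sanity-check numerically. Here is a minimal NumPy sketch (not from the thread; the rotated primed basis and all variable names are chosen purely for illustration) showing that the two component tuples differ while reconstructing the same vector:

```python
import numpy as np

# Unprimed orthonormal basis: take the standard basis of R^3 (rows of e).
e = np.eye(3)

# Primed basis: the same basis rotated by 30 degrees about the z-axis.
theta = np.pi / 6
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
e_prime = (Rz @ e.T).T  # rows are the primed basis vectors

# Matrix of dot products L[i, j] = e'_i . e_j, exactly as in the equation.
L = np.array([[e_prime[i] @ e[j] for j in range(3)] for i in range(3)])

v = np.array([1.0, 2.0, 3.0])  # components v_i w.r.t. the unprimed basis
v_prime = L @ v                # components v'_i w.r.t. the primed basis

# The two component tuples differ, yet both reconstruct the same vector:
v_rebuilt  = sum(v[i] * e[i] for i in range(3))
v_rebuilt2 = sum(v_prime[i] * e_prime[i] for i in range(3))
assert np.allclose(v_rebuilt, v_rebuilt2)
```

The tuples `v` and `v_prime` are different lists of numbers, but ##v_i \vec{e}_i## and ##v'_i \vec{e}'_i## agree, which is exactly the point under discussion.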
 
etotheipi said:
Consider the transformation of the components of a vector ##\vec{v}## from an orthonormal coordinate system with a basis ##\{\vec{e}_1, \vec{e}_2, \vec{e}_3 \}## to another with a basis ##\{\vec{e}'_1, \vec{e}'_2, \vec{e}'_3 \}##

The transformation equation for the components of ##\vec{v}## looks something like$$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} = \begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}
$$Now since ##\vec{v} = v_i \vec{e}_i = v'_i\vec{e}'_i##, the structures ##\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix}## and ##\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}## in the above expression can't be ##\vec{v}##.
Sure they can. They are just different representations of ##\vec v## in different bases.
etotheipi said:
Instead, they just appear to be 3-tuples of numbers with no apparent basis.
No. Any time you have the coordinate representation of a vector, the entries are the coordinates with respect to the underlying basis.

For a simpler example, consider the vector ##\vec v = \begin{pmatrix}1\\2 \end{pmatrix}##. Without any other information, we would naturally assume that the coordinates are those of the standard basis for ##\mathbb R^2##, ##\hat {e_1}## and ##\hat{e_2}##.
If we instead write ##\vec v## in terms of the basis ##\{\begin{pmatrix}1\\0 \end{pmatrix}, \begin{pmatrix}1\\1 \end{pmatrix}\}##, the representation of ##\vec v## in terms of that basis would be ##\begin{pmatrix} -1\\2 \end{pmatrix}##.
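That two-dimensional example can be verified directly; a small NumPy sketch (not from the thread; the basis vectors are stored as matrix columns here):

```python
import numpy as np

# Basis matrix B: columns are the basis vectors (1, 0) and (1, 1).
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

v_standard = np.array([1.0, 2.0])  # v in the standard basis of R^2

# Coordinates of v w.r.t. the columns of B: solve B @ coords = v.
coords = np.linalg.solve(B, v_standard)
assert np.allclose(coords, [-1.0, 2.0])  # matches the (-1, 2) above

# Same vector either way: -1*(1, 0) + 2*(1, 1) = (1, 2).
assert np.allclose(B @ coords, v_standard)
```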
etotheipi said:
Similarly, in different contexts we sometimes put vectors in such a column structure, like ##\begin{pmatrix}\vec{e}_1\\\vec{e}_2\\\vec{e}_3\end{pmatrix}## e.g. when looking at how the basis transforms.

It appears to me then that something like ##\begin{pmatrix}a\\b\\c\end{pmatrix}## can represent the raw tuple ##(a,b,c)##
Again, there's really no such thing as a "raw tuple". The coordinates a, b, and c are the scalar multipliers of whatever vectors are in the basis you're using.
etotheipi said:
, or an expression with certain basis vectors like ##a\vec{e}_1 + b\vec{e}_2 + c\vec{e}_3## , depending on the context. I wondered if someone could clarify whether this is along the right lines?
 
Mark44 said:
Sure they can. They are just different representations of ##\vec v## in different bases.
I don't agree. In the expression $$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} = \begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}
$$ The matrix is not the identity, so $$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} \neq \begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}$$ It is instead ##\vec{v}## that is independent of the coordinate system, i.e. ##v'_i \vec{e}'_i = v_i \vec{e}_i = \vec{v}##.
 
etotheipi said:
I don't agree. In the expression $$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} = \begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}
$$ The matrix is not the identity, so $$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} \neq \begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}$$ It is instead ##\vec{v}## that is independent of the coordinate system, i.e. ##v'_i \vec{e}'_i = v_i \vec{e}_i = \vec{v}##.
That's not the point. Of course the matrix is not the identity matrix. Also, I'm not saying that the coordinates of a vector in one basis will be pairwise equal to the coordinates of the same vector in another basis.

In the example I gave, we have two different representations of the same vector; namely (1, 2), in the standard basis for R^2, and (-1, 2), in another basis. Obviously the coordinates are different, but they nevertheless represent a single vector.
 
Mark44 said:
That's not the point. Of course the matrix is not the identity matrix. Also, I'm not saying that the coordinates of a vector in one basis will be pairwise equal to the coordinates of the same vector in another basis.

In the example I gave, we have two different representations of the same vector; namely (1, 2), in the standard basis for R^2, and (-1, 2), in another basis. Obviously the coordinates are different, but they nevertheless represent a single vector.

Though my question is really about the specific structure $$
\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} = \begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}$$ The column structures in that expression are evidently not equal to ##\vec{v}##. So I wonder what basis they are represented in.
 
etotheipi said:
I don't agree. In the expression $$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} = \begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}
$$ The matrix is not the identity, so $$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} \neq \begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}$$ It is instead ##\vec{v}## that is independent of the coordinate system, i.e. ##v'_i \vec{e}'_i = v_i \vec{e}_i = \vec{v}##.

It depends what you mean by the ##=## sign. Some notations I use are:
$$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix}' = \begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}$$
$$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix}_{b2} = \begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}_{b1}$$
$$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} \leftrightarrow \begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}$$
All these indicate that it's the same vector represented by a different column of numbers in different bases.
 
But it is not the case that: $$
(v'_1 \vec{e}'_1 + v'_2 \vec{e}'_2 + v'_3 \vec{e}'_3) =
\begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
(v_1 \vec{e}_1 + v_2 \vec{e}_2 + v_3 \vec{e}_3)
$$ I.e. my point is that it makes no sense to write: $$
\vec{v} =
\begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\vec{v}
$$since the matrix is not the identity. So in the original expression $$
\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} = \begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}$$those column vectors must not equal ##\vec{v}##! I then wondered what the basis was for each of those column vectors... :wink:
 
Of course, there is such a thing as "raw" tuples. ##\mathbb{R}^3## is exactly the collection of formal tuples ##(a,b,c)## with ##a,b,c \in \mathbb{R}##. I don't need linear algebra, or coordinates, or bases to talk about this.

Given a vector ##(a,b,c) \in \mathbb{R}^3##, we can look at the coordinates of this vector w.r.t. a basis ##E=(e_1, e_2, e_3)##. What are these coordinates? Well, we can write ##(a,b,c) = \lambda_1 e_1 + \lambda_2 e_2 + \lambda_3 e_3## for unique ##\lambda_1, \lambda_2, \lambda_3 \in \mathbb{R}##. The coordinates of ##(a,b,c)## w.r.t. the basis ##E## are then ##(\lambda_1, \lambda_2, \lambda_3)##. Let us introduce some notation. Let's write ##[(a,b,c)]_E= (\lambda_1, \lambda_2, \lambda_3)##.

Your question is basically: given bases ##E= (e_1, e_2, e_3)## and ##E' = (e_1', e_2', e_3')##, what is the relation between ##[(a,b,c)]_E## and ##[(a,b,c)]_{E'}##? The answer is that you can get from one to the other via a matrix multiplication.

I guess in your post ##(v_1, v_2, v_3)^T## is the coordinates of a vector w.r.t. ##E## and ##(v_1', v_2', v_3')^T## is the coordinates of the same vector w.r.t. ##E'##.
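The relation between ##[(a,b,c)]_E## and ##[(a,b,c)]_{E'}## being a single matrix multiplication can be sketched in a few lines of NumPy (the particular second basis below is chosen only for illustration):

```python
import numpy as np

def coords(v, basis):
    """Coordinates [v]_E of v w.r.t. a basis E given as a list of vectors."""
    return np.linalg.solve(np.column_stack(basis), v)

v = np.array([2.0, 3.0, 5.0])
E  = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])]
E2 = [np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0]), np.array([0.0, 0.0, 1.0])]

v_E  = coords(v, E)    # standard basis: just (2, 3, 5)
v_E2 = coords(v, E2)   # different numbers, same vector

# One matrix multiplication links the two coordinate tuples:
B, B2 = np.column_stack(E), np.column_stack(E2)
M = np.linalg.solve(B2, B)        # [v]_{E2} = M [v]_E
assert np.allclose(M @ v_E, v_E2)
```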
 
etotheipi said:
But it is not the case that: $$
(v'_1 \vec{e}'_1 + v'_2 \vec{e}'_2 + v'_3 \vec{e}'_3) =
\begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
(v_1 \vec{e}_1 + v_2 \vec{e}_2 + v_3 \vec{e}_3) $$

When you say "is not the case", are you assuming the notation on the right hand side of the equal sign has a defined meaning?
 
  • #10
Math_QED said:
Your question is basically: given bases ##E= (e_1, e_2, e_3)## and ##E' = (e_1', e_2', e_3')##, what is the relation between ##[(a,b,c)]_E## and ##[(a,b,c)]_{E'}##? The answer is that you can get from one to the other via a matrix multiplication.

Thank you, yes this was my suspicion. That the column vectors on the LHS and RHS were tuples of the components with no basis, transformed via matrix multiplication.

Stephen Tashi said:
When you say "is not the case", are you assuming the notation on the right hand side of the equal sign has a defined meaning?

I suppose I took some liberties with that questionable notation to make the point, but essentially I meant that the column vectors on the RHS and LHS aren't even representations of the same vector in a different basis. But instead as @Math_QED mentioned they are just tuples of numbers.
 
  • #11
etotheipi said:
I suppose I took some liberties with that questionable notation to make the point, but essentially I meant that the column vectors on the RHS and LHS aren't even representations of the same vector in a different basis. But instead as @Math_QED mentioned they are just tuples of numbers.

The column vectors on the RHS and LHS represent the same vector. Say your fixed vector is ##(a,b,c) \in \mathbb{R}^3##. Then ##(v_1, v_2, v_3) = [(a,b,c)]_E## and ##(v_1', v_2', v_3')= [(a,b,c)]_{E'}##.
 
  • #12
etotheipi said:
But it is not the case that: $$
(v'_1 \vec{e}'_1 + v'_2 \vec{e}'_2 + v'_3 \vec{e}'_3) =
\begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
(v_1 \vec{e}_1 + v_2 \vec{e}_2 + v_3 \vec{e}_3)
$$ I.e. my point is that it makes no sense to write: $$
\vec{v} =
\begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\vec{v}
$$since the matrix is not the identity. So in the original expression $$
\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} = \begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}$$those column vectors must not equal ##\vec{v}##! I then wondered what the basis was for each of those column vectors... :wink:

There is a duality in linear algebra. Let's start with a vector, ##u## and a linear transformation ##T##. We have:
$$v = Tu$$
Where ##v## is another vector. Now, if we choose any basis, this equation takes the form of a matrix/tuple equation:
$$
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix} = \begin{bmatrix}
T_{11} & T_{12} & T_{13} \\
T_{21} & T_{22} & T_{23} \\
T_{31} & T_{32} & T_{33} \end{bmatrix}
\begin{pmatrix}u_1\\u_2\\u_3\end{pmatrix}$$
But, we can also interpret a matrix equation like this as a change of basis, where now the numbers ##v_1, v_2, v_3## represent the components of ##u## in a new basis. So, you need to be clear about what you are doing.
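That duality can be seen on one concrete matrix. The following sketch (not from the thread; a 90-degree rotation is chosen purely for illustration) reads the same array of numbers actively and passively:

```python
import numpy as np

theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # 90-degree rotation

u = np.array([1.0, 0.0])
w = R @ u  # the tuple (0, 1) -- but what does it mean?

# Active reading: w is a DIFFERENT vector, u rotated by 90 degrees,
# with components in the original basis.
# Passive reading: w lists the components of the SAME vector u in a
# new basis whose vectors are the columns of R^{-1} = R^T.
new_basis = R.T  # columns are the new basis vectors
assert np.allclose(new_basis @ w, u)  # w's components rebuild u exactly
```

The numbers in `w` are identical under both readings; only the interpretation (new vector vs. new basis) changes, which is why one has to be clear about what one is doing.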
 
  • #13
PeroK said:
There is a duality in linear algebra. Let's start with a vector, ##u## and a linear transformation ##T##. We have:
$$v = Tu$$
Where ##v## is another vector. Now, if we choose any basis, this equation takes the form of a matrix/tuple equation:
$$
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix} = \begin{bmatrix}
T_{11} & T_{12} & T_{13} \\
T_{21} & T_{22} & T_{23} \\
T_{31} & T_{32} & T_{33} \end{bmatrix}
\begin{pmatrix}u_1\\u_2\\u_3\end{pmatrix}$$
But, we can also interpret a matrix equation like this as a change of basis, where now the numbers ##v_1, v_2, v_3## represent the components of ##u## in a new basis. So, you need to be clear about what you are doing.

Thanks, this sums it up nicely. With linear transformations you get out a new vector, but with component transformations you get out a representation of the same vector. So in the tuple equation you wrote up, ##\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix} \not\equiv v_1 \hat{x} + v_2 \hat{y} + v_3 \hat{z}##, whilst this would be true for the linear transformation.

You can get other equations like

$$\begin{pmatrix}\vec{e}'_1\\\vec{e}'_2\\\vec{e}'_3\end{pmatrix} = \begin{bmatrix}
T_{11} & T_{12} & T_{13} \\
T_{21} & T_{22} & T_{23} \\
T_{31} & T_{32} & T_{33} \end{bmatrix}
\begin{pmatrix}\vec{e}_1\\\vec{e}_2\\\vec{e}_3\end{pmatrix}$$ where again this is matrix multiplication and the objects in each row of the column aren't coefficients of some basis vector (that wouldn't even make semantic sense in this case!).
 
  • #14
etotheipi said:
Thanks, this sums it up nicely. With linear transformations you get out a new vector, but with component transformations you get out a representation of the same vector.
The change-of-basis matrix is a linear transformation, so the distinction isn't as clear-cut as you seem to think.
etotheipi said:
So in the tuple equation you wrote up, ##\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix} \not\equiv v_1 \hat{x} + v_2 \hat{y} + v_3 \hat{z}##, whilst this would be true for the linear transformation.
This is unclear. Is the vector ##\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}## in terms of the basis ##\{\hat x, \hat y, \hat z \}##? If so, then ##\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}## does equal ##v_1 \hat{x} + v_2 \hat{y} + v_3 \hat{z}##.
etotheipi said:
You can get other equations like

$$
\begin{pmatrix}\vec{e}'_1\\\vec{e}'_2\\\vec{e}'_3\end{pmatrix} = \begin{bmatrix}
T_{11} & T_{12} & T_{13} \\
T_{21} & T_{22} & T_{23} \\
T_{31} & T_{32} & T_{33} \end{bmatrix}
\begin{pmatrix}\vec{e}_1\\\vec{e}_2\\\vec{e}_3\end{pmatrix}$$ where again this is matrix multiplication and the objects in each row of the column aren't coefficients of some basis vector (that wouldn't even make semantic sense in this case!).
Which of the five columns do you mean?
 
  • #15
PeroK said:
There is a duality in linear algebra.

etotheipi said:
Consider the transformation of the components of a vector ##\vec{v}## from an orthonormal coordinate system with a basis ##\{\vec{e}_1, \vec{e}_2, \vec{e}_3 \}## to another with a basis ##\{\vec{e}'_1, \vec{e}'_2, \vec{e}'_3 \}##

The transformation equation for the components of ##\vec{v}## looks something like$$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} = \begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}
$$

We could try to distinguish between the dual ideas by using the notation:

$$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix}_{b'} = \begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}_b
$$

when both sides of the equation represent the same vector.
We could even go as far as putting parentheses around the entire right-hand side of the above equation and subscripting it with a ##b'##. (Something I don't know how to do in LaTeX!) That would contrast with the notation for a linear transformation that maps a vector to a different vector. By analogy to expressions like ##y = 3x## or ##x' = 3x## we could write

$$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix}_{b} = \begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}_b
$$

with the understanding that ##\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix}_{b} ## is a notation for a variable that represents a vector different than ##\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}_{b}##.
 
  • #16
Actually I think I see what you guys were trying to say now. Really when we write $$\vec{v} = \begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}$$this is shorthand for $$[\vec{v}]_{\beta} = \begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}$$since the column structure is just a tuple that can undergo matrix multiplication. I was previously under the (I think, now, erroneous) impression that the column structure "had basis vectors built in", but after this discussion I don't think that makes sense. It seems much more coherent for it to just be a tuple of numbers.

Sorry for the confusion, and thanks for helping out!
 
  • #18
etotheipi said:
Actually I think I see what you guys were trying to say now. Really when we write $$\vec{v} = \begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}$$this is shorthand for $$[\vec{v}]_{\beta} = \begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}$$since the column structure is just a tuple, that can undergo matrix multiplication.
The numbers ##v_1, v_2,## and ##v_3## are the scalar multiples of the basis vectors, so in a column vector, the basis vectors are built in.
etotheipi said:
I was previously under the (I think, now, erroneous) impression that the column structure "had basis vectors built in" but after this discussion I don't think that makes sense. It seems much more coherent for it to just be a tuple of numbers.
As I understand what you wrote in this thread, your previous impression was that a column vector was just a "raw tuple," as you put it. That's the impression that was erroneous. This has nothing to do with linear transformations. If the basis is not explicitly shown, we usually assume that we're dealing with the standard basis (i.e., Euclidean basis) for whatever space is being considered.
 
  • #19
Mark44 said:
The numbers ##v_1, v_2,## and ##v_3## are the scalar multiples of the basis vectors, so in a column vector, the basis vectors are built in.

That's not what the Colorado notes seem to say. It appears that when we express the vector in column form (as a coordinate vector), we map the vector to a tuple in ##\mathbb{R}^n## (with entries as coefficients of a certain basis), and a tuple does not require any basis vectors (it's just a list).

I think this was what @Math_QED alluded to in #8.
 
  • #20
Mark44 said:
The numbers ##v_1, v_2,## and ##v_3## are the scalar multiples of the basis vectors, so in a column vector, the basis vectors are built in.
As I understand what you wrote in this thread, your previous impression was that a column vector was just a "raw tuple," as you put it. That's the impression that was erroneous. This has nothing to do with linear transformations. If the basis is not explicitly shown, we usually assume that we're dealing with the standard basis (i.e., Euclidean basis) for whatever space is being considered.

For instance, ##\vec{v} = B[\vec{v}]_{\beta} = [\vec{e}_1, \vec{e}_2, \vec{e}_3]\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}##, where we have to multiply by the basis matrix manually in order to obtain the vector. So the basis vectors aren't built into the tuple; we need to put them in manually.
 
  • #21
etotheipi said:
That's not what the Colorado notes seem to say. It appears that when we express the vector in column form (as a coordinate vector), we map the vector to a tuple in ##\mathbb{R}^n## (with entries as coefficients of a certain basis), and a tuple does not require any basis vectors (it's just a list).

A mapping onto ##\mathbb{R}^n## implies a basis, namely the vectors that get mapped to ##(1,0,0)## etc. A good idea to try to distinguish these things is to think of the polynomials as the vector space. Then you have a clear definition of your vectors, which aren't just tuples (like we have with ##\mathbb{R}^n## itself).
 
  • #22
PeroK said:
A mapping onto ##\mathbb{R}^n## implies a basis, namely the vectors that get mapped to ##(1,0,0)## etc. A good idea to try to distinguish these things is to think of the polynomials as the vector space. Then you have a clear definition of your vectors, which aren't just tuples (like we have with ##\mathbb{R}^n## itself).

Yes, sorry that's what I had in mind. I was trying to refer to the fact that the basis vector objects themselves aren't built into the column structure (a tuple), but instead must be put in as in #20.
 
  • #23
Final conceptual (though maybe actually notational :wink:) question; take the standard Cartesian basis ##\beta = \{\hat{x}, \hat{y}, \hat{z}\}##. If we were going to be pedantic, would we say: $$[\hat{x}]_{\beta} = (1,0,0)$$I ask because even the most thorough references, which make use of the notation ##[\vec{v}]_{\beta}##, still write ##\hat{x} = (1,0,0)##.

To confirm, this is just because it's considered obvious that ##\hat{x} \, \, (= 1\hat{x} + 0\hat{y} + 0\hat{z})## is expressed in the standard basis, right? There's no mathematical trickery going on that makes this any sort of exception?
 
  • #24
etotheipi said:
Final conceptual (though maybe actually notational :wink:) question; take the standard Cartesian basis ##\beta = \{\hat{x}, \hat{y}, \hat{z}\}##. If we were going to be pedantic, would we say: $$[\hat{x}]_{\beta} = (1,0,0)$$I ask because even the most thorough references, who make use of the notation ##[\vec{v}]_{\beta}##, still write ##\hat{x} = (1,0,0)##.

To confirm, this is just because it's considered obvious that ##\hat{x} \, \, (= 1\hat{x} + 0\hat{y} + 0\hat{z})## is expressed in the standard basis, right? There's no mathematical trickery going on that makes this any sort of exception?
This is too specific. Until you define a norm, there is no concept of a "unit" vector. If the vector space is explicitly ##\mathbb{R}^n## then you can imagine things like ##\hat x##. But, if your vector space is the quadratic polynomials (and below), then what is ##\hat x##? What is the length of the vector ##x^2 + 3x + 1##? What is your "default Cartesian basis"?
 
  • #25
PeroK said:
This is too specific. Until you define a norm, there is no concept of a "unit" vector. If the vector space is explicitly ##\mathbb{R}^n## then you can imagine things like ##\hat x##. But, if your vector space is the quadratic polynomials (and below), then what is ##\hat x##? What is the length of the vector ##x^2 + 3x + 1##? What is your "default Cartesian basis"?

I see your point. In pure maths, it seems to make perfect sense to have a basis consisting of tuples of elements of ##\mathbb{R}^n##. So your polynomial would have a coordinate vector of ##(1,3,1)## in the standard basis ##\beta = \{(1,0,0), (0,1,0), (0,0,1)\}## (Correction by @PeroK: it should of course be ##\beta = \{x^2, x, 1\}##). The basis vectors basically define themselves :wink:, I don't actually see any issue with the equality ##\vec{e}_1 = (1,0,0)## in pure maths.

But I have some trouble translating this to Physics. We have an affine base space, in which we choose an origin in addition to three (let's say orthogonal, not necessarily normalised since we have no metric yet) geometrical vectors (arrows). Now we can really just express any other arrow as a sum of multiples of the basis arrows.

The best we can do is something like ##\vec{e}_1 = 1\vec{e}_1 + 0\vec{e}_2 + 0\vec{e}_3##. To map that to the reals, I would have thought we need to write ##[\vec{e}_1]_{\beta} = (1,0,0)##. Maybe I don't know any better, but I'm slightly wary in this case about equating ##\vec{e}_1## and ##(1,0,0)##.

Perhaps this doesn't make much sense... I have been learning LA from Axler but it takes a very pure approach!
 
  • #26
etotheipi said:
I see. In the pure maths sense, it makes perfect sense to have a basis consisting of tuples of elements of ##\mathbb{R}^n##. So your polynomial would have a coordinate vector of ##(1,3,1)## in the standard basis ##\{(1,0,0), (0,1,0), (0,0,1)\}##. The basis vectors basically define themselves :wink:.

The basis ##1, x, x^2## is not a Cartesian basis in any sense I can see. In fact, when you come to define an inner product on this space you find that (with the usual inner product), these vectors are not orthogonal. That's why if you want to get a better grasp of abstract vector spaces, you must consider function spaces and break the connection with ##\mathbb{R}^3##.

Note also that ##ax^2 + bx + c## would equally well be, by default, ##(a, b, c)## or ##(c, b, a)##.

Also, vector spaces of linear operators make good examples, where you must deal with the abstract concepts directly and not fall back on concepts like "Cartesian" that are in general not well defined.
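The polynomial example above can be made concrete with NumPy's polynomial helpers. A sketch (not from the thread; the inner product ##\langle f, g\rangle = \int_0^1 fg\,dx## is one common choice, assumed here only for illustration):

```python
import numpy as np
from numpy.polynomial import polynomial as P

# p(x) = x^2 + 3x + 1 in the basis (1, x, x^2); NumPy stores the lowest
# degree first, so the coordinate tuple is (1, 3, 1).
p = np.array([1.0, 3.0, 1.0])
assert np.isclose(P.polyval(2.0, p), 11.0)  # p(2) = 4 + 6 + 1

# With <f, g> = integral of f*g over [0, 1], the basis (1, x, x^2) is
# not orthogonal: e.g. <1, x> = 1/2, not 0.
one, x = np.array([1.0]), np.array([0.0, 1.0])
antideriv = P.polyint(P.polymul(one, x))  # antiderivative of 1*x
inner_1_x = P.polyval(1.0, antideriv) - P.polyval(0.0, antideriv)
assert np.isclose(inner_1_x, 0.5)
```

Note the ordering point made above: the same polynomial would be ##(1, 3, 1)## in NumPy's low-to-high convention but ##(1, 3, 1)## reversed in a high-to-low one, so the tuple alone does not pin down the vector.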
 
  • #27
PeroK said:
The basis ##1, x, x^2## is not a Cartesian basis in any sense I can see.

o_O you're of course right, I wasn't thinking. I think that's a sign I need to go to sleep, I'm basically "zombie-typing" now...

I must have been reasoning along the lines of ##x^2 := (1,0,0)##, ##x := (0,1,0)##, ##1 := (0,0,1)##.
 
  • #28
Interesting thread. I used to get confused by the following:
If each vector needs a basis to be represented by coordinates, then with respect to which basis are the basis vectors themselves represented? If I write ##e_1=(1,0,0)## then this vector is represented by coordinates that are defined in terms of itself. It is like writing ##e_1=1\cdot e_1 + 0\cdot e_2 + 0\cdot e_3##. How can I get any information from something like this?

Therefore, I feel that there has to be the notion of "raw" tuples, some ground truth.
Each point in ##\mathbb{R}^n## is given by such a raw tuple. This raw tuple coincides by definition with the coordinates with respect to the standard basis ##(\vec{e_1}, \vec{e_2}, \dots)##, where the basis vectors are the raw tuples ##(1,0,\dots), (0,1,0,\dots), \dots##.
Now I can also define new basis vectors, each of which can be written as a raw tuple. Then I can calculate the coordinates of any vector with respect to the new basis, so I can either write the vector as its raw tuple (which is the same as writing its coordinates w.r.t. the standard basis) or write down a tuple of coordinates w.r.t. the new basis. Of course, a subscript or something similar might be necessary to distinguish the meaning of the tuples:
is it a raw tuple (coinciding with standard-basis coordinates), or is it the coordinates w.r.t. a basis other than the standard one?
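The recipe above — compute coordinates of a raw tuple with respect to a new basis, itself given by raw tuples — can be sketched concretely in ##\mathbb{R}^2##. The particular basis below is an arbitrary choice for illustration:

```python
from fractions import Fraction

def coords_in_basis(v, b1, b2):
    """Coordinates (c1, c2) of the raw tuple v with respect to the basis
    {b1, b2} of R^2, i.e. solve c1*b1 + c2*b2 = v by Cramer's rule."""
    det = b1[0] * b2[1] - b2[0] * b1[1]
    if det == 0:
        raise ValueError("b1 and b2 are linearly dependent, not a basis")
    c1 = Fraction(v[0] * b2[1] - b2[0] * v[1], det)
    c2 = Fraction(b1[0] * v[1] - v[0] * b1[1], det)
    return c1, c2

# W.r.t. the standard basis, the coordinates of a raw tuple are just
# its own entries...
print(coords_in_basis((2, 3), (1, 0), (0, 1)))   # coordinates are 2 and 3
# ...but w.r.t. a different basis they change, while the vector itself
# (the raw tuple) stays the same:
print(coords_in_basis((2, 3), (3, 2), (1, 3)))   # coordinates are 3/7 and 5/7
```

One can verify the second result by recombining: ##\tfrac{3}{7}(3,2) + \tfrac{5}{7}(1,3) = (2,3)##.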
 
  • #29
SchroedingersLion said:
Interesting thread. I used to get confused by the following:
If each vector needs a basis to be represented by coordinates, then with respect to what basis are the basis vectors themselves represented? If I write ##e_1=(1,0,0)## then this vector is represented by coordinates that are defined by itself. It is like writing ##e_1=1\cdot e_1 + 0\cdot e_2 + 0\cdot e_3##. How can I get any information from something like this?

Therefore, I feel that there has to be the notion of "raw" tuples, some ground truth.
Each point in ##\mathbb{R}^n## is given by such a raw tuple. This raw tuple coincides by definition with the coordinates with respect to the standard basis ##(\vec{e_1}, \vec{e_2}, \dots)##, where the basis vectors are the raw tuples ##(1,0,\dots), (0,1,0,\dots), \dots##.
Now I can also define new basis vectors, each of which can be written as a raw tuple. Then I can calculate the coordinates of any vector with respect to the new basis, so I can either write the vector as its raw tuple (which is the same as writing its coordinates w.r.t. the standard basis) or write down a tuple of coordinates w.r.t. the new basis. Of course, a subscript or something similar might be necessary to distinguish the meaning of the tuples:
is it a raw tuple (coinciding with standard-basis coordinates), or is it the coordinates w.r.t. a basis other than the standard one?

I think a lot of this is context dependent. For vectors in e.g. ##\mathbb{R}^2##, it makes sense to define a basis as just plain tuples of real numbers e.g. ##\beta = \{\begin{pmatrix}3\\2\end{pmatrix}, \begin{pmatrix}1\\3\end{pmatrix}\}##. The mapping between a vector and its coordinate vector is trivial.

For function spaces, we might have a basis such as ##\{e^x, e^{-x}\}##. Then we can map each function to a coordinate vector, a tuple in ##\mathbb{R}^2##, so e.g. ##[5e^x + 3e^{-x}]_{\beta} = \begin{pmatrix}5\\3\end{pmatrix}##. For geometrical basis (3-)vectors in a Euclidean space we do a similar thing: the components are mapped to a tuple in ##\mathbb{R}^3##.

There is room for confusion about tuples vs vectors as has been demonstrated in this thread (by me!), but I can see why people often opt to omit a subscript for reasons of efficiency.

It is a bit like how number bases operate. When we describe the number "15" in base ten, that literally just means ##1 \times 10 + 5 \times 1##. We can't break down the basis ##\{10, 1\}## any further; it's literally a basis.
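The analogy can be made concrete: the digit string of a number is exactly its tuple of "coordinates" with respect to the place-value "basis" ##(\dots, b^2, b, 1)##. A quick sketch:

```python
def digits(n, base=10):
    """Return the 'coordinates' of a non-negative integer n with respect
    to the place-value 'basis' (..., base**2, base, 1), most significant
    digit first."""
    if n == 0:
        return [0]
    out = []
    while n > 0:
        out.append(n % base)   # coordinate for the current power of base
        n //= base
    return out[::-1]

print(digits(15))       # [1, 5]: 15 = 1*10 + 5*1
print(digits(15, 2))    # [1, 1, 1, 1]: 15 = 8 + 4 + 2 + 1
```

The same number gets different coordinate tuples in different bases, just as the same vector gets different coordinate tuples in different vector-space bases.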
 
  • #30
SchroedingersLion said:
If each vector needs a basis to be represented by coordinates, then according to what base are the basis vectors represented? If I write ##e_1=(1,0,0)## then this vector is represented by coordinates that are defined by itself. It is like writing ##e_1=1\cdot e_1 + 0\cdot e_2 + 0\cdot e_3##. How can I get any information from something like this?
You get that ##e_1## equals itself. ##e_1## is the vector that extends 1 unit along the x-axis, from the origin to the point whose coordinates are (1, 0, 0).
SchroedingersLion said:
Therefore, I feel that there has to be the notion of "raw" tuples, some ground truth.
No, I disagree. If you have a vector in coordinate form, there is always an implied basis. The coordinate form is a shorthand way of writing ##\vec v = c_1\vec {b_1} + \dots + c_n \vec {b_n}##, where ##\vec {b_i}## are the basis vectors.
etotheipi said:
I think a lot of this is context dependent. For vectors in e.g. ##\mathbb{R}^2##, it makes sense to define a basis as just plain tuples of real numbers e.g. ##\beta = \{\begin{pmatrix}3\\2\end{pmatrix}, \begin{pmatrix}1\\3\end{pmatrix}\}##.
No, they are not just plain or "raw" tuples. They are the coordinates of two vectors, relative to some basis. And it makes no difference whether you're in ##\mathbb R^2## or any other Euclidean space.
etotheipi said:
The mapping between a vector and its coordinate vector is trivial.
No, I disagree.
If ##\vec v = \begin{pmatrix}3\\2\end{pmatrix}## in terms of the standard basis for ##\mathbb R^2##, what are the coordinates of ##\vec v## in terms of the basis ##\{\begin{pmatrix}1\\2\end{pmatrix}, \begin{pmatrix}2\\-1\end{pmatrix}\}##?
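For what it's worth, that exercise can be checked numerically. This sketch solves ##c_1\begin{pmatrix}1\\2\end{pmatrix} + c_2\begin{pmatrix}2\\-1\end{pmatrix} = \begin{pmatrix}3\\2\end{pmatrix}## by Cramer's rule, with exact rational arithmetic:

```python
from fractions import Fraction

# Solve  c1 + 2*c2 = 3
#        2*c1 - c2 = 2
# i.e. express v = (3, 2) in the basis {(1, 2), (2, -1)}.
det = 1 * (-1) - 2 * 2                  # = -5, nonzero, so it is a basis
c1 = Fraction(3 * (-1) - 2 * 2, det)    # = 7/5
c2 = Fraction(1 * 2 - 3 * 2, det)       # = 4/5
print(c1, c2)

# Sanity check: recombining against the basis vectors recovers v.
assert (c1 * 1 + c2 * 2, c1 * 2 + c2 * (-1)) == (3, 2)
```

So the same geometric vector is ##(3, 2)## in one basis and ##(7/5,\ 4/5)## in the other — which is exactly why the coordinate tuple alone, without the basis, underdetermines the vector.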
 