Do column 'vectors' need a basis?

In general, any vector can be written in terms of any basis, since a basis is by definition a spanning set. Of course the coordinates of any one vector will be different in different bases, but the vector itself is the same.
  • #1
etotheipi
Consider the transformation of the components of a vector ##\vec{v}## from an orthonormal coordinate system with a basis ##\{\vec{e}_1, \vec{e}_2, \vec{e}_3 \}## to another with a basis ##\{\vec{e}'_1, \vec{e}'_2, \vec{e}'_3 \}##

The transformation equation for the components of ##\vec{v}## looks something like$$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} = \begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}
$$Now since ##\vec{v} = v_i \vec{e}_i = v'_i\vec{e}'_i##, and the matrix is not the identity, the structures ##\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix}## and ##\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}## in the above expression can't be ##\vec{v}##. Instead, they just appear to be 3-tuples of numbers with no apparent basis.

Similarly, in different contexts we sometimes put vectors in such a column structure, like ##\begin{pmatrix}\vec{e}_1\\\vec{e}_2\\\vec{e}_3\end{pmatrix}## e.g. when looking at how the basis transforms.

It appears to me then that something like ##\begin{pmatrix}a\\b\\c\end{pmatrix}## can represent the raw tuple ##(a,b,c)##, or an expression with certain basis vectors like ##a\vec{e}_1 + b\vec{e}_2 + c\vec{e}_3## , depending on the context. I wondered if someone could clarify whether this is along the right lines?
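For concreteness, here is a small NumPy sketch of the transformation above; the rotation angle, the bases, and the component values are my own arbitrary choices, not anything from the thread. It checks that the components change while ##v_i \vec{e}_i## stays the same:

```python
import numpy as np

# Arbitrary illustrative choices: the standard basis e_i of R^3 and a
# primed basis e'_i obtained by rotating it about the z-axis.
e = np.eye(3)                                   # columns are e_1, e_2, e_3
theta = 0.4
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
e_p = R @ e                                     # columns are e'_1, e'_2, e'_3

# Matrix of dot products L_ij = e'_i . e_j, as in the post
L = np.array([[e_p[:, i] @ e[:, j] for j in range(3)] for i in range(3)])

v = np.array([1.0, 2.0, 3.0])                   # components v_i in the unprimed basis
v_p = L @ v                                     # components v'_i in the primed basis

# The component tuples differ, but the vector they build up is the same:
print(np.allclose(v, v_p))                      # False: the tuples of components differ
print(np.allclose(e @ v, e_p @ v_p))            # True:  v_i e_i == v'_i e'_i
```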
 
Last edited by a moderator:
  • #2
etotheipi said:
Consider the transformation of the components of a vector ##\vec{v}## from an orthonormal coordinate system with a basis ##\{\vec{e}_1, \vec{e}_2, \vec{e}_3 \}## to another with a basis ##\{\vec{e}'_1, \vec{e}'_2, \vec{e}'_3 \}##

The transformation equation for the components of ##\vec{v}## looks something like$$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} = \begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}
$$Now since ##\vec{v} = v_i \vec{e}_i = v'_i\vec{e}'_i##, the structures ##\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix}## and ##\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}## in the above expression can't be ##\vec{v}##.
Sure they can. They are just different representations of ##\vec v## in different bases.
etotheipi said:
Instead, they just appear to be 3-tuples of numbers with no apparent basis.
No. Any time you have the coordinate representation of a vector, the entries are the coordinates relative to the underlying basis.

For a simpler example, consider the vector ##\vec v = \begin{pmatrix}1\\2 \end{pmatrix}##. Without any other information, we would naturally assume that the coordinates are those of the standard basis for ##\mathbb R^2##, ##\hat {e_1}## and ##\hat{e_2}##.
If we instead write ##\vec v## in terms of the basis ##\{\begin{pmatrix}1\\0 \end{pmatrix}, \begin{pmatrix}1\\1 \end{pmatrix}\}##, the representation of ##\vec v## in terms of that basis would be ##\begin{pmatrix} -1\\2 \end{pmatrix}##.
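A minimal NumPy check of that example (just a sketch of the computation; the variable names are mine):

```python
import numpy as np

# v = (1, 2) in the standard basis for R^2, re-expressed in the
# basis {(1, 0), (1, 1)} from the example above.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])              # columns are the new basis vectors
v_std = np.array([1.0, 2.0])

c = np.linalg.solve(B, v_std)           # coordinates c satisfy B @ c = v_std
print(c)                                # [-1.  2.]
print(np.allclose(B @ c, v_std))        # True: -1*(1,0) + 2*(1,1) == (1,2)
```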
etotheipi said:
Similarly, in different contexts we sometimes put vectors in such a column structure, like ##\begin{pmatrix}\vec{e}_1\\\vec{e}_2\\\vec{e}_3\end{pmatrix}## e.g. when looking at how the basis transforms.

It appears to me then that something like ##\begin{pmatrix}a\\b\\c\end{pmatrix}## can represent the raw tuple ##(a,b,c)##
Again, there's really no such thing as a "raw tuple". The coordinates a, b, and c are the scalar multipliers of whatever vectors are in the basis you're using.
etotheipi said:
, or an expression with certain basis vectors like ##a\vec{e}_1 + b\vec{e}_2 + c\vec{e}_3## , depending on the context. I wondered if someone could clarify whether this is along the right lines?
 
  • Like
Likes etotheipi
  • #3
Mark44 said:
Sure they can. They are just different representations of ##\vec v## in different bases.
I don't agree. In the expression $$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} = \begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}
$$ The matrix is not the identity, so $$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} \neq \begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}$$ It is instead ##\vec{v}## that is independent of the coordinate system, i.e. ##v'_i \vec{e}'_i = v_i \vec{e}_i = \vec{v}##.
 
  • #4
etotheipi said:
I don't agree. In the expression $$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} = \begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}
$$ The matrix is not the identity, so $$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} \neq \begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}$$ It is instead ##\vec{v}## that is independent of the coordinate system, i.e. ##v'_i \vec{e}'_i = v_i \vec{e}_i = \vec{v}##.
That's not the point. Of course the matrix is not the identity matrix. Also, I'm not saying that the coordinates of a vector in one basis will be pairwise equal to the coordinates of the same vector in another basis.

In the example I gave, we have two different representations of the same vector; namely (1, 2), in the standard basis for R^2, and (-1, 2), in another basis. Obviously the coordinates are different, but they nevertheless represent a single vector.
 
  • Like
Likes etotheipi
  • #5
Mark44 said:
That's not the point. Of course the matrix is not the identity matrix. Also, I'm not saying that the coordinates of a vector in one basis will be pairwise equal to the coordinates of the same vector in another basis.

In the example I gave, we have two different representations of the same vector; namely (1, 2), in the standard basis for R^2, and (-1, 2), in another basis. Obviously the coordinates are different, but they nevertheless represent a single vector.

Though my question is really about the specific structure $$
\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} = \begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}$$ The column structures in that expression are evidently not equal to ##\vec{v}##. So I wonder what basis they are represented in.
 
  • #6
etotheipi said:
I don't agree. In the expression $$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} = \begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}
$$ The matrix is not the identity, so $$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} \neq \begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}$$ It is instead ##\vec{v}## that is independent of the coordinate system, i.e. ##v'_i \vec{e}'_i = v_i \vec{e}_i = \vec{v}##.

It depends what you mean by the ##=## sign. Some notations I use are:
$$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix}' = \begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}$$
$$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix}_{b2} = \begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}_{b1}$$
$$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} \leftrightarrow \begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}$$
All these indicate that it's the same vector represented by a different column of numbers in different bases.
 
  • Like
Likes Ishika_96_sparkles, etotheipi, DaveE and 1 other person
  • #7
But it is not the case that: $$
(v'_1 \vec{e}'_1 + v'_2 \vec{e}'_2 + v'_3 \vec{e}'_3) =
\begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
(v_1 \vec{e}_1 + v_2 \vec{e}_2 + v_3 \vec{e}_3)
$$ I.e. my point is that it makes no sense to write: $$
\vec{v} =
\begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\vec{v}
$$ since the matrix is not the identity. So in the original expression $$
\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} = \begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}$$ those column vectors must not equal ##\vec{v}##! I then wondered what the basis was for each of those column vectors... :wink:
 
  • #8
Of course, there is something like 'raw' tuples. ##\mathbb{R}^3## is exactly the collection of formal tuples ##(a,b,c)## with ##a,b,c \in \mathbb{R}##. I don't need linear algebra, or coordinates, or bases to talk about this.

Given a vector ##(a,b,c) \in \mathbb{R}^3##, we can look at the coordinates of this vector w.r.t. a basis ##E=(e_1, e_2, e_3)##. What are these coordinates? Well, we can write ##(a,b,c) = \lambda_1 e_1 + \lambda_2 e_2 + \lambda_3 e_3## for unique ##\lambda_1, \lambda_2, \lambda_3 \in \mathbb{R}##. The coordinates of ##(a,b,c)## w.r.t. the basis ##E## are then ##(\lambda_1, \lambda_2, \lambda_3)##. Let us introduce some notation. Let's write ##[(a,b,c)]_E= (\lambda_1, \lambda_2, \lambda_3)##.

Your question is basically: given bases ##E= (e_1, e_2, e_3)## and ##E' = (e_1', e_2', e_3')##, what is the relation between ##[(a,b,c)]_E## and ##[(a,b,c)]_{E'}##? The answer is that you can get from one to the other via a matrix multiplication.

I guess in your post ##(v_1, v_2, v_3)^T## gives the coordinates of a vector w.r.t. ##E## and ##(v_1', v_2', v_3')^T## gives the coordinates of the same vector w.r.t. ##E'##.
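A small NumPy sketch of this relation, with arbitrary example bases of my own (called E and Ep below, standing in for ##E## and ##E'##):

```python
import numpy as np

# A fixed vector of R^3 (the formal tuple), and two bases E and E'
# (arbitrary illustrative choices).
v = np.array([2.0, -1.0, 3.0])
E  = np.column_stack([[1, 1, 0], [0, 1, 1], [0, 0, 1]]).astype(float)   # columns e_1, e_2, e_3
Ep = np.column_stack([[1, 0, 0], [1, 1, 0], [1, 1, 1]]).astype(float)   # columns e'_1, e'_2, e'_3

# [v]_E solves E @ coords = v, and likewise for E'
v_E  = np.linalg.solve(E,  v)
v_Ep = np.linalg.solve(Ep, v)

# One matrix multiplication takes you from [v]_E to [v]_{E'}
P = np.linalg.solve(Ep, E)              # P = (E')^{-1} E
print(np.allclose(P @ v_E, v_Ep))       # True
```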
 
  • Love
Likes etotheipi
  • #9
etotheipi said:
But it is not the case that: $$
(v'_1 \vec{e}'_1 + v'_2 \vec{e}'_2 + v'_3 \vec{e}'_3) =
\begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
(v_1 \vec{e}_1 + v_2 \vec{e}_2 + v_3 \vec{e}_3) $$

When you say "is not the case", are you assuming the notation on the right hand side of the equal sign has a defined meaning?
 
  • #10
Math_QED said:
Your question is basically: given bases ##E= (e_1, e_2, e_3)## and ##E' = (e_1', e_2', e_3')##, what is the relation between ##[(a,b,c)]_E## and ##[(a,b,c)]_{E'}##? The answer is that you can get from one to the other via a matrix multiplication.

Thank you, yes this was my suspicion: that the column vectors on the LHS and RHS are tuples of the components with no basis attached, transformed via matrix multiplication.

Stephen Tashi said:
When you say "is not the case", are you assuming the notation on the right hand side of the equal sign has a defined meaning?

I suppose I took some liberties with that questionable notation to make the point, but essentially I meant that the column vectors on the RHS and LHS aren't even representations of the same vector in a different basis; instead, as @Math_QED mentioned, they are just tuples of numbers.
 
  • #11
etotheipi said:
I suppose I took some liberties with that questionable notation to make the point, but essentially I meant that the column vectors on the RHS and LHS aren't even representations of the same vector in a different basis. But instead as @Math_QED mentioned they are just tuples of numbers.

The column vectors on the RHS and LHS are representing the same vector. Say your fixed vector is ##(a,b,c) \in \mathbb{R}^3##. Then ##(v_1, v_2, v_3) = [(a,b,c)]_E## and ##(v_1', v_2', v_3')= [(a,b,c)]_{E'}##.
 
  • Like
Likes etotheipi
  • #12
etotheipi said:
But it is not the case that: $$
(v'_1 \vec{e}'_1 + v'_2 \vec{e}'_2 + v'_3 \vec{e}'_3) =
\begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
(v_1 \vec{e}_1 + v_2 \vec{e}_2 + v_3 \vec{e}_3)
$$ I.e. my point is that it makes no sense to write: $$
\vec{v} =
\begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\vec{v}
$$ since the matrix is not the identity. So in the original expression $$
\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} = \begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}$$ those column vectors must not equal ##\vec{v}##! I then wondered what the basis was for each of those column vectors... :wink:

There is a duality in linear algebra. Let's start with a vector ##u## and a linear transformation ##T##. We have:
$$v = Tu$$
where ##v## is another vector. Now, if we choose any basis, this equation takes the form of a matrix/tuple equation:
$$
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix} = \begin{bmatrix}
T_{11} & T_{12} & T_{13} \\
T_{21} & T_{22} & T_{23} \\
T_{31} & T_{32} & T_{33} \end{bmatrix}
\begin{pmatrix}u_1\\u_2\\u_3\end{pmatrix}$$
But, we can also interpret a matrix equation like this as a change of basis, where now the numbers ##v_1, v_2, v_3## represent the components of ##u## in a new basis. So, you need to be clear about what you are doing.
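To make the two readings of one and the same matrix/tuple equation concrete, here is a toy sketch (a 2D rotation, chosen only for illustration):

```python
import numpy as np

# One matrix, two readings (toy 2D example).
T = np.array([[0.0, -1.0],
              [1.0,  0.0]])             # a 90-degree rotation
u = np.array([1.0, 0.0])
w = T @ u                               # the tuple arithmetic is the same either way

# Active reading: w holds the components of a *new* vector v = Tu,
# expressed in the *same* basis as u.
print("active:", w)                     # [0. 1.]

# Passive reading: w holds the components of the *same* vector u,
# but relative to a new basis whose basis vectors are the columns of B = T^{-1}.
B = np.linalg.inv(T)
print("passive:", np.allclose(B @ w, u))   # True: summing w_i times the new basis vectors gives u back
```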
 
  • Like
Likes etotheipi
  • #13
PeroK said:
There is a duality in linear algebra. Let's start with a vector ##u## and a linear transformation ##T##. We have:
$$v = Tu$$
where ##v## is another vector. Now, if we choose any basis, this equation takes the form of a matrix/tuple equation:
$$
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix} = \begin{bmatrix}
T_{11} & T_{12} & T_{13} \\
T_{21} & T_{22} & T_{23} \\
T_{31} & T_{32} & T_{33} \end{bmatrix}
\begin{pmatrix}u_1\\u_2\\u_3\end{pmatrix}$$
But, we can also interpret a matrix equation like this as a change of basis, where now the numbers ##v_1, v_2, v_3## represent the components of ##u## in a new basis. So, you need to be clear about what you are doing.

Thanks, this sums it up nicely. With linear transformations you get out a new vector, but with component transformations you get out a representation of the same vector. So in the tuple equation you wrote up, ##\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix} \not\equiv v_1 \hat{x} + v_2 \hat{y} + v_3 \hat{z}##, whilst this would be true for the linear transformation.

You can get other equations like

$$\begin{pmatrix}\vec{e}'_1\\\vec{e}'_2\\\vec{e}'_3\end{pmatrix} = \begin{bmatrix}
T_{11} & T_{12} & T_{13} \\
T_{21} & T_{22} & T_{23} \\
T_{31} & T_{32} & T_{33} \end{bmatrix}
\begin{pmatrix}\vec{e}_1\\\vec{e}_2\\\vec{e}_3\end{pmatrix}$$ where again this is matrix multiplication and the objects in each row of the column aren't coefficients of some basis vector (that wouldn't even make semantic sense in this case!).
 
  • #14
etotheipi said:
Thanks, this sums it up nicely. With linear transformations you get out a new vector, but with component transformations you get out a representation of the same vector.
The change-of-basis matrix is a linear transformation, so the distinction isn't as clear-cut as you seem to think.
etotheipi said:
So in the tuple equation you wrote up, ##\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix} \not\equiv v_1 \hat{x} + v_2 \hat{y} + v_3 \hat{z}##, whilst this would be true for the linear transformation.
This is unclear. Is the vector ##\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}## in terms of the basis ##\{\hat x, \hat y, \hat z \}##? If so, then ##\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}## does equal ##v_1 \hat{x} + v_2 \hat{y} + v_3 \hat{z}##.
etotheipi said:
You can get other equations like

$$
\begin{pmatrix}\vec{e}'_1\\\vec{e}'_2\\\vec{e}'_3\end{pmatrix} = \begin{bmatrix}
T_{11} & T_{12} & T_{13} \\
T_{21} & T_{22} & T_{23} \\
T_{31} & T_{32} & T_{33} \end{bmatrix}
\begin{pmatrix}\vec{e}_1\\\vec{e}_2\\\vec{e}_3\end{pmatrix}$$ where again this is matrix multiplication and the objects in each row of the column aren't coefficients of some basis vector (that wouldn't even make semantic sense in this case!).
Which of the five columns do you mean?
 
  • Like
Likes etotheipi
  • #15
PeroK said:
There is a duality in linear algebra.

etotheipi said:
Consider the transformation of the components of a vector ##\vec{v}## from an orthonormal coordinate system with a basis ##\{\vec{e}_1, \vec{e}_2, \vec{e}_3 \}## to another with a basis ##\{\vec{e}'_1, \vec{e}'_2, \vec{e}'_3 \}##

The transformation equation for the components of ##\vec{v}## looks something like$$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix} = \begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}
$$

We could try to distinguish between the dual ideas by using the notation:

$$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix}_{b'} = \begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}_b
$$

when both sides of the equation represent the same vector.
We could even go as far as putting parentheses around the entire right hand side of the above equation and subscripting it with a ##b'##. (Something I don't know how to do in LaTeX!) That would contrast with the notation for a linear transformation that maps a vector to a different vector. By analogy with expressions like ##y = 3x## or ##x' = 3x## we could write

$$\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix}_{b} = \begin{bmatrix}
\vec{e}'_1 \cdot \vec{e}_1 & \vec{e}'_1 \cdot \vec{e}_2 & \vec{e}'_1 \cdot \vec{e}_3 \\
\vec{e}'_2 \cdot \vec{e}_1 & \vec{e}'_2 \cdot \vec{e}_2 & \vec{e}'_2 \cdot \vec{e}_3 \\
\vec{e}'_3 \cdot \vec{e}_1 & \vec{e}'_3 \cdot \vec{e}_2 & \vec{e}'_3 \cdot \vec{e}_3 \end{bmatrix}
\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}_b
$$

with the understanding that ##\begin{pmatrix}v'_1\\v'_2\\v'_3\end{pmatrix}_{b} ## is a notation for a variable that represents a vector different than ##\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}_{b}##.
 
  • Like
Likes etotheipi and PeroK
  • #16
Actually I think I see what you guys were trying to say now. Really when we write $$\vec{v} = \begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}$$ this is shorthand for $$[\vec{v}]_{\beta} = \begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}$$ since the column structure is just a tuple that can undergo matrix multiplication. I was previously under the (I think, now, erroneous) impression that the column structure "had basis vectors built in", but after this discussion I don't think that makes sense. It seems much more coherent for it to just be a tuple of numbers.

Sorry for the confusion, and thanks for helping out!
 
  • #18
etotheipi said:
Actually I think I see what you guys were trying to say now. Really when we write $$\vec{v} = \begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}$$ this is shorthand for $$[\vec{v}]_{\beta} = \begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}$$ since the column structure is just a tuple that can undergo matrix multiplication.
The numbers ##v_1, v_2,## and ##v_3## are the scalar multiples of the basis vectors, so in a column vector, the basis vectors are built in.
etotheipi said:
I was previously under the (I think, now, erroneous) impression that the column structure "had basis vectors built in" but after this discussion I don't think that makes sense. It seems much more coherent for it to just be a tuple of numbers.
As I understand what you wrote in this thread, your previous impression was that a column vector was just a "raw tuple," as you put it. That's the impression that was erroneous. This has nothing to do with linear transformations. If the basis is not explicitly shown, we usually assume that we're dealing with the standard basis (i.e., Euclidean basis) for whatever space is being considered.
 
  • Like
Likes romsofia
  • #19
Mark44 said:
The numbers ##v_1, v_2,## and ##v_3## are the scalar multiples of the basis vectors, so in a column vector, the basis vectors are built in.

That's not what the Colorado notes seem to say. It appears that when we express the vector in column form (as a coordinate vector), we map the vector to a tuple in ##\mathbb{R}^n## (with entries as coefficients of a certain basis), and a tuple does not require any basis vectors (it's just a list).

I think this was what @Math_QED alluded to in #8.
 
  • #20
Mark44 said:
The numbers ##v_1, v_2,## and ##v_3## are the scalar multiples of the basis vectors, so in a column vector, the basis vectors are built in.
As I understand what you wrote in this thread, your previous impression was that a column vector was just a "raw tuple," as you put it. That's the impression that was erroneous. This has nothing to do with linear transformations. If the basis is not explicitly shown, we usually assume that we're dealing with the standard basis (i.e., Euclidean basis) for whatever space is being considered.

For instance, ##\vec{v} = B[\vec{v}]_{\beta} = [\vec{e}_1, \vec{e}_2, \vec{e}_3]\begin{pmatrix}v_1\\v_2\\v_3\end{pmatrix}##, where we manually have to matrix multiply by the basis in order to obtain the vector. So the basis vectors aren't built into the tuple; we need to put them in manually.
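As a made-up numerical instance of that factorisation (the basis and coordinates below are arbitrary):

```python
import numpy as np

# v = B @ [v]_beta: the columns of B are the basis vectors, [v]_beta is the
# tuple of coordinates (all numbers are made up for illustration).
B = np.column_stack([[1, 0, 0], [1, 1, 0], [1, 1, 1]]).astype(float)
coords = np.array([2.0, -1.0, 4.0])     # [v]_beta

v = B @ coords                          # "putting the basis in manually"
print(v)                                # [5. 3. 4.]
```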
 
  • #21
etotheipi said:
That's not what the Colorado notes seem to say. It appears that when we express the vector in column form (as a coordinate vector), we map the vector to a tuple in ##\mathbb{R}^n## (with entries as coefficients of a certain basis), and a tuple does not require any basis vectors (it's just a list).

A mapping onto ##\mathbb{R}^n## implies a basis, namely the vectors that get mapped to ##(1,0,0)## etc. A good way to distinguish these things is to think of the polynomials as the vector space. Then you have a clear definition of your vectors, which aren't just tuples (like we have with ##\mathbb{R}^n## itself).
 
  • Like
Likes etotheipi
  • #22
PeroK said:
A mapping onto ##\mathbb{R}^n## implies a basis, namely the vectors that get mapped to ##(1,0,0)## etc. A good way to distinguish these things is to think of the polynomials as the vector space. Then you have a clear definition of your vectors, which aren't just tuples (like we have with ##\mathbb{R}^n## itself).

Yes, sorry that's what I had in mind. I was trying to refer to the fact that the basis vector objects themselves aren't built into the column structure (a tuple), but instead must be put in as in #20.
 
  • #23
Final conceptual (though maybe actually notational :wink:) question: take the standard Cartesian basis ##\beta = \{\hat{x}, \hat{y}, \hat{z}\}##. If we were going to be pedantic, would we say: $$[\hat{x}]_{\beta} = (1,0,0)$$ I ask because even the most thorough references, which make use of the notation ##[\vec{v}]_{\beta}##, still write ##\hat{x} = (1,0,0)##.

To confirm, this is just because it's considered obvious that ##\hat{x} \, \, (= 1\hat{x} + 0\hat{y} + 0\hat{z})## is expressed in the standard basis, right? There's no mathematical trickery going on that makes this any sort of exception?
 
  • #24
etotheipi said:
Final conceptual (though maybe actually notational :wink:) question: take the standard Cartesian basis ##\beta = \{\hat{x}, \hat{y}, \hat{z}\}##. If we were going to be pedantic, would we say: $$[\hat{x}]_{\beta} = (1,0,0)$$ I ask because even the most thorough references, which make use of the notation ##[\vec{v}]_{\beta}##, still write ##\hat{x} = (1,0,0)##.

To confirm, this is just because it's considered obvious that ##\hat{x} \, \, (= 1\hat{x} + 0\hat{y} + 0\hat{z})## is expressed in the standard basis, right? There's no mathematical trickery going on that makes this any sort of exception?
This is too specific. Until you define a norm, there is no concept of a "unit" vector. If the vector space is explicitly ##\mathbb{R}^n## then you can imagine things like ##\hat x##. But, if your vector space is the quadratic polynomials (and below), then what is ##\hat x##? What is the length of the vector ##x^2 + 3x + 1##? What is your "default Cartesian basis"?
 
  • #25
PeroK said:
This is too specific. Until you define a norm, there is no concept of a "unit" vector. If the vector space is explicitly ##\mathbb{R}^n## then you can imagine things like ##\hat x##. But, if your vector space is the quadratic polynomials (and below), then what is ##\hat x##? What is the length of the vector ##x^2 + 3x + 1##? What is your "default Cartesian basis"?

I see your point. In pure maths, it seems to make perfect sense to have a basis consisting of tuples, i.e. elements of ##\mathbb{R}^n##. So your polynomial would have a coordinate vector of ##(1,3,1)## in the standard basis ##\beta = \{(1,0,0), (0,1,0), (0,0,1)\}## (Correction by @PeroK: it should of course be ##\beta = \{x^2, x, 1\}##). The basis vectors basically define themselves :wink:; I don't actually see any issue with the equality ##\vec{e}_1 = (1,0,0)## in pure maths.

But I have some trouble translating this to Physics. We have an affine base space, in which we choose an origin in addition to three (let's say orthogonal, not necessarily normalised since we have no metric yet) geometrical vectors (arrows). Now we can really just express any other arrow as a sum of multiples of the basis arrows.

The best we can do is something like ##\vec{e}_1 = 1\vec{e}_1 + 0\vec{e}_2 + 0\vec{e}_3##. To map that to the reals, I would have thought we need to write ##[\vec{e}_1]_{\beta} = (1,0,0)##. Maybe I don't know any better, but I'm slightly wary in this case about equating ##\vec{e}_1## and ##(1,0,0)##.

Perhaps this doesn't make much sense... I have been learning LA from Axler but it takes a very pure approach!
 
Last edited by a moderator:
  • #26
etotheipi said:
I see. In the pure maths sense, it makes perfect sense to have a basis consisting of tuples of elements of ##\mathbb{R}^n##. So your polynomial would have a coordinate vector of ##(1,3,1)## in the standard basis ##\{(1,0,0), (0,1,0), (0,0,1)\}##. The basis vectors basically define themselves :wink:.

The basis ##1, x, x^2## is not a Cartesian basis in any sense I can see. In fact, when you come to define an inner product on this space you find that (with the usual inner product), these vectors are not orthogonal. That's why if you want to get a better grasp of abstract vector spaces, you must consider function spaces and break the connection with ##\mathbb{R}^3##.

Note also that ##ax^2 + bx + c## would equally well be, by default, ##(a, b, c)## or ##(c, b, a)##.

Also, vector spaces of linear operators make good examples, where you must deal with the abstract concepts directly and not fall back on concepts like "Cartesian" that are in general not well defined.
 
  • Like
Likes etotheipi
  • #27
PeroK said:
The basis ##1, x, x^2## is not a Cartesian basis in any sense I can see.

o_O you're of course right, I wasn't thinking. I think that's a sign I need to go to sleep, I'm basically "zombie-typing" now...

I must have been reasoning along the lines of ##x^2 := (1,0,0)##, ##x := (0,1,0)##, ##1 := (0,0,1)##.
 
Last edited by a moderator:
  • #28
Interesting thread. I used to get confused by the following:
If each vector needs a basis to be represented by coordinates, then according to what basis are the basis vectors represented? If I write ##e_1=(1,0,0)## then this vector is represented by coordinates that are defined in terms of itself. It is like writing ##e_1=1\cdot e_1 + 0\cdot e_2 + 0\cdot e_3##. How can I get any information from something like this?

Therefore, I feel that there has to be the notion of "raw" tuples, some ground truth.
Each point in ##\mathbb{R}^n## is given by such a raw tuple. This raw tuple coincides by definition with the coordinates w.r.t. the standard basis ##(\vec{e_1}, \vec{e_2}, ...)##, where the basis vectors are the raw tuples ##(1,0,...), (0,1,0,...),...##.
Now I can also define new basis vectors, each of which can be written as a raw tuple. Then I can calculate the coordinates of any vector with respect to the new basis, so that I can either write the vector as its raw tuple (which is the same as writing a tuple of coordinates w.r.t. the standard basis) or I can write down a tuple of coordinates w.r.t. the new basis - of course, a subscript or something like this might be necessary to distinguish the meaning of the tuples:
Is it a raw tuple (coincides with standard basis coordinates), or is it the coordinates w.r.t. a basis which is not the standard basis?
 
  • Like
Likes etotheipi
  • #29
SchroedingersLion said:
Interesting thread. I used to get confused by the following:
If each vector needs a basis to be represented by coordinates, then according to what basis are the basis vectors represented? If I write ##e_1=(1,0,0)## then this vector is represented by coordinates that are defined in terms of itself. It is like writing ##e_1=1\cdot e_1 + 0\cdot e_2 + 0\cdot e_3##. How can I get any information from something like this?

Therefore, I feel that there has to be the notion of "raw" tuples, some ground truth.
Each point in ##\mathbb{R}^n## is given by such a raw tuple. This raw tuple coincides by definition with the coordinates w.r.t. the standard basis ##(\vec{e_1}, \vec{e_2}, ...)##, where the basis vectors are the raw tuples ##(1,0,...), (0,1,0,...),...##.
Now I can also define new basis vectors, each of which can be written as a raw tuple. Then I can calculate the coordinates of any vector with respect to the new basis, so that I can either write the vector as its raw tuple (which is the same as writing a tuple of coordinates w.r.t. the standard basis) or I can write down a tuple of coordinates w.r.t. the new basis - of course, a subscript or something like this might be necessary to distinguish the meaning of the tuples:
Is it a raw tuple (coincides with standard basis coordinates), or is it the coordinates w.r.t. a basis which is not the standard basis?

I think a lot of this is context dependent. For vectors in e.g. ##\mathbb{R}^2##, it makes sense to define a basis as just plain tuples of real numbers e.g. ##\beta = \{\begin{pmatrix}3\\2\end{pmatrix}, \begin{pmatrix}1\\3\end{pmatrix}\}##. The mapping between a vector and its coordinate vector is trivial.

For function spaces, we might have a basis such as ##\{e^x, e^{-x}\}##. Then we can map functions to a coordinate vector, a tuple in ##\mathbb{R}^2##, so e.g. ##[5e^x + 3e^{-x}]_{\beta} = \begin{pmatrix}5\\3\end{pmatrix}##. For geometrical (3-)vectors in a Euclidean space we do a similar thing: the components are mapped to a tuple in ##\mathbb{R}^3##.

There is room for confusion about tuples vs vectors as has been demonstrated in this thread (by me!), but I can see why people often opt to omit a subscript for reasons of efficiency.

It is a bit like how number bases operate. When we describe the number "15" in base ten, that literally just means ##1 \times 10 + 5 \times 1##. We can't break down the basis ##\{10, 1\}## any further, it's literally a basis.
 
  • #30
SchroedingersLion said:
If each vector needs a basis to be represented by coordinates, then according to what basis are the basis vectors represented? If I write ##e_1=(1,0,0)## then this vector is represented by coordinates that are defined in terms of itself. It is like writing ##e_1=1\cdot e_1 + 0\cdot e_2 + 0\cdot e_3##. How can I get any information from something like this?
You get that ##e_1## equals itself. ##e_1## is the vector that extends 1 unit along the x-axis, from the origin to the point whose coordinates are (1, 0, 0).
SchroedingersLion said:
Therefore, I feel that there has to be the notion of "raw" tuples, some ground truth.
No, I disagree. If you have a vector in coordinate form, there is always an implied basis. The coordinate form is a shorthand way of writing ##\vec v = c_1\vec {b_1} + \dots + c_n \vec {b_n}##, where ##\vec {b_i}## are the basis vectors.
etotheipi said:
I think a lot of this is context dependent. For vectors in e.g. ##\mathbb{R}^2##, it makes sense to define a basis as just plain tuples of real numbers e.g. ##\beta = \{\begin{pmatrix}3\\2\end{pmatrix}, \begin{pmatrix}1\\3\end{pmatrix}\}##.
No, they are not just plain or "raw" tuples. They are the coordinates of two vectors, relative to some basis. And it makes no difference whether you're in ##\mathbb R^2## or any other Euclidean space.
etotheipi said:
The mapping between a vector and its coordinate vector is trivial.
No, I disagree.
If ##\vec v = \begin{pmatrix}3\\2\end{pmatrix}## in terms of the standard basis for ##\mathbb R^2##, what are the coordinates of ##\vec v## in terms of the basis ##\{\begin{pmatrix}1\\2\end{pmatrix}, \begin{pmatrix}2\\-1\end{pmatrix}\}##?
 
  • #31
I would put it like this.

For an abstract vector space ##V##, it is wrong to write expressions like ##\vec v = (1, 2, 3)^T##. The object ##(1, 2, 3)^T## is not itself a vector in ##V## but a representation of such a vector in a certain basis.

But it is straightforward to take tuples like ##(1, 2, 3)^T## and construct a different vector space out of them. This space is called a coordinate space and denoted by ##K^n##. Its elements are tuples, so if we are talking about vectors from this space it is fine to write ##\vec v = (1, 2, 3)^T##. Such vectors can again be represented in any basis. This leads to the notation becoming ambiguous: with ##(1, 2, 3)^T## I can either denote a vector or the representation of a vector in a certain basis. (Regarding bases, a coordinate vector space has a special kind of basis: the standard basis ##\{\vec e_1, ..., \vec e_n\}##. It is distinguished by the property that the coefficients of the representations of tuple vectors in this basis are identical to the tuple's components.)

I sometimes use the notation ##\vec v \doteq (1, 2, 3)^T## to denote a certain representation of ##\vec v##, something which I adopted from Sakurai's Modern Quantum Mechanics.
 
  • Like
Likes SchroedingersLion and etotheipi
  • #32
SchroedingersLion said:
Interesting thread. I used to get confused by the following:
If each vector needs a basis to be represented by coordinates, then according to what basis are the basis vectors represented? If I write ##e_1=(1,0,0)## then this vector is represented by coordinates that are defined in terms of itself. It is like writing ##e_1=1\cdot e_1 + 0\cdot e_2 + 0\cdot e_3##. How can I get any information from something like this?

This is where it is useful to think of function spaces: quadratic and lower polynomials for example. Here we can unambiguously define our basis vectors/functions as ##e_1 = 1, \ e_2 = x, \ e_3 = x^2##. Then the function ##x## is represented in this basis as ##(0, 1, 0)##.

Here we are saying that the vector ##x## is our second basis vector, so we can write ##e_2 = x##, and we will also write this as a tuple ##(0, 1, 0)##. And we now have three ways to talk about any vector:
$$ax^2 + bx + c = ae_3 + be_2 + ce_1 = (c, b, a)$$
And you could replace the equals sign here with ##\dot =## or ##\equiv## or ##\leftrightarrow## if you prefer to indicate that it's more a notational correspondence.
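A short sketch along these lines, representing each polynomial by its coefficient tuple; the second basis ##\{1,\ 1+x,\ 1+x+x^2\}## is my own arbitrary choice for illustration:

```python
import numpy as np

# Quadratic-and-below polynomials with basis e_1 = 1, e_2 = x, e_3 = x^2,
# so a*x^2 + b*x + c is stored as the tuple (c, b, a).
p = np.array([1.0, 3.0, 1.0])            # x^2 + 3x + 1

# Re-express the same polynomial in the (arbitrary) basis {1, 1 + x, 1 + x + x^2},
# whose members, written as tuples, are the columns of M.
M = np.column_stack([[1, 0, 0], [1, 1, 0], [1, 1, 1]]).astype(float)
coords = np.linalg.solve(M, p)
print(coords)                            # [-2.  2.  1.]
# Check: -2*(1) + 2*(1 + x) + 1*(1 + x + x^2) = 1 + 3x + x^2
```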
 
  • Like
Likes etotheipi
  • #33
Mark44 said:
No, they are not just plain or "raw" tuples. They are the coordinates of two vectors, relative to some basis. And it makes no difference whether you're in ##\mathbb R^2## or any other Euclidean space.

Hmm, but elements of the vector space ##\mathbb{R}^2## are plain tuples, and any basis you choose in ##\mathbb{R}^2## itself consists of plain tuples (even the standard basis!); tuples being objects with operations defined on them that obey the axioms of a vector space. For any other vectors, e.g. polynomials or vectors in Euclidean space, I would agree.

The fact that the vector is identically the coordinate matrix in ##\mathbb{R}^n## is what is causing the notational issue; e.g. in ##\mathbb{R}^3## we have ##\vec{v} = (a,b,c)^T## but also ##\vec{v}\ \dot{=} \ (a,b,c)^T## (in the standard basis). And ##[\vec{v}]_{\beta} = [(a,b,c)^T]_{\beta} = (d,e,f)^T## in some other basis.

So the distinction is slightly blurred here and I think this is why @PeroK has suggested looking at function spaces to gain intuition.
 
  • #34
etotheipi said:
Hmm, but elements of the vector space ##\mathbb{R}^2## are plain tuples, and any basis you choose in ##\mathbb{R}^2## itself consists of plain tuples (even the standard basis!); tuples being objects with operations defined on them that obey the axioms of a vector space. For any other vectors, e.g. polynomials or vectors in Euclidean space, I would agree.

I agree with this. Let's look at ##\mathbb{R}## first. The numbers ##1, 2, \pi## etc. are well-defined. There's no sense in which the number ##1## is really just the same as ##-1## or ##\pi##.

But, if we consider ##\mathbb{R}## as a vector space over itself, then we can do these things. If we take our basis vector to be ##e_1 = -\pi##, then in this basis the number/vector ##\pi## is represented by the tuple ##(-1)##.

Now, if we consider ##\mathbb{R}^2##, then this has the same fundamentally well-defined nature as a set of uniquely defined tuples. Let's use square brackets for this. ##[\pi, 1]##, for example, uniquely defines a member of ##\mathbb{R}^2##. We can also unambiguously define ##\hat x = [1,0]## and ##\hat y = [0, 1]##. There's no sense in which such square-bracketed tuples can ever be interchanged. And ##\hat x, \hat y## are unambiguously defined.

Again, however, if we consider ##\mathbb{R}^2## as a vector space (over ##\mathbb{R}##), then we have a standard basis:
$$e_1 = (1, 0) \leftrightarrow [1, 0] = \hat x, \ \ \text{and} \ \ e_2 = (0, 1) \leftrightarrow [0, 1] = \hat y$$
But, we are also free to choose a new basis, where ##[1, 0]## and ##[0, 1]## are represented by other tuples.

In summary, each element of ##\mathbb{R}^2## must have an underlying definition as a specific, unique tuple.
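A one-line numerical restatement of the ##\mathbb{R}##-over-itself example (just a sketch):

```python
import numpy as np

# R as a vector space over itself: take the single basis vector e_1 = -pi.
# The number pi is then represented by the 1-tuple (-1).
e1 = np.array([-np.pi])
coords = np.linalg.solve(e1.reshape(1, 1), np.array([np.pi]))
print(coords)                                # [-1.]
print(np.isclose(coords[0] * e1[0], np.pi))  # True: (-1) * (-pi) == pi
```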
 
Last edited:
  • Like
Likes SchroedingersLion, cianfa72 and (deleted member)
  • #35
PeroK said:
But, if we consider ##\mathbb{R}## as a vector space over itself, then we can do these things. If we take our basis vector to be ##e_1 = -\pi##, then in this basis the number/vector ##\pi## is represented by the tuple ##(-1)##.
But here (-1) is shorthand for ##-1 \cdot (-\pi)##, so every representation of an element in the vector space ##\mathbb R## by its coordinate is implicitly in terms of the basis, ##-\pi##. That's been my point all along in this thread.
 

1. What is a column vector?

A column vector is a mathematical object that contains a list of numbers arranged in a single column. It is used to represent a quantity or set of quantities in a specific direction or dimension.

2. What is a basis for a column vector?

A basis for a column vector is a set of linearly independent vectors that can be used to represent any other vector in the same vector space. It is used to define the dimensions and directions in which the vector can be expressed.

3. Do column vectors always need a basis?

No, not all column vectors require a basis. If a vector is already expressed in terms of a basis, then it does not need another basis. However, if a vector is not expressed in terms of a basis, then a basis is needed to define its dimensions and directions.

4. What happens if a column vector does not have a basis?

If a column vector does not have a basis, then it cannot be fully defined or expressed. This means that its dimensions and directions are not clearly defined, making it difficult to perform operations or calculations involving the vector.

5. Can a column vector have more than one basis?

Yes, a column vector can have more than one basis. This is because there can be multiple sets of linearly independent vectors that can be used to represent the same vector in the same vector space. However, each basis will have a different set of dimensions and directions for the vector.
