Do column 'vectors' need a basis?

Summary:
The discussion revolves around the transformation of vector components between different orthonormal bases and whether column representations of vectors require a basis. It is clarified that while the transformation matrix is not the identity, the representations of the vector in different bases still correspond to the same underlying vector. Participants emphasize that the entries in the column vectors are indeed the coordinates of the vector relative to the chosen basis. The conversation also touches on the idea that these column structures can represent either raw tuples or expressions involving basis vectors, depending on the context. Ultimately, the consensus is that the column vectors are valid representations of the same vector, just expressed in different bases.
  • #31
I would put it like this.

For an abstract vector space ##V##, it is wrong to write expressions like ##\vec v = (1, 2, 3)^T##. The object ##(1, 2, 3)^T## is not itself a vector in ##V## but a representation of such a vector in a certain basis.

But it is straightforward to take tuples like ##(1, 2, 3)^T## and construct a different vector space out of them. This space is called a coordinate space and denoted by ##K^n##. Its elements are tuples, so if we are talking about vectors from this space it is fine to write ##\vec v = (1, 2, 3)^T##. Such vectors can again be represented in any basis. This makes the notation ambiguous: with ##(1, 2, 3)^T## I can denote either a vector or the representation of a vector in a certain basis. (Regarding bases, a coordinate vector space has a special kind of basis: the standard basis ##\{\vec e_1, \dots, \vec e_n\}##. It is distinguished by the property that the coefficients of the representation of a tuple vector in this basis are identical to the tuple's components.)

I sometimes use the notation ##\vec v \doteq (1, 2, 3)^T## to denote a certain representation of ##\vec v##, something I adopted from Sakurai's Modern Quantum Mechanics.
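To make the ambiguity concrete, here is a minimal numpy sketch (my own, not from any textbook); the vector and the alternative basis are arbitrary choices of mine:

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])          # a tuple vector of the coordinate space R^3

E = np.eye(3)                          # columns: the standard basis e_1, e_2, e_3
B = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])        # columns: some other basis b_1, b_2, b_3

# The representation of v in a basis is the coefficient column c
# satisfying (basis matrix) @ c = v.
rep_standard = np.linalg.solve(E, v)   # [1. 2. 3.] -- identical to the tuple itself
rep_other = np.linalg.solve(B, v)      # [1. 1. 2.] -- same vector, different numbers
```

The first solve illustrates the special property of the standard basis: the representation coincides with the tuple's components.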
 
  • #32
SchroedingersLion said:
Interesting thread. I used to get confused by the following:
If each vector needs a basis to be represented by coordinates, then relative to what basis are the basis vectors themselves represented? If I write ##e_1=(1,0,0)##, then this vector is represented by coordinates that are defined in terms of itself. It is like writing ##e_1=1\cdot e_1 + 0\cdot e_2 + 0\cdot e_3##. How can I get any information from something like this?

This is where it is useful to think of function spaces: for example, the polynomials of degree at most two. Here we can unambiguously define our basis vectors/functions as ##e_1 = 1, \ e_2 = x, \ e_3 = x^2##. Then the function ##x## is represented in this basis as ##(0, 1, 0)##.

Here we are saying that the vector ##x## is our second basis vector, so we can write ##e_2 = x##, and we will also write this as a tuple ##(0, 1, 0)##. And we now have three ways to talk about any vector:
$$ax^2 + bx + c = ae_3 + be_2 + ce_1 = (c, b, a)$$
And you could replace the equals sign here with ##\dot =## or ##\equiv## or ##\leftrightarrow## if you prefer to indicate that it's more a notational correspondence.
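As a quick sanity check of this correspondence, here is a small numpy sketch (the particular coefficients are my own example):

```python
import numpy as np

a, b, c = 2.0, -3.0, 5.0               # p(x) = 2x^2 - 3x + 5
coords = np.array([c, b, a])           # coordinates w.r.t. (e_1, e_2, e_3) = (1, x, x^2)

def p_from_coords(coords, x):
    # Rebuild p(x) = coords[0]*1 + coords[1]*x + coords[2]*x^2.
    return coords[0] + coords[1] * x + coords[2] * x**2

x = 1.5
assert np.isclose(p_from_coords(coords, x), a * x**2 + b * x + c)
```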
 
  • #33
Mark44 said:
No, they are not just plain or "raw" tuples. They are the coordinates of two vectors, relative to some basis. And it makes no difference whether you're in ##\mathbb R^2## or any other Euclidean space.

Hmm, but elements of the vector space ##\mathbb{R}^2## are plain tuples, and any basis you choose in ##\mathbb{R}^2## itself consists of plain tuples (even the standard basis!); tuples being objects with operations defined on them that obey the axioms of a vector space. For any other vectors, e.g. polynomials or vectors in Euclidean space, I would agree.

The fact that the vector is identically the coordinate matrix in ##\mathbb{R}^n## is what is causing the notational issue; e.g. in ##\mathbb{R}^3## we have ##\vec{v} = (a,b,c)^T## but also ##\vec{v}\ \dot{=} \ (a,b,c)^T## (in the standard basis). And ##[\vec{v}]_{\beta} = [(a,b,c)^T]_{\beta} = (d,e,f)^T## in some other basis.

So the distinction is slightly blurred here and I think this is why @PeroK has suggested looking at function spaces to gain intuition.
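To see the blur in action, here is a hedged numpy sketch (the bases and numbers are my own choices) that converts the representation of one vector between two bases; the change-of-basis matrix ##B_2^{-1} B_1## is not the identity, yet both columns describe the same underlying tuple:

```python
import numpy as np

B1 = np.array([[1.0, 0.0, 0.0],
               [1.0, 1.0, 0.0],
               [0.0, 1.0, 1.0]])       # columns: first basis
B2 = 2.0 * np.eye(3)                   # columns: second basis (scaled standard)

rep1 = np.array([1.0, 1.0, 2.0])       # [v]_{B1}
v = B1 @ rep1                          # the underlying tuple vector: (1, 2, 3)

rep2 = np.linalg.solve(B2, v)          # [v]_{B2} = B2^{-1} B1 [v]_{B1} = (0.5, 1, 1.5)
assert np.allclose(B1 @ rep1, B2 @ rep2)   # same vector, two different columns
```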
 
  • #34
etotheipi said:
Hmm, but elements of the vector space ##\mathbb{R}^2## are plain tuples, and any basis you choose in ##\mathbb{R}^2## itself consists of plain tuples (even the standard basis!); tuples being objects with operations defined on them that obey the axioms of a vector space. For any other vectors, e.g. polynomials or vectors in Euclidean space, I would agree.

I agree with this. Let's look at ##\mathbb{R}## first. The numbers ##1, 2, \pi## etc. are well-defined. There's no sense in which the number ##1## is really just the same as ##-1## or ##\pi##.

But, if we consider ##\mathbb{R}## as a vector space over itself, then we can do these things. If we take our basis vector to be ##e_1 = -\pi##, then in this basis the number/vector ##\pi## is represented by the tuple ##(-1)##.

Now, if we consider ##\mathbb{R}^2##, then this has the same fundamentally well-defined nature as a set of uniquely defined tuples. Let's use square brackets for this. ##[\pi, 1]##, for example, uniquely defines a member of ##\mathbb{R}^2##. We can also unambiguously define ##\hat x = [1,0]## and ##\hat y = [0, 1]##. There's no sense in which such square-bracketed tuples can ever be interchanged. And ##\hat x, \hat y## are unambiguously defined.

Again, however, if we consider ##\mathbb{R}^2## as a vector space (over ##\mathbb{R}##), then we have a standard basis:
$$e_1 = (1, 0) \leftrightarrow [1, 0] = \hat x, \ \ \text{and} \ \ e_2 = (0, 1) \leftrightarrow [0, 1] = \hat y$$
But, we are also free to choose a new basis, where ##[1, 0]## and ##[0, 1]## are represented by other tuples.

In summary, each element of ##\mathbb{R}^2## must have an underlying definition as a specific, unique tuple.
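Both points are easy to check numerically; a minimal sketch, assuming numpy (the non-standard basis of ##\mathbb{R}^2## below is an arbitrary pick of mine):

```python
import numpy as np

# R as a 1-D vector space over itself, with basis vector e_1 = -pi:
e1 = -np.pi
v = np.pi
coord = v / e1                         # solve coord * e1 = v; gives -1.0

# R^2: the underlying tuples [1,0] and [0,1] stay fixed, but their
# representations change once we pick a non-standard basis.
B = np.array([[1.0, 1.0],
              [1.0, -1.0]])            # columns: basis vectors (1,1) and (1,-1)
print(np.linalg.solve(B, np.array([1.0, 0.0])))   # [0.5 0.5]
print(np.linalg.solve(B, np.array([0.0, 1.0])))   # [ 0.5 -0.5]
```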
 
  • #35
PeroK said:
But, if we consider ##\mathbb{R}## as a vector space over itself, then we can do these things. If we take our basis vector to be ##e_1 = -\pi##, then in this basis the number/vector ##\pi## is represented by the tuple ##(-1)##.
But here ##(-1)## is shorthand for ##-1 \cdot (-\pi)##, so every representation of an element of the vector space ##\mathbb R## by its coordinate is implicitly in terms of the basis ##\{-\pi\}##. That's been my point all along in this thread.
 
  • #36
Mark44 said:
But here ##(-1)## is shorthand for ##-1 \cdot (-\pi)##, so every representation of an element of the vector space ##\mathbb R## by its coordinate is implicitly in terms of the basis ##\{-\pi\}##. That's been my point all along in this thread.

I think the point is that ##[\pi]_{\beta} = (-1)##; that is the mapping to the coordinate vector w.r.t. ##\beta##. But the actual element of ##V## is ##\pi##, a 1-tuple. The tuple itself, ##(\pi)##, is not written in any basis, because it is the vector.
 
  • #37
Mark44 said:
But here (-1) is shorthand for ##-1 \cdot (-\pi)##, so every representation of an element in the vector space ##\mathbb R## by its coordinate is implicitly in terms of the basis, ##-\pi##. That's been my point all along in this thread.

My point is that you must be able to identify the vectors in some basis-independent way. Otherwise, you can't define a basis in the first place, as you have no way to identify the specific basis vectors you are talking about.

Which I think was a point made by @SchroedingersLion
 
  • #38
PeroK said:
My point is that you must be able to identify the vectors in some basis-independent way. Otherwise, you can't define a basis in the first place, as you have no way to identify the specific basis vectors you are talking about.
OK, that makes sense.
For me, the key to all of this is the realization that a vector space is "over" some field: the set from which the scalars, and hence the coordinates, come. For a Euclidean space ##\mathbb R^n##, the field is the real numbers ##\mathbb R##.
 
  • #39
To clarify some points: you can legally download Sheldon Axler's Linear Algebra Done Right, 3rd edition. Refer to the section on Linear Transformations; I think it's 3.A, from memory. Then go on to 3.C (Matrix of a Linear Transformation with Respect to Bases).

This will clear up your misconceptions.
 
  • #40
MidgetDwarf said:
To clarify some points: you can legally download Sheldon Axler's Linear Algebra Done Right, 3rd edition. Refer to the section on Linear Transformations; I think it's 3.A, from memory. Then go on to 3.C (Matrix of a Linear Transformation with Respect to Bases).

This will clear up your misconceptions.

That's the one I'm currently working on; I managed to get it for free because of the coronavirus pandemic. It's very good, but I haven't had the chance to read it through yet.

Thanks for the chapter, I'll have a look!
 
  • #41
etotheipi said:
That's the one I'm currently working on; I managed to get it for free because of the coronavirus pandemic. It's very good, but I haven't had the chance to read it through yet.

Thanks for the chapter, I'll have a look!
The book is really good. There was one proof that threw me off; it was more notational than anything. Once I took @Mathwonk's advice to ditch the sum notation, it was clear. In hindsight, it was a silly question. But yes, I think going through this will clear up a lot of the confusion you have regarding the matrix of a linear transformation with respect to bases.

I had taken a linear algebra class out of Friedberg four years ago, but this was something I did not really understand. Axler made it clear for me. I am on chapter 5 now.
 
  • #42
etotheipi said:
I must have been reasoning along the lines of ##x^2 := (1,0,0)##, ##x := (0,1,0)##, ##1 := (0,0,1)##.

Which is not entirely false! The map ##(0,0,1) \mapsto 1, (0,1,0) \mapsto x, (1,0,0) \mapsto x^2## is a vector space isomorphism between ##\mathbb{R}^3## and the polynomials of degree at most ##2##.
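A small sketch of that isomorphism, assuming numpy (the function name `iso` and the test values are mine):

```python
import numpy as np

def iso(t):
    # Map the tuple (a, b, c) to the polynomial a*x^2 + b*x + c,
    # returned as a function of x.
    a, b, c = t
    return lambda x: a * x**2 + b * x + c

u = np.array([1.0, 0.0, 2.0])          # maps to x^2 + 2
w = np.array([0.0, 3.0, -1.0])         # maps to 3x - 1

# Linearity spot-check: iso(u + w) agrees pointwise with iso(u) + iso(w).
x = 0.7
assert np.isclose(iso(u + w)(x), iso(u)(x) + iso(w)(x))
```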
 
  • #43
When you say you have a vector ##(a,b,c)## in ##\mathbb{R}^3##, that's usually meant with respect to the canonical basis. These numbers are the coefficients of the linear combination of basis vectors corresponding to the given vector. When you change the basis but leave the coordinates the same, you get a different vector.
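A quick numpy sketch of that last point (the second basis is an arbitrary choice of mine): the coefficient column stays the same, but against a different basis it names a different vector.

```python
import numpy as np

coords = np.array([1.0, 2.0, 3.0])     # the same coefficients throughout

E = np.eye(3)                          # canonical basis as columns
B = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])        # columns: a different basis

print(E @ coords)                      # [1. 2. 3.]
print(B @ coords)                      # [1. 3. 3.] -- a different vector
```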
 
