Linear algebra, basis, linear transformation and matrix representation

In summary, the matrix representations of the given linear transformation and vector in the new basis are found using the change-of-basis matrix, whose columns are the vectors of the new basis. Transforming with this matrix yields the new coordinates of the vector, and the transformation becomes the diagonal matrix A' whose entries are the eigenvalues of the given matrix A. The problem demonstrates how eigenvectors can be used to diagonalize a matrix.
  • #1
fluidistic

Homework Statement


In a given basis [itex]\{ e_i \}[/itex] of a vector space, a linear transformation and a given vector of this vector space are respectively determined by [itex]\begin{pmatrix} 2 & 1 & 0 \\ 1 & 2 & 0\\ 0&0&5\\ \end{pmatrix}[/itex] and [itex]\begin{pmatrix} 1 \\ 2 \\3 \end{pmatrix}[/itex].
Find the matrix representations of the transformation and the vector in a new basis such that the old one is represented by [itex]e_1 =\begin{pmatrix} 1 \\ 1 \\0 \end{pmatrix}[/itex], [itex]e_2=\begin{pmatrix} 1 \\ -1 \\0 \end{pmatrix}[/itex] and [itex]e_3=\begin{pmatrix} 0 \\ 0 \\1 \end{pmatrix}[/itex].

Homework Equations


Not even sure.

The Attempt at a Solution


I suspect it has to do with eigenvalues, since this problem appears in the assignment right after an eigenvalue problem; I would not have thought of it otherwise.
It reminds me that two similar matrices, say A and B, satisfy A=P^(-1)BP, but nothing more than this.
I've found that the given matrix has 3 distinct eigenvalues, hence 3 (nonzero) eigenvectors that are linearly independent (since the 3 eigenvalues are all different), and thus the matrix is similar to a diagonal one with the eigenvalues 1, 3 and 5 on the main diagonal.
So now I know that there exists an invertible matrix P such that A=P^(-1)BP. But I'm really stuck at seeing how this can help me.
Any tips are welcome.
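The eigenvalue claim in the attempt is easy to verify numerically. A minimal sketch with NumPy, using the matrix from the problem statement (variable names are my own):

```python
import numpy as np

# The matrix of the transformation in the original basis {e_i}.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])

# Three distinct eigenvalues imply three linearly independent
# eigenvectors, hence A is diagonalizable.
w = np.linalg.eigvals(A)
print(np.sort(w.real))  # expect 1, 3, 5
```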
 
  • #2
First you'll need to find the change-of-basis matrix. That is, a matrix A such that if x are coordinates in the old basis, then Ax are coordinates in the new basis.

You know already what [itex]Ae_1,~Ae_2,~Ae_3[/itex] are. So can you use this to find the matrix A?
 
  • #3
micromass said:
First you'll need to find the change-of-basis matrix. That is, a matrix A such that if x are coordinates in the old basis, then Ax are coordinates in the new basis.

You know already what [itex]Ae_1,~Ae_2,~Ae_3[/itex] are. So can you use this to find the matrix A?

Thanks for your help. I'm confused; I didn't know I knew what [itex]Ae_i[/itex] are.
Are these the column vectors of the first matrix of my first post?
 
  • #4
fluidistic said:
Thanks for your help. I'm confused; I didn't know I knew what [itex]Ae_i[/itex] are.
Are these the column vectors of the first matrix of my first post?

Oh, no, sorry. I was thinking of the [itex]e_i[/itex] as the original basis.

So, if I rephrase it: do you know what the images of (1,0,0), (0,1,0) and (0,0,1) under A are?
 
  • #5
micromass said:
Oh, no, sorry. I was thinking of the [itex]e_i[/itex] as the original basis.

So, if I rephrase it: do you know what the images of (1,0,0), (0,1,0) and (0,0,1) under A are?
Ah ok. :)
Hmm, no, not that either. [itex]\{ e_i \}[/itex] isn't necessarily the canonical basis, as far as I know...
 
  • #6
The coordinates of the vectors of any basis, expressed with respect to that same basis, are always (1,0,0), (0,1,0), (0,0,1), regardless of whether it is the canonical basis.

By definition, the coordinates of a vector v with respect to [itex]\{e_1,e_2,e_3\}[/itex] are [itex](\alpha_1,\alpha_2,\alpha_3)[/itex] such that

[itex]v=\alpha_1e_1+\alpha_2e_2+\alpha_3e_3[/itex].

From this, we see that the coordinates of [itex]e_1[/itex] are always (1,0,0).
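This definition can be checked with a small computation: the coordinates of a vector v with respect to {e1, e2, e3} solve the linear system whose coefficient matrix has the basis vectors as its columns. A sketch with NumPy, using the basis from the problem (variable names are my own); feeding e1 itself back in recovers (1,0,0):

```python
import numpy as np

# Columns of E are the basis vectors e1, e2, e3 from the problem.
E = np.array([[1.0,  1.0, 0.0],
              [1.0, -1.0, 0.0],
              [0.0,  0.0, 1.0]])

# Coordinates (a1, a2, a3) of v w.r.t. the basis satisfy E @ a = v.
def coords(v):
    return np.linalg.solve(E, v)

print(coords(E[:, 0]))  # e1 itself has coordinates (1, 0, 0)
```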
 
  • #7
Hi fluidistic! :smile:

First let's define our symbols.
Let's call your linear transformation matrix A, and your vector v.

What you need is a matrix B that defines a basis transformation.
B is defined by the fact that, for instance, the unit vector (1,0,0) is transformed to your e1. That is: B(1,0,0) = e1.

Can you deduce what the matrix B is?

If you apply this B to your vector v, you'll find the representation of v in the alternative basis.
 
  • #8
I like Serena said:
Hi fluidistic! :smile:

First let's define our symbols.
Let's call your linear transformation matrix A, and your vector v.

What you need is a matrix B that defines a basis transformation.
B is defined by the fact that, for instance, the unit vector (1,0,0) is transformed to your e1. That is: B(1,0,0) = e1.

Can you deduce what the matrix B is?

If you apply this B to your vector v, you'll find the representation of v in the alternative basis.
Hi!
Is [itex]B= \begin{pmatrix} 1 & 1 & 0 \\ 1& -1 & 0\\ 0& 0 & 1 \end{pmatrix}[/itex]? It seems to work for me. Its column vectors are the ones given in the problem statement.
 
  • #9
fluidistic said:
Hi!
Is [itex]B= \begin{pmatrix} 1 & 1 & 0 \\ 1& -1 & 0\\ 0& 0 & 1 \end{pmatrix}[/itex]? It seems to work for me. Its column vectors are the ones given in the problem statement.

Yep, that's it! :smile:
It's simply the given basis vectors as the columns of the matrix B.

Calculate Bv and you have your representation of v in the basis defined by B.

For your matrix A, you need [itex]B^{-1}AB[/itex].
That is, reading from the right: B first carries coordinates over to the basis in which A is defined, then A is applied, and then [itex]B^{-1}[/itex] carries the result back.
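The [itex]B^{-1}AB[/itex] recipe can be verified numerically; as the posts below observe, the result here comes out diagonal because B's columns happen to be eigenvectors of A. A sketch using the matrices from the thread:

```python
import numpy as np

# The transformation matrix in the original basis.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])

# B has the given basis vectors as its columns.
B = np.array([[1.0,  1.0, 0.0],
              [1.0, -1.0, 0.0],
              [0.0,  0.0, 1.0]])

A_prime = np.linalg.inv(B) @ A @ B
print(A_prime)  # diagonal, with 3, 1, 5 on the main diagonal
```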
 
  • #10
I like Serena said:
Yep, that's it! :smile:
It's simply the given basis vectors as the columns of the matrix B.

Calculate Bv and you have your representation of v in the basis defined by B.

For your matrix A, you need [itex]B^{-1}AB[/itex].
That is, reading from the right: B first carries coordinates over to the basis in which A is defined, then A is applied, and then [itex]B^{-1}[/itex] carries the result back.
I reached [itex]v'= \begin{pmatrix} 3 \\ -1 \\ 3 \end{pmatrix}[/itex]. And A' is a diagonal matrix with the eigenvalues of A as entries... (3, 1 and 5).
 
  • #11
That's it! :smile:

I'm surprised myself that A' is diagonal, but I see that it is.

In retrospect it is clear that it is, since the given basis consists of exactly the eigenvectors of A.
Note that A is diagonalized by a basis transformation consisting of its eigenvectors.

I suspect this problem was intended to demonstrate this feature of eigenvectors.
 
  • #12
I like Serena said:
That's it! :smile:

I'm surprised myself that A' is diagonal, but I see that it is.

In retrospect it is clear that it is, since the given basis consists of exactly the eigenvectors of A.
Note that A is diagonalized by a basis transformation consisting of its eigenvectors.

I suspect this problem was intended to demonstrate this feature of eigenvectors.

Yes it is. In my first post I saw that A was diagonalizable.
So [itex]A=P^{-1}A'P[/itex]. I didn't know that [itex]P^{-1}=B^{-1}[/itex].
I had calculated the eigenvalues of A, so in effect I had already found A'. (Up to the order of the eigenvalues on the main diagonal, but I don't think that's relevant.)
Is there a faster way to find P=B than the one I used?
In other words, knowing A' from the start, is there a quick way to solve the problem?
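On the question of a shortcut: if the goal really were only to diagonalize A, a standard route (not the one taken in the thread) is to let an eigensolver produce the eigenvector matrix directly. Since this A is symmetric, `eigh` returns an orthogonal eigenvector matrix P, so its inverse is just its transpose. A sketch:

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])

# For symmetric A, eigh returns ascending eigenvalues and an
# orthonormal matrix P whose columns are eigenvectors of A.
eigvals, P = np.linalg.eigh(A)

A_diag = P.T @ A @ P  # P is orthogonal, so P^-1 = P^T
print(eigvals)  # ascending: 1, 3, 5
```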
 
  • #13
Well, this problem is actually not about eigenvectors.
It's about a basis transformation.
You can't assume that the new basis consists of the eigenvectors!
 
  • #14
I like Serena said:
Well, this problem is actually not about eigenvectors.
It's about a basis transformation.
You can't assume that the new basis consists of the eigenvectors!

Ok. I think I understand. :smile:
Thanks guys!
 

What is linear algebra?

Linear algebra is a branch of mathematics that deals with linear equations and their representations in vector spaces. It involves the study of vectors, matrices, and linear transformations and their properties.

What is a basis in linear algebra?

A basis in linear algebra is a set of linearly independent vectors that can be used to represent any vector in a given vector space. It is the building block for understanding linear algebra and is often used for solving systems of linear equations.

What is a linear transformation?

A linear transformation is a function that maps one vector space to another in a way that preserves vector addition and scalar multiplication. Once bases are chosen, it can be represented by a matrix; a change of basis is one important example.

What is matrix representation in linear algebra?

Matrix representation is the process of representing a linear transformation using a matrix. It involves expressing the inputs and outputs of the transformation as vectors and representing the transformation as a matrix that operates on those vectors.
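As a concrete illustration of the idea above (my own example, not from the thread): the j-th column of the matrix is the image of the j-th basis vector under the transformation.

```python
import numpy as np

# A hypothetical linear map T(x, y) = (x + y, y) on R^2.
def T(v):
    x, y = v
    return np.array([x + y, y])

# Its matrix in the standard basis: columns are T(e1) and T(e2).
M = np.column_stack([T(np.array([1.0, 0.0])),
                     T(np.array([0.0, 1.0]))])
print(M)  # [[1. 1.], [0. 1.]]
```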

What are the applications of linear algebra?

Linear algebra has a wide range of applications in various fields such as engineering, physics, computer science, and economics. It is used for solving systems of linear equations, analyzing data, and developing algorithms for machine learning and computer graphics, among others.
