Elementary column operations and change-of-basis

  • Thread starter: Stephen Tashi
  • Tags: Matrix
SUMMARY

The discussion centers on the interpretation of elementary column and row operations as a change-of-basis in linear algebra, specifically referencing Evan Chen's blog post on linear maps between modules. It clarifies that pre-multiplying a matrix by an invertible matrix A and post-multiplying by an invertible matrix B results in a matrix that represents a linear transformation in terms of new bases. The conversation emphasizes that elementary operations affect the transformation represented by the matrix rather than merely changing the basis, particularly when the domain and range bases differ. The determinant theorem's applicability is also discussed, noting its restriction to square matrices.

PREREQUISITES
  • Understanding of linear transformations and their representation as matrices.
  • Familiarity with the concepts of basis and change-of-basis in vector spaces.
  • Knowledge of elementary row and column operations on matrices.
  • Basic understanding of determinants and their properties in linear algebra.
NEXT STEPS
  • Study the concept of linear maps between modules in depth.
  • Learn about the properties of determinants and their invariance under change-of-basis.
  • Explore the implications of pre- and post-multiplication of matrices in linear transformations.
  • Investigate the relationship between elementary operations and their effects on matrix representations of transformations.
USEFUL FOR

Mathematicians, students of linear algebra, and anyone interested in the theoretical aspects of linear transformations and matrix operations.

Stephen Tashi
Science Advisor
Homework Helper
Education Advisor
TL;DR
Does performing an elementary column (or row) operation count as a change of basis? Perhaps this is merely a vocabulary question about how people use the phrase "change of basis".
A blog post by Evan Chen https://blog.evanchen.cc/2016/07/12/the-structure-theorem-over-pids/ says that elementary row and column operations on a matrix can be interpreted as a change-of-basis.
[Screenshot of the relevant passage from Evan Chen's blog post]
I assume this use of the phrase "change of basis" refers to creating a matrix that uses a different basis for a vector in its domain (as a linear transformation) than in its range. Is that correct?
 
The linked post introduces a linear map T between modules, say from M1 to M2. It says the map T can be "thought of" as a matrix (which it depicts) "for the standard bases of the two modules". This assumes such standard bases have already been agreed.
I feel it is clearer to say that the matrix, which we should call something other than T, e.g. ##M_T##, "represents the map T in terms of the standard bases".
Given that, any valid pre-multiplication of the matrix by an invertible matrix A and valid post-multiplication by an invertible matrix B gives a matrix that represents T in terms of two new bases:
- for M1, represented in terms of the standard basis of M1 by the rows of A (k-th basis element represented by k-th row)
- for M2, represented in terms of the standard basis of M2 by the columns of B
I may have gotten my rows and columns mixed up. I often do. I can sort them out when important. It often isn't, when doing purely theoretical work.

Re your exact question: No, the bases are already different, since the domain and range are assumed to be different modules. There are already two bases in play - one for the domain and one for the range. The "change of basis" discussion is about changing one or both of those bases used to define the representation matrix, by the pre- and post-multiplication.
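To make the pre/post-multiplication picture concrete, here is a small NumPy sketch (the specific matrices are made up for illustration) showing that post-multiplying by an elementary matrix performs a column operation and pre-multiplying performs a row operation, even when the matrix is not square:

```python
import numpy as np

# A hypothetical 2x3 matrix representing a map between modules of ranks 3 and 2.
M = np.array([[1, 2, 3],
              [4, 5, 6]])

# Post-multiplying by E_col adds 2*(column 1) to column 2.
E_col = np.array([[1, 2, 0],
                  [0, 1, 0],
                  [0, 0, 1]])

# Pre-multiplying by E_row swaps rows 1 and 2.
E_row = np.array([[0, 1],
                  [1, 0]])

col_op = M @ E_col   # column operation on M
row_op = E_row @ M   # row operation on M

expected_col = M.copy()
expected_col[:, 1] += 2 * M[:, 0]   # add 2*col1 to col2 by hand
assert np.array_equal(col_op, expected_col)

expected_row = M[[1, 0], :]         # swap the two rows by hand
assert np.array_equal(row_op, expected_row)
```

Both elementary matrices are invertible, which is what lets the products be read as the same map T represented in new bases rather than a new map.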
 
From what you said (just as a point of vocabulary), there is a familiar result that states that the determinant of a linear transformation on a vector space is well-defined because it is the same in any basis - i.e. the determinant is invariant under a "change of basis". With that vocabulary, "a matrix for a linear transformation on a vector space" must be a matrix that uses the same basis for both the domain and range of the transformation. With that interpretation, an elementary column operation on such a matrix results in a change of the linear transformation that is represented by the matrix, not merely a "change of basis" that represents the same transformation.
 
That's correct for the case where domain and range are the same. The change of basis referred to in the theorem you mention involves pre- and post-multiplying the matrix, i.e. ##M\to A^{-1}MA##. A single row or column operation is either a pre- or post-multiplication, but not both, and so cannot be treated as just a change of basis. If we matched the elementary row (column) operation with the corresponding column (row) operation, the two effects on the determinant would offset, satisfying the determinant theorem, and allowing the pair to be treated as a change of basis.
The determinant theorem would not apply to the general case that the blog post handles, because it only applies to square matrices and the matrix considered in the post need not be square. The transformation's representation matrix has no determinant to change or preserve when we change the bases of the domain and range.
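A quick numerical check of the point above, with a made-up 2x2 example: a single column operation changes the determinant, while the matched pair ##A^{-1}MA## preserves it.

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [0.0, 3.0]])   # det(M) = 6

# Elementary matrix A: post-multiplying by it swaps the two columns.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# A single column operation flips the sign of the determinant:
assert np.isclose(np.linalg.det(M @ A), -np.linalg.det(M))

# The matched pair A^{-1} M A is a similarity transform, so det is preserved:
conj = np.linalg.inv(A) @ M @ A
assert np.isclose(np.linalg.det(conj), np.linalg.det(M))
```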
 
andrewkirk said:
If we matched the elementary row (column) operation with the corresponding column (row) operation, the two effects on determinant would offset, satisfying the determinant theorem, and allowing it to be treated as a change of basis.
I'll have to think about that one - how to interpret "corresponding". It looks like the order would matter. Do we perform the column operation first and then a row operation - or vice versa?
 
Stephen Tashi said:
I'll have to think about that one - how to interpret "corresponding". It looks like the order would matter. Do we perform the column operation first and then a row operation - or vice versa?
I think the order doesn't matter because of the associative rule of multiplication. Let ##A## be an ##n \times n## matrix such that, for any ##n \times n## matrix ##M##, the product ##MA## gives ##M## with columns 2 and 3 swapped. Then pre-multiplying by ##A^{-1}## swaps rows 2 and 3. The associative rule gives us:
$$A^{-1}MA = (A^{-1}M)A = A^{-1}(MA)$$
so it doesn't matter whether we do the pre- or the post-multiplication first. It follows that it makes no difference whether we swap cols 2 and 3 or rows 2 and 3 first.
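The associativity argument above is easy to verify numerically; here is a sketch with an arbitrary 3x3 matrix and the column-swap matrix just described:

```python
import numpy as np

# An arbitrary 3x3 matrix (seeded for reproducibility).
M = np.random.default_rng(0).integers(0, 10, (3, 3)).astype(float)

# Post-multiplying by A swaps columns 2 and 3 (0-indexed columns 1 and 2).
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
A_inv = np.linalg.inv(A)   # here A happens to be its own inverse

rows_first = (A_inv @ M) @ A   # swap rows first, then columns
cols_first = A_inv @ (M @ A)   # swap columns first, then rows

# Associativity: both orders give the same result, A^{-1} M A.
assert np.allclose(rows_first, cols_first)
assert np.allclose(rows_first, A_inv @ M @ A)
```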
 
What confuses me is how to visualize the row-column pair of operations as changing the basis of the domain of the transformation with the column operation and changing the basis of the range of the transformation with the row operation. Using the example in the article, suppose my basis is ##B = \{e_1,e_2,e_3\}## and I want to change basis to ##C = \{e_1, e_2 + 3e_1, e_3\}##. The matrix for the column operation to change the basis in the domain is:

##A = \begin{pmatrix} 1&3&0\\0&1&0\\0&0&1 \end{pmatrix}##

So ##A^{-1} = \begin{pmatrix} 1&-3&0\\0&1&0\\0&0&1 \end{pmatrix}##

Is it supposed to be obvious that ##A^{-1}## changes basis ##B## to basis ##C## for the range of the transformation?
 
More analysis of language explains it.

The matrix of the linear transformation is ##M## assuming the domain and range of the transformation are expressed in ##B## coordinates. Saying that I want to change the basis used in the domain from basis ##B## to basis ##C##, means that I want to create a matrix computation that assumes a vector in the domain has been expressed in basis ##C##.

Let ##A## be the matrix that changes a vector in basis ##C## to a vector in basis ##B##. The product ##MAv## can be viewed as changing the vector ##v## as expressed in ##C## coordinates to a vector expressed in ##B## coordinates and then performing the operation ##M## (as it was originally intended) on a vector in ##B## coordinates.

If I want the final output of the computation to be in ##C## coordinates, I need to apply the transformation from ##B## coordinates back to ##C## coordinates. So I use ##A^{-1}( M A v)## to accomplish that.

So I should direct my attention to how ##A## and its inverse affect the components of a vector instead of focusing on how they change rows and columns of a matrix.
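The step-by-step reading above can be sketched numerically, using the basis ##C = \{e_1, e_2 + 3e_1, e_3\}## and the matrix ##A## from earlier in the thread (the transformation ##M## here is an arbitrary example):

```python
import numpy as np

# A converts C-coordinates to B-coordinates, for C = {e1, e2+3e1, e3}:
# a vector with C-coords (c1, c2, c3) equals (c1+3*c2)e1 + c2*e2 + c3*e3.
A = np.array([[1.0, 3.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
A_inv = np.linalg.inv(A)

# An arbitrary transformation, given in B-coordinates:
M = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])

v_C = np.array([1.0, 2.0, 0.0])   # a vector expressed in C-coordinates

# Step by step: C-coords -> B-coords -> apply M -> back to C-coords.
v_B = A @ v_C
result_B = M @ v_B
result_C = A_inv @ result_B

# The same computation packaged as one matrix acting on C-coordinates:
M_C = A_inv @ M @ A
assert np.allclose(M_C @ v_C, result_C)
```

This is the sense in which ##A^{-1}MA## "represents ##M## in basis ##C##": it acts on coordinate vectors, not on rows and columns directly.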
 