Intuition behind elementary operations on matrices

In summary, elementary operations on matrices manipulate the rows and columns of a matrix to simplify or transform it. These operations include adding a multiple of one row (or column) to another, scaling a row or column by a nonzero constant, and swapping two rows or columns. They are based on the idea of linear combinations and are used to solve systems of linear equations, find inverses, and carry out other matrix computations. They are essential for understanding more advanced concepts in linear algebra and appear throughout mathematics and science.
  • #36
My question is: then how will you explain operations like changing R2 to R1-R3 not working for matrices, in the context of linear transformations? Have I made myself clear this time? I have understood why some operations, like changing R2 to R2+R3, work for matrices while some other operations, like changing R2 to R1+R3, don't, in the context of a system of equations; now how does one explain all this in the context of linear transformations?
 
  • #37
Your question reveals a confusion of ideas that makes it hard to address directly. There is no reason that a technique appropriate to the manipulation of equations should have the specific effect you seem to demand.

The mathematics is clear. An equation of the form ##A = BC## is not preserved if all three matrices are subject to the same row operation.

A simple counterexample can easily be found and that should end the debate.
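For concreteness, one such counterexample can be checked numerically. The following NumPy sketch (not from the original post; the matrices B, C and the operation of changing R2 to R2 + R1 are arbitrary illustrative choices) applies the same row operation to all three matrices in ##A = BC## and shows that the equation breaks:

```python
import numpy as np

# Start from an equation A = B C that holds by construction.
B = np.array([[1., 2.],
              [3., 4.]])
C = np.array([[0., 1.],
              [1., 0.]])
A = B @ C

# Elementary matrix that performs R2 -> R2 + R1 when it multiplies on the left.
E = np.array([[1., 0.],
              [1., 1.]])

lhs = E @ A               # the row operation applied to A
rhs = (E @ B) @ (E @ C)   # the same row operation applied to B and to C

print(np.allclose(lhs, rhs))   # False: A = B C is not preserved
```

By contrast, left-multiplying both sides by E, i.e. ##EA = (EB)C##, does hold, by associativity of matrix multiplication.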

A system of linear equations does have the same solutions if the matrix representing them is subject to a row operation. That has been essentially proved in this thread. That too should end the debate.

Moreover, the proof and counterexamples provide all the reasons you need to understand why.

That is mathematics and there is nothing more to be said.
 
  • #38
PeroK said:
The mathematics is clear. An equation of the form ##A = BC## is not preserved if all three matrices are subject to the same operation.
Oh, I think this is the source of much of the confusion, because I never said or implied that all three matrices are subject to the same operation. I only talked about applying the operation in the RHS as well as the LHS; but since there are two matrices in the LHS, maybe you thought I was talking about changing both of them, when in fact I was talking about only one.
Now can you tell me why, for an equality C = (F)(G), some operations, like changing R2 to R2+R3, work while some other operations, like changing R2 to R1+R3, don't, in the context of linear transformations? (I have understood the reasons for this when we take matrices to be a system of equations.)
 
  • #39
Mr Real said:
Oh, I think this is the source of much of the confusion, because I never said or implied that all three matrices are subject to the same operation. I only talked about applying the operation in the RHS as well as the LHS; but since there are two matrices in the LHS, maybe you thought I was talking about changing both of them, when in fact I was talking about only one.
In particular, you need to talk about applying the row operation only to the first factor in a RHS that is a product of matrices (rather than being something like FG + H).

Now can you tell me why, for an equality C = (F)(G), some operations, like changing R2 to R2+R3, work while some other operations, like changing R2 to R1+R3, don't, in the context of linear transformations? (I have understood the reasons for this when we take matrices to be a system of equations.)

Again, we must define what it means for something to "work".

For example, the linear transformation given by:
##x_{new} = 2 x_{old} + 3 y_{old},##
##y_{new} = x_{old} - y_{old}##
defines a transformation (i.e. a function) that maps coordinates ##(x_{old},y_{old})## to coordinates ##(x_{new},y_{new})##. There is no overt notion of solving for anything in the definition of a linear transformation.

However, we can think of "solutions" to the linear transformation as quadruples of numbers of the form ##(x_{new},y_{new},x_{old},y_{old})## (such as (5, 0, 1, 1) in the example), where the first pair of numbers is the result of applying the linear transformation to the second pair of numbers. From this point of view, not all quadruples of numbers are solutions. For example, (1,2,3,4) is not a solution to the example.

Taking that point of view, two linear transformations are "equivalent" when they have the same set of solutions.
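A tiny check of the quadruple idea (a sketch added here, not part of the original post), using the two-variable example above:

```python
# A quadruple (x_new, y_new, x_old, y_old) is a "solution" exactly when
# applying the transformation to (x_old, y_old) reproduces (x_new, y_new).
def is_solution(x_new, y_new, x_old, y_old):
    return (x_new, y_new) == (2*x_old + 3*y_old, x_old - y_old)

print(is_solution(5, 0, 1, 1))   # True:  (1, 1) maps to (5, 0)
print(is_solution(1, 2, 3, 4))   # False: (3, 4) maps to (18, -1), not (1, 2)
```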

Since your question involves 3 rows, we must look at an example of a linear transformation like:

##\begin{pmatrix} x_{new}\\ y_{new}\\ z_{new} \end{pmatrix} = \begin{pmatrix}3&1&1\\0&1&0 \\2&0&5\end{pmatrix} \begin{pmatrix} x_{old}\\y_{old}\\z_{old} \end{pmatrix} ##

This matrix equation can be interpreted as 3 linear equations (in 6 variables) that define the linear transformation:

eq. 1) ##x_{new} = 3x_{old} + y_{old} + z_{old}##
eq. 2) ##y_{new} = y_{old}##
eq. 3) ##z_{new} = 2x_{old} + 5 z_{old}##

(When people say that a matrix "is" a linear transformation, they mean that you can supply variables to form a matrix equation like the one above.) The customary way to write a system of equations is to put all the variables on the left hand side, but we don't need to do that for our purposes.
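As a quick sanity check (a sketch added here, not from the original post; the sample values of the old coordinates are arbitrary), the matrix-vector product reproduces eq. 1) to eq. 3):

```python
import numpy as np

A = np.array([[3., 1., 1.],
              [0., 1., 0.],
              [2., 0., 5.]])

x_old, y_old, z_old = 1.0, 2.0, 3.0
print(A @ np.array([x_old, y_old, z_old]))   # [ 8.  2. 17.]
print([3*x_old + y_old + z_old,              # eq. 1)
       y_old,                                # eq. 2)
       2*x_old + 5*z_old])                   # eq. 3)  -> [8.0, 2.0, 17.0]
```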

We have 3 equations in 6 unknowns, so we expect the system to have an infinite number of solutions. However, having an infinite number of solutions does not imply that any arbitrary sextuple of numbers will be one of those solutions.

Suppose we do the operation "Replace row 2 by the sum of row 1 and row 3". That's like erasing eq. 2) and putting the sum of eq. 1) and eq. 3) in its place. Would you expect that operation to preserve the solutions to the system of equations? It removes all information about ##y_{new}## and replaces it with an equation that is just a consequence of eq. 1) and eq. 3).
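To make the lost information concrete, here is a small check (a sketch added here, not from the original post; the sample numbers are arbitrary) that a sextuple rejected by the original system can still satisfy the modified one:

```python
import numpy as np

def satisfies_original(new, old):
    # eq. 1), eq. 2), eq. 3) exactly as written above
    x_n, y_n, z_n = new
    x_o, y_o, z_o = old
    return np.allclose([x_n, y_n, z_n],
                       [3*x_o + y_o + z_o, y_o, 2*x_o + 5*z_o])

def satisfies_modified(new, old):
    # eq. 1) and eq. 3) unchanged; eq. 2) replaced by the sum of eq. 1) and eq. 3)
    x_n, y_n, z_n = new
    x_o, y_o, z_o = old
    return (np.isclose(x_n, 3*x_o + y_o + z_o) and
            np.isclose(z_n, 2*x_o + 5*z_o) and
            np.isclose(x_n + z_n, 5*x_o + y_o + 6*z_o))

old = (1.0, 2.0, 3.0)
print(satisfies_original((8, 2, 17), old), satisfies_modified((8, 2, 17), old))    # True True
print(satisfies_original((8, 99, 17), old), satisfies_modified((8, 99, 17), old))  # False True
```

The second line shows a sextuple that the original system rejects (it forces ##y_{new} = y_{old} = 2##) but the modified system accepts, so the two systems do not have the same solutions.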
 
  • #40
Stephen Tashi said:
Suppose we do the operation "Replace row 2 by the sum of row 1 and row 3". That's like erasing eq. 2) and putting the sum of eq. 1) and eq. 3) in its place. Would you expect that operation to preserve the solutions to the system of equations? It removes all information about ##y_{new}## and replaces it with an equation that is just a consequence of eq. 1) and eq. 3).
Took me some time to understand it, but yes, replacing row 2 by the sum of row 1 and row 3 will change the solutions. Thanks for replying. But then how will the solutions be preserved if we replace row 2 by the sum of row 2 and row 3?
Thanks!
Mr R
 
  • #41
Mr Real said:
But then how will the solutions be preserved if we replace row 2 by the sum of row 2 and row 3?
That procedure could be reversed by subtracting row 3 from the modified row 2.

It would require some work to write a formal proof of which row operations preserve solutions, but the intuitive idea is that operations that can be reversed do not lose or add information to the requirements set by the original set of equations.
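The same idea can be seen in matrix form (a sketch added here, not from the original post): the matrix that performs "replace row 2 by row 2 plus row 3" by left multiplication is invertible, while the matrix that performs "replace row 2 by row 1 plus row 3" is singular:

```python
import numpy as np

E_keep = np.array([[1., 0., 0.],   # R2 -> R2 + R3
                   [0., 1., 1.],
                   [0., 0., 1.]])

E_lose = np.array([[1., 0., 0.],   # R2 -> R1 + R3: the new row 2 no longer
                   [1., 0., 1.],   # depends on the old row 2 at all
                   [0., 0., 1.]])

print(np.linalg.det(E_keep))   # 1.0 -> invertible; the operation can be undone
print(np.linalg.det(E_lose))   # 0.0 -> singular; information about row 2 is lost
print(np.linalg.inv(E_keep))   # the inverse performs R2 -> R2 - R3
```

The inverse of the first matrix is itself an elementary matrix, which is exactly the "subtract row 3 back off" step described above.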
 
  • #42
Stephen Tashi said:
That procedure could be reversed by subtracting row 3 from the modified row 2.

It would require some work to write a formal proof of which row operations preserve solutions, but the intuitive idea is that operations that can be reversed do not lose or add information to the requirements set by the original set of equations.
Thank you very much for clearing my doubts so well again. I think I need to personally practise more questions first to really grasp the concepts. Can you please suggest some references (websites, channels, etc.) for class 12 maths (for getting the intuition and solving problems too)?
Thanks
Mr R
 
  • #43
Mr Real said:
Can you please suggest some references (websites, channels, etc.) for class 12 maths (for getting the intuition and solving problems too)?

I haven't been a math student or teacher for over 20 years, so I'm not familiar with what's available. Other forum members probably are.
 
  • #44
Stephen Tashi said:
I haven't been a math student or teacher for over 20 years, so I'm not familiar with what's available. Other forum members probably are.
Thank you all the same.
:bow:
Mr R
 
  • #45
Stephen Tashi said:
With an equation in real variables, you can multiply both sides by the same number. For example, we can transform the equation ##0 = (x/7 - 1/7)(x-2)## to an equivalent equation by multiplying both sides by ##7##. When we multiply the right side by 7, we are permitted to multiply only the factor ##(x/7 - 1/7)## by 7. By analogy, if we have the matrix equation ##C = (F)(G)## we can multiply both sides of the equation by a matrix ##E##, obtaining an equivalent equation ##(E)(C) = (E)(F)(G)##. In evaluating the right hand side, we can compute it as ##(EF)(G)##.
@Stephen Tashi, sorry to bother you but I had a doubt regarding this. You said here that in the equation 0 = (x/7 - 1/7)(x - 2), if we want to convert this equation to an equivalent equation by multiplying both sides by 7, then in the RHS we are only allowed to multiply the factor (x/7 - 1/7) by 7, but I don't see why we can't multiply the factor (x-2) instead.
As far as I can see, (x/7 - 1/7)(7x - 14) = 0 and (x - 1)(x - 2) = 0 both have the same solutions as (x/7 - 1/7)(x - 2) = 0, i.e. x = 1, 2.
Please can you clear my confusion?

Mr R
 
  • #46
Mr Real said:
but I don't see why we can't multiply the factor (x-2) instead.

I think you mean "why we can't multiply the factor (x-2) by 7 instead". We can do that. The point is that we can only multiply one of the two factors by 7.
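A quick numeric check of that point (a sketch added here, not from the original post; x = 5 is an arbitrary sample value): multiplying the whole product by 7 matches multiplying exactly one factor by 7, but not multiplying both factors by 7.

```python
x = 5.0   # any sample value

original   = (x/7 - 1/7) * (x - 2)
one_factor = (x - 1) * (x - 2)        # the 7 folded into the first factor only
both       = (x - 1) * (7*x - 14)     # the 7 folded into both factors

print(7 * original, one_factor, both)   # ~12.0, 12.0, 84.0 -> the third is 7 times too big
```

Only the first two agree, which is the sense in which just one of the two factors absorbs the 7.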
 
  • #47
Stephen Tashi said:
I think you mean "why we can't multiply the factor (x-2) by 7 instead". We can do that. The point is that we can only multiply one of the two factors by 7.
But you had used this example as an analogy to help me understand that if we have an equality (C) = (F)(G) and we multiply this equality by an elementary matrix E on both sides to get an equivalent equality, then in the RHS we'll get (E)(F)(G) and we'll evaluate it as (EF)(G). So, considering what you said now, can the RHS also be evaluated as (E)(FG) to get the same equivalent equality which we get when we evaluate it as (EF)(G)?

Thanks so much
Mr R
 
  • #48
Mr Real said:
Can the RHS also be evaluated as (E)(FG) to get the same equivalent equality which we get when we evaluate it as (EF)(G)?

The equation (C)(E) = (F)(G)(E) will be equivalent to the equation C = (F)(G) and also to the equation (E)(C) = (E)(F)(G) when E is an elementary matrix, but the equation (C)(E) = (F)(G)(E) need not be exactly the same equation as the equation (E)(C) = (E)(F)(G) when the matrix multiplications are worked out because, in general, (E)(C) is not equal to (C)(E).

For example, consider the elementary matrix ##E = \begin{pmatrix} 0&1\\ 1& 0 \end{pmatrix} ## and the matrix ##C = \begin{pmatrix} r & s \\ t & u \end{pmatrix} ##.

##(E)(C) = \begin{pmatrix} t & u \\ r & s \end{pmatrix}##
## (C)(E) = \begin{pmatrix} s & r \\ u & t \end{pmatrix}##

Multiplying C on the left by E performs the row operation of interchanging the two rows of C.
Multiplying C on the right by E performs the column operation of interchanging the two columns of C.
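The same example can be checked symbolically (a sketch added here using SymPy, not part of the original post):

```python
import sympy as sp

r, s, t, u = sp.symbols('r s t u')
E = sp.Matrix([[0, 1], [1, 0]])
C = sp.Matrix([[r, s], [t, u]])

print(E * C)   # Matrix([[t, u], [r, s]]) -> rows of C interchanged
print(C * E)   # Matrix([[s, r], [u, t]]) -> columns of C interchanged
```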
 
  • #49
Stephen Tashi said:
Multiplying C on the left by E performs the row operation of interchanging the two rows of C.
Multiplying C on the right by E performs the column operation of interchanging the two columns of C.
Okay, now I am clear on this. Thanks for clearing my doubts again! :oldsmile:

Mr R
 
