Why are linear equations usually written down as matrices?

japplepie
I've been taught that for any system of linear equations, it has a corresponding matrix.

Why do people sometimes use systems of linear equations to describe something and other times matrices?

Is it all just a way of writing things down faster or are there things you could do to matrices that you couldn't do to linear equations?
 
japplepie said:
I've been taught that for any system of linear equations, it has a corresponding matrix.

Why do people sometimes use systems of linear equations to describe something and other times matrices?

Is it all just a way of writing things down faster or are there things you could do to matrices that you couldn't do to linear equations?
Mostly, matrices are a shorthand way of writing a system of linear equations, but there is one other advantage for certain systems: the ability to use a matrix inverse to solve the system.

For example, suppose we have this system:
2x + y = 5
x + 3y = 5

This system can be written in matrix form as:
##\begin{bmatrix} 2 & 1 \\ 1 & 3 \end{bmatrix}\begin{bmatrix} x \\ y\end{bmatrix} = \begin{bmatrix} 5 \\5 \end{bmatrix}##
Symbolically, the system is Ax = b, where A is the matrix of coefficients on the left, and b is the column vector whose entries are 5 and 5. (x is the column vector of variables x and y.)

Because I cooked this example up, I know that A has an inverse, namely ##A^{-1} = \frac 1 5 \begin{bmatrix} 3 & -1 \\ -1 & 2 \end{bmatrix}##.
If I apply this inverse to both sides of Ax = b, then since ##A^{-1}Ax = x##, I get ##x = A^{-1}b = \frac 1 5 \begin{bmatrix} 3 & -1 \\ -1 & 2 \end{bmatrix} \begin{bmatrix} 5 \\ 5 \end{bmatrix} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}##.

From this I see that x = 2 and y = 1. You can check that this is a solution by substituting these values in the system of equations.
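
For anyone who wants to check this numerically, here is a minimal sketch using NumPy (the library choice is my assumption; it is not part of the original working):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])    # coefficient matrix from the system above
b = np.array([5.0, 5.0])      # right-hand side

# np.linalg.solve computes the solution of Ax = b directly, which is
# faster and more numerically stable than forming the inverse explicitly.
x = np.linalg.solve(A, b)
print(x)                      # [2. 1.], i.e. x = 2, y = 1

# The explicit inverse (1/5) * [[3, -1], [-1, 2]] gives the same result.
print(np.linalg.inv(A) @ b)   # [2. 1.]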
 
I see, thank you very much!
 
Essentially, matrices allow you to write any system of linear equations as the single equation ##Ax = b##, its simplest form.
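
Spelled out, a system of ##m## equations ##a_{i1}x_1 + \cdots + a_{in}x_n = b_i## in ##n## unknowns packs into that one equation:
$$\underbrace{\begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix}}_{A}\underbrace{\begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}}_{x} = \underbrace{\begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix}}_{b}$$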
 
The shorthand notation provided by matrices is very beneficial. Keeping track of the variables that the matrices operate on often clutters up the calculations. If you compose a sequence of linear operations (##E = ABCD##), you can do the matrix manipulations easily. If you try to name and keep track of all the intermediate values, it is just an unnecessary mess: ##x_2 = Dx_1##, ##x_3 = Cx_2##, ##x_4 = Bx_3##, ##x_5 = Ax_4##, so ##x_5 = Ex_1##.
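
As a small illustration of that point, here is a sketch (assuming NumPy, with matrices chosen at random purely for demonstration) comparing the step-by-step bookkeeping with the single composed matrix:

import numpy as np

rng = np.random.default_rng(0)

# Four arbitrary 3x3 linear operations and a starting vector
# (hypothetical values, used only to illustrate the bookkeeping).
A, B, C, D = (rng.standard_normal((3, 3)) for _ in range(4))
x1 = rng.standard_normal(3)

# Naming every intermediate result:
x2 = D @ x1
x3 = C @ x2
x4 = B @ x3
x5 = A @ x4

# Composing once and applying the single matrix E:
E = A @ B @ C @ D
print(np.allclose(x5, E @ x1))   # True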
 