Matrices and Matrix Calculations of Linear Algebra

  • #1
My intent is to create a thread for people interested in Linear Algebra using matrices. However, I will explicitly state that I am only a student of this class myself, and that many things could end up being wrong or taught in an improper way.

I will merely be going along with the class, mostly using excerpts and questions from the book, "Linear Algebra and its Applications: Third Edition," by David C. Lay. So truthfully, this is more for myself. Looking things up and explaining it to others seems to be the best way to learn.

If people have any questions or comments, feel free to share. Also, I know there are many knowledgeable people on this board, so be sure to correct me or make suggestions. This will be very similar to my Intro. to Differential Equations thread.

Some background:

A linear equation is one in which each variable appears alone, at most scaled by a constant. In most cases I will give examples using the variable x with whole-number subscripts (i.e. 1, 2, ..., n). A linear equation is therefore something of the form:

a1x1 + a2x2 + ... + anxn = b

where the a's and b can be any real or complex numbers and the x's are the variables.

For example: 3x1 - √(6)x2 = 10
is linear, while 2x1x2 + 6√(x3) = 10 is not. This is because of both the x1x2 term and the √(x3) term.

A system of linear equations is then just a set of linear equations, usually written one above the other with the symbol "{" denoting the set.


A system with a given number of variables and equations has three possibilities for its solution set: no solution (inconsistent), exactly one solution, or infinitely many solutions.

In general, a consistent system with more variables than equations has infinitely many solutions, and when the number of variables equals the number of equations the system will usually have exactly one solution. But this is not something to memorize; you will easily see this, and the exceptions to it, very soon. It can easily be shown using two variables and graphing the equations: the intersection provides the solution.

Starting with the system:

3x1 + 2x2 = 6
1x1 + 1x2 = 1

A system can be solved by solving for one variable first, either by substitution or by addition. Let's look at substitution in this example first.

Taking the second equation, it can be rewritten as:
1x1 = 1 - 1x2

This can then be substituted into the first equation:
3(1 - 1x2) + 2x2 = 6

Now that there is only one variable, the equation can be solved:
3 - 3x2 + 2x2 = 6
3 - x2 = 6
x2 = -3

This can then be substituted into either of the original equations; let's use the second since it is much simpler:
1x1 + 1(-3) = 1
1x1 = 4

Therefore the answer:
x1 = 4
x2 = -3
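If you want to check an answer like this numerically, here is a quick sketch in Python (assuming NumPy is available; this is just my own check, not from the book):

```python
import numpy as np

# Coefficient matrix and right-hand side of the system
#   3*x1 + 2*x2 = 6
#   1*x1 + 1*x2 = 1
A = np.array([[3.0, 2.0],
              [1.0, 1.0]])
b = np.array([6.0, 1.0])

# Direct solve for a square, invertible system
x = np.linalg.solve(A, b)
print(x)  # [ 4. -3.]
```

This agrees with the x1 = 4, x2 = -3 found by hand.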

This can become extremely tedious, with many places for error, as the number of variables increases, so let us look at a different method.

Any equation can be multiplied by a nonzero scalar and still represent the same equation, so let us take the original system and multiply the second equation by negative three (-3) to obtain:

3x1 + 2x2 = 6
-3x1 - 3x2 = -3

The two equations can then simply be added to obtain:
-x2 = 3

and then this can be plugged back in to obtain the same answers:
x1 = 4
x2 = -3

The idea here is to cancel out variables by adding appropriate scalar multiples of the equations. But this form can become very confusing and cluttered, so let us transform the system into a matrix, which allows for a much more organized and clean look.

Let us simply take the coefficients of the last example and write a coefficient matrix:

[  3   2 ]
[ -3  -3 ]

This is a 2 X 2 matrix, where the first number denotes the number of rows and the second the number of columns. You will later see why and how this can be done.

To actually solve the problem we will also need the number each equation is equal to. Let us just add these to the end as an extra column and make what is called an augmented matrix:

[  3   2 |  6 ]
[ -3  -3 | -3 ]

Then, just writing the coefficients of the previous system, we see that:

3x1 + 2x2 = 6
-3x1 - 3x2 = -3

becomes

[  3   2 |  6 ]
[ -3  -3 | -3 ]

and after adding the first row to the second:

3x1 + 2x2 = 6
0x1 - 1x2 = 3

becomes

[ 3   2 | 6 ]
[ 0  -1 | 3 ]

The idea is to make the first entry in the first row equal to one, then add scalar multiples of that row to the rows below to cancel their first entries. Then turn the second entry in the second row into a one and add scalar multiples of that row to the rows below it to cancel their second entries. This goes on until all rows are done and a matrix such as this has been created:

[ 1  k | k ]
[ 0  1 | k ]

Where the "k" denotes any real number.

This can be accomplished by using Elementary Row Operations:
1. Replacement - Replace one row by the sum of itself and a multiple of another row.
2. Interchange - Interchange two rows.
3. Scaling - Multiply all entries in a row by a nonzero constant.
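The three operations above can be sketched as small NumPy helpers (the function names are my own, not from the book), redoing the earlier elimination on the augmented matrix:

```python
import numpy as np

def replace_row(M, i, j, c):
    """Replacement: row i becomes row i + c * row j."""
    M = M.copy()
    M[i] = M[i] + c * M[j]
    return M

def interchange(M, i, j):
    """Interchange: swap rows i and j."""
    M = M.copy()
    M[[i, j]] = M[[j, i]]
    return M

def scale(M, i, c):
    """Scaling: multiply row i by a nonzero constant c."""
    M = M.copy()
    M[i] = c * M[i]
    return M

# Augmented matrix of 3x1 + 2x2 = 6, -3x1 - 3x2 = -3
M = np.array([[ 3.0,  2.0,  6.0],
              [-3.0, -3.0, -3.0]])
M = replace_row(M, 1, 0, 1.0)  # add row 0 to row 1
print(M)  # second row is now [ 0. -1.  3.]
```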

Let us try an example:
Given the system:
x1 - 2x2 + x3 = 0
2x2 - 8x3 = 8
-3x2 + 13x3 = -9

Making sure we line up our x-variable coefficient terms correctly, we can write this as an augmented matrix:

[ 1  -2   1 |  0 ]
[ 0   2  -8 |  8 ]
[ 0  -3  13 | -9 ]

Since the first entry in the first row is already one, we can skip ERO 3 (scaling). Since the first entries in the subsequent rows are already zero, we can also skip ERO 1 (replacement) for the first column. Now let's make the second entry in the second row equal one by using ERO 3, scaling the second row by 1/2:

[ 1  -2   1 |  0 ]
[ 0   1  -4 |  4 ]
[ 0  -3  13 | -9 ]

Now let us use ERO 1, adding three times the second row to the third, to create this matrix:

[ 1  -2   1 | 0 ]
[ 0   1  -4 | 4 ]
[ 0   0   1 | 3 ]

The result can be written as:
x1 - 2x2 + x3 = 0
x2 - 4x3 = 4
x3 = 3

The solution, found by back-substitution, is then:
x1 = 29
x2 = 16
x3 = 3
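A numerical check of this three-variable example (again my own NumPy sketch, writing the second equation as 2x2 - 8x3 = 8, which matches the reduced form x2 - 4x3 = 4 above):

```python
import numpy as np

# x1 - 2x2 +  x3 = 0
#      2x2 - 8x3 = 8
#     -3x2 + 13x3 = -9
A = np.array([[1.0, -2.0,  1.0],
              [0.0,  2.0, -8.0],
              [0.0, -3.0, 13.0]])
b = np.array([0.0, 8.0, -9.0])

x = np.linalg.solve(A, b)
print(x)  # [29. 16.  3.]
```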



  • #2
Row Reduction and Echelon Forms

I will try not to use so many pictures again; it was just a pain in the butt to do. Instead I will combine them all into one picture, label them as steps, and explain them by step number, unless someone knows of a way to put this directly into the post.
I also ask that you only open the image once while reading the entire post; I have very limited bandwidth.

A Matrix is in Echelon Form if it has the following three properties:
1. All nonzero rows are above any rows of all zeros.
2. Each leading entry of a row is in a column to the right of the leading entry of the row above it.
3. All entries in a column below a leading entry are zeros.

This is shown in examples one and two of the picture.

If the matrix also meets the following requirements it is in Reduced Echelon Form:
4. The leading entry in each nonzero row is 1.
5. Each leading 1 is the only nonzero entry in its column (except for the last column, if it is an augmented matrix).

This is shown in examples three and four of the picture. The fifth example is a general form where * depicts any number and the arrows show where there must be zeros.

Pivot Positions and columns

A pivot position is the position of a leading entry in an echelon form of the matrix. A pivot column is a column containing a pivot position. Each variable whose column does not have a pivot position is a free variable, one that may take any value.
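The whole forward-and-backward row reduction can be sketched as one routine. This is my own from-scratch implementation (not a library function), run on the augmented matrix from post #1; it returns the reduced echelon form and the list of pivot columns:

```python
import numpy as np

def rref(M, tol=1e-12):
    """Reduce M to reduced echelon form; return (R, pivot_columns)."""
    R = M.astype(float).copy()
    rows, cols = R.shape
    pivots = []
    r = 0
    for c in range(cols):
        if r >= rows:
            break
        # pick the largest entry at or below row r in column c (partial pivoting)
        p = r + np.argmax(np.abs(R[r:, c]))
        if abs(R[p, c]) < tol:
            continue  # no pivot here -> this column's variable is free
        R[[r, p]] = R[[p, r]]    # interchange
        R[r] = R[r] / R[r, c]    # scaling: leading entry becomes 1
        for i in range(rows):    # replacement: clear the rest of the column
            if i != r:
                R[i] = R[i] - R[i, c] * R[r]
        pivots.append(c)
        r += 1
    return R, pivots

M = np.array([[1.0, -2.0,  1.0,  0.0],
              [0.0,  2.0, -8.0,  8.0],
              [0.0, -3.0, 13.0, -9.0]])
R, pivots = rref(M)
print(R)       # last column holds the solution 29, 16, 3
print(pivots)  # [0, 1, 2] -> every variable column is a pivot column, no free variables
```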

Example six is of an augmented matrix, first row reduced forwards to create echelon form and show the pivot positions and columns, then row reduced backwards to obtain the answer.
  • #3
Vector Equations

A matrix with only one column is called a column vector, or simply a vector.

Addition of Vectors: add vectors by adding their corresponding components.
Ex. 1

Scalar Multiplication: multiply each component by the constant.
Ex. 2

Algebraic Properties of Rn
For all u, v, w in Rn (where a bold variable denotes a vector) and all scalars c and d:

1. u + v = v + u
2. (u + v) + w = u + (v + w)
3. u + 0 = 0 + u = u
4. u + (-u) = -u + u = 0
5. c(u + v) = cu + cv
6. (c + d)u = cu + du
7. c(du) = (cd)u

Let's take a look at a linear combination of three vectors a1, a2, and a3 with weights x1, x2, and x3, set equal to a vector b.
Ex. 3

Let v1, ..., vp be in Rn. The set of all linear combinations of v1, ..., vp is denoted Span{v1, ..., vp} and is called the subset of Rn spanned by v1, ..., vp.

Whether a set of vectors spans Rn depends not on the number of vectors or components but on the number of pivot positions: the columns of a matrix span a subspace of dimension equal to the number of pivot positions, so they span all of Rn exactly when there is a pivot position in every row.
  • #4
The Matrix Equations : Ax = b

If A is an m X n matrix, with columns a1, ..., an, and if x is in Rn, then the product of A and x, denoted by Ax, is the linear combination of the columns of A using the corresponding entries in x as weights:

Ex. 1

For a multiplication of a matrix and a vector to occur, the number of entries in a vector must equal the number of columns in the matrix.

To compute the product, multiply the first entry of the vector by the first column of the matrix, then add the second entry times the second column, and so on. It can also be done by multiplying the first entry of the vector by the first entry of the first row of the matrix, adding the second entry times the second entry of that row, and so on, then repeating this for each subsequent row.

Examples 2 and 3
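Both views of the product described above can be checked numerically. A sketch with a made-up 2 X 3 matrix and a 3-entry vector (my own example, assuming NumPy):

```python
import numpy as np

A = np.array([[1.0,  2.0, -1.0],
              [0.0, -5.0,  3.0]])
x = np.array([4.0, 3.0, 7.0])

# View 1: Ax as a linear combination of the columns of A, weighted by x
by_columns = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]

# View 2: each entry of Ax as a dot product of a row of A with x
by_rows = np.array([A[0] @ x, A[1] @ x])

print(by_columns, by_rows, A @ x)  # all three agree
```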

In the previous matrices we sort of magically made the xn terms vanish and just wrote their coefficients. In truth, these x terms factor out and become the vector x.

Let us solve;
x1 + 2x2 -x3 = 4
-5x2 + 3x3 = 1

Ex. 4

If the matrix is an identity matrix (a square matrix with 1's down the main diagonal and 0's elsewhere), multiplying it by any vector gives back that vector. For the n X n identity matrix In and a vector x with n entries, Inx = x for every x in Rn.

Ex. 5

Properties of Matrix - Vector Product Ax
If A is an m X n matrix, u and v are vectors in Rn, and c is a scalar, then:
1. A(u + v) = Au + Av (Ex. 6)
2. A(cu) = c(Au)
  • #5
Sol. Sets of Linear Systems and (in)dependence

Homogeneous Linear Systems

This is: equations set equal to zero, as in Ax = 0, where A is an m X n matrix and 0 is the zero vector in Rm.

This system always has at least one solution, namely x = 0 (the trivial solution). The system has a nontrivial solution if and only if it has at least one free variable; the solution set can then be described in terms of the free variables.

Ex. 1

Parametric Vector Form

Nonhomogeneous Linear Systems

This is: simply an equation equal to some nonzero vector, as in Ax = b. They are solved the same way as homogeneous linear systems.

Ex. 2

w = p + vh

where p is a particular solution of Ax = b and vh is any solution of the homogeneous equation Ax = 0. This is how to write a solution set (of a consistent system) in parametric vector form.
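As a concrete sketch of w = p + vh, take the small system from post #4 (x1 + 2x2 - x3 = 4, -5x2 + 3x3 = 1). The particular solution p and homogeneous solution vh below were found by hand for this example; every scalar multiple of vh is again a homogeneous solution, so p plus any multiple of vh solves Ax = b:

```python
import numpy as np

A = np.array([[1.0,  2.0, -1.0],
              [0.0, -5.0,  3.0]])
b = np.array([4.0, 1.0])

p = np.array([22/5, -1/5, 0.0])  # one particular solution of Ax = b (taking x3 = 0)
vh = np.array([-1.0, 3.0, 5.0])  # a solution of the homogeneous system Ax = 0

# Every w = p + t*vh solves Ax = b, for any scalar t
for t in (-2.0, 0.0, 1.0, 3.5):
    w = p + t * vh
    print(A @ w)  # always [4. 1.]
```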

Linear Independence

Is the trivial solution the only one?

An indexed set of vectors {v1, ..., vp} in Rn is said to be linearly independent if the vector equation

x1v1 + x2v2 + ... + xpvp = 0

has only the trivial solution. The set {v1, ..., vp} is said to be linearly dependent if there exist weights c1, ..., cp, not all zero, such that

c1v1 + c2v2 + ... + cpvp = 0

Ex. 3 and Ex. 4
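In practice, independence can be tested by row reducing the matrix whose columns are the vectors: the set is independent exactly when every column is a pivot column. A quick sketch using NumPy's rank function (the rank equals the number of pivot columns; the vectors here are my own example):

```python
import numpy as np

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = np.array([1.0, 1.0, 0.0])  # v3 = v1 + v2, so the set is dependent

A = np.column_stack([v1, v2, v3])
rank = np.linalg.matrix_rank(A)

# Independent exactly when the rank equals the number of vectors
print(rank, rank == A.shape[1])  # only 2 pivot columns for 3 vectors -> dependent
```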

Linear Independence of Matrix Columns

Ex. 5

This is obviously not finished.
  • #6
Matrix Operations

First, let's start by denoting the entries in a matrix. In an m X n (m rows, n columns) matrix, the entries can be written as a(row number)(column number), so that the first entry in the first column is denoted by a11. The second entry in this column would then be a21, and so on down to am1. The first entry in the second column would then be a12, and the last entry in the last row is amn.

Example 1

Sum of matrices.

Two matrices, A and B, can be added if each entry has a corresponding entry in the other matrix (i.e. they are the same size). In that case, matrix sums work as with vectors: corresponding entries are added to form the sum.

Example 2

Also, matrices are scaled like vectors: each entry is multiplied by the scalar c.

Example 3

Let A, B, and C be matrices of the same size, and let r and s be scalars.

a. A + B = B + A
b. (A + B) + C = A + (B + C)
c. A + 0 = A
d. r(A + B) = rA + rB
e. (r + s)A = rA + sA
f. r(sA) = (rs)A
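These sum and scalar properties can be spot-checked on random matrices (my own sketch, assuming NumPy; a few random trials are not a proof, just a sanity check):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(2, 3)).astype(float)
B = rng.integers(-5, 5, size=(2, 3)).astype(float)
r, s = 2.0, -3.0

assert np.allclose(A + B, B + A)                 # a. commutativity
assert np.allclose(r * (A + B), r * A + r * B)   # d. scalar distributes over sum
assert np.allclose((r + s) * A, r * A + s * A)   # e. sum of scalars distributes
assert np.allclose(r * (s * A), (r * s) * A)     # f. scalars associate
print("sum and scalar properties hold for this A, B")
```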

Matrix Multiplication

Remember how to multiply an m X n matrix by a vector of n entries; the process is the same for two matrices. When multiplying matrices, the two need not be the same size, but the first must have as many columns as the second has rows (i.e. AB is defined only if A is an m X n matrix and B is an n X p matrix). The result is then a matrix C of size m X p.

Row-Column Rule for computing C
If the product of AB is defined, then the entry in row i and column j of C is the sum of the products of the corresponding entries from row i of A and column j of B.

Example 4
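The row-column rule can be written out as a triple loop and compared against NumPy's built-in product (the helper name is my own; the matrices are made up for illustration):

```python
import numpy as np

def matmul_rowcol(A, B):
    """Row-column rule: C[i, j] = sum over k of A[i, k] * B[k, j]."""
    m, n = A.shape
    n2, p = B.shape
    assert n == n2, "columns of A must match rows of B"
    C = np.zeros((m, p))
    for i in range(m):
        for j in range(p):
            for k in range(n):
                C[i, j] += A[i, k] * B[k, j]
    return C

A = np.array([[2.0,  3.0],
              [1.0, -5.0]])        # 2 x 2
B = np.array([[4.0,  3.0, 6.0],
              [1.0, -2.0, 3.0]])   # 2 x 3
print(matmul_rowcol(A, B))  # a 2 x 3 result, same as A @ B
```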

Properties of Matrix Multiplication
Recall that Im represents the m X m identity matrix and Imx = x for all x in Rm.

Let A be an m X n matrix, and let B and C have sizes for which the indicated sums and products are defined. The term r is just a scalar.

a. A(BC) = (AB)C
b. A(B + C) = AB + AC
c. (B + C)A = BA + CA
d. r(AB) = (rA)B = A(rB)
e. ImA = A = A In

In general:
AB ≠ BA.
AC = AB does not imply C = B.
If AB is the zero matrix, you cannot conclude that either A or B is zero.

Example 5

Powers of Matrix

If A is an n X n matrix and k is a positive integer, then Ak denotes the product of k copies of A.

If A is nonzero and x is in Rn, then Akx is the result of left-multiplying x by A repeatedly, k times. If k = 0, then A0x should be x itself, so A0 is interpreted as the identity matrix.

Transpose of a Matrix
Given an m X n Matrix A, the transpose of A is the n X m matrix, denoted AT, whose columns are formed from the corresponding rows of A.

Example 6

Let A and B denote matrices whose sizes are appropriate for the following sums and products, and let r be a scalar:

a. (AT)T = A
b. (A + B)T = AT + BT
c. (rA)T = rAT
d. (AB)T = BTAT
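The transpose properties are easy to verify numerically; note in property d that the order of the factors reverses. A sketch with matrices of my own choosing (NumPy's `.T` is the transpose):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])          # 3 x 2
B = np.array([[ 1.0, 0.0, 2.0],
              [-1.0, 3.0, 1.0]])    # 2 x 3
r = 4.0

assert np.allclose(A.T.T, A)              # a. transpose twice gives A back
assert np.allclose((r * A).T, r * A.T)    # c. scalars pass through
assert np.allclose((A @ B).T, B.T @ A.T)  # d. note the reversed order
print("transpose properties verified")
```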
  • #7
Inverse of Matrix

Whoa! It has been way too long since I last updated this. I have been too busy and concentrating too much on my other thread, even though it is way behind as well.

Let us speak briefly about matrix inversion; it is a relatively easy concept. It will be in the link below. I have changed format: I will no longer type here and include pictures, but will instead use Word documents with the equation editor.

The Inverse of Matrix

I hope no one will have problems this way; if so, be sure to tell me and I will try a different format.
