MHB System of linear equations, all possible outcomes

Yankel
Hello all,

I need your help. I am trying to put some order into all the possible outcomes of a system of linear equations, and I want to create a diagram or a table for my convenience.

What I want to know, and am not sure of, are things like these:

If the system is homogeneous, and there are more variables than equations, then...

If the system is NOT homogeneous, and there are more variables than equations, then...

If the system is homogeneous, and there are more equations than variables, then...

If the system is NOT homogeneous, and there are more equations than variables, then...

If the system is homogeneous, and there is the same number of equations and variables, then...

If the system is NOT homogeneous, and there is the same number of equations and variables, then...

The answer for each statement should be: no solution, a single solution, an infinite number of solutions, or a combination of two or three of these.

Can you assist me in determining this? Are there any situations I forgot?

Thank you!
 
Here are the answers. Can you supply examples to show that each of the listed outcomes can occur, and proofs that none of the non-listed outcomes can occur?

I am using the abbreviations N for no solutions, S for single solution, I for infinite number of solutions. (For homogeneous systems, a single solution means the trivial solution where all the variables are zero.)

If the system is homogeneous, and there are more variables than equations, then... I

If the system is NOT homogeneous, and there are more variables than equations, then... N or I

If the system is homogeneous, and there are more equations than variables, then... S or I

If the system is NOT homogeneous, and there are more equations than variables, then... N or S or I

If the system is homogeneous, and there is the same number of equations and variables, then... S or I

If the system is NOT homogeneous, and there is the same number of equations and variables, then... N or S or I
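
As a sample for the first case (one of many possible examples): the single homogeneous equation $x + y = 0$ in two variables has the infinite family of solutions

$(x,y) = (t,-t), \quad t \in \Bbb R,$

so the outcome is I, as listed.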
 
Thanks!

Examples should be fairly easy (I think). Proofs on the other hand...
 
Some things I want to point out:

There is only an infinite number of solutions when the vector space is infinite. Finite vector spaces DO exist (over finite fields).
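
For a concrete instance (assuming the field $\Bbb F_2 = \{0,1\}$, with arithmetic mod 2): the homogeneous equation

$x + y = 0$

over $\Bbb F_2$ has exactly two solutions, $(0,0)$ and $(1,1)$, not infinitely many.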

"Number of equations" is really a "bad yardstick" to use. Here is why:

$x + y = 0$
$2x + 2y = 0$

Those are two equations, but the second one doesn't tell us anything more than the first one does.

Here is another example:

$x = 2$
$y = 3$
$x + y = 5$

Those are three equations, but we only need two of them.

Here is a really silly example:

$0 = 1$.

That's an equation...but NO variables. It also can never be true. In this case, the number of equations doesn't really tell us anything, except whoever wrote it (me) is a little soft in the head.

Here is a BETTER way to look at a system of equations:

$Ax = b$.

What we are really interested in is the FUNCTION:

$x \mapsto Ax$

Things we want to know:

Is this function (which we'll also call $A$, just to be confusing) one-to-one?
Is this function onto?
Is $b$ actually in the range of $A$?

Now the number of equations is really the dimension of the space $b$ lives in (the co-domain of $A$).
The number of variables is the dimension of the space $x$ lives in, which is the domain of $A$.
The rank of $A$ is the dimension of the range of $A$, that is, the dimension of what the domain gets mapped onto.
The nullity of $A$ tells us how much reduction takes place: it is the dimension of the kernel (the set of vectors sent to zero).

The rank-nullity theorem says these two things exactly balance: if the rank is $k$ and the domain has dimension $n$, then the nullity is $n-k$.
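
For a quick numeric instance (a made-up $2 \times 3$ example, separate from the matrix discussed below): the matrix

$B = \begin{bmatrix}1&0&0\\0&1&0 \end{bmatrix}$

maps $\Bbb R^3$ onto $\Bbb R^2$, so its rank is 2; rank-nullity then forces its nullity to be $3 - 2 = 1$, and indeed its kernel is the $z$-axis, $\{(0,0,t): t \in \Bbb R\}$.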

In other words, it's clearer to stop thinking about "the number of ____ in the system of equations" and think about the properties of the matrix $A$. We want to know its size, and we want to find a basis for its image and kernel (sometimes all we need to know is "how big are these bases", which is what rank and nullity tell us).

See, the equations are linear combinations of the variables. And that is what vector spaces are all about: linear combinations. We row-reduce to find linearly independent (non-redundant) linear combinations (a small illustration follows below). This actually reduces the number of equations we have to the bare minimum, and THAT's what we want to know, not how many equations we started out with.
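
To illustrate with the redundant pair from earlier ($x + y = 0$ and $2x + 2y = 0$), row reduction exposes the redundancy:

$\begin{bmatrix}1&1\\2&2 \end{bmatrix} \to \begin{bmatrix}1&1\\0&0 \end{bmatrix}$

The zero row shows that the second equation carried no new information; only one independent linear combination remains.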

You should try to do what Opalg suggests, anyway. Let us know where you get stuck.
 
Deveno, is there a linear algebra book that teaches this material from the point of view you just described? Any recommendations?
 
Yankel said:
Deveno, is there a linear algebra book that teaches this material from the point of view you just described? Any recommendations?

The only two books I am familiar enough with to whole-heartedly recommend are:

Linear Algebra Done Right, S. Axler

Linear Algebra (2nd ed.), Hoffman & Kunze

*************

That said, I suspect similar material can be found in a great many texts.

Here's the thing: there is a trend in mathematics, which you may or may not have noticed. For example, take sets: while sets are interesting things in their own right, often what we are mostly occupied with is functions BETWEEN sets. These functions may take many forms, so that we don't even realize there IS a function involved.

For example, say I have two sets with $A \subseteq B$. There is a natural function associated with this, the function:

$f: A \to B$ with $f(a) = a$, for all $a \in A$. This type of function is called an INCLUSION function. An identity function:

$1_A:A \to A$ with $1_A(a) = a$ (note the similarity with the above) reflects the fact that $A$ is included in $A$.

Another example: if we have a set with an equivalence relation $\sim$, there is a natural function:

$f: A \to A/\sim$ given by $f(a) = [a]$ which maps every element of $A$ to the equivalence class that contains it.
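
For a concrete instance (my own illustration, taking $A = \Bbb Z$ and $a \sim b$ to mean $a - b$ is even): the map

$f: \Bbb Z \to \Bbb Z/\sim$ given by $f(a) = [a]$

sends every even integer to the class $[0]$ and every odd integer to the class $[1]$.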

I cannot stress how important these two examples are; they occur in many "disguises" in many areas. Now, in linear algebra, the focus in many courses is on "vectors": you learn how to add them, to calculate their dot product and cross product, and to test sets of vectors for linear independence. But the real "meat" of linear algebra isn't even about vectors (which are pretty simple things, actually), it's about LINEAR TRANSFORMATIONS.

Well, this is an "abstract concept", with very wide applicability. Often, people feel more comfortable with "things they can get their hands on", things they can visualize and relate to the world around them. Now the cool thing about linear transformations is: in a finite-dimensional vector space (such as the plane, or real 3-space), we actually have a "concrete realization" of what a linear transformation IS: we pick a basis (or two, if our domain space and co-domain space have different dimensions), and form the matrix for the linear transformation in that basis (or bases).

So, in a sense, matrices form the "arithmetic" of linear algebra, much like "numbers" form the arithmetic of high-school algebra. For every theorem about vector spaces and linear transformations, we get a corresponding theorem about tuples (essentially, matrices with "one column") and matrices.

I like to think of the dimension of a vector space as telling us the "size" of the space. A linear mapping $L$ will preserve linear combinations:

$L(c_1x_1 + c_2x_2 + \cdots + c_nx_n) = c_1L(x_1) + c_2L(x_2) + \cdots + c_nL(x_n)$.

Now there are basically "two kinds of things" $L$ can do:

1. Preserve the size,
2. Shrink the size.

(Functions can only send one domain element to one image element; they never "expand" the domain.)

As far as where $L$ sends the space (to its image), there are also two things that could happen:

3. $L$ could "cover" all of the target space
4. $L$ covers only PART of the target space.

These 4 pieces of information are what we want, they tell us "the general behavior" of a linear transformation. Let's look at one matrix in detail, to see how this fits together:

Suppose:

$A = \begin{bmatrix}1&2&1\\3&-1&0\\9&4&3 \end{bmatrix}$

If we choose "the typical basis" $\{(1,0,0),(0,1,0),(0,0,1)\}$ for $\Bbb R^3$, this is the matrix for THIS linear transformation:

$L(x,y,z) = (x+2y+z,3x-y,9x+4y+3z)$

If we pick $b = (3,2,13)$, then the equation $Av = b$ is this system of linear equations:

$x + 2y + z = 3$
$3x - y = 2$
$9x + 4y + 3z = 13$.

So, the first thing we want to know is: "does $A$ do any shrinking"? Let's be clear about what this means:

Suppose $Av_1 = Av_2$, with $v_1 \neq v_2$. This means $A$ sends two different vectors to the same one. We can re-write this as:

$Av_1 - Av_2 = 0$, and then because $A$ is linear, $A(v_1 - v_2) = 0$, and $v_1 - v_2$ is non-zero, since $v_1 \neq v_2$.

On the other hand, if $A$ sends some non-zero vector $u$ to the 0-vector, then for any vector $v \neq u$, we have:

$A(u + v) = Au + Av = 0 + Av = Av$, and $u + v \neq v$ since $u \neq 0$. To summarize this:

A matrix $A$ collapses (fails to be one-to-one) if, and only if, it sends a nonzero vector TO the zero vector.

Thus, if $A$ DOES NOT collapse anything, that is, if distinct vectors $u$ give distinct images $Au$, then the only vector $A$ sends to 0 is 0 itself.

$A$ is injective $\iff \text{ker}(A) = \{0\} \iff \text{nullity}(A) = 0$.

So which category does OUR $A$ fall into? To find out, we solve the HOMOGENEOUS system:

$x + 2y + z = 0$
$3x - y = 0$
$9x + 4y + 3z = 0$.

We can do this in different ways; I will row-reduce $A$ to get:

$\text{rref}(A) = \begin{bmatrix}1&0&\frac{1}{7}\\0&1&\frac{3}{7}\\0&0&0 \end{bmatrix}$

which corresponds to the REDUCED SYSTEM OF EQUATIONS:

$x + \dfrac{z}{7} = 0$

$y + \dfrac{3z}{7} = 0$

This tells us that if we pick any value for $z$, say $t$, then $(x,y,z) = \left(\dfrac{-t}{7},\dfrac{-3t}{7},t\right)$

$= t(-\frac{1}{7},-\frac{3}{7},1)$

Note that we have only one "free parameter" ($t$), so the nullity of $A$ is 1, and:

$\text{ker}(A) = \{t(-\frac{1}{7},-\frac{3}{7},1): t \in \Bbb R\}$ that is to say:

$\{(-\frac{1}{7},-\frac{3}{7},1)\}$ is a basis for the null space of $A$.
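
As a quick check (simply substituting this basis vector back into $A$):

$A\left(-\tfrac{1}{7},-\tfrac{3}{7},1\right) = \left(-\tfrac{1}{7}-\tfrac{6}{7}+1,\ -\tfrac{3}{7}+\tfrac{3}{7},\ -\tfrac{9}{7}-\tfrac{12}{7}+3\right) = (0,0,0).$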

This null space is bigger than just the 0-vector (0,0,0), so $A$ falls into category 2: it collapses.

Since we know that $A$ "loses (collapses) one dimension", that leaves us with 2 of the 3 dimensions we started with, and indeed we see the rank of $A$ is 2 (its rref has 2 non-zero rows). This makes sense: 1 + 2 = 3.

Since our target space has 3 dimensions, there's no way $A$ can "fill" all of it, so it also falls into category 4, it only covers part of the target space.

What is the range of $A$?

Well, $A(1,0,0) = (1,3,9)$ and $A(0,1,0) = (2,-1,4)$. So the image of $A$ contains at LEAST the span of these two vectors:

$\{a(1,3,9) + b(2,-1,4): a,b \in \Bbb R\} \subseteq \text{im}(A)$.

A basis is a minimal spanning set, so if these two vectors ((1,3,9) and (2,-1,4)) are linearly independent, they form a basis for the range of $A$, or the COLUMN SPACE. Let's check to see if they ARE linearly independent:

Suppose:

$a(1,3,9) + b(2,-1,4) = 0$, that is:

$(a+2b,3a-b,9a+4b) = (0,0,0)$, so that:

$a+2b = 0$
$3a-b = 0$
$9a+4b = 0$

We have from the first equation: $a = -2b$, so the second equation becomes:

$-6b - b = 0 \implies -7b = 0 \implies b = 0$, and it is then evident that $a = 0$.

So the only solution is $a = b = 0$, so they are indeed linearly independent.

So we have found a basis for $\text{im}(A)$, namely $\{(1,3,9),(2,-1,4)\}$. These two vectors determine a plane in $\Bbb R^3$.
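
In fact (a small computation not in the original post), a normal to that plane is the cross product $(1,3,9) \times (2,-1,4) = (21,14,-7)$, which is parallel to $(3,2,-1)$, so the image of $A$ is exactly the plane

$3x + 2y - z = 0.$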

Now we are in a position to answer: is $(3,2,13)$ in the range of $A$?

If it is, we have:

$a+2b = 3$
$3a-b = 2$
$9a+4b = 13$

We could solve this system, or use the rref of the augmented matrix, which turns out to be:

$\text{rref}(A|b) = \begin{bmatrix}1&0&\frac{1}{7}&|&1\\0&1&\frac{3}{7}&|&1\\0&0&0&|&0 \end{bmatrix}$

We can pick any value for $z$ we like, so let's pick $z = 0$ to make life easier on us. Our reduced non-homogeneous system is equivalent to:

$x + \dfrac{z}{7} = 1$

$y + \dfrac{3z}{7} = 1$

which tells us that $(x,y,z) = (1,1,0)$ is a solution, or, equivalently:

$(1)(1,3,9) + (1)(2,-1,4) = (3,2,13)$, so $b$ is indeed in the range of $A$.

Note the crucial role the numbers 1, 2 and 3 (the nullity, the rank, and the dimension) played in the above. What has happened to 3-space, as $A$ transformed it, is that $A$ shrank the entire line through the origin and $(1,3,-7)$ down to the origin (I used $t = -7$ to clear the fractions), leaving only a plane behind. Any point not on that plane can never be reached by $A$.
 