Is it possible for [A:B1] to have a unique solution...?

  • Thread starter Rijad Hadzic
In summary, the conversation discusses whether two systems of linear equations can have augmented matrices [A:B_1] and [A:B_2] with the same 3x3 coefficient matrix A, where one has a unique solution and the other has many solutions. It is concluded that this is impossible: the row operations that reduce A depend only on A itself, so if one system has a unique solution, so does the other. Geometrically, the unique solution is the single point in R^3 where the three planes defined by the rows of A intersect. The method of finding the solution using Gauss-Jordan elimination is also discussed.
  • #1
Rijad Hadzic

Homework Statement


This is a linear algebra problem. Sorry, I wasn't sure whether to put this in precalc or here.

Consider two systems of linear equations having augmented matrices [A:B_1] and [A:B_2] where the matrix of coefficients of both systems is the same 3x3 matrix A.

Is it possible for [A:B_1] to have a unique solution and [A:B_2] to have many solutions?

Homework Equations

The Attempt at a Solution


I know that the answer is no due to my textbook's answer page, but I'm still a little confused as to why.

Since [A:B_1] has a unique solution, I expect the matrix to reduce to rref by way of Gauss-Jordan elimination, since this is the first chapter and that's the only method we know so far.

That means x1 = r, x2 = s, x3 = t for the first matrix, since the solution is unique.

The second one has the same matrix of coefficients A. Since [A:B_1] has a unique solution, and the matrix A didn't change, it would thus be impossible to have a set of solutions that involves parameters, right?
 
  • #2
Since it is 3x3 and the solution for B_1 is unique, the first row will have a leading 1 in column 1, with zeros below it in the other two entries of that column. In row 2, column 2, you have a leading 1 with zeros above and below it in the same column. Row 3 is the same deal, with a leading 1 in the last entry and 0's above it in the same column.

This is the identity matrix, and it gives a unique solution. This could be different if it weren't 3x3, but since it's 3x3, those are the properties of this matrix. Therefore, it is impossible to have many solutions with the same 3x3 rref matrix... I know my explanation kinda sucks but can anyone confirm this is the reason why? Kinda having trouble with this problem lol
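If it helps to see this concretely, here is a rough Python sketch of Gauss-Jordan elimination on a made-up 3x3 system (my own illustration, not from the textbook): when the solution is unique, the coefficient part of the augmented matrix reduces to the identity, exactly as described above.

```python
# Gauss-Jordan elimination on an augmented 3x3 system [A | b], showing that
# when the solution is unique the coefficient part reduces to the identity.
from fractions import Fraction

def gauss_jordan(aug):
    """Reduce an augmented matrix [A | b] to reduced row-echelon form."""
    rows, cols = len(aug), len(aug[0])
    pivot_row = 0
    for col in range(cols - 1):          # never pivot on the constants column
        # find a row at or below pivot_row with a nonzero entry in this column
        piv = next((r for r in range(pivot_row, rows) if aug[r][col] != 0), None)
        if piv is None:
            continue                     # no pivot here -> a free variable
        aug[pivot_row], aug[piv] = aug[piv], aug[pivot_row]
        scale = aug[pivot_row][col]
        aug[pivot_row] = [x / scale for x in aug[pivot_row]]   # make a leading 1
        for r in range(rows):            # clear the column above and below
            if r != pivot_row and aug[r][col] != 0:
                factor = aug[r][col]
                aug[r] = [x - factor * y for x, y in zip(aug[r], aug[pivot_row])]
        pivot_row += 1
    return aug

# Example system with a unique solution (made-up numbers, det(A) != 0)
A_b = [[Fraction(x) for x in row] for row in
       [[1, 2, 1, 4], [2, 1, 1, 3], [1, 1, 2, 5]]]
rref = gauss_jordan(A_b)
coeff = [row[:3] for row in rref]
identity = [[int(i == j) for j in range(3)] for i in range(3)]
print(coeff == identity)   # True: unique solution -> A reduces to I
```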
 
  • #3
I moved it to Precalculus, since the main distinguishing criterion is derivatives, and this problem has none.

What does it mean for ##A## that ##[A:B_1]## has a unique solution? What can be said about the nature of ##A##? Forget about the normal form and elimination procedures. What does it say about ##A## as a linear function?
 
  • #4
fresh_42 said:
I moved it to Precalculus, since the main distinguishing criterion is derivatives, and this problem has none.

What does it mean for ##A## that ##[A:B_1]## has a unique solution? What can be said about the nature of ##A##?

A must follow the 4 conventions:

- Leading ones in every row
- Above and below those leading ones, in the same column, all 0's
- Bottom rows are 0 rows
- The second leading one is to the right of the first, the 3rd is to the right of the second, and so on.

Since A is a 3x3 matrix, we're not going to be left with any free variables, thus having multiple solutions would be impossible. Not sure if this is right but it's how I'm viewing this question currently..
 
  • #5
##[A:B_1]## has one solution means ##A\cdot x = B_1## has one solution, which is a point. What does this mean for the linear function ##A\, : \,\mathbb{R}^3 \longrightarrow \mathbb{R}^3## or what does it mean geometrically? Do you know the dimension formula for ##A\,##?
 
  • #6
fresh_42 said:
##[A:B_1]## has one solution means ##A\cdot x = B_1## has one solution, which is a point. 1) What does this mean for the linear function ##A\, : \,\mathbb{R}^3 \longrightarrow \mathbb{R}^3## or what does it mean geometrically? 2) Do you know the dimension formula for ##A\,##?

It means that A will have solutions in a plane? Since it is in ##\mathbb{R}^3## ?

And no, I don't think we've gotten around to learning the dimension formula yet...
 
  • #7
fresh_42 said:
##[A:B_1]## has one solution means ##A\cdot x = B_1## has one solution, which is a point. What does this mean for the linear function ##A\, : \,\mathbb{R}^3 \longrightarrow \mathbb{R}^3## or what does it mean geometrically? Do you know the dimension formula for ##A\,##?

Rijad Hadzic said:
It means that A will have solutions in a plane? Since it is in ##\mathbb{R}^3## ?
A is a matrix, so it doesn't have solutions. The solutions we're interested in are for the matrix equation Ax = B1. To clarify what @fresh_42 said, if ##A \cdot x = B_1## has a unique solution, that solution represents a point in ##\mathbb{R^3}##, not a plane or line.

Have you learned how to find such a solution?
 
  • #8
No. Each row vector ##a_i## of ##A## defines a line by ##a_i x = b_{1i}##, not a plane. These are three lines which intersect in a single point. Do you know what this means for the three vectors which define these lines? Are they linearly independent or linearly dependent? How many are dependent? What do they span?
 
  • #9
fresh_42 said:
No. Each row vector ##a_i## of ##A## defines a line by ##a_i x = b_{1i}##, not a plane. These are three lines which intersect in a single point. Do you know what this means for the three vectors which define these lines? Are they linearly independent or linearly dependent? How many are dependent? What do they span?

I mean, I think I kinda understand. So we have 3 lines that intersect at one point, and that point solves our system of linear equations. Linear independence and span are in the next chapter... I guess I'm going to read that first before I continue with this question..

Mark44 said:
To clarify what @fresh_42 said, if ##A \cdot x = B_1## has a unique solution, that solution represents a point in ##\mathbb{R^3}##.

Have you learned how to find such a solution?
I mean, yes, I think so. We would find the solution using Gauss-Jordan, right? Since matrix A gives a unique solution, some point in ##\mathbb{R^3}##, the values x1 = r, x2 = s, x3 = t are the only answers for x1, x2, and x3 that solve the system, and they tell us which point in ##\mathbb{R^3}## solves it. So if we don't change the matrix A, whose rref is the identity matrix, it would be impossible to get any other answer?
 
  • #10
Yes, you can find the solution by the Gaußian elimination procedure or whatever it is called in your book. At the end, you get (as you've said) the identity matrix. Now what happens if you replace ##B_1## with ##B_2## and redo the process? Will anything about ##A## change?

I've even seen such a process, where the identity matrix is simultaneously considered and all operations on ##A## are applied to ##I## as well. So try ##[A:B_1:I]## and see what happens.
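As a concrete (made-up) illustration of the ##[A:B_1:I]## idea: row-reducing ##[A:I]## produces ##A^{-1}##, and since the row operations depend only on ##A##, that same inverse solves the system for any right-hand side. A rough numpy sketch, using `np.linalg.inv` as a stand-in for the hand reduction:

```python
# Row-reducing [A : I] yields A^{-1}; the operations depend only on A, so
# the SAME inverse solves A x = B for any right-hand side B.
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 0.0]])     # made-up invertible matrix
B1 = np.array([4.0, 5.0, 6.0])
B2 = np.array([1.0, 1.0, 1.0])

A_inv = np.linalg.inv(A)            # what the [A : I] reduction produces
x1 = A_inv @ B1                     # unique solution of [A : B1]
x2 = A_inv @ B2                     # unique solution of [A : B2]

# Both systems are solved by the SAME A_inv: changing B changes the answer,
# never the number of answers.
print(np.allclose(A @ x1, B1), np.allclose(A @ x2, B2))
```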
 
  • #11
fresh_42 said:
Yes, you can find the solution by the Gaußian elimination procedure or whatever it is called in your book. At the end, you get (as you've said) the identity matrix. Now what happens if you substitute ##B_1## by ##B_2## and redo the process? Will anything on ##A## change?

Yes! For B_2 to be a solution to A, A is going to have to be different.

If B_1 was a solution to the identity matrix A, since A is the identity matrix that means B_1 can be the only solution to A.
 
  • #12
Rijad Hadzic said:
A must follow the 4 conventions:

- Leading ones in every row
- Above and below those leading ones, in the same column, all 0's
- Bottom rows are 0 rows
- The second leading one is to the right of the first, the 3rd is to the right of the second, and so on.

Since A is a 3x3 matrix, we're not going to be left with any free variables, thus having multiple solutions would be impossible. Not sure if this is right but it's how I'm viewing this question currently..

An augmented matrix does not need to follow those rules you wrote. Before you perform row operations, you can have a general 3x3 matrix A.

To answer the question, look at how you would deal with the 3x3 linear system ##Ax = b##. When you write it all out you get three linear equations in the three unknowns. If we write the system as
$$ \begin{bmatrix} a_{11} & a_{12} & a_{13}\\
a_{21} & a_{22} & a_{23} \\
a_{31} & a_{32} & a_{33}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix}
$$
then in order to have a unique solution, at least one element ##a_{i1}## in the first column must be non-zero (because if not, ##x_1## is not actually part of the system of equations and so can be anything---making the solution non-unique). So, without loss of generality we can assume that ##a_{11} \neq 0##; if not, interchange the rows to make that happen. The first equation reads as ##a_{11} x_1 + a_{12} x_2 + a_{13} x_3 = b_1##. The coefficient of ##x_1## is nonzero, so we can solve for ##x_1## in terms of ##x_2## and ##x_3##:
$$x_1 = \frac{b_1}{a_{11}} - \frac{a_{12}}{a_{11}} x_2 - \frac{a_{13}}{a_{11}} x_3$$
Substituting that expression for ##x_1## into the other two equations will give us a 2x2 system involving ##x_2, x_3## alone. This new system will have an altered 2x2 coefficient matrix on the left and a new 2x1 right-hand-side, say
$$\begin{bmatrix} a'_{22} & a'_{23} \\ a'_{32} & a'_{33} \end{bmatrix} \begin{bmatrix} x_2 \\ x_3 \end{bmatrix}
= \begin{bmatrix} b'_2 \\ b'_3 \end{bmatrix}
$$
Again, if the solution is unique, one of the elements ##a'_{22}, a'_{32}## must be nonzero, and so we can solve for ##x_2## in terms of ##x_3##. When we substitute that expression for ##x_2## into the other equation we obtain a single equation involving ##x_3## alone. If that has a unique solution the coefficient of ##x_3## must be nonzero.

All that is what happens if we have a right-hand-side ##b_1, b_2, b_3## in the original system. However, if we change to a new right-hand-side ##d_1, d_2, d_3##, nothing at all changes on the left when we repeat the steps outlined above. What does that tell you about solutions to the new system?

What happens if the original system does not have a unique solution, or does not have any solution at all? Well, in the course of our solution procedure we must hit a matrix or submatrix in which one of the columns is all zero, so that a particular ##x## does not appear at all in the system. Again, that affects only the left-hand-sides of the equations and so does not change when we go to a new right-hand-side. Think carefully about what that would mean about properties of solutions for the original linear system.
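The substitution steps above can be checked numerically. A small Python sketch (my own made-up 3x3 example): eliminating ##x_1## produces a 2x2 block whose entries depend only on ##A##, never on the right-hand side.

```python
# Eliminating x1 from rows 2 and 3 of a 3x3 system: the resulting 2x2
# coefficient block a'_{ij} = a_{ij} - a_{i1} a_{1j} / a_{11} depends only
# on A, while only the right-hand side b'_i depends on b.
def reduce_first_variable(A, b):
    """Eliminate x1 from rows 2 and 3; return the 2x2 block and new RHS."""
    a11 = A[0][0]
    assert a11 != 0, "swap rows first if a11 is zero"
    block = [[A[i][j] - A[i][0] * A[0][j] / a11 for j in (1, 2)] for i in (1, 2)]
    rhs = [b[i] - A[i][0] * b[0] / a11 for i in (1, 2)]
    return block, rhs

A = [[2.0, 1.0, 1.0], [4.0, 3.0, 1.0], [2.0, 2.0, 5.0]]   # made-up numbers
block1, rhs1 = reduce_first_variable(A, [1.0, 2.0, 3.0])
block2, rhs2 = reduce_first_variable(A, [9.0, 9.0, 9.0])
print(block1 == block2)   # True: a new b leaves the left side unchanged
```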
 
  • #13
Rijad Hadzic said:
Yes! For B_2 to be a solution to A, A is going to have to be different.
Yes, but the row operations on ##A## are exactly the same; a different ##B_2## just gives a different ##x##.
 
  • #14
Rijad Hadzic said:
Yes! For B_2 to be a solution to A, A is going to have to be different.
This doesn't make good sense. B2 is not a solution to A -- A is just a matrix, not an equation. B1 and B2 are the vectors of constants on the right sides of the two augmented matrices. There is some vector x that is a solution to the equation Ax = B2.

To make things a little clearer, I hope, you have two different systems:
Ax1 = B1
Ax2 = B2
x1 in the first equation doesn't have to be the same as x2 in the second equation.

Rijad Hadzic said:
If B_1 was a solution to the identity matrix A, since A is the identity matrix that means B_1 can be the only solution to A.
The words you are writing don't mean anything.
"If B_1 was a solution to the identity matrix A"
B1 is not a solution -- it is the vector of constants in the augmented matrix.
A is not the identity matrix.
"since A is the identity matrix that means B_1 can be the only solution to A" -- Again, A is not the identity matrix, and B1 is not a solution of the equation Ax = B1. Some vector x is the solution.
 
  • #15
OK! I think I'm starting to understand. Thank you for all the patience, guys! This was pretty hard for me to wrap my mind around..

So A* x = B1

Here, x is our solution

A * p = B2

here, p is our solution.

So the only solutions to B1 will be x, and the only solutions to B2 will be p.

So the original question was "Is it possible for [A:b1] to have a unique solution, and [A:b2] to have many solutions?"

Since all of the steps that it took to find the solution to A*x = B1 are the same for [A:B2], that must mean B2 is also going to have a unique solution?

Thanks again guys, like I said this has been really hard to wrap my mind around. My book hasn't even introduced notation like "A*x = B1" to me before (I haven't seen the matrix-times-x notation at all).

But is what I said above getting closer to the real reason/answer of the question?
 
  • #16
Ray Vickson said:
All that is what happens if we have a right-hand-side ##b_1, b_2, b_3## in the original system. However, if we change to a new right-hand-side ##d_1, d_2, d_3##, nothing at all changes on the left when we repeat the steps outlined above. What does that tell you about solutions to the new system?

What happens if the original system does not have a unique solution, or does not have any solution at all? Well, in the course of our solution procedure we must hit a matrix or submatrix in which one of the columns is all zero, so that a particular ##x## does not appear at all in the system. Again, that affects only the left-hand-sides of the equations and so does not change when we go to a new right-hand-side. Think carefully about what that would mean about properties of solutions for the original linear system.
"What does that tell you about solutions to the new system?"
So if x was a solution to the first system, then y can be the solution to the new one?
 
  • #17
How do you recognize how many solutions there are? As I read above, it is the number of free parameters. So since ##[A:B_1]## has no free parameters, and the operations for ##[A:B_2]## would be the same, how could it be possible to get free parameters for this one? In a concrete example, I still recommend doing the operations on ##[A:B_1:I]##. The ##I## is a control matrix which, when transformed, has a certain meaning in the lessons ahead. I learned to use ##[I:A:B_1]##, but the order isn't important, as only the operations on ##A## determine the steps to do.
 
  • #18
Rijad Hadzic said:
OK! I think I'm starting to understand thank you for all the patience guys! This was pretty hard for me to wrap my mind around..

So A* x = B1

Here, x is our solution

A * p = B2

here, p is our solution.
Yes to both.
Rijad Hadzic said:
So the only solutions to B1 will be x, and the only solutions to B2 will be p.
The only solution to the equation Ax = B1 is x, and the only solution to the equation Ap = B2 is p.
Rijad Hadzic said:
So the original question was "Is it possible for [A:b1] to have a unique solution, and [A:b2] to have many solutions?"

Since all of the steps that it took to find the solution to A*x = B1 are the same for [A:B2], that must mean B2 is also going to have a unique solution?
Yes.
Rijad Hadzic said:
Thanks again guys, like I said this has been really hard to wrap my mind around. My book hasn't even introduced notation like "A*x = B1" to me before (I haven't seen the matrix-times-x notation at all).

But is what I said above getting closer to the real reason/answer of the question?
Pretty much.

The notations Ax = b and [A:b] are really just shorthand for a system of equations, such as this system:
##a_{11}x_1 + a_{12}x_2 + a_{13}x_3 = b_1##
##a_{21}x_1 + a_{22}x_2 + a_{23}x_3 = b_2##
##a_{31}x_1 + a_{32}x_2 + a_{33}x_3 = b_3##

The ##a_{ij}##'s are the coefficients of your 3x3 matrix. The ##x_i##'s are the components of your solution vector. The ##b_i##'s are the constant terms on the right side.

As an augmented matrix, the system above would look like this:
##\begin{bmatrix} a_{11} & a_{12} & a_{13} & | & b_1 \\
a_{21} & a_{22} & a_{23} & | & b_2 \\
a_{31} & a_{32} & a_{33} & | & b_3 \end{bmatrix}##
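A tiny Python check (my own example, not from the thread) that the product ##Ax## really is shorthand for those three scalar equations:

```python
# Row i of the product A x is a_{i1} x_1 + a_{i2} x_2 + a_{i3} x_3, i.e.
# exactly the left-hand side of scalar equation i.
A = [[1, 2, 3],
     [0, 1, 4],
     [5, 6, 0]]        # made-up coefficients
x = [1, 1, 1]          # made-up solution vector

# Compute A x by expanding each row as a scalar equation
b = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
print(b)   # [6, 5, 11]
```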
 
  • #19
Rijad Hadzic said:
"What does that tell you about solutions to the new system?"
So if x was a solution to the first system, then y can be the solution to the new one?

I said "What happens if the original system does not have a unique solution, or does not have any solution at all?" Have you used that assumption in your answer?

To pin down the concept just invent for yourself a simple 2x2 linear system that has a non-unique solution for some particular right-hand-side. Now change to another right-hand-side. Does the new system have a unique solution? Can you find a new right-hand-side for which the new system has no solutions at all?
 
  • #20
Ok guys sorry to bump this thread but I wanted to know if what I'm writing makes sense

So forget the matrices, I went and wrote 2 random linear equations

3x + 2y = 1
2x + y = 2

solving for y

y = (1/2) - (3x/2)
y = 2 - 2x

The question asks if 3x + 2y = 1 and 2x + y = 2 have a unique solution, meaning at some point these two lines intersect, and they will never intersect again.

Now if I changed the equations to

3x + 2y = 4
2x + y = 3

This is analogous to [A:B1], [A:B2], right?? Anyways, I can clearly tell from plotting the next set of equations that going from B1 -> B2 only changes the position of the point you get when x = 0; it doesn't magically alter the slope of either equation in some way that would make them parallel to each other (no solutions) or coincident (multiple solutions)
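Those two 2x2 systems can be checked with a few lines of numpy (a sketch using `np.linalg.solve`): both right-hand sides give exactly one intersection point, because the coefficient matrix, and hence the slopes, never changes.

```python
# The same coefficient matrix A with two different right-hand sides:
# each system has exactly one solution (one intersection point).
import numpy as np

A = np.array([[3.0, 2.0],
              [2.0, 1.0]])          # coefficients of 3x+2y and 2x+y
b1 = np.array([1.0, 2.0])           # first right-hand side
b2 = np.array([4.0, 3.0])           # second right-hand side

p1 = np.linalg.solve(A, b1)         # unique intersection for [A : B1]
p2 = np.linalg.solve(A, b2)         # unique intersection for [A : B2]
print(p1, p2)
```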

again sorry for bumping this, but does my logic here make more sense than in my previous post above it?

Also Ray, I've done what you've told me to do but have gotten some weird results. Maybe I'm just not understanding them yet..
 
  • #21
Rijad Hadzic said:
Ok guys sorry to bump this thread but I wanted to know if what I'm writing makes sense

So forget the matrices, I went and wrote 2 random linear equations

3x + 2y = 1
2x + y = 2

solving for y

y = (1/2) - (3X/2)
y = 2 -2X

The question asks if 3x + 2y = 1 and 2x + y = 2 have a unique solution, meaning at some point these two lines intersect, and they will never intersect again.

Now if I changed the equations to

3x + 2y = 4
2x + y = 3

This is analogous to [A:B1], [A:B2] right??
Right.
Rijad Hadzic said:
Anyways, I can clearly tell from plotting the next set of equations that going from B1 -> B2 only changes the position of the point you get when x = 0; it doesn't magically alter the slope of either equation in some way that would make them parallel to each other (no solutions) or coincident (multiple solutions)
Also right. Changing the right side just changes the point of intersection (given that the solution of the first system of equations was unique).
Rijad Hadzic said:
again sorry for bumping this, but does my logic here make more sense than in my previous post above it?
With your system of two equations in two unknowns, the geometry is that you have two lines in the plane. The lines could a) not intersect at all, b) intersect at a single point, or c) intersect at every point (the two equations are equivalent). Since the slopes of the two lines in your examples are different, the lines have to intersect. Changing the constants on the right side just alters the position of the point of intersection.

For the original question in post #1, you have three equations in three unknowns. The geometry here is that you have three planes in space (in ##\mathbb R^3##). There are a few more possibilities in this scenario: a) the planes don't intersect at all, meaning that no point is on all three planes, b) the planes intersect at a single point (a unique solution to the system), c) all three planes intersect in a line (giving an infinite number of solutions along that line), or d) all three planes coincide completely (giving a "double infinity" of points along any of the planes).

Rijad Hadzic said:
Also Ray, I've done what you've told me to do but have gotten some weird results. Maybe I'm just not understanding them yet..
Ray's question is, what happens if the original system does not have a unique solution (scenarios c and d, above) or doesn't have any solution at all (scenario a). Will changing the constants on the right side cause this new system to now have a unique solution?
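A concrete (made-up) 2x2 example of that singular case in numpy: with a rank-deficient coefficient matrix, changing the right-hand side switches between "no solutions" and "infinitely many", but can never produce a unique solution.

```python
# With a SINGULAR coefficient matrix, the right-hand side only decides
# between inconsistent (no solutions) and consistent with a free variable
# (infinitely many solutions); a unique solution is impossible.
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])                 # rows are dependent: rank 1

b_consistent = np.array([3.0, 6.0])        # matches the row dependency
b_inconsistent = np.array([3.0, 5.0])      # breaks the row dependency

rank_A = np.linalg.matrix_rank(A)
rank_aug1 = np.linalg.matrix_rank(np.column_stack([A, b_consistent]))
rank_aug2 = np.linalg.matrix_rank(np.column_stack([A, b_inconsistent]))

print(rank_aug1 == rank_A)   # True  -> consistent, with a free variable
print(rank_aug2 == rank_A)   # False -> inconsistent, no solutions
```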
 
