Linear Algebra augmented matrix

AI Thread Summary
The discussion revolves around solving a 3x3 augmented matrix for a system of linear equations and determining the conditions for the existence of solutions based on the values of variables a and b. Participants explore the implications of the rank of the matrix and how it relates to the number of solutions, noting that if the rank of the coefficient matrix equals the rank of the augmented matrix and is less than the number of variables, there are infinitely many solutions. Additionally, the conversation covers proving that a set of vectors defined by the equation x + y + z = 0 forms a subspace of R^3, with participants clarifying the requirements for linear dependence among given vectors. The discussion also includes practical steps for row reduction and finding specific values for a and b that yield different types of solutions. Overall, the thread emphasizes understanding the properties of linear systems and vector spaces in linear algebra.
ultima9999
1. The augmented matrix for a system of linear equations in the variables x, y and z is given below:

[  1  -1    1  |  2 ]
[  0   2  a-1  |  4 ]
[ -1   3    1  |  b ]

*It's a 3x3 coefficient matrix with the augmented column after the bar.*

For which values of a and b does the system have:
a) no solutions;
b) exactly one solution;
c) infinitely many solutions?

For the values of a and b in c), find all solutions of the system.

2. a) Show that the set of all vectors (x, y, z) such that x + y + z = 0 forms a subspace of R^3 (Euclidean space).

b) Let u = (2λ, -1, -1), v = (-1, 2λ, -1) and w = (-1, -1, 2λ).

i) For what real values of λ do the vectors u, v and w form a linearly dependent set in R^3?

ii) For each of these values, express one of the vectors as a linear combination of the other two.
 
Where are you having trouble?
 
For 1., I'm not sure where to start. I know it is an inhomogeneous system, with the left part of the matrix being "A", the unknowns x, y, z forming the vector x, and the right part of the matrix being b.
Then if Rank(A) = Rank(A|b) and this common rank is less than the number of unknowns, there are infinitely many solutions; if it equals the number of unknowns, there is exactly one solution. And if Rank(A) ≠ Rank(A|b), there are no solutions.
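As a side note, the rank criterion just stated can be tried out directly with SymPy. This is only an illustration I'm adding here, not part of the problem; the trial values a = 0, b = 0 are arbitrary and are not the answer:

```python
from sympy import Matrix

a_val, b_val = 0, 0  # arbitrary trial values, not the answer to the exercise
A = Matrix([[1, -1, 1],
            [0, 2, a_val - 1],
            [-1, 3, 1]])
aug = A.row_join(Matrix([2, 4, b_val]))  # the augmented matrix (A|b)

r, r_aug, n = A.rank(), aug.rank(), A.cols
if r != r_aug:
    print("no solutions")
elif r == n:
    print("exactly one solution")
else:
    print("infinitely many solutions")
```

For these trial values it prints "exactly one solution"; plugging in other a and b reproduces the other two cases.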

But the problem for me is how to find the unknowns to get to that stage?

For 2a), I tried to go:

Let S be the set of all vectors of the form (0, 0, 0) and let u and v be vectors in S.

Therefore: u = (u, 0, 0) and v = (v, 0, 0), u, v є R

Now we test:
i) u + v = (u, 0, 0) + (v, 0, 0) = (u + v, 0, 0)
Therefore u + v є S (since u + v є R)

ii) ku = k(u, 0, 0), k є R
= (ku, 0, 0)

Therefore: ku є S

Therefore: S is a subspace of R^3

and for 2b), I'm not sure how to find the values. If there weren't the unknown λ and it just asked me to determine whether the set was linearly dependent, I could do it; but this one is puzzling me.
 
How about starting by doing exactly what you would do to solve the matrix equation:
\left(\begin{array}{ccc|c} 1 & -1 & 1 & 2 \\ 0 & 2 & a - 1 & 4 \\ -1 & 3 & 1 & b\end{array}\right)

Since you already have a 1 in the "first column, first row", and a 0 in the "first column, second row", you need to get a 0 in the "first column, third row" and you can do that by adding the first row to the last row.

Continue row-reducing as far as you can. At some point you may need to divide by something involving a; you can do that, and get a single unique solution, as long as that "something" is not 0. For what value of a is that "something" 0? In that case you wind up with a third row consisting of all 0s except possibly in the fourth column. What does that mean? What happens if the fourth-column entry is also 0, and for what values of a and b does that happen?
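The two row operations described above can be carried out symbolically, with a and b left as symbols. A minimal sketch of my own using SymPy (the comments show what each row becomes):

```python
from sympy import symbols, Matrix

a, b = symbols('a b')
M = Matrix([[ 1, -1,     1, 2],
            [ 0,  2, a - 1, 4],
            [-1,  3,     1, b]])

# Add row 1 to row 3: row 3 becomes (0, 2, 2, b + 2).
M[2, :] = M[2, :] + M[0, :]
# Subtract row 2 from row 3: row 3 becomes (0, 0, 3 - a, b - 2).
M[2, :] = M[2, :] - M[1, :]
print(M.row(2))
```

The last row displays exactly the "something involving a" that decides between the three cases.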
 

I do not really understand your working for question 2a. Does S represent the set of all vectors (x, y, z) such that x + y + z = 0? If yes, then your vectors u and v are not in S.
Instead, why not define u = (x_{1} \ y_{1} \ z_{1}) and v = (x_{2} \ y_{2} \ z_{2}), where x_{1} + \ y_{1} + \ z_{1} = 0 \ and \ x_{2} +\ y_{2} +\ z_{2} = 0. You can then carry on proving from here!

For question 2b, just do what you normally would had there not been any unknowns! After you finish the row operations, set the last row to be a zero row to determine the values of λ for linear dependence. Do you know why this is done?
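Equivalently (this is a sketch I'm adding, not from the thread), three vectors in R^3 are linearly dependent exactly when the matrix having them as rows is singular, so one can solve det = 0 for the parameter directly; `lam` stands in for λ:

```python
from sympy import symbols, Matrix, solve

lam = symbols('lam')
V = Matrix([[2*lam,    -1,    -1],
            [   -1, 2*lam,    -1],
            [   -1,    -1, 2*lam]])

# The rows are linearly dependent exactly when det(V) = 0.
values = solve(V.det(), lam)
print(values)  # the real values of the parameter giving dependence
```

Setting the last row of the row-reduced matrix to zero, as suggested above, yields the same polynomial equation.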
 
For 2a), I wrote:

let S be the set of all vectors of the form (x, y, z) such that x + y + z = 0; x, y, z є R; and let u and v be vectors in S

Therefore: u = (x1, y1, z1) and v = (x2, y2, z2), where x1 + y1 + z1 = 0 and x2 + y2 + z2 = 0, and x1, x2, y1, y2, z1, z2 є R

i) u + v = (x1 + x2, y1 + y2, z1 + z2)
Therefore u + v є S (since x1+x2, y1+y2, z1+z2 є R)

ii)ku = (kx1, ky1, kz1), k є R
Therefore: ku є S (since kx1, ky1, kz1 є R)

Therefore: S is a subspace of R^3


Working on 2b). For 1, I row reduced, but then I get weird forms of a in the 3rd column.

Btw, where can I read a tutorial on how to use the mathematical text that you guys use?
 
ultima9999 said:
For 2a), I wrote:

let S be the set of all vectors of the form (x, y, z) such that x + y + z = 0; x, y, z є R; and let u and v be vectors in S

Therefore: u = (x1, y1, z1) and v = (x2, y2, z2), where x1 + y1 + z1 = 0 and x2 + y2 + z2 = 0, and x1, x2, y1, y2, z1, z2 є R

i) u + v = (x1 + x2, y1 + y2, z1 + z2)
Therefore u + v є S (since x1+x2, y1+y2, z1+z2 є R)

It doesn't follow that the sum is in S just because the individual components are in R. You need to show that the sum vector also satisfies the definition of S: that (x1+ x2)+ (y1+y2)+ (z1+ z2)= 0.

ii)ku = (kx1, ky1, kz1), k є R
Therefore: ku є S (since kx1, ky1, kz1 є R)

Once again, just saying that the components are in R is not sufficient. You must show that kx1+ ky1+ kz1= 0.

Therefore: S is a subspace of R^3

Yes, provided you clean up (i) and (ii).
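To illustrate the two closure conditions HallsofIvy is asking for, here is a small symbolic check of my own (not part of the thread): eliminate z via the constraint x + y + z = 0 and confirm that both component sums are identically zero.

```python
from sympy import symbols, expand

x1, y1, x2, y2, k = symbols('x1 y1 x2 y2 k')
# Points of S satisfy x + y + z = 0, so write z = -x - y.
z1 = -x1 - y1
z2 = -x2 - y2

# Closure under addition: the component sum of u + v vanishes.
assert expand((x1 + x2) + (y1 + y2) + (z1 + z2)) == 0
# Closure under scalar multiplication: the component sum of k*u vanishes.
assert expand(k*x1 + k*y1 + k*z1) == 0
print("both closure sums are identically zero")
```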


Working on 2b). For 1, I row reduced, but then I get weird forms of a in the 3rd column.
You have u = (2x, -1, -1), v = (-1, 2x, -1) and w = (-1, -1, 2x) and are asked for what values of x they are linearly dependent. Row reducing will work and can be simplified by putting (-1, -1, 2x) as the first row, (-1, 2x, -1) as the second row, and (2x, -1, -1) as the third row. Doing that I get -4x^2 + 2x - 2 as the remaining number in the third row, and these will be linearly dependent if that is 0.


Btw, where can I read a tutorial on how to use the mathematical text that you guys use?
There is a tutorial on LaTeX formatting in the "Tutorials" section under "Science Education":
https://www.physicsforums.com/showthread.php?t=8997
 
HallsofIvy said:
It doesn't follow that the sum is in S just because the indvidual components are in R. You need to show that the sum vector also satisfies the definition of S: that (x1+ x2)+ (y1+y2)+ (z1+ z2)= 0. Once again, just saying that the components are in R is not sufficient. You must show that kx1+ ky1+ kz1= 0. Yes, provided you clean up (i) and (ii).

Ok, thanks!
HallsofIvy said:
You have u = (2x, -1, -1), v = (-1, 2x, -1) and w = (-1, -1, 2x) and are asked for what values of x they are linearly dependent. Row reducing will work and can be simplified by putting (-1, -1, 2x) as the first row, (-1, 2x, -1) as the second row, and (2x, -1, -1) as the third row. Doing that I get -4x^2 + 2x - 2 as the remaining number in the third row, and these will be linearly dependent if that is 0.

Yeah, I worked that out myself, and was just about to post an update. I apologize for not posting my remark clearly; it should have been:

"I'm working on 2b now.
However, for 1., I row reduced, but then I get weird forms of a in the 3rd column."

Anyway, I got λ = 1 or λ = -1/2. Furthermore, when λ = -1/2, the second row is a zero row as well, from my final matrix (using λ for the unknown parameter):

\left(\begin{array}{ccc|c}1 & 1 & -2\lambda & 0\\0 & 2\lambda + 1 & -2\lambda - 1 & 0\\0 & 0 & 4\lambda^2 - 2\lambda - 2 & 0\end{array}\right)
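As a quick sanity check I'll add here: the diagonal entries 2λ + 1 and 4λ² - 2λ - 2 of that reduced matrix can be evaluated at the two roots, confirming which rows vanish in each case.

```python
from sympy import Rational

def last_two_pivots(lam):
    # Diagonal entries of rows 2 and 3 in the reduced matrix above.
    return (2*lam + 1, 4*lam**2 - 2*lam - 2)

print(last_two_pivots(1))                # (3, 0): only the last row vanishes
print(last_two_pivots(Rational(-1, 2)))  # (0, 0): rows 2 and 3 both vanish
```

So for λ = 1 the solution space is one-dimensional, while for λ = -1/2 two rows vanish and it is two-dimensional.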
HallsofIvy said:
There is a tutorial on LaTeX formatting in the "Tutorials" section under "Science Education":
https://www.physicsforums.com/showthread.php?t=8997

Thanks dude, you've been a great help!
 
So for \lambda = 1,

\left(\begin{array}{ccc|c}1 & 1 & -2 & 0\\0 & 3 & -3 & 0\\0 & 0 & 0 & 0\end{array}\right)

c_{3} is arbitrary; c_{3} = t, t \in R
3c_{2} = 3c_{3}, so c_{2} = t
c_{1} = 2c_{3} - c_{2} = 2t - t = t

(because c_{1}\left(\begin{array}{ccc}2\lambda & -1 & -1\end{array}\right) + c_{2}\left(\begin{array}{ccc}-1 & 2\lambda & -1\end{array}\right) + c_{3}\left(\begin{array}{ccc}-1 & -1 & 2\lambda\end{array}\right) = \mathbf{0})

solution space: \left(\begin{array}{ccc}c_{1} & c_{2} & c_{3}\end{array}\right) = t\left(\begin{array}{ccc}1 & 1 & 1\end{array}\right) where t \in R

let t = 1: \left(\begin{array}{ccc}c_{1} & c_{2} & c_{3}\end{array}\right) = \left(\begin{array}{ccc}1 & 1 & 1\end{array}\right)

Substituting these coefficients back into the dependence relation:
\left(\begin{array}{ccc}2\lambda & -1 & -1\end{array}\right) + \left(\begin{array}{ccc}-1 & 2\lambda & -1\end{array}\right) + \left(\begin{array}{ccc}-1 & -1 & 2\lambda\end{array}\right) = \mathbf{0}
\left(\begin{array}{ccc}2\lambda & -1 & -1\end{array}\right) = - \left(\begin{array}{ccc}-1 & 2\lambda & -1\end{array}\right) - \left(\begin{array}{ccc}-1 & -1 & 2\lambda\end{array}\right)

And I'll leave out the λ = -1/2 case because it takes ages to type out all that LaTeX...
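For what it's worth, the λ = 1 relation above is easy to verify numerically. A small sketch of my own:

```python
import numpy as np

lam = 1.0
u = np.array([2*lam, -1.0, -1.0])
v = np.array([-1.0, 2*lam, -1.0])
w = np.array([-1.0, -1.0, 2*lam])

assert np.allclose(u + v + w, 0)  # the dependence relation with c = (1, 1, 1)
assert np.allclose(u, -v - w)     # u written as a combination of v and w
print("checks pass for lam = 1")
```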
 
