Finding values of t for which the set is linearly independent

The discussion centers on determining the values of t for which the set of vectors s = {(t,1,1),(1,t,1),(1,1,t)} is linearly independent. Participants suggest using a matrix representation and calculating the determinant, which leads to the polynomial 2 - 3t + t³, indicating linear dependence when t equals -2 or 1. The conversation also touches on row reduction techniques and alternative methods like the outer product to assess linear independence. It is emphasized that a zero determinant signifies linear dependence, and that care must be taken with specific values of t during row reduction. Ultimately, the consensus is that the set is linearly dependent only at t = -2 and t = 1, and linearly independent for all other values of t.
mitch_1211
Hey, I have the set

s = {(t,1,1),(1,t,1),(1,1,t)} and I want to find the values of t for which this set is linearly independent.
For a set of vectors containing only numbers I would set up c1v1 + c2v2 + ... + cnvn = 0, and for linear independence I need the only solution to be c1 = c2 = ... = cn = 0.

So then I put this into a matrix, (v1 v2 v3 ... vn)(c) = (0), or the augmented form (v1 v2 v3 ... vn | 0), and reduce it to determine whether there are infinitely many solutions, a unique solution, or no solution.

I can set up the matrices fine with the t's, but I'm not sure how to go about solving the system. I'm also not sure whether the matrix approach is correct.

Mitch
 
Hi mitch_1211! :smile:

You likely obtained the system

\left(\begin{array}{ccc} t & 1 & 1\\ 1 & t & 1\\ 1 & 1 & t\\ \end{array}\right)

What's your problem in reducing this matrix? It should be analogous to the case where t is just a number...
 
micromass said:
Hi mitch_1211! :smile:

You likely obtained the system

\left(\begin{array}{ccc} t & 1 & 1\\ 1 & t & 1\\ 1 & 1 & t\\ \end{array}\right)

What's your problem in reducing this matrix? It should be analogous to the case where t is just a number...

I'm not sure how to get a leading one in the top left corner or how to 'zero out' the lower ones in the first column
 
If there was a 2 instead of a t, you would be able to solve it, right? Now you just do the same thing!

For example, if there was a 2 instead of a t, then you would get a 1 in the upper-left corner by dividing by 2. Now you do the same: you divide the row by t... (note that this requires t to be nonzero! So for t = 0 you'll have to do something different).
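Not from the thread, but the t = 0 snag mentioned above can be seen concretely: a row reduction that swaps in a nonzero pivot row handles every value of t. A minimal Python sketch using exact arithmetic (the helper name row_reduce is my own):

```python
from fractions import Fraction

def row_reduce(M):
    """Reduce a 3x3 matrix with exact arithmetic and return its rank
    (rank 3 means the three columns are linearly independent)."""
    M = [[Fraction(x) for x in row] for row in M]
    rank = 0
    for col in range(3):
        # find a row with a nonzero pivot; swapping handles the t = 0 case
        pivot = next((r for r in range(rank, 3) if M[r][col] != 0), None)
        if pivot is None:
            continue
        M[rank], M[pivot] = M[pivot], M[rank]
        M[rank] = [x / M[rank][col] for x in M[rank]]  # leading 1 in this row
        for r in range(3):
            if r != rank and M[r][col] != 0:  # zero out the rest of the column
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

# rank 3 (independent) except at t = 1 (rank 1) and t = -2 (rank 2);
# note t = 0 causes no trouble because of the row swap
for t in (0, 1, -2, 3):
    print(t, row_reduce([[t, 1, 1], [1, t, 1], [1, 1, t]]))
```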
 
micromass said:
Now you do the same: you divide the row by t...

that makes sense now! thank you so much
 
If the rows/columns are linearly dependent then the determinant will be zero.
So can't you just calculate the determinant
det{(t,1,1),(1,t,1),(1,1,t)} = 2 - 3t + t³
and solve for when it's zero:
2 - 3t + t³ = 0 ⇒ t = 1 or t = -2 (there's a double zero at t = 1)

So the set is linearly dependent only for t = -2 or t = 1, and linearly independent for every other value of t.
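As a quick sanity check (my own sketch, not from the thread; the helper names det3 and s_matrix are hypothetical), a cofactor expansion in plain Python confirms the determinant matches the factored cubic (t - 1)²(t + 2):

```python
def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def s_matrix(t):
    return [[t, 1, 1], [1, t, 1], [1, 1, t]]

# the determinant agrees with 2 - 3t + t^3 = (t - 1)^2 (t + 2) for every t,
# so it vanishes exactly at t = 1 and t = -2
for t in range(-3, 4):
    assert det3(s_matrix(t)) == 2 - 3 * t + t**3 == (t - 1) ** 2 * (t + 2)
    assert (det3(s_matrix(t)) == 0) == (t in (1, -2))
```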

That said, I haven't manually done a row reduction since 1st year uni
(which is an embarrassingly long time ago),
so below are the elementary row operations that reduce the matrix.
Care needs to be taken if t = -1, 0, or 1.
But the final result clearly shows the same conclusion: dependence only at t = -2 and t = 1.

\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -1 & \frac{1-t}{t} & 1 \end{array}\right) \left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & -\frac{t}{(t-1)(t+1)} & 0 \\ 0 & 0 & 1 \end{array}\right) \left(\begin{array}{ccc} 1 & 0 & 0 \\ 1 & -1 & 0 \\ 0 & 0 & 1 \end{array}\right) \left(\begin{array}{ccc} t^{-1} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right) \left(\begin{array}{ccc} t & 1 & 1 \\ 1 & t & 1 \\ 1 & 1 & t \end{array}\right) = \left(\begin{array}{ccc} 1 & \frac{1}{t} & \frac{1}{t} \\ 0 & 1 & \frac{1}{t+1} \\ 0 & 0 & \frac{(t-1)(t+2)}{t+1} \end{array}\right)

An alternative set of row operations that avoids division, but introduces the spurious conditions t ≠ -1 and t ≠ 0
(which come from the fact that the determinant of the row-operation matrices is -t²(t + 1)):

\left(\begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -(1+t) & 1 & t(1+t) \end{array}\right) \left(\begin{array}{ccc} 1 & 0 & 0 \\ 1 & -t & 0 \\ 0 & 0 & 1 \end{array}\right) \left(\begin{array}{ccc} t & 1 & 1 \\ 1 & t & 1 \\ 1 & 1 & t \end{array}\right) = \left(\begin{array}{ccc} t & 1 & 1 \\ 0 & 1-t^2 & 1-t \\ 0 & 0 & t^3+t^2-2t \end{array}\right)
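These division-free row operations are easy to verify by multiplying the matrices out. A small sketch of my own (Fraction keeps the arithmetic exact; the sample value of t is arbitrary):

```python
from fractions import Fraction

def matmul(A, B):
    # product of two 3x3 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

t = Fraction(5)  # any sample value; the identity below holds for all t
E1 = [[1, 0, 0], [1, -t, 0], [0, 0, 1]]                  # applied first
E2 = [[1, 0, 0], [0, 1, 0], [-(1 + t), 1, t * (1 + t)]]  # applied second
A = [[t, 1, 1], [1, t, 1], [1, 1, t]]
U = matmul(E2, matmul(E1, A))
# the triangular form from the post: [[t,1,1],[0,1-t^2,1-t],[0,0,t^3+t^2-2t]]
assert U == [[t, 1, 1], [0, 1 - t**2, 1 - t], [0, 0, t**3 + t**2 - 2 * t]]
```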
 
For my money, the simplest test of linear independence is the outer product. A set of vectors a_1, a_2, \ldots, a_n is linearly dependent if and only if a_1 \wedge a_2 \wedge \ldots \wedge a_n = 0.

The outer product of a bunch of vectors:
a) swaps sign when you interchange any pair of vectors, and
b) is zero when any vector is a linear combination of the others (as mentioned before)

In this case, you would express your vectors in terms of the basis you've implied; let's call it e_1, e_2, e_3:
a_1 = te_1 + e_2 + e_3
etc. Then you want to find out whether
a_1 \wedge a_2 \wedge a_3 = 0
or not.

Here's an example practical calculation to get you started:
\begin{align}
a_1 \wedge a_2 &= (te_1 + e_2 + e_3) \wedge (e_1 + te_2 + e_3) \\
&= te_1\wedge e_1 + t^2 e_1\wedge e_2 + te_1 \wedge e_3 + e_2\wedge e_1 + te_2\wedge e_2 + e_2 \wedge e_3 + e_3\wedge e_1 + te_3\wedge e_2 + e_3 \wedge e_3 \\
&= t(0) + t^2 e_1\wedge e_2 + t(-e_3 \wedge e_1) + (-e_1\wedge e_2) + t(0) + e_2 \wedge e_3 + e_3\wedge e_1 + t(-e_2\wedge e_3) + (0) \\
&= (t^2-1)e_1\wedge e_2 + (1-t)e_3 \wedge e_1 + (1-t)e_2 \wedge e_3
\end{align}

Note that any vector wedged with itself is zero (because the set e_1,e_1 is obviously linearly dependent), and that terms like e_2 \wedge e_1 can be switched to -e_1 \wedge e_2.

To finish the calculation, you would simply take the previous result and wedge with a_3. You end up with
a_1 \wedge a_2 \wedge a_3 = (t^3 - 3t + 2)e_1 \wedge e_2 \wedge e_3,
exactly the same polynomial the other posters obtained.
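The wedge calculation above is easy to mechanize. Here is a minimal Python sketch (the names wedge2 and wedge3 are mine, not any standard API) that collects bivector components and then wedges in the third vector:

```python
def wedge2(a, b):
    # bivector components of a ∧ b in the basis (e1∧e2, e3∧e1, e2∧e3)
    return (a[0] * b[1] - a[1] * b[0],   # e1∧e2 coefficient
            a[2] * b[0] - a[0] * b[2],   # e3∧e1 coefficient
            a[1] * b[2] - a[2] * b[1])   # e2∧e3 coefficient

def wedge3(B, c):
    # e1∧e2∧e3 coefficient of B ∧ c, for a bivector B in the basis above
    e12, e31, e23 = B
    return e12 * c[2] + e31 * c[1] + e23 * c[0]

t = 4  # sample value; the identities below hold for any t
a1, a2, a3 = (t, 1, 1), (1, t, 1), (1, 1, t)
B = wedge2(a1, a2)
assert B == (t**2 - 1, 1 - t, 1 - t)      # matches the hand calculation
assert wedge3(B, a3) == t**3 - 3 * t + 2  # the same cubic as the determinant
```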

One reason I like this approach is the simplicity of calculation: I can just go through the steps using simple algebra. But mainly I like it because it gives a geometric meaning to the calculations I'm doing. Specifically: a_1 \wedge a_2 \wedge a_3 is a 3-dimensional volume spanned by those three vectors. But three linearly DEpendent vectors can't span a 3-D volume! So the wedge product is zero whenever they're linearly dependent. As opposed to matrix determinants, or row operations -- kind of abstract -- the geometric approach is something you can visualize, something you can almost reach out and touch.

A highly readable Linear Algebra textbook to get from your library:
http://faculty.luther.edu/~macdonal/laga/index.html
Enjoy!
 
Of course, the wedge product of n vectors in n dimensions is just the determinant times the unit volume...
 
thanks for your help everyone!

I'm going to try several of these methods and see what works best for me!

:)
 
