How Do Eigenvalues and Eigenvectors Depend on Parameter 'a' in This Matrix?

flyingpig

Homework Statement



http://img687.imageshack.us/img687/9065/matrixk.th.png


The Attempt at a Solution



Alright, here is how I did it.

First I need to find my eigenvalues

\begin{bmatrix} 0-\lambda & 1 & 2 \\ 0 & 3-\lambda & a \\ 0 & 0 & 0-\lambda \end{bmatrix}

So just reading them off the diagonal, the eigenvalues are 3 and 0 (with 0 appearing twice).
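Since the matrix is upper triangular, the eigenvalues are just its diagonal entries; a quick NumPy check (my addition, using an arbitrary sample value for a) confirms this:

```python
import numpy as np

a = 5.0  # arbitrary sample value; the eigenvalues do not depend on a
M = np.array([[0.0, 1.0, 2.0],
              [0.0, 3.0, a],
              [0.0, 0.0, 0.0]])

# For a triangular matrix the eigenvalues are the diagonal entries: 0, 3, 0.
print(sorted(np.linalg.eigvals(M).real))  # [0.0, 0.0, 3.0]
```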

So I do my [A - λI | 0]

So let's begin with λ = 0

\left[\begin{array}{ccc|c} 0 & 1 & 2 & 0 \\ 0 & 3 & a & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]

Row reducing with pencil and paper:

\left[\begin{array}{ccc|c} 0 & 1 & 2 & 0 \\ 0 & 0 & a-6 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]

Now here is the problem: does it matter what a really is? If a is, say, 6, I get 0·x3 = 0, so that whole row drops out. I know that my matrix had better be linearly dependent if I want to get some eigenvectors.
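Not part of the original post, but a quick numerical sanity check with NumPy shows how the pivot count (the rank) changes exactly at a = 6:

```python
import numpy as np

# A - 0*I as a function of a; its rank equals the number of pivots after row reduction.
def A(a):
    return np.array([[0.0, 1.0, 2.0],
                     [0.0, 3.0, a],
                     [0.0, 0.0, 0.0]])

print(np.linalg.matrix_rank(A(5.0)))  # 2 pivots: nullity 3 - 2 = 1, one eigenvector for 0
print(np.linalg.matrix_rank(A(6.0)))  # 1 pivot:  nullity 3 - 1 = 2, two independent eigenvectors
```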

I just don't know where to go now lol.
 

Tex is now fixed lol
 


You've almost got it. Your reduced matrix is "linearly dependent" (not the best word here; you mean "singular", so that it is not one-to-one and its kernel is non-trivial) because that last row consists entirely of 0s. But you want two independent eigenvectors for the eigenvalue 0, so it is not enough that the kernel have dimension 1: it must have dimension 2, which means you need the last two rows to be all 0s. What must a be to make that true?

Here, by the way, is how I would have done the problem:
A matrix is "diagonalizable" if and only if there exists a "complete set" of eigenvectors, that is, a basis consisting of eigenvectors. In the case of a 3 by 3 matrix, that means there must be 3 independent eigenvectors; in the case of a 4 by 4 matrix, there must be 4.

The first matrix is "upper triangular" so, as you say, we can "read off" its eigenvalues: they are the numbers on the main diagonal. If there were three distinct eigenvalues then, because eigenvectors corresponding to different eigenvalues are necessarily independent, the matrix would be diagonalizable. But here we have 0 as a double eigenvalue, so we need to determine whether some value of a gives two independent eigenvectors corresponding to eigenvalue 0. Of course, a vector v is an eigenvector corresponding to eigenvalue \lambda if and only if Av = \lambda v. In order for v to be an eigenvector corresponding to eigenvalue 0, we must have
\begin{bmatrix} 0 & 1 & 2 \\ 0 & 3 & a \\ 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
which gives
\begin{bmatrix} y + 2z \\ 3y + az \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}
So we have y + 2z = 0 and 3y + az = 0. Clearly x can be anything, so one eigenvector is [1, 0, 0]. The first equation says that y = -2z, the second that y = -(a/3)z. What must a be so that there exist infinitely many (y, z) pairs satisfying both? That is exactly the case in which there exists a second, independent eigenvector, so that the matrix is diagonalizable.
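As a sketch of that last step (my addition, not from the thread): the pair y + 2z = 0, 3y + az = 0 has nontrivial solutions exactly when the 2x2 coefficient determinant, which works out to a - 6, vanishes. This is easy to confirm numerically:

```python
import numpy as np

# Coefficient matrix of the system y + 2z = 0, 3y + a*z = 0; its determinant is a - 6.
def coeff_det(a):
    return np.linalg.det(np.array([[1.0, 2.0],
                                   [3.0, a]]))

print(coeff_det(6.0))  # ~0: infinitely many (y, z) solutions, a second eigenvector exists
print(coeff_det(5.0))  # ~-1: only the trivial solution y = z = 0
```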
 


So a must be 6

Also, I confess that I don't understand the word "kernel", but I have seen it before
 


The kernel of a function f is the set of all x such that f(x) = 0. In Linear Algebra, the "kernel" of a linear transformation is also called its "null space".
 


Kernel and nullspace are synonymous. For a matrix A, the kernel is the set of vectors x such that Ax = 0.

If the matrix is square (i.e., n x n) and its determinant is nonzero, the kernel is just the zero vector. If its determinant is zero, the kernel is a subspace of R^n whose dimension can be anything from 1 through n.
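To make that concrete (an illustration I'm adding, not from the thread), one standard way to compute a kernel basis numerically is via the SVD:

```python
import numpy as np

def kernel_basis(A, tol=1e-10):
    """Columns form an orthonormal basis for the kernel (null space) of A."""
    _, s, vh = np.linalg.svd(A)
    rank = int(np.sum(s > tol))   # number of (numerically) nonzero singular values
    return vh[rank:].T            # remaining right-singular vectors span the kernel

A = np.array([[0.0, 1.0, 2.0],
              [0.0, 3.0, 6.0],
              [0.0, 0.0, 0.0]])   # the thread's matrix with a = 6
K = kernel_basis(A)
print(K.shape[1])                 # 2: kernel dimension = n - rank = 3 - 1
```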
 


But am I right that a= must be 6?
 


Mark44 said:
Kernel and nullspace are synonymous. For a matrix A, the kernel is the set of vectors x such that Ax = 0.

If the matrix is square (i.e., n x n) and its determinant is nonzero, the kernel is just the zero vector. If its determinant is zero, the kernel is a subspace of R^n whose dimension can be anything from 1 through n.
In Linear Algebra, kernel and null space are synonymous. But "kernel" is more general: if f is a homomorphism from group G to group H, the set of all x such that f(x) is the identity in H is the kernel of f, but it is not called a "null space".
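A small illustration of my own (not from the thread): for the homomorphism f from Z_6 to Z_3 given by reduction mod 3, the kernel is {0, 3}:

```python
# f: Z_6 -> Z_3, f(x) = x mod 3, is a homomorphism of additive groups.
f = lambda x: x % 3

# The kernel is the set of elements mapped to the identity (0) of Z_3.
kernel = [x for x in range(6) if f(x) == 0]
print(kernel)  # [0, 3]
```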
 


My exam is in like 3 days, please just give me a straightforward answer lol. Does that mean a = 6?
 


Yes. Keep in mind what you're trying to do, though, which is to find a basis for the solution space of (A - \lambda I)x = 0 (for \lambda = 0) that contains two linearly independent vectors.

Your final matrix in post #1 is
\left[\begin{array}{ccc|c} 0 & 1 & 2 & 0 \\ 0 & 0 & a-6 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]
If a = 6, the basis for the solution space consists of two linearly independent vectors: <1, 0, 0> and <0, -2, 1>.
On the other hand, if a is any value other than 6, then (a - 6)z = 0 forces z = 0, which in turn forces y = 0. In this case the basis for the solution space consists of only one vector, <1, 0, 0>.
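A final sanity check (my addition, assuming NumPy is available): with a = 6, both basis vectors really are eigenvectors for the eigenvalue 0:

```python
import numpy as np

A6 = np.array([[0.0, 1.0, 2.0],
               [0.0, 3.0, 6.0],
               [0.0, 0.0, 0.0]])   # the matrix with a = 6

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, -2.0, 1.0])

print(A6 @ v1)  # [0. 0. 0.] -> A v1 = 0 * v1
print(A6 @ v2)  # [0. 0. 0.] -> A v2 = 0 * v2, so lambda = 0 has two independent eigenvectors
```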
 