Linearly independent eigenvectors

Jonnyb302
Hello everyone, this ##n\times n## matrix arises in my numerical scheme for solving a diffusion PDE.
$$
M = \begin{pmatrix}
1-\frac{Dk}{Vh} & \frac{Dk}{Vh} & 0 & 0 & & \cdots & & 0\\[6pt]
\frac{Dk}{h^2} & 1-2\frac{Dk}{h^2} & \frac{Dk}{h^2} & 0 & & & & \\[6pt]
0 & \frac{Dk}{h^2} & 1-2\frac{Dk}{h^2} & \frac{Dk}{h^2} & & & & \\
\vdots & & & \ddots & \ddots & \ddots & & \vdots\\
& & & & \frac{Dk}{h^2} & 1-2\frac{Dk}{h^2} & \frac{Dk}{h^2} & 0\\[6pt]
& & & & & \frac{Dk}{h^2} & 1-2\frac{Dk}{h^2} & \frac{Dk}{h^2}\\[6pt]
0 & \cdots & & & & & 2\frac{Dk}{h^2} & 1-2\frac{Dk}{h^2}
\end{pmatrix}
$$

I can easily use Gershgorin disks, together with the freedom to impose a constraint between h and k, to guarantee that all eigenvalues lie between -1 and 1, or even between 0 and 1 if that is more advantageous.
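
The Gershgorin bound is easy to sanity-check numerically. The sketch below (parameter values and the helper names `build_M` / `gershgorin_interval` are illustrative, not from the scheme itself) assembles a small instance of M and confirms that every eigenvalue lands inside the union of the disks:

```python
import numpy as np

def build_M(n, D=1.0, V=1.0, h=0.1, k=0.004):
    """Assemble the iteration matrix M (parameter values are illustrative)."""
    r = D * k / h**2        # interior coefficient Dk/h^2
    s = D * k / (V * h)     # first-row coefficient Dk/(Vh)
    M = np.zeros((n, n))
    M[0, 0], M[0, 1] = 1 - s, s
    for i in range(1, n - 1):
        M[i, i - 1], M[i, i], M[i, i + 1] = r, 1 - 2 * r, r
    M[-1, -2], M[-1, -1] = 2 * r, 1 - 2 * r
    return M

def gershgorin_interval(M):
    # Every eigenvalue lies in some disk |z - M[i,i]| <= sum_{j != i} |M[i,j]|;
    # since this matrix's spectrum turns out to be real, the union of disks
    # projects to the real interval [lo, hi] below.
    radii = np.abs(M).sum(axis=1) - np.abs(np.diag(M))
    d = np.diag(M)
    return (d - radii).min(), (d + radii).max()

M = build_M(8)
lo, hi = gershgorin_interval(M)
eigs = np.linalg.eigvals(M)
```

With these sample parameters (Dk/h^2 = 0.4) the disks stay inside [-1, 1], matching the constraint-between-h-and-k idea.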

I need to prove that I can diagonalize this matrix, so I am attempting to show that it has n linearly independent eigenvectors.

I considered trying to show that the eigenvalues are distinct, but I am really not sure where to start.

I seem to remember one method being something like guessing ##\lambda = \cos(2\pi/n)##, but I don't see how to go from there.

Any suggestions? I have lots of experience with PDEs and ODEs but no formal linear algebra background, just what I have taught myself.
 
Hey Jonnyb302 and welcome to the forums.

Since you are going to look for a symbolic proof, it's probably going to be necessary to get a symbolic representation of the eigenvalues. You have a lot of zeros, which works in your favor, since they simplify things.

I don't know whether this matrix has a particular named form, but if you know of one that fits it, you might want to see whether people have come up with a reduced form for the determinant (with the eigenvalue ##\lambda## included).

Then the next step will be to do a symbolic evaluation of the eigenvectors, and then use the criterion for independence: the vectors ##x_i## are linearly independent exactly when the only real coefficients ##a_i## satisfying ##\sum_i a_i x_i = 0## are ##a_i = 0## for all ##i##.

Just out of curiosity, what kind of field is this from? I'm going to speculate that if this is a common problem, then someone has probably taken a look at it, and even if they haven't answered it fully, partial results may exist (like the value of the determinant, say, and its calculation in symbolic reduced form).
 
I did take a look and try to find a classification of this matrix, but I did not see anything.

If the first row were not present then I could easily do it, since the determinant of the ##n\times n## matrix can then be expressed through a recurrence in the ##(n-1)\times(n-1)## and ##(n-2)\times(n-2)## determinants.
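
For reference, the recurrence alluded to here is the standard three-term one for any tridiagonal determinant (it applies to the characteristic polynomial too, by putting ##a_i - \lambda## on the diagonal); a sketch, checked against a brute-force determinant:

```python
import numpy as np

def tridiag_det(diag, sub, sup):
    """Determinant of a tridiagonal matrix via the three-term recurrence
    f_k = diag[k] * f_{k-1} - sub[k-1] * sup[k-1] * f_{k-2},
    with f_{-1} = 1 (empty determinant) and f_0 = diag[0]."""
    f_prev, f_curr = 1.0, diag[0]
    for k in range(1, len(diag)):
        f_prev, f_curr = f_curr, diag[k] * f_curr - sub[k - 1] * sup[k - 1] * f_prev
    return f_curr

# Check against numpy on a random tridiagonal matrix.
rng = np.random.default_rng(1)
n = 6
d = rng.standard_normal(n)
below = rng.standard_normal(n - 1)
above = rng.standard_normal(n - 1)
T = np.diag(d) + np.diag(below, -1) + np.diag(above, 1)
assert np.isclose(tridiag_det(d, below, above), np.linalg.det(T))
```

The modified first row is exactly what breaks this clean two-term history, as noted below.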

Unfortunately the first row is causing almost all my troubles.

This is close to a common problem. A typical easy problem is the flow of heat from an infinite heat reservoir into a rod in a vacuum. This problem represents the flow of heat from a finite reservoir, and, as you can probably guess, the first row, which breaks the symmetry, represents that finiteness.

In truth, what I need to prove is that the spectral norm of this matrix is less than 1; however, with my method so far, that all reduces to showing there is a basis of linearly independent eigenvectors.

Although, I think I am starting to see a way to do this. I am going to pursue it and if I figure it out I will post my result.
 
OK, I think I got it.

http://en.wikipedia.org/wiki/Determinant#Block_matrices

In the block matrices section: I can think of my matrix as the simple case where the block C is an (n-1)x1 matrix of all zeros. Then I can get one eigenvalue from the top-left element of the matrix itself, and I already know how to find the eigenvalues of my matrix with the top row and left column chopped off, via:
http://digilander.libero.it/foxes/matrix/eigen_tridiad_uniform.pdf
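
The determinant identity that block-matrices argument leans on is easy to verify numerically: when the lower-left block ##C## is zero, ##\det\begin{pmatrix}A & B\\ C & D\end{pmatrix} = \det(A)\det(D)##. A quick check with random blocks (sizes chosen to mirror the 1x1 corner described above):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((1, 1))   # 1x1 top-left block, as in the post
B = rng.standard_normal((1, 4))   # top-right block: irrelevant to the det
D = rng.standard_normal((4, 4))   # trailing (n-1)x(n-1) block
C = np.zeros((4, 1))              # the assumption: lower-left block is zero

Mblk = np.block([[A, B], [C, D]])
# Block upper-triangular => determinant factors.
assert np.isclose(np.linalg.det(Mblk), np.linalg.det(A) * np.linalg.det(D))
```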

So then all I have to do is show that the eigenvalues are all distinct, and I have proven that there are n linearly independent eigenvectors.
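
As a numerical sanity check of that chain of reasoning (not a proof, and the parameter values are illustrative): for a small instance of M, the eigenvalues come out distinct, the eigenvector matrix has full rank, and M is reproduced by ##P\,\mathrm{diag}(\lambda)\,P^{-1}##:

```python
import numpy as np

# Rebuild a small instance of M (n = 8; D, V, h, k values are illustrative).
n, D, V, h, k = 8, 1.0, 1.0, 0.1, 0.004
r, s = D * k / h**2, D * k / (V * h)
M = np.zeros((n, n))
M[0, 0], M[0, 1] = 1 - s, s
for i in range(1, n - 1):
    M[i, i - 1], M[i, i], M[i, i + 1] = r, 1 - 2 * r, r
M[-1, -2], M[-1, -1] = 2 * r, 1 - 2 * r

w, P = np.linalg.eig(M)   # columns of P are eigenvectors

# Distinct eigenvalues => independent eigenvectors => P invertible,
# so M = P diag(w) P^{-1}, i.e. M is diagonalizable.
assert len(np.unique(np.round(w, 8))) == n
assert np.linalg.matrix_rank(P) == n
assert np.allclose(P @ np.diag(w) @ np.linalg.inv(P), M)
```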

yay?
 