# Matrix Methods for Difference Problems II

In summary, the problem with using the diagonalization method is that we are changing the basis for each value of n.

#### topsquark

This is related to a (mainly unserious) post I made recently. I did some more work on a similar problem and I'd like to bounce an idea off of you about why this doesn't work. I'm really not sure if I'm right, so I'd appreciate a comment.

I am working with some simple systems of difference equations. For example, written in matrix form:
$$\displaystyle \left ( \begin{matrix} a_{n + 1} \\ b_{n + 1} \end{matrix} \right ) = \left ( \begin{matrix} 0 & n \\ 1 & 0 \end{matrix} \right ) \left ( \begin{matrix} a_n \\ b_n \end{matrix} \right )$$

The idea is to use matrix methods to simplify the system, i.e., if I diagonalize the matrix and transform the vectors into the eigenbasis, I can separate the a's and b's. So I do that, solve the matrix recursion, and transform back to the original basis. Done, right? Wrong! The solution doesn't check out.

I know we can do this with a system with constant matrix elements, so my question is: does the fact that we have a non-constant transformation matrix (i.e., it depends on ##n##) change the recursion when we change to the eigenbases?
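For what it's worth, the failure is easy to see numerically. Here is a minimal sketch (NumPy; the initial data are just illustrative) showing that using the eigenbasis of a single step's matrix, as if it diagonalized every step, does not reproduce the direct iteration:

```python
import numpy as np

def T(n):
    # coefficient matrix of the recursion (a_{n+1}, b_{n+1}) = T(n) (a_n, b_n)
    return np.array([[0.0, n], [1.0, 0.0]])

x0 = np.array([1.0, 2.0])  # arbitrary initial data (a_0, b_0)

# True solution: multiply the step matrices directly
x = x0.copy()
for n in range(1, 5):      # steps n = 1..4
    x = T(n) @ x

# Naive "diagonalize once" attempt: use the eigenbasis of one T(n)
# as if it diagonalized every step -- it does not, since T(n) changes with n.
w, V = np.linalg.eig(T(1))
y = np.linalg.inv(V) @ x0
for n in range(1, 5):
    wn, _ = np.linalg.eig(T(n))   # eigenvalues of this step's matrix
    y = wn * y                    # wrong: assumes V diagonalizes every T(n)
x_naive = V @ y

print(x)        # the true iterate
print(x_naive)  # disagrees with the true iterate
```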

-Dan

Transforming to PF ## \LaTeX ##:
$$\begin{pmatrix} a_{n + 1} \\ b_{n + 1} \end{pmatrix} = \begin{pmatrix} 0 & n \\ 1 & 0 \end{pmatrix} \begin{pmatrix} a_n \\ b_n \end{pmatrix}$$
topsquark said:
Done, right?
Unless you show your workings it is impossible to tell.

topsquark said:
I know we can do this with a system with constant matrix elements, so my question is: does the fact that we have a non-constant transformation matrix (i.e., it depends on ##n##) change the recursion when we change to the eigenbases?
I think you have probably answered your own question. But if you think about it, the eigenvectors of ## T_n ## are meaningless because they tell you about ## T_n(T_n(s)) ## but you are actually looking for ## T_{n + 1}(T_n(s)) ##.
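The point above can be seen concretely: diagonalizing ##T_n## describes ##T_n## applied twice, but the recursion actually applies ##T_{n+1}## after ##T_n##, and those two products differ. A minimal sketch (illustrative value of ##n##):

```python
import numpy as np

def T(n):
    # step matrix from the OP's recursion
    return np.array([[0.0, n], [1.0, 0.0]])

n = 3
# Diagonalizing T(n) tells you about applying the SAME matrix twice...
same_twice = T(n) @ T(n)      # = n * I, described by the eigenvalues +-sqrt(n)
# ...but the recursion actually applies the NEXT step's matrix:
actual = T(n + 1) @ T(n)      # = diag(n + 1, n), a different matrix entirely

print(same_twice)
print(actual)
```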

Actually, for this aspect of the problem I was working on, the derivation of the original matrix equation is irrelevant. I was simply looking for an explanation of why the diagonalization method didn't work.

As it happens, the diagonalization method won't work because (at least the way I was approaching it) we were changing the basis for each value of n, so taking the product made no sense. I was eventually able to write out an example for a few values of n in W|A and verify that it does work if you insert the correct transformation matrix between the diagonal matrices for each n, but the process is mind-bogglingly intense. I have given up for now and, frankly, since there is a better way to approach the problem, I'm not likely to get back to it.

Thanks for the reply!

-Dan

I mean, in this particular case you can diagonalise the system by letting
$$A_n = \begin{pmatrix}0 & 1\\ 1& 0\end{pmatrix} ^n\begin{pmatrix} a_n \\ b_n\end{pmatrix}$$

Orodruin said:
I mean, in this particular case you can diagonalise the system by letting
$$A_n = \begin{pmatrix}0 & 1\\ 1& 0\end{pmatrix} ^n\begin{pmatrix} a_n \\ b_n\end{pmatrix}$$
Okay, let me be a bit clearer about the issue and what I was trying to do. If we have a system with constant coefficients
##\left ( \begin{matrix} a_{n+1} \\ b_{n+1} \end{matrix} \right ) = \left ( \begin{matrix} w & x \\ y & z \end{matrix} \right ) \left ( \begin{matrix} a_n \\ b_n \end{matrix} \right )##

Then you can just diagonalize the coefficient matrix and multiply out, like you said:
##\left ( \begin{matrix} a_n \\ b_n \end{matrix} \right ) = \left ( \begin{matrix} p & 0 \\ 0 & q \end{matrix} \right ) ^n \left ( \begin{matrix} a_0 \\ b_0 \end{matrix} \right )##

But since we are using a coefficient matrix that depends on ##n##, we have to apply a change-of-basis transformation every time we change ##n##. So the formula becomes
$$\left ( \begin{matrix} a_n \\ b_n \end{matrix} \right ) = T(n) A_n T^{-1}(n) T(n-1) A_{n-1} T^{-1}(n-1) \cdots T(0) A_0 T^{-1}(0) \left ( \begin{matrix} a_0 \\ b_0 \end{matrix} \right )$$
where the T(n) are the transformation matrices to diagonalize the ##A_n##.
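This bookkeeping can be sanity-checked numerically. A sketch, using the OP's matrix with arbitrary starting data: reinserting the per-step change-of-basis matrices between the diagonal factors does reproduce the direct iteration.

```python
import numpy as np

def M(n):
    # step matrix of the system at step n
    return np.array([[0.0, n], [1.0, 0.0]])

x0 = np.array([1.0, 2.0])   # arbitrary (a_0, b_0)
N = 5

# Direct iteration
direct = x0.copy()
for n in range(1, N + 1):
    direct = M(n) @ direct

# Same product, written as T(n) A_n T^{-1}(n) ... T(1) A_1 T^{-1}(1),
# with A_n diagonal (eigenvalues) and T(n) the eigenvector matrix of M(n)
via_diag = x0.copy()
for n in range(1, N + 1):
    w, T = np.linalg.eig(M(n))
    A = np.diag(w)
    via_diag = T @ A @ np.linalg.inv(T) @ via_diag

print(np.allclose(direct, via_diag))  # agrees, but only with every T(n) in place
```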

It gets very ugly, very fast!

-Dan

topsquark said:
Then you can just diagonalize the coefficient matrix and multiply out, like you said:
That is not what I said.

To be a bit more specific, the particular case of the OP can be rewritten by instead using
Orodruin said:
$$A_n = \begin{pmatrix}0 & 1\\ 1& 0\end{pmatrix} ^n\begin{pmatrix} a_n \\ b_n\end{pmatrix}$$
as your variables. Introducing the notation
$$P = \begin{pmatrix}0 & 1\\ 1& 0\end{pmatrix}$$
and
$$x_n = \begin{pmatrix} a_n \\ b_n\end{pmatrix}$$
you would have ##A_n = P^n x_n## with ##P^2 = 1## and your recursion relation is of the form
$$x_{n+1} = T_n x_n$$
$$A_{n+1} = P^{n+1} x_{n+1} = P^{n+1} T_n x_n = P^{n+1} T_n P^n A_n$$
(using ##x_n = P^{-n} A_n = P^n A_n##, since ##P^2 = 1##), where in the case of the OP, ##P^{n+1} T_n P^n## is diagonal.
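That diagonality claim is easy to verify numerically; a small sketch (NumPy, names matching the post):

```python
import numpy as np

P = np.array([[0, 1], [1, 0]])   # the swap matrix, P^2 = I

def T(n):
    # step matrix from the OP
    return np.array([[0, n], [1, 0]])

def mat_pow(M, k):
    return np.linalg.matrix_power(M, k)

for n in range(6):
    D = mat_pow(P, n + 1) @ T(n) @ mat_pow(P, n)
    off_diag = (D[0, 1], D[1, 0])
    assert off_diag == (0, 0)    # P^{n+1} T_n P^n is diagonal for every n
    print(n, np.diag(D))
```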

An alternative approach to the problem in the OP is to consider that
$$b_{n+2} = a_{n+1} = n b_n$$
and therefore
$$b_{2n} = 0 \text{ for } n \geq 1$$
(the first step of the even-index chain multiplies by ##n = 0##), and
$$b_{2n+1} = (2n-1)!!\, b_1 = (2n-1)!!\, a_0$$
with the ##a_n## following from ##a_n = b_{n+1}##.
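These closed forms can be checked against the recursion itself (a sketch; the initial values are arbitrary). Note the odd-index coefficient is a double factorial, ##(2n-1)!! = 1 \cdot 3 \cdots (2n-1)##, and the even-index chain vanishes after ##b_0##, since its first step multiplies by ##n = 0##:

```python
def step(a, b, n):
    # one step of the OP's recursion: a_{n+1} = n b_n, b_{n+1} = a_n
    return n * b, a

a0, b0 = 3.0, 5.0
a, b = a0, b0
bs = [b0]
for n in range(12):
    a, b = step(a, b, n)
    bs.append(b)

def double_factorial(m):
    # m!! = m * (m - 2) * ... down to 1 or 2; the empty product is 1
    out = 1
    while m > 0:
        out *= m
        m -= 2
    return out

# even indices: b_2 = 0 * b_0, so every later even-index term vanishes too
for n in range(1, 6):
    assert bs[2 * n] == 0.0
# odd indices: b_{2n+1} = (2n-1)!! * b_1 = (2n-1)!! * a_0
for n in range(0, 6):
    assert bs[2 * n + 1] == double_factorial(2 * n - 1) * a0
```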

I generally see what you are saying, though I'm not certain how you are using the ##A_n## and ##P^n##. But it looks to me like we are at least in general agreement. (I haven't been able to find material online about non-constant coefficient cases, so you may be applying a method with more details than I'm familiar with.) I can do the substitution method, though. I was trying to find a method to "generalize" the work I was doing on first-order non-constant coefficient recursions, in a similar fashion to how matrix methods generalize the solution method for systems of equations. Turns out it didn't work all that well!

Thanks for putting in the time.

-Dan