Matrix Methods for Difference Problems II

  • Context: MHB
  • Thread starter: topsquark
  • Tags: Difference, Matrix

Discussion Overview

The discussion revolves around the application of matrix methods to solve systems of difference equations, particularly focusing on the challenges posed by non-constant transformation matrices. Participants explore the implications of diagonalization in such systems and the difficulties encountered in verifying solutions.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant, Dan, describes an attempt to simplify a system of difference equations using matrix methods but finds that the solution does not check out, raising questions about the impact of a non-constant transformation matrix on the recursion.
  • Another participant suggests that the eigenvectors of the non-constant matrix are not meaningful for the problem at hand, as they pertain to a different transformation than what is needed.
  • Dan later clarifies that the diagonalization method fails because the basis changes with each value of n, complicating the process of taking products of matrices.
  • Another participant proposes a specific form for the transformation matrix and discusses how to rewrite the recursion relation using this approach.
  • There is mention of an alternative approach involving direct relationships between the variables, suggesting a different method to derive the recursion relations.
  • Dan expresses uncertainty about the methods being discussed and acknowledges the complexity of generalizing solutions for non-constant coefficient recursions.

Areas of Agreement / Disagreement

Participants express differing views on the applicability of diagonalization methods in the context of non-constant coefficients. While some agree on the challenges posed by changing bases, others propose alternative formulations and approaches, indicating that the discussion remains unresolved with multiple competing perspectives.

Contextual Notes

Participants note that the derivation of the original matrix equation may not be relevant to the diagonalization method's effectiveness, and the complexity increases significantly when considering non-constant coefficients. There is also mention of the need for careful handling of transformation matrices at each step.

topsquark
This is related to a (mainly unserious) post I recently made. I did some more work on a similar problem and I'd like to bounce around an idea about why this doesn't work. I'm really not sure if I'm right, so I'd appreciate a comment.

I am working with some simple systems of difference equations. For example, written in matrix form:
[math]\left ( \begin{matrix} a_{n + 1} \\ b_{n + 1} \end{matrix} \right ) = \left ( \begin{matrix} 0 & n \\ 1 & 0 \end{matrix} \right ) \left ( \begin{matrix} a_n \\ b_n \end{matrix} \right )[/math]

The idea is to use matrix methods to simplify the system, i.e. if I diagonalize the matrix and transform the vectors into the eigenbasis, I can separate the a's and b's. So I do that, solve the matrix recursion, and transform back to the original basis. Done, right? Wrong! The solution doesn't check.

I know we can do this with a system with constant matrix elements, so my question is: Does the fact that we have a non-constant transformation matrix (i.e. it depends on n) change the recursion when we change to the eigenbases?
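As a concrete reference point, the recursion can be iterated directly; a minimal numerical sketch (not part of the original post) that produces the ground truth any proposed closed form must reproduce:

```python
import numpy as np

def T(n):
    # Coefficient matrix at step n: (a_{n+1}, b_{n+1})^T = T(n) (a_n, b_n)^T
    return np.array([[0.0, n], [1.0, 0.0]])

def iterate(a0, b0, steps):
    # Apply the recursion directly; any closed-form solution must match this.
    x = np.array([a0, b0])
    seq = [x.copy()]
    for n in range(steps):
        x = T(n) @ x
        seq.append(x.copy())
    return seq

seq = iterate(1.0, 2.0, 6)
print(seq[6])  # [15.  0.]
```

Note that the factor n = 0 at the very first step zeroes one strand of the sequence, which is worth keeping in mind when checking any candidate solution.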

-Dan
 
Transforming to PF ## \LaTeX ##:
$$ \begin{pmatrix} a_{n + 1} \\ b_{n + 1} \end{pmatrix} = \begin{pmatrix} 0 & n \\ 1 & 0 \end{pmatrix} \begin{pmatrix} a_n \\ b_n \end{pmatrix} $$
topsquark said:
Done, right?
Unless you show your workings it is impossible to tell.

topsquark said:
I know we can do this with a system with constant matrix elements, so my question is: Does the fact that we have a non-constant transformation matrix (i.e. it depends on n) change the recursion when we change to the eigenbases?
I think you have probably answered your own question. But if you think about it, the eigenvectors of ## T_n ## are meaningless because they tell you about ## T_n(T_n(s)) ## but you are actually looking for ## T_{n + 1}(T_n(s)) ##.
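This point can be checked concretely (a numerical sketch, assuming nothing beyond the ##T_n## of the thread): an eigenvector of ##T_n## is generally not an eigenvector of ##T_{n+1}##, so diagonalizing ##T_n## says nothing about the composite ##T_{n+1}(T_n(s))##.

```python
import numpy as np

def T(n):
    # The thread's coefficient matrix at step n
    return np.array([[0.0, n], [1.0, 0.0]])

n = 2
vals, vecs = np.linalg.eig(T(n))
v = vecs[:, 0]                         # an eigenvector of T_n
assert np.allclose(T(n) @ v, vals[0] * v)

w = T(n + 1) @ v                       # apply the *next* step's matrix
cross = w[0] * v[1] - w[1] * v[0]      # zero iff w is parallel to v
print(abs(cross) > 1e-9)               # True: v is not an eigenvector of T_{n+1}
```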
 
Actually, for this aspect of the problem I was working on, the derivation of the original matrix equation is irrelevant. I was simply looking for an explanation of why the diagonalization method didn't work.

As it happens, the diagonalization method won't work because (at least the way I was approaching it) we were changing the basis for each value of n, so taking the product made no sense. I was eventually able to write out an example for a few values of n in W|A and verify that it does work if you insert the correct transformation matrix between the diagonal matrices for each n, but the process is rather mind-bogglingly intense. I have given up for now and, frankly, since there is a better way to approach the problem, I'm not likely to get back to it.

Thanks for the reply!

-Dan
 
I mean, in this particular case you can diagonalise the system by letting
$$
A_n = \begin{pmatrix}0 & 1\\ 1& 0\end{pmatrix} ^n\begin{pmatrix} a_n \\ b_n\end{pmatrix}
$$
 
Orodruin said:
I mean, in this particular case you can diagonalise the system by letting
$$
A_n = \begin{pmatrix}0 & 1\\ 1& 0\end{pmatrix} ^n\begin{pmatrix} a_n \\ b_n\end{pmatrix}
$$
Okay, let me be a bit more clear about the issue and what I was trying to do. If we have the system with constant coefficients
##\left ( \begin{matrix} a_{n+1} \\ b_{n+1} \end{matrix} \right ) = \left ( \begin{matrix} w & x \\ y & z \end{matrix} \right ) \left ( \begin{matrix} a_n \\ b_n \end{matrix} \right )##

Then you can just diagonalize the coefficient matrix and multiply out, like you said:
##\left ( \begin{matrix} a_n \\ b_n \end{matrix} \right ) = T \left ( \begin{matrix} p & 0 \\ 0 & q \end{matrix} \right ) ^n T^{-1} \left ( \begin{matrix} a_0 \\ b_0 \end{matrix} \right )##
where ##T## is the (constant) matrix of eigenvectors.

But since we are using a coefficient matrix that depends on n, we have to apply a change-of-basis transformation every time we change n. So the formula becomes
$$\left ( \begin{matrix} a_n \\ b_n \end{matrix} \right ) = T(n-1) D_{n-1} T^{-1}(n-1) \, T(n-2) D_{n-2} T^{-1}(n-2) \cdots T(0) D_0 T^{-1}(0) \left ( \begin{matrix} a_0 \\ b_0 \end{matrix} \right )$$
where ##D_k## is the diagonalized coefficient matrix at step k and the ##T(k)## are the transformation matrices that diagonalize it.

It gets very ugly, very fast!
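This can be verified numerically (a sketch; the n = 0 coefficient matrix is defective, so the product here starts at n = 1): reconstructing each coefficient matrix as ##V_k D_k V_k^{-1}## and multiplying reproduces the direct product exactly, while multiplying the diagonal factors alone does not.

```python
import numpy as np

def M(n):
    return np.array([[0.0, n], [1.0, 0.0]])

direct = np.eye(2)        # exact product M(5) M(4) ... M(1)
with_basis = np.eye(2)    # same product via V_k D_k V_k^{-1} at each step
diag_only = np.eye(2)     # diagonal factors alone, basis changes dropped
for k in range(1, 6):
    vals, V = np.linalg.eig(M(k))
    D = np.diag(vals)
    direct = M(k) @ direct
    with_basis = V @ D @ np.linalg.inv(V) @ with_basis
    diag_only = D @ diag_only

print(np.allclose(with_basis, direct))  # True
print(np.allclose(diag_only, direct))   # False
```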

-Dan
 
topsquark said:
Then you can just diagonalize the coefficient matrix and multiply out, like you said:
That is not what I said.
 
To be a bit more specific, the particular case of the OP can be rewritten by instead using
Orodruin said:
$$
A_n = \begin{pmatrix}0 & 1\\ 1& 0\end{pmatrix} ^n\begin{pmatrix} a_n \\ b_n\end{pmatrix}
$$
as your variables. Introducing the notation
$$
P = \begin{pmatrix}0 & 1\\ 1& 0\end{pmatrix}
$$
and
$$
x_n = \begin{pmatrix} a_n \\ b_n\end{pmatrix}
$$
you would have ##A_n = P^n x_n## with ##P^2 = 1## and your recursion relation is of the form
$$
x_{n+1} = T_n x_n
$$
leading to
$$
A_{n+1} = P^{n+1} T_n P^n A_n
$$
where in the case of the OP, ##P^{n+1} T_n P^n## is diagonal.
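A quick numerical check of this claim (a sketch using only ##P## and ##T_n## as defined above): ##P^{n+1} T_n P^n## is diagonal at every step, and the ##A_n = P^n x_n## satisfy the stated recursion.

```python
import numpy as np

P = np.array([[0.0, 1.0], [1.0, 0.0]])

def T(n):
    return np.array([[0.0, n], [1.0, 0.0]])

def Pn(n):
    # P^2 = I, so P^n is I for even n and P for odd n
    return np.eye(2) if n % 2 == 0 else P

x = np.array([1.0, 2.0])          # x_0 = (a_0, b_0)
A = Pn(0) @ x                     # A_0 = P^0 x_0
for n in range(6):
    C = Pn(n + 1) @ T(n) @ Pn(n)  # the combined matrix P^{n+1} T_n P^n
    assert np.allclose(C, np.diag(np.diag(C)))  # diagonal at every step
    A = C @ A
    x = T(n) @ x
    assert np.allclose(A, Pn(n + 1) @ x)        # A_{n+1} = P^{n+1} x_{n+1}
print("ok")
```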

An alternative approach to the problem in the OP is to consider that
$$
b_{n+2} = a_{n+1} = n b_n
$$
and therefore, since the even chain starts with ##b_2 = 0 \cdot b_0 = 0##,
$$
b_{2n} = 0 \quad (n \geq 1)
$$
and
$$
b_{2n+1} = (2n-1)!! \, b_1 = (2n-1)!! \, a_0,
$$
where ##(2n-1)!! = 1 \cdot 3 \cdot 5 \cdots (2n-1)##,
with the ##a_n## following from ##a_n = b_{n+1}##.
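Iterating ##b_{n+2} = n b_n## numerically pins down these constants (a sketch): since ##b_2 = 0 \cdot b_0##, the even-index terms vanish from ##b_2## on, and the odd-index terms pick up the double factorial ##(2n-1)!!##.

```python
def dfact(m):
    # Double factorial: m!! = m (m-2) (m-4) ... down to 1 or 2; (-1)!! = 1
    out = 1
    while m > 1:
        out *= m
        m -= 2
    return out

a0, b0 = 3, 7                     # arbitrary starting values
bs = [b0, a0]                     # b_0, and b_1 = a_0
for n in range(12):
    bs.append(n * bs[n])          # the recursion b_{n+2} = n b_n

# Even-index terms vanish from b_2 on; odd-index terms are (2n-1)!! a_0
print(all(bs[2 * n] == 0 for n in range(1, 6)))                       # True
print(all(bs[2 * n + 1] == dfact(2 * n - 1) * a0 for n in range(6)))  # True
```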
 
I generally see what you are saying, though I'm not certain how you are using the ##A_n## and ##P^n##. But it looks to me like we are, at least, in general agreement. (I haven't been able to find material online about non-constant coefficient cases, so you may be applying a method that includes more details than I'm familiar with.) I can do the substitution method, though. I was trying to find a method to "generalize" the work I was doing on first-order non-constant coefficient recursions, in a similar fashion to how matrix methods generalize the solution method for systems of equations. It turns out it didn't work all that well!

Thanks for putting in the time.

-Dan
 
