Linear algebra matrix to compute series

  • #1
fiksx
Post moved by moderator, so missing the homework template.
The sequence ##\{a_n\}## is defined by ##a_1=1##, ##a_2=5##, ##a_3=1##, and ##a_{n+3}=a_{n+2}+4a_{n+1}-4a_n## (##n \geq 1##).

$$\begin{pmatrix}a_{n+3} \\ a_{n+2} \\ a_{n+1} \\ \end{pmatrix}=B\begin{pmatrix}a_{n+2} \\ a_{n+1} \\ a_{n} \\ \end{pmatrix}$$
Find ##B##.
Find the general term ##a_n## of the sequence.

I found that the matrix is ##B=\begin{bmatrix}1&4&-4\\1&0&0\\0&1&0\end{bmatrix}##,
but how can I find ##a_n##?
I can compute that ##\begin{pmatrix}a_{4} \\ a_{3} \\ a_{2} \\ \end{pmatrix}=B\begin{pmatrix}a_{3} \\ a_{2} \\ a_{1} \\ \end{pmatrix} ##
##\begin{pmatrix}a_{4} \\ a_{3} \\ a_{2} \\ \end{pmatrix}=B\begin{pmatrix}1 \\ 5 \\ 1 \\ \end{pmatrix} ##
##\begin{pmatrix}a_{5} \\ a_{4} \\ a_{3} \\ \end{pmatrix}=B^2\begin{pmatrix}1 \\ 5 \\ 1 \\ \end{pmatrix} ##
Can I just plug ##n=1## into the equation?
I see there is a relation here:
##a_{n+1}=B^n a_n##
Suppose ##D## is a diagonal matrix similar to ##B##; then
##a_{n+1}=TD^nT^{-1} a_n##
which means I need to find a diagonal matrix and its inverse to find ##a_n##? Is my assumption wrong?

But the right formula is ##\begin{pmatrix} a_{n+2} \\ a_{n+1} \\ a_n \end{pmatrix}=B^{n-1}\begin{pmatrix}1 \\ 5 \\ 1 \end{pmatrix}##. How can I get this? And to find ##a_n##, should I find ##B^n## or ##B^{n-1}##?
 
  • #2
This is basically right. The matrix is diagonalizable, and this is a diagonalization exercise.

It's very easy to make indexing errors, but the idea is, for ##n \geq 3## (technically this restriction isn't needed since the matrix is invertible),

##a_n = \mathbf e_1^T B^{n-3} \mathbf v##

where ##\mathbf v :=
\begin{pmatrix}1 \\ 5 \\ 1 \\ \end{pmatrix}##

and ##\mathbf e_1## is the first standard basis vector.

In this form you can iteratively multiply the matrix as many times as you want to get the answer you need. But if you want a closed form answer, you need to diagonalize it.
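To make the iterative route concrete, here is a minimal numpy sketch (the matrix ##B## and vector ##\mathbf v## are from above; the helper name a_of is mine):

```python
import numpy as np

B = np.array([[1, 4, -4],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)
v = np.array([1.0, 5.0, 1.0])   # (a_3, a_2, a_1)^T

def a_of(n):
    """a_n as the first entry of B^(n-3) v, for n >= 3."""
    w = v.copy()
    for _ in range(n - 3):
        w = B @ w
    return w[0]

print([a_of(n) for n in range(3, 10)])  # [1.0, 17.0, 1.0, 65.0, 1.0, 257.0, 1.0]
```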

fiksx said:
which means I need to find a diagonal matrix and its inverse to find ##a_n##? Is my assumption wrong?

This isn't quite right. To be sure, what you need to find is ##B T = TD##, then find the inverse of ##T## so that you get ##B = B T T^{-1} = TDT^{-1}##, and of course ##B^k = TD^k T^{-1}## (why?). Conceptually it's straightforward, but it's a lot of work to find all this by hand. What tools do you have at your disposal?

Curiously if you use the reflection matrix
##
J = \begin{bmatrix}
0& 0 & 1\\
0& 1 &0 \\
1& 0 &0
\end{bmatrix}##

to effect a similarity transform you find that your matrix is similar to the (transpose of the) Companion Matrix, which has well known eigenvectors. But if you aren't familiar with the Companion Matrix, then ignore this comment.
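For the curious, here is a two-line numpy check of that similarity (my own sketch):

```python
import numpy as np

B = np.array([[1, 4, -4],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)
J = np.eye(3)[::-1]    # the reflection (exchange) matrix; note J = J^{-1}
print(J @ B @ J)       # [[0,1,0],[0,0,1],[-4,4,1]], a companion form
```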
 
  • #3
fiksx said:
But the right formula is ##\begin{pmatrix} a_{n+2} \\ a_{n+1} \\ a_n \end{pmatrix}=B^{n-1}\begin{pmatrix}1 \\ 5 \\ 1 \end{pmatrix}##. How can I get this? And to find ##a_n##, should I find ##B^n## or ##B^{n-1}##?

Using diagonalization (as suggested in #2) is one way, but another way (that I personally prefer) is to use some basic facts from "Matrix Analysis". For your ##B## the eigenvalues are ##r_1 = 1, r_2 = 2## and ##r_3 = -2.## Without actually doing the diagonalization, it follows from the fact of diagonalizability that the matrix can be decomposed as ##B = r_1 E_1 + r_2 E_2 + r_3 E_3## for some matrices ##E_1,E_2,E_3##. Furthermore, a bit of theory shows that for any analytic function ##f(x)## the matrix function ##f(B)## is given as
$$f(B) = f(r_1) E_1 + f(r_2) E_2 + f(r_3) E_3,$$
so the ##E_i## are not changed by the function.

We can find ##E_1, E_2, E_3## by using the three functions ##f_1(x)= x^0 = 1 \Rightarrow f_1(B) = I## (the identity matrix), ##f_2(x) = x \Rightarrow f_2(B) = B## and ##f_3(x) = x^2 \Rightarrow f_3(B) = B^2##. This gives
$$\begin{array}{rrcl}
f_1(B): & I &=& 1^0 E_1 + 2^0 E_2 + (-2)^0 E_3 \\
f_2(B): & B &=& 1 E_1 + 2 E_2 - 2 E_3 \\
f_3(B): & B^2 &=& 1^2 E_1 + 2^2 E_2 + (-2)^2 E_3
\end{array}
$$
Since we can easily compute ##B^2##, we can just solve the three equations to get the ##E_i##; I recommend writing them symbolically as
$$ i = e_1+e_2+e_3, \; b = e_1 + 2e_2 - 2e_3, \; b2 = e_1 + 4 e_2 + 4 e_3, $$
and solving for ##e_1,e_2,e_3## in terms of ##i,b,b2##. Then substitute ##i=I, b=B## and ##b2=B^2## at the end to get your matrices ##E_i##.

Anyway, once you have the ##E_i## the rest is easy: ##B^n = E_1 + 2^n E_2 + (-2)^n E_3.##
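For anyone who wants to check this numerically, here is a small numpy sketch of the procedure (variable names are mine; it solves the three equations above for the ##E_i## and then evaluates ##B^{n-1}##):

```python
import numpy as np

B = np.array([[1, 4, -4],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)
I = np.eye(3)
r = np.array([1.0, 2.0, -2.0])            # eigenvalues r_1, r_2, r_3

# The three equations I = sum E_i, B = sum r_i E_i, B^2 = sum r_i^2 E_i
# form a Vandermonde system in the (matrix-valued) unknowns E_i.
V = np.vander(r, increasing=True).T       # V[k, i] = r_i^k
W = np.linalg.inv(V)                      # so E_i = sum_k W[i, k] B^k
powers = [I, B, B @ B]
E = [sum(W[i, k] * powers[k] for k in range(3)) for i in range(3)]

def a(n):
    """a_n as the bottom entry of B^{n-1} (1, 5, 1)^T."""
    B_pow = sum(r[i] ** (n - 1) * E[i] for i in range(3))
    return (B_pow @ np.array([1.0, 5.0, 1.0]))[2]

print([round(a(n)) for n in range(1, 8)])  # [1, 5, 1, 17, 1, 65, 1]
```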
 
  • #4
Ray Vickson said:
Using diagonalization (as suggested in #2) is one way, but another way (that I personally prefer) is to use some basic facts from "Matrix Analysis". [...] Anyway, once you have the ##E_i## the rest is easy: ##B^n = E_1 + 2^n E_2 + (-2)^n E_3.##

Thanks! If I use diagonalization, it really takes a long time. Can a matrix that has 3 eigenvalues always be decomposed as ##f(B) = f(r_1) E_1 + f(r_2) E_2 + f(r_3) E_3##?
Is this related to Cayley-Hamilton? Why did you choose ##f(x)= x##? Is this related to the matrix's characteristic polynomial? Or do you have a website that explains this theorem?
What is the idea behind this? Sorry, maybe my background is not strong enough. :/
 
  • #5
fiksx said:
Thanks! If I use diagonalization, it really takes a long time. Can a matrix that has 3 eigenvalues always be decomposed as ##f(B) = f(r_1) E_1 + f(r_2) E_2 + f(r_3) E_3##?
Is this related to Cayley-Hamilton? Why did you choose ##f(x)= x##? Is this related to the matrix's characteristic polynomial? Or do you have a website that explains this theorem?
What is the idea behind this? Sorry, maybe my background is not strong enough. :/

If an ##n\times n## matrix ##A## has ##n## distinct eigenvalues ##\lambda_1, \lambda_2, \ldots, \lambda_n##, then the diagonalizability of ##A## implies that for any analytic function ##f(x)## we have
$$f(A) = f(\lambda_1) E_1 + f(\lambda_2) E_2 + \cdots + f(\lambda_n) E_n$$
Here, the matrices ##E_i## are independent of the function ##f##. So how do you find the ##E_i##? Well, just apply that equation to several functions ##f(\cdot)## for which you already know (or can easily get) the values of ##f(A)##. Using ##A^0, A^1, A^2, \ldots, A^{n-1}## is convenient because we can compute these functions of ##A## explicitly and so can solve the equations for the ##E_i##.

If you have repeated eigenvalues it is less simple; in principle, we could do the same type of thing using the Jordan canonical form, but again, in practice another method seems to work well. For example, suppose we have a ##3 \times 3## matrix ##A## with eigenvalues ##r,r,r##. The so-called Jordan canonical form for ##A## could be either ##J_1, J_2## or ##J_3##:
$$J_1 = \begin{bmatrix}r&0&0\\0&r&0\\0&0&r \end{bmatrix}, \;
J_2 = \begin{bmatrix}r&0&0\\0&r&1\\0&0&r \end{bmatrix}, \;
J_3= \begin{bmatrix}r&1&0\\0&r&1\\0&0&r\end{bmatrix}
$$
If ##J_1## is the appropriate form, then ##f(A) = E_1 f(r)##. If ##J_2## is the appropriate form we have ##f(A) = E_{11} f(r) + E_{21} f(r) + E_{22} f'(r)##, which can be written as ## E_1 f(r) + E_2 f'(r)##. Finally, if ##J_3## is the appropriate form we have ##f(A) = E_1 f(r) + E_2 f'(r) + E_3 f''(r)##.

If we have not computed the Jordan form we do not know which of these is the right one. However, in all three cases we can write
$$f(A) = E_1 f(r) + E_2 f'(r) + E_3 f''(r), $$ where we have ##E_2 = E_3=0## in case 1 and ##E_3 = 0## in case 2.

I have found that the following procedure always works (although there may sometimes be round-off issues that need to be dealt with). Using ##f_1(x) = x^0## we have ##f_1'(x) = f_1''(x) = 0##, and so ##I = E_1 r^0 + E_2 0 + E_3 0##. Using ##f_2(x) = x## we have ##f_2'(x) = 1## and ##f_2''(x) = 0##, so ##A = E_1 r + E_2 1 + E_3 0.## Finally, for ##f_3(x) = x^2## we have ##f_3'(x) = 2x## and ##f_3''(x) = 2,## so ##A^2 = E_1 r^2 + E_2 2r + E_3 2##. That is, we should solve the equations
$$i = e_1, \; a = r e_1 + e_2, \; a2 = r^2 e_1 + 2r e_2 + 2 e_3,$$
then substitute ##i = I, a=A## and ##a2=A^2## into the solution, to get the ##E_1, E_2, E_3##. Sometimes you get some of the ##E_i = 0##, so indirectly that tells you what the correct Jordan canonical form should have been. What is nice, however, is that we don't need to compute the Jordan form explicitly.
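A hedged sketch of this confluent procedure in numpy, using a single ##3\times 3## Jordan block with ##r=2## as a stand-in example (the matrix and names are mine, chosen only to illustrate):

```python
import numpy as np

# Stand-in example: a 3x3 matrix with the triple eigenvalue r = 2
# (a single Jordan block; the procedure itself never needs to know
# which Jordan form is actually the right one).
r = 2.0
A = np.array([[2, 1, 0],
              [0, 2, 1],
              [0, 0, 2]], dtype=float)
I = np.eye(3)

# Solve  i = e1,  a = r e1 + e2,  a2 = r^2 e1 + 2r e2 + 2 e3  by substitution.
E1 = I
E2 = A - r * E1
E3 = (A @ A - r**2 * E1 - 2 * r * E2) / 2

# Then f(A) = f(r) E1 + f'(r) E2 + f''(r) E3.  Check with f(x) = x^5.
f_A = r**5 * E1 + 5 * r**4 * E2 + 20 * r**3 * E3
print(np.allclose(f_A, np.linalg.matrix_power(A, 5)))   # True
```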

In the presence of uncontrolled roundoff errors it is not clear which method is best; perhaps using published Jordan form algorithms is advisable, just because these have been under development for decades by teams of experts, who have already dealt with the issues. However, whenever I have used the method (in Maple) I could just increase the numerical precision when needed; for example, using 50 or 100 digits of computational precision is straightforward enough in Maple or Mathematica or similar packages.
 
  • #6
fiksx said:
Thanks! If I use diagonalization, it really takes a long time. Can a matrix that has 3 eigenvalues always be decomposed as ##f(B) = f(r_1) E_1 + f(r_2) E_2 + f(r_3) E_3##?
Is this related to Cayley-Hamilton? Why did you choose ##f(x)= x##? Is this related to the matrix's characteristic polynomial?

Two things:

First, in the case of all eigenvalues being unique, it is diagonalization, just using the rank-one update / outer product interpretation of matrix multiplication.

I.e. ##\mathbf P :=
\bigg[\begin{array}{c|c|c|c|c}
\mathbf p_1 & \mathbf p_2 &\cdots & \mathbf p_{n-1} & \mathbf p_{n}
\end{array}\bigg]
##

##\mathbf P^{-1}:= \mathbf S =
\begin{bmatrix}
\tilde{ \mathbf s_1}^T \\
\tilde{ \mathbf s_2}^T \\
\vdots\\
\tilde{ \mathbf s}_{n-1}^T \\
\tilde{ \mathbf s_n}^T
\end{bmatrix}
##

and ##\mathbf D## is diagonal, with eigenvalues ##\lambda_1, \lambda_2, ... \lambda_n## along the diagonal (in that order).

To be sure, ##\mathbf P^{-1} = \mathbf S## is the inverse of your right eigenvector matrix (or equivalently, a collection of your left eigenvectors, so long as you select the lengths correctly, i.e. ensure ##\tilde{ \mathbf s_k}^T \mathbf p_k = 1## .)

so diagonalizing ##\mathbf B##, we see

##\mathbf B = \mathbf {PDP}^{-1} = \mathbf {PDS} = \big(\mathbf {PD}\big)\mathbf S = \bigg[\begin{array}{c|c|c|c}
\lambda_1 \mathbf p_1 & \lambda_2 \mathbf p_2 &\cdots & \lambda_n \mathbf p_{n}
\end{array}\bigg]
\mathbf S = \lambda_1 \mathbf p_1 \tilde{ \mathbf s_1}^T + \lambda_2 \mathbf p_2 \tilde{ \mathbf s_2}^T + ... + \lambda_n \mathbf p_n \tilde{ \mathbf s_n}^T ##

where ##\mathbf E_k := \mathbf p_k \tilde{ \mathbf s_k}^T##

and the familiar result that for a diagonalizable matrix ##\mathbf B##,
##f\big(\mathbf B\big) = \mathbf P f \big(\mathbf D\big) \mathbf P^{-1} = f( \lambda_1)\mathbf p_1 \tilde{ \mathbf s_1}^T + f(\lambda_2) \mathbf p_2 \tilde{ \mathbf s_2}^T + ... + f(\lambda_n) \mathbf p_n \tilde{ \mathbf s_n}^T = f(\lambda_1)\mathbf E_1 + f(\lambda_2)\mathbf E_2 + ... + f(\lambda_n)\mathbf E_n##

For theoretical purposes, the main benefit comes when you have repeated eigenvalues: this spectral projector approach lets you switch easily to Jordan canonical form if the matrix is defective, and if the matrix is not defective, it lets you bypass irrelevant nitpicks and linguistic issues. E.g. suppose you have a normal matrix with a repeated eigenvalue ##\lambda_r##: you may choose the eigenvectors associated with ##\lambda_r## to be mutually orthonormal, or only plain linearly independent, but you don't need to get into the weeds of how and why you chose them if you just show ##\mathbf E_r##.

It's a nice approach.
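A quick numerical sanity check of the outer-product form above (my own sketch, with numpy's eig supplying ##\mathbf P##):

```python
import numpy as np

B = np.array([[1, 4, -4],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)

lams, P = np.linalg.eig(B)     # right eigenvectors as the columns of P
S = np.linalg.inv(P)           # rows of S are the correctly scaled left eigenvectors

# Spectral projectors E_k = p_k s_k^T, built as outer products.
E = [np.outer(P[:, k], S[k, :]) for k in range(3)]

# Verify f(B) = sum_k f(lam_k) E_k for f(x) = x^3.
f_B = sum(lams[k] ** 3 * E[k] for k in range(3))
print(np.allclose(f_B.real, np.linalg.matrix_power(B, 3)))  # True
```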

Second:
It's well known that the Vandermonde matrix -- which incidentally is my favorite matrix, I think -- diagonalizes the companion matrix (assuming all eigenvalues are unique). Ray has already stated the eigenvalues. If you go the diagonalization route, all you have to do is check how the Vandermonde matrix "fits" with ##\mathbf B## and then compute the Vandermonde inverse. In general, people don't ask you to diagonalize 3 x 3 matrices by hand unless there is some special structure to exploit. And if there isn't special structure... then you really should just push the mechanics down to a computer, in my view.

You have discretion in how to write ##\mathbf B##. I'd suggest re-modelling it as

##\mathbf B :=\begin{bmatrix}0&1&0\\0&0&1\\-4&4&1\end{bmatrix}##

which is a canonical form. Take a look at this page.

https://en.wikipedia.org/wiki/Companion_matrix
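Here is a short numpy check (my sketch) that the Vandermonde columns ##(1, \lambda, \lambda^2)^T## really do diagonalize this companion form:

```python
import numpy as np

# Companion-form version of B and its eigenvalues.
C = np.array([[0, 1, 0],
              [0, 0, 1],
              [-4, 4, 1]], dtype=float)
lams = np.array([1.0, 2.0, -2.0])

# Columns (1, lam, lam^2)^T are eigenvectors of C, since
# C (1, lam, lam^2)^T = (lam, lam^2, lam^3)^T and lam^3 = lam^2 + 4 lam - 4.
V = np.vander(lams, increasing=True).T   # V[i, j] = lams[j]**i
D = np.diag(lams)

print(np.allclose(C @ V, V @ D))                  # True: C V = V D
print(np.allclose(np.linalg.inv(V) @ C @ V, D))   # True: V^{-1} C V = D
```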
 
  • #7
Here is another perspective that might help. The characteristic equation of your matrix ##B## can be found by substituting ##\lambda^k## for ##a_k## in the linear difference equation and then dividing through by the lowest power of ##\lambda##, like this:

##a_{n+3}=a_{n+2}+4a_{n+1}-4a_n##
##λ^{n+3}=λ^{n+2}+4λ^{n+1}-4λ^n##
##λ^3=λ^2+4λ-4##

It is easy to prove this identity using expansion by minors. @StoneTemplePython in post #6 mentioned the companion matrix; ##B## is a companion-form matrix of this polynomial.
If ##\lambda_1## is a root of this polynomial (an eigenvalue of ##B##), then the sequence ##a_n = \lambda_1^n## satisfies ##a_{n+3}=a_{n+2}+4a_{n+1}-4a_n##. That is, if

##a_n=λ_1^n##
##a_{n+1}=λ_1^{n+1}## and
##a_{n+2}=λ_1^{n+2}##

then

##a_{n+3}=λ_1^{n+3}##

If the roots of the characteristic equation are all distinct (##B## has no repeated eigenvalues), then ##a_n## is a linear combination of the ##n##th powers of the eigenvalues. To see this, look at

##
\begin{pmatrix}a_{n} \\ a_{n-1} \\ a_{n-2} \\ \end{pmatrix}=B^n\begin{pmatrix}a_0 \\ a_{-1} \\ a_{-2} \\ \end{pmatrix}
##
##\begin{pmatrix}a_{n} \\ a_{n-1} \\ a_{n-2} \\ \end{pmatrix}=TD^nT^{-1}\begin{pmatrix}a_0 \\ a_{-1} \\ a_{-2}\\ \end{pmatrix}##

The only thing you really need to solve for here is ##a_{n}##. When you are all done calculating eigenvectors and inverting ##T##, the resulting solution for ##a_{n}## will be some linear combination of powers of the eigenvalues (once again assuming no repeated eigenvalues):

##a_{n} = c_1 λ_1^n + c_2 λ_2^n + c_3 λ_3^n##

This should work for whatever values of ##n## you choose, so choose the ones you already have sequence values for:

##\begin{pmatrix}1 \\ 5 \\ 1 \\ \end{pmatrix}=
\begin{pmatrix}
λ_1^3 & λ_2^3& λ_3^3 \\
λ_1^2 & λ_2^2& λ_3^2 \\
λ_1 & λ_2 & λ_3
\end{pmatrix}
\begin{pmatrix}c_1 \\ c_2 \\ c_3\\ \end{pmatrix}##

@Ray Vickson has already given you the eigenvalues, so now all you need to do is solve for the vector of coefficients.
You will notice that we have ended up with something close to @StoneTemplePython's favorite matrix here, the Vandermonde matrix. If we reindexed the ##a_n##'s, defining ##a_0 = 1##, ##a_1 = 5## and ##a_2 = 1##, then we would see the Vandermonde matrix

##\begin{pmatrix}1 \\ 5 \\ 1 \\ \end{pmatrix}=
\begin{pmatrix}
λ_1^2 & λ_2^2& λ_3^2 \\
λ_1 & λ_2& λ_3 \\
1 & 1 & 1
\end{pmatrix}
\begin{pmatrix}c_1 \\ c_2 \\ c_3\\ \end{pmatrix}##

This does result in a different - but no less valid - set of coefficients.
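As a hedged numerical check of this coefficient approach (script names are mine), one can solve the small system and compare the closed form against the recurrence:

```python
import numpy as np

lams = np.array([1.0, 2.0, -2.0])

# Solve a_n = c1*lam1^n + c2*lam2^n + c3*lam3^n using a_1 = 1, a_2 = 5, a_3 = 1.
M = np.array([[l**n for l in lams] for n in (1, 2, 3)])
c = np.linalg.solve(M, np.array([1.0, 5.0, 1.0]))
print(c)   # c1 = 1, c2 = 1/2, c3 = 1/2

# Check the closed form against a_{n+3} = a_{n+2} + 4 a_{n+1} - 4 a_n.
a = [1.0, 5.0, 1.0]
for _ in range(7):
    a.append(a[-1] + 4 * a[-2] - 4 * a[-3])
closed = [sum(ci * li**n for ci, li in zip(c, lams)) for n in range(1, len(a) + 1)]
print(np.allclose(a, closed))  # True
```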
 

