Raising matrices to the power of n (complex)

In summary: for X = [1 1; 1 1], the formula X^n = 2^(n-1) X holds for every positive integer n and extends to positive rational exponents, since X^(1/b) = 2^(1/b-1) X satisfies (X^(1/b))^b = X. Negative exponents fail because X is singular, and n = 0 gives the identity, which the formula cannot express. For irrational or imaginary exponents one first has to decide what the power means, for example via the eigenvalues and eigenvectors of X; the matrix-logarithm route does not apply here, because the Taylor series for ln(1+A) does not converge for this matrix.
  • #1
Epsillon
So let's say you have a matrix whose determinant is 0.

So X= [1 1; 1 1]

So using some verification and algebra you arrive at

X^n= 2^(n-1)[1 1; 1 1]
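
For example, a quick numerical check of this closed form for a few positive integers (a minimal sketch, assuming Python with NumPy):

[code]
import numpy as np

X = np.array([[1.0, 1.0], [1.0, 1.0]])

# Compare X^n with the closed form 2^(n-1) * X for several positive integers n.
for n in range(1, 6):
    lhs = np.linalg.matrix_power(X, n)
    rhs = 2 ** (n - 1) * X
    print(n, np.allclose(lhs, rhs))  # expect True for every n
[/code]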


So I am trying to define what n can be.

I already established that n cannot be negative, since the matrix is singular and has no inverse, so a negative power would give you infinity for an answer.

Another limitation is 0, since that just gives you the identity, which the formula cannot express.

But I am confused about fractions.

Technically they should work, but I do not know how to explain why.
Same with irrational numbers.

So can n = 1/2?




And if that is not hard enough, what about imaginary numbers?


For some reason I know they work with non-singular matrices.
 
  • #2
anyone?
 
  • #3
Your formula does work for n = 1/2. (If you square 2^(1/2 - 1) [1 1; 1 1] you get back your original matrix.) In fact, the formula works for n = 1/b for any positive integer b. Since you know it works for positive integers, it also works for n = a/b with a and b positive integers, because X^(a/b) = (X^(1/b))^a. So it works for positive rational numbers.
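
Written out, the n = 1/2 check is just

[tex]\left(2^{-1/2}\left[\begin{array}{cc}1 & 1 \\ 1 & 1\end{array}\right]\right)^2 = 2^{-1}\left[\begin{array}{cc}1 & 1 \\ 1 & 1\end{array}\right]^2 = 2^{-1}\cdot 2\left[\begin{array}{cc}1 & 1 \\ 1 & 1\end{array}\right] = \left[\begin{array}{cc}1 & 1 \\ 1 & 1\end{array}\right][/tex]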

For irrational and imaginary n you have to decide what it means to raise a matrix to this type of power. Hopefully someone here will have an idea.

Mathematica actually lets you raise your matrix to a positive irrational power, and it gives back your formula. (It gave an error message when I tried imaginary powers.) I don't know how to interpret this result, though.
 
  • #4
For imaginary (or in general, complex) powers, the definition for ordinary numbers (not matrices) is

[tex]\alpha^{\beta} = e^{\beta \ln \alpha}[/tex]

For a matrix A, this is only defined if ln(A) exists, which is not always the case. In fact, det(A) = 0 is precisely a case for which ln(A) is not defined: a singular matrix has 0 as an eigenvalue, and ln(0) is undefined (the determinant is something like the "magnitude" of a matrix).

Note: One can take arbitrary functions of matrices f(A), so long as f(A) can be defined as a convergent Taylor series. The matrix logarithm can be given by

[tex]\ln (1+A) = A - \frac12 A^2 + \frac13 A^3 - \frac14 A^4 + ...[/tex]

which does not converge for your matrix: taking A to be your matrix minus the identity, A has -1 as an eigenvalue, and the series diverges there.
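
As a rough illustration (a minimal sketch, assuming Python with NumPy), the partial sums of that series with A = X - I never settle down; they drift off toward infinity:

[code]
import numpy as np

X = np.array([[1.0, 1.0], [1.0, 1.0]])
A = X - np.eye(2)  # ln(X) would be ln(1 + A) with this A

# Partial sums of A - A^2/2 + A^3/3 - ...
for N in (10, 100, 1000, 10000):
    S = np.zeros((2, 2))
    term = np.eye(2)
    for k in range(1, N + 1):
        term = term @ A                    # term is now A^k
        S += ((-1) ** (k + 1) / k) * term
    print(N, S[0, 0])  # keeps decreasing toward -infinity, roughly like -0.5*ln(N)
[/code]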
 
  • #5
I used MATLAB and it gave values for fractions. So I am confused there.

Is there a proof for fractions?

I want to algebraically prove why fractions work. Perhaps through induction?
 
  • #6
A proof of what for fractions? You haven't stated what it is you want to prove.


The matrix
[tex]A= \left[\begin{array}{cc}1 & 1 \\ 1 & 1\end{array}\right][/tex]
has two distinct eigenvalues, 0 and 2, and so is diagonalizable. That is, there exists a matrix P such that [itex]P^{-1}AP= D[/itex] where D is the diagonal matrix having the eigenvalues 0 and 2 on the main diagonal. Then [itex]A= PDP^{-1}[/itex] and
[tex]A^n= (PDP^{-1})(PDP^{-1})(PDP^{-1})\cdot\cdot\cdot(PDP^{-1})= PD^nP^{-1}[/tex]
where [itex]D^n[/itex] is easy: it is the diagonal matrix with 0 and [itex]2^n[/itex] on the main diagonal.

Specifically, an eigenvector corresponding to the eigenvalue 0 is <1, -1> and an eigenvector corresponding to the eigenvalue 2 is <1, 1>, so we can take
[tex]P= \left[\begin{array}{cc}1 & 1 \\ -1 & 1\end{array}\right][/tex]
and then
[tex]P^{-1}= \left[\begin{array}{cc} \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2}\end{array}\right][/tex]

That is
[tex]\left[\begin{array}{cc}1 & 1 \\ 1 & 1\end{array}\right]= \left[\begin{array}{cc}1 & 1 \\ -1 & 1\end{array}\right]\left[\begin{array}{cc}0 & 0 \\ 0 & 2\end{array}\right]\left[\begin{array}{cc} \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2}\end{array}\right][/tex]

and so
[tex]\left[\begin{array}{cc}1 & 1 \\ 1 & 1\end{array}\right]^n= \left[\begin{array}{cc}1 & 1 \\ -1 & 1\end{array}\right]\left[\begin{array}{cc}0 & 0 \\ 0 & 2^n\end{array}\right]\left[\begin{array}{cc} \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2}\end{array}\right][/tex]


You can show that the formula works for 1/n as well by reversing it:
[tex]\left[\begin{array}{cc}1 & 1 \\ 1 & 1\end{array}\right]= \left[\begin{array}{cc}1 & 1 \\ -1 & 1\end{array}\right]\left[\begin{array}{cc}0 & 0 \\ 0 & 2^{1/n}\end{array}\right]^n\left[\begin{array}{cc} \frac{1}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{1}{2}\end{array}\right][/tex]
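
For reference, the same eigendecomposition route carried out numerically for a few positive real exponents, checked against the closed form 2^(n-1)[1 1; 1 1] (a minimal sketch, assuming Python with NumPy):

[code]
import numpy as np

X = np.array([[1.0, 1.0], [1.0, 1.0]])

# Diagonalize: X = P D P^{-1}, eigenvalues 0 and 2.
evals, P = np.linalg.eig(X)
evals = np.where(np.abs(evals) < 1e-12, 0.0, evals)  # clip round-off so 0**n stays exact

def X_power(n):
    """X^n = P diag(eigenvalue**n) P^{-1}, for real n > 0."""
    return P @ np.diag(evals ** n) @ np.linalg.inv(P)

for n in (2.0, 0.5, 1.0 / 3.0, np.sqrt(2.0)):
    print(n, np.allclose(X_power(n), 2 ** (n - 1) * X))  # expect True
[/code]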
 
  • #7
I basically want to prove that when you have a singular matrix X and raise it to a power 0 < n < 1, you can still get a result.

Thanks for the explanation, I get how to do it now. But I need to read more on eigenvalues and eigenvectors, since the course I am currently in does not cover those key properties.

A couple more things:
Without using eigenvalues, can you simply prove it by saying
[tex]X^{1/n}=M^{n}[/tex]
where M is the result?

And I've got a question: can you also use a Taylor series to explain why irrational numbers work?

Or perhaps something like a pi expansion, where you substitute fractions for n up to your required precision?


And why exactly doesn't ln of X work when logs work?
 
  • #8
If [itex]X^{1/n}= M[/itex] then [itex]M^n= X[/itex], not "[itex]X^{1/n}= M^n[/itex]".

I don't know what you mean by "why exactly doesn't ln of X work when logs work. "

ln (the natural logarithm) works whenever any log "works": [itex]\log_a(X)= \ln(X)/\ln(a)[/itex].
 

What is the process for raising a matrix to a power of n?

To raise a diagonalizable matrix to a power of n, you need to follow a specific process. First, find the eigenvalues and eigenvectors of the matrix. Use the eigenvectors as the columns of a matrix P and put the eigenvalues on the diagonal of a matrix D, so that A = PDP^-1. Finally, raise the diagonal matrix to the power of n and multiply on the left by P and on the right by P^-1 to get the final result, A^n = PD^nP^-1.

Can a matrix be raised to a negative power?

Yes, a matrix can be raised to a negative power, as long as it is invertible (equivalently, none of its eigenvalues is zero). The process is similar to raising it to a positive power, but with an additional step: after finding the eigenvalues and eigenvectors, raise the diagonal matrix to the negative power (which amounts to inverting it) before multiplying back by the eigenvector matrix and its inverse, as in the sketch below.
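
A minimal sketch of that extra step (assuming Python with NumPy; the 2x2 matrix here is just an arbitrary invertible example):

[code]
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])  # any invertible matrix: both eigenvalues nonzero

evals, P = np.linalg.eig(A)

def A_power(n):
    """A^n = P diag(eigenvalue**n) P^{-1}; negative n requires nonzero eigenvalues."""
    return P @ np.diag(evals ** n) @ np.linalg.inv(P)

print(np.allclose(A_power(-2), np.linalg.inv(A @ A)))  # expect True
[/code]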

How do you know if a matrix can be raised to a power of n?

A matrix can be raised to a positive integer power whenever it is a square matrix, meaning it has the same number of rows and columns. To use the eigenvalue method above, the matrix must also be diagonalizable; having all distinct eigenvalues is a sufficient (though not necessary) condition for that.

What is the significance of raising a matrix to a power of n?

Raising a matrix to a power of n is useful in many areas of mathematics, such as linear algebra and differential equations. It allows for the efficient computation of repeated transformations, which can be used to solve complex problems and model real-world situations.

Can a non-square matrix be raised to a power of n?

No, a non-square matrix cannot be raised to a power of n. The product of a matrix with itself is only defined when the number of columns equals the number of rows, so repeated multiplication only makes sense for square matrices; non-square matrices do not even have eigenvalues, so the diagonalization approach does not apply either.
