This is easy if your matrix is nilpotent or diagonalizable.
If it is neither of those, then you will want to triangularize your matrix and write it as a sum of a diagonalizable matrix and a nilpotent matrix that commute with each other.
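Concretely, here is a sketch of that reduction: if ##A = D + N## with ##D## diagonalizable, ##N## nilpotent of index ##k##, and ##DN = ND## (which the Jordan form guarantees), then
##e^{A} = e^{D}e^{N} = e^{D}\left(I + N + \frac{N^{2}}{2!} + \cdots + \frac{N^{k-1}}{(k-1)!}\right),##
so the nilpotent factor is a finite sum and ##e^{D}## can be computed by diagonalization.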
Did you have any particular matrix in mind?
#3
arkajad
It does not matter whether your matrix is "constant" or "non-constant": you define A = A(t) and calculate exp(A(t)) for each fixed t.
Added: unless you have in mind the so-called ordered exponential: http://en.wikipedia.org/wiki/Ordered_exponential
#4
ranoo
The 2x2 matrix is
##A(t)=\begin{pmatrix}0&1\\0&t\end{pmatrix},##
i.e. the first row is 0, 1 and the second row is 0, t.
#5
arkajad
You have, for ##n\geq 1##,
##A(t)^n=\begin{pmatrix}0&t^{n-1}\\0&t^n\end{pmatrix}.##
I hope you will be able to finish. But better check the above. I could have made a mistake!
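One way to finish (a sketch, summing the series entrywise): for ##t \neq 0##,
##\exp(A(t)) = I + \sum_{n\geq 1}\frac{1}{n!}\begin{pmatrix}0&t^{n-1}\\0&t^{n}\end{pmatrix} = \begin{pmatrix}1&\frac{e^{t}-1}{t}\\0&e^{t}\end{pmatrix},##
while for ##t = 0## the matrix is nilpotent and ##\exp(A(0)) = I + A(0) = \begin{pmatrix}1&1\\0&1\end{pmatrix}.##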
#6
kof9595995
Remember that exp(A) is defined as the Taylor series of the exponential function; the series actually converges for any matrix A, so in principle we can always express exp(A(t)) this way. Each entry of exp(A) is then an infinite series of numbers, so you can try to work out the sum to get a closed form. There are better ways to find the closed form of exp(A): if A is diagonalizable, just diagonalize it; if not, you can always use a Jordan decomposition, which works in a similar manner.
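As an illustration of the series definition, here is a minimal sketch in Python (assuming NumPy and SciPy are available) that truncates the series for the matrix above and compares the result with scipy.linalg.expm:

import numpy as np
from scipy.linalg import expm

def exp_by_series(A, terms=60):
    # Approximate exp(A) by truncating the power series sum_{n>=0} A^n / n!
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n       # incrementally builds A^n / n!
        result = result + term
    return result

t = 2.0
A = np.array([[0.0, 1.0], [0.0, t]])
print(exp_by_series(A))   # truncated-series approximation
print(expm(A))            # SciPy's matrix exponential, for comparison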
This is the question,
I understand the concept: in ##\mathbb{Z}_n## an element ##a## is a unit if and only if ##\gcd(a, n) = 1##.
My understanding of backwards substitution,
...
Using the Euclidean algorithm, I have
##471 = 3⋅121 + 108##
##121 = 1⋅108 + 13##
##108 =8⋅13+4##
##13=3⋅4+1##
##4=4⋅1+0##
Using back-substitution,
##1=13-3⋅4##
##=(121-1⋅108)-3(108-8⋅13)##
...
##= 121-(471-3⋅121)-3⋅471+9⋅121+24⋅121-24(471-3⋅121)##
##=121-471+3⋅121-3⋅471+9⋅121+24⋅121-24⋅471+72⋅121##...
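Collecting the ##121## and ##471## coefficients in the last line gives (a sketch of the final step, assuming the goal is the inverse of ##121## in ##\mathbb{Z}_{471}##):
##1 = (1+3+9+24+72)⋅121-(1+3+24)⋅471 = 109⋅121-28⋅471,##
so ##121^{-1}\equiv 109 \pmod{471}##. As a quick sanity check, in Python 3.8+ the call pow(121, -1, 471) also returns 109.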
##\textbf{Exercise 10}:##
I came across the following solution online:
Questions:
1. When the author states "in that ring ##x_n x_{n+1}=0## for all odd ##n## and ##x_{n+1}## is invertible, so that ##x_n=0##", I am not sure if he is referring to ##R## or ##R/\mathfrak{p}## (I am guessing the latter).
2. How does ##x_nx_{n+1}=0## imply that ##x_{n+1}## is invertible and ##x_n=0##? I mean, if the quotient ring ##R/\mathfrak{p}## is an integral domain and ##x_{n+1}## is invertible, then...
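For what it's worth, here is how that step could be finished (a sketch, assuming ##x_{n+1}## really is invertible in ##R/\mathfrak{p}##):
##x_n = x_n\left(x_{n+1}x_{n+1}^{-1}\right) = \left(x_nx_{n+1}\right)x_{n+1}^{-1} = 0\cdot x_{n+1}^{-1} = 0,##
which uses only the invertibility of ##x_{n+1}##, not that ##R/\mathfrak{p}## is an integral domain.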
It is well known that a vector space always admits an algebraic (Hamel) basis. This is a theorem that follows from Zorn's lemma, which in turn relies on the Axiom of Choice (AC).
Now consider any specific instance of a vector space. Since AC may or may not be included in the underlying set theory, might there be examples of vector spaces in which a Hamel basis actually doesn't exist?