Elementwise Derivative of a Matrix Exponential

Summary
The discussion centers on maximizing a function involving a matrix exponential and its derivatives with respect to a matrix A. The user has made progress in differentiating the function but faces computational challenges due to the need to solve multiple Lyapunov equations for each element of A. They seek to improve efficiency by focusing on specific directions that influence the function rather than calculating all elementwise derivatives. A paper on directional derivatives of the matrix exponential is referenced as potentially useful for this purpose. The user is exploring ways to optimize the numerical approach while managing the complexity of the problem.
madness
TL;DR
How can I analytically or numerically maximise an expression involving matrix exponentials?
Hi all. A problem has arisen whereby I need to maximize a function which looks like $$ f(A) = \mathbf{w}^T \left[\int_0^t e^{\tau A} M e^{\tau A^T} d\tau \right]^{-1} \mathbf{w} $$ with respect to the n×n matrix A (here, M is a covariance matrix, so n×n, symmetric and positive-definite, and w is an n-dimensional vector, so f(A) is a scalar). I want to differentiate with respect to the elements of A, and by using some matrix identities I can make some headway. But eventually I have to differentiate the matrix exponential with respect to the elements of A. This looks to be a challenging problem - similar problems seem to have arisen in the context of optimal control theory, but I'm not sure this one has been addressed. I'm happy to use a numerical approach in the end, but would like to derive a gradient that can be climbed, perhaps using an approximation to the derivative of the matrix exponential.
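For concreteness, here is a rough numerical sketch of evaluating f(A) itself (Python/SciPy; the function names, the Simpson-rule quadrature, and the node count are illustrative choices, not part of the problem statement):

```python
import numpy as np
from scipy.linalg import expm, solve
from scipy.integrate import simpson

def gramian_quadrature(A, M, t, n_nodes=200):
    """G(t) = int_0^t e^{tau A} M e^{tau A^T} dtau, by Simpson's rule on the integrand."""
    taus = np.linspace(0.0, t, n_nodes + 1)
    integrand = np.array([expm(tau * A) @ M @ expm(tau * A).T for tau in taus])
    return simpson(integrand, x=taus, axis=0)

def f_of_A(A, M, w, t):
    """f(A) = w^T G(t)^{-1} w; solve G u = w rather than forming the inverse."""
    G = gramian_quadrature(A, M, t)
    return w @ solve(G, w, assume_a='pos')
```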

I found this paper on the directional derivative of the matrix exponential (https://www.sciencedirect.com/science/article/pii/S0196885885710172). Am I correct in saying that my problem reduces to taking directional derivatives along the matrix direction $$ \mathbf{w} \mathbf{w}^T $$? If so, maybe those results could be used. If not, each elementwise derivative is itself a directional derivative, but I'd need to compute n^2 of them, which would be computationally intensive. Is there perhaps some other, more general way of numerically taking derivatives of difficult matrix functions such as this one?
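As an aside on the numerics: SciPy exposes the Fréchet (directional) derivative of the matrix exponential directly as scipy.linalg.expm_frechet, so a single directional derivative can be computed without assembling all n^2 elementwise derivatives. A minimal check (the random test matrices and finite-difference comparison are just for illustration):

```python
import numpy as np
from scipy.linalg import expm, expm_frechet

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
E = rng.standard_normal((n, n))   # the direction to differentiate along

# expA = e^A and L = d/ds e^{A + s E} at s = 0 (Frechet derivative of expm at A along E)
expA, L = expm_frechet(A, E)

# quick finite-difference sanity check
eps = 1e-6
L_fd = (expm(A + eps * E) - expm(A - eps * E)) / (2 * eps)
print(np.max(np.abs(L - L_fd)))   # should be at round-off level
```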

Thanks for your help!
 
How do you define the integral of a matrix?
 
It's an elementwise integral. This is a standard integral called the controllability Gramian, whose solution is given by a Lyapunov equation.
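For reference, the finite-horizon Gramian reduces to a single Lyapunov solve: integrating $$ \frac{d}{d\tau}\left[e^{\tau A} M e^{\tau A^T}\right] = A\, e^{\tau A} M e^{\tau A^T} + e^{\tau A} M e^{\tau A^T} A^T $$ from 0 to t gives $$ A G + G A^T = e^{t A} M e^{t A^T} - M, $$ which is solvable as long as no two eigenvalues of A sum to zero. A minimal SciPy sketch (the function name is just illustrative):

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

def gramian_lyapunov(A, M, t):
    """G(t) = int_0^t e^{tau A} M e^{tau A^T} dtau, from the Lyapunov equation
       A G + G A^T = e^{tA} M e^{tA^T} - M
    (valid when no two eigenvalues of A sum to zero; otherwise fall back to quadrature)."""
    EtA = expm(t * A)
    return solve_continuous_lyapunov(A, EtA @ M @ EtA.T - M)
```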

I've now managed to differentiate the function f(A) with respect to the elements of A and optimise numerically. However, it's very slow, as I have to solve two Lyapunov equations for each element of A that I want the derivative of, which for now is all n^2 elements. And I have to do this iteratively as I climb the gradient to optimise f(A). I'm hoping there is a way to increase efficiency by only differentiating along directions that affect f(A).
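In case it's useful, here is a rough sketch of the directional derivative of f along a single matrix direction E, built from the Fréchet derivative of the exponential at each quadrature node rather than from n^2 elementwise Lyapunov solves. It uses df[E] = -u^T dG[E] u with u = G^{-1} w and $$ dG[E] = \int_0^t \left[ L(\tau A, \tau E)\, M\, e^{\tau A^T} + \left(L(\tau A, \tau E)\, M\, e^{\tau A^T}\right)^T \right] d\tau, $$ where L(X, H) is the Fréchet derivative of the matrix exponential at X along H (Python/SciPy; the function name and Simpson quadrature are illustrative choices):

```python
import numpy as np
from scipy.linalg import expm_frechet, solve
from scipy.integrate import simpson

def df_along_direction(A, M, w, t, E, n_nodes=200):
    """Directional derivative of f(A) = w^T G(A)^{-1} w along the matrix direction E.

    Uses df[E] = -u^T dG[E] u with u = G^{-1} w, where
    dG[E] = int_0^t [ L(tau A, tau E) M e^{tau A^T} + (transpose) ] dtau
    and L(X, H) is the Frechet derivative of expm at X along H.
    """
    taus = np.linspace(0.0, t, n_nodes + 1)
    G_vals, dG_vals = [], []
    for tau in taus:
        EtA, L = expm_frechet(tau * A, tau * E)   # e^{tau A} and its derivative along E
        term = L @ M @ EtA.T
        G_vals.append(EtA @ M @ EtA.T)
        dG_vals.append(term + term.T)
    G = simpson(np.array(G_vals), x=taus, axis=0)
    dG = simpson(np.array(dG_vals), x=taus, axis=0)
    u = solve(G, w, assume_a='pos')
    return -u @ dG @ u
```

This costs one Fréchet-derivative call per quadrature node per direction, so the work scales with the number of directions you actually need rather than with n^2.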
 
