Stuck on evaluating this functional determinant

In summary, given the stochastic differential equation ##\dot{x} = W(x(\tau))+\eta(\tau),## we have ##\det\left|\frac{d\eta(\tau)}{dx(\tau')}\right| = \exp\left(\int_{0}^{T}d\tau \,\operatorname{Tr} \ln\left(\left[\frac{d}{d\tau}-W'(x(\tau))\right]\delta (\tau - \tau')\right)\right) = \exp\left(\frac{1}{2}\int_{0}^{T}d\tau\, W'(x(\tau))\right).##
  • #1
TroyElliott
I am trying to show that given the following stochastic differential equation: ##\dot{x} = W(x(\tau))+\eta(\tau),## we have

##\det\left|\frac{d\eta(\tau)}{dx(\tau')}\right| = \exp\left(\int_{0}^{T}d\tau \,\operatorname{Tr} \ln\left(\left[\frac{d}{d\tau}-W'(x(\tau))\right]\delta (\tau - \tau')\right)\right) = \exp\left(\frac{1}{2}\int_{0}^{T}d\tau\, W'(x(\tau))\right).##

Attempt at the solution: We can write the differential equation as ##\eta(\tau) = \dot{x} - W(x(\tau)),## and then take the functional derivative with respect to ##x(\tau')## to get ##\frac{d\eta(\tau)}{dx(\tau')} = \left(\frac{d}{d\tau}-W'\right)\delta(\tau - \tau')##. Next we can use the identity ##\det(e^{M}) = e^{\operatorname{Tr}(M)}## to write ##\det\left|\frac{d\eta(\tau)}{dx(\tau')}\right| = \exp\left(\operatorname{Tr} \ln\frac{d\eta(\tau)}{dx(\tau')}\right) = \exp\left(\operatorname{Tr} \ln\left[\left(\frac{d}{d\tau}-W'\right)\delta(\tau - \tau')\right]\right).##

This is where I am stuck. I don't see how the integral arises in the first equality in the above equation, and furthermore I do not see how to evaluate this integral to end up with the second equality in the above equation. Any insight would be greatly appreciated. Thanks!
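For readers who want to convince themselves of the identity ##\det(e^{M}) = e^{\operatorname{Tr}(M)}## that the attempt relies on, here is a minimal finite-dimensional sketch in Python; the matrix is an arbitrary random one, chosen purely for illustration:

```python
import numpy as np
from scipy.linalg import expm

# Finite-dimensional check of det(e^M) = e^{Tr M}.
# M is an arbitrary random 5x5 matrix; any square matrix works.
rng = np.random.default_rng(0)
M = rng.normal(size=(5, 5))

lhs = np.linalg.det(expm(M))   # det(e^M)
rhs = np.exp(np.trace(M))      # e^{Tr M}
print(lhs, rhs)                # the two numbers agree to numerical precision
```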
 
  • #2
samalkhaiat
TroyElliott said:
I don't see how the integral arises in the first equality in the above equation, and furthermore I do not see how to evaluate this integral to end up with the second equality in the above equation. Any insight would be greatly appreciated. Thanks!

The integral arises from the definition of the trace of infinite-dimensional matrices. A function [itex]A( \tau , \sigma)[/itex] of two real variables can be regarded as an [itex]\infty \times \infty[/itex] matrix [itex]A[/itex]. The trace of such a matrix is defined by [tex]\mbox{Tr}(A) = \int d \tau \ A(\tau , \tau) .[/tex] And the trace of a product of two such matrices, say [itex]A^{-1}B[/itex], is given by [tex]\mbox{Tr}(A^{-1}B) = \int d \tau d \sigma \ A^{-1}(\tau , \sigma)B (\sigma , \tau ) , \ \ \ \ \ \ \ \ (1)[/tex] where the inverse matrix [itex]A^{-1}[/itex] is defined by [tex]\int d \lambda \ A( \tau , \lambda )A^{-1} (\lambda , \sigma ) = \delta ( \tau - \sigma ) . \ \ \ \ \ \ \ \ \ \ \ \ \ (2)[/tex]
Okay, I must say that I am not an expert on stochastic processes, but let’s see what we can do about your problem. I write your functional derivative as [tex]\frac{\delta \eta (\tau)}{\delta x (\sigma)} = \frac{\partial}{\partial \tau} \delta ( \tau - \sigma ) - \frac{\partial W}{\partial x} \delta ( \tau - \sigma ) . \ \ \ \ \ (3)[/tex] Now, if we define the matrices [tex]M(\tau , \sigma ) = \frac{\delta \eta (\tau)}{\delta x (\sigma)} ,[/tex][tex]A(\tau , \sigma ) = \frac{\partial}{\partial \tau} \delta ( \tau - \sigma ) , \ \ \ \ \ (4)[/tex][tex]B( \tau , \sigma ) = \frac{\partial W}{\partial x} \delta ( \tau - \sigma ) , \ \ \ \ \ \ (5)[/tex] then we can write (3) as [tex]M = A ( 1 - A^{-1}B ).[/tex] Thus [tex]\mbox{det}(M) = \mbox{det}(A) \ \mbox{det} ( 1 - A^{-1}B ) .[/tex] Notice that all dynamical information is contained in the matrix [itex]A^{-1}B[/itex]. And since [itex]A[/itex] does not depend on the trajectory [itex]x(\tau)[/itex], we may absorb [itex]\mbox{det}(A)[/itex] into an overall normalisation constant (if you have done path integrals, you know what I am talking about). So, we may write [tex]\mbox{det}(M) = C \ \mbox{det} ( 1 - A^{-1}B ) = C \exp \left( \mbox{Tr} \ln (1 - A^{-1}B)\right) .[/tex] Expanding the function [itex]\ln (1 - y)[/itex], we get [tex]\mbox{det}(M) = C \exp \left( - \sum_{k = 1}^{\infty} \frac{1}{k} \mbox{Tr}(A^{-1}B)^{k} \right) ,[/tex] or [tex]\mbox{det}(M) = C \exp \left( - \mbox{Tr}(A^{-1}B) - \frac{1}{2} \mbox{Tr}(A^{-1}B)^{2} - \cdots \ \right) . \ \ \ \ (6)[/tex] This is the formal result. To make sense of (6), we need to use the definitions (4) and (5) and evaluate all the traces in (6). First, substituting (4) in (2), we find [tex]\frac{\partial}{\partial \tau} A^{-1}( \tau , \sigma ) = \delta ( \tau - \sigma ) .[/tex] This means that [itex]A^{-1}[/itex] is the step function [recall the identity [itex]\theta^{\prime}(x) = \delta (x)[/itex]] [tex]A^{-1}( \tau , \sigma ) = \theta ( \tau - \sigma ) . \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (7)[/tex] Now, by substituting (7) and (5) in (1), we find [tex]\mbox{Tr} (A^{-1}B) = \int d \tau d \sigma \ \theta ( \tau - \sigma) \delta (\sigma - \tau) \frac{\partial W}{\partial x} ( \sigma ) = \theta (0) \int d \tau \frac{\partial W}{\partial x} .[/tex] Since the step function satisfies [itex]\theta ( \tau ) = +1[/itex] for [itex]\tau > 0[/itex] and [itex]\theta (\tau) = 0[/itex] for [itex]\tau < 0[/itex], it is reasonable to take [itex]\theta (0) = \frac{1}{2}[/itex]. Indeed, this follows from the regularized form of the step function [tex]\theta ( \tau ) = \lim_{\lambda \to 0} \left( \frac{1}{2} + \frac{1}{\pi} \tan^{-1} \left(\frac{\tau}{\lambda} \right)\right) .[/tex] So, we have found our first trace: [tex]\mbox{Tr} (A^{-1}B) = \frac{1}{2} \int d \tau \ \frac{\partial W}{\partial x} . \ \ \ \ (8)[/tex] In fact, one can show that (8) is the only non-zero trace, i.e., [itex]\mbox{Tr}(A^{-1}B)^{n} = 0[/itex] for all [itex]n > 1[/itex]. To keep the equations short, I will prove this only for the [itex]n = 2[/itex] case, because the general case is exactly the same:
[tex]\begin{align*}\mbox{Tr}(A^{-1}B)^{2} &= \int d \tau d \sigma \ (A^{-1}B)( \tau , \sigma ) \ (A^{-1}B) ( \sigma , \tau ) \\ &= \int d \tau d \sigma \left( \int d \lambda \ A^{-1}( \tau , \lambda ) B( \lambda , \sigma )\right) \left( \int d \rho \ A^{-1}( \sigma , \rho ) B( \rho , \tau )\right) . \end{align*}[/tex] Now, if you substitute the definition (5) for the [itex]B[/itex]'s and do the delta-function integrations, you find [tex]\mbox{Tr}(A^{-1}B)^{2} = \int d \tau d \sigma \ A^{-1}( \tau , \sigma ) \ A^{-1}( \sigma , \tau ) \ \frac{\partial W}{\partial x}(\sigma) \ \frac{\partial W}{\partial x}(\tau) .[/tex] Substituting the step functions (7) for the [itex]A^{-1}[/itex]'s, we get [tex]\mbox{Tr}(A^{-1}B)^{2} = \int d \tau d \sigma \ \theta ( \tau - \sigma) \ \theta ( \sigma - \tau ) \ \frac{\partial W}{\partial x}(\sigma) \ \frac{\partial W}{\partial x}(\tau) .[/tex] But the step functions are non-zero only if [itex]\tau > \sigma > \tau[/itex], which is impossible. Thus the integral must vanish. Technically speaking, when the integrand has support on a set of zero measure, the integral vanishes (because in this case there is no delta function to help us out). So, after using (8), our final result (6) becomes [tex]\mbox{det} \left( \frac{\delta \eta}{\delta x}\right) = C \ e^{- \frac{1}{2} \int d \tau \ \frac{\partial W}{\partial x}} .[/tex]
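The ##\theta(0)=\tfrac{1}{2}## prescription can also be checked numerically. The sketch below assumes a midpoint (Stratonovich-type) discretization ##\eta_i = (x_{i+1}-x_i)/\Delta\tau - W\big(\tfrac{1}{2}(x_i+x_{i+1})\big)## with ##x_0## held fixed; the drift ##W## and the path ##x(\tau)## are arbitrary test choices, not taken from the thread. With this discretization the Jacobian ##\partial\eta_i/\partial x_j## is lower triangular, and the log of its determinant relative to the free (##W=0##) case should approach ##-\tfrac{1}{2}\int_0^T W'(x(\tau))\,d\tau##:

```python
import numpy as np

# With a midpoint (Stratonovich-type) discretization,
#   eta_i = (x[i+1] - x[i])/dt - W((x[i] + x[i+1])/2),   x[0] held fixed,
# the Jacobian d(eta)/d(x[1:]) is lower triangular, so its determinant
# relative to the free (W = 0) case is a simple product of diagonal entries.

T, N = 2.0, 4000
dt = T / N
tau = np.linspace(0.0, T, N + 1)

# Arbitrary test data: W'(x) for the drift W(x) = sin(x) - 0.3*x, and an
# arbitrary smooth path x(tau); neither choice is special.
W_prime = lambda x: np.cos(x) - 0.3
x = 0.5 * np.cos(3.0 * tau) + 0.2 * tau

x_mid = 0.5 * (x[:-1] + x[1:])            # midpoint of each time step
wp = W_prime(x_mid)

# log of det(J)/det(J_free): each diagonal entry is (1/dt)*(1 - dt*wp/2),
# while the free diagonal entry is 1/dt.
log_ratio = np.sum(np.log1p(-0.5 * dt * wp))

# Continuum prediction: -(1/2) * int_0^T W'(x(tau)) dtau  (midpoint rule).
prediction = -0.5 * np.sum(wp) * dt

print(log_ratio, prediction)              # the two values agree as dt -> 0
```

With a fully forward (Itô-type) discretization ##W(x_i)## instead, the diagonal entries are just ##1/\Delta\tau## and the determinant ratio is ##1##, which corresponds to ##\theta(0)=0##; the factor ##\tfrac{1}{2}## above is precisely the midpoint prescription ##\theta(0)=\tfrac{1}{2}## used in the derivation.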
 
  • #3
TroyElliott

Thank you. I really appreciate you taking the time to write out such a detailed solution!
 

1. What is a functional determinant?

A functional determinant is the generalization of the determinant of a finite matrix to a linear operator acting on an infinite-dimensional space of functions, such as the operator ##\frac{d}{d\tau}-W'(x(\tau))## appearing in the thread above. In path-integral calculations it typically arises as the Jacobian of a change of variables, exactly as in the problem discussed here.

2. Why is evaluating a functional determinant important?

Functional determinants appear as Jacobians of changes of variables in path integrals (as in this thread) and as prefactors in Gaussian and semiclassical approximations. Being able to evaluate them is therefore essential for computing partition functions, transition probabilities, and one-loop fluctuation corrections.

3. What are some common methods for evaluating a functional determinant?

In simple cases, as in this thread, one writes ##\det M = \exp\left(\operatorname{Tr}\ln M\right)## and evaluates the trace by expanding the logarithm. More generally, common techniques include zeta-function regularization, heat-kernel methods, and the Gel'fand–Yaglom theorem, which reduces the determinant of a one-dimensional differential operator to the solution of an initial-value problem.
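As one concrete illustration of the Gel'fand–Yaglom approach (a minimal sketch with an operator chosen for simplicity, independent of the thread above): for ##-\frac{d^2}{d\tau^2}+\omega^2## on ##[0,T]## with Dirichlet boundary conditions, the ratio of determinants is ##\sinh(\omega T)/(\omega T)##, and a finite-difference computation reproduces it:

```python
import numpy as np

# Finite-difference check of the ratio
#   det(-d^2/dtau^2 + w^2) / det(-d^2/dtau^2) = sinh(w*T) / (w*T)
# for Dirichlet boundary conditions on [0, T].

T, w, N = 1.5, 2.0, 1000          # interval length, frequency, interior points
h = T / (N + 1)

# Dense tridiagonal discretization of -d^2/dtau^2 on the interior points.
K0 = (np.diag(np.full(N, 2.0)) - np.diag(np.ones(N - 1), 1)
      - np.diag(np.ones(N - 1), -1)) / h**2
K = K0 + w**2 * np.eye(N)         # add the w^2 term

# Work with log-determinants to avoid overflow; both matrices are
# positive definite, so the signs returned by slogdet are +1.
_, logdet_K = np.linalg.slogdet(K)
_, logdet_K0 = np.linalg.slogdet(K0)
ratio = np.exp(logdet_K - logdet_K0)

print(ratio, np.sinh(w * T) / (w * T))   # agree as N -> infinity
```

The parameters ##T##, ##\omega##, and ##N## here are arbitrary; the discrete ratio converges to the continuum value as ##N\to\infty##.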

4. What are some applications of functional determinants?

Functional determinants arise throughout mathematics and physics: in one-loop effective actions and semiclassical (instanton) calculations in quantum field theory, in Casimir-energy computations, in the statistical mechanics of fluctuations, and, as in this thread, in the path-integral treatment of stochastic differential equations. Log-determinants of covariance operators also appear in Gaussian models used in statistics and signal analysis.

5. Are there any challenges in evaluating a functional determinant?

Yes. The operators involved have infinitely many eigenvalues, so a naive product over them diverges and the determinant must be regularized, for example by zeta-function regularization or by normalizing against a reference operator (the role played by the constant ##C## above). Discretization ambiguities, such as the value assigned to ##\theta(0)## in this thread, and the choice of boundary conditions also affect the result and must be treated carefully.
