# What physical meaning can the “determinant” of a divergency have?


#### Anixx

TL;DR Summary
What physical meaning can the “determinant” of a divergency (divergent integral or series) have? Is there a parallel with the functional determinant?
I am [working][1] on an algebra of "divergencies", that is, infinite integrals, series, and germs.
So I decided to construct something similar to the determinant of a matrix for these entities:

$$\det w=\exp(\operatorname{reg }\ln w)$$
which is analogous to how the determinant of a matrix can be expressed, ##\det M=\exp(\operatorname{tr}\ln M)##, except that we take the finite part (regularize) instead of the trace.
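For finite matrices the identity just mentioned can be checked numerically; here is a minimal sketch (my own illustration, assuming a symmetric positive-definite ##M## so that a real logarithm exists):

```python
import numpy as np

# A symmetric positive-definite matrix, so ln M is real and
# tr(ln M) can be computed from the (positive) eigenvalues.
M = np.array([[2.0, 1.0],
              [1.0, 3.0]])

tr_log = np.sum(np.log(np.linalg.eigvalsh(M)))

print(np.linalg.det(M))   # approximately 5.0
print(np.exp(tr_log))     # approximately 5.0
```

The proposed ##\det w## replaces the trace in the exponent with the regularized finite part.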

Like the determinant of a matrix, the determinant of a divergency can be negative. It does not satisfy the requirements for a norm (for instance, the triangle inequality) and is not even continuous. Still, it has some of the usual properties, such as $$1/\det w=\det (1/w).$$

Below is a table of some divergencies with their finite parts and determinants (in the last column).

An interesting property is that the constant ##e^{-\gamma}## often appears in the expressions for these determinants.

I wonder what such a determinant of a divergent integral or series can indicate. Can it tell us something about its properties?

Given that the regularized value (the analog of the trace) is defined as
$$\operatorname{reg}(\omega_-+x)^a= B_a(x)=-a\zeta(1-a,x)$$
I see some similarity with functional determinant that also includes a zeta function.
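The stated definition can be spot-checked for integer exponents, where ##-a\zeta(1-a,x)## should reduce to the Bernoulli polynomial ##B_a(x)## (mpmath's `zeta(s, x)` is the Hurwitz zeta):

```python
from mpmath import mp, zeta, bernpoly

mp.dps = 30
x = mp.mpf('0.7')

# For integer a, reg(omega_- + x)^a = -a*zeta(1-a, x) should equal
# the Bernoulli polynomial B_a(x).
pairs = [(-a * zeta(1 - a, x), bernpoly(a, x)) for a in (1, 2, 3, 4)]
for lhs, rhs in pairs:
    print(lhs, rhs)
```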

$$\begin{array}{cccccc}
\text{Delta form} & \text{In terms of } \tau, \omega_+,\omega_- & \text{Finite part} & \text{Integral or series form} & \text{Germ form} & \text{Determinant}\\
\pi \delta (0) & \tau & 0 & \int_0^{\infty } \, dx;\int_0^{\infty } \frac{1}{x^2} \, dx & \underset{x\to\infty}{\operatorname{germ}} x;\underset{x\to0^+}{\operatorname{germ}}\frac1x & \frac{e^{-\gamma}}4 \\
\pi \delta (0)-\frac{1}{2} & \omega _-;\tau-\frac{1}{2} & -\frac{1}{2} & \sum _{k=1}^{\infty } 1 & \underset{x\to\infty}{\operatorname{germ}} (x-1/2) & e^{-\gamma} \\
\pi \delta (0)+\frac{1}{2} & \omega _+;\tau+\frac{1}{2} & \frac{1}{2} & \sum _{k=0}^{\infty } 1 & \underset{x\to\infty}{\operatorname{germ}} (x+1/2) & e^{-\gamma} \\
2 \pi \delta (i) & e^{\omega_+}-e^{\omega_-}-1 & 0 & \int_{-\infty }^{\infty } e^x \, dx & \underset{x\to\infty}{\operatorname{germ}} e^x \\
 & \frac{\tau ^2}{2}+\frac{1}{24};\frac{\omega_+^3-\omega_-^3}6 & 0 & \int_0^{\infty} x \, dx;\int_0^\infty \frac2{x^3}dx & \underset{x\to\infty}{\operatorname{germ}}\frac{x^2}2;\underset{x\to0^+}{\operatorname{germ}} \frac1{x^2}\\
 & \frac{\tau ^2}{2}-\frac{1}{24} & -\frac1{12} & \sum _{k=0}^{\infty } k & \underset{x\to\infty}{\operatorname{germ}} \left(\frac{x^2}2-\frac1{12}\right) \\
-\pi \delta''(0) & \frac {\tau^3}3 +\frac\tau{12};\frac{\omega_+^4-\omega_-^4}{12} & 0 & \int_0^\infty x^2dx;\int_0^\infty\frac6{x^4}dx & \underset{x\to\infty}{\operatorname{germ}}\frac{x^3}3;\underset{x\to0^+}{\operatorname{germ}} \frac2{x^3}\\
\pi^2\delta(0)^2-\pi\delta(0)+1/4 & \omega_-^2 & \frac16 & 2 \int_0^{\infty } \left(x-\frac{1}{2}\right) \, dx+\frac{1}{6} & \underset{x\to\infty}{\operatorname{germ}}B_2(x) & e^{-2\gamma}\\
\pi^2\delta(0)^2+\pi\delta(0)+1/4 & \omega_+^2 & \frac16 & 2 \int_0^{\infty } \left(x+\frac{1}{2}\right) \, dx+\frac{1}{6} & \underset{x\to\infty}{\operatorname{germ}}B_2(x+1) & e^{-2\gamma}\\
\pi^2\delta(0)^2 & \tau^2 & -\frac1{12} & \int_{-\infty}^{\infty } |x| \, dx-\frac{1}{12} & \underset{x\to\infty}{\operatorname{germ}}B_2(x+1/2) & \frac{e^{-2\gamma}}{16} \\
 & \ln \omega_++\gamma & 0 & \int_1^\infty \frac{dx}x;\sum_{k=1}^\infty \frac1k -\gamma & \underset{x\to\infty}{\operatorname{germ}}\ln x\\
-3\pi\delta''(0)-\frac14 \pi\delta(0);\pi^3\delta(0)^3 & \tau^3 & 0 & \int_0^\infty \left(3x^2-\frac1{4}\right)dx & \underset{x\to\infty}{\operatorname{germ}}B_3(x+1/2) & \frac{e^{-3\gamma}}{64} \\
\frac{2\pi\delta(i)+1}{e-1} & e^{\omega_-} & \frac1{e-1} & \frac1{e-1}+\frac1{e-1}\int_{-\infty}^\infty e^x dx & \underset{x\to\infty}{\operatorname{germ}} \frac{e^x+1}{e-1} & \frac1{\sqrt{e}}\\
\frac{2\pi\delta(i)+1}{1-e^{-1}} & e^{\omega_+} & \frac1{1-e^{-1}} & \frac1{1-e^{-1}}+\frac1{1-e^{-1}}\int_{-\infty}^\infty e^x dx & \underset{x\to\infty}{\operatorname{germ}} \frac{e^x+1}{1-e^{-1}} & \sqrt{e}\\
 & (-1)^\tau & \frac\pi{2} & & & 1\\
\end{array}$$
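One entry of the table can be spot-checked numerically: the row ##\ln\omega_++\gamma## with finite part ##0## encodes the classical limit ##H_n-\ln n\to\gamma##, which (via ##\det w=\exp(\operatorname{reg}\ln w)##) is what puts ##e^{-\gamma}## in the determinant column for ##\omega_+##. A quick check with mpmath:

```python
from mpmath import mp, euler, log, harmonic

mp.dps = 25
# Finite part of the harmonic series after removing the germ ln(n):
# H_n - ln(n) -> Euler's gamma, matching the ln(omega_+) table row.
ns = (10**3, 10**5, 10**7)
vals = [harmonic(n) - log(n) for n in ns]
for n, v in zip(ns, vals):
    print(n, v)
print(euler)
```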

[1]: https://mathoverflow.net/questions/115743/an-algebra-of-integrals/342651#342651

Geometrically speaking, determinants are (signed i.e. oriented) volumes or signed volume scaling factors of operators.
Given a basis ##[u_1, u_2, \ldots, u_n]## for a normed vector space ##V##, define the "determinant" form:
$$\det(u_1,u_2,\ldots, u_n) = \operatorname{vol}(\{x: x = w_1u_1+w_2 u_2+\cdots+w_n u_n,\; 0\le w_k \le 1\})$$
If the basis is orthonormal and right-hand oriented, then this is a unit ##n##-cube with volume 1. The determinant of an operator is then the volume resulting from its action on an orthonormal basis, or equivalently the multiplicative change in volume when it acts on an arbitrary basis. If the "basis" vectors are not linearly independent, then the volume is zero.
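As a concrete finite-dimensional illustration (my own example, not from the thread): the parallelepiped spanned by the columns of ##A## has volume ##|\det A|##, and composing with ##B## rescales every such volume by ##|\det B|##:

```python
import numpy as np

# Columns of A span a parallelepiped of volume |det A| = 3;
# acting with B (det B = 2*3*0.5 = 3) scales all volumes by |det B|.
A = np.array([[1., 2., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
B = np.diag([2., 3., 0.5])

vol_A = abs(np.linalg.det(A))
vol_BA = abs(np.linalg.det(B @ A))
print(vol_A, vol_BA / vol_A, abs(np.linalg.det(B)))  # 3.0 3.0 3.0
```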

To apply this to your topic you would really need to find some operator interpretation for your expressions such that the determinants multiply when you compose (multiply) them as operators. Now, we often view an integral as a "trace" when we are implicitly integrating a function of two variables, say ##x## and ##y##, on the constraint surface (or some part thereof) of the constraint ##x=y##.

[From here I dive into some possibilities which may not apply.]

Viewed more generally, integration across a domain (with a possible weighting function) can be viewed as a linear functional, i.e. a dual vector to the space of functions under consideration. Likewise, a trace is an element of the dual space to the space of operators. In particular, the trace on an operator algebra with a Hermitian adjoint ##\dagger## would be (the dimension times) the dual to the identity operator under the "natural" extension of the inner product. Call this ##\boldsymbol{1}^{\ddagger}##.

Hence ##\boldsymbol{1}^{\ddagger}(\boldsymbol{1})=\dim(V)## and, in general, ##\boldsymbol{1}^{\ddagger}(A)=\operatorname{tr}(A)##.

Of course, in infinite-dimensional spaces this leads to divergent traces, but you can consider a sequence of partial traces defined via a sequence of projection operators converging to the identity, each projecting onto the span of the first ##k## elements of an orthonormal basis (basically the identity matrix on the first ##k## diagonal entries and zero elsewhere), and then normalize the sequence with a factor of ##1/k##.
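A small sketch of this normalization for diagonal operators (the helper name here is mine, purely for illustration):

```python
import numpy as np

def normalized_partial_trace(diag, k):
    """Trace of the operator compressed onto the first k basis
    vectors, divided by k (hypothetical helper for illustration)."""
    return np.sum(diag[:k]) / k

# The identity has normalized partial trace 1 for every k, while
# diag(1, 1/2, 1/3, ...) has normalized partial traces H_k/k -> 0.
d = 1.0 / np.arange(1, 10001)
print(normalized_partial_trace(np.ones(10000), 10000))  # 1.0
print(normalized_partial_trace(d, 10000))
```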

But in order to exponentiate these traces to define a determinant, the exponential map must be taken using the operator product: ##e^A = 1+A + A^{2\circ}/2! + A^{3\circ}/3!+\cdots##, where ##A^{k\circ} = A\circ A \circ\cdots\circ A## with ##k## factors.

For an algebra where this is consistent with integrals of functions, you can associate with each function ##f## the functional operator of left multiplication ##f\cdot##, such that ##[f\cdot g](x) = f(x)\cdot g(x)##.
An equivalent way of defining this is to consider functional operators of the form:
$$G: h \mapsto G[h] : G[h](x) = \int_{\Omega} G(x,y)h(y)dy$$
(Think Green's functions!)

Then the left multiplication operator ##f\cdot## can be defined in this format using ##G(x,y) = f(x)\delta(x-y)## with ##\delta## the Dirac delta-"function". With this operator definition we note that the exponential of the "left multiplication by ##f(x)##" operator will simply be the "left multiplication by ##e^{f(x)}##" operator i.e. the operator exponential reverts to the "normal" exponential of the function's value.
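On a discretized domain this is easy to see: "multiplication by ##f##" becomes a diagonal matrix, and the operator-product exponential series lands exactly on the diagonal matrix of ##e^{f(x)}## (a sketch under that discretization assumption):

```python
import numpy as np

# Discretize "multiplication by f" as a diagonal matrix F and
# exponentiate it with the truncated operator-product series.
x = np.linspace(0.0, 1.0, 5)
f = np.sin(x)
F = np.diag(f)

E = np.eye(len(x))
term = np.eye(len(x))
for k in range(1, 30):
    term = term @ F / k   # accumulates F^k / k!
    E = E + term

# The operator exponential of f· is multiplication by e^f:
print(np.allclose(E, np.diag(np.exp(f))))  # True
```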

Here's a little more background on the algebra of multiplication of functions. If you restrict yourself for the moment to analytic functions (on some interval), so that they have convergent power series expansions (on that interval), you may see this as the algebra generated by ##q=x\cdot##. It is, by itself, an Abelian algebra, but you may extend it with the elements generated by the derivative operator ##p=\frac{d}{dx}##; with the commutation relation ##[p,q]=1## you get the canonical algebra of analytic functions of both ##p## and ##q##, equivalent to the algebra of bosonic creation and annihilation operators, or the algebra of position and momentum operators of a quantum particle (absorbing the ##i\hbar## into one of the operators, of course). This was the context in which I delved into these ideas.
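The commutation relation ##[p,q]=1## can be verified symbolically, for instance with SymPy applied to an arbitrary function:

```python
import sympy as sp

x = sp.symbols('x')
g = sp.Function('g')(x)

# p = d/dx, q = multiplication by x; then [p, q] g = p(q g) - q(p g) = g
pq = sp.diff(x * g, x)
qp = x * sp.diff(g, x)
print(sp.simplify(pq - qp))  # g(x)
```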

I think this relates to where you touch upon Fourier transforms of functions, which are applied in quantum theory to transform between position and momentum representations. Note that I could alternatively have associated with each function ##f## the convolution operator:
$$f*: [f*g](x) = \int_{\Omega} f(x-y)g(y)dy$$
However, recall that the Fourier transform of a product yields a convolution and vice versa. So by considering the canonical extension you get two for the price of one.
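For the discrete Fourier transform this product–convolution duality can be checked directly (with NumPy's unnormalized DFT convention, the DFT of a pointwise product equals ##1/N## times the circular convolution of the DFTs):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
a = rng.standard_normal(N)
b = rng.standard_normal(N)

# DFT of the pointwise product...
lhs = np.fft.fft(a * b)

# ...equals (1/N) times the circular convolution of the DFTs:
# conv[k] = sum_m A[m] * B[(k - m) mod N]
A, B = np.fft.fft(a), np.fft.fft(b)
conv = np.array([np.sum(A * np.roll(B[::-1], k + 1)) for k in range(N)])
print(np.allclose(lhs, conv / N))  # True
```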

I know I have taken something of a crooked path through several thoughts here, but the hope was to stimulate some ideas for your application. I think you really do need to construct some such algebraic representation if you want to justify the name "determinant". You will also, I believe, run into functional integration (see Wikipedia: Functional integration), i.e. integrating over spaces of functions, in order to write down an "expression" for these determinants. I recall seeing functional determinants defined this way in texts on field theory (often with the use of "anticommuting c-numbers", i.e. Grassmann algebras, to get convergent results). I'll have to revisit this some time soon.

In my system the analog of the trace is the regularized value (finite part) of a divergency. There is usually no problem finding a "trace".

I initially was going to name this thing "modulus", but later decided to call it "determinant" because it does not obey the triangle inequality and lacks other qualities of a norm.

But later I also noticed the striking similarity to the functional determinant. For the functional determinant we have:
$$\zeta _{S}(a)=\operatorname {tr} \,S^{-a}\,$$

$$\det S=e^{-\zeta _{S}'(0)}\,,$$
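In finite dimensions this zeta-regularized determinant reduces to the ordinary one: with eigenvalues ##\lambda_i##, ##\zeta_S(a)=\sum_i\lambda_i^{-a}## gives ##-\zeta_S'(0)=\sum_i\ln\lambda_i##, so ##e^{-\zeta_S'(0)}=\prod_i\lambda_i##. A numerical sketch (toy eigenvalues of my choosing):

```python
from mpmath import mp, diff, exp

mp.dps = 25
lam = [2, 3, 5]   # eigenvalues of a toy finite-dimensional "S"
zeta_S = lambda a: sum(l ** (-a) for l in lam)

# det S = exp(-zeta_S'(0)) should equal the product 2*3*5 = 30
det_S = exp(-diff(zeta_S, 0))
print(det_S)
```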

In my system we have:

$$\operatorname{reg } (\omega_-+s)^a=-a\zeta(1-a,s)$$
(by definition)

and this (which I specifically tested in Mathematica):
$$\det (\omega_-+s)=\exp\left(\operatorname{p.v.}_{a\to0}\frac{d}{da}\bigl( -a\zeta(1-a,s)\bigr)\right)$$

The expression under the derivative has a pole at ##a=0##, so we take the principal value there.
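This can also be spot-checked in open-source tools. Interpreting the principal value as a symmetric (central-difference) limit straddling the pole, which is my reading of the p.v. here, the results agree with the determinant column of the table:

```python
from mpmath import mp, mpf, zeta, exp, euler

mp.dps = 30

def det_shifted(s, h=mpf('1e-10')):
    # p.v. of d/da [-a*zeta(1-a, s)] at a = 0, taken as a symmetric
    # central difference straddling the pole of zeta(1-a, s)
    g = lambda a: -a * zeta(1 - a, s)
    return exp((g(h) - g(-h)) / (2 * h))

print(det_shifted(1))           # omega_+ : should match e^(-gamma)
print(det_shifted(mpf('0.5')))  # tau     : should match e^(-gamma)/4
print(exp(-euler), exp(-euler) / 4)
```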

At first this similarity looked like a coincidence to me.
