Rasalhague
I'm reading Daniel T. Gillespie's A QM Primer: An Elementary Introduction to the Formal Theory of QM. In the section on continuous eigenvalues, he admits to playing "fast and loose" with the laws of calculus, with respect to the Dirac delta function. I'd like to understand it better, or, if such understanding requires more advanced conceptual machinery than I have yet, at least to get some pointers. Here's what I've been able to find so far:
Formalised as a probability measure, I think the Dirac delta means
[tex]\delta_x : \Sigma \rightarrow [0,1],[/tex]
[tex]\[ \delta_x(A) = \left\{
\begin{array}{l l}
1 & \quad \mbox{if $x$ is in $A$}\\
0 & \quad \mbox{if $x$ is not in $A$}\\ \end{array} \right. \][/tex]
where [itex]\Sigma[/itex] is any sigma algebra on the configuration space of the system.
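If [itex]\delta_x[/itex] is read as this point mass, the sifting property is just the Lebesgue integral against it. A sketch, assuming [itex]f[/itex] is measurable:
[tex]\int_{\mathbb{R}} f \, d\delta_x = \int_{\{x\}} f \, d\delta_x + \int_{\mathbb{R} \setminus \{x\}} f \, d\delta_x = f(x) \, \delta_x(\{x\}) + 0 = f(x).[/tex]
No infinite-valued integrand appears here; the informal "[itex]\delta_a(x) \, dx[/itex]" is replaced by integration against the measure [itex]d\delta_x[/itex].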
Formalised as a distribution, I think it means
[tex]\delta_x : L^2(\mathbb{R},\mathbb{C}) \rightarrow \mathbb{C},[/tex]
[tex]\delta_x(\Psi) = \Psi(x).[/tex]
A distribution is a linear functional, continuous in a suitable topology, on the vector space of test functions, where a test function is a smooth function with bounded support (Griffel, Applied Functional Analysis).
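One way to make sense of the "infinitely tall, infinitesimally wide" picture without leaving ordinary calculus is to view [itex]\delta_a[/itex] as a limit of narrowing Gaussians: the integral of a function against each Gaussian is a perfectly ordinary integral, and these integrals converge to the value at [itex]a[/itex]. A minimal numerical sketch (the names nascent_delta and integrate_against are mine, not Gillespie's or Griffel's):

```python
import math

def nascent_delta(x, eps):
    # Gaussian of width eps with unit area: an ordinary function
    # approximating the Dirac delta as eps shrinks
    return math.exp(-x**2 / (2 * eps**2)) / (eps * math.sqrt(2 * math.pi))

def integrate_against(f, a, eps, lo=-2.0, hi=3.0, n=50001):
    # plain Riemann sum of f(x) * delta_eps(x - a): no infinities anywhere
    h = (hi - lo) / (n - 1)
    return sum(f(lo + i * h) * nascent_delta(lo + i * h - a, eps) * h
               for i in range(n))

# As eps shrinks, the ordinary integral approaches f(a) = cos(0.5)
for eps in (0.5, 0.1, 0.01):
    print(eps, integrate_against(math.cos, 0.5, eps))
```

In the distributional picture none of these Gaussians is an eigenfunction of anything; each one defines a bona fide linear functional by integration, and [itex]\delta_a(\Psi) = \Psi(a)[/itex] is the limit of these pairings.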
I wonder how these definitions relate to Gillespie's "fast and loose" Dirac delta. In an expression like
[tex](\delta_x, \Psi),[/tex]
should I reinterpret this notation, for what Gillespie calls the inner product between two state vectors, as
[tex]\delta_x(\Psi)[/tex]
(that is, the value of a dual vector on a vector) or is there a state vector in [itex]L^2[/itex] dual to each Dirac delta distribution? If not, does this mean that the position operator has no actual eigenvectors, only something analogous to eigenvectors, which Gillespie informally treats as eigenvectors but which is really a set of dual vectors? How does the probability measure definition relate to the distribution definition? How does it formalise Gillespie's idea of an infinitesimal multiplied by a function of infinite value?
In the equation
[tex]f(a) = \int_{-\infty}^{\infty} f(x) \delta_a(x) \, dx,[/tex]
should I interpret this, in the distribution formalism, as an integral of the product of [itex]f[/itex] with [itex]\delta_a \circ Id[/itex] considered as a function, where Id denotes the identity test function? And in the measure formalism, as an integral of the product of [itex]f[/itex] with [itex]\delta_a \circ g[/itex], where
[tex]g : \mathbb{R} \rightarrow \Sigma,[/tex]
[tex]g(x) = \{x\},[/tex]
for some appropriate sigma algebra? It seems to me these naive guesses reintroduce exactly the problem such formalisms are supposed to solve.*
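To spell out the worry in the measure reading: the composite [itex]\delta_a \circ g[/itex] is the indicator function of the single point [itex]a[/itex],
[tex]\[ (\delta_a \circ g)(x) = \delta_a(\{x\}) = \left\{
\begin{array}{l l}
1 & \quad \mbox{if $x = a$}\\
0 & \quad \mbox{if $x \neq a$}\\ \end{array} \right. \][/tex]
which vanishes Lebesgue-almost everywhere, so
[tex]\int_{-\infty}^{\infty} f(x) (\delta_a \circ g)(x) \, dx = 0[/tex]
for every [itex]f[/itex], which is the collapse to zero just noted.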
Can anyone recommend a text, online or in print, which takes up these ideas where Gillespie leaves off, beginning at a similarly elementary level?

*EDIT. Ah, perhaps the rule is: when viewing the Dirac delta as a distribution (in the sense of a continuous linear functional on test functions), take the notation
[tex]\int_{-\infty}^{\infty} \Psi(x) \delta_a(x) \; dx[/tex]
to mean, not what it would denote if delta was replaced by any other symbol (in which case, the value would always be zero), but rather take the integral notation to mean [itex]\delta_a(\Psi) \equiv \Psi(a)[/itex]. Not sure how this relates to the measure idea, but maybe some similar device is employed there.
(I wonder if there's any connection between the use of the word distribution for this kind of function, and its use in probability theory to mean a probability measure on the sigma algebra associated with an observation space.)