Interpreting path integral averages as measure integrals

In summary, the conversation discusses the difference between the Riemann–Stieltjes integral and the Lebesgue integral, with a focus on the latter. The Lebesgue integral is defined over measurable sets and allows for more flexibility in integration, as seen in examples such as the Dirichlet function. The conversation also touches on partitioning the range versus partitioning the domain in probability, and the confusion that can arise when people refer to integrals that may require additional definitions.
  • #1
rs123
Hi all,

Sorry if this is in the wrong place. I'm trying to understand probability theory a bit more rigorously and so am coming up against things like lebesgue integration and measure theory etc and have a couple of points I haven't quite got my head around.

So starting from the basics, (someone please correct me if I'm wrong on any of this) in contrast to the Riemann integral (I'm very aware I'm not being rigorous here)

[tex] \int_a^b f(x)\,dx =\lim_{n\to\infty}\sum_{i=1}^{n} f(x_i)(x_{i+1}-x_i) [/tex]

the Lebesgue integral, which exists if the Riemann integral exists, is, for [itex]y=f(x)[/itex],

[tex] \int_{[a,b]} f\,d\mu =\lim_{n\to\infty}\sum_{i=1}^{n} y_i\, \mu(\{x\in [a,b] : y_i\leq f(x)< y_{i+1}\}) [/tex]

where we consider the limit (again, not rigorously) of a sum over intervals in the range of the function, assigning to each value a weight or `measure' which generalises length, as opposed to the Riemann case where we form intervals on the domain.
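To make the two summation procedures concrete, here is a small numerical sketch (my own illustration, not from the thread) comparing the two sums for [itex]f(x)=x^2[/itex] on [itex][0,1][/itex], estimating the level-set measures with a fine grid of domain points:

```python
import numpy as np

f = lambda x: x**2          # example integrand on [0, 1]; exact integral is 1/3
a, b, n = 0.0, 1.0, 10_000

# Riemann: partition the DOMAIN [a, b] and sum f(x_i) * (x_{i+1} - x_i)
x = np.linspace(a, b, n + 1)
riemann = np.sum(f(x[:-1]) * np.diff(x))

# Lebesgue-style: partition the RANGE of f and weight each level y_i by the
# measure (here: length) of {x in [a, b] : y_i <= f(x) < y_{i+1}},
# estimated by the fraction of a fine grid of domain points landing in it
xs = np.linspace(a, b, 200_001)
vals = f(xs)
y = np.linspace(vals.min(), vals.max(), n + 1)
measures = np.histogram(vals, bins=y)[0] * (b - a) / len(xs)
lebesgue = np.sum(y[:-1] * measures)

print(riemann, lebesgue)    # both approach 1/3
```

Both sums converge to the same value; the difference lies purely in whether the partition lives on the domain or on the range.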

So far so good (unless I've made a howler). Now with this second approach I see the point of discussing probability measures (being measures which map to [0,1]): heuristically we consider expected values of, for example, the function [itex]y=f(x)[/itex] as a sum over the range of the function, [itex]\bar{y}=\sum_i y_i P(y_i)[/itex], where [itex]P(y)[/itex] is the probability measure of [itex]y[/itex], or rather the probability that [itex]f(x)=y[/itex].

Now, taking as an example the Wiener measure [itex]P_W(\omega)[/itex] where now [itex]\omega[/itex] is a set of paths, I keep seeing expectation integrals of the form

[tex]\int_{\Omega}f(\omega)\,dP_W(\omega)=\overline{f(\omega)}[/tex]

referred to as Lebesgue-type integrals, and this I don't understand. [itex]P_W(\omega)[/itex] is the measure of a (set of) path(s): it returns the probability of a set of paths (unless I'm drastically misunderstanding its definition), not the probability of observing [itex]f(\omega)=y[/itex], and they are not the same thing. As such the integral only makes sense to me if it sums over the domain, not the range, and so is surely not really a Lebesgue integral? As a simpler equivalent, consider rolling a number of dice with outcomes [itex]x\in \Omega[/itex], where we want to calculate the expected value of the sum of all of the shown faces, [itex]f(x)=y[/itex]. We can either write this as

[tex]\sum_i y_i P(f(x)=y_i)[/tex]

such that we sum over all possible sums of the shown faces on the dice (e.g. for two dice [itex]y_i\in \{2,3,4, \ldots, 12\}[/itex]), akin to a Lebesgue integral, or we can write it as

[tex]\sum_i f(x_i) P(x_i)[/tex]

such that we consider a sum over all events [itex]x_i\in\{\{1,1\},\{1,2\},\ldots,\{6,6\}\}[/itex], which is not akin to a Lebesgue integral, yet this is how all expectation values using, for example, the Wiener measure are written.
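For what it's worth, the two-dice version of these two sums can be checked directly (a small illustrative script of my own; both bookkeeping styles give the same expectation):

```python
from itertools import product
from collections import Counter

faces = range(1, 7)
outcomes = list(product(faces, repeat=2))   # the domain: all 36 ordered rolls
p = 1 / len(outcomes)                       # uniform probability of each outcome

# sum over the DOMAIN: sum_i f(x_i) P(x_i)
by_domain = sum((a + b) * p for a, b in outcomes)

# sum over the RANGE: sum_i y_i P(f(x) = y_i), grouping outcomes by their sum
dist = Counter(a + b for a, b in outcomes)  # y_i -> number of outcomes with that sum
by_range = sum(y * count * p for y, count in dist.items())

print(by_domain, by_range)                  # both give 7, the expected sum
```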

Further I cannot seem to rationalise this when writing out, in such an example, the explicit path integral as

[tex]\overline{f(\omega)}=\int_\Omega f(\omega)\,dP_W(\omega)=\int_{\Omega}[\mathcal{D}\omega]\, f(\omega)e^{-S(\omega)}[/tex]

again, [itex][\mathcal{D}\omega][/itex] is constantly referred to as a path integral measure, but as before the integral only makes any sense if you are summing over the domain, i.e. taking a probability, or rather a weight ([itex]e^{-S(\omega)}[/itex]), for every path; it doesn't have anything to do with the range, which depends on [itex]f(\omega)[/itex].

I guess this could come down to a misunderstanding of integration with respect to a measure. Am I wrong to think that the summation always has to occur over the range? To clarify, I understand that one can always write the integral

[tex] \int_a^b f(x)dx =\int_{[a,b]} fd\mu[/tex],

but this does depend on appropriately defining the measure [itex]\mu[/itex]. What I don't understand is when the measure featuring in the above integral is explicitly something that doesn't depend on [itex]f[/itex], for example, the Wiener measure, which is defined as the probability of observing a set of paths.

Sorry for the rambling! Would be most appreciative if someone could point me in the right direction.

Many thanks,

R
 
  • #2
Strictly speaking, what you are calling the "Lebesgue integral", allowing some measure function defined on intervals, is the "Riemann–Stieltjes" integral, where the measure function does not even have to be differentiable or continuous, just increasing. For example, if we define [itex]\mu(x)[/itex] to be the largest integer less than or equal to x, [tex]\int_0^n f(x)\, d\mu= \sum_{i= 1}^n f(i)[/tex].
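A quick numerical sketch of this sum (an editorial illustration; the step size and right-endpoint tags are arbitrary choices, and the upper limit is taken as 5.5 so that no jump sits exactly at an endpoint):

```python
import math

def stieltjes_sum(f, g, a, b, n=110_000):
    """Riemann-Stieltjes sum: sum of f(x_{i+1}) * (g(x_{i+1}) - g(x_i)) on [a, b]."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x0, x1 = a + i * h, a + (i + 1) * h
        total += f(x1) * (g(x1) - g(x0))   # right-endpoint tags
    return total

f = lambda x: x * x
# with g(x) = floor(x), each integer j in (a, b] contributes a unit jump,
# so the sum approaches f(1) + f(2) + ... + f(5) = 55
approx = stieltjes_sum(f, math.floor, 0.0, 5.5)
print(approx)
```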

But for the general Lebesgue integral, the measure only has to be defined on "measurable" sets, not necessarily intervals. For example, the Lebesgue measure of any countable set is 0, so we can integrate the function "f(x)= 2 if x is irrational, 1 if x is rational" over any interval and get 2 times the length of the interval.
 
  • #3
Let's forget intervals for now; that was just to make the two distinct with notation. I understand that it is defined for measurable sets, allowing the discussion of your example or the Dirichlet function or what have you: but surely this is a result of being able to consider it as a sum over intervals in the range as opposed to the domain?

I'm very aware I'm pretty shaky on this, and I apologise if I'm really missing the point, but I'm struggling with your first example and why it helps me. I see that one uses the Riemann–Stieltjes integral to define [itex] \int f(x)dg(x)[/itex] in the usual Riemann sense, again partitioning the domain, but what I call the Lebesgue integral (which by all sources I see is the Lebesgue integral, granted with intervals changed to sets) is fundamentally different in the summation procedure. Furthermore, whilst I can see in a handwavy way that a Riemann–Stieltjes approach to your choice of [itex]\mu(x)[/itex] might give that result (though I don't think it is integrable in that sense), trying to formulate it according to the Lebesgue integral, as the limit of a sum of measure multiplied by value of simple functions partitioned in the range, doesn't seem to converge. So is my definition (my second equation in my first post) at all correct?

All of this doesn't help me with the partition-the-range vs partition-the-domain issue in probability, for which I can conceptually `get' either integral form, but I don't understand when people refer to integrals that look as though they need to explicitly sum over the domain as Lebesgue integrals. (Is the problem in my definition of Lebesgue integrals, for example?)

To clarify: here is what every source I have read says about Lebesgue integrals of the form [itex] \int f(x)d\mu[/itex] or [itex] \int f(x)dP[/itex], where [itex] y=f(x)[/itex] and the latter would be an expectation value:

A. Partition the range into [itex] y_i[/itex] (using simple functions)
B. Assign a number [itex] m_i[/itex] to each [itex]y_i [/itex] through the measure. (This could, for example, be the Lebesgue measure, such that for simple 1D integration on the real line [itex] m_i[/itex] corresponds to the `size' of the domain that gives [itex] y_i[/itex]. Or, when using a probability measure, it would be the probability, but surely necessarily of observing [itex] f(x)\in [y_i,y_{i+1})[/itex].)
C. Take the limit of the sum of [itex] y_i m_i[/itex]
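The steps A-C can be sketched numerically for a discrete probability space (the space, the function [itex]f[/itex], and the helper name are my own illustrative choices):

```python
import numpy as np

# a toy probability space: outcomes x with probabilities P(x)
outcomes = np.arange(10)
probs = np.full(10, 0.1)
f = lambda x: (x - 4.5) ** 2

# direct expectation, summing over the domain: sum_i f(x_i) P(x_i)
direct = np.sum(f(outcomes) * probs)

def lebesgue_expectation(f, outcomes, probs, n_levels):
    """Steps A-C: partition the range (A), measure each level set via
    P(y_i <= f(x) < y_{i+1}) (B), and sum y_i * m_i (C)."""
    vals = f(outcomes)
    y = np.linspace(vals.min(), vals.max(), n_levels + 1)
    total = 0.0
    for lo, hi in zip(y[:-1], y[1:]):
        in_level = (vals >= lo) & (vals < hi)    # the level set for [lo, hi)
        total += lo * probs[in_level].sum()
    total += y[-1] * probs[vals == y[-1]].sum()  # half-open levels miss the maximum
    return total

print(direct, lebesgue_expectation(f, outcomes, probs, 1000))  # both near 8.25
```

Refining the range partition (increasing n_levels) drives the A-C sum toward the direct domain sum.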

My question is:

1. Given that I see expectation integrals of the form [itex] \int f(x) dP(x)[/itex] in the probability literature, explicitly described as Lebesgue integrals
2. And that [itex] P(x)[/itex] is simultaneously explicitly described as a probability measure: the probability of the set of events [itex] x[/itex]
3. How does this fit into the definition of Lebesgue integration?

Explicitly: for it to fit the exact description of Lebesgue integration (A-C, particularly B), surely one would require [itex] P(x)[/itex] to be the probability of [itex] f(x)[/itex], not of [itex] x[/itex]. Again, see my dice example, where simple functions could be used directly.

Sorry again if I've inadvertently sidestepped the point of your post/ totally have no idea what I'm doing.
 
  • #4
To be more clear about why I can't make sense of your example, surely your measure function

`[itex]\mu(x)[/itex] is the largest integer less than or equal to [itex]x[/itex]'

doesn't obey the countable additivity requirement [itex]\mu\left(\bigcup_{i}A_i\right)=\sum_{i}\mu(A_i)[/itex] for pairwise disjoint sets [itex]A_i[/itex] (i.e. [itex]A_i\cap A_j=\emptyset[/itex] for [itex]i\neq j[/itex])

Since, for example, [itex]\mu((2,3))=2\neq\mu((2,2.5))+\mu((2.5,3))[/itex], because [itex]\mu((2,2.5))=2[/itex] and [itex]\mu((2.5,3))=2[/itex].

As such, if I approximate [itex]f(x)[/itex] on the interval [itex](2,3)[/itex] using a simple function with one value I get [itex]I=y_1\cdot 2[/itex], but if I approximate [itex]f(x)[/itex] with a simple function which takes two values I get [itex]I=y_1\cdot 2+y_2\cdot 2[/itex]. Taking the limit of a simple function with an increasing number of values on the interval [itex](2,3)[/itex] surely makes the approximation diverge, precisely because of the countable additivity issue?
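To illustrate the divergence numerically, here is a small sketch (my own, illustrative) under this reading of [itex]\mu[/itex], where every level set inside [itex](2,3)[/itex] is assigned `measure' 2, taking [itex]f(x)=x[/itex]:

```python
def simple_function_sum(n_levels):
    # partition the range (2, 3) of f(x) = x into n_levels lower values y_i
    ys = [2 + i / n_levels for i in range(n_levels)]
    mu = 2                                 # every level set gets "measure" 2
    return sum(y * mu for y in ys)

for n in (1, 2, 4, 8):
    print(n, simple_function_sum(n))       # 4.0, 9.0, 19.0, 39.0: no convergence
```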
 

1. What is the concept of interpreting path integral averages as measure integrals?

The concept of interpreting path integral averages as measure integrals is a mathematical approach, used in quantum mechanics and stochastic analysis, for calculating the expectation value of a physical quantity. The average value of a quantity is expressed as an integral over the space of all possible paths of a particle traveling from one point to another, with respect to a measure on that space.

2. How is this concept useful in quantum mechanics?

This concept is useful in quantum mechanics because it allows us to assign probabilities, or weights, to sets of paths between two points (an individual path typically carries zero probability). This is important in understanding the behavior of quantum particles, which can take many possible paths between two points.

3. Can you explain the mathematical formula for interpreting path integral averages as measure integrals?

The mathematical formula for interpreting path integral averages as measure integrals is given by the Feynman path integral, which is represented as an integral over all possible paths of a particle, with a weight factor that takes into account the probability of each path.

4. How does this concept relate to the uncertainty principle?

This concept relates to the uncertainty principle because the path a particle takes is inherently uncertain in quantum mechanics. By integrating over all possible paths, weighted appropriately, we can compute average behavior and identify the most probable paths of the particle.

5. Are there any limitations to interpreting path integral averages as measure integrals?

There are some limitations to this concept, as it relies on the assumption that the particle takes all possible paths simultaneously. This may not always be accurate, especially in cases where the particle's behavior is highly influenced by external factors. Additionally, the calculations involved can be complex and may not always have a closed-form solution.
