## interpreting path integral averages as measure integrals

Hi all,

Sorry if this is in the wrong place. I'm trying to understand probability theory a bit more rigorously, and so am coming up against things like Lebesgue integration and measure theory, and I have a couple of points I haven't quite got my head around.

So, starting from the basics (someone please correct me if I'm wrong on any of this): in contrast to the Riemann integral (I'm very aware I'm not being rigorous here)

$$\int_a^b f(x)\,dx =\lim_{n\to\infty}\sum_{i=1}^{n} f(x_i)(x_{i+1}-x_i)$$

the Lebesgue integral, which exists whenever the (proper) Riemann integral exists, is, for $y=f(x)$,

$$\int_{[a,b]} f\,d\mu =\lim_{n\to\infty}\sum_{i=1}^{n} y_i\, \mu(\{x\in [a,b] : y_i\leq f(x)\leq y_{i+1}\})$$

where we consider the limit (again not rigorously) of a sum over intervals in the range of the function, assigning each value a weight or 'measure' which generalises length and so on, as opposed to the Riemann case where we form intervals on the domain.
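To make this concrete for myself, here is a minimal numerical sketch of the two summation procedures (the function, interval, and grid sizes are my own choices, and the level-set measures are only estimated by counting grid points):

```python
import numpy as np

# Approximate the integral of f(x) = x^2 over [0, 1] (exact value 1/3) two ways.
f = lambda x: x**2
a, b = 0.0, 1.0

# Riemann: partition the DOMAIN [a, b] and sum f(x_i) * (x_{i+1} - x_i).
xs = np.linspace(a, b, 501)
riemann = np.sum(f(xs[:-1]) * np.diff(xs))

# Lebesgue-style: partition the RANGE into levels [y_i, y_{i+1}) and weight
# each level y_i by the (approximate) Lebesgue measure of the level set
# {x in [a, b] : y_i <= f(x) < y_{i+1}}, estimated by counting sample points.
grid = np.linspace(a, b, 50_000, endpoint=False)  # fine sample of the domain
fx = f(grid)
dx = (b - a) / grid.size                          # measure per sample point
ys = np.linspace(0.0, 1.0, 501)
lebesgue = sum(
    y_lo * dx * np.count_nonzero((y_lo <= fx) & (fx < y_hi))
    for y_lo, y_hi in zip(ys[:-1], ys[1:])
)

print(riemann, lebesgue)   # both close to 1/3
```

Both sums converge to the same number as the partitions are refined; the only difference is whether the partition lives in the domain or in the range.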

So far so good (unless I've made a howler). Now, with this second approach, I see the point of discussing probability measures (measures which map to $[0,1]$): heuristically we consider expected values, for example of the function $y=f(x)$, as a sum over the range of the function, $\bar{y}=\sum_i y_i P(y_i)$, where $P(y_i)$ is the probability measure of $y_i$, or rather the probability that $f(x)=y_i$.

Now, taking as an example the Wiener measure $P_W(\omega)$, where $\omega$ now denotes a set of paths, I keep seeing expectation integrals of the form

$$\int_{\Omega}f(\omega)dP_W(\omega)=\bar{f(\omega)}$$

referred to as Lebesgue-type integrals, and this I don't understand. This is because $P_W(\omega)$ is the measure of a (set of) path(s); that is, $P_W(\omega)$ returns the probability of a set of paths (unless I'm drastically misunderstanding its definition), not the probability of observing $f(\omega)=y$, and those are not the same thing. As such, the integral only makes sense to me if it is summing over the domain, not the range, and so is surely not really a Lebesgue integral? As a simpler equivalent, consider rolling a number of dice with outcomes $x\in \Omega$, where we want to calculate the expected value of the sum of all of the shown faces, $f(x)=y$. We can either write this as

$$\sum_i y_i P(f(x)=y_i)$$

such that we sum over all possible sums of the shown faces on the dice (e.g. for two dice $y_i\in \{2,3,4,\ldots,12\}$), akin to a Lebesgue integral, or we can write it as

$$\sum_i f(x_i) P(x_i)$$

such that we consider a sum over all events $x_i\in\{\{1,1\},\{1,2\},\ldots,\{6,6\}\}$, which is not akin to a Lebesgue integral; yet this is how all expectation values using, for example, the Wiener measure are written.
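To spell out the dice example numerically (a sketch with my own variable names), both sums give the same expectation, with the range-sum obtained by first pushing the uniform measure on outcomes forward onto the possible totals:

```python
from itertools import product
from collections import Counter

# Two fair dice: the domain Omega is all 36 ordered outcomes.
outcomes = list(product(range(1, 7), repeat=2))
p = 1 / len(outcomes)                       # uniform P on each outcome

# Sum over the DOMAIN: sum_i f(x_i) P(x_i), with f = sum of shown faces.
e_domain = sum((a + b) * p for a, b in outcomes)

# Sum over the RANGE: sum_i y_i P(f(x) = y_i), i.e. first collect the
# probability of each possible total y in {2, ..., 12}.
dist = Counter(a + b for a, b in outcomes)  # number of outcomes per total
e_range = sum(y * count * p for y, count in dist.items())

print(e_domain, e_range)   # both equal 7.0
```

The two bookkeepings agree because the range-sum is just the domain-sum with terms sharing the same value of $f$ grouped together.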

Further I cannot seem to rationalise this when writing out, in such an example, the explicit path integral as

$$\bar{f(\omega)}=\int_\Omega f(\omega)dP_W(\omega)=\int_{\Omega}[\mathcal{D}\omega] f(\omega)e^{-S(\omega)}$$

again, $[\mathcal{D}\omega]$ is constantly referred to as a path-integral measure, but as before the integral only makes any sense if you are summing over the domain, i.e. taking a probability, or rather a weight $e^{-S(\omega)}$, for every path; it doesn't have anything to do with the range, which depends on $f(\omega)$.

I guess this could come down to a misunderstanding of integration with respect to a measure. Am I wrong to think that the summation always has to occur over the range? To clarify, I understand that one can always write the integral

$$\int_a^b f(x)\,dx =\int_{[a,b]} f\,d\mu,$$

but this does depend on appropriately defining the measure $\mu$. What I don't understand is when the measure featuring in the above integral is explicitly something that doesn't depend on $f$: for example the Wiener measure, which is defined as the probability of observing a set of paths.
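For what it's worth, here is how I picture the domain-summed expectation concretely: a Monte Carlo sketch under a discretised Wiener measure, with the particular functional $f(\omega)=\omega(1)^2$ and all sample sizes my own choices. The domain-sum averages $f$ over sampled paths, while the known distribution $\omega(1)\sim N(0,1)$ gives the range-side answer $E[f]=1$ exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample whole Brownian paths on [0, 1] (the DOMAIN of the functional f):
# each path is a cumulative sum of independent N(0, dt) increments.
n_paths, n_steps = 50_000, 100
dt = 1.0 / n_steps
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
endpoints = increments.sum(axis=1)    # omega(1) for each sampled path

# Domain-summed expectation: the Monte Carlo average of f(omega) = omega(1)^2,
# i.e. a direct discretisation of the integral of f(omega) dP_W(omega).
mc_estimate = np.mean(endpoints**2)

# Under the Wiener measure omega(1) ~ N(0, 1), so pushing P_W forward
# through f gives E[f] = E[chi-squared with 1 dof] = 1 exactly.
print(mc_estimate)   # close to 1
```

The point the sketch tries to make is that "sum over paths, weighted by the path measure" and "sum over values of $f$, weighted by the pushed-forward measure" are two descriptions of the same number.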

Sorry for the rambling! Would be most appreciative if someone could point me in the right direction.

Many thanks,

R
Strictly speaking, what you are calling the "Lebesgue integral", allowing some measure function defined on intervals, is the "Riemann-Stieltjes" integral, where the measure function does not even have to be differentiable or continuous, just increasing. For example, if we define $\mu(x)$ to be the largest integer less than or equal to $x$, $$\int_0^n f(x)\, d\mu= \sum_{i= 0}^n f(i).$$ But for the general Lebesgue integral, the measure only has to be defined on "measurable" sets, not necessarily intervals. For example, the Lebesgue measure of any countable set is 0, so we can integrate the function "f(x) = 2 if x is irrational, 1 if x is rational" over any interval and get 2 times the length of the interval.
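In case a numerical check helps, here is a sketch of that first example (helper name, grid size, and endpoint convention are my own; here the sum picks up the unit jumps of $\mu$ at the integers in $(0,n]$):

```python
import math

# Riemann-Stieltjes sum against the integrator g(x) = floor(x):
# sum over panels of f(t_k) * (g(x_{k+1}) - g(x_k)). Since g only changes
# by unit jumps at the integers, the whole integral collapses to a plain
# sum of f at those integers.
def stieltjes_floor(f, n, panels_per_unit=1000):
    k = n * panels_per_unit
    xs = [i * n / k for i in range(k + 1)]
    total = 0.0
    for x0, x1 in zip(xs[:-1], xs[1:]):
        total += f(x1) * (math.floor(x1) - math.floor(x0))
    return total

f = lambda x: x**2
approx = stieltjes_floor(f, 5)
exact = sum(f(i) for i in range(1, 6))   # jumps at 1, 2, 3, 4, 5 on (0, 5]
print(approx, exact)   # both 55
```

All panels where $\mu$ does not jump contribute zero, which is the sense in which this integral "does not see" the continuum at all.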
Let's forget intervals for now; that was just to make the two definitions notationally distinct. I understand that the Lebesgue integral is defined for measurable sets, allowing the discussion of your example or the Dirichlet function or what have you: but surely that is a result of being able to consider it as a sum over intervals in the range as opposed to the domain?

I'm very aware I'm pretty shaky on this, and I apologise if I'm really missing the point, but I'm struggling with your first example and why it helps me. I see that one uses the Riemann-Stieltjes integral to define $\int f(x)\,dg(x)$ in the usual Riemann sense, again partitioning the domain, but what I call the Lebesgue integral (which all sources I see call the Lebesgue integral, granted with intervals changed to sets) is fundamentally different in the summation procedure. Furthermore, whilst I can see in a hand-wavy way that a Riemann-Stieltjes approach to your choice of $\mu(x)$ might give that result (though I don't think it is integrable in that sense), trying to formulate it according to the Lebesgue integral, as the limit of a sum of measure multiplied by the values of simple functions partitioned in the range, doesn't seem to converge. So is my definition (the second equation in my first post) at all correct?

All of this doesn't help me with the partition-the-range versus partition-the-domain issue in probability, for which I can conceptually 'get' either form of the integral, but I don't understand when people refer to integrals that look as though they need to explicitly sum over the domain as Lebesgue integrals. (Is the problem in my definition of Lebesgue integrals, for example?)

To clarify, here is what everywhere I have read says about Lebesgue integrals of the form $\int f(x)\,d\mu$ or $\int f(x)\,dP$, where $y=f(x)$ and the latter would be an expectation value:

A. Partition the range into values $y_i$ (using simple functions).

B. Assign a number $m_i$ to each $y_i$ through the measure. (This could, for example, be the Lebesgue measure, such that for simple 1D integration on the real line $m_i$ corresponds to the 'size' of the part of the domain that gives $y_i$. Or, when using a probability measure, it would be the probability, but surely necessarily the probability of observing $f(x)\in [y_i,y_{i+1}]$.)

C. Take the limit of the sum of $y_i m_i$.

My question is:

1. Given that I see expectation integrals of the form $\int f(x)\, dP(x)$ in the probability literature, explicitly described as Lebesgue integrals,

2. and that $P(x)$ is simultaneously explicitly described as a probability measure, namely the probability of the set of events $x$,

3. how does this fit into the definition of Lebesgue integration? Explicitly: for it to fit the exact description of Lebesgue integration (A-C, particularly B), surely one would require $P$ to be the probability of $f(x)$, not of $x$? Again, see my dice example, where simple functions could be used directly.

Sorry again if I've inadvertently sidestepped the point of your post or totally have no idea what I'm doing.


To be clearer about why I can't make sense of your example: surely your measure function

'$\mu(x)$ is the largest integer less than or equal to $x$'

doesn't obey the countable-additivity requirement $\mu\left(\bigcup_{i}A_i\right)=\sum_{i}\mu(A_i)$ for pairwise disjoint sets $A_i$.

For example, $\mu((2,3))=2\neq\mu((2,2.5))+\mu((2.5,3))$, because $\mu((2,2.5))=2$ and $\mu((2.5,3))=2$.

As such, if I approximate $f(x)$ on the interval $(2,3)$ using a simple function with one value I get $I=2y_1$, but if I approximate $f(x)$ with a simple function which takes two values I get $I=2y_1+2y_2$. Taking the limit of a simple function with an increasing number of values on the interval $(2,3)$ surely makes the approximation diverge, precisely because of the countable-additivity issue?