## General Mathematical Theorem?

Hello,

In Schrodinger's Statistical Thermodynamics on pg.12 he states->

To give a more direct proof:

$$F+U*\mu=G$$

Then, from a general mathematical theorem, the ratio of the two integrating factors $1/T$ and $\mu$ is a function of $G$.

What general mathematical theorem is he talking about?

If anyone is familiar with this work could you let me know?

Cheers,

Bert



On Mon, 30 Aug 2004 04:57:37 -0400, Bert wrote:

> Hello,
>
> In Schrodinger's Statistical Thermodynamics on pg.12 he states->
>
> To give a more direct proof:
>
> $F+U*\mu=G$

[...]

Not everyone has a copy of this book. Some context and an explanation of the notation you are using would be helpful in answering your question.

Igor



As I understand it, having the book, the problem referred to is (for the 2-dimensional case, but really asked for any dimension):

Given an integrating factor $\mu(x,y)$ for the expression $P dx + Q dy = 0$, where $P=P(x,y)$ and $Q=Q(x,y)$ are functions of $(x,y)$, so that $\mu (P dx + Q dy) = dz$ for some function $z(x,y)$ (i.e. $z(x,y)=\text{const}$ solves the differential equation $P dx + Q dy = 0$), then for any other integrating factor $\mu_1(x,y)$ we have $\mu_1/\mu = F(z)$; that is, $\mu_1/\mu$ must be a function of $z(x,y)$ only.
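This claim is easy to check symbolically for a concrete form. The example below is my own (not from the thread) and assumes SymPy is installed: it takes $w = -y dx + x dy$ with the two integrating factors $1/x^2$ and $1/(xy)$, confirms that each makes the form exact, and shows that their ratio depends only on $z = y/x$.

```python
import sympy as sp

x, y = sp.symbols('x y')

# Hypothetical example form w = P dx + Q dy = -y dx + x dy
P, Q = -y, x

mu  = 1/x**2    # mu*w  = d(y/x),       so z = y/x
mu1 = 1/(x*y)   # mu1*w = d(log(y/x)),  another integrating factor

def is_exact(m):
    """m*(P dx + Q dy) is exact iff d(m*P)/dy == d(m*Q)/dx."""
    return sp.simplify(sp.diff(m*P, y) - sp.diff(m*Q, x)) == 0

assert is_exact(mu) and is_exact(mu1)

# Their ratio depends only on z = y/x, as the theorem asserts:
ratio = sp.simplify(mu1/mu)
print(ratio)   # x/y, i.e. 1/z -- a function of z alone
```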


> As I understand it, having the book, the problem referred to is
> (for the 2-dimensional case, but really asked for any dimension):
>
> Given an integrating factor $\mu(x,y)$ for the expression $P dx + Q dy = 0$,
> where $P=P(x,y)$ and $Q=Q(x,y)$ are functions of $(x,y)$, so that
> $\mu (P dx + Q dy) = dz$ for some function $z(x,y)$ (i.e. $z(x,y)=\text{const}$
> solves the differential equation $P dx + Q dy = 0$), then for any other
> integrating factor $\mu_1(x,y)$ we have $\mu_1/\mu = F(z)$; that is,
> $\mu_1/\mu$ must be a function of $z(x,y)$ only.

Your conclusion is correct (modulo technical hypotheses),
but I don't know of any elementary text that explains clearly the reason.
I also don't know of any advanced text that treats the issue,
though I assume there must be some, since physics texts often
seem to regard it as obvious.

The problem lies a bit outside the mainstream of the kind of
mathematics relevant to its proof.
My guess is that the most likely place to find a proof
would be some old-fashioned text on mathematical physics.

I think of your conclusion as
an application of standard facts of advanced calculus,
in particular that a mapping on Euclidean space (the plane in this case)
is locally invertible at a point if its Jacobian is nonzero at that
point. I don't know if it is a theorem with a generally recognized name.

The basic idea is that given a function $v = v(x,y)$ on the plane,
we can usually introduce new local coordinates (u,v) so that when
expressed in
new coordinates, v is the second coordinate function.
Here "usually" means "generically",
in the absence of certain degeneracies
which I'll initially ignore for expositional simplicity.

For example, if $v(x,y) := \sqrt{x^2 + y^2}$,
the curves of constant v are circles centered at the origin,
and polar coordinates give one way to do this.

Indeed, we can usually do this in an even simpler way
in which the new first coordinate u coincides with the old first
coordinate: $u = x$.
In the previous example $v(x,y) := \sqrt{x^2 + y^2}$, this corresponds to
using x and the usual radial polar coordinate as new coordinates---this
can be done locally at all points off the x-axis.
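As a concrete check of this change of coordinates (my own sketch, assuming SymPy is available), one can verify that the map $(x,y) \mapsto (x, v(x,y))$ with $v = \sqrt{x^2 + y^2}$ has nonvanishing Jacobian off the x-axis, and that in the new coordinates $v$ is literally the second coordinate function:

```python
import sympy as sp

# positive=True restricts attention to the open first quadrant,
# one neighborhood where this coordinate change is valid
x, y, u, v = sp.symbols('x y u v', positive=True)

vxy = sp.sqrt(x**2 + y**2)

# The Jacobian determinant of (x, y) -> (x, v(x,y)) reduces to dv/dy,
# which is nonzero wherever y != 0 (i.e. off the x-axis)
jac = sp.diff(vxy, y)
assert sp.simplify(jac - y/vxy) == 0

# Local inverse on this branch: y = sqrt(v^2 - u^2)
y_of_uv = sp.sqrt(v**2 - u**2)

# In the new coordinates (u, v), the old function v(x,y) is
# exactly the second coordinate function
assert sp.simplify(vxy.subs({x: u, y: y_of_uv}) - v) == 0
```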

The moral is that just about any function can be viewed locally as
a coordinate function with respect to appropriately chosen coordinates,
so anything that is true about a coordinate function is quite likely
true locally
for general functions (perhaps under additional hypotheses to rule out
degenerate cases).

It's much easier to think about coordinate functions than arbitrary
functions,
so this insight is quite a powerful heuristic.
Here is how it can be used to solve the problem at hand.

Consider a differential form

(1) $P(x,y) dx + Q(x,y) dy$ ,

and an integrating factor $m = m(x,y)$. That is, m is a nonzero function such
that

(2) $m(x,y) P(x,y) dx + m(x,y) Q(x,y) dy = dz$

for some function z. Suppose, in the spirit of the above observation,
that z happens to be a coordinate function, say z(x,y) = y for all x,y.
Then $dz =$ dy, and (2) becomes

(3) $m(x,y) P(x,y) dx + m(x,y) Q(x,y) dy = dy$ .

Equating coefficients of dx and dy in (3) shows that this can happen
only if P(x,y)
vanishes identically, and also

(4) $m(x,y) = 1/Q(x,y)$ , equivalently $Q(x,y) = 1/m(x,y)$ .

NOTATION: In the following, if $f = f(x,y)$ is a function of two variables,
$df/dx$ denotes the partial derivative of f with respect to x,
and similarly for $df/dy$.

Now consider a second integrating factor $m1 = m1(x,y)$ for (1).
Since we now know that $P=0,$ this means that

(5) $m1(x,y) Q(x,y) dy = df(x,y) := df/dx dx + df/dy dy$

for some function $f = f(x,y)$.
Again equating coefficients of dx on both sides,
we see that $df/dx = 0$. This implies that
f(x,y) is actually independent of x,
so we can write $f(x,y) = f1(y)$ for some function f1 of just one variable.

Using (4), we rewrite (5) as:

$$m1(x,y)/m(x,y) dy = df1/dy dy ,$$

so $m1(x,y)/m(x,y) = df1/dy$ .

The right side is a function of y alone: call it $F(y) := df1/dy$.
So, we've shown that

$$m1(x,y)/m(x,y) = F(y) = F(z(x,y)) ,$$

which is the desired conclusion for our special case $z(x,y) := y$.
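The special case can also be checked with a concrete pair of integrating factors (my own example, assuming SymPy is available): take $P = 0$ and $Q = x$, so that $m = 1/x$ gives $dz = dy$, i.e. $z = y$.

```python
import sympy as sp

x, y = sp.symbols('x y')

# Special case: P = 0, so the form is Q dy with Q = x
Q = x
m  = 1/x    # m*Q dy  = dy,               i.e. z = y
m1 = y/x    # m1*Q dy = y dy = d(y**2/2), another integrating factor

# Exactness of m*Q dy and m1*Q dy: the dy-coefficient
# must not depend on x (this is df/dx = 0 above)
assert sp.diff(m*Q, x) == 0
assert sp.diff(m1*Q, x) == 0

# The ratio m1/m is a function of z = y alone,
# matching F(y) = df1/dy with f1(y) = y**2/2
ratio = sp.simplify(m1/m)
print(ratio)   # y
```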

Now let's examine the general case in which the function $z = z(x,y)$
in (2) is not assumed to be of this special form.
The plan is to change coordinates so that it *is* of the special form,
and then check
that the coordinate change doesn't affect the conclusion.

Consider the mapping on the plane defined by:

(6) (x,y) ---> (x, z(x,y)) .

Suppose that this map is invertible in a neighborhood of a given point
(x0,y0)
(a hypothesis which we'll examine later).
Use this map to introduce new coordinates (x,z) as follows:

(7) A point with old coordinates (x,y) is assigned new coordinates (x,
z(x,y)).

A differential form whose expression in old coordinates was
P(x,y) $dx + Q(x,y) dy$ will have an expression
P'(x,z) $dx + Q'(x,z) dz$ in new coordinates,
where P' and Q' are some functions (of x and z):

(8) $P dx + Q dy = P' dx + Q' dz$ .

[From the laws of transformations of differential forms,
one easily obtains expressions for P' and Q',
but we do not need these.
For example, $P' = P + Q dy/dx,$
where y is considered as a function of new coordinates x,z.]

This is somewhat abbreviated notation; to forestall any ambiguity or
confusion,
let's also present (8) more explicitly as:

(8)' $P(x,y) dx + Q(x,y) dy = P'(x, z(x,y)) dx + Q'(x, z(x,y)) dz$
for all x,y.

Let $m' = m'(x,z)$ denote the previous integrating factor expressed as
a function of x and z. That is,

$m'(x,z(x,y)) = m(x,y)$ for all x,y .

Recall from (2) that $z = z(x,y)$ was originally introduced as a function
satisfying

(9) $dz = m(P dx + Q dy) .$

In terms of the new coordinates x,z, this reads:

(10) $dz = m'(x,z) P'(x,z) dx + m'(x,z) Q'(x,z) dz$ .

Incidentally, this implies that P' vanishes identically and
$m' = 1/Q',$ but we don't need this.
If we were giving the proof from scratch,
without having first worked out
the special case in which $z = z(x,y) = y$ is a coordinate function,
we would follow that proof from here.

But since we have already proved the special case,
we can simply invoke that result, and we're done.

There is just one subtle point which might bother a careful reader:
Are the "dz" in (9) and the "dz" in (10) really the same?
That is, the definition of the "dz" in (9) is $dz := dz/dx dx + dz/dy dy$,
while the "dz" in (10) is just "dz" in new coordinates, i.e., "d" of the
new z coordinate.
Is it possible that these two "dz"s are somehow different?

To settle this, we need a precise definition of "differential form".
I'll leave as an exercise the application of the reader's favorite
definition
(there are several, all essentially equivalent) to check that
the two "dz"s really are the same.

Alternatively, just apply the proof of the special case
starting with (10) instead of with (3).
This alternative completion is less abstract and perhaps more reassuring.
We could have given the proof in this way
without doing the special case first,
but that would have obscured
the generally useful insight which led to the proof,
namely that nearly any function
can serve locally as a coordinate function.

LOCAL INVERTIBILITY OF THE MAP (6): (x,y) --> (x, z(x,y))

The above proof assumed the local invertibility of the map (6).
Now let's discuss when that map will be invertible.

A standard result in advanced calculus implies that it will be invertible
in a neighborhood of a point (x0,y0) provided that its Jacobian does not
vanish
at that point.
Calculation of the Jacobian reveals that this is equivalent
to the nonvanishing of $dz/dy (x0,y0)$.
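That calculation can be reproduced symbolically (a sketch, assuming SymPy is available, with z left as an undetermined function):

```python
import sympy as sp

x, y = sp.symbols('x y')
z = sp.Function('z')(x, y)

# Jacobian matrix of the map (x, y) -> (x, z(x, y))
J = sp.Matrix([[1,             0            ],
               [sp.diff(z, x), sp.diff(z, y)]])

# Its determinant is exactly dz/dy, so the map is locally
# invertible wherever dz/dy is nonzero
det = J.det()
print(det)   # Derivative(z(x, y), y)
```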

So, the above proof is valid in a neighborhood of (x0,y0)
so long as $dz/dy(x0,y0)$ is nonzero.
If it happens that $dz/dy(x0,y0) = 0$, we can carry out the same argument
with the roles of x and y interchanged, obtaining the desired conclusion
except possibly in "degenerate" cases in which

$dz/dx (x0,y0) = 0 = dz/dy (x0,y0)$ .

Since

$dz = dz/dx dx + dz/dy dy = m(x,y)P(x,y) dx + m(x,y)Q(x,y) dy$ ,

and since "integrating factors" m are (I assume) nonzero by definition,
we are done if we can assume that P and Q cannot vanish simultaneously.
This seems a reasonable hypothesis for the desired conclusion that
$m1/m = F(z)$ for some function F.

That some such hypothesis on P and Q is necessary can be seen from
the observation that the assertion is false when P and Q vanish identically.
In that case, ANY nonvanishing function m is an integrating factor for
$P dx + Q dy$,
so ANY nonvanishing function g(x,y) can be
the quotient of two integrating factors.

Similarly, the assertion can fail
if P and Q vanish identically on an open set.
I think the above argument might extend to prove the assertion
under the hypothesis that
P and Q do not vanish identically on any open set,
but the simpler hypothesis that
P and Q do not vanish simultaneously at any point
is probably sufficient for most physics applications.



diamonis@hotmail.com (diamonis) wrote in message news:<608c8f11.0409132357.1ea176e3@p...google.com>...

> As I understand it, having the book, the problem referred to is
> (for the 2-dimensional case, but really asked for any dimension):
>
> Given an integrating factor $\mu(x,y)$ for the expression $P dx + Q dy = 0$,
> where $P=P(x,y)$ and $Q=Q(x,y)$ are functions of $(x,y)$, so that
> $\mu (P dx + Q dy) = dz$ for some function $z(x,y)$ (i.e. $z(x,y)=\text{const}$
> solves the differential equation $P dx + Q dy = 0$), then for any other
> integrating factor $\mu_1(x,y)$ we have $\mu_1/\mu = F(z)$; that is,
> $\mu_1/\mu$ must be a function of $z(x,y)$ only.

In addition to Stephen Parrott's detailed explanation, I can offer a more pedestrian and rather more direct approach. What I will prove is that $\mu_1(x,y)/\mu(x,y)$ is constant on the integral curves of $P dx + Q dy = 0$, which are also the constant contours of $z(x,y)$. The last statement is, for most intents and purposes, equivalent to $\mu_1/\mu = F(z)$, but it is well posed even when the would-be function $F(z)$ is not single valued.

The following requires some familiarity with differential forms, but the same idea can be used if you're just working with partial derivatives of $\mu$, $\mu_1$, $P$, and $Q$, while keeping in mind the definition of an integrating factor.

Let me introduce the two differential forms $w = P dx + Q dy$ and $v = \mu d\mu_1 - \mu_1 d\mu$, and a vector field $t$ defined so that $t(x,y)$ is a tangent vector to the integral curve of $w$ passing through $(x,y)$. In other words, the integral curves of $w$ and $t$ are identical, and we have the identity $w(t) = 0$.

Consider the differential $d(\mu_1/\mu) = (\mu d\mu_1 - \mu_1 d\mu)/\mu^2 = v/\mu^2$. For $\mu_1/\mu$ to be constant on integral curves of $w$, the quantity $d(\mu_1/\mu)(t) = v(t)/\mu^2$ must be identically zero, which is equivalent to the identity $v(t) = 0$.

Since both $\mu$ and $\mu_1$ are integrating factors for the equation $w = 0$, each of $\mu w$ and $\mu_1 w$ is exact, so by $d^2 = 0$ we have $d(\mu w) = d\mu \wedge w + \mu dw = 0$ and $d(\mu_1 w) = d\mu_1 \wedge w + \mu_1 dw = 0$.

Multiplying these two expressions by $\mu_1$ and $\mu$ respectively and then subtracting gives the identity $(\mu d\mu_1 - \mu_1 d\mu) \wedge w = v \wedge w = 0$. Plugging $t$ and any other vector field $s$ into the differential form $v \wedge w$ gives $(v \wedge w)(t,s) = v(t) w(s) - v(s) w(t) = 0$. The second term vanishes since $w(t) = 0$, leaving us with $v(t) w(s) = 0$ for all vector fields $s$. But since $w$ is not identically zero, this last equation implies $v(t) = 0$, which proves that $\mu_1/\mu$ is constant on the integral curves of the original equation $w = P dx + Q dy = 0$, and hence also on the constant contours of $z(x,y)$ (or of the corresponding potential for $\mu_1$).

Hope this helps.

Igor
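The wedge-product identity at the heart of this argument can be checked symbolically for a concrete pair of integrating factors (my own example, assuming SymPy is available), using $w = -y dx + x dy$ with factors $1/x^2$ and $1/(xy)$:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Concrete example (not from the thread): w = -y dx + x dy
P, Q = -y, x
mu, mu1 = 1/x**2, 1/(x*y)   # two integrating factors for w = 0

# v = mu*d(mu1) - mu1*d(mu), written out componentwise
v_x = mu*sp.diff(mu1, x) - mu1*sp.diff(mu, x)
v_y = mu*sp.diff(mu1, y) - mu1*sp.diff(mu, y)

# The 2-form v ^ w has the single coefficient v_x*Q - v_y*P,
# which must vanish identically by the argument above
wedge = sp.simplify(v_x*Q - v_y*P)
print(wedge)   # 0
```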