Math Experiment: Let's Prove Something

In summary, the conversation is about an experiment involving a multiplicative group of square-integrable smooth functions on the unit interval, with a Lie algebra defined as the left-invariant smooth vector fields on the group. The main objective is to create a proof or theorem within the given framework. Possible moves include drawing conclusions from previously established facts and adding conditions or objects; the only rule is not to post twice in a row. The conversation also touches on the smooth structure of the group and the requirement that elements have smooth inverses.
  • #36
(11, 12, 16, 17, 24, 25)
Uniqueness of 'square roots'
fresh_42 said:
The functions ##1## and ##x\longmapsto 1-x## are clearly involutions and since ##\left(f\iota f^{-1}\right)^2=f \iota^2 f^{-1}=1##, the involutions form a normal, infinite subgroup of ##G##. Of course ##f\iota f^{-1} \stackrel{i.g.}{\neq}f^{-1}\iota f\,.##

The functional ##\mathcal{F}[g](x)=g(1-x)## exchanges the spaces of increasing and decreasing bijections and is its own inverse, and so ##G## consists of two topologically equivalent connected components, say ##G=G_+\cup G_-## (a path from ##\mathbb I## to ##1-x## would violate continuity at the end points, while monotonic functions ##g(x)## of a given type are connected to either ##\mathbb I(x)=x## or ##1-x## through one-parameter flows ##g_\lambda(x)##, ##\lambda\in\mathbb R##, with ##g_0(x)=x## or ##g_0(x)=1-x## and ##g_1(x)=g(x)##.)

Focusing on ##G_+##, the 'flows' can be defined in several ways. One approach is to first examine the square root operator alluded to in 17, and then define ##g_\lambda(x)## in terms of the binary expansion of ##\lambda##. Another (and perhaps more elegant) approach might involve concepts from density equalizing maps. First, it is necessary to prove that the 'square root' operator is uniquely determined on functions with positive derivative. For increasing ##f(x)##, with ##f(0)=0## and ##f(1)=1##, one conceptually straightforward (though somewhat messy) approach is to consider the equation
$$
f_{\frac{1}{2}}'(f_{\frac{1}{2}}(x))f_{\frac{1}{2}}'(x)=f'(x),\quad f_{\frac{1}{2}}(0)=0
$$
obtained from differentiating ##f_\frac{1}{2}(f_\frac{1}{2}(x))=f(x)##.
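As a quick sanity check of this equation: ##f(x)=x^4## has the compositional square root ##f_{\frac{1}{2}}(x)=x^2##, and indeed ##f_{\frac{1}{2}}'(f_{\frac{1}{2}}(x))\,f_{\frac{1}{2}}'(x)=2x^2\cdot 2x=4x^3=f'(x)##.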
Suppose ##f_{\frac{1}{2}}(x)## has been defined on the interval ##[0,\epsilon]##. If ##f_{\frac{1}{2}}(\epsilon)>\epsilon##, let ##s## be the solution to ##f_{\frac{1}{2}}(s)=\epsilon##. Then ##f_{\frac{1}{2}}'(\epsilon)## can be determined from
$$
f_{\frac{1}{2}}'(\epsilon)=\frac{f'(s(\epsilon))}{f_\frac{1}{2}'(s(\epsilon))}.
$$
If instead ##f_\frac{1}{2}(\epsilon)<\epsilon##, then
$$
f_{\frac{1}{2}}'(\epsilon)=\frac{f'(\epsilon)}{f_\frac{1}{2}'(f_\frac{1}{2}(\epsilon))}
$$

Hence, we may write
$$
f_\frac{1}{2}(x)=\int_0^xdt\bigg[\theta(t-f_\frac{1}{2}(t))\frac{f'(t)}{f_\frac{1}{2}'(f_\frac{1}{2}(t))}+\theta(f_\frac{1}{2}(t)-t)\frac{f'(f_{-\frac{1}{2}}(t))}{f_\frac{1}{2}'(f_{-\frac{1}{2}}(t))}\bigg]
$$
where ##f_{-\frac{1}{2}}(x)## is the inverse of ##f_{\frac{1}{2}}##, or
$$
f_\frac{1}{2}(x)=\int_0^xdt\bigg[\frac{f'(\min(t,f_{-\frac{1}{2}}(t)))}{f_\frac{1}{2}'(\min(f_\frac{1}{2}(t),f_{-\frac{1}{2}}(t)))}\bigg]
$$
and so ##f_\frac{1}{2}## is uniquely determined for each ##f\in G_+##.
 
  • #37
@Couchyam Sorry if this is too stupid a question, but could you explain what you did for us rusty readers?
I follow you until the epsilontics start with "Suppose". My questions are:

What is the idea behind the construction, especially why distinguish the two cases, and where does the asymmetry in ##s## come from?

Where do we get the existence of ##f_{\frac{1}{2}}## from, since you start with the necessary condition?

How does uniqueness follow from a recursive construction? My guess is ##f_{\frac{1}{2}}(x) = \lim_{\varepsilon \to x} \displaystyle{\int_0^\varepsilon} \ldots dt ## but we don't know anything about ##f_{-\frac{1}{2}}## on ##[0,\varepsilon]##.

Edit: I tried to follow your argument on ##f(x)=4x^3-6x^2+3x \in G_+##.
 
  • #38
(36, 37)

fresh_42 said:
I follow you until the epsilontics start with "Suppose". My questions are:

What is the idea behind the construction, especially why distinguish the two cases, and where does the asymmetry in ##s## come from?

Where do we get the existence of ##f_{\frac{1}{2}}## from, since you start with the necessary condition?

How does uniqueness follow from a recursive construction? My guess is ##f_{\frac{1}{2}}(x) = \lim_{\varepsilon \to x} \displaystyle{\int_0^\varepsilon} \ldots dt ## but we don't know anything about ##f_{-\frac{1}{2}}## on ##[0,\varepsilon]##.

Edit: I tried to follow your argument on ##f(x)=4x^3-6x^2+3x \in G_+##.

Apologies for the more-than-somewhat incomplete post. The idea is to construct ##f_{\frac{1}{2}}(x)## from a given ##f(x)## by solving the 'differential equation' ##f_\frac{1}{2}'(f_\frac{1}{2}(x))f_\frac{1}{2}'(x)=f'(x)## on ##[0,1]## (or whatever the existence interval turns out to be), starting at ##x=0##. The interesting part is that the left-hand side depends on derivatives of ##f_\frac{1}{2}## evaluated at two points that are typically different (##f_\frac{1}{2}(x)## and ##x##), and one of these, namely ##f_\frac{1}{2}(x)##, could lie outside the interval ##[0,\epsilon]## where ##f_\frac{1}{2}## has already been defined.

To get around this, one evaluates the derivative of ##f_\frac{1}{2}## at the rightmost edge in two different ways, depending on whether ##f_\frac{1}{2}(\epsilon)## is greater or less than ##\epsilon##. If ##f_\frac{1}{2}(\epsilon)\leq\epsilon##, set ##f_\frac{1}{2}'(\epsilon)=\frac{f'(\epsilon)}{f_\frac{1}{2}'(f_\frac{1}{2}(\epsilon))}##. Otherwise, by continuity (and positivity of ##f_\frac{1}{2}'(x)## on ##[0,\epsilon]##) there must exist a point ##s\in[0,\epsilon]## where ##f_\frac{1}{2}(s)=\epsilon##, and ##f_\frac{1}{2}'(\epsilon)## can then be expressed in terms of predefined quantities by evaluating the differential equation at ##s## instead of ##\epsilon##. Incidentally, ##s(\epsilon)## is just the inverse of ##f_\frac{1}{2}## evaluated at ##\epsilon##: if ##f_\frac{1}{2}(x)## is defined on ##[0,\epsilon]##, then ##f_{-\frac{1}{2}}(x)## is defined on ##[0,f_\frac{1}{2}(\epsilon)]## (which includes ##[0,\epsilon]## in the case where ##f_{-\frac{1}{2}}(x)## is needed).

For your example (the function ##4x^3-6x^2+3x##), one starts with ##f_\frac{1}{2}(0)=0## and ##f_\frac{1}{2}'(0)=\sqrt{f'(0)}=\sqrt{3}>1##, so in a neighborhood of ##0## we expect ##f_\frac{1}{2}(x)>x##, and the relevant equation (at least up to the first point where ##f_\frac{1}{2}(x)=x##) is the one involving ##f_{-\frac{1}{2}}(x)##, which can be used to compute ##f_\frac{1}{2}## using e.g. Euler's method. An especially tricky case is ##f(x)=\frac{1}{2}+\frac{1}{\pi}\arcsin\Big(2\big(x-\frac{1}{2}\big)\Big)## (or ##(f(x))^n##), whose derivative diverges at ##x=0##.
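To make the Euler's-method remark concrete, here is a minimal numerical sketch of the two-case scheme in Python (my own sketch, not part of the argument: the linear interpolation of already-computed values and the small floor on the denominator, which guards against ##f'## vanishing at ##x=\frac{1}{2}## in this example, are ad hoc choices):

```python
import numpy as np

def half_iterate(f, fp, n=20000, floor=1e-9):
    """Euler scheme for f_half'(f_half(x)) * f_half'(x) = f'(x), f_half(0) = 0."""
    xs = np.linspace(0.0, 1.0, n + 1)
    dx = xs[1] - xs[0]
    y = np.zeros(n + 1)        # y[i] ~ f_half(xs[i])
    d = np.zeros(n + 1)        # d[i] ~ f_half'(xs[i])
    d[0] = np.sqrt(fp(0.0))    # the ODE at the fixed point x = 0
    for i in range(n):
        y[i + 1] = min(y[i] + d[i] * dx, 1.0)
        x = xs[i + 1]
        if y[i + 1] <= x:
            # f_half(x) lies in the already-computed region: interpolate f_half'
            denom = np.interp(y[i + 1], xs[:i + 2], d[:i + 2])
        else:
            # need s = f_{-1/2}(x): invert the monotone data computed so far,
            # then evaluate the ODE at s instead of x (the second case above)
            x = np.interp(x, y[:i + 2], xs[:i + 2])
            denom = np.interp(x, xs[:i + 2], d[:i + 2])
        d[i + 1] = fp(x) / max(denom, floor)
    return xs, y

f  = lambda x: 4 * x**3 - 6 * x**2 + 3 * x      # fresh_42's example; f'(1/2) = 0
fp = lambda x: 12 * x**2 - 12 * x + 3
xs, y = half_iterate(f, fp)
# how well does f_half(f_half(x)) reproduce f(x)?
print(np.max(np.abs(np.interp(y, xs, y) - f(xs))))
```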

Here's another approach to establishing conditions for uniqueness of the 'square root', which has the added benefit of elucidating how ambiguities can arise depending on how much regularity is assumed. Suppose ##h## and ##p## are both in ##G_+##, and that ##h(h(x))=p(p(x))## for all ##x\in[0,1]##. Note that if ##h(h(x))=x##, then ##h(x)=x## (and ##p(x)=x##), and if ##\{x_j\}_{j\in\Lambda}## is the (ordered) set of all fixed points of ##h^2=p^2##, then ##h(x)## and ##p(x)## can be decomposed into self-maps of the intervals ##[x_j,x_{j+1}]##, which themselves can be identified with elements of ##G_+##. So without loss of generality, we can assume that ##h^2(x)>x## and ##p^2(x)>x## for all ##x\in(0,1)##.
Prop: Under the conditions mentioned above, ##h(x)>x## and ##p(x)>x## on ##(0,1)##, and there exists at least one point ##x_0\in(0,1)## where ##h(x_0)=p(x_0)##.
Proof:
(By contradiction.) First, suppose that ##h(x)\leq x## for some ##x\in(0,1)##. If ##h(x)=x##, then ##h(h(x))=x##, contradicting ##h^2(x)>x##. If ##h(x)<x##, then ##h(h(x))>x>h(x)##, and so ##h## is decreasing somewhere on ##[h(x),x]##. We know, however, that ##h\in G_+## (##\Rightarrow \Leftarrow##.)
Now, suppose that no solution to ##h(x)=p(x)## exists apart from ##x=0## and ##x=1##. Then without loss of generality, assume ##h(x)>p(x)## on ##(0,1)##. However, if ##h(s)>p(s)## for all ##s\in(0,1)## and ##h(h(x))=p(p(x))##, then ##h(p(x))>p(p(x))=h(h(x))##; since ##p(x)<h(x)##, this would imply that ##h## is decreasing somewhere on ##[p(x),h(x)]## and thus not in ##G_+## (##\Rightarrow\Leftarrow##.)

Hence, there must be at least one nontrivial solution ##x_0## to ##h(x)=p(x)##, and its orbit under powers of ##h## (or ##p##) and its inverse generates a countable set of additional solutions.
Prop: Let ##s_{j}##, ##j\in\mathbb Z##, be the orbit described above. Then the functions ##h(x)## and ##p(x)## can each be associated with a countable set of maps in ##G_+##, ##h\rightarrow \{f_j\}_{j\in\mathbb Z}##, ##p\rightarrow\{g_j\}_{j\in\mathbb Z}##, where ##f_j(x) = \frac{1}{h(s_{j+1})-h(s_j)}\Big(h((s_{j+1}-s_j)x+s_j)-h(s_j)\Big)## and ##g_j(x)## are defined analogously. The functions ##f_j(x)## and ##g_j(x)## are related by ##f_{j+1}f_j=g_{j+1}g_j## for all ##j\in\mathbb Z##.
Proof:
This is just another way of expressing the fact that ##h(h(x))=p(p(x))## and that both ##h## and ##p## map ##[s_j,s_{j+1})## bijectively to ##[s_{j+1},s_{j+2})##.

The above proposition gives insight into the extent to which ##p(x)## can differ from ##h(x)##.

Prop: Given ##f_j## and ##g_j##, ##j\in\mathbb Z## as above, let ##C_j=f_{j-1} f_{j-2} \cdots f_1 f_0## for ##j>0##. Then for ##i>0##,
$$
g_i=\begin{cases}
f_i(C_i(g_0^{-1}f_0)C_i^{-1}),\quad i\text{ odd}\\
f_i(C_i(f_0^{-1}g_0) C_i^{-1}),\quad i\text{ even}
\end{cases}
$$
and for ##i<0##, letting ##D_j=f_0f_{-1}\cdots f_{j+1}##,
$$
g_i=\begin{cases}
(D_i^{-1}(f_0g_0^{-1})D_i )f_i,\quad i\text{ odd}\\
(D_i^{-1}(g_0f_0^{-1})D_i )f_i,\quad i\text{ even}
\end{cases}
$$
Proof:
(By induction, applying ##g_{j+1}=f_{j+1}f_jg_j^{-1}##.)
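A quick sanity check of these closed forms, using invertible affine maps as stand-ins for group elements (a sketch of mine: affine maps compose and invert by the same algebra, though they ignore the boundary conditions of ##G_+##):

```python
import numpy as np

# an affine map a*x + b is stored as the pair (a, b)
def comp(F, G):                  # (F o G)(x) = F(G(x))
    (a1, b1), (a2, b2) = F, G
    return (a1 * a2, a1 * b2 + b1)

def inv(F):                      # inverse of x -> a*x + b
    a, b = F
    return (1.0 / a, -b / a)

rng = np.random.default_rng(0)
f = [(rng.uniform(0.5, 2.0), rng.uniform(-1, 1)) for _ in range(3)]  # f_0, f_1, f_2
g0 = (rng.uniform(0.5, 2.0), rng.uniform(-1, 1))

# the recursion g_{j+1} = f_{j+1} f_j g_j^{-1}
g1 = comp(comp(f[1], f[0]), inv(g0))
g2 = comp(comp(f[2], f[1]), inv(g1))

# the closed forms, with C_1 = f_0 and C_2 = f_1 f_0
C1, C2 = f[0], comp(f[1], f[0])
g1_closed = comp(f[1], comp(C1, comp(comp(inv(g0), f[0]), inv(C1))))  # i = 1, odd
g2_closed = comp(f[2], comp(C2, comp(comp(inv(f[0]), g0), inv(C2))))  # i = 2, even

print(np.allclose(g1, g1_closed), np.allclose(g2, g2_closed))  # True True
```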

Note that ##g_i=f_i## (and ##p=h##) if and only if ##g_0f_0^{-1}=\mathbb I##. This is probably the point where I could begin referring to published literature, but due to my ignorance I shall stumble onward.

First, an example might be helpful. Consider the function
$$
h(x)=\begin{cases}
\frac{3}{2}x,\quad x<\frac{1}{2},\\
\frac{1}{2}x+\frac{1}{2},\quad x\geq \frac{1}{2}.
\end{cases}
$$
and orbits of the point ##x_0=\frac{1}{2}##. In this case, the functions ##f_j## are all equal to ##\mathbb{I}(x)##, and the orbits partition ##[0,1]## into two collections of exponentially shrinking subintervals.
Let's suppose (adhering to the principle of least action) that ##g_0=h##, so that ##g_i=h## if ##i## is even and ##g_i=h^{-1}## for ##i## odd. The associated graph of ##p(x)## has a somewhat irregular fractal shape, but clearly ##p^2=h^2## and ##p\neq h##. Moreover, ##p(x)## is locally as well-behaved as ##h(x)## on the interior ##(0,1)##, apart from having an infinite number of cusps that accumulate near the end points, and has a bounded first derivative. So boundary conditions are essential to uniqueness.
[Attached plots: ##h(x)## and ##p(x)## (hpoplotomus.png); ##h^2(x)## and ##p^2(x)## (hsqpsq.png).]
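For the curious, here is a short sketch that builds this ##p(x)## numerically from the orbit ##s_j=h^j(\frac{1}{2})## and checks ##p\circ p=h\circ h## on a grid (truncating the orbit to ##|j|\leq N## and falling back to ##h## outside it are my own shortcuts):

```python
import numpy as np

def h(x):     return 1.5 * x if x < 0.5 else 0.5 * x + 0.5
def h_inv(y): return y / 1.5 if y < 0.75 else 2.0 * y - 1.0

# the orbit s_j = h^j(1/2) for j = -N..N; index k in the array means j = k - N
N = 30
s = [0.5]
for _ in range(N):
    s.append(h(s[-1]))
for _ in range(N):
    s.insert(0, h_inv(s[0]))
s = np.array(s)

def p(x):
    k = np.searchsorted(s, x, side='right') - 1
    if k < 0 or k >= len(s) - 2:     # beyond the truncated orbit: tiny intervals
        return h(x)                  # near the end points; fall back to h there
    u = (x - s[k]) / (s[k + 1] - s[k])           # rescale [s_j, s_{j+1}] to [0,1]
    v = h(u) if (k - N) % 2 == 0 else h_inv(u)   # g_j = h (j even), h^{-1} (j odd)
    return s[k + 1] + v * (s[k + 2] - s[k + 1])  # rescale into [s_{j+1}, s_{j+2}]

xs = np.linspace(0.01, 0.99, 1999)
print(max(abs(p(p(x)) - h(h(x))) for x in xs))   # ~ 0: p^2 = h^2 ...
print(max(abs(p(x) - h(x)) for x in xs))         # ... even though p != h
```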

If we require that functions have continuous first derivatives everywhere, then uniqueness follows from the observation that the intervals ##[s_i,s_{i+1})## shrink to zero width near the end points. Continuity of the first derivative of ##h## then implies that the ##f_j## converge to ##\mathbb I## as ##j\rightarrow \pm\infty##, so any bumps in ##f_mg_m^{-1}##, with ##[s_m,s_{m+1})## near an end point, are preserved in ##g_j## for arbitrarily large ##|j|##, causing (lower-bounded) oscillations in the derivative over vanishingly small intervals, in contradiction to the ##C^1## condition. (The extent to which regularity can be weakened from ##C^1## remains to be determined.)
 

  • #39
Not that this should be necessary, but I acknowledge that the extent of my mathematical ability is quite meager compared to that of many readers and members of PF.
 
  • #40
(1, 11, 12, 16, 24, 25, 36, 38)

Not much new under the sun, but I wanted to gather what we have.

If I got it right, we now have two infinite, proper normal subgroups: the involutions and the ##1##-component ##G_{\mathbf{+}}##, which consists of all square roots of itself. In particular, ##G## is not simple.
$$
I:=\{\,g\in G\,|\,g^2=1\,\}\, , \,G_{\mathbf{+}} = \{\,f^2\,|\,f\in G_{\mathbf{+}}\,\} \triangleleft \; G = G_{\mathbf{+}} \,\dot{\cup }\; G_{\mathbf{-}}
$$
with the two connected components in the ##L^2## topology
\begin{align*}
1 \in G_{\mathbf{+}}&=\{\,g\in G\,|\,g(0)=0\, , \,g(1)=1\,\}=\{\,g\in G\,|\,g\sim 1\,\}\\
1-x\in G_{\mathbf{-}}&=\{\,g\in G\,|\,g(0)=1\, , \,g(1)=0\,\}=\{\,g\in G\,|\,g\sim 1-x\,\}\\
G_{\mathbf{-}}&=G_{\mathbf{+}}\circ (1-x)
\end{align*}
which is a ##\mathbb{Z}_2## grading since ##G_{\mathbf{\varepsilon}} \cdot G_{\mathbf{\eta}} \subseteq G_{\mathbf{\varepsilon \cdot \eta}}\,;\,\varepsilon,\eta \in \{\,\mathbf{+},\mathbf{-}\,\}\,.##

##d(f,g) := \dfrac{1}{2}\left(\|x-f^{-1}\circ g(x)\|_2+\|x-g^{-1}\circ f(x)\|_2\right)=d(1,f^{-1}g)=d(1,g^{-1}f)## defines a symmetric function, invariant under left multiplication, such that ##d(f,g)=0 \Longleftrightarrow f=g\,.##
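A numerical sketch of ##d(\cdot,\cdot)## for two elements of the form ##x\longmapsto x^\beta## listed below, with the inverses computed by root finding (assuming SciPy is available; quadrature and root-finding tolerances are left at their defaults):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def inverse(f):
    # numerical inverse of an increasing bijection of [0,1]
    return lambda y: brentq(lambda x: f(x) - y, 0.0, 1.0)

def l2(u):
    # the L^2 norm of u on [0,1]
    return np.sqrt(quad(lambda x: u(x) ** 2, 0.0, 1.0)[0])

def d(f, g):
    finv, ginv = inverse(f), inverse(g)
    return 0.5 * (l2(lambda x: x - finv(g(x))) + l2(lambda x: x - ginv(f(x))))

f = lambda x: x ** 2
g = lambda x: x ** 3
print(d(f, g))   # some positive distance
print(d(f, f))   # 0 up to quadrature/root-finding error
```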

The center of ##G## is trivial: assume ##f\in Z(G)## and ##f(\alpha)\neq \alpha## for some ##\alpha \in (0,1)##. Set ##\beta := -\dfrac{\log 2}{\log \alpha}## and consider ##g(x)=x^\beta##. Since ##\beta > 0##, we have ##g\in G## and
$$
f(g(\alpha))=f(\alpha^\beta)=f\left(\dfrac{1}{2}\right) = g(f(\alpha))=f(\alpha)^\beta \neq \alpha^\beta = \dfrac{1}{2}
$$
On the other hand, evaluating ##f\circ (1-x)=(1-x)\circ f## at ##x=\dfrac{1}{2}## gives ##f(1/2)=1-f(1/2)##, so ##f## has a fixed point at ##x=1/2##, contradicting the above. Thus ##Z(G)=1\,.##
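(For instance, ##\alpha=\tfrac{1}{4}## gives ##\beta=-\log 2/\log\tfrac{1}{4}=\tfrac{1}{2}## and ##g(x)=\sqrt{x}##, so indeed ##g(\alpha)=\tfrac{1}{2}##.)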

Useful group elements:
##1\, : \,x\longmapsto x \; , \;1-x\, : \,x\longmapsto 1-x\; , \;x\longmapsto x^\beta\;(\beta > 0)\; , \;x\longmapsto \dfrac{1}{2}\left(2x-1\right)^{2n+1}+\dfrac{1}{2}##
##f_n\, : \,x\longmapsto \dfrac{e^{x^n}-1}{e-1}\; , \;f_n^{-1}\, : \,x\longmapsto \sqrt[n]{\log ((e-1)x+1)}##

We even have an exponential function ##\exp\, : \,G \longrightarrow G\;## given by
$$
(\exp (f))(x) = \dfrac{1}{e-1} \sum_{n=1}^\infty \dfrac{f^n}{n!}(x)= \dfrac{1}{e-1} \left(f(x)+\dfrac{f(f(x))}{2!}+\dfrac{f(f(f(x)))}{3!} + \cdots\right)
$$
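Here ##f^n## denotes the ##n##-fold composite. A truncated-series check of this definition (a sketch; note the normalization ##\frac{1}{e-1}## forces the sum to start at ##n=1##, which is what makes ##\exp(f)## fix both end points):

```python
import numpy as np

def exp_G(f, x, terms=25):
    """Truncation of (exp f)(x) = (f(x) + f(f(x))/2! + ...) / (e - 1)."""
    total, fx, fact = 0.0, x, 1.0
    for n in range(1, terms + 1):
        fx = f(fx)       # fx = f^n(x), the n-fold composition
        fact *= n        # fact = n!
        total += fx / fact
    return total / (np.e - 1.0)

f = lambda x: x ** 2
xs = np.linspace(0.0, 1.0, 11)
ys = [exp_G(f, x) for x in xs]
print(ys[0], ys[-1])             # end points: 0 and ~1
print(np.all(np.diff(ys) > 0))   # increasing on the grid, as an element of G_+ should be
```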
##G## also operates on the Hilbert space ##L^2([0,1];\mathbb{R})## by ##\varphi(g)(F)=g.F := F\circ g^{-1}##. Hence ##\left(L^2([0,1];\mathbb{R})\,,\,\varphi\right)## is a representation of ##G##.

Further conjectures are:
  • ##G## is a Lie group with the topology induced by ##L^2## (a ##2##-fold covering of ##G_{\mathbf{+}}##)
  • ##d(\cdot,\cdot)## is a metric on ##G##, possibly with a modified triangle inequality.
 
