Challenge Math Experiment: Let's Prove Something

  • Thread starter: fresh_42
  • Tags: Experiment
This is an experiment. I thought of a way to bridge the gap between the usual challenge threads. Of course we could shorten the monthly period, but given that there are almost always untouched problems, more of them might not be the solution. Today we had a thread "Is math a language" by @frankin garcia, and most of us would certainly think so. So if it is, then we can write an essay in this language, formally known as a proof. I have no idea how it will go, where it ends, or whether it makes sense at all. I thought we could give it a try until the next load of challenges in December. We start with:

Let ##G## be a not necessarily Abelian group of square integrable smooth, real functions ##f\, : \,I=[0,1]\longrightarrow [0,1]## on the unit interval, and ##\mathfrak{g}## its real Lie algebra.

Now everybody can either add a conclusion based on all previous posts, e.g. "Since ##I## is compact, all functions ..." or add additional properties, e.g. "Assume ##G## is simple." or focus on additional perspectives, e.g. "Let us consider the center ##Z(G)## of ##G## ...". Conclusions must be proven (keep it short) or the relevant theorem must be quoted. In my example it could be e.g. Heine-Borel or Weierstrass, depending on which direction you want to go: topology or analysis. Please choose only one of these possibilities per post and do not post more than once in a row, i.e. you may continue after somebody else posted something. The projected runtime is until end of month, but it will depend on what actually will happen.

Now let's see whether we can prove something!

Edit: The group operation is ##(fg)(x)=f(g(x))## and integration is Lebesgue. I also corrected the domain accordingly. A two-dimensional domain might have been more fun, but let's start simple and see who participates.
 
An interesting experiment!

To make the thread self-contained: For what operation is ##G## a group? And by what bracket is the Lie algebra given?
 
Yes, Lebesgue. But since I assume invertible functions, no null sets or other major differences between the two should occur on a compact domain.

Let's assume that the group operation is the natural ##(f\circ g)(x)=f(g(x))## and that ##\mathfrak{g}## consists of the left-invariant smooth vector fields on ##G##, i.e. the smooth sections of the tangent bundle. Multiplication is then defined by the flows along these fields.

Let ##X,Y## be two vector fields on the smooth manifold ##G## and ##\Phi^X_t## the flow of ##X##.
The Lie derivative of ##Y## along ##X## is defined by
\begin{equation}
\mathcal{L}_XY = \left. \frac{d}{dt} \right|_{t=0}\left((\Phi^X_t)^*Y\right)
\end{equation}
Then
\begin{equation}
\mathcal{L}_XY = [X,Y] = X\circ Y - Y \circ X
\end{equation}
 
It looks like a Hilbert space to me - with the usual inner product ##\langle f, g\rangle = \int f\cdot g##.
 
Svein said:
It looks like a Hilbert space to me - with the usual inner product ##\langle f, g\rangle = \int f\cdot g##.
Not quite. As the elements form a multiplicative group, it cannot be a vector space.
 
fresh_42 said:
Not quite. As the elements form a multiplicative group, it cannot be a vector space.
Why not? I can come up with two bases straight away:
  1. The functions ##1, x, x^{2},x^{3},x^{4},\ldots## form a countable basis for ##G##
  2. The functions ##\left[ \sin(n(2\pi (x-\frac{1}{2}))),\cos(n(2\pi (x-\frac{1}{2}))) \right]_{n=0}^{\infty}## form another countable basis
 
Svein said:
Why not? I can come up with two bases straight away:
  1. The functions ##1, x, x^{2},x^{3},x^{4},\ldots## form a countable basis for ##G##
  2. The functions ##\left[ \sin(n(2\pi (x-\frac{1}{2}))),\cos(n(2\pi (x-\frac{1}{2}))) \right]_{n=0}^{\infty}## form another countable basis
##0 \notin G##. Also, a basis would mean that only finitely many coefficients are different from zero, but ##G## contains power series. In any case, since ##x-x \notin G##, the question doesn't come up.
 
How are you putting a smooth structure on ##G##? Is the topology on ##G## given by the compact-open topology?
 
I only wanted to give a frame: a group of smooth functions. I haven't elaborated on the details. Square integrability was already redundant after the requirement that group multiplication is consecutive application. The Lie algebra multiplication of vector fields should only ensure that the functions are the points of the manifold, and their tangents shouldn't be taken to define ##\mathfrak{g}##.

So deciding which topology applies is already a first step. Just choose a "normal" one. The group allows the ##L^2## metric, which is a natural candidate. I guess ##G## is even dense in ##L^2([0,1])##.
 
  • #10
What's the group operation? If you multiply pointwise, you cannot have inverses unless you consider ##f## with ##f(x)\neq 0##. Or maybe you want your target to be ##(0,1]## or ##[\epsilon,1]##, ##\epsilon >0##, for compactness?
 
  • #11
I'm not sure whether it's better to end the experiment or restart it somehow. The goal is not to guess a proof or a theorem which I have in mind, the goal is to create one!

Let $$G=\left(\{\,f \in L^2([0,1];[0,1])\,|\,f \text{ is bijective and smooth }\,\}\, , \,\circ \, : \,(f,g) \longmapsto (x \longmapsto f(g(x)))\right)$$ be a multiplicative group with the induced topology from the ##L^2## inner product, norm and metric. This should be a Lie group, so we also have a Lie algebra ##\mathfrak{g}##, the smooth tangent vector field of flows in ##G##. I also guess that ##G \subseteq L^2([0,1];[0,1])## is an open, dense subset.

So that's it so far. Possible moves are:

1. conclusion from previously given or found facts (e.g. all ##f## have a minimum ##x_f## and a maximum ##x_F## by Weierstraß)
2. addition of conditions (e.g. we assume that ##\mathfrak{g}## is solvable, which also restricts ##G##)
3. addition of objects (e.g. let ##D## be the maximal simple subgroup of ##G##)

and the only rule is not to post twice in a row. From here on it's just creativity.
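
None of this fixes a computational model, but for quick numerical spot checks later in the thread one can represent an element of ##G## by its values on a grid and realize composition by interpolation. A minimal sketch in Python (the grid size and the helper names are my own choices, not part of the problem statement):

```python
import numpy as np

# Represent an element of G by its values on a grid over [0, 1].
N = 1001
x = np.linspace(0.0, 1.0, N)

def compose(f_vals, g_vals):
    """(f o g)(x) = f(g(x)); f is evaluated by linear interpolation."""
    return np.interp(g_vals, x, f_vals)

def invert(f_vals):
    """Inverse of an increasing bijection [0,1] -> [0,1], again as grid values."""
    return np.interp(x, f_vals, x)

def l2_norm(h_vals):
    """L^2 norm on [0,1], approximated by averaging over the grid."""
    return np.sqrt(np.mean(h_vals**2))

# Example: f(x) = x^2 lies in G; its inverse should be sqrt(x).
f = x**2
print(l2_norm(compose(f, invert(f)) - x))   # ~0: f o f^{-1} = id
print(l2_norm(invert(f) - np.sqrt(x)))      # ~0: computed inverse matches sqrt
```

The same pattern (grid values, composition by interpolation, grid-averaged ##L^2## norm) is reused in the small checks further down.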
 
  • #12
I'll try to get the party started.

Consider the smooth map ##T:G\to\mathbb{R}## given by ##T(f)=\int_0^1 |f(x)|^2 dx##.

I'm still not totally sure what the smooth structure on ##G## is though, especially as it's infinite-dimensional, and shouldn't we also require elements of ##G## to have smooth inverses (so that ##G## is closed under inversion)?
 
  • #13
Infrared said:
I'm still not totally sure what the smooth structure on ##G## is though, especially as it's infinite-dimensional, and shouldn't we also require elements of ##G## to have smooth inverses (so that ##G## is closed under inversion)?
Yes, but we can pretend that it is given by the word "group".
 
  • #14
fresh_42 said:
Yes, but we can pretend that it is given by the word "group".

I think he means that the inversion map ##G \to G,\; g \mapsto g^{-1}## is smooth, not that the inverses of the smooth maps are themselves smooth.
 
  • #15
Math_QED said:
I think he means that the inversion map ##G \to G,\; g \mapsto g^{-1}## is smooth, not that the inverses of the smooth maps are themselves smooth.
Well, it can be decided mathematically: prove it or find a counterexample. But as there is so much boundary around, I wouldn't expect a counterexample.

To keep the problem posts distinguishable from the side discussions, please put the sequence of previously relevant posts at the top of any post on the problem. Right now we are at the prefix

(11,12)
 
  • #16
(11,12)

Infrared said:
Consider the smooth map ##T:G\to\mathbb{R}## given by ##T(f)=\int_0^1 |f(x)|^2 dx##.
This means that ##T(f)=\|f\|_2^2## is the restriction to ##G## of the squared ##L^2##-norm of ##\mathcal{A}:= L^2([0,1];\mathbb{R})##, and ##\operatorname{im}(T)=(0,1).## ##G## is not closed in this norm topology: the sequence ##f_n(x)= \dfrac{e^{x^n}-1}{e-1} \in G## converges to ##f(x)=\begin{cases}0 &,\,x<1\\1&,\,x=1\end{cases},## which is not in ##G##.

(... which is a hint that ##G## might not be a Lie group.)
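
A quick numerical sanity check of this convergence (not part of the argument; the grid and names are mine):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100001)

def f_n(n):
    # the bijections f_n(x) = (e^{x^n} - 1)/(e - 1), all of which lie in G
    return (np.exp(x**n) - 1.0) / (np.e - 1.0)

# The L^2 limit is the zero function (the pointwise limit differs from it
# only at x = 1), so ||f_n - 0||_2 should shrink as n grows.
for n in (1, 5, 25, 125, 625):
    print(n, np.sqrt(np.mean(f_n(n)**2)))
```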
 
  • #17
Here's a basic question: what's the smallest subset in ##G## that generates ##G##? (Or even more basic: what subsets in ##G## generate all of ##G##?) Here are a few simple conjectures:
(i) Existence of (e.g. square) roots: if ##g\in G##, then there exists a unique ##\sqrt{g}## such that ##\sqrt{g}\circ\sqrt{g}=g##.
(ii) Existence of flows: if ##g\in G##, there exists a unique family ##\{g_\lambda\}_{\lambda\in \mathbb{R}^+}## with ##g_1= g##, ##g_0=\mathbb{I}##, and ##g_a\circ g_b = g_{a+b}##.
(iii) Convexity: if ##g_1, g_2\in G## are both convex, then so is ##g_1\circ g_2##.
(iv) Noncommutativity: e.g. let ##g_1(x)=x^2## and ##g_2(x)=\frac{2}{\pi}\arcsin(x)## (a numerical spot check follows below this list).
(v) Piecewise linear approximation: if ##g\in G## and ##\epsilon> 0##, there exists a piecewise linear ##g_{\epsilon}(x)## such that ##\|g\circ g'-g_\epsilon\circ g'\|<\epsilon## for all ##g'\in G##.
(vi) Same as (v) for powers of ##g\in G## up to some ##n\in\mathbb N##.
(vii) Interleaving properties of cusp singularities in piecewise linear functions under composition.
(viii) Approximating arbitrary piecewise linear functions with those generated from a countable subset.
(ix) Same as (viii) for finite subsets (of piecewise linear functions).
(x) Criterion for commutativity: ##g_1## and ##g_2\in G## commute iff they belong to the same one-parameter flow ##g_\lambda##, ##\lambda\in\mathbb R^+##. (?)
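
Conjecture (iv) at least is easy to check numerically; a small sketch (grid and helper names are mine):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1001)
g1 = x**2                          # g1(x) = x^2
g2 = (2.0 / np.pi) * np.arcsin(x)  # g2(x) = (2/pi) arcsin(x)

def compose(f_vals, g_vals):
    # (f o g) on the grid; f is evaluated by linear interpolation
    return np.interp(g_vals, x, f_vals)

diff = compose(g1, g2) - compose(g2, g1)
print(np.sqrt(np.mean(diff**2)))   # clearly nonzero: g1 and g2 do not commute
```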
 
  • #18
(i) Consider derivatives of ##\sqrt{g}##.
(ii) After (i), uniqueness of flows follows from continuity.
(x) There are a large number of mutually commuting, distinct, one-parameter flows. (Are there more?)
(xi) The 'center' of ##G## consists of just the identity. This can be seen by considering the shift operation, ##g(x)\rightarrow T(g,s)(x)=\overline{g(\overline{x-s})-g(1-s)}## (where ##\bar x\equiv x\text{ mod }1##.)
 
  • #19
Couchyam said:
(i) Consider derivatives of ##\sqrt{g}##.
(ii) After (i), uniqueness of flows follows from continuity.
(x) There are a large number of mutually commuting, distinct, one-parameter flows. (Are there more?)
(xi) The 'center' of ##G## consists of just the identity. This can be seen by considering the shift operation, ##g(x)\rightarrow T(g,s)(x)=\overline{g(\overline{x-s})-g(1-s)}## (where ##\bar x\equiv x\text{ mod }1##.)
That's not exactly what I meant. Choose an option, prove it, and post it with a first line like (11,12,16) indicating the posts which contributed to our proof so far. Brainstorming isn't meant as a contribution. So your post should have been:

(11,12,16)

The center of ##G## is trivial: ##Z(G)=\{\,\operatorname{id}_{[0,1]}=1\,\}.##
Proof: Say ##f\in Z(G) ## and ##g(x):= \ldots ##

Or you can still prove or disprove whether ##G## is actually a Lie group.
 
  • #20
Infrared said:
I'll try to get the party started.

Consider the smooth map ##T:G\to\mathbb{R}## given by ##T(f)=\int_0^1 |f(x)|^2 dx##.

I'm still not totally sure what the smooth structure on ##G## is though, especially as it's infinite-dimensional, and shouldn't we also require elements of ##G## to have smooth inverses (so that ##G## is closed under inversion)?
Maybe a Banach manifold? I don't know if they can be Lie groups?
 
  • #21
WWGD said:
Maybe a Banach manifold? I don't know if they can be Lie groups?
Addition is a problem within ##[0,1]##, which is needed for the multiplication ##(fg)(x)=f(g(x))## to make sense.

All my attempts to find an example with ##||f^{-1}-g^{-1}||_2 > \varepsilon## given ##||f-g||_2 < \delta## were in vain. Whenever two functions were close, ##f^{-1}## and ##g^{-1}## were also close. But I have neither a proof nor a counterexample. So the Lie question is still open. Other topological features except "not closed" are also still open.
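
For what it's worth, here is the kind of numerical experiment I mean (the particular family ##x^b## and the helper names are mine; it doesn't settle anything, it only compares the two distances):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 20001)

def inv(f_vals):
    # inverse of an increasing bijection [0,1] -> [0,1], as grid values
    return np.interp(x, f_vals, x)

def l2(h):
    return np.sqrt(np.mean(h**2))

f = x**10
for b in (10.5, 11.0, 15.0, 20.0):
    g = x**b
    print(b, l2(f - g), l2(inv(f) - inv(g)))  # ||f - g||_2 vs ||f^{-1} - g^{-1}||_2
```

At least for this family the inverses stay about as close to each other as the functions themselves.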
 
  • #22
fresh_42 said:
All my attempts to find an example with ##||f^{-1}-g^{-1}||_2 > \varepsilon## given ##||f-g||_2 < \delta## were in vain. Whenever two functions were close, ##f^{-1}## and ##g^{-1}## were also close. But I have neither a proof nor a counterexample. So the Lie question is still open. Other topological features except "not closed" are also still open.
I might try using the reflection property of inverse functions, and the fact that ##x<1\Rightarrow x^2<x## (so the ##L^2## norm is bounded by the ##L^1## norm in this case.) It could be there exists a more natural metric, however (one that is invariant with respect to the Lie group multiplication.)
 
  • #23
Another interesting subject is involutions. But it's not my turn.
 
  • #24
(11, 12, 16, 21)

Prop:
The function ##d(f,g) \equiv \frac{1}{2}(\|x-f^{-1}\circ g(x)\|_2+\|x-g^{-1}\circ f(x)\|_2)## is invariant under left multiplication.
Proof:
##d(af,ag) = \frac{1}{2}(\|x-f^{-1}\circ (a^{-1}\circ a)\circ g(x)\|_2+\|x-g^{-1}\circ (a^{-1}\circ a)\circ f(x)\|_2)=d(f,g).##

(The above function is just the 'average' of the distances to ##g## from the perspective of ##f## and vice versa.)
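
A quick numerical confirmation of the invariance (grid representation and names are mine):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 10001)

def comp(f, g):                     # (f o g) on the grid
    return np.interp(g, x, f)

def inv(f):                         # inverse of an increasing bijection
    return np.interp(x, f, x)

def l2(h):
    return np.sqrt(np.mean(h**2))

def d(f, g):
    return 0.5 * (l2(x - comp(inv(f), g)) + l2(x - comp(inv(g), f)))

f, g, a = x**2, np.sqrt(x), x**3            # three increasing bijections of [0,1]
print(d(f, g), d(comp(a, f), comp(a, g)))   # both values should agree
```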
 
  • #25
(11, 12, 16, 24)

The functions ##1## and ##x\longmapsto 1-x## are clearly involutions, and since ##\left(f\iota f^{-1}\right)^2=f \iota^2 f^{-1}=1##, the involutions form a normal, infinite subgroup of ##G##. Of course, in general ##f\iota f^{-1} \neq f^{-1}\iota f\,.##
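
A small numerical illustration that conjugates of ##\iota(x)=1-x## are again involutions (grid and names are mine):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 10001)

def comp(f, g):                     # (f o g) on the grid
    return np.interp(g, x, f)

def inv(f):                         # inverse of an increasing bijection
    return np.interp(x, f, x)

iota = 1.0 - x                      # the involution x -> 1 - x
f = 0.5 * (x + x**2)                # some element of G_+
conj = comp(comp(f, iota), inv(f))  # f o iota o f^{-1}
err = comp(conj, conj) - x          # its square should be the identity
print(np.max(np.abs(err)))          # ~0 up to interpolation error
```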
 
  • #26
fresh_42 said:
Summary: Another Kurzweil.

To me a Kurzweil is a keyboard musical instrument -- I would like to ask that a constraint be added to this venture, and to ask also (by "also", I mean 'too'; not 'thus' :oldwink:) that it be adopted as retroactive; viz, the constraint that all terms or words used be free of names of persons, and instead be named after by-reason-intelligible characteristics, so that, for example, the term 'non-Abelian' does not occur -- I don't have an agenda against crediting Prof. Abel, who contributed so much of great worth; however, one can understand the term 'non-commutative' without having to learn much about who discovered or invented what.
 
  • #27
Abelian is the standard way to refer to commutativity in abstract algebra. It shouldn't be a big thing on a website where you can hardly find a post without a Lagrangian or a Hamiltonian.
 
  • #28
fresh_42 said:
Abelian is the standard way to refer to commutativity in abstract algebra. It shouldn't be a big thing on a website where you can hardly find a post without a Lagrangian or a Hamiltonian.
It's a pet peeve of mine -- I detest the practice of naming things after persons -- I think that it's better to name things after their properties.
 
  • #29
sysprog said:
It's a pet peeve of mine -- I detest the practice of naming things after persons -- I think that it's better to name things after their properties.
I prefer to use what is standard. E.g. when you look for "commutative group" on Wikipedia, you end up here:
https://en.wikipedia.org/wiki/Abelian_group. And what about all the other examples where a name abbreviates a collection of properties: Artinian or Noetherian rings, Lie groups, the many algebras named after persons? This is a senseless discussion.
 
  • #30
fresh_42 said:
I prefer to use what is standard. E.g. when you look for "commutative group" on Wikipedia, you end up here:
https://en.wikipedia.org/wiki/Abelian_group. And what about all the other examples where a name abbreviates a collection of properties: Artinian or Noetherian rings, Lie groups, the many algebras named after persons? This is a senseless discussion.
Standards -- so many of them to choose from -- in system(s) programming we often shorten names of things by using acronyms, not as often by naming things after persons. I disagree with your contention that this discussion is "senseless", and would prefer, for example, along with my prior example, to see the term 'planar', where and when reasonable, used in preference to 'Euclidean', at least whenever planar is all that is actually meant.
 
  • #31
Most people associate planar with something two-dimensional, whereas Euclidean is not so limited. As this is off-topic here, I suggest starting a thread about it in general discussion. There is certainly more to say about this topic.
 
  • #32
Oh and by the way, especially for anyone else reading this, an acknowledgment to for and of @fresh_42: I'm pretty sure that he's very extremely much better at math than I am.
 
  • #33
. . . and he's not the only one here on PF of whom that's true . . .
 
  • #34
sysprog said:
Oh and by the way, especially for anyone else reading this, an acknowledgment to for and of @fresh_42: I'm pretty sure that he's very extremely much better at math than I am.
Thanks, but I have already forgotten many things. Ask @Infrared and @Math_QED, they catch all my carelessness and inaccuracies. We are quite a luxurious website, because errors don't go unseen. I love this about PF and it can't be said too often.
 
  • #35
A summary of where we are:

(1) Let ##G## be the group of square integrable smooth, real functions ##f\, : \,I=[0,1]\longrightarrow [0,1]##.

(11) ##G \subseteq L^2([0,1],\mathbb{R})## and ##(f\circ g)(x)=f(g(x))##

(12) Consider the smooth map ##T\, : \,G \longrightarrow \mathbb{R}## given by ##T(f)=\displaystyle{\int_0^1} |f(x)|^2 dx##.

(16) This means that ##T(f)=\|f\|_2^2## is the restriction to ##G## of the squared ##L^2##-norm of ##\mathcal{A}:= L^2([0,1];\mathbb{R})##, and ##\operatorname{im}(T)=(0,1).##
##G## is not closed in this norm topology: the sequence ##f_n(x)= \dfrac{e^{x^n}-1}{e-1} \in G## converges to ##f(x)=\begin{cases}0 &,\,x<1\\1&,\,x=1\end{cases}## which is not in ##G##.

(24) The function ##d(f,g) \equiv \dfrac{1}{2}\left(\|x-f^{-1}\circ g(x)\|_2+\|x-g^{-1}\circ f(x)\|_2\right)## is invariant under left multiplication.

(25) ##x \longmapsto 1-x ## is an involution of ##G##. Involutions form a normal, infinite subgroup of ##G##.

Copy template for continuation: (1, 11, 12, 16, 24, 25)
 
  • #36
(11, 12, 16, 17, 24, 25)
Uniqueness of 'square roots'
fresh_42 said:
The functions ##1## and ##x\longmapsto 1-x## are clearly involutions, and since ##\left(f\iota f^{-1}\right)^2=f \iota^2 f^{-1}=1##, the involutions form a normal, infinite subgroup of ##G##. Of course, in general ##f\iota f^{-1} \neq f^{-1}\iota f\,.##

The functional ##\mathcal{F}[g](x)=g(1-x)## exchanges the spaces of increasing and decreasing bijections and is its own inverse, and so ##G## consists of two topologically equivalent connected components, say ##G=G_+\cup G_-## (a path from ##\mathbb I(x)## to ##1-x## would violate continuity at the end points, but monotonic functions ##g(x)## of a given type are connected to either ##\mathbb I(x)=x## or ##1-x## through one-parameter flows, ##g_\lambda(x),\,\lambda\in\mathbb R,\,g_0(x)=x## or ##g_0(x)=1-x##, ##g_1(x)=g(x)##.)

Focusing on ##G_+##, the 'flows' can be defined in several ways. One approach is to first examine the square root operator alluded to in post 17, and then define ##g_\lambda(x)## in terms of the binomial expansion of ##\lambda##. Another (and perhaps more elegant) approach might involve concepts from density equalizing maps. First, it is necessary to prove that the 'square root' operator is uniquely determined on functions with positive derivative. For increasing ##g(x)##, with ##g(0)=0## and ##g(1)=1##, one conceptually straightforward (though somewhat messy) approach is to consider the equation
$$
f_{\frac{1}{2}}'(f_{\frac{1}{2}}(x))f_{\frac{1}{2}}'(x)=f'(x),\quad f_{\frac{1}{2}}(0)=0
$$
obtained from differentiating ##f_\frac{1}{2}(f_\frac{1}{2}(x))=f(x)##.
Suppose ##f_{\frac{1}{2}}(x)## has been defined on the interval ##[0,\epsilon]##. If ##f_{\frac{1}{2}}(\epsilon)>\epsilon##, let ##s## be the solution to ##f_{\frac{1}{2}}(s)=\epsilon##. Then ##f_{\frac{1}{2}}'(\epsilon)## can be determined from
$$
f_{\frac{1}{2}}'(\epsilon)=\frac{f'(s(\epsilon))}{f_\frac{1}{2}'(s(\epsilon))}.
$$
If instead ##f_\frac{1}{2}(\epsilon)<\epsilon##, then
$$
f_{\frac{1}{2}}'(\epsilon)=\frac{f'(\epsilon)}{f_\frac{1}{2}'(f_\frac{1}{2}(\epsilon))}
$$

Hence, we may write
$$
f_\frac{1}{2}(x)=\int_0^xdt\bigg[\theta(t-f_\frac{1}{2}(t))\frac{f'(t)}{f_\frac{1}{2}'(f_\frac{1}{2}(t))}+\theta(f_\frac{1}{2}(t)-t)\frac{f'(f_{-\frac{1}{2}}(t))}{f_\frac{1}{2}'(f_{-\frac{1}{2}}(t))}\bigg]
$$
where ##f_{-\frac{1}{2}}(x)## is the inverse of ##f_{\frac{1}{2}}##, or
$$
f_\frac{1}{2}(x)=\int_0^xdt\bigg[\frac{f'(\min(t,f_{-\frac{1}{2}}(t)))}{f_\frac{1}{2}'(\min(f_\frac{1}{2}(t),f_{-\frac{1}{2}}(t)))}\bigg]
$$
and so ##f_\frac{1}{2}(x)## is uniquely defined on ##G_+##.
 
  • #37
@Couchyam Sorry if this is too stupid for you, but could you explain what you did for us rusty readers?
I follow you until the epsilontics start with "Suppose". My questions are:

What is the idea behind the construction, esp. why distinguish the two cases and where does the asymmetry in ##s## come from?

Where do we get the existence of ##f_{\frac{1}{2}}## from, since you start with the necessary condition?

How does uniqueness follow from a recursive construction? My guess is ##f_{\frac{1}{2}}(x) = \lim_{\varepsilon \to x} \displaystyle{\int_0^\varepsilon} \ldots dt ## but we don't know anything about ##f_{-\frac{1}{2}}## on ##[0,\varepsilon]##.

Edit: I tried to follow your argumentation on ##f(x)=4x^3-6x^2+3x \in G_+##.
 
  • #38
(36, 37)

fresh_42 said:
I follow you until the epsilontics start with "Suppose". My questions are:

What is the idea behind the construction, esp. why distinguish the two cases and where does the asymmetry in ##s## come from?

Where do we get the existence of ##f_{\frac{1}{2}}## from, since you start with the necessary condition?

How does uniqueness follow from a recursive construction? My guess is ##f_{\frac{1}{2}}(x) = \lim_{\varepsilon \to x} \displaystyle{\int_0^\varepsilon} \ldots dt ## but we don't know anything about ##f_{-\frac{1}{2}}## on ##[0,\varepsilon]##.

Edit: I tried to follow your argumentation on ##f(x)=4x^3-6x^2+3x \in G_+##.

Apologies for the more-than-somewhat incomplete post. The idea is to construct ##f_{\frac{1}{2}}(x)## from a given ##f(x)## by solving the 'differential equation' ## f_\frac{1}{2}'(f_\frac{1}{2}(x))f_\frac{1}{2}'(x)=f'(x)## on ##[0,1]##, or whatever the existence interval turns out to be, starting at ##x=0##. The interesting part is that the left hand side depends on derivatives of ##f_\frac{1}{2}## evaluated at two points that are typically different (##f_\frac{1}{2}(x)## and ##x##), and one of these, namely ##f_\frac{1}{2}(x)##, could be outside the interval ##[0,\epsilon]## where ##f_\frac{1}{2}## has been defined. To get around this, one evaluates the derivative of ##f_\frac{1}{2}## at the rightmost edge in two different ways, depending on whether ##f_\frac{1}{2}(\epsilon)## is greater or less than ##\epsilon##. If ##f_\frac{1}{2}(\epsilon)\leq\epsilon##, set ##f_\frac{1}{2}'(\epsilon)=\frac{f'(\epsilon)}{f_\frac{1}{2}'(f_\frac{1}{2}(\epsilon))}##. Otherwise, by continuity (and positivity of ##f_\frac{1}{2}'(x)## on ##[0,\epsilon]##) there must exist a point ##s\in[0,\epsilon]## where ##f_\frac{1}{2}(s)=\epsilon##. ##f_\frac{1}{2}'(\epsilon)## can then be expressed in terms of predefined quantities by evaluating the differential equation at ##s## instead of ##\epsilon##. Incidentally, ##s(\epsilon)## is just the inverse of ##f_\frac{1}{2}## evaluated at ##\epsilon##: if ##f_\frac{1}{2}(x)## is defined on ##[0,\epsilon]##, then ##f_{-\frac{1}{2}}(x)## is defined on ##[0,f_\frac{1}{2}(\epsilon)]## (which includes ##[0,\epsilon]## in the case where ##f_{-\frac{1}{2}}(x)## is needed.)

For your example (the function ##4x^3-6x^2+3x##), one starts with ##f_\frac{1}{2}(0)=0##, ##f_\frac{1}{2}'(0)=\sqrt{f'(0)}=\sqrt{3}>1##, so in a neighborhood of ##0## we expect ##f_\frac{1}{2}(x)>x##, and the relevant equation (at least up to the first point where ##f_\frac{1}{2}(x)=x##) is the one involving ##f_{-\frac{1}{2}}(x)##, which can be used to compute ##f_\frac{1}{2}## using e.g. Euler's method. An especially tricky case is ##f(x)=\frac{1}{2}+\frac{1}{\pi}\arcsin\Big(2\big(x-\frac{1}{2}\big)\Big)## (or ##(f(x))^n##), whose derivative diverges at ##x=0##.
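
Just to make the marching idea concrete, here is a crude numerical sketch (all names are mine, and the stepping is plain first-order Euler, so the accuracy is limited). To avoid the interior fixed point of your example at ##x=1/2##, I take its left half rescaled to ##[0,1]##, i.e. ##F(y)=1-(1-y)^3##, which has the closed-form square root ##F_{\frac{1}{2}}(y)=1-(1-y)^{\sqrt{3}}## to compare against:

```python
import numpy as np

N = 2001
t = np.linspace(0.0, 1.0, N)
h = t[1] - t[0]

F  = lambda y: 1.0 - (1.0 - y)**3
Fp = lambda y: 3.0 * (1.0 - y)**2          # F'

g  = np.zeros(N)                           # values of F_{1/2}
gp = np.zeros(N)                           # values of F_{1/2}'
gp[0] = np.sqrt(Fp(0.0))                   # g'(0) = sqrt(F'(0)) = sqrt(3)

for k in range(N - 1):
    if k > 0:
        if g[k] <= t[k]:
            # g(t) <= t: evaluate g'(g(t)) g'(t) = F'(t) directly at t
            gp[k] = Fp(t[k]) / np.interp(g[k], t[:k], gp[:k])
        else:
            # g(t) > t: evaluate the equation at s = g^{-1}(t) instead,
            # which lies inside the range where g is already known
            s = np.interp(t[k], g[:k + 1], t[:k + 1])
            gp[k] = Fp(s) / np.interp(s, t[:k], gp[:k])
    g[k + 1] = g[k] + h * gp[k]            # Euler step

g_exact = 1.0 - (1.0 - t)**np.sqrt(3.0)
print(np.max(np.abs(g - g_exact)))         # should be small (first-order accuracy)
```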

Here's another approach to establishing conditions for uniqueness of the 'square root', which has the added benefit of elucidating how ambiguities can arise depending on what amount of regularity is assumed. Suppose ##h## and ##p## are both in ##G_+##, and that ##h(h(x))=p(p(x))## for all ##x\in[0,1]##. Note that if ##h(h(x))=x##, then ##h(x)=x## (and ##p(x)=x##), and if ##\{x_j\}_{j\in\Lambda}## is the (ordered) set of all fixed points of ##h^2=p^2##, then ##h(x)## and ##p(x)## can be decomposed into self-maps of the intervals ##[x_j,x_{j+1}]##, which themselves can be identified with elements of ##G_+##. So without loss of generality, we can assume that ##h^2(x)>x## and ##p^2(x)>x##.
Prop: under the conditions mentioned above, ##h(x)>x##, ##p(x)>x##, and there exists at least one point ##x_0\in(0,1)## where ##h(x_0)=p(x_0)##.
Proof:
(By contradiction.) First, suppose that ##h(x)<x## for some ##x##. Then ##h(h(x))>x>h(x)##, and so ##h(x)## is decreasing somewhere on ##[h(x),x]##. We know, however, that ##h\in G_+## (##\Rightarrow \Leftarrow##.)
Now, suppose that no solution to ##h(x)=p(x)## exists, apart from ##x=0## and ##x=1##. Then without loss of generality, assume ##h(x)>p(x)## on ##(0,1)##. However, if ##h(s)>p(s)## for all ##s\in(0,1)##, and ##h(h(x))=p(p(x))##, then ##h(p(x))>p(p(x))=h(h(x))##, which would imply that ##h## is decreasing on ##[p(x),h(x)]## and thus not in ##G_+## (##\Rightarrow\Leftarrow##.)

Hence, there must be at least one nontrivial solution to ##h(x)=p(x)##, and this generates a countable set of additional solutions from the orbit under powers of ##h(x)## (or ##p(x)##) and its inverse.
Prop: Let ##s_{j}##, ##j\in\mathbb Z##, be the orbit described above. Then the functions ##h(x)## and ##p(x)## can each be associated with a countable set of maps in ##G_+##, ##h\rightarrow \{f_j\}_{j\in\mathbb Z}##, ##p\rightarrow\{g_j\}_{j\in\mathbb Z}##, where ##f_j(x) = \frac{1}{h(s_{j+1})-h(s_j)}\Big(h((s_{j+1}-s_j)x+s_j)-h(s_j)\Big)## and ##g_j(x)## are defined analogously. The functions ##f_j(x)## and ##g_j(x)## are related by ##f_{j+1}f_j=g_{j+1}g_j## for all ##j\in\mathbb Z##.
Proof:
This is just another way of expressing the fact that ##h(h(x))=p(p(x))## and that both ##h## and ##p## map ##[s_j,s_{j+1})## bijectively to ##[s_{j+1},s_{j+2})##.

The above proposition gives insight into the extent to which ##p(x)## can differ from ##h(x)##.

Prop: Given ##f_j## and ##g_j##, ##j\in\mathbb Z## as above, let ##C_j=f_{j-1} f_{j-2} \cdots f_1 f_0## for ##j>0##. Then for ##i>0##,
$$
g_i=\begin{cases}
f_i(C_i(g_0^{-1}f_0)C_i^{-1}),\quad i\text{ odd}\\
f_i(C_i(f_0^{-1}g_0) C_i^{-1}),\quad i\text{ even}
\end{cases}
$$
and for ##i<0##, letting ##D_j=f_0f_{-1}\cdots f_{j+1}##,
$$
g_i=\begin{cases}
(D_i^{-1}(f_0g_0^{-1})D_i )f_i\quad i\text{ odd}\\
(D_i^{-1}(g_0f_0^{-1})D_i )f_i\quad i\text{ even}
\end{cases}
$$
Proof:
(By induction, applying ##g_{j+1}=f_{j+1}f_jg_j^{-1}##.)

Note that ##g_i=f_i## (and ##p=h##) if and only if ##g_0f_0^{-1}=\mathbb I(x)##. This is probably the point where I could begin referring to published literature, but due to my ignorance I shall stumble onward.

First, an example might be helpful. Consider the function
$$
h(x)=\begin{cases}
\frac{3}{2}x,\quad x<\frac{1}{2},\\
\frac{1}{2}x+\frac{1}{2},\quad x\geq \frac{1}{2}.
\end{cases}
$$
and orbits of the point ##x_0=\frac{1}{2}##. In this case, the functions ##f_j## are all equal to ##\mathbb{I}(x)##, and the orbits partition ##[0,1]## into two collections of exponentially shrinking subintervals.
Let's suppose (adhering to the principle of least action) that ##g_0=h##, so that ##g_i=h## if ##i## is even and ##g_i=h^{-1}## for ##i## odd. The associated graph of ##p(x)## has a somewhat irregular fractal shape, but clearly ##p^2=h^2## and ##p\neq h##. Moreover, ##p(x)## is locally as well-behaved as ##h(x)## on the interior ##(0,1)##, apart from having an infinite number of cusps that accumulate near the end points, and has a bounded first derivative. So boundary conditions are essential to uniqueness.
[Attached plots: hpoplotomus.png, hsqpsq.png]

If we require that functions have continuous first derivatives everywhere, then uniqueness follows from the observation that the intervals ##[s_i,s_{i+1})## shrink to zero width near end-points. Continuity of first derivatives in ##h## then implies ##f_j## converge to ##\mathbb I## as ##j\rightarrow \pm\infty##, so any bumps in ##f_mg_m^{-1}##, with ##[s_m,s_{m+1})## near an end-point, are conserved in ##g_j## for arbitrarily large ##|j|##, causing (lower-bounded) oscillations in the derivative over vanishingly small intervals, in contradiction to the ##C^1## condition. (The extent to which regularity can be weakened at all from ##C^1## is TBD.)
 

  • #39
Also, not that this should be necessary, but I acknowledge that the extent of my mathematical ability is quite meager compared to that of a large number of readers and members of PF.
 
  • #40
(1, 11, 12, 16, 24, 25, 36, 38)

Not much new under the sun, but I wanted to gather what we have.

If I got it right, we now have two normal (infinite) proper subgroups: the involutions and the ##1##-component ##G_{\mathbf{+}}##, which consists of all square roots of itself. In particular, ##G## is not simple.
$$
I:=\{\,g\in G\,|\,g^2=1\,\}\, , \,G_{\mathbf{+}} = \{\,f^2\,|\,f\in G_{\mathbf{+}}\,\} \triangleleft \; G = G_{\mathbf{+}} \,\dot{\cup }\; G_{\mathbf{-}}
$$
with the two ##L^2##-topological connected components
\begin{align*}
1 \in G_{\mathbf{+}}&=\{\,g\in G\,|\,g(0)=0\, , \,g(1)=1\,\}=\{\,g\in G\,|\,g\sim 1\,\}\\
1-x\in G_{\mathbf{-}}&=\{\,g\in G\,|\,g(0)=1\, , \,g(1)=0\,\}=\{\,g\in G\,|\,g\sim 1-x\,\}\\
G_{\mathbf{-}}&=G_{\mathbf{+}}\circ (1-x)
\end{align*}
which is a ##\mathbb{Z}_2## grading since ##G_{\mathbf{\varepsilon}} \cdot G_{\mathbf{\eta}} \subseteq G_{\mathbf{\varepsilon \cdot \eta}}\,;\,\varepsilon,\eta \in \{\,\mathbf{+},\mathbf{-}\,\}\,.##

##d(f,g) := \dfrac{1}{2}\left(\|x-f^{-1}\circ g(x)\|_2+\|x-g^{-1}\circ f(x)\|_2\right)=d(1,f^{-1}g)=d(1,g^{-1}f)## defines a symmetric, under left multiplication invariant function, such that ##d(f,g)=0 \Longleftrightarrow f=g\,.##

The center of ##G## is trivial. Assume ##f\in Z(G)## and ##f(\alpha)\neq \alpha## for some ##\alpha \in (0,1)##. Set ##\beta := -\dfrac{\log 2}{\log \alpha}## and consider ##g(x)=x^\beta##. Since ##\beta > 0## we have ##g\in G## and
$$
f(g(\alpha))=f(\alpha^\beta)=f\left(\dfrac{1}{2}\right) = g(f(\alpha))=f(\alpha)^\beta \neq \alpha^\beta = \dfrac{1}{2}
$$
On the other hand, from ##f\circ (1-x)=(1-x)\circ f## at ##x=\dfrac{1}{2}## we get ##f(1/2)=1-f(1/2)##, so ##f## has a fixed point at ##x=1/2##, contradicting the above. Thus ##Z(G)=1\,.##
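
A tiny numerical echo of this argument (the specific numbers and names are mine): with ##\beta=-\log 2/\log\alpha## the map ##g(x)=x^\beta## indeed sends ##\alpha## to ##1/2##, and a concrete non-identity ##f## fails the required commutation:

```python
import numpy as np

alpha = 0.3
beta = -np.log(2.0) / np.log(alpha)
g = lambda s: s**beta               # g(x) = x^beta, an element of G_+
f = lambda s: 2.0*s - s**2          # an element of G_+ with f(alpha) != alpha

print(g(alpha))                     # alpha^beta = 1/2
print(f(g(alpha)), g(f(alpha)))     # f(1/2) vs f(alpha)^beta: they differ,
                                    # so this f cannot lie in the center
```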

Useful group elements:
##1\, : \,x\longmapsto x \; , \;1-x\, : \,x\longmapsto 1-x\; , \;x\longmapsto x^\beta\;(\beta > 0)\; , \;x\longmapsto \dfrac{1}{2}\left(2x-1\right)^{2n+1}+\dfrac{1}{2}##
##f_n\, : \,x\longmapsto \dfrac{e^{x^n}-1}{e-1}\; , \;f_n^{-1}\, : \,x\longmapsto \sqrt[n]{\log |(e-1)x+1|}##

We even have an exponential function ##\exp\, : \,G \longrightarrow G\;## given by
$$
(\exp (f))(x) = \dfrac{1}{e} \sum_{n=0}^\infty \dfrac{f^n}{n!}(x)= \dfrac{1}{e} \left(x+f(x)+\dfrac{f(f(x))}{2!}+\dfrac{f(f(f(x)))}{3!} + \cdots\right)
$$
and ##G## operates on the Hilbert space ##L^2([0,1];\mathbb{R})## by ##\varphi(g)(F)=g.F := F\circ g^{-1}##. Hence ##\left(L^2([0,1];\mathbb{R})\,,\,\varphi\right)## is a representation of ##G##.

Further conjectures are:
  • ##G## is a Lie group with the topology induced by ##L^2## (a ##2##-fold covering of ##G_{\mathbf{+}}##)
  • ##d(\cdot,\cdot)## is a metric on ##G##, possibly with a modified triangle inequality.
 
