Math Challenge - August 2020

  • #51
nuuskur said:
This implies there exists ##c\in\mathbb R## such that ##F = I_{[c,\infty]}##.

Explain this line.
 
  • #52
Math_QED said:
Hmm, I need a little more details here. Explain why the discontinuities arise. I find your argument a bit too handwavy.
Effectively it boils down to the discontinuity of ##\arg z##. I thought that was trivial and well established!
 
  • Haha
  • Like
Likes nuuskur and etotheipi
  • #53
nuuskur said:
We can kill a fly with a fly swatter. Don't need the rifle.
Suppose ##g:\mathbb C^* \to\mathbb C## is continuous and satisfies the identity ##e^{g(z)} = z##. Since ##g## is a right inverse to ##\exp##, it is injective, thus ##g(\mathbb C^*) \cong \mathbb C^*##. But now ##\exp## is forced to be injective on ##\mathbb C^*##. Indeed, for any ##u,v\neq 0## we have
$$e^u = e^v \Leftrightarrow e^{g(z)} = e^{g(w)} \Leftrightarrow z=w \Rightarrow u=v.$$
But that's impossible.
Ok, here are the details. Firstly, since ##g## is injective, it means ##g(\mathbb C^*) \cong \mathbb C^*## as sets. That means any ##u\neq 0## can be written uniquely as ##u=g(z)##. But we also have a homeomorphism with the obvious choice (this is forced, really) ##f(g(z)) := z,\ z\neq 0##. For continuity of ##f:g(\mathbb C^*) \to \mathbb C^*##, suppose ##g(z_n) \to g(z)##; then continuity of the exponential map implies ##e^{g(z_n)} \to e^{g(z)}##, which by assumption is the same as ##z_n\to z##, so ##f## is continuous.

What fails if ##g## is not continuous: it wouldn't be a morphism in the category with continuous maps, so the above won't apply.
 
Last edited:
  • #54
Math_QED said:
Explain this line.
By definition of limit: as ##x\to -\infty##, there must exist ##A>0## such that ##x\leq -A## implies ##F(x) = \mathbb P\{X\leq x\}=0##. Since the set where ##F=1## is bounded from below, take the infimum, i.e. ##c := \inf \{x\in\mathbb R \mid \mathbb P\{X\leq x\} = 1\}##. Now, whatever happens for ##x>c## must occur with probability ##1## (otherwise ##c## wouldn't be the infimum). There can be no in-betweens, so ##F = I_{[c,\infty]}## is forced due to right continuity of ##F##, i.e. ##F(c+) = 1##.
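The step structure of ##F## can be illustrated with a minimal Python sketch (the value ##c = 1.7## is an arbitrary choice of mine for illustration):

```python
# Hypothetical illustration: if P(X <= x) is always 0 or 1, the CDF is a
# right-continuous unit step at c = inf{x : P(X <= x) = 1}.
c = 1.7

def F(x):
    # F = I_[c, oo): jumps from 0 to 1 exactly at c
    return 1.0 if x >= c else 0.0

assert F(c) == 1.0                 # right continuity: F(c+) = F(c) = 1
assert F(c - 1e-12) == 0.0         # F(c-) = 0, i.e. P(X < c) = 0
assert all(F(x) in (0.0, 1.0) for x in (-5.0, 0.0, c, 3.0))
```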
 
Last edited:
  • Like
Likes member 587159
  • #55
It suffices to show ##C(f) = \{x\in X \mid f\text{ is continuous at }x\}## is a Borel set (it may also be empty). Then ##C(f)^c = D(f)## is a Borel set. Let's chase epsilons from the definition of continuity: ##f## is continuous at ##x\in X## if and only if
$$\forall n\in\mathbb N,\ \exists \delta >0,\ \forall z\in X,\quad z\in B(x,\delta) \Rightarrow f(z) \in B\left (f(x), n^{-1}\right ).$$
In other words we have ##\{x\}= \bigcap _{n\in\mathbb N} \{U\subseteq X \mid x\in U,\ U\text{ is open and }f(U) \subseteq B(f(x), n^{-1}) \}##. Do this for all points of continuity; then we have the ##G_\delta## set
$$\bigcap _{n\in\mathbb N} \bigcup \left\{ U\subseteq X \mid U\text{ is open, }\ \sup_{u,v\in U} d_Y(f(u),f(v)) \leq n^{-1} \right \} = C(f).$$
The equality should be clear.
 
Last edited:
  • #56
nuuskur said:
That means any ##u\neq 0## can be written uniquely as ##u=g(z)##.

Why?
 
  • #57
Infrared said:
Why?
Oops, that's badly expressed on my part. I really had in mind that ##u## is identified uniquely with ##g(z)##. But now ##e^u = e^v \Rightarrow e^{g(z)} = e^{g(w)}## might break. Thanks for noticing. The homeomorphism part still works, but it is only usable if ##g## is an inclusion, as @Math_QED said.

I'll stick to my rifle at #32 for now. Yet I'm not convinced continuity is a required assumption. I get a feeling ##e^{g(z)} = z## forces ##g## to be continuous and, in this case, holomorphic.
 
  • #58
nuuskur said:
Yet I'm not convinced continuity is a required assumption. I get a feeling ##e^{g(z)} = z## forces ##g## to be continuous and in this case, holomorphic

It doesn't. ##g(z)=\ln|z|+i\arg(z)## is a counterexample
 
  • Like
Likes benorin and nuuskur
  • #59
Infrared said:
It doesn't. ##g(z)=\ln|z|+i\arg(z)## is a counterexample
Of course *smacks forehead*. I had a thought maybe ##|z|\leq |e^z|##, but that's not true in ##\mathbb C##.
 
  • #60
Math_QED said:
Hmm, I need a little more details here. Explain why the discontinuities arise. I find your argument a bit too handwavy.
We have a function ##\beta (\theta)## for ##0 \le \theta < 2\pi## with ##\cos \beta (\theta) = \cos \theta## and ##\sin \beta (\theta) = \sin \theta##. We know that ##\beta(0) = 2\pi n## for some ##n##. As adding a constant does not affect the continuity of a function, we can, wlog, take ##\beta(0) = 0##.

The technical point outstanding is that if ##\beta## is continuous, then ##\beta(\theta) = \theta##.

In general, we have ##\beta(\theta) = \theta + 2\pi n(\theta)## with ##n(0) = 0##, as above.

The only continuous functions of the form ##2\pi n(\theta)## are constant functions. I'll spare everyone an epsilon-delta proof of that. Therefore, we must have ##g(z) = i\theta = i\arg z## for ## 0 \le \theta < 2\pi##.

The second technical point is that ##\arg(z)## is discontinuous on the unit circle. I'll quote that as a known result.

In any case, that proves that ##g## cannot be continuous.
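For concreteness, here is a small Python check (my own sketch, not part of the proof) that the branch ##g(z)=\ln|z|+i\arg z## with ##\arg z \in [0,2\pi)## is a right inverse of ##\exp## yet jumps by ##2\pi i## across the positive real axis:

```python
import cmath
import math

def g(z):
    # logarithm branch with arg(z) chosen in [0, 2*pi)
    theta = cmath.phase(z) % (2 * math.pi)
    return math.log(abs(z)) + 1j * theta

# g is a right inverse of exp: e^{g(z)} = z
z = 0.3 + 0.7j
assert abs(cmath.exp(g(z)) - z) < 1e-12

# ...but g is discontinuous: points just above and just below the positive
# real axis land roughly 2*pi*i apart
above = g(cmath.exp(1j * 1e-9))    # argument ~ 0
below = g(cmath.exp(-1j * 1e-9))   # argument ~ 2*pi
assert abs(abs(above - below) - 2 * math.pi) < 1e-6
```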
 
  • Like
Likes member 587159
  • #61
fresh_42 said:
11. Let ##a < b < c < d## be real numbers. Sort ##x = ab + cd, y = bc + ad, z = ac + bd## and prove it.

The sorted order is ##y < z < x##, which is proved as follows by first proving 2 pairwise inequalities.

$$
y - z = (bc + ad) - (ac + bd) = c(b-a) + d(a-b) = (c-d)(b-a) < 0
\Rightarrow (y - z) < 0 \Rightarrow y < z
$$
where ##(c-d)(b-a) < 0## follows from the fact that ##c < d## and ##b > a## as per the given conditions.

$$
x - z = (ab + cd) - (ac + bd) = a(b-c) + d(c-b) = (d-a)(c-b) > 0
\Rightarrow (x - z) > 0 \Rightarrow x > z
$$
where ##(d-a)(c-b) > 0## follows from the fact that ##d > a## and ##c > b## as per the given conditions.

Combining the 2 inequalities gives the ordering inequality ##x > z > y##.
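The two inequalities are easy to spot-check numerically; here is a quick Python sketch (random integer quadruples are my own choice of test data):

```python
import random

def ordered_correctly(a, b, c, d):
    # for a < b < c < d: x = ab + cd, y = bc + ad, z = ac + bd; claim y < z < x
    x, y, z = a * b + c * d, b * c + a * d, a * c + b * d
    return y < z < x

# spot-check with random quadruples a < b < c < d
random.seed(0)
for _ in range(1000):
    a, b, c, d = sorted(random.sample(range(-100, 100), 4))
    assert ordered_correctly(a, b, c, d)
```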
 
  • #62
Problem 8.
I first state the Bohr-Mollerup theorem:
Let ##f:(0, \infty ) \rightarrow \mathbb R^+## be a function satisfying:
(i) ##f(x+1)=xf(x)##.
(ii) ##f## is a log-convex function.
(iii) ##f(1)=1##.
Then ##f(x)=\Gamma (x)## on its domain.

##\Gamma## is meromorphic. The identity theorem states: If two meromorphic functions in ##\mathbb C## agree on a set with a limit point in ##\mathbb C##, then they agree everywhere in ##\mathbb C##. In particular, two meromorphic functions that agree on ##(0, \infty )## agree everywhere on ##\mathbb C##.

Thus condition (i) above holds in the complex plane: the two meromorphic functions ##\Gamma (z+1)## and ##z\Gamma (z)## agree on ##(0, \infty )##, which implies ##\Gamma (z+1)=z\Gamma (z)## for all ##z \in \mathbb C##.
Wielandt's theorem (https://www.jstor.org/stable/2975370) states that condition (ii) "##f## is a log-convex function" can be replaced by "##f## is bounded on the strip ##\{ z\in \mathbb C \mid 1 \leq \mathfrak{R}(z) \leq 2\}##".
Because ##F(z)## is bounded on the strip ##\{ z\in \mathbb C \mid 1 \leq \mathfrak{R}(z) \leq 2\}##, it satisfies the Bohr-Mollerup theorem, extended to the complex plane, up to a real constant ##F(1) = a##, and thus ##F(z)=F(1)\Gamma (z)##.
 
  • Like
Likes benorin
  • #63
Fred Wright said:
Problem 8.
I first state the Bohr-Mollerup theorem:
Let ##f:(0, \infty ) \rightarrow \mathbb R^+## be a function satisfying:
(i) ##f(x+1)=xf(x)##.
(ii) ##f## is a log-convex function.
(iii) ##f(1)=1##.
Then ##f(x)=\Gamma (x)## on its domain.

##\Gamma## is meromorphic. The identity theorem states: If two meromorphic functions in ##\mathbb C## agree on a set with a limit point in ##\mathbb C##, then they agree everywhere in ##\mathbb C##. In particular, two meromorphic functions that agree on ##(0, \infty )## agree everywhere on ##\mathbb C##.

Thus condition (i) above holds in the complex plane: the two meromorphic functions ##\Gamma (z+1)## and ##z\Gamma (z)## agree on ##(0, \infty )##, which implies ##\Gamma (z+1)=z\Gamma (z)## for all ##z \in \mathbb C##.
Wielandt's theorem (https://www.jstor.org/stable/2975370) states that condition (ii) "##f## is a log-convex function" can be replaced by "##f## is bounded on the strip ##\{ z\in \mathbb C \mid 1 \leq \mathfrak{R}(z) \leq 2\}##".
Because ##F(z)## is bounded on the strip ##\{ z\in \mathbb C \mid 1 \leq \mathfrak{R}(z) \leq 2\}##, it satisfies the Bohr-Mollerup theorem, extended to the complex plane, up to a real constant ##F(1) = a##, and thus ##F(z)=F(1)\Gamma (z)##.
Well, problem #8 IS Wielandt's theorem. You just shifted the problem to Bohr-Mollerup. This is a bit like proving AC by Zorn.

There is still another - in a way elementary - proof possible. It uses a common theorem of complex analysis.
 
  • Haha
Likes nuuskur
  • #64
nuuskur said:
Of course *smacks forehead*. I had a thought maybe ##|z|\leq |e^z|##, but that's not true in ##\mathbb C##.
Nor is it true in ##\mathbb{R}## (negative numbers)!
 
  • Like
Likes nuuskur
  • #65
SPOILER #9: g would have to restrict to a continuous injection from the unit circle to the line {it: all t in R}, which is impossible. I.e. g is injective since e^g is injective, and the only complex numbers z with e^z lying on the unit circle are of the form z = iy with y real, so g would have to restrict to a continuous injection from the unit circle to the "imaginary real line" of complex numbers of the form iy with y real. Now it is immediate that a continuous map from the unit circle to a copy of the real line cannot be injective, since it has a maximum say M at p and a minimum say m at q, and then both arcs joining p to q on the circle must map onto the same interval [m,M], by the intermediate value theorem, so g is not injective.

With a larger weapon, one could say that since g is locally inverse to exp, it must be smooth if it is continuous, so using the strong smooth Jordan curve theorem, it maps the unit circle isomorphically onto a smooth manifold, which serves as the boundary of its interior in the complex plane. Then integrating the pullback of the closed form dtheta gives a contradiction of the sort suggested earlier using Cauchy's theorem to integrate dz/z. I.e. since the pullback is closed, the integral of d of it over the interior of the manifold is zero, but since the boundary is parametrized by g, which pulls back the form dtheta to itself, the integral is 2π, contradicting Stokes' theorem.

More abstractly, these maps, if they existed, would induce a group homomorphism of fundamental groups, or of 1st homology groups, whose composition would be the identity map Z-->Z, while nonetheless factoring through the zero group Z-->0-->Z, an impossibility. Or as Bott put it long ago, to prove there is no such map, all you need is "a homotopy invariant functor that does not vanish on the circle".

By covering space theory, which is essentially the same argument, since exp is a covering space of C*, via C-->C*, a map C*-->C* can only factor through exp if it induces a map on fundamental groups whose image is zero, not the case for the identity map. The existence of g would also violate unique path lifting, since the parametrization t-->e^2πit of the circle is via a lift t-->2πit, through the exponential covering that sends the two endpoints 0 and 2π to different points of C, while the existence of g would give a lift that sends them both to the same point. This is the essential content of the earlier answer that any lift via g must be discontinuous as a map on the circle.

Oh yes, and now I see that my hint amounted to noticing that such a map g exists, only if one also exists for the restricted diagram: S^1-->iR-->S^1.

Here is another similar argument. If such a g existed, its restriction to the unit circle would factor the injection from the unit circle to C*, through C, and hence would prove that this injection is homotopic to a constant. Then Cauchy's theorem, the homotopy version, would imply that any holomorphic differential in C* would integrate to zero over the unit circle, but that violates the integral of dz/z being 2πi, as pointed out earlier.

By the way this last argument, as well as those using fundamental group and 1st homology group, prove that even if you are allowed to replace exp by any continuous map of your choice, there still is no such map g. The fact that the given map is indeed exp, allows the more elementary first argument above, which only uses the intermediate value theorem.
 
Last edited:
  • #66
@Fred Wright Nice solution for #8! I had thought of the Bohr-Mollerup theorem but didn't know how to prove log convexity.
 
  • #67
PeroK said:
We have a function ##\beta (\theta)## for ##0 \le \theta < 2\pi## with ##\cos \beta (\theta) = \cos \theta## and ##\sin \beta (\theta) = \sin \theta##. We know that ##\beta(0) = 2\pi n## for some ##n##. As adding a constant does not affect the continuity of a function, we can, wlog, take ##\beta(0) = 0##.

The technical point outstanding is that if ##\beta## is continuous, then ##\beta(\theta) = \theta##.

In general, we have ##\beta(\theta) = \theta + 2\pi n(\theta)## with ##n(0) = 0##, as above.

The only continuous functions of the form ##2\pi n(\theta)## are constant functions. I'll spare everyone an epsilon-delta proof of that. Therefore, we must have ##g(z) = i\theta = i\arg z## for ## 0 \le \theta < 2\pi##.

The second technical point is that ##\arg(z)## is discontinuous on the unit circle. I'll quote that as a known result.

In any case, that proves that ##g## cannot be continuous.

This is much more readable to me than your previous post. Your approach has all the right ideas. Especially the line "The only continuous functions of the form ##2\pi n(\theta)## are constant functions." is the key to an elementary approach that does not use black magic. I consider this question solved. Well done!
 
  • Haha
  • Like
Likes PeroK and benorin
  • #68
@Math_QED +1 style point (and a like) for using the phrase "black magic" in reference to mathematics.
 
  • Like
Likes etotheipi
  • #69
nuuskur said:
For every ##x\in\mathbb R## we have ##\mathbb P\{X\leq x\}\in \{0,1\}##. Let ##F## be the distribution function of ##X##. ##F## is right continuous and we have ##\lim _{x\to\infty} F(x) = 1## and ##\lim _{x\to -\infty}F(x) = 0##. This implies there exists ##c\in\mathbb R## such that ##F = I_{[c,\infty]}##. Now ##\mathbb P\{X<c\} = F(c-) = 0##, therefore ##X=c## a.s.

Your first solution + the clarification in post #54 solves the question (I didn't look at the second one though)! Well done! I guess I must come up with less routine exercises since you seem to solve all of them ;)
 
  • #70
@Infrared I wish to know... is #2 really as innocent as it looks? I was thinking IVT might be the only thing I need... on the other hand, it is bivariate (in a way) but the notion of continuity for many variables seems too strong for this problem. FYI you have the right to respond with your best "poker face."
 
  • Haha
Likes nuuskur
  • #71
fresh_42 said:
There is still another - in a way elementary - proof possible. It uses a common theorem of complex analysis.
My spider sense is tingling. ##\Gamma (z) / F(z)## is entire. Liouville's theorem? Or maybe the theorem that allows finite order entire functions without zeros to be written as ##e^{P(z)}## for some polynomial ##P##. (Hadamard?)
Math_QED said:
I guess I must come up with less routine exercises since you seem to solve all of them ;)
It's not like I look at the exercise and come up with a solution. At some point I've done similar things, so it's a matter of reminding myself definitions/results and relevant techniques. As you can also see, I make mistakes, so it's not smooth sailing. Most of what I do is wrong, I have stacks of papers scribbled full of some gibberish and failed attempts at the problems in OP.
 
  • Like
Likes benorin and member 587159
  • #72
For question 15, should it be understood that it simply means that there is no solution where ##p, q, r \in \mathbb{Q}##?
 
  • #73
nuuskur said:
My spider sense is tingling. ##\Gamma (z) / F(z)## is entire. Liouville's theorem? Or maybe the theorem that allows finite order entire functions without zeros to be written as ##e^{P(z)}## for some polynomial ##P##. (Hadamard?)
Liouville, but not for the quotient.
 
  • #74
Mayhem said:
For question 15, should it be understood that it simply means that there is no solution where ##p, q, r \in \mathbb{Q}##?
Yes. There are certainly real solutions, but none with three rational numbers.
Hint: Reduce the question to an integer version of the statement.
 
  • #75
benorin said:
@Math_QED +1 style point (and a like) for using the phrase "black magic" in reference to mathematics.
I think he gets that from me. I've been calling complex analysis black magic, because..well..it is :D
That said, @Math_QED , I gave a black magic approach in #32.
 
Last edited:
  • Love
  • Like
Likes benorin and member 587159
  • #76
nuuskur said:
That said, @Math_QED , I gave a black magic approach in #32.

It got lost in the sea of other posts haha. I will have a look.
 
  • #77
@benorin Well, I can't prove that there isn't a totally elementary solution, but the proof I'm thinking of uses some tools a little more sophisticated than IVT, etc.
 
  • #78
fresh_42 said:
Yes. There are certainly real solutions, but none with three rational numbers.
Hint: Reduce the question to an integer version of the statement.
What I did:

I broke up ##p,q,r## into fractions of integers and expanded the exponential and added the fractions together. This would mean that the denominator and numerator of said fraction must be integers, but I don't know how to prove, or in this case, disprove that.
 
  • #79
Mayhem said:
What I did:

I broke up ##p,q,r## into fractions of integers and expanded the exponential and added the fractions together. This would mean that the denominator and numerator of said fraction must be integers, but I don't know how to prove, or in this case, disprove that.
It is a bit tricky. We can multiply the equation by its common denominator and get four new variables instead, but all integers. Then we can assume that the LHS and the RHS do not share a common factor. With this preparation we examine the equation modulo a suited number ##n##, i.e. we consider the remainders which occur by division by ##n##.

E.g. if ##3\cdot 6 + 11 = 29## then division by ##4## yields the equation ##3 \cdot 2 + 3 = 9 = 1 \, \operatorname{mod}4##.
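In Python, the same reduction looks like this (a trivial sketch of the example above):

```python
# 3*6 + 11 = 29; reducing every term mod 4 preserves the congruence
lhs = 3 * 6 + 11
assert lhs == 29
reduced = (3 % 4) * (6 % 4) + (11 % 4)   # 3*2 + 3 = 9
assert reduced == 9
assert reduced % 4 == lhs % 4 == 1       # both sides are 1 mod 4
```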
 
  • Like
Likes Mayhem
  • #80
fresh_42 said:
12. Prove ##{\overline{CP}\,}^{2} = \overline{AP} . \overline{BP}##

[Image: sekanten-tangentensatz-png.png — original diagram from the problem statement]

[Image: angle_q12.png — redrawn diagram with triangles and angles highlighted]


Pasted above is a redrawing of the original diagram with some triangles and angles highlighted for use in the proof. The proof is based on similarity of triangles ##\triangle{CAP}## and ##\triangle{BCP}##. Let ##\angle{BMA}=\alpha## and ##\angle{BMC}=\beta##. Note that triangles ##\triangle{BMA}##, ##\triangle{BMC}## and ##\triangle{CMA}## are all isosceles since ##\overline{MA} = \overline{MB} = \overline{MC}## as these line segments equal the radius of the circle. Since angles opposite equal sides of a triangle must be equal, it follows that
  • ##\angle{BAM} = \angle{ABM} = 90^{\circ} - \dfrac {\alpha} {2}##
  • ##\angle{CAM} = \angle{ACM} = 90^{\circ} - \dfrac {\alpha + \beta} {2}##
  • ##\angle{BCM} = \angle{CBM} = 90^{\circ} - \dfrac {\beta} {2}##
As per the original diagram, the line segment ##PC## is tangential to the circle. Therefore, ##\angle{PCM} = 90^{\circ} \Rightarrow \angle{BCP} = \angle{PCM} - \angle{BCM} = \dfrac {\beta} {2}##.

And ##\angle{CAP} = \angle{BAC} = \angle{BAM} - \angle{CAM} = \dfrac {\beta} {2}##.

Comparing the triangles ##\triangle{CAP}## and ##\triangle{BCP}##, we find that they must be similar (##\triangle{CAP} \sim \triangle{BCP}##) since they both have a common angle ##\angle{BPC}## and they also have another pair of corresponding angles, namely ##\angle{BCP} = \angle{CAP} = \dfrac {\beta} {2}##.

From the similarity of these triangles, it follows that all their corresponding sides must be of same proportion. Therefore,
$$\dfrac {\overline{BP}} {\overline{CP}} = \dfrac {\overline{CP}} {\overline{AP}}
\Rightarrow \overline{CP} . \overline{CP} = {\overline{CP}\,}^{2} = \overline{AP} . \overline{BP}
$$
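The relation can also be verified numerically. The sketch below (my own construction, not part of the proof) takes the unit circle centered at ##M## at the origin, a random external point ##P##, a random secant through ##P## meeting the circle at ##A## and ##B##, and uses ##{\overline{CP}\,}^2 = {\overline{PM}\,}^2 - r^2## for the tangent length:

```python
import math
import random

def tangent_secant_holds(px, py, phi):
    # secant from P = (px, py) in direction phi; solve |P + t*u| = 1 for t
    ux, uy = math.cos(phi), math.sin(phi)
    b = px * ux + py * uy
    c = px * px + py * py - 1.0          # power of the point P
    disc = b * b - c
    if disc <= 0:
        return None                       # this direction misses the circle
    t1 = -b - math.sqrt(disc)
    t2 = -b + math.sqrt(disc)
    ap, bp = abs(t1), abs(t2)            # distances AP and BP along the line
    cp2 = c                               # CP^2 = |PM|^2 - r^2 with r = 1
    return math.isclose(ap * bp, cp2, rel_tol=1e-9)

random.seed(1)
checked = 0
while checked < 100:
    px, py = random.uniform(1.5, 5.0), random.uniform(1.5, 5.0)
    result = tangent_secant_holds(px, py, random.uniform(0.0, 2.0 * math.pi))
    if result is not None:
        assert result
        checked += 1
```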
 
  • #81
Math_QED said:
It got lost in the sea of other posts haha. I will have a look.
#55 for P7 might be found in the same waters.
 
  • Wow
Likes member 587159
  • #82
fresh_42 said:
Liouville, but not for the quotient.
Ok I was playing around with it a bit. Note that ##G(z) := F(z) - F(1)\Gamma (z)## is holomorphic on ##H(0)##. Since ##\Gamma## also satisfies the functional equation, we have ##zG(z) = G(z+1)##. Note that ##G(1) = 0##.
Fred Wright said:
https://www.jstor.org/stable/2975370
By the theorem in the link we have an entire extension of ##G## to ##\mathbb C##, call it ##g##. By assumption ##g(z)## is bounded for ##1\leq \mathrm{Re}(z)\leq 2##. This implies ##g## is bounded for ##0\leq \mathrm{Re}(z)<1##. We have ##1\leq \mathrm{Re}(z+1) < 2##, so by assumption ##g(z) = \frac{g(z+1)}{z}## is bounded.

Put ##h(z) := g(z)g(1-z)##; then it's clear ##h## is bounded for ##0\leq\mathrm{Re}(z)< 1##. This implies it's bounded in ##\mathbb C##. Indeed, by the functional equation we have ##-zg(-z) = g(1-z)##, which implies
$$h(z+1) = g(z+1)g(-z) = -zg(z) \cdot \frac{g(1-z)}{z} = -h(z).$$
So we can start in the critical strip and remain bounded by induction. Thus ##h## is entire and bounded. By Liouville's theorem, ##h## is constant. Since ##h(1) = G(1)g(0) = 0## we must have ##g=0##, hence ##F(z) \equiv F(1)\Gamma (z)##.
 
Last edited:
  • Like
Likes fresh_42
  • #83
HINT/IDEA: #2:

The intuition to use the IVT seems correct, but since there are two component functions, f,g, one seems to need the 2 dimensional version of this theorem, i.e. the winding number theorem: if a continuous map from (the boundary of) a polygon to the plane winds around a point a non zero number of times, then any continuous extension of that map to the interior of the polygon hits the point.

For the polygon, take the triangle bounded by (0,0), (2,0), (2,2) in the plane, and consider the map of that polygon to the plane defined by sending (a,b) to (f(a)-f(b), g(a)-g(b)). Try to show the boundary of the polygon (either meets (1,1), or) winds around the point (1,1) a non zero number of times. Remember that winding number is computed by adding up small angle changes, and use what is given. In fact all you need to know about winding numbers is it measures (1/2π times) the total angle change (in radians) swept out by an arrow based at the given point, and with head at a point on the path, as the point on the path runs all the way around the path.
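To make the winding number idea concrete, here is a small Python sketch of the angle-summing computation described above (the sample loop, a circle of radius 2, and the grid size 400 are arbitrary choices of mine):

```python
import math

def winding_number(path, p):
    # sum the signed angles between successive vectors from p to the path;
    # total angle / 2*pi is the winding number of the closed path about p
    total = 0.0
    n = len(path)
    for i in range(n):
        x0, y0 = path[i][0] - p[0], path[i][1] - p[1]
        x1, y1 = path[(i + 1) % n][0] - p[0], path[(i + 1) % n][1] - p[1]
        total += math.atan2(x0 * y1 - x1 * y0, x0 * x1 + y0 * y1)
    return round(total / (2.0 * math.pi))

# a counterclockwise circle of radius 2 winds once around an interior point
# such as (1, 1), and zero times around an exterior point such as (3, 0)
circle = [(2.0 * math.cos(2.0 * math.pi * k / 400),
           2.0 * math.sin(2.0 * math.pi * k / 400)) for k in range(400)]
assert winding_number(circle, (1.0, 1.0)) == 1
assert winding_number(circle, (3.0, 0.0)) == 0
```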
 
Last edited:
  • Wow
Likes nuuskur
  • #85
Outstanding! I did notice at 21:12 what I think is an incorrect remark, that we supposedly don't know whether there are zeroes inside a square with winding number zero on the boundary. While true in general, it seems to be false in the example he was doing there, of a complex polynomial, since all winding numbers are non negative for such holomorphic maps, i.e. holomorphic maps are orientation preserving. In fact since winding numbers are additive over adjacent squares, if a square contains a zero, necessarily isolated, that zero will contribute a positive amount to the whole winding number, and there cannot be any negative winding numbers in that square to cancel it out. His mind no doubt reverted to the general principle as it applies to arbitrary continuous or smooth maps. I am pretty sure about this. So for a complex polynomial, I believe the winding number counts exactly the number of zeroes inside that square, each counted with its algebraic multiplicity. That multiplicity of course can also be defined as the winding number over a small enough square centered at that zero.

Forgive me for pointing out the one tiny error in an excellent presentation. Earlier I twice thought I had spotted errors and I turned out to be wrong both times. First he made a false conjecture, but then later he taught us that it is indeed false, and why. Then I thought he was wrong to say winding number 3, instead of minus 3, when the image path wound 3 times around zero clockwise, when mathematicians always prefer counterclockwise as positive. Then I realized he had also traversed the domain path clockwise, and what matters is that the image traverses the path in the same direction as the domain path, so both clockwise is the same as both counterclockwise, and +3 is indeed correct. On the last remark above however I believe he slipped up, although with good intentions of warning us of the situation in the general, not necessarily orientation preserving, situation. But complex holomorphic functions are especially nice.

That reminds me however that even there, there are situations where numbers that should ordinarily be positive, can still be negative. Namely in complex geometry, intersection numbers of holomorphic cycles are always positive, for the same reasons, except! in the oddball case where you are intersecting a cycle with itself! Then you are supposed to move the cycle and then intersect it. Only some cycles, "rigid" ones, do not move in the holomorphic category, they only move smoothly, and then you can get a negative self intersection number of a "rigid" holomorphic cycle with itself. E.g. if you "blow up" a point of complex P^2 to a line, then that blownup line, meets itself -1 times ! I.e. the corresponding line bundle does not have a section which is holomorphic, only one which is meromorphic, along that line. I.e. just as z winds once around zero, 1/z winds once around infinity, which is minus once around zero.

A basic result then in surface theory is you can recognize a line which can be blown down, because it has self intersection -1. A famous fact is that any smooth cubic surface in P^3 is isomorphic to the complex projective plane after being blown up at 6 general points. I.e. on any smooth projective cubic surface, there exist 6 disjoint lines (the choice of them is not unique) which all have self intersection -1, and such that, blowing them down results in a surface isomorphic to the projective plane! [In relation to another thread on stem reading, this sort of thing is why I love algebraic geometry.]
 
Last edited:
  • #86
@mathwonk Very nice argument! One way to conclude that your loop has nonzero winding number is the following fact: If ##f:S^1\to S^1## is odd, then it has odd degree (and the same is true if ##S^1## is replaced by ##S^n##). Actually your geometry is good, and maybe this fact is just how it's usually formalized.

And you showed the stronger result that there are ##a,b## such that ##(f(a)-f(b),g(a)-g(b))## is actually ##(1,1)##, instead of just being a lattice point. In general, the problem statement will hold if and only if the prescribed values for ##f(2),g(2)## share a common factor (and of course there's no reason to make the functions defined on ##[0,2]##, I just did this for aesthetics).
 
  • #87
nuuskur said:
In other words we have ##\{x\}= \bigcap _{n\in\mathbb N} \{U\subseteq X \mid x\in U,\ U\text{ is open and }f(U) \subseteq B(f(x), n^{-1}) \}##.

I think there is a problem with your intersection, at least semantically. You have an intersection ##\bigcap_{n \in \Bbb{N}} A_n## with ##A_n \subseteq \mathcal{P}(X)##. Thus semantically your intersection is a subset of ##\mathcal{P}(X)##. However, the left hand side is ##\{x\}## which is a subset of ##X## (and not of ##\mathcal{P}(X))##. You might want to fix/clarify this.

Next, explain why the equality you then obtain is true (write out the inclusion ##\supseteq##).

we have the ##G_\delta## set
$$\bigcap _{n\in\mathbb N} \bigcup \left\{ U\subseteq X \mid U\text{ is open, }\ \sup_{u,v\in U} d_Y(f(u),f(v)) \leq n^{-1} \right \} = C(f).$$

Explain this equality
 
  • #88
Math_QED said:
I think there is a problem with your intersection, at least semantically. You have an intersection ##\bigcap_{n \in \Bbb{N}} A_n## with ##A_n \subseteq \mathcal{P}(X)##. Thus semantically your intersection is a subset of ##\mathcal{P}(X)##. However, the left hand side is ##\{x\}## which is a subset of ##X## (and not of ##\mathcal{P}(X))##. You might want to fix/clarify this.
I messed up there; even if the semantics is fixed, the logic breaks, because I was envisioning something that behaves too nicely. For instance, take a constant map. Fixing open sets ##x\in U_n## such that ##f(U_n) \subseteq B(f(x),n^{-1})## does not imply the ##U_n## become smaller around ##x##, contrary to what I had in mind. Disregard that part entirely.

Next, explain why the equality you then obtain is true (write out the inclusion ##\supseteq##).
The equality in question is sufficient to solve the problem. Suppose ##f## is continuous at ##x##. Fix ##n\in\mathbb N##. Take the open set ##B(f(x),(2n)^{-1})##; it also satisfies the supremum condition (the diameter is twice the radius). By continuity at ##x##, there exists an open set satisfying ##x\in U## and ##f(U) \subseteq B(f(x), (2n)^{-1})##.

The converse reads exactly like the definition of continuity.
 
Last edited:
  • #89
nuuskur said:
I messed up there; even if the semantics is fixed, the logic breaks, because I was envisioning something that behaves too nicely. For instance, take a constant map. Fixing open sets ##x\in U_n## such that ##f(U_n) \subseteq B(f(x),n^{-1})## does not imply the ##U_n## become smaller around ##x##, contrary to what I had in mind. Disregard that part entirely. The equality in question is sufficient to solve the problem. Suppose ##f## is continuous at ##x##. Fix ##n\in\mathbb N##. Take the open set ##B(f(x),(2n)^{-1})##; it also satisfies the supremum condition (the diameter is twice the radius). By continuity at ##x##, there exists an open set satisfying ##x\in U## and ##f(U) \subseteq B(f(x), (2n)^{-1})##.

The converse reads exactly like the definition of continuity.

Indeed, for this first set equality you needed something like injectivity, which is not given.

Can you edit or make a new attempt at the question so everything reads smoothly? Otherwise everything is scattered across multiple posts, making it very hard to read for others (and for me when I go through it again).
 
  • Like
Likes nuuskur
  • #90
Math_QED said:
Can you edit or make a new attempt at the question so everything reads smoothly?
Let ##C(f) := \{x\in X \mid f\text{ is continuous at }x\}## (may also be empty). It is sufficient to show ##C(f)## is a Borel set. Then ##C(f)^c = D(f)## is also a Borel set. We show ##C(f)## is a countable intersection of open sets. By definition, ##f## is continuous at ##x## if
$$\forall n\in\mathbb N,\quad \exists\delta >0,\quad f(B(x,\delta)) \subseteq B(f(x), n^{-1}).$$
Note that arbitrary unions of open sets are open. We have the following equality:
$$C(f) = \bigcap _{n\in\mathbb N} \bigcup\left \{U\subseteq X \mid U\text{ is open and } \sup _{u,v\in U} d_Y(f(u),f(v)) \leq n^{-1} \right \}.$$
Indeed, let ##f## be continuous at ##x##. Take the open set ##B(f(x), (2n)^{-1})\subseteq Y##. By continuity at ##x##, there exists an open set ##x\in U\subseteq X## satisfying ##f(U) \subseteq B(f(x), (2n)^{-1})##. We then have ##d_Y(f(u),f(v)) \leq n^{-1}## for all ##u,v\in U##. The converse inclusion reads exactly as the definition of continuity at a point.
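A numeric illustration of the sets in this intersection, for a hypothetical step function (my own choice of example, not part of the proof): the sampled oscillation over shrinking neighborhoods tends to ##0## exactly at continuity points.

```python
def f(x):
    # a step function: discontinuous exactly at 0
    return 0.0 if x < 0 else 1.0

def oscillation(x, delta):
    # sampled version of sup_{u,v in (x-delta, x+delta)} |f(u) - f(v)|
    pts = [x + delta * (k / 50.0 - 1.0) for k in range(101)]
    vals = [f(p) for p in pts]
    return max(vals) - min(vals)

# oscillation shrinks to 0 at a continuity point, but stays 1 at x = 0
assert min(oscillation(0.5, 10.0 ** -k) for k in range(1, 8)) == 0.0
assert min(oscillation(0.0, 10.0 ** -k) for k in range(1, 8)) == 1.0
```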
 
Last edited:
  • Like
Likes member 587159
  • #91
nuuskur said:
Let ##C(f) := \{x\in X \mid f\text{ is continuous at }x\}## (may also be empty). It is sufficient to show ##C(f)## is a Borel set. Then ##C(f)^c = D(f)## is also a Borel set. We show ##C(f)## is a countable intersection of open sets. By definition, ##f## is continuous at ##x## if
$$\forall n\in\mathbb N,\quad \exists\delta >0,\quad f(B(x,\delta)) \subseteq B(f(x), n^{-1}).$$
Note that arbitrary unions of open sets are open. We have the following equality:
$$C(f) = \bigcap _{n\in\mathbb N} \bigcup\left \{U\subseteq X \mid U\text{ is open and } \sup _{u,v\in U} d_Y(f(u),f(v)) \leq n^{-1} \right \}.$$
Indeed, let ##f## be continuous at ##x##. Take the open set ##B(f(x), (2n)^{-1})\subseteq Y##. By continuity at ##x##, there exists an open set ##x\in U\subseteq X## satisfying ##f(U) \subseteq B(f(x), (2n)^{-1})##. We then have ##d_Y(f(u),f(v)) \leq n^{-1}## for all ##u,v\in U##. The converse inclusion reads exactly as the definition of continuity at a point.

Close enough. The reverse inclusion is not quite the definition of continuity, but you need a routine triangle inequality argument to get there, like you did in the other inclusion. I'm sure you could have filled in the little gap. Well done! I think your solution is the shortest possible.
 
  • #92
Suggestion for #7:

Consider the induced map ##f\times f : X\times X \to Y\times Y##. Claim: ##f## is continuous at ##p## iff ##(p,p)## is an interior point of the inverse image of every neighborhood of the diagonal. Hence, for a shrinking countable sequence of such neighborhoods, the set of points of continuity is the intersection of the (diagonal of ##X\times X## with the) interiors of those inverse images, hence a ##G_\delta## set. Note that ##X## is homeomorphic to the diagonal of ##X\times X##.
 
  • Informative
  • Like
Likes nuuskur and member 587159
  • #93
mathwonk said:
Suggestion for #7:

Consider the induced map ##f\times f : X\times X \to Y\times Y##. Claim: ##f## is continuous at ##p## iff ##(p,p)## is an interior point of the inverse image of every neighborhood of the diagonal. Hence, for a shrinking countable sequence of such neighborhoods, the set of points of continuity is the intersection of the (diagonal of ##X\times X## with the) interiors of those inverse images, hence a ##G_\delta## set. Note that ##X## is homeomorphic to the diagonal of ##X\times X##.

Ah nice, a more topological approach. I haven't checked the details of your post, but it seems to suggest that the statement holds more generally than in metric spaces. I would be interested to know what the minimal assumptions are for this to hold. I guess a Hausdorff condition and some countability assumption might be necessary.
 
  • #94
Math_QED said:
I would be interested to know what the minimal assumptions are for this to hold. I guess a Hausdorff condition and some countability assumption might be necessary.
The argument I gave doesn't really make use of the fact that ##X## is a metric space. We can relax it to ##f## being a map from an arbitrary topological space ##X## to a metric space ##Y##. Not sure right now how far ##Y## can be relaxed.
 
  • #95
nuuskur said:
The argument I gave doesn't really make use of the fact that ##X## is a metric space. We can relax it to ##f## being a map from an arbitrary topological space ##X## to a metric space ##Y##. Not sure right now how far ##Y## can be relaxed.

Yes, I was talking about ##Y##. But both approaches given use a shrinking family of neighborhoods, which seems to suggest at least that we are dealing with first countable spaces?
 
  • Like
Likes nuuskur
  • #96
Re: #7: apologies, perhaps my claim is false. When I have more time I will try to tweak it.

Edit later: sorry, it seems to be hard for me to write out the details, but now I again think the claim is true; the argument I have, however, uses the triangle inequality.

I.e. given e > 0, define U(e) as the subset of X such that p belongs to U(e) if and only if there is some d-nbhd of (p,p) in XxX (use the sum metric, or max metric), for some d > 0, that maps into the e-nbhd of the diagonal of YxY. This U(e) seems to be open, and it seems that if p is in U(e), with associated d, then any q closer than d to p must satisfy |f(p)-f(q)| < e. (With the sum metric, the point (p,q) then lies in the d-nbhd of (p,p), and thus (f(p),f(q)) lies in the e-nbhd of some point of the form (f(x),f(x)). Then the triangle inequality implies, using the sum metric on YxY, that f(p) is within e of f(q).)

Then if p lies in U(1/n) for every n, it seems that f is continuous at p, and vice versa, I think. I apologize for not vetting this more carefully. But this argument, if correct, seems to validate the original, more topological-sounding claim, at least in the case of metric spaces.
 
Last edited:
  • #97
Ideas for problem #5:

Make the problem simpler: take B = identity map, so b = 1, and take the constant a = 1. Then we want to prove that the map f taking x to x^2 + x hits every point C with |C| ≤ 1/4, where |x^2| ≤ |x|^2.

Start with the case of the Banach space R, the real numbers, and see by calculus that f takes the interval [-1/2, 1/2] onto the interval [-1/4, 3/4]. One only needs to evaluate the map at the endpoints -1/2 and 1/2 and use the intermediate value theorem.

Now take the case of the Banach space R^2. This time we can of course use the winding number technique. Look at the image of the circle |x| = 1/2. The map x-->x^2 takes each point of the circle |x| = 1/2 to a point of norm ≤ 1/4. Hence the sum f(x) = x + x^2 takes each point p of this circle to a point of the disc of radius 1/4 centered at p.

Hence the line segment joining p to f(p) lies in the annulus between the circles of radii 1/4 and 3/4. Hence the homotopy t-->x + t.x^2, with 0 ≤ t ≤ 1, is a homotopy between the identity map and the map f, every point of which lies in the annulus mentioned above. Consequently the map f acting on the circle |x| = 1/2 has the same winding number about each point C with |C| < 1/4 as the identity map, namely one. Hence the map f surjects the disc |x| ≤ 1/2 onto (a set containing) the open disc |C| < 1/4, and since the continuous image of the closed disc is compact, hence closed, onto the disc |C| ≤ 1/4 as well.
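Identifying ##\mathbb R^2 \cong \mathbb C##, the winding number claim can be checked numerically; a sketch only (the sampling resolution and the target points are my own choices, not part of the argument): the image of the circle |x| = 1/2 under f(x) = x + x^2 winds exactly once about each target C with |C| ≤ 1/4 off the curve, so each such C is attained.

```python
import cmath
import math

def winding_number(curve, c):
    """Number of turns a closed sampled curve makes about the point c,
    computed by accumulating the phase increments of (w - c)."""
    total = 0.0
    for w1, w2 in zip(curve, curve[1:] + curve[:1]):
        total += cmath.phase((w2 - c) / (w1 - c))  # increment taken in (-pi, pi]
    return round(total / (2 * math.pi))

N = 4000
circle = [0.5 * cmath.exp(2j * math.pi * k / N) for k in range(N)]
image = [z + z * z for z in circle]  # f(x) = x + x^2 on the circle |x| = 1/2

for c in (0, 0.25, -0.2, 0.2 + 0.1j):
    print(c, winding_number(image, c))  # winds once about each such target
print(2.0, winding_number(image, 2.0))  # ...but not about points far outside
```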

Using homotopies of spheres, this argument should work in all finite dimensions. I.e. the map f surjects the ball |x| ≤ 1/2, onto the disc |x| ≤ 1/4, (and maybe more).

Now to progress to infinite dimensional Banach spaces, it seems we should use some argument for the inverse function theorem, or open mapping theorem. I.e. look at the proof of the inverse function theorem for Banach spaces and see what ball is guaranteed to be in the image.

Suggestion: Use the proof of the "surjective mapping theorem", attributed to Graves, in Lang, Analysis II, p. 193. I.e. just as slightly perturbing the identity map leaves us with a map that is still locally a homeomorphism, so slightly perturbing a surjective continuous linear [hence open] map leaves us with a map that is still locally [open and?] surjective.

Graves' theorem: http://www.heldermann-verlag.de/jca/jca03/jca03003.pdf

(iii) There exists a constant ##M## such that for every ##y \in Y## there exists ##x \in X## with ##y = A(x)## and ##\| x \| \le M \| y \|.##

Let us denote by ##B_a(x)## the closed ball centered at ##x## with radius ##a##. Up to some minor adjustments in notation, the original formulation and proof of the Graves theorem are as follows.

Theorem 1.2 (Graves [12]). Let ##X, Y## be Banach spaces and let ##f## be a continuous function from ##X## to ##Y## defined in ##B_\varepsilon(0)## for some ##\varepsilon > 0## with ##f(0) = 0.## Let ##A## be a continuous and linear operator from ##X## onto ##Y## and let ##M## be the corresponding constant from Theorem 1.1 (iii). Suppose that there exists a constant ##\delta < M^{-1}## such that
$$\| f(x_1) - f(x_2) - A(x_1 - x_2) \| \le \delta \| x_1 - x_2 \| \qquad (1)$$
whenever ##x_1, x_2 \in B_\varepsilon(0).## Then the equation ##y = f(x)## has a solution ##x \in B_\varepsilon(0)## whenever ##\| y \| \le c\varepsilon,## where ##c = M^{-1} - \delta.##
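A sketch of the successive-approximation scheme that drives such proofs, specialized to the model case ##f(x) = x + x^2## with ##A = \mathrm{id}## (my own toy instance, not the paper's): the step ##x_{k+1} = x_k + A^{-1}(y - f(x_k))## becomes ##x_{k+1} = y - x_k^2##, which keeps the iterates in the ball ##|x| \le 1/2## whenever ##|y| \le 1/4##.

```python
def solve_perturbed_identity(y, tol=1e-12, max_iter=100000):
    """Solve y = x + x**2 by the iteration x <- y - x**2.

    For |y| <= 1/4 the map g(x) = y - x**2 sends the ball |x| <= 1/2
    into itself (|y - x**2| <= 1/4 + 1/4); this invariance is the heart
    of the fixed-point step in surjective mapping theorems.
    """
    x = 0.0
    for _ in range(max_iter):
        x_new = y - x * x
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x  # the boundary case y = -1/4 converges only slowly

for y in (0.25, 0.1, -0.2, -0.25):
    x = solve_perturbed_identity(y)
    print(y, x, abs(x + x * x - y))  # the residual should be tiny
```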
 
Last edited:
  • #98
My take on problem 15:
I first prove that the equation ## 7n^2 = a^2 + b^2 + c^2 ## has no integer solutions. If we can show this, we will have proved that there are no integer solutions to ## 7(n_1 n_2 n_3)^2 = (m_1 n_2 n_3 )^2 + (m_2 n_3 n_1 )^2 + (m_3 n_1 n_2 )^2 ## and thus no three rational numbers ## p = m_1/n_1 , q = m_2/n_2, r = m_3/n_3 ## whose squares sum to 7.

We write the equation as ## (2n)^2 = (a - n)(a+n) + (b-n)(b+n) + (c-n)(c+n) ##, and substituting ## x = a - n, y = b - n, z = c - n ## we get: $$ (2n)^2 = x(x+2n) + y(y+2n) + z(z + 2n) $$ (1)
Now suppose that ## x, y, z ## are all odd. Then each term of the form ## x(x+2n) ## is odd too, as ## x ## and ## x + 2n ## are both odd. The sum of any three odd numbers is odd, but the left hand side ## (2n)^2 ## is even! Therefore, ## x, y, z ## cannot all be odd.

Then suppose any two of ## x, y, z ## are odd; without loss of generality, say ## y, z ##. Since x is even ## 4 | x(x + 2n) ## and of course ## 4| (2n)^2 ## , so that ## 4 | y(y+2n) + z(z + 2n) = y^2 + z^2 + 2n( y + z ) ##
## y^2 ## and ## z^2 ## are each squares of odd integers, so that ## y^2 + z^2 \equiv (1+1) ## mod ## 4 \equiv 2 ## mod ## 4 ##.*
This means that ## 2n( y + z ) \equiv 2 ## mod ## 4 ## but how can this be?
## 2|(y+z) ## as ## y,z ## are odd, and that means ## 2n(y + z) \equiv 0 ## mod ## 4 ##.
We realize that no two of x,y,z can be odd.

What if we have exactly one odd number (say ## z ##)? Then ## z(z + 2n) = (2n)^2 - x(x+2n) - y(y+2n) ##. A contradiction, as the LHS is odd and the RHS is even.

Then all of ## x,y,z ## must be even. Let them be so through the substitutions ## x = 2\alpha, y = 2\beta, z = 2\gamma ##, then from (1):
$$ n^2 = \alpha(\alpha + n) + \beta(\beta + n) + \gamma( \gamma + n) $$
which implies $$ ( \alpha + \beta + \gamma )( \alpha + \beta + \gamma + n ) = 2( \alpha \beta + \beta \gamma + \gamma \alpha ) + n^2 $$
## n ## cannot be odd here, for then the RHS would be odd, while the LHS is always even: if ## \alpha + \beta + \gamma ## is even the first factor is even, and if it is odd then ## \alpha + \beta + \gamma + n ## is even.
Therefore, n must be even. So let ## n = 2m ##
$$ (2m)^2 = \alpha(\alpha + 2m) + \beta(\beta + 2m) + \gamma( \gamma + 2m) $$.
This is exactly the same as equation (1) !
So if we go through the same argument as before, ## m ## would also have to be even. But then it is the same situation again, and ## m/2 ## would also have to be even... ad infinitum! The only integer divisible by every power of ## 2 ## is ## 0 ##, so we are forced to ## n = 0 ## (for which there is obviously no solution). Therefore, no such nonzero integers exist.

* I used the fact that squares of odd integers are always congruent to 1 mod 4. ## (2k + 1)^2 = 4( k^2 + k ) + 1 ##
In all seriousness, if this proof works, it is the strangest, most hand wavy proof I've ever done. I am not aware whether arguments involving infinity are commonly used in such problems, let alone whether they are rigorous enough.
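The integer claim ## 7n^2 \ne a^2 + b^2 + c^2 ## (for ## n \ne 0 ##) can at least be sanity-checked by brute force. A quick sketch (the search bound is my arbitrary choice, and of course this is no substitute for the descent):

```python
import math

def is_sum_of_three_squares(N):
    """True iff N = a^2 + b^2 + c^2 for non-negative integers a <= b <= c."""
    a = 0
    while 3 * a * a <= N:
        b = a
        while a * a + 2 * b * b <= N:  # ensures b <= c below
            c2 = N - a * a - b * b
            c = math.isqrt(c2)
            if c * c == c2:
                return True
            b += 1
        a += 1
    return False

# Most small numbers are sums of three squares, but 7 n^2 never is.
print([m for m in range(10) if not is_sum_of_three_squares(m)])  # [7]
print(all(not is_sum_of_three_squares(7 * n * n) for n in range(1, 100)))
```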
 
  • Like
Likes nuuskur and fresh_42
  • #99
ItsukaKitto said:
My take on problem 15:

I first prove that the equation ## 7n^2 = a^2 + b^2 + c^2 ## has no integer solutions. If we can show this, we will have proved that there are no integer solutions to ## 7(n_1 n_2 n_3)^2 = (m_1 n_2 n_3 )^2 + (m_2 n_3 n_1 )^2 + (m_3 n_1 n_2 )^2 ## and thus no three rational numbers ## p = m_1/n_1 , q = m_2/n_2, r = m_3/n_3 ## whose squares sum to 7.

We write the equation as ## (2n)^2 = (a - n)(a+n) + (b-n)(b+n) + (c-n)(c+n) ##, and substituting ## x = a - n, y = b - n, z = c - n ## we get: $$ (2n)^2 = x(x+2n) + y(y+2n) + z(z + 2n) $$ (1)
Now suppose that ## x, y, z ## are all odd. Then each term of the form ## x(x+2n) ## is odd too, as ## x ## and ## x + 2n ## are both odd. The sum of any three odd numbers is odd, but the left hand side ## (2n)^2 ## is even! Therefore, ## x, y, z ## cannot all be odd.

Then suppose any two of ## x, y, z ## are odd; without loss of generality, say ## y, z ##. Since x is even ## 4 | x(x + 2n) ## and of course ## 4| (2n)^2 ## , so that ## 4 | y(y+2n) + z(z + 2n) = y^2 + z^2 + 2n( y + z ) ##
## y^2 ## and ## z^2 ## are each squares of odd integers, so that ## y^2 + z^2 \equiv (1+1) ## mod ## 4 \equiv 2 ## mod ## 4 ##.*
This means that ## 2n( y + z ) \equiv 2 ## mod ## 4 ## but how can this be?
## 2|(y+z) ## as ## y,z ## are odd, and that means ## 2n(y + z) \equiv 0 ## mod ## 4 ##.
We realize that no two of x,y,z can be odd.

What if we have exactly one odd number (say ## z ##)? Then ## z(z + 2n) = (2n)^2 - x(x+2n) - y(y+2n) ##. A contradiction, as the LHS is odd and the RHS is even.

Then all of ## x,y,z ## must be even. Let them be so through the substitutions ## x = 2\alpha, y = 2\beta, z = 2\gamma ##, then from (1):
$$ n^2 = \alpha(\alpha + n) + \beta(\beta + n) + \gamma( \gamma + n) $$
which implies $$ ( \alpha + \beta + \gamma )( \alpha + \beta + \gamma + n ) = 2( \alpha \beta + \beta \gamma + \gamma \alpha ) + n^2 $$
## n ## cannot be odd here, for then the RHS would be odd, while the LHS is always even: if ## \alpha + \beta + \gamma ## is even the first factor is even, and if it is odd then ## \alpha + \beta + \gamma + n ## is even.
Therefore, n must be even. So let ## n = 2m ##
$$ (2m)^2 = \alpha(\alpha + 2m) + \beta(\beta + 2m) + \gamma( \gamma + 2m) $$.
This is exactly the same as equation (1) !
So if we go through the same argument as before, ## m ## would also have to be even. But then it is the same situation again, and ## m/2 ## would also have to be even... ad infinitum! The only integer divisible by every power of ## 2 ## is ## 0 ##, so we are forced to ## n = 0 ## (for which there is obviously no solution). Therefore, no such nonzero integers exist.

* I used the fact that squares of odd integers are always congruent to 1 mod 4. ## (2k + 1)^2 = 4( k^2 + k ) + 1 ##

In all seriousness, if this proof works, it is the strangest, most hand wavy proof I've ever done. I am not aware whether arguments involving infinity are commonly used in such problems, let alone whether they are rigorous enough.

Well done!

The argument "and so on ad infinitum" is usually shortened by the following trick, an indirect proof:
Assume ##n## to be the smallest positive solution; such a minimal ##n## exists because the positive integers are well-ordered. The descent then produces a strictly smaller solution ##m##, a contradiction. Of course, you first have to rule out the all-zero solution.

What you have done is basically divide the equation by ##4## and distinguish the odd and even cases of the rest. This could have been shortened by considering the original equation (the integer version with coprime LHS and RHS) modulo ##8## and examining the three possible remainders of squares, ##0, 1, 4.## This is the same as what you have done, only with fewer letters.
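The mod ##8## computation can be spelled out in a few lines (an illustration of the residue argument only; the descent step for even ##n## is still needed):

```python
# Squares modulo 8 take only the residues 0, 1, 4.
squares_mod8 = sorted({(x * x) % 8 for x in range(8)})
print(squares_mod8)  # [0, 1, 4]

# No three such residues can sum to 7 modulo 8 ...
sums_mod8 = sorted({(a + b + c) % 8 for a in squares_mod8
                    for b in squares_mod8 for c in squares_mod8})
print(sums_mod8)  # 7 is the only missing residue

# ... yet 7 n^2 is congruent to 7 mod 8 whenever n is odd,
# so the coprime (odd-n) case is impossible outright.
print(all((7 * n * n) % 8 == 7 for n in range(1, 30, 2)))  # True
```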
 
  • Like
Likes ItsukaKitto
  • #100
fresh_42 said:
Well done!

The argument "and so on ad infinitum" is usually shortened by the following trick, an indirect proof:
Assume ##n## to be the smallest positive solution; such a minimal ##n## exists because the positive integers are well-ordered. The descent then produces a strictly smaller solution ##m##, a contradiction. Of course, you first have to rule out the all-zero solution.

What you have done is basically divide the equation by ##4## and distinguish the odd and even cases of the rest. This could have been shortened by considering the original equation (the integer version with coprime LHS and RHS) modulo ##8## and examining the three possible remainders of squares, ##0, 1, 4.## This is the same as what you have done, only with fewer letters.
Taking modulo 8 shortens the proof a lot! Somehow it wasn't immediately obvious to me 😅. Thanks for letting me know there exists a much simpler and more general way to go about such problems.
 
