Math Challenge - August 2020

In summary, this conversation covers topics in mathematics such as compact subsets, bounded operators, Banach spaces, probability spaces, continuous functions, holomorphic functions, logarithmic functions, and integrals. It also involves solving problems related to these topics, including the Tychonoff theorem, axiom of choice, Borel-measurable sets, meromorphic functions, and rational solutions. There is also a question about the probability of two pocket aces in Texas Hold'em.
  • #71
fresh_42 said:
There is still another - in a way elementary - proof possible. It uses a common theorem of complex analysis.
My spider sense is tingling. [itex]\Gamma (z) / F(z)[/itex] is entire. Liouville's theorem? Or maybe the theorem that allows finite-order entire functions without zeros to be written as [itex]e^{P(z)}[/itex] for some polynomial [itex]P[/itex]. (Hadamard?)
Math_QED said:
I guess I must come up with less routine exercises since you seem to solve all of them ;)
It's not like I look at the exercise and come up with a solution. At some point I've done similar things, so it's a matter of reminding myself of definitions/results and relevant techniques. As you can also see, I make mistakes, so it's not smooth sailing. Most of what I do is wrong; I have stacks of paper scribbled full of gibberish and failed attempts at the problems in the OP.
 
  • #72
For question 15, should it be understood that it simply means that there is no solution where ##p, q, r \in \mathbb{Q}##?
 
  • #73
nuuskur said:
My spider sense is tingling. [itex]\Gamma (z) / F(z)[/itex] is entire. Liouville's theorem? Or maybe the theorem that allows finite-order entire functions without zeros to be written as [itex]e^{P(z)}[/itex] for some polynomial [itex]P[/itex]. (Hadamard?)
Liouville, but not for the quotient.
 
  • #74
Mayhem said:
For question 15, should it be understood that it simply means that there is no solution where ##p, q, r \in \mathbb{Q}##?
Yes. There are certainly real solutions, but none with three rational numbers.
Hint: Reduce the question to an integer version of the statement.
 
  • #75
benorin said:
@Math_QED +1 style point (and a like) for using the phrase "black magic" in reference to mathematics.
I think he gets that from me. I've been calling complex analysis black magic, because..well..it is :D
That said, @Math_QED , I gave a black magic approach in #32.
 
  • #76
nuuskur said:
That said, @Math_QED , I gave a black magic approach in #32.

It got lost in the sea of other posts haha. I will have a look.
 
  • #77
@benorin Well, I can't prove that there isn't a totally elementary solution, but the proof I'm thinking of uses some tools a little more sophisticated than IVT, etc.
 
  • #78
fresh_42 said:
Yes. There are certainly real solutions, but none with three rational numbers.
Hint: Reduce the question to an integer version of the statement.
What I did:

I broke up ##p,q,r## into fractions of integers and expanded the exponential and added the fractions together. This would mean that the denominator and numerator of said fraction must be integers, but I don't know how to prove, or in this case, disprove that.
 
  • #79
Mayhem said:
What I did:

I broke up ##p,q,r## into fractions of integers and expanded the exponential and added the fractions together. This would mean that the denominator and numerator of said fraction must be integers, but I don't know how to prove, or in this case, disprove that.
It is a bit tricky. We can multiply the equation by its common denominator and get four new variables instead, all of them integers. Then we can assume that the LHS and the RHS do not share a common factor. With this preparation we examine the equation modulo a suitable number ##n##, i.e. we consider the remainders that occur upon division by ##n##.

E.g. if ##3\cdot 6 + 11 = 29##, then reducing modulo ##4## yields ##3 \cdot 2 + 3 = 9 \equiv 1 \pmod 4##, matching ##29 \equiv 1 \pmod 4##.
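
A two-line Python sketch of this reduction, using the toy numbers above (purely illustrative):
```python
# Reduce both sides of the toy equation 3*6 + 11 = 29 modulo 4:
# the reduced left-hand side and the right-hand side must leave the same remainder.
assert (3 * 6 + 11) % 4 == ((3 % 4) * (6 % 4) + (11 % 4)) % 4 == 29 % 4 == 1
```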
 
  • #80
fresh_42 said:
12. Prove ##{\overline{CP}\,}^{2} = \overline{AP} . \overline{BP}##

[Image: sekanten-tangentensatz-png.png]

[Image: angle_q12.png]

Pasted above is a redrawing of the original diagram with some triangles and angles highlighted for use in the proof. The proof is based on similarity of triangles ##\triangle{CAP}## and ##\triangle{BCP}##. Let ##\angle{BMA}=\alpha## and ##\angle{BMC}=\beta##. Note that triangles ##\triangle{BMA}##, ##\triangle{BMC}## and ##\triangle{CMA}## are all isosceles since ##\overline{MA} = \overline{MB} = \overline{MC}##, as these line segments equal the radius of the circle. Since angles opposite equal sides of a triangle must be equal, it follows that
  • ##\angle{BAM} = \angle{ABM} = 90^{\circ} - \dfrac {\alpha} {2}##
  • ##\angle{CAM} = \angle{ACM} = 90^{\circ} - \dfrac {\alpha + \beta} {2}##
  • ##\angle{BCM} = \angle{CBM} = 90^{\circ} - \dfrac {\beta} {2}##
As per the original diagram, the line segment ##PC## is tangential to the circle. Therefore, ##\angle{PCM} = 90^{\circ} \Rightarrow \angle{BCP} = \angle{PCM} - \angle{BCM} = \dfrac {\beta} {2}##.

And ##\angle{CAP} = \angle{BAC} = \angle{BAM} - \angle{CAM} = \dfrac {\beta} {2}##.

Comparing the triangles ##\triangle{CAP}## and ##\triangle{BCP}##, we find that they must be similar (##\triangle{CAP} \sim \triangle{BCP}##) since they both have a common angle ##\angle{BPC}## and they also have another pair of corresponding angles, namely ##\angle{BCP} = \angle{CAP} = \dfrac {\beta} {2}##.

From the similarity of these triangles, it follows that all their corresponding sides must be of same proportion. Therefore,
$$\dfrac {\overline{BP}} {\overline{CP}} = \dfrac {\overline{CP}} {\overline{AP}}
\Rightarrow \overline{CP} . \overline{CP} = {\overline{CP}\,}^{2} = \overline{AP} . \overline{BP}
$$
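
As a quick numerical sanity check of the identity (a minimal sketch with an arbitrarily chosen circle, tangency point, and secant, not tied to the original figure):
```python
# Numerical check of CP^2 = AP * BP: C is the point where the tangent from P touches
# the circle, and A, B are the two points where a secant line through P meets it.
from math import cos, sin, sqrt, hypot

r = 1.0                                      # unit circle centered at M = (0, 0)
C = (cos(0.7), sin(0.7))                     # arbitrary point on the circle
t = (-sin(0.7), cos(0.7))                    # unit tangent direction at C
P = (C[0] + 3.0 * t[0], C[1] + 3.0 * t[1])   # external point on the tangent line at C

Q = (0.2, 0.1)                               # any interior point; the line P-Q is a secant
d = (Q[0] - P[0], Q[1] - P[1])
d = (d[0] / hypot(*d), d[1] / hypot(*d))     # unit direction of the secant

# Intersections P + s*d with the circle solve s^2 + k1*s + k0 = 0.
k1 = 2 * (P[0] * d[0] + P[1] * d[1])
k0 = P[0] ** 2 + P[1] ** 2 - r ** 2
s1 = (-k1 - sqrt(k1 * k1 - 4 * k0)) / 2
s2 = (-k1 + sqrt(k1 * k1 - 4 * k0)) / 2

CP = hypot(P[0] - C[0], P[1] - C[1])
AP, BP = abs(s1), abs(s2)                    # distances from P to the two intersection points
assert abs(CP ** 2 - AP * BP) < 1e-9
```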
 
  • #81
Math_QED said:
It got lost in the sea of other posts haha. I will have a look.
#55 for P7 might be found in the same waters.
 
  • #82
fresh_42 said:
Liouville, but not for the quotient.
Ok I was playing around with it a bit. Note that [itex]G(z) := F(z) - F(1)\Gamma (z)[/itex] is holomorphic on [itex]H(0)[/itex]. Since [itex]\Gamma[/itex] also satisfies the functional equation we have [itex]zG(z) = G(z+1)[/itex]. Note that [itex]G(1) = 0[/itex].
Fred Wright said:
https://www.jstor.org/stable/2975370
By the theorem in the link we have an entire extension of [itex]G[/itex] to [itex]\mathbb C[/itex], call it [itex]g[/itex]. By assumption [itex]g(z)[/itex] is bounded for [itex]1\leq \mathrm{Re}(z)\leq 2[/itex]. This implies [itex]g[/itex] is bounded for [itex]0\leq \mathrm{Re}(z)<1[/itex] as well: for such [itex]z[/itex] we have [itex]1\leq \mathrm{Re}(z+1) < 2[/itex], so by assumption [itex]g(z) = \frac{g(z+1)}{z}[/itex] is bounded there (for [itex]|z|<1[/itex] boundedness follows simply from continuity of the entire function [itex]g[/itex] on a compact set).

Put [itex]h(z) := g(z)g(1-z)[/itex]; then it's clear [itex]h[/itex] is bounded for [itex]0\leq\mathrm{Re}(z)< 1[/itex]. This implies it's bounded on [itex]\mathbb C[/itex]. Indeed, by the functional equation we have [itex]-zg(-z) = g(1-z)[/itex], which implies
[tex]
h(z+1) = g(z+1)g(-z) = -zg(z) \cdot \frac{g(1-z)}{z} = -h(z).
[/tex]
So we can start in the critical strip and remain bounded by induction. Thus [itex]h[/itex] is entire and bounded. By Liouville's theorem, [itex]h[/itex] is constant. Since [itex]h(1) = G(1)g(0) = 0[/itex], we get [itex]h \equiv 0[/itex]; as the zero set of a nonzero entire function has no accumulation point, this forces [itex]g=0[/itex], hence [itex]F(z) \equiv F(1)\Gamma (z)[/itex].
 
  • #83
HINT/IDEA: #2:

The intuition to use the IVT seems correct, but since there are two component functions, f and g, one seems to need the two-dimensional version of this theorem, i.e. the winding number theorem: if a continuous map from (the boundary of) a polygon to the plane winds around a point a nonzero number of times, then any continuous extension of that map to the interior of the polygon hits the point.

For the polygon, take the triangle bounded by (0,0), (2,0), (2,2) in the plane, and consider the map of that polygon to the plane defined by sending (a,b) to (f(a)-f(b), g(a)-g(b)). Try to show that the image of the boundary of the polygon either meets (1,1) or winds around the point (1,1) a nonzero number of times. Remember that the winding number is computed by adding up small angle changes, and use what is given. In fact all you need to know about winding numbers is that the winding number measures (1/2π times) the total angle change (in radians) swept out by an arrow based at the given point, with its head at a point on the path, as that point runs all the way around the path.
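
A small numerical sketch of that winding-number computation, for one hypothetical pair of functions (here f(x) = x and g(x) = x²/2; I'm assuming the endpoint data f(0) = g(0) = 0 and f(2) = g(2) = 2 as I read the problem):
```python
# Numerically estimate the winding number of the curve (f(a)-f(b), g(a)-g(b))
# around the point (1, 1) as (a, b) traverses the boundary of the triangle
# (0,0) -> (2,0) -> (2,2) -> (0,0).  Hypothetical test functions, just for illustration.
from math import atan2, pi

def f(x): return x
def g(x): return x * x / 2

def triangle_boundary(n=3000):
    for i in range(n):                     # bottom edge: b = 0, a goes 0 -> 2
        yield 2 * i / n, 0.0
    for i in range(n):                     # right edge: a = 2, b goes 0 -> 2
        yield 2.0, 2 * i / n
    for i in range(n):                     # hypotenuse back to (0,0): a = b
        yield 2 - 2 * i / n, 2 - 2 * i / n
    yield 0.0, 0.0                         # close the loop

winding, prev = 0.0, None
for a, b in triangle_boundary():
    # angle of the vector from (1,1) to the image point (f(a)-f(b), g(a)-g(b))
    ang = atan2(g(a) - g(b) - 1.0, f(a) - f(b) - 1.0)
    if prev is not None:
        d = ang - prev
        if d > pi: d -= 2 * pi             # wrap each increment into (-pi, pi]
        if d <= -pi: d += 2 * pi
        winding += d
    prev = ang
print(round(winding / (2 * pi)))           # nonzero (here +1): the image winds around (1,1)
```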
 
  • #85
Outstanding! I did notice at 21:12 what I think is an incorrect remark, that we supposedly don't know whether there are zeroes inside a square with winding number zero on the boundary. While true in general, it seems to be false in the example he was doing there, of a complex polynomial, since all winding numbers are non negative for such holomorphic maps, i.e. holomorphic maps are orientation preserving. In fact since winding numbers are additive over adjacent squares, if a square contains a zero, necessarily isolated, that zero will contribute a positive amount to the whole winding number, and there cannot be any negative winding numbers in that square to cancel it out. His mind no doubt reverted to the general principle as it applies to arbitrary continuous or smooth maps. I am pretty sure about this. So for a complex polynomial, I believe the winding number counts exactly the number of zeroes inside that square, each counted with its algebraic multiplicity. That multiplicity of course can also be defined as the winding number over a small enough square centered at that zero.
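
A quick numerical illustration of that count for a concrete polynomial (my own toy example, not the one from the video): the winding number of p(z) = z³ − z around 0, taken along the boundary of the square with corners ±2 ± 2i, should come out as 3, the number of zeros (0, 1, −1) inside, each of multiplicity one.
```python
# Winding number of p(z) = z^3 - z around 0 along the square with corners ±2 ± 2i.
# All three zeros 0, 1, -1 lie inside, so the winding number should be 3.
from cmath import phase
from math import pi

def p(z):
    return z ** 3 - z

def square_boundary(n=4000):
    corners = [2 + 2j, -2 + 2j, -2 - 2j, 2 - 2j, 2 + 2j]   # counterclockwise
    for k in range(4):
        a, b = corners[k], corners[k + 1]
        for i in range(n):
            yield a + (b - a) * i / n
    yield corners[0]                                        # close the loop

total, prev = 0.0, None
for z in square_boundary():
    ang = phase(p(z))
    if prev is not None:
        d = ang - prev
        if d > pi: d -= 2 * pi             # wrap each increment into (-pi, pi]
        if d <= -pi: d += 2 * pi
        total += d
    prev = ang
print(round(total / (2 * pi)))             # prints 3
```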

Forgive me for pointing out the one tiny error in an excellent presentation. Earlier I twice thought I had spotted errors and I turned out to be wrong both times. First he made a false conjecture, but then later he taught us that it is indeed false, and why. Then I thought he was wrong to say winding number 3, instead of minus 3, when the image path wound 3 times around zero clockwise, since mathematicians always prefer counterclockwise as positive. Then I realized he had also traversed the domain path clockwise, and what matters is that the image traverses the path in the same direction as the domain path, so both clockwise is the same as both counterclockwise, and +3 is indeed correct. On the last remark above, however, I believe he slipped up, although with the good intention of warning us about the general, not necessarily orientation-preserving, situation. But complex holomorphic functions are especially nice.

That reminds me however that even there, there are situations where numbers that should ordinarily be positive can still be negative. Namely, in complex geometry, intersection numbers of holomorphic cycles are always positive, for the same reasons, except! in the oddball case where you are intersecting a cycle with itself! Then you are supposed to move the cycle and then intersect it. Only some cycles, "rigid" ones, do not move in the holomorphic category (they only move smoothly), and then you can get a negative self-intersection number of a "rigid" holomorphic cycle with itself. E.g. if you "blow up" a point of complex P^2 to a line, then that blown-up line meets itself -1 times! I.e. the corresponding line bundle does not have a section which is holomorphic, only one which is meromorphic, along that line. I.e. just as z winds once around zero, 1/z winds once around infinity, which is minus once around zero.

A basic result then in surface theory is you can recognize a line which can be blown down, because it has self intersection -1. A famous fact is that any smooth cubic surface in P^3 is isomorphic to the complex projective plane after being blown up at 6 general points. I.e. on any smooth projective cubic surface, there exist 6 disjoint lines (the choice of them is not unique) which all have self intersection -1, and such that, blowing them down results in a surface isomorphic to the projective plane! [In relation to another thread on stem reading, this sort of thing is why I love algebraic geometry.]
 
  • #86
@mathwonk Very nice argument! One way to conclude that your loop has nonzero winding number is the following fact: If ##f:S^1\to S^1## is odd, then it has odd degree (and the same is true if ##S^1## is replaced by ##S^n##). Actually your geometry is good, and maybe this fact is just how it's usually formalized.

And you showed the stronger result that there are ##a,b## such that ##(f(a)-f(b),g(a)-g(b))## is actually ##(1,1)##, instead of just being a lattice point. In general, the problem statement will hold if and only if the prescribed values for ##f(2),g(2)## share a common factor (and of course there's no reason to make the functions defined on ##[0,2]##, I just did this for aesthetics).
 
  • #87
nuuskur said:
In other words we have [itex]\{x\}= \bigcap _{n\in\mathbb N} \{U\subseteq X \mid x\in U,\ U\text{ is open and }f(U) \subseteq B(f(x), n^{-1}) \}[/itex].

I think there is a problem with your intersection, at least semantically. You have an intersection ##\bigcap_{n \in \Bbb{N}} A_n## with ##A_n \subseteq \mathcal{P}(X)##. Thus semantically your intersection is a subset of ##\mathcal{P}(X)##. However, the left hand side is ##\{x\}## which is a subset of ##X## (and not of ##\mathcal{P}(X))##. You might want to fix/clarify this.

Next, explain why the equality you then obtain is true (write out the inclusion ##\supseteq##).

we have the [itex]G_\delta[/itex] set
[tex]
\bigcap _{n\in\mathbb N} \bigcup \left\{ U\subseteq X \mid U\text{ is open, }\ \sup_{u,v\in U} d_Y(f(u),f(v)) \leq n^{-1} \right \} = C(f).
[/tex]

Explain this equality.
 
  • #88
Math_QED said:
I think there is a problem with your intersection, at least semantically. You have an intersection ##\bigcap_{n \in \Bbb{N}} A_n## with ##A_n \subseteq \mathcal{P}(X)##. Thus semantically your intersection is a subset of ##\mathcal{P}(X)##. However, the left hand side is ##\{x\}## which is a subset of ##X## (and not of ##\mathcal{P}(X))##. You might want to fix/clarify this.
I messed up there; even if the semantics is fixed, the logic breaks, because I was envisioning something that behaves too nicely. For instance, take a constant map. Fixing open sets [itex]x\in U_n[/itex] such that [itex]f(U_n) \subseteq B(f(x),n^{-1})[/itex] does not imply the [itex]U_n[/itex] become smaller around [itex]x[/itex] - contrary to what I had in mind. Disregard that part entirely.

Next, explain why the equality you then obtain is true (write out the inclusion ##\supseteq##).
The equality in question is sufficient to solve the problem. Suppose [itex]f[/itex] is continuous at [itex]x[/itex]. Fix [itex]n\in\mathbb N[/itex]. Take the open set [itex]B(f(x),(2n)^{-1})[/itex]; any set mapping into it satisfies the supremum condition, since the ball's diameter is at most [itex]n^{-1}[/itex] (twice the radius). By continuity at [itex]x[/itex], there exists an open set satisfying [itex]x\in U[/itex] and [itex]f(U) \subseteq B(f(x), (2n)^{-1})[/itex].

The converse reads exactly like the definition of continuity.
 
  • #89
nuuskur said:
I messed up there; even if the semantics is fixed, the logic breaks, because I was envisioning something that behaves too nicely. For instance, take a constant map. Fixing open sets [itex]x\in U_n[/itex] such that [itex]f(U_n) \subseteq B(f(x),n^{-1})[/itex] does not imply the [itex]U_n[/itex] become smaller around [itex]x[/itex] - contrary to what I had in mind. Disregard that part entirely. The equality in question is sufficient to solve the problem. Suppose [itex]f[/itex] is continuous at [itex]x[/itex]. Fix [itex]n\in\mathbb N[/itex]. Take the open set [itex]B(f(x),(2n)^{-1})[/itex]; any set mapping into it satisfies the supremum condition, since the ball's diameter is at most [itex]n^{-1}[/itex] (twice the radius). By continuity at [itex]x[/itex], there exists an open set satisfying [itex]x\in U[/itex] and [itex]f(U) \subseteq B(f(x), (2n)^{-1})[/itex].

The converse reads exactly like the definition of continuity.

Indeed, for this first set equality you would have needed something like injectivity, which is not given.

Can you edit or make a new attempt at the question so everything reads smoothly? Otherwise everything is scattered across multiple posts, making it very hard to read for others (and for me when I go through it again).
 
  • #90
Math_QED said:
Can you edit or make a new attempt at the question so everything reads smoothly?
Let [itex]C(f) := \{x\in X \mid f\text{ is continuous at }x\}[/itex] (may also be empty). It is sufficient to show [itex]C(f)[/itex] is a Borel set. Then [itex]C(f)^c = D(f)[/itex] is also a Borel set. We show [itex]C(f)[/itex] is a countable intersection of open sets. By definition, [itex]f[/itex] is continuous at [itex]x[/itex], if
[tex]
\forall n\in\mathbb N,\quad \exists\delta >0,\quad f(B(x,\delta)) \subseteq B(f(x), n^{-1}).
[/tex]
Note that arbitrary unions of open sets are open. We have the following equality
[tex]
C(f) = \bigcap _{n\in\mathbb N} \bigcup\left \{U\subseteq X \mid U\text{ is open and } \sup _{u,v\in U} d_Y(f(u),f(v)) \leq n^{-1} \right \}.
[/tex]
Indeed, let [itex]f[/itex] be continuous at [itex]x[/itex]. Take the open set [itex]B(f(x), (2n)^{-1})\subseteq Y[/itex]. By continuity at [itex]x[/itex], there exists an open set [itex]x\in U\subseteq X[/itex] satisfying [itex]f(U) \subseteq B(f(x), (2n)^{-1})[/itex]. We then have [itex]d_Y(f(u),f(v)) \leq n^{-1}[/itex] for all [itex]u,v\in U[/itex]. The converse inclusion reads exactly as the definition of continuity at a point.
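
A standard companion example (not from the problem itself) showing that the intersection over ##n## is genuinely needed: Thomae's function
[tex]
f(x) = \begin{cases} 1/q, & x = p/q \in \mathbb Q \text{ in lowest terms}, \\ 0, & x \notin \mathbb Q, \end{cases}
[/tex]
is continuous exactly at the irrational numbers, so here ##C(f)## is a dense ##G_\delta## with empty interior; in particular it is not open, and no single open set of the above form equals it.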
 
  • #91
nuuskur said:
Let [itex]C(f) := \{x\in X \mid f\text{ is continuous at }x\}[/itex] (may also be empty). It is sufficient to show [itex]C(f)[/itex] is a Borel set. Then [itex]C(f)^c = D(f)[/itex] is also a Borel set. We show [itex]C(f)[/itex] is a countable intersection of open sets. By definition, [itex]f[/itex] is continuous at [itex]x[/itex], if
[tex]
\forall n\in\mathbb N,\quad \exists\delta >0,\quad f(B(x,\delta)) \subseteq B(f(x), n^{-1}).
[/tex]
Note that arbitrary unions of open sets are open. We have the following equality
[tex]
C(f) = \bigcap _{n\in\mathbb N} \bigcup\left \{U\subseteq X \mid U\text{ is open and } \sup _{u,v\in U} d_Y(f(u),f(v)) \leq n^{-1} \right \}.
[/tex]
Indeed, let [itex]f[/itex] be continuous at [itex]x[/itex]. Take the open set [itex]B(f(x), (2n)^{-1})\subseteq Y[/itex]. By continuity at [itex]x[/itex], there exists an open set [itex]U\subseteq X[/itex] satisfying [itex]f(U) \subseteq B(f(x), (2n)^{-1})[/itex]. We then have [itex]d_Y(f(u),f(v)) \leq n^{-1}[/itex] for all [itex]u,v\in U[/itex]. The converse inclusion reads exactly as the definition of continuity at a point.

Close enough. The reverse inclusion is not quite the definition of continuity, but you need a routine triangle inequality argument to get there, like you did in the other inclusion. But I'm sure you could have filled in the little gap. Well done! I think your solution is the shortest possible.
 
  • #92
Suggestion for #7:

Consider the induced map fxf:XxX-->YxY. Claim: f is continuous at p iff (p,p) is an interior point of the inverse image of every neighborhood of the diagonal. Hence for a shrinking countable sequence of such neighborhoods, the set of points of continuity is the intersection of the (diagonal of XxX with the) interiors of those inverse images, hence a G delta set. Note that X ≈ diagonal of XxX.
 
  • #93
mathwonk said:
Suggestion for #7:

Consider the induced map fxf:XxX-->YxY. Claim: f is continuous at p iff (p,p) is an interior point of the inverse image of every neighborhood of the diagonal. Hence for a shrinking countable sequence of such neighborhoods, the set of points of continuity is the intersection of the (diagonal of XxX with the) interiors of those inverse images, hence a G delta set. Note that X ≈ diagonal of XxX.

Ah nice, a more topological approach. I didn't work through the details of your post, but this seems to suggest that the statement is true more generally than in metric spaces. I would be interested in knowing what the minimal assumptions are for this to hold. I guess the Hausdorffness condition and some countability assumption might be necessary.
 
  • #94
Math_QED said:
I would be interested in knowing what the minimal assumptions are for this to hold. I guess the Hausdorffness condition and some countability assumption might be necessary.
The argument I gave doesn't really make use of the fact that [itex]X[/itex] is a metric space. We can relax it to [itex]f[/itex] being a map from a topological space [itex]X[/itex] to a metric space [itex]Y[/itex]. Not sure right now how far [itex]Y[/itex] can be relaxed.
 
  • #95
nuuskur said:
The argument I gave doesn't really make use of the fact that [itex]X[/itex] is a metric space. We can relax it to [itex]f[/itex] being a map from a topological space [itex]X[/itex] to a metric space [itex]Y[/itex]. Not sure right now how far [itex]Y[/itex] can be relaxed.

Yes, I was talking about ##Y##. But both of the approaches given use a shrinking family of neighborhoods, which seems to suggest that we at least need first-countable spaces?
 
  • #96
Re: #7: Apologies, perhaps my claim is false. When I have more time I will try to tweak it.

Edit, later: Sorry, it seems to be hard for me to write out details, but now I again think the claim is true; the argument I have, however, uses the triangle inequality.

I.e., given e>0, define U(e) as that subset of X such that p belongs to U(e) if and only if there is some d-nbhd of (p,p) in XxX (use the sum metric, or max metric), for some d>0, that maps into the e-nbhd of the diagonal of YxY. This U(e) seems to be open, and it seems that if p is in U(e), with associated d, then any q closer than d to p must satisfy |f(p)-f(q)| < e. (With the sum metric, the point (p,q) then lies in the d-nbhd of (p,p), and thus (f(p),f(q)) lies in the e-nbhd of some point of the form (f(x),f(x)). Then the triangle inequality implies, using the sum metric on YxY, that f(p) is within e of f(q).)

Then if p lies in U(1/n) for every n, it seems that f is continuous at p, and vice versa, I think. I apologize for not vetting this more carefully. But this argument, if correct, seems to validate the original more topological-sounding claim, at least in the case of metric spaces.
 
  • #97
Ideas for problem #5:

Make the problem simpler: take B = identity map, so b = 1, and take the constant a = 1, so we want to prove the map f taking x to x^2 + x hits every point C with |C| ≤ 1/4, where |x^2| ≤ |x|^2.

Start with the case of the Banach space R, the real numbers, and see by calculus that f maps the interval [-1/2, 1/2] onto the interval [-1/4, 3/4]. One only needs to evaluate the map at the endpoints -1/2 and 1/2 and use the intermediate value theorem.

Now take the case of the Banach space R^2. This time we can of course use the winding number technique. Look at the image of the circle |x| = 1/2. The map x-->x^2 takes each point of the circle |x| = 1/2 to a point of length ≤ 1/4. Hence the sum f(x) = x + x^2 takes each point p of this circle to a point of the disc of radius 1/4 centered at p.

Hence the line segment joining p to f(p) lies in the annulus between the circles of radii 1/4 and 1/2. Hence the homotopy t-->x + t.x^2, with 0 ≤t≤1, is a homotopy between the identity map and the map f, such that every point of the homotopy lies in the annulus mentioned above. Consequently the map f acting on the circle |x| = 1/2 has the same winding number as the identity map, namely one. Hence the map f surjects the disc |x| ≤ 1/2 onto (a set containing) the disc |x| ≤ 1/4.
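
A quick numerical check of this two-dimensional case, identifying R^2 with the complex plane (my own sketch; it just samples targets c with |c| ≤ 1/4 and verifies that z² + z = c has a root with |z| ≤ 1/2):
```python
# For each sampled target c with |c| <= 1/4, the quadratic z^2 + z - c = 0 should have
# at least one root z with |z| <= 1/2, i.e. f(z) = z + z^2 maps the closed disc of
# radius 1/2 onto a set containing the closed disc of radius 1/4.
import cmath

def has_small_root(c, radius=0.5, tol=1e-12):
    disc = cmath.sqrt(1 + 4 * c)
    roots = [(-1 + disc) / 2, (-1 - disc) / 2]
    return any(abs(z) <= radius + tol for z in roots)

samples = [
    0.25 * r / 20 * cmath.exp(2j * cmath.pi * k / 60)
    for r in range(21) for k in range(60)
]
assert all(has_small_root(c) for c in samples)
```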

Using homotopies of spheres, this argument should work in all finite dimensions. I.e. the map f surjects the ball |x| ≤ 1/2 onto the disc |x| ≤ 1/4 (and maybe more).

Now to progress to infinite dimensional Banach spaces, it seems we should use some argument for the inverse function theorem, or open mapping theorem. I.e. look at the proof of the inverse function theorem for Banach spaces and see what ball is guaranteed to be in the image.

Suggestion: Use the proof of the "surjective mapping theorem", attributed to Graves, in Lang, Analysis II, p. 193. I.e. just as slightly perturbing the identity map leaves us with a map that is still locally a homeomorphism, so slightly perturbing a surjective continuous linear [hence open] map leaves us with a map that is still locally [open and?] surjective.

Graves' theorem: http://www.heldermann-verlag.de/jca/jca03/jca03003.pdf

(iii) There exists a constant ##M## such that for every ##y \in Y## there exists ##x \in X## with ##y = A(x)## and ##\| x \| \leq M \| y \|##.

Let us denote by ##B_a(x)## the closed ball centered at ##x## with radius ##a##. Up to some minor adjustments in notation, the original formulation and proof of the Graves theorem are as follows.

Theorem 1.2. (Graves [12]). Let ##X, Y## be Banach spaces and let ##f## be a continuous function from ##X## to ##Y## defined in ##B_\varepsilon(0)## for some ##\varepsilon > 0## with ##f(0) = 0##. Let ##A## be a continuous and linear operator from ##X## onto ##Y## and let ##M## be the corresponding constant from Theorem 1.1 (iii). Suppose that there exists a constant ##\delta < M^{-1}## such that
$$\| f(x_1) - f(x_2) - A(x_1 - x_2) \| \leq \delta \| x_1 - x_2 \| \qquad (1)$$
whenever ##x_1, x_2 \in B_\varepsilon(0)##. Then the equation ##y = f(x)## has a solution ##x \in B_\varepsilon(0)## whenever ##\| y \| \leq c\varepsilon##, where ##c = M^{-1} - \delta##.
 
  • #98
My take on problem 15:
I first prove that the equation ## 7n^2 = a^2 + b^2 + c^2 ## has no integer solutions with ## n \neq 0 ##. If we can show this, then in particular ## 7(n_1 n_2 n_3)^2 = (m_1 n_2 n_3 )^2 + (m_2 n_3 n_1 )^2 + (m_3 n_1 n_2 )^2 ## has no solutions with nonzero ## n_1, n_2, n_3 ##, and thus there are no three rational numbers ## p = m_1/n_1 , q = m_2/n_2, r = m_3/n_3 ## whose squares sum to 7.

We write the equation as ## (2n)^2 = (a - n)(a+n) + (b-n)(b+n) + (c-n)(c+n) ##, and substituting ## x = a - n, y = b - n, z = c - n ## we get: $$ (2n)^2 = x(x+2n) + y(y+2n) + z(z + 2n) $$ (1)
Now suppose that ## x, y, z ## are all odd. Then each term of the form ## x(x+2n) ## must be odd too, as ## x ## and ## x + 2n ## are both odd. The sum of three odd numbers is odd, but the left-hand side ## (2n)^2 ## is even! Therefore, ## x, y, z ## cannot all be odd.

Then suppose exactly two of ## x, y, z ## are odd; without loss of generality, say ## y, z ##. Since ## x ## is even, ## 4 \mid x(x + 2n) ##, and of course ## 4 \mid (2n)^2 ##, so that ## 4 \mid y(y+2n) + z(z + 2n) = y^2 + z^2 + 2n( y + z ) ##.
## y^2 ## and ## z^2 ## are squares of odd integers, so ## y^2 + z^2 \equiv 1 + 1 \equiv 2 \pmod 4 ##.*
This would force ## 2n( y + z ) \equiv 2 \pmod 4 ##, but how can this be?
## 2 \mid (y+z) ## as ## y, z ## are odd, and that means ## 2n(y + z) \equiv 0 \pmod 4 ##.
So no two of ## x, y, z ## can be odd.

What if exactly one of them is odd, say ## z ##? Then ## z(z + 2n) = (2n)^2 - x(x+2n) - y(y+2n) ##, a contradiction, as the LHS is odd and the RHS is even.

Then all of ## x, y, z ## must be even. Write ## x = 2\alpha,\ y = 2\beta,\ z = 2\gamma ##; then from (1):
$$ n^2 = \alpha(\alpha + n) + \beta(\beta + n) + \gamma( \gamma + n) $$
which, upon expanding ##(\alpha+\beta+\gamma)^2## and substituting the previous equation, implies $$ ( \alpha + \beta + \gamma )( \alpha + \beta + \gamma + n ) = 2( \alpha \beta + \beta \gamma + \gamma \alpha ) + n^2 $$
## n ## cannot be odd here, for then the RHS would be odd while the LHS is always even: if ## \alpha + \beta + \gamma ## is even the LHS is even, and if it is odd then ## \alpha + \beta + \gamma + n ## is even, so the LHS is even again.
Therefore, n must be even. So let ## n = 2m ##
$$ (2m)^2 = \alpha(\alpha + 2m) + \beta(\beta + 2m) + \gamma( \gamma + 2m) $$.
This has exactly the same form as equation (1), with ## m ## in place of ## n ##!
Going through the same arguments as before, ## m ## would also have to be even, and then ## m/2 ##, and so on ad infinitum. Such an infinite descent is impossible unless ## n = 0 ##, for which the only solution is ## a = b = c = 0 ##. Therefore, no integer solutions with ## n \neq 0 ## exist.

* I used the fact that squares of odd integers are always congruent to 1 mod 4. ## (2k + 1)^2 = 4( k^2 + k ) + 1 ##
In all seriousness, if this proof works, it is the strangest, most hand wavy proof I've ever done. I am not aware whether arguments involving infinity are commonly used in such problems, let alone whether they are rigorous enough.
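
A small brute-force sanity check of the integer claim (the bound is arbitrary; this is a numerical spot check only, not a proof):
```python
# Check that 7*n^2 is never a sum of three integer squares for 1 <= n <= 40.
from math import isqrt

def is_sum_of_three_squares(m):
    for a in range(isqrt(m) + 1):
        for b in range(a, isqrt(m - a * a) + 1):
            c2 = m - a * a - b * b
            c = isqrt(c2)
            if c * c == c2 and c >= b:
                return True
    return False

assert all(not is_sum_of_three_squares(7 * n * n) for n in range(1, 41))
```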
 
  • #99
ItsukaKitto said:
My take on problem 15:

In all seriousness, if this proof works, it is the strangest, most hand wavy proof I've ever done. I am not aware whether arguments involving infinity are commonly used in such problems, let alone whether they are rigorous enough.

Well done!

The argument "and so on ad infinitum" is usually shortened by the following trick, an indirect proof:
Assume ##n## is the smallest positive integer for which a solution exists; such a minimum exists because the positive integers are bounded from below. The descent then produces a smaller solution ##m < n##, a contradiction. Of course you first have to rule out the all-zero solution.

What you have done is basically divide the equation by ##4## and distinguish the odd and even cases of the remainder. This could have been shortened by considering the original equation (integer version with coprime LHS and RHS) modulo ##8## and examining the three possible remainders ##0, 1, 4## of a square. This is the same as what you have done, only with fewer letters.
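
For completeness, a short Python sketch of that mod-##8## observation (this covers odd ##n## directly; the even case is what the descent, or the coprimality assumption, handles):
```python
# Squares mod 8 can only be 0, 1 or 4, and no sum of three such residues is 7 mod 8.
# Hence 7*n^2 = a^2 + b^2 + c^2 is impossible for odd n; even n is removed by descent.
from itertools import product

squares_mod8 = {(x * x) % 8 for x in range(8)}
assert squares_mod8 == {0, 1, 4}

# 7*n^2 mod 8 for odd n is always 7:
assert all((7 * n * n) % 8 == 7 for n in range(1, 100, 2))

# every possible value of a^2 + b^2 + c^2 mod 8:
sums_mod8 = {(a + b + c) % 8 for a, b, c in product(squares_mod8, repeat=3)}
assert 7 not in sums_mod8
```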
 
  • #100
fresh_42 said:
Well done!

The argument "and so on ad infinitum" is usually shortened by the following trick, an indirect proof:
Assume ##n## is the smallest positive integer for which a solution exists; such a minimum exists because the positive integers are bounded from below. The descent then produces a smaller solution ##m < n##, a contradiction. Of course you first have to rule out the all-zero solution.

What you have done is basically divide the equation by ##4## and distinguish the odd and even cases of the remainder. This could have been shortened by considering the original equation (integer version with coprime LHS and RHS) modulo ##8## and examining the three possible remainders ##0, 1, 4## of a square. This is the same as what you have done, only with fewer letters.
Taking the equation modulo 8 shortens the proof a lot! Somehow it wasn't immediately obvious to me 😅. Thanks for letting me know there is a much simpler and more general way to go about such problems.
 
  • #101
mathwonk said:
Ideas for problem #5:

Make the problem simpler: take B = identity map, so b = 1, and take the constant a = 1, so we want to prove the map f taking x to x^2 + x hits every point C with |C| ≤ 1/4, where |x^2| ≤ |x|^2
...
I can only say that my idea was completely different, and I cannot tell whether your argument eventually reaches the goal. Perhaps it was not a good idea to post this problem for the Math Challenge. All my previous problems were easily solved, so I decided to post something more interesting, and I posted an example from my article. I feel this was too much.

 
  • #102
Of course I am just pointing out that the inverse function theorem implies that the map f(x) = x^2 + x does map the ball of radius 1/2 around zero onto some open ball around 0, and the goal is to give a concrete radius for such an image ball, namely 1/4. My homotopy argument apparently works as given in equal finite dimensions (and, by restriction to a subspace, also in the unequal finite-dimensional case), and I did not pursue whether the existing proofs of the inverse function theorem or the surjective mapping theorem can be made explicit in the Banach space case. My intent was to leave it for someone else at least for some more weeks, since August is not over. In fact I think your problem is quite interesting, and I am very glad you posted it. Not all problems have to be, nor should be, easy. Thank you for the rather nice and instructive problem.
 
  • #103
f(x) = g(x) = x
a = 1.5
b = .5
g(a) - g(b) = 1
f(a) - f(b) = 1
QED
 
  • #104
Buzz Bloom said:
f(x) = g(x) = x
a = 1.5
b = .5
g(a) - g(b) = 1
f(a) - f(b) = 1
QED

You can actually do it when f(x)=x pretty easily. If g(1)=1, take a=0 and b=1. Otherwise the function g(x+1)-g(x) is smaller than 1 for one of x=0 or x=1, and larger than 1 for the other. So somewhere in between it must be equal to 1.
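
A tiny numerical illustration of that IVT step, with a hypothetical g (here g(x) = x²/2; I'm assuming the endpoint values g(0) = 0 and g(2) = 2 as I read the problem):
```python
# Bisection on h(x) = g(x+1) - g(x) - 1 over [0, 1]: h(0) and h(1) have opposite signs
# when g(1) != 1, so some x* gives g(x*+1) - g(x*) = 1; with f(x) = x we then take
# a = x* + 1, b = x*, so that f(a) - f(b) = g(a) - g(b) = 1.
def g(x):
    return x * x / 2          # hypothetical example with g(0) = 0, g(2) = 2

def h(x):
    return g(x + 1) - g(x) - 1

lo, hi = 0.0, 1.0
assert h(lo) * h(hi) < 0      # opposite signs since g(1) != 1
for _ in range(60):           # plain bisection
    mid = (lo + hi) / 2
    if h(lo) * h(mid) <= 0:
        hi = mid
    else:
        lo = mid
b = (lo + hi) / 2
a = b + 1
print(a, b, g(a) - g(b))      # a - b = 1 and g(a) - g(b) is (numerically) 1
```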
 
  • #105
Hi @Office_Shredder:

I think I now understand that my answer was incomplete, in that the problem wants a pair a and b for every pair of functions f and g that are continuous and satisfy the end conditions at 0 and 2. I should have seen that sooner, since using just a single pair f and g was very simple.

I guess I do not read the math language of symbols very well. The language has changed a lot since I was in college. If I had been writing the problem in the old days, I would have used:
"... for any pair f,g of continuous functions...".

I will now think a bit harder about the problem.

Regards,
Buzz
 
