Math Challenge - August 2020

Summary:
The August 2020 Math Challenge features a variety of mathematical problems, many of which have been solved by participants. Key discussions include the construction of bounded operators with specific spectral properties, the existence of integers from continuous functions, and the implications of the Tychonoff theorem in relation to the axiom of choice. Additionally, there are explorations of properties of random variables in probability spaces and the non-existence of continuous logarithmic functions in complex analysis. The thread showcases collaborative problem-solving and deep engagement with advanced mathematical concepts.
  • #91
nuuskur said:
Let ##C(f) := \{x\in X \mid f\text{ is continuous at }x\}## (may also be empty). It is sufficient to show ##C(f)## is a Borel set. Then ##C(f)^c = D(f)## is also a Borel set. We show ##C(f)## is a countable intersection of open sets. By definition, ##f## is continuous at ##x## if
$$\forall n\in\mathbb N,\quad \exists\delta >0,\quad f(B(x,\delta)) \subseteq B(f(x), n^{-1}).$$
Note that arbitrary unions of open sets are open. We have the following equality
$$C(f) = \bigcap _{n\in\mathbb N} \bigcup\left\{U\subseteq X \mid U\text{ is open and } \sup _{u,v\in U} d_Y(f(u),f(v)) \leq n^{-1} \right\}.$$
Indeed, let ##f## be continuous at ##x##. Take the open set ##B(f(x), (2n)^{-1})\subseteq Y##. By continuity at ##x##, there exists an open set ##U\subseteq X## containing ##x## and satisfying ##f(U) \subseteq B(f(x), (2n)^{-1})##. We then have ##d_Y(f(u),f(v)) \leq n^{-1}## for all ##u,v\in U##. The converse inclusion reads exactly as the definition of continuity at a point.

Close enough. The reverse inclusion is not quite the definition of continuity; you need a routine triangle inequality argument to get there, like you did in the other inclusion. But I'm sure you could have filled in the little gap. Well done! I think your solution is the shortest possible.
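For a quick sanity check of the displayed formula, here is a minimal numerical sketch (assumptions: a single jump function as the test case, so ##C(f) = \mathbb{R}\setminus\{0\}##, and the supremum over an open ball is only approximated by a maximum over a finite grid):

import numpy as np

# f(x) = 0 for x < 0 and f(x) = 1 for x >= 0, a single jump at 0.
def f(x):
    return np.where(x >= 0, 1.0, 0.0)

def in_nth_union(x, n, deltas=(1.0, 0.1, 0.01, 0.001)):
    # x lies in the n-th union iff some open ball around x has oscillation <= 1/n;
    # the supremum over the ball is approximated by max - min on a finite grid.
    for d in deltas:
        grid = np.linspace(x - d, x + d, 201)
        if f(grid).max() - f(grid).min() <= 1.0 / n:
            return True
    return False

for x in (-0.5, 0.0, 0.5):
    print(x, all(in_nth_union(x, n) for n in range(1, 6)))
# expected: True for -0.5 and 0.5, False for 0.0, matching C(f) = R \ {0}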
 
  • #92
Suggestion for #7:

Consider the induced map fxf:XxX-->YxY. Claim: f is continuous at p iff (p,p) is an interior point of the inverse image of every neighborhood of the diagonal. Hence for a shrinking countable sequence of such neighborhoods, the set of points of continuity is the intersection of the (diagonal of XxX with the) interiors of those inverse images, hence a G delta set. Note that X ≈ diagonal of XxX.
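In symbols, one way to read this claim (a sketch; in the metric setting one can take ##V_n := \{(y_1,y_2)\in Y\times Y \mid d_Y(y_1,y_2) < 1/n\}## as the shrinking neighborhoods of the diagonal, and identify ##X## with the diagonal of ##X\times X## via ##x\mapsto (x,x)##):
$$ C(f) = \bigcap_{n\in\mathbb N}\ \left\{x\in X \,\middle|\, (x,x)\in \operatorname{int}\big((f\times f)^{-1}(V_n)\big)\right\}, $$
and each set in the intersection is open in ##X##, being the preimage of an open set under the continuous diagonal map ##x\mapsto (x,x)##, so ##C(f)## is again a ##G_\delta## set.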
 
  • #93
mathwonk said:
Suggestion for #7:

Consider the induced map fxf:XxX-->YxY. Claim: f is continuous at p iff (p,p) is an interior point of the inverse image of every neighborhood of the diagonal. Hence for a shrinking countable sequence of such neighborhoods, the set of points of continuity is the intersection of the (diagonal of XxX with the) interiors of those inverse images, hence a G delta set. Note that X ≈ diagonal of XxX.

Ah nice, a more topological approach. I haven't worked through the details of your post, but this seems to suggest that the statement holds more generally than for metric spaces. I would be interested in knowing what the minimal assumptions are for this to hold. I guess a Hausdorff condition and some countability assumption might be necessary.
 
  • #94
Math_QED said:
I would be interested in knowing what the minimal assumptions are for this to hold. I guess a Hausdorff condition and some countability assumption might be necessary.
The argument I gave doesn't really make use of the fact that X is a metric space. We can relax it to f being a map from an arbitrary topological space X to a metric space Y. Not sure right now how far Y can be relaxed.
 
  • #95
nuuskur said:
The argument I gave doesn't really make use of the fact that X is a metric space. We can relax it to f being a map from an arbitrary topological space X to a metric space Y. Not sure right now how far Y can be relaxed.

Yes, I was talking about ##Y##. But both of the approaches given use a shrinking family of neighborhoods, which seems to suggest that we at least need first countable spaces?
 
  • #96
Re: #7: apologies, perhaps my claim is false. When I have more time I will try to tweak it.

Edit later: sorry, it seems to be hard for me to write out details, but now I again think the claim is true; the argument I have, however, uses the triangle inequality.

I.e. given e > 0, define U(e) as the subset of X such that p belongs to U(e) if and only if there is some d-nbhd of (p,p) in XxX (use the sum metric, or the max metric), for some d > 0, that maps into the e-nbhd of the diagonal of YxY. This U(e) seems to be open, and it seems that if p is in U(e), with associated d, then any q closer than d to p must satisfy |f(p)-f(q)| < e. (With the sum metric, the point (p,q) then lies in the d-nbhd of (p,p), and thus (f(p),f(q)) lies in the e-nbhd of some point of the form (f(x),f(x)). Then the triangle inequality implies, using the sum metric on YxY, that f(p) is within e of f(q).)

Then if p lies in U(1/n) for every n, it seems that f is continuous at p, and vice versa, I think. I apologize for not vetting this more carefully. But this argument, if correct, seems to validate the original, more topological sounding claim, at least in the case of metric spaces.
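Just to make the estimate in the parenthesis explicit (a small sketch, with the sum metric ##d\big((y_1,y_2),(y_1',y_2')\big) = d_Y(y_1,y_1') + d_Y(y_2,y_2')## on ##Y\times Y##): if ##(f(p),f(q))## is within ##e## of some diagonal point ##(f(x),f(x))##, then
$$ d_Y(f(p),f(q)) \le d_Y(f(p),f(x)) + d_Y(f(x),f(q)) < e, $$
since the two summands add up to exactly the sum-metric distance from ##(f(p),f(q))## to ##(f(x),f(x))##.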
 
  • #97
Ideas for problem #5:

Make the problem simpler: take B = identity map, so b =1, and take the constant a = 1, so we want to prove the map f taking x to x^2 + x, hits every point C with |C| ≤ 1/4, where |x^2| ≤ |x|^2.

Start with the case of the Banach space R, the real numbers, and see by calculus that f takes the interval [-1/2, 1/2] onto the interval [-1/4, 3/4]. One only needs to evaluate the map on the endpoints -1/2 and 1/2 and use the intermediate value theorem.

Now take the case of the Banach space R^2. This time we can of course use the winding number technique. Look at the image of the circle |x| = 1/2. The map x-->x^2 takes each point of the circle |x| = 1/2 to a point of norm ≤ 1/4. Hence the sum f(x) = x + x^2 takes each point p of this circle to a point of the disc of radius 1/4 centered at p.

Hence the line segment joining p to f(p) lies in the annulus between the circles of radii 1/4 and 3/4. Hence the homotopy t-->x + t.x^2, with 0 ≤ t ≤ 1, is a homotopy between the identity map and the map f such that every point of the homotopy lies in the annulus mentioned above. Consequently the map f acting on the circle |x| = 1/2 has the same winding number as the identity map about every point of the open disc |x| < 1/4, namely one. Hence the map f surjects the disc |x| ≤ 1/2 onto (a set containing) the disc |x| ≤ 1/4.

Using homotopies of spheres, this argument should work in all finite dimensions. I.e. the map f surjects the ball |x| ≤ 1/2 onto the ball |x| ≤ 1/4 (and maybe more).
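For what it's worth, the two-dimensional claim is easy to test numerically by identifying R^2 with the complex plane (a rough sketch; the targets and the grid size are arbitrary choices, and the winding number is computed by summing angle increments along the discretized curve):

import numpy as np

# f(z) = z + z^2 on the circle |z| = 1/2 should wind exactly once around every
# target C with |C| < 1/4, because the homotopy z + t*z^2 (0 <= t <= 1) stays in
# the annulus 1/4 <= |w| <= 3/4 and therefore never passes through such a C.
def winding_number(curve, target):
    increments = np.diff(np.angle(curve - target))
    # undo the jumps of size ~2*pi caused by the branch cut of arg
    increments = (increments + np.pi) % (2 * np.pi) - np.pi
    return increments.sum() / (2 * np.pi)

t = np.linspace(0.0, 2.0 * np.pi, 4001)
z = 0.5 * np.exp(1j * t)
f_of_z = z + z**2

for C in (0.0, 0.2, 0.15 + 0.15j, -0.24):
    print(C, round(winding_number(f_of_z, C)))
# expected: winding number 1 for each of these targets, the same as the identity map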

Now to progress to infinite dimensional Banach spaces, it seems we should use some argument for the inverse function theorem, or open mapping theorem. I.e. look at the proof of the inverse function theorem for Banach spaces and see what ball is guaranteed to be in the image.

Suggestion: Use the proof of the "surjective mapping theorem", attributed to Graves, in Lang, Analysis II, p. 193. I.e. just as slightly perturbing the identity map leaves us with a map that is still locally a homeomorphism, so slightly perturbing a surjective continuous linear [hence open] map leaves us with a map that is still locally [open and?] surjective.

Graves' theorem: http://www.heldermann-verlag.de/jca/jca03/jca03003.pdf

(iii) There exists a constant M such that for every y ∈ Y there exists x ∈ X with y = A(x) and ∥x∥ ≤ M∥y∥.

Let us denote by B_a(x) the closed ball centered at x with radius a. Up to some minor adjustments in notation, the original formulation and proof of the Graves theorem are as follows.

Theorem 1.2 (Graves [12]). Let X, Y be Banach spaces and let f be a continuous function from X to Y defined in B_ε(0) for some ε > 0 with f(0) = 0. Let A be a continuous and linear operator from X onto Y and let M be the corresponding constant from Theorem 1.1 (iii). Suppose that there exists a constant δ < M^{-1} such that
∥f(x_1) − f(x_2) − A(x_1 − x_2)∥ ≤ δ∥x_1 − x_2∥   (1)
whenever x_1, x_2 ∈ B_ε(0). Then the equation y = f(x) has a solution x ∈ B_ε(0) whenever ∥y∥ ≤ cε, where c = M^{-1} − δ.
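To see what a naive application of this gives in the simplified setting above (a rough back-of-the-envelope sketch, under the extra assumption that ##x\mapsto x^2## comes from a bounded bilinear map of norm ##\le 1##, so that ##\|x_1^2 - x_2^2\| \le (\|x_1\| + \|x_2\|)\,\|x_1 - x_2\|##): take ##A = \operatorname{id}##, hence ##M = 1##, and ##f(x) = x + x^2##. On ##B_\varepsilon(0)##,
$$ \|f(x_1) - f(x_2) - (x_1 - x_2)\| = \|x_1^2 - x_2^2\| \le 2\varepsilon\,\|x_1 - x_2\|, $$
so ##\delta = 2\varepsilon## works as long as ##\varepsilon < 1/2##, and Graves' theorem then produces solutions for all ##\|y\| \le (1 - 2\varepsilon)\varepsilon##. This is maximized at ##\varepsilon = 1/4##, giving the ball of radius ##1/8##, which falls short of the target radius ##1/4##; so the sharper constant presumably needs the more global degree-type argument sketched above, or a finer look inside the proof of the theorem.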
 
  • #98
My take on problem 15:
I first prove that the equation ## 7n^2 = a^2 + b^2 + c^2 ## has no integer solutions. If we can show this, we will have proved that there are no integer solutions to ## 7(n_1 n_2 n_3)^2 = (m_1 n_2 n_3 )^2 + (m_2 n_3 n_1 )^2 + (m_3 n_1 n_2 )^2 ## and thus no three rational numbers ## p = m_1/n_1 , q = m_2/n_2, r = m_3/n_3 ## whose squares sum to 7.

We write the equation as ## (2n)^2 = (a - n)(a+n) + (b-n)(b+n) + (c-n)(c+n) ##, and substituting ## x = a - n, y = b - n, z = c - n ## we get: $$ (2n)^2 = x(x+2n) + y(y+2n) + z(z + 2n) $$ (1)
Now suppose that ## x, y, z ## are all odd, but this means that each term of the form ## x(x+2n) ## must be odd too, as ## x ## and ## x + 2n ## are both odd. The sum of any three odd numbers must also be odd, but the left hand side ## (2n)^2 ## is even! Therefore, ## x, y, z ## cannot all be odd.

Then suppose any two of ##x, y, z## are odd; without loss of generality, say ##y, z##. Since ##x## is even, ##4 \mid x(x + 2n)## and of course ##4 \mid (2n)^2##, so that ##4 \mid y(y+2n) + z(z + 2n) = y^2 + z^2 + 2n( y + z )##.
##y^2## and ##z^2## are squares of odd integers, so ##y^2 + z^2 \equiv 1 + 1 \equiv 2 \pmod 4##.*
This means that ##2n( y + z ) \equiv 2 \pmod 4##, but how can this be?
##2 \mid (y+z)## as ##y, z## are odd, and that means ##2n(y + z) \equiv 0 \pmod 4##.
We realize that no two of x,y,z can be odd.

What if we have one odd number (say ##z##)? Then ##z(z + 2n) = (2n)^2 - x(x+2n) - y(y+2n)##. A contradiction, as the LHS is odd and the RHS is even.

Then all of ##x, y, z## must be even. Write ##x = 2\alpha, y = 2\beta, z = 2\gamma##; then from (1):
$$ n^2 = \alpha(\alpha + n) + \beta(\beta + n) + \gamma( \gamma + n) $$
which implies $$ ( \alpha + \beta + \gamma )( \alpha + \beta + \gamma + n ) = 2( \alpha \beta + \beta \gamma + \gamma \alpha ) + n^2 $$
##n## cannot be odd here, for then the RHS would be odd while the LHS is always even: if ##\alpha + \beta + \gamma## is even, the first factor is even, and if it is odd, then ##\alpha + \beta + \gamma + n## is even.
Therefore ##n## must be even, so let ##n = 2m##:
$$ (2m)^2 = \alpha(\alpha + 2m) + \beta(\beta + 2m) + \gamma( \gamma + 2m) $$.
This is exactly the same as equation (1) !
So if we go through the same arguments as before, ##m## would also have to be even, and then ##m/2## as well, and so on ad infinitum: ##n## would have to be divisible by arbitrarily high powers of ##2##, which is impossible unless ##n = 0##, and ##n = 0## gives no admissible solution (the ##n_i## are denominators). Therefore, no such integers exist.

* I used the fact that squares of odd integers are always congruent to 1 mod 4. ## (2k + 1)^2 = 4( k^2 + k ) + 1 ##
In all seriousness, if this proof works, it is the strangest, most hand wavy proof I've ever done. I am not aware whether arguments involving infinity are commonly used in such problems, let alone whether they are rigorous enough.
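Not needed for the argument, but a quick brute-force check of small cases is reassuring (a sketch; the search bound N = 50 is an arbitrary choice):

import math

# Search for 7*n^2 = a^2 + b^2 + c^2 with 1 <= n <= N and 0 <= a <= b <= c.
N = 50
solutions = []
for n in range(1, N + 1):
    target = 7 * n * n
    for a in range(math.isqrt(target) + 1):
        for b in range(a, math.isqrt(target - a * a) + 1):
            c_squared = target - a * a - b * b
            c = math.isqrt(c_squared)
            if c >= b and c * c == c_squared:
                solutions.append((n, a, b, c))
print(solutions)  # expected: [] -- no representations for any nonzero n up to N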
 
  • #99
ItsukaKitto said:
My take on problem 15:

I first prove that the equation ## 7n^2 = a^2 + b^2 + c^2 ## has no integer solutions. If we can show this, we will have proved that there are no integer solutions to ## 7(n_1 n_2 n_3)^2 = (m_1 n_2 n_3 )^2 + (m_2 n_3 n_1 )^2 + (m_3 n_1 n_2 )^2 ## and thus no three rational numbers ## p = m_1/n_1 , q = m_2/n_2, r = m_3/n_3 ## whose squares sum to 7.

We write the equation as ## (2n)^2 = (a - n)(a+n) + (b-n)(b+n) + (c-n)(c+n) ##, and substituting ## x = a - n, y = b - n, z = c - n ## we get: $$ (2n)^2 = x(x+2n) + y(y+2n) + z(z + 2n) $$ (1)
Now suppose that ## x, y, z ## are all odd, but this means that each term of the form ## x(x+2n) ## must be odd too, as ## x ## and ## x + 2n ## are both odd. The sum of any three odd numbers must also be odd, but the left hand side ## (2n)^2 ## is even! Therefore, ## x, y, z ## cannot all be odd.

Then suppose any two of ## x, y, z ## are odd; without loss of generality, say ## y, z ##. Since x is even ## 4 | x(x + 2n) ## and of course ## 4| (2n)^2 ## , so that ## 4 | y(y+2n) + z(z + 2n) = y^2 + z^2 + 2n( y + z ) ##
## y^2 ## and ## z^2 ## are each squares of odd integers, so that ## y^2 + z^2 \equiv (1+1) ## mod ## 4 \equiv 2 ## mod ## 4 ##.*
This means that ## 2n( y + z ) \equiv 2 ## mod ## 4 ## but how can this be?
## 2|(y+z) ## as ## y,z ## are odd, and that means ## 2n(y + z) \equiv 0 ## mod ## 4 ##.
We realize that no two of x,y,z can be odd.

What if we have one odd number (say z ) ? Then ## z(z + 2n) = (2n)^2 - y(y+2n) - z(z + 2n) ##. A contradiction, as the LHS is odd and the RHS is even.

Then all of ## x,y,z ## must be even. Let them be so through the substitutions ## x = 2\alpha, y = 2\beta, z = 2\gamma ##, then from (1):
$$ n^2 = \alpha(\alpha + n) + \beta(\beta + n) + \gamma( \gamma + n) $$
which implies $$ ( \alpha + \beta + \gamma )( \alpha + \beta + \gamma + n ) = 2( \alpha \beta + \beta \gamma + \gamma \alpha ) + n^2 $$
## n ## cannot be odd here, for then the RHS would be odd, and there's no way to make the LHS odd. ## ( ( \alpha + \beta + \gamma ) ## cannot be even, but if it's odd ## ( \alpha + \beta + \gamma + n ) ## is even. )
Therefore, n must be even. So let ## n = 2m ##
$$ (2m)^2 = \alpha(\alpha + 2m) + \beta(\beta + 2m) + \gamma( \gamma + 2m) $$.
This is exactly the same as equation (1) !
So if we go through the same arguments as before, we can say m would also have to be even. But then it's the same as before, and whatever m divided by 2 is should also be even...ad infinitum! We can say this number will go to infinity this way, unless we hit a zero, and the original n = 0, (but for which there is obviously no solution) Therefore, no such finite integers exist.

* I used the fact that squares of odd integers are always congruent to 1 mod 4. ## (2k + 1)^2 = 4( k^2 + k ) + 1 ##

In all seriousness, if this proof works, it is the strangest, most hand wavy proof I've ever done. I am not aware whether arguments involving infinity are commonly used in such problems, let alone whether they are rigorous enough.

Well done!

The argument "and so on ad infinitum" is usually shortened by the following trick, an indirect proof:
Assume ##n## to be the smallest solution among the positive integers; since the positive integers are well-ordered, such a smallest solution exists whenever there is any solution at all. Your descent then produces a smaller positive solution ##m##, a contradiction. Of course you first have to rule out the all-zero solution.

What you have done is basically divide the equation by ##4## and distinguish the odd and even cases for the remainder. This could have been shortened by considering the original equation (the integer version, with coprime LHS and RHS) modulo ##8## and examining the three possible remainders of a square, ##0, 1, 4.## This is the same as what you have done, only with fewer letters.
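Concretely, the residue count takes two lines to check (a quick sketch; the decisive case is ##n## odd, where the left hand side is ##\equiv 7 \pmod 8##, while for ##n## even the only ways to reach ##0## or ##4## as a sum of three of these remainders force ##a, b, c## all even, contradicting coprimality):

# Remainders of squares mod 8, and all possible remainders of a sum of three squares:
squares_mod_8 = sorted({(x * x) % 8 for x in range(8)})
print(squares_mod_8)  # [0, 1, 4]

sums_of_three = sorted({(r + s + t) % 8
                        for r in squares_mod_8
                        for s in squares_mod_8
                        for t in squares_mod_8})
print(sums_of_three)  # [0, 1, 2, 3, 4, 5, 6] -- the value 7 never occurs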
 
  • #100
fresh_42 said:
Well done!

The argument "and so on ad infinitum" is usually shortened by the following trick, an indirect proof:
Assume ##n## to be the smallest solution among the positive integers; since the positive integers are well-ordered, such a smallest solution exists whenever there is any solution at all. Your descent then produces a smaller positive solution ##m##, a contradiction. Of course you first have to rule out the all-zero solution.

What you have done is basically divide the equation by ##4## and distinguish the odd and even cases for the remainder. This could have been shortened by considering the original equation (the integer version, with coprime LHS and RHS) modulo ##8## and examining the three possible remainders of a square, ##0, 1, 4.## This is the same as what you have done, only with fewer letters.
Taking modulo 8 shortens the proof a lot! Somehow it wasn't immediately obvious to me 😅. Thanks for letting me know there exists a much simpler and general way to go about such problems.
 
  • #101
mathwonk said:
Ideas for problem #5:

Make the problem simpler: take B = identity map, so b =1, and take the constant a = 1, so we want to prove the map f taking x to x^2 + x, hits every point C with |C| ≤ 1/4, where |x^2| ≤ |x|^2
...
I can only say that my idea was completely different, and I cannot tell whether your argument eventually reaches the goal. Perhaps it was not a good idea to post this problem for the Math Challenge. All my previous problems were easily solved, so I decided to post something more interesting, and I posted an example from my article. I feel this was too much.

 
  • #102
Of course I am just pointing out that the inverse function theorem implies that the map f(x) = x^2 + x does map the ball of radius 1/2 around zero onto some open ball around 0, and the goal is to give a concrete radius for such an image ball, namely 1/4. My homotopy argument apparently works as given in equal finite dimensions (oh yes, and by restriction to a subspace, also in the unequal finite-dimensional case), and I did not pursue whether the existing proofs of the inverse function theorem, or the surjective mapping theorem, can be made explicit in the Banach space case. My intent was to leave it for someone else, at least for some more weeks, since August is not over. In fact I think your problem is quite interesting, and I am very glad you posted it. Not all problems have to be, nor should be, easy. Thank you for the rather nice and instructive problem.
 
  • #103
f(x) = g(x) = x
a = 1.5
b = .5
g(a) - g(b) = 1
f(a) - f(b) = 1
QED
 
  • #104
Buzz Bloom said:
f(x) = g(x) = x
a = 1.5
b = .5
g(a) - g(b) = 1
f(a) - f(b) = 1
QED

You can actually do it when f(x) = x pretty easily. If g(1) = 1, take a = 0 and b = 1. Otherwise the function g(x+1) - g(x) is smaller than 1 for one of x = 0 or x = 1 and larger than 1 for the other, so somewhere in between it must be equal to 1.
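Spelled out (a sketch, assuming the end conditions are ##f(0) = g(0) = 0## and ##f(2) = g(2) = 2## and that we want ##f(a) - f(b) = g(a) - g(b) = 1##): with ##f(x) = x## we need ##a = b + 1## and ##g(b+1) - g(b) = 1## for some ##b \in [0,1]##. Setting ##h(x) := g(x+1) - g(x)##,
$$ h(0) + h(1) = g(2) - g(0) = 2, $$
so either ##h(0) = h(1) = 1##, or one of the two values is ##< 1## and the other ##> 1##, and the intermediate value theorem gives some ##x## with ##h(x) = 1##.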
 
  • #105
Hi @Office_Shredder:

I think I now understand that my answer was incomplete: the problem wants a pair a and b for every pair of functions f and g that are both continuous and satisfy the end conditions at 0 and 2. I should have got that sooner, since using just a single pair f and g was very simple.

I guess I do not read the math language of symbols very well. The language has changed a lot since I was in college. If I had been writing the problem in the old days, I would have used:
"... for any pair f,g of continuous functions...".

I will now think a bit harder about the problem.

Regards,
Buzz
 
