member 587159
nuuskur said: This implies there exists c\in\mathbb R such that F = I_{[c,\infty]}.

Explain this line.
nuuskur said: This implies there exists c\in\mathbb R such that F = I_{[c,\infty]}.
Effectively it boils down to the discontinuity of ##\arg z##. I thought that was trivial and well established!

Math_QED said: Hmm, I need a little more detail here. Explain why the discontinuities arise. I find your argument a bit too handwavy.
Ok, here are the details. Firstly, since g is injective, g(\mathbb C^*) \cong \mathbb C^* as sets. That means any u\neq 0 can be written uniquely as u=g(z). But we also have a homeomorphism with the obvious choice (this is forced, really) f(g(z)) := z,\ z\neq 0. For continuity of f:g(\mathbb C^*) \to \mathbb C^*, suppose g(z_n) \to g(z); then continuity of the exponential map implies e^{g(z_n)} \to e^{g(z)}, which by assumption is the same as z_n\to z, so f is continuous.

nuuskur said: We can kill a fly with a fly swatter. Don't need the rifle.
Suppose g:\mathbb C^* \to\mathbb C is continuous and satisfies the identity e^{g(z)} = z. Since g is a right inverse to \exp, it is injective, thus g(\mathbb C^*) \cong \mathbb C^*. But now \exp is forced to be injective on \mathbb C^*. Indeed, for any u,v\neq 0 we have
$$e^u = e^v \Leftrightarrow e^{g(z)} = e^{g(w)} \Leftrightarrow z=w \Rightarrow u=v.$$
But that's impossible.
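Numerically, the impossibility is just the ##2\pi i##-periodicity of ##\exp##, which rules out injectivity on ##\mathbb C^*##:

```python
import cmath

# exp is 2*pi*i periodic, so it cannot be injective on C*:
# u and v = u + 2*pi*i are distinct but have the same exponential.
u = 1.0 + 1.0j
v = u + 2j * cmath.pi
assert u != v
assert abs(cmath.exp(u) - cmath.exp(v)) < 1e-9
```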
By definition of limit, as x\to -\infty there must be A>0 such that x\leq -A implies F(x) = \mathbb P\{X\leq x\}=0. Since F only takes the values 0 and 1 and F(x)\to 1 as x\to\infty, the set \{x\in\mathbb R \mid F(x)=1\} is nonempty and bounded from below, so take its infimum, c := \inf \{x\in\mathbb R \mid F(x)=1\}. Whatever happens for x>c must occur with probability 1 (otherwise c wouldn't be the infimum). There can be no in-betweens, so F = I_{[c,\infty]} is forced by the right continuity of F, i.e. F(c+) = 1.

Math_QED said: Explain this line.
nuuskur said: That means any u\neq 0 can be written uniquely as u=g(z).
Oops, that's bad expression by me. I really had in mind that u is identified uniquely with g(z). But now e^u = e^v \Rightarrow e^{g(z)} = e^{g(w)} might break. Thanks for noticing. The homeomorphism part still works, but it is only usable if g is an inclusion, as @Math_QED said.

Infrared said: Why?
nuuskur said: Yet I'm not convinced continuity is a required assumption. I get a feeling e^{g(z)} = z forces g to be continuous and in this case, holomorphic
Of course *smacks forehead*. I had a thought maybe |z|\leq |e^z|, but that's not true in \mathbb C.

Infrared said: It doesn't. ##g(z)=\ln|z|+i\arg(z)## is a counterexample.
We have a function ##\beta (\theta)## for ##0 \le \theta < 2\pi## with ##\cos \beta (\theta) = \cos \theta## and ##\sin \beta (\theta) = \sin \theta##. We know that ##\beta(0) = 2\pi n## for some ##n##. As adding a constant does not affect the continuity of a function, we can, wlog, take ##\beta(0) = 0##.

Math_QED said: Hmm, I need a little more detail here. Explain why the discontinuities arise. I find your argument a bit too handwavy.
fresh_42 said: 11. Let ##a < b < c < d## be real numbers. Sort ##x = ab + cd, y = bc + ad, z = ac + bd## and prove it.
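For problem 11, the ordering turns out to be ##x > z > y##, since ##x - z = (b-c)(a-d) > 0## and ##z - y = (c-d)(a-b) > 0##; a randomized sanity check (the ranges and iteration count are arbitrary):

```python
import random

# Sanity check for problem 11: for a < b < c < d,
# x = ab + cd, z = ac + bd, y = ad + bc should satisfy x > z > y.
# (x - z = (b - c)(a - d) > 0 and z - y = (c - d)(a - b) > 0.)
random.seed(0)
for _ in range(10_000):
    a, b, c, d = sorted(random.uniform(-10, 10) for _ in range(4))
    if not (a < b < c < d):
        continue  # skip the measure-zero ties
    x, y, z = a * b + c * d, b * c + a * d, a * c + b * d
    assert x > z > y
```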
Well, problem #8 IS Wielandt's theorem. You just shifted the problem to Bohr-Mollerup. This is a bit like proving AC by Zorn.

Fred Wright said: Problem 8.
I first state the Bohr-Mollerup theorem.
Let ##f:(0, \infty ) \rightarrow \mathbb R^+## be a function satisfying:
(i) ##f(x+1)=xf(x)##;
(ii) ##f## is a log-convex function;
(iii) ##f(1)=1##.
Then ##f(x)=\Gamma (x)## on its domain.
##\Gamma## is meromorphic. The identity theorem states: If two meromorphic functions in ##\mathbb C## agree on a set with a limit point in ##\mathbb C##, then they agree everywhere in ##\mathbb C##. In particular, two meromorphic functions that agree on ##(0, \infty )## agree everywhere on ##\mathbb C##.
Thus condition (i) above holds in the complex plane: the two meromorphic functions ##\Gamma (z+1)## and ##z\Gamma (z)## agree on ##(0, \infty )##, which implies ##\Gamma (z+1)=z\Gamma (z)## for all ##z \in \mathbb C##.
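On the real axis, condition (i) is easy to spot-check numerically (the standard library's ##\Gamma## only accepts real arguments, so this merely illustrates the recurrence there, not the complex statement):

```python
import math

# Spot-check Gamma(x + 1) = x * Gamma(x) for a few positive reals.
for x in [0.5, 1.0, 2.5, 3.7, 10.0]:
    lhs = math.gamma(x + 1)
    rhs = x * math.gamma(x)
    assert math.isclose(lhs, rhs, rel_tol=1e-12)
```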
Wielandt's theorem (https://www.jstor.org/stable/2975370) states that condition (ii) "f is a log-convex function" can be replaced by "##f## is bounded on the strip ##\{ z\in \mathbb C \mid 1 \leq \mathfrak{R}(z) \leq 2\}##".
Because ##F(z)## is bounded on the strip ##\{ z\in \mathbb C \mid 1 \leq \mathfrak{R}(z) \leq 2\}##, it satisfies the Bohr-Mollerup theorem, extended to the complex plane, up to a real constant ##F(1) = a##, and thus ##F(z)=F(1)\Gamma (z)##.
Nor is it true in ##\mathbb{R}## (negative numbers)!

nuuskur said: Of course *smacks forehead*. I had a thought maybe |z|\leq |e^z|, but that's not true in \mathbb C.
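A one-line numerical check of the real counterexample:

```python
import math

# |x| <= |e^x| already fails on the negative reals:
x = -10.0
assert abs(x) > abs(math.exp(x))  # 10 > e^{-10}, which is about 4.5e-5
```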
PeroK said: We have a function ##\beta (\theta)## for ##0 \le \theta < 2\pi## with ##\cos \beta (\theta) = \cos \theta## and ##\sin \beta (\theta) = \sin \theta##. We know that ##\beta(0) = 2\pi n## for some ##n##. As adding a constant does not affect the continuity of a function, we can, wlog, take ##\beta(0) = 0##.
The technical point outstanding is that if ##\beta## is continuous, then ##\beta(\theta) = \theta##.
In general, we have ##\beta(\theta) = \theta + 2\pi n(\theta)## with ##n(0) = 0##, as above.
The only continuous functions of the form ##2\pi n(\theta)## are constant functions. I'll spare everyone an epsilon-delta proof of that. Therefore ##n(\theta) = 0##, and on the unit circle we must have ##g(z) = i\theta = i\arg z## for ##0 \le \theta < 2\pi##.
The second technical point is that ##\arg(z)## is discontinuous on the unit circle. I'll quote that as a known result.
In any case, that proves that ##g## cannot be continuous.
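The jump in the ##[0, 2\pi)## branch of ##\arg## can be exhibited numerically; the helper `arg2pi` below is a hypothetical name for that branch:

```python
import cmath

def arg2pi(z):
    # argument of z normalized to [0, 2*pi)
    return cmath.phase(z) % (2 * cmath.pi)

# Approach z = 1 on the unit circle from below (theta -> 2*pi-)
# and compare with theta = 0: the branch jumps by almost 2*pi.
below = arg2pi(cmath.exp(1j * (2 * cmath.pi - 1e-6)))
at = arg2pi(1 + 0j)
assert below > 6.28      # close to 2*pi
assert at < 1e-9         # equal to 0
assert below - at > 6.0  # a genuine jump, so arg is not continuous at 1
```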
nuuskur said: For every x\in\mathbb R we have \mathbb P\{X\leq x\}\in \{0,1\}. Let F be the distribution function for X. F is right continuous and we have \lim _{x\to\infty} F(x) = 1 and \lim _{x\to -\infty}F(x) = 0. This implies there exists c\in\mathbb R such that F = I_{[c,\infty]}. Now \mathbb P\{X<c\} = F(c-) = 0, therefore X=c a.s.
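A small numerical illustration of the quoted argument (with the arbitrary choice of the constant variable ##X = c##, ##c = 2##, and an arbitrary grid):

```python
# Illustration: for a 0-1 valued distribution function F,
# c = inf{x : F(x) = 1} and F is the indicator of [c, infinity).
c = 2.0

def F(x):
    # distribution function of the constant random variable X = c
    return 1.0 if x >= c else 0.0

# approximate inf{x : F(x) = 1} on a grid
grid = [k / 100 for k in range(-500, 501)]
c_hat = min(x for x in grid if F(x) == 1.0)
assert abs(c_hat - c) < 1e-9
# F agrees with the indicator of [c, infinity) on the grid
assert all(F(x) == (1.0 if x >= c else 0.0) for x in grid)
```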
My spider sense is tingling. \Gamma (z) / F(z) is entire. Liouville's theorem? Or maybe the theorem that allows finite-order entire functions without zeros to be written as e^{P(z)} for some polynomial P. (Hadamard?)

fresh_42 said: There is still another - in a way elementary - proof possible. It uses a common theorem of complex analysis.
It's not like I look at the exercise and come up with a solution. At some point I've done similar things, so it's a matter of reminding myself of definitions/results and relevant techniques. As you can also see, I make mistakes, so it's not smooth sailing. Most of what I do is wrong; I have stacks of paper scribbled full of gibberish and failed attempts at the problems in the OP.

Math_QED said: I guess I must come up with less routine exercises since you seem to solve all of them ;)
Liouville, but not for the quotient.

nuuskur said: My spider sense is tingling. \Gamma (z) / F(z) is entire. Liouville's theorem? Or maybe the theorem that allows finite-order entire functions without zeros to be written as e^{P(z)} for some polynomial P. (Hadamard?)
Yes. There are certainly real solutions, but none with three rational numbers.

Mayhem said: For question 15, should it be understood that it simply means that there is no solution where ##p, q, r \in \mathbb{Q}##?
nuuskur said: That said, @Math_QED, I gave a black magic approach in #32.
What I did:

fresh_42 said: Yes. There are certainly real solutions, but none with three rational numbers.
Hint: Reduce the question to an integer version of the statement.
It is a bit tricky. We can multiply the equation by its common denominator and get four new variables instead, but all integers. Then we can assume that the LHS and the RHS do not share a common factor. With this preparation we examine the equation modulo a suitable number ##n##, i.e. we consider the remainders which occur upon division by ##n##.

Mayhem said: What I did:
I broke ##p,q,r## up into fractions of integers, expanded the squares, and added the fractions together. This would mean that the numerator and denominator of the resulting fraction must be integers, but I don't know how to prove, or in this case disprove, that.
fresh_42 said: 12. Prove ##{\overline{CP}\,}^{2} = \overline{AP} \cdot \overline{BP}##
(Diagram: redrawing of the original figure, with the relevant triangles and angles highlighted.)
Pasted above is a redrawing of the original diagram with some triangles and angles highlighted for use in the proof. The proof is based on similarity of triangles ##\triangle{CAP}## and ##\triangle{BCP}##. Let ##\angle{BMA}=\alpha## and ##\angle{BMC}=\beta##. Note that triangles ##\triangle{BMA}##, ##\triangle{BMC}## and ##\triangle{CMA}## are all isosceles, since ##\overline{MA} = \overline{MB} = \overline{MC}## as these line segments equal the radius of the circle. Since angles opposite equal sides of a triangle must be equal, it follows that
- ##\angle{BAM} = \angle{ABM} = 90^{\circ} - \dfrac {\alpha} {2}##
- ##\angle{CAM} = \angle{ACM} = 90^{\circ} - \dfrac {\alpha + \beta} {2}##
- ##\angle{BCM} = \angle{CBM} = 90^{\circ} - \dfrac {\beta} {2}##
As per the original diagram, the line segment ##PC## is tangent to the circle. Therefore, ##\angle{PCM} = 90^{\circ} \Rightarrow \angle{BCP} = \angle{PCM} - \angle{BCM} = \dfrac {\beta} {2}##.
And ##\angle{CAP} = \angle{BAC} = \angle{BAM} - \angle{CAM} = \dfrac {\beta} {2}##.
Comparing the triangles ##\triangle{CAP}## and ##\triangle{BCP}##, we find that they must be similar (##\triangle{CAP} \sim \triangle{BCP}##) since they both have a common angle ##\angle{BPC}## and they also have another pair of corresponding angles, namely ##\angle{BCP} = \angle{CAP} = \dfrac {\beta} {2}##.
From the similarity of these triangles, it follows that their corresponding sides must be in the same proportion. Therefore,
$$\dfrac {\overline{BP}} {\overline{CP}} = \dfrac {\overline{CP}} {\overline{AP}}
\Rightarrow \overline{CP} \cdot \overline{CP} = {\overline{CP}\,}^{2} = \overline{AP} \cdot \overline{BP}
$$
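As a sanity check, the tangent-secant relation can be verified with coordinates; every specific value below (circle, tangent point, secant) is an arbitrary choice:

```python
import math

# Numeric check of CP^2 = AP * BP for the unit circle centered at M = (0, 0).
# C is the tangent point, P lies on the tangent line at C, and a secant
# through P meets the circle at A and B.
theta = 0.7
C = (math.cos(theta), math.sin(theta))
tangent_dir = (-math.sin(theta), math.cos(theta))  # unit vector, perpendicular to MC
t = 2.3                                            # distance from C to P along the tangent
P = (C[0] + t * tangent_dir[0], C[1] + t * tangent_dir[1])

# Secant through P aimed at an interior point Q, so it must cross the circle.
Q = (0.2, 0.1)
n = math.hypot(Q[0] - P[0], Q[1] - P[1])
d = ((Q[0] - P[0]) / n, (Q[1] - P[1]) / n)         # unit direction

# Intersect P + s*d with x^2 + y^2 = 1: s^2 + b*s + c = 0.
b = 2 * (P[0] * d[0] + P[1] * d[1])
c = P[0] ** 2 + P[1] ** 2 - 1
disc = b * b - 4 * c
assert disc > 0                                    # the line really is a secant
s1 = (-b + math.sqrt(disc)) / 2
s2 = (-b - math.sqrt(disc)) / 2

CP2 = t * t                 # |PC|^2, since |tangent_dir| = 1
AP_BP = abs(s1) * abs(s2)   # |PA| * |PB|, since |d| = 1
assert math.isclose(CP2, AP_BP, rel_tol=1e-9)
```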
#55 for P7 might be found in the same waters.

Math_QED said: It got lost in the sea of other posts haha. I will have a look.
Ok, I was playing around with it a bit. Note that G(z) := F(z) - F(1)\Gamma (z) is holomorphic on H(0). Since \Gamma also satisfies the functional equation, we have zG(z) = G(z+1). Note that G(1) = 0.

fresh_42 said: Liouville, but not for the quotient.
By the theorem in the link we have an entire extension of G to \mathbb C, call it g. By assumption g(z) is bounded for 1\leq \mathrm{Re}(z)\leq 2. This implies g is bounded for 0\leq \mathrm{Re}(z)<1: there we have 1\leq \mathrm{Re}(z+1) < 2, so by assumption g(z) = \frac{g(z+1)}{z} is bounded.

Fred Wright said: https://www.jstor.org/stable/2975370
nuuskur said: In other words we have \{x\}= \bigcap _{n\in\mathbb N} \{U\subseteq X \mid x\in U,\ U\text{ is open and }f(U) \subseteq B(f(x), n^{-1}) \}.
we have the ##G_\delta## set
$$\bigcap _{n\in\mathbb N} \bigcup \left\{ U\subseteq X \mid U\text{ is open, }\ \sup_{u,v\in U} d_Y(f(u),f(v)) \leq n^{-1} \right\} = C(f).$$
I messed up there; even if the semantics is fixed, the logic breaks, because I was envisioning something that behaves too nicely. For instance, take a constant map. Fixing open sets x\in U_n such that f(U_n) \subseteq B(f(x),n^{-1}) does not imply the U_n become smaller around x - contrary to what I had in mind. Disregard that part entirely.

Math_QED said: I think there is a problem with your intersection, at least semantically. You have an intersection ##\bigcap_{n \in \Bbb{N}} A_n## with ##A_n \subseteq \mathcal{P}(X)##. Thus semantically your intersection is a subset of ##\mathcal{P}(X)##. However, the left hand side is ##\{x\}## which is a subset of ##X## (and not of ##\mathcal{P}(X))##. You might want to fix/clarify this.
The equality in question is sufficient to solve the problem. Suppose f is continuous at x. Fix n\in\mathbb N. Take the open set B(f(x),(2n)^{-1}); it also satisfies the supremum condition (the diameter is twice the radius). By continuity at x, there exists an open set satisfying x\in U and f(U) \subseteq B(f(x), (2n)^{-1}).

Math_QED said: Next, explain why the equality you then obtain is true (write out the inclusion ##\supseteq##).
nuuskur said: I messed up there; even if the semantics is fixed, the logic breaks, because I was envisioning something that behaves too nicely. For instance, take a constant map. Fixing open sets x\in U_n such that f(U_n) \subseteq B(f(x),n^{-1}) does not imply the U_n become smaller around x - contrary to what I had in mind. Disregard that part entirely. The equality in question is sufficient to solve the problem. Suppose f is continuous at x. Fix n\in\mathbb N. Take the open set B(f(x),(2n)^{-1}); it also satisfies the supremum condition (the diameter is twice the radius). By continuity at x, there exists an open set satisfying x\in U and f(U) \subseteq B(f(x), (2n)^{-1}).
The converse reads exactly like the definition of continuity.
Math_QED said: Can you edit or make a new attempt at the question so everything reads smoothly?
nuuskur said: Let C(f) := \{x\in X \mid f\text{ is continuous at }x\} (may also be empty). It is sufficient to show C(f) is a Borel set. Then C(f)^c = D(f) is also a Borel set. We show C(f) is a countable intersection of open sets. By definition, f is continuous at x, if
$$\forall n\in\mathbb N,\quad \exists\delta >0,\quad f(B(x,\delta)) \subseteq B(f(x), n^{-1}).$$
Note that arbitrary unions of open sets are open. We have the following equality
$$C(f) = \bigcap _{n\in\mathbb N} \bigcup\left \{U\subseteq X \mid U\text{ is open and } \sup _{u,v\in U} d_Y(f(u),f(v)) \leq n^{-1} \right \}.$$
Indeed, let f be continuous at x. Take the open set B(f(x), (2n)^{-1})\subseteq Y. By continuity at x, there exists an open set U\subseteq X with x\in U and f(U) \subseteq B(f(x), (2n)^{-1}). We then have d_Y(f(u),f(v)) \leq n^{-1} for all u,v\in U. The converse inclusion reads exactly as the definition of continuity at a point.
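A numerical sketch (not a proof) of the oscillation description of ##C(f)##, using the step function ##f = 1_{(0,\infty)}## as an example; the grid sizes are arbitrary:

```python
# A point x lies in C(f) iff for every n there is an open U containing x
# with sup_{u,v in U} |f(u) - f(v)| <= 1/n. We approximate that supremum
# over small balls for the step function f = 1_{(0, inf)} on the reals.

def f(x):
    return 1.0 if x > 0 else 0.0

def oscillation(x, radius, samples=200):
    # sup - inf of f over the interval (x - radius, x + radius), sampled
    pts = [x - radius + 2 * radius * k / samples for k in range(samples + 1)]
    vals = [f(p) for p in pts]
    return max(vals) - min(vals)

# Oscillation at 0 stays 1 no matter how small the ball: 0 is not in C(f).
assert oscillation(0.0, 1e-6) == 1.0
# Away from 0 the oscillation vanishes for small enough balls: x is in C(f).
assert oscillation(0.5, 1e-3) == 0.0
```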
mathwonk said: Suggestion for #7:
Consider the induced map ##f\times f: X\times X \to Y\times Y##. Claim: ##f## is continuous at ##p## iff ##(p,p)## is an interior point of the inverse image of every neighborhood of the diagonal. Hence for a shrinking countable sequence of such neighborhoods, the set of points of continuity is the intersection of the (diagonal of ##X\times X## with the) interiors of those inverse images, hence a ##G_\delta## set. Note that ##X \approx## the diagonal of ##X\times X##.
The argument I gave doesn't really make use of the fact that X is a metric space. We can relax it to f being a map from a topological space to a metric space Y. Not sure right now how far Y can be relaxed.

Math_QED said: I would be interested in knowing what minimal assumptions are such that this holds. I guess the Hausdorffness condition and some countability assumption might be necessary.
nuuskur said: The argument I gave doesn't really make use of the fact that X is a metric space. We can relax it to f being a map from a topological space to a metric space Y. Not sure right now how far Y can be relaxed.
ItsukaKitto said: My take on problem 15:
I first prove that the equation ## 7n^2 = a^2 + b^2 + c^2 ## has no integer solutions. If we can show this, we will have proved that there are no integer solutions to ## 7(n_1 n_2 n_3)^2 = (m_1 n_2 n_3 )^2 + (m_2 n_3 n_1 )^2 + (m_3 n_1 n_2 )^2 ## and thus no three rational numbers ## p = m_1/n_1 , q = m_2/n_2, r = m_3/n_3 ## whose squares sum to 7.
We write the equation as ## (2n)^2 = (a - n)(a+n) + (b-n)(b+n) + (c-n)(c+n) ##, and substituting ## x = a - n, y = b - n, z = c - n ## we get: $$ (2n)^2 = x(x+2n) + y(y+2n) + z(z + 2n) \tag{1}$$
Now suppose that ## x, y, z ## are all odd. This means that each term of the form ## x(x+2n) ## must be odd too, as ## x ## and ## x + 2n ## are both odd. The sum of any three odd numbers is odd, but the left hand side ## (2n)^2 ## is even! Therefore, ## x, y, z ## cannot all be odd.
Then suppose exactly two of ## x, y, z ## are odd; without loss of generality, say ## y, z ##. Since ##x## is even, ## 4 \mid x(x + 2n) ## and of course ## 4 \mid (2n)^2 ##, so that ## 4 \mid y(y+2n) + z(z + 2n) = y^2 + z^2 + 2n( y + z ) ##.
## y^2 ## and ## z^2 ## are each squares of odd integers, so that ## y^2 + z^2 \equiv 1 + 1 \equiv 2 \pmod 4 ##.*
This means that ## 2n( y + z ) \equiv 2 \pmod 4 ## - but how can this be?
## 2\mid(y+z) ## as ## y,z ## are odd, and that means ## 2n(y + z) \equiv 0 \pmod 4 ##.
We realize that no two of x,y,z can be odd.
What if we have exactly one odd number (say ## z ##)? Then ## z(z + 2n) = (2n)^2 - x(x+2n) - y(y+2n) ##. A contradiction, as the LHS is odd and the RHS is even.
Then all of ## x,y,z ## must be even. Write ## x = 2\alpha, y = 2\beta, z = 2\gamma ##; then from (1):
$$ n^2 = \alpha(\alpha + n) + \beta(\beta + n) + \gamma( \gamma + n) $$
which implies $$ ( \alpha + \beta + \gamma )( \alpha + \beta + \gamma + n ) = 2( \alpha \beta + \beta \gamma + \gamma \alpha ) + n^2 $$
## n ## cannot be odd here, for then the RHS would be odd, while the LHS cannot be odd: if ## \alpha + \beta + \gamma ## is even the LHS is even, and if it is odd then ## \alpha + \beta + \gamma + n ## is even, so the LHS is even either way.
Therefore, n must be even. So let ## n = 2m ##:
$$ (2m)^2 = \alpha(\alpha + 2m) + \beta(\beta + 2m) + \gamma( \gamma + 2m). $$
This is exactly the same as equation (1) !
So if we go through the same arguments as before, m would also have to be even. But then it's the same situation again: m/2 would also have to be even, and so on ad infinitum. No positive integer is divisible by arbitrarily high powers of 2, so the descent only stops if we hit zero, i.e. the original n = 0 (for which there is obviously no solution). Therefore, no such integers exist.
* I used the fact that squares of odd integers are always congruent to 1 mod 4. ## (2k + 1)^2 = 4( k^2 + k ) + 1 ##
In all seriousness, if this proof works, it is the strangest, most hand-wavy proof I've ever done. I am not aware whether arguments involving infinity are commonly used in such problems, let alone whether they are rigorous enough.
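The integer statement itself can at least be brute-forced over a small range for reassurance (the bound ##n \le 20## is arbitrary):

```python
import math

# Brute-force check: 7*n^2 = a^2 + b^2 + c^2 has no solutions
# with 1 <= n <= 20 and 0 <= a <= b <= c <= isqrt(7*n^2).
for n in range(1, 21):
    target = 7 * n * n
    bound = math.isqrt(target)
    found = any(
        a * a + b * b + c * c == target
        for a in range(bound + 1)
        for b in range(a, bound + 1)
        for c in range(b, bound + 1)
    )
    assert not found
```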
Taking modulo 8 shortens the proof a lot! Somehow it wasn't immediately obvious to me.

fresh_42 said: Well done!
The argument "and so on ad infinitum" is usually shortened by the following trick, an indirect proof:
Assume ##n## to be the smallest positive integer solution; the descent then produces a strictly smaller positive solution ##m##, a contradiction, since the positive integers are bounded from below. Of course you first have to rule out the all-zero solution.
What you have done is basically divide the equation by ##4## and distinguish the odd and even cases of the remainder. This could have been shortened by considering the original equation (the integer version with coprime LHS and RHS) modulo ##8## and examining the three possible remainders of squares, ##0,1,4.## This is the same as what you have done, only with fewer letters.
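The modulo ##8## computation can be spot-checked by enumerating residues:

```python
# Residues of squares mod 8, and the mod-8 obstruction for 7*n^2 = a^2+b^2+c^2.
squares_mod8 = {(k * k) % 8 for k in range(8)}
assert squares_mod8 == {0, 1, 4}

# Sums of three square residues mod 8 never give 7, so the equation is
# impossible whenever n is odd (then 7*n^2 = 7 mod 8); the even case
# reduces to this by the descent above.
three_sums = {(r + s + t) % 8 for r in squares_mod8
              for s in squares_mod8 for t in squares_mod8}
assert 7 not in three_sums
```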