Math Challenge - February 2020

Summary
The Math Challenge - February 2020 thread features various mathematical problems, with several solved by users. Key discussions include limits involving cosine functions, polynomial interpolation for distinct real numbers, and properties of smooth closed curves using Green's formula. Additionally, the thread addresses uniform continuity of functions and the behavior of Fibonacci numbers. The conversation also touches on the interpretation of mathematical symbols and concepts, particularly regarding angles and integrals, showcasing a collaborative problem-solving environment. Overall, the thread highlights a range of mathematical topics and community engagement in solving complex problems.
  • #91
I don't know if @fresh_42 didn't want the fact that ##\mathbb{Z}_p^*## is cyclic to be used for problem 9, but the result follows easily from it. If ##a=b^2\in\mathbb{Z}_p^*## is a square, then ##\lambda_a=\lambda_{b}\circ\lambda_{b}## is even. On the other hand, if ##a## is not a square, then it is an odd power of a generator, and hence also a generator. So ##\lambda_a## is a ##(p-1)##-cycle, which is odd.
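For readers who want to see the parity claim in action, here is a quick numerical check in Python (an illustrative sketch, not part of the argument; it assumes ##\lambda_a## is the permutation ##x \mapsto ax## of ##\{1,\dots,p-1\}##, and uses Euler's criterion to test for squares):

```python
# Quick sanity check of the claim: multiplication by a on Z_p^* is an even
# permutation exactly when a is a quadratic residue mod p.
def sign_of_multiplication(a, p):
    """Sign of the permutation x -> a*x (mod p) on {1, ..., p-1}, via its cycle type."""
    seen = set()
    sign = 1
    for start in range(1, p):
        if start in seen:
            continue
        length, x = 0, start
        while x not in seen:          # walk the cycle containing `start`
            seen.add(x)
            x = (a * x) % p
            length += 1
        sign *= (-1) ** (length - 1)  # a cycle of length L has sign (-1)^(L-1)
    return sign

def is_square_mod_p(a, p):
    """Euler's criterion: nonzero a is a square mod p iff a^((p-1)/2) = 1 (mod p)."""
    return pow(a, (p - 1) // 2, p) == 1

for p in (3, 5, 7, 11, 13, 17):
    for a in range(1, p):
        assert (sign_of_multiplication(a, p) == 1) == is_square_mod_p(a, p)
print("parity matches quadratic-residue status for all tested primes")
```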
 
  • #92
Infrared said:
I don't know if @fresh_42 didn't want the fact that ##\mathbb{Z}_p^*## is cyclic to be used for problem 9, but the result follows easily from it. If ##t## is a generator, and ##a\in\mathbb{Z}_p^*## is a square, then ##a=t^{2k}## for some integer ##k##, so ##\lambda_a=\lambda_{t^k}\circ\lambda_{t^k}## is even. On the other hand, if ##a=t^{2k+1}##, then ##a## is also a generator, so ##\lambda_a## is a ##(p-1)##-cycle and hence odd.
It does no harm to give an elementary proof.
 
  • #93
QuantumQuest said:
Well done @etotheipi. It's a very nice shortcut for this particular case. However, there is a more general way we can solve problems like this. I won't tell what it is, but if anyone else wants to try the problem and comes up with a correct solution, he/she will also get credit.

Can you explain what you mean by problems like this? ##A## is a specific matrix, which is its own inverse. What sort of generalisation are you looking for?
 
  • #94
PeroK said:
Can you explain what you mean by problems like this? ##A## is a specific matrix, which is its own inverse. What sort of generalisation are you looking for?

Well, there is a specific procedure we can follow, a relation that comes out of it, and then the classic way we treat problems like this.
 
  • #95
QuantumQuest said:
Well, there is a specific procedure we can follow, a relation that comes out of it, and then the classic way we treat problems like this.
That's all very well, but I have no idea what the question is asking! Problems like what? Perhaps no one has solved it because no one knows what you're driving at.
 
  • #96
In question 1) is the power of n inside the cosine or outside it?
 
  • #97
PeroK said:
That's all very well, but I have no idea what the question is asking! Problems like what? Perhaps no one has solved it because no one knows what you're driving at.

I'm not saying anything cryptic. When I say problems like this I literally mean it: problems that ask us to calculate some power(s) of a matrix. In standard curricula for mathematicians there is a standard way to treat problems like this - the solution given so far in the challenge is absolutely acceptable, but there is a more formal way to tackle the whole thing ;)
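For readers unsure what is meant: one common textbook procedure for powers of a matrix is diagonalization (another is exploiting a relation coming from the Cayley-Hamilton theorem). Whether either of these is the procedure @QuantumQuest has in mind is not stated in the thread; the sketch below only illustrates the diagonalization route on a made-up matrix, not the challenge's ##A##:

```python
# Sketch of the diagonalization route to matrix powers: if M = P D P^{-1}
# with D diagonal, then M^n = P D^n P^{-1}. The matrix here is a placeholder,
# NOT the matrix A from the challenge.
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
n = 10

eigenvalues, P = np.linalg.eig(M)   # M is symmetric, hence diagonalizable
Mn = P @ np.diag(eigenvalues ** n) @ np.linalg.inv(P)

# Cross-check against repeated multiplication.
assert np.allclose(Mn, np.linalg.matrix_power(M, n))
print(Mn)
```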
 
  • #98
skanskan said:
In question 1) is the power of n inside the cosine or outside it?
Outside
 
  • #99
Since the solution I came up with is relatively simple and ignores @fresh_42’s hint, I feel like I must be completely wrong.

Consider the operator ##\mathrm{B_1}: \mathcal{H}\rightarrow\mathcal{H}## defined by ##\mathrm{B_1}(f)=(\beta(f,\cdot))^*##. The question essentially is asking for a proof that ##\mathrm{B_1}## is a bijection.
Now, ##\beta(f,f)## bounded below implies that ##\mathrm{B_1}## is, too:
$$\beta(f,f)=\langle\mathrm{B_1}f,f\rangle\leq\|\mathrm{B_1}f\|\|f\|$$
by Cauchy-Schwarz;
$$\Rightarrow\|\mathrm{B_1}f\|\|f\|\geq C\|f\|^2\\
\Rightarrow\|\mathrm{B_1}f\|\geq C\|f\|; \|f\|\neq 0.$$
When ##\|f\|=0## the final inequality is trivially true. Additionally, the same can be applied to the map ##\mathrm{B}_2(f)=(\beta(\cdot,f))^*=\mathrm{B}_1^*(f)##.

Now, we use the following fact: a continuous linear map is bounded below iff it is injective with closed range.
This means that ##\mathrm{B_1}## and ##\mathrm{B_2}## are injective with closed range. By the closed range theorem, ##\mathrm{im(B_1)=ker(B_1^*)^\perp=ker(B_2)^\perp}##. But since ##\mathrm{B_2}## is injective, ##\mathrm{ker(B_2)}## is trivial, and so ##\mathrm{im(B_1)}=\mathcal{H}##. So ##\mathrm{B_1}## is injective and surjective, proving the original result.

I tried to use the hint, and was able to apply the fixed point theorem for ##\lambda\in(1,2)##, but didn’t know what to do after that.
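(For readers wondering about the closed-range fact invoked above, a standard argument, not spelled out in the thread, goes like this: if ##\mathrm{B_1}f_n \to g##, then ##\|f_n - f_m\| \leq \tfrac{1}{C}\|\mathrm{B_1}f_n - \mathrm{B_1}f_m\| \to 0##, so ##(f_n)## is Cauchy and converges to some ##f \in \mathcal{H}##; continuity of ##\mathrm{B_1}## then gives ##\mathrm{B_1}f = g##, so ##g \in \mathrm{im(B_1)}## and the range is closed.)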
 
  • #100
suremarc said:
Since the solution I came up with is relatively simple and ignores @fresh_42’s hint, I feel like I must be completely wrong.

Consider the operator ##\mathrm{B_1}: \mathcal{H}\rightarrow\mathcal{H}## defined by ##\mathrm{B_1}(f)=(\beta(f,\cdot))^*##. The question essentially is asking for a proof that ##\mathrm{B_1}## is a bijection.
Now, ##\beta(f,f)## bounded below implies that ##\mathrm{B_1}## is, too:
$$\beta(f,f)=\langle\mathrm{B_1}f,f\rangle\leq\|\mathrm{B_1}f\|\|f\|$$
by Cauchy-Schwarz;
$$\Rightarrow\|\mathrm{B_1}f\|\|f\|\geq C\|f\|^2\\
\Rightarrow\|\mathrm{B_1}f\|\geq C\|f\|; \|f\|\neq 0.$$
When ##\|f\|=0## the final inequality is trivially true. Additionally, the same can be applied to the map ##\mathrm{B}_2(f)=(\beta(\cdot,f))^*=\mathrm{B}_1^*(f)##.

Now, we use the following fact: a continuous linear map is bounded below iff it is injective with closed range.
This means that ##\mathrm{B_1}## and ##\mathrm{B_2}## are injective with closed range. By the closed range theorem, ##\mathrm{im(B_1)=ker(B_1^*)^\perp=ker(B_2)^\perp}##. But since ##\mathrm{B_2}## is injective, ##\mathrm{ker(B_2)}## is trivial, and so ##\mathrm{im(B_1)}=\mathcal{H}##. So ##\mathrm{B_1}## is injective and surjective, proving the original result.

I tried to use the hint, and was able to apply the fixed point theorem for ##\lambda\in(1,2)##, but didn’t know what to do after that.
I'm a bit skeptical about your use of duality. E.g. why is ##\operatorname{im}B_1 =\mathcal{H}\,?## Didn't you use ##\mathcal{H}=\left(\mathcal{H}^*\right)^*## with ##\beta## instead of the inner product? In other words: why does ##B_1## land in the space you claim it does? You have only added an asterisk.

I know that boundedness and continuity are equivalent. Can you reference the theorem you used to make the range closed, not only dense? I think this is the crucial point. My proof does those things in smaller steps by using Banach to actually define the specific image ##f^\dagger## we are looking for.

The whole theorem is a generalization of Riesz, so we have to be careful with duality, i.e. rigorously separate the inner product and the given bilinear form. The simple notation with an asterisk needs a bit of an explanation.
 
  • #101
@fresh_42 I think I have the solution using the hint. That hint is quite a good idea, it's probably useful to remember in general.
First, we use the Riesz representation theorem to define a mapping ##T: \mathcal{H}^* \rightarrow \mathcal{H}## from the dual space of continuous linear functionals to the Hilbert space:
$$F(x) = (T(F),x)$$
for every ##x \in \mathcal{H}##, where ##(\cdot,\cdot)## is the inner product. The theorem itself states that the vector ##T(F)## exists and is unique, thus defining the mapping above. Taking into account that the inner product is linear (in a real Hilbert space), we find that ##T## is also linear, namely:
$$(T(F+G),x) = (F+G)(x) = F(x)+G(x) = (T(F)+T(G),x)$$
Also, we define the mapping ##B##:
$$B(f)(h) = \beta(f,h)$$
which is possible for continuous bilinear forms. ##B## is linear by definition.
Now we induce the norm on our Hilbert space from the inner product, and metric from the norm naturally:
$$\lVert x \rVert_\mathcal{H} \equiv \sqrt{(x,x)} \qquad d_\mathcal{H}(x,y) \equiv \lVert x-y\rVert_\mathcal{H}$$
We define the mapping which was hinted at (there's a typo in the hint though, I assume):
$$Q(f) \equiv f -\lambda(T(B(f)) - T(F))$$
where ##\lambda## is a free parameter.
If it is possible to find ##\lambda## such that this function has a fixed point ##x##, then we have for that point:
$$Q(x) = x \Leftrightarrow T(B(x)) = T(F)$$
and this would be equivalent to the solution of our exercise, since by Riesz we certainly have unique vectors ##T(B(f))## and ##T(F)## such that, for each ##f \in \mathcal{H}##:
$$(\forall h \in \mathcal{H})( F(h) = (T(F),h) \wedge \beta(f,h) = (T(B(f)),h))$$
but the above then proves that we have a unique vector ##f## such that ##\beta(f,h) = F(h)## for all ##h \in \mathcal{H}##, hence it proves the theorem.

So, we aim to prove that ##Q(f)## has a unique fixed point, which we will prove using Banach fixed point theorem. We first want ##Q(f)## to be a contraction.
A contraction is a function ##f## on a metric space, such that:
$$d(f(x),f(y)) \leq q d(x,y)$$
for every two elements ##x## and ##y## and for some number ##q \in [0,1)##.
Using the naturally induced norm, we have:
$$\lVert Q(f) - Q(g)\rVert^2 = \lVert f -g - \lambda T(B(f-g)) \rVert^2 \stackrel{f-g=u}{=} \lVert u-\lambda T(B(u)) \rVert^2 = (u-\lambda T(B(u)),u-\lambda T(B(u))) = \lVert u\rVert^2 - 2\lambda(T(B(u)),u) + \lambda^2\lVert T(B(u))\rVert^2$$
At this point, we use the coercivity of ##\beta##, keeping in mind that we have:
$$\beta(f,h) = (T(B(f)),h) \wedge \beta(f,f) \geq C\lVert f\rVert^2$$
It follows that:
$$(T(B(u)),u) = \beta(u,u) \geq C\lVert u\rVert^2$$
$$\lVert T(B(u))\rVert^2 \stackrel{Riesz}{=} \lVert B(u)\rVert^2 \leq M^2\lVert u\rVert^2$$
where ##M## is the positive constant in the definition of boundedness of ##\beta## (##\beta## is bounded since it is continuous).
Substituting, we obtain the inequality:
$$\lVert Q(f) - Q(g)\rVert^2 \leq \lVert f-g\rVert^2 -2\lambda C\lVert f-g\rVert^2 +\lambda^2M^2\lVert f-g\rVert^2 = (1-2\lambda C + \lambda^2M^2) \lVert f-g\rVert^2$$
Now all that is left is to prove that we can pick ##\lambda## such that the above coefficient satisfies ##(1-2\lambda C+\lambda^2M^2) \in (0,1)##. We keep in mind that trivially ##C\leq M## (because ##C\lVert u\rVert^2 \leq \beta(u,u) \leq M\lVert u\rVert^2## for any ##u## by definition).
We consider the following inequalities:
$$1 - 2\lambda C + \lambda^2M^2 <1 \Leftrightarrow 0<\lambda < \frac{2C}{M^2}$$
$$C<M \Rightarrow 1 - 2\lambda C + \lambda^2M^2> 0 $$
From this we see that an acceptable choice of ##\lambda## would be ##0<\lambda<\frac{2C}{M^2}##. This choice ensures that ##Q(f)## is a contraction, hence by the Banach fixed point theorem there is a unique fixed point of ##Q(f)##, which implies the existence and uniqueness of ##f \in \mathcal{H}## such that for all ##h \in \mathcal{H}##:
$$F(h) = \beta(f,h)$$
as we have shown above when we defined ##Q##. This finishes our proof. Banach's theorem even gives an algorithm for how to arrive at this vector: as the limit ##\lim_{n\rightarrow \infty} Q(Q(\dots Q(f)))## of applying ##Q## in composition ##n## times to an arbitrary vector ##f## from our Hilbert space.
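As a concrete finite-dimensional illustration of this fixed-point iteration (not part of the proof): take ##\beta(f,h) = f^\top A h## for a symmetric positive definite matrix ##A## and ##F(h) = b^\top h##, so the sought ##f## solves ##Af = b##. The matrix and vector below are made up for the example:

```python
# Fixed-point iteration Q(f) = f - lam*(A f - b) from the proof, in R^2,
# with beta(f, h) = f^T A h coercive and bounded (A symmetric positive definite).
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

eigs = np.linalg.eigvalsh(A)
C, M = eigs.min(), eigs.max()   # coercivity constant and bound of beta
lam = C / M**2                  # a choice inside (0, 2C/M^2), as in the proof

f = np.zeros(2)                 # arbitrary starting vector
for _ in range(500):
    f = f - lam * (A @ f - b)   # one application of Q

assert np.allclose(f, np.linalg.solve(A, b), atol=1e-6)
print(f)                        # converges to the unique solution of A f = b
```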
 
  • #102
I guess it's time to give both of you - @suremarc and @Antarres - the credit for solving the problem, since you both had a correct strategy. Since this problem - the Lemma of Babuška-Lax-Milgram - is a bit confusing, I'll add my proof. There was nothing wrong with either of yours, but it's for all other readers who might have had difficulties following:

If we define a continuous function ##B(f)(g):=\beta(f,g)## then Riesz' representation theorem gives us an isometric isomorphism ##T\, : \,\mathcal{H}^* \longrightarrow \mathcal{H}## such that for every ##B(f)\in \mathcal{H}^*## there is a unique ##T(B(f))## such that ##\|B(f)\|=\|T(B(f))\|## and
$$
B(f)(g)=\langle T(B(f)) ,g \rangle_\mathcal{H} = \beta(f,g)\quad \forall \,\, g\in \mathcal{H} \quad (*)
$$
or generally ##f^*(g)=\langle T(f^*),g \rangle_\mathcal{H}\,\, \forall \,\, g\in \mathcal{H} \quad (*)##

The functionals ##B(f)## are bounded since ##\beta## is continuous, i.e. ##\|B\|## is a finite real number. We get from our lower bound
\begin{align*}
C\|f\|^2&\leq |\beta(f,f)|=\langle T(B(f)) ,f \rangle_\mathcal{H}\\
&\leq \|T(B(f))\|\cdot\|f\|= \|B(f)\|\cdot\|f\| \leq \|B\|\cdot \|f\|^2
\end{align*}
hence ##0 < \dfrac{C}{\|B\|} \leq 1.## We now define the function
$$
Q(f):=f-k\cdot \left( T(B(f)) - T(F) \right)
$$
on ##\mathcal{H}## with a real number ##k\in \mathbb{R}-\{0\}.## A vector ##f^\dagger \in \mathcal{H}## is a fixed point of ##Q## iff ##T(B(f^\dagger)) - T(F)=0 \quad (**)\,.## In general we have for all ##g\in \mathcal{H}##
\begin{align*}
T(B(f)) - T(F) \stackrel{(**)}{=} 0 &\Longleftrightarrow F(g)\stackrel{(**)}{=}B(f)(g) =\beta(f ,g)\stackrel{(*)}{=}\langle T(B(f)) ,g \rangle_\mathcal{H}\\
&\Longleftrightarrow F(g) \stackrel{(*)}{=} \langle T(F),g \rangle_\mathcal{H} \stackrel{(**)}{=} \langle T(B(f)) ,g \rangle_\mathcal{H}\\
&\Longleftrightarrow \langle T(B(f))-T(F),g\rangle_\mathcal{H} \stackrel{(**)}{=} 0
\end{align*}
again by Riesz' representation theorem and the equations above. As ##g\in \mathcal{H}## is arbitrary, we may set ##g:=T(B(f^\dagger ))-T(F)## for a fixed point of ##Q## and get ##\|T(B(f^\dagger ))-T(F)\|^2=0## hence
$$
B(f^\dagger)=\beta(f^\dagger,-)=F
$$
which is what had to be shown. Thus all that's left to show is that such a unique fixed point ##f^\dagger## of ##Q## exists, which we will prove with Banach's fixed point theorem.
\begin{align*}
\|Q(f)-Q(g)\|^2 &= \| f-k(T(B(f))-T(F))-g+k(T(B(g))-T(F)) \|^2\\
&= \langle (f-g)-kT(B(f-g)),(f-g)-kT(B(f-g)) \rangle_\mathcal{H}\\
&\stackrel{(*)}{=} \|f-g\|^2 -2k\, \langle T(B(f-g)) , f-g\rangle_\mathcal{H} +k^2\,\|T(B(f-g))\|^2 \\
&\stackrel{(*)}{=} \|f-g\|^2 -2k \,\beta(f-g,f-g) +k^2\, \|B(f-g)\|^2 \\
&\leq \|f-g\|^2 -2k\, C\,\|f-g\|^2 + k^2\, \|B\|^2\,\|f-g\|^2 \\
&= \|f-g\|^2 \left( 1-2k\,C+k^2\,\|B\|^2 \right) \\
&\stackrel{\text{set }k:=C/\|B\|^2}{=} \|f-g\|^2 \left(1 - \dfrac{C^2}{\|B\|^2}\right)
\end{align*}
We have seen that ##\dfrac{C}{\|B\|} \in (0,1]##, hence ##q:=1 - \dfrac{C^2}{\|B\|^2} \in [0,1)## and ##\|Q(f)-Q(g)\|^2\leq q\,\|f-g\|^2##, and the statement follows from Banach's fixed point theorem.
 
  • #103
archaic said:
$$\begin{align*}
\lim_{n\to\infty}\cos\left(\frac{t}{\sqrt{n}}\right)^n&=\lim_{n\to\infty}e^{n\ln\left(\cos\left(\frac{t}{\sqrt{n}}\right)\right)}\\
&=\lim_{n\to\infty}e^{n\ln\left(1-\frac{t^2}{2n}+o(\frac{1}{n})\right)}\\
&=\lim_{n\to\infty}e^{n\left(-\frac{t^2}{2n}+o(\frac{1}{n})\right)}\\
&=e^{-\frac{t^2}{2}}
\end{align*}$$
Tried to PM the solver but couldn't. I have some questions about the steps here, particularly how the cosine was changed into the algebraic expression?
 
  • #104
Hsopitalist said:
Tried to PM the solver but couldn't. I have some questions about the steps here, particularly how the cosine was changed into the algebraic expression?
https://en.wikipedia.org/wiki/Taylor_series
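For what it's worth, the limit (with the exponent outside the cosine, as clarified in #98) can also be checked numerically; a small illustration, with an arbitrary test value of ##t##:

```python
# Numerical check of lim_{n->oo} cos(t/sqrt(n))^n = exp(-t^2/2),
# which rests on the Taylor expansion cos(x) = 1 - x^2/2 + o(x^2) used above.
import math

t = 1.7                          # arbitrary test value
target = math.exp(-t**2 / 2)
for n in (10, 100, 10_000, 1_000_000):
    print(n, math.cos(t / math.sqrt(n)) ** n, target)
```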
 
  • #105
Oh man, thank you!
 
  • #106
Hsopitalist said:
Oh man, thank you!
You are welcome!
 
  • #107
For #5: Let ##R## be a commutative ring. To say a subset ##I \subseteq R## is an ideal of ##R## means that ##I## is nonempty, ##I## is closed under addition, and that for any ##r \in R## and ##i \in I##, we have ##ri \in I##. And to say ##\operatorname{char} R \neq 2## means ##2## is not the smallest positive integer ##n## such that ##n\cdot 1 = 0##. But I'm not sure how to relate this to matrix multiplication. Would it be possible to get another hint, please?
 
  • #108
fishturtle1 said:
For #5, Let ##R## be a commutative ring. To say a subset ##I \subseteq R## is an ideal of ##R## means that ##I## is nonempty, ##I## is closed under addition and that for any ##r \in R## and ##i \in I##, we have ##ri \in I##. And to say ##\operatorname{char} R \neq 2## means ##2## is not the smallest positive integer ##n## such that ##n\cdot 1 = 0##. But I'm not sure how to relate this to matrix multiplication. Would it be possible to get another hint, please?
The matrix has a certain additive decomposition (*) of which one part is in a certain ideal. The decomposition series of that ideal together with a certain property of (*) then solves the question.
 
