# Math Challenge - July 2020

• Challenge
• Featured
Mentor
2022 Award
I don't know if you're allowed to answer/give hints, but for problem 14, I had this thought:

We have two equations and two unknowns, which is a good start. However, I don't see any way to solve for ##x## in terms of ##y## (I've really tried), which makes it hard to make progress. I do recognize, though, that ##\sin^n{(x+\frac{\pi}{2})} = \cos^n{(x)}##. Does that mean that, in order for there to be any solutions at all, $$\sin^n{(x+\frac{\pi}{2})} - \cos^n{(x)} = y^4+\left(x + \frac{\pi}{2} \right)y^2-4y^2+4 - x^4+x^2y^2-4x^2+1 = 0$$

I might be completely wrong in my reasoning, but I was just wondering.

Disclaimer: I've never done a "find all solutions x,y for a function" type problem.
You cannot solve it directly. The trick is to get rid of whatever disturbs most; this is always a good plan. You can eliminate the trig functions, at the cost that the equality turns into an inequality.

If you add them, using ##\sin^4{x} + \cos^4{x} \in [\frac{1}{2},1]##, you can get the inequality $$x^4 + y^4 + 2x^2 y^2 -4(x^2 + y^2) < -4$$ $$(x^2 + y^2)^2 -4(x^2 + y^2) + 4 < 0$$ $$([x^2 + y^2] -2)^2 < 0$$That would seem to imply that there are no possible values of ##x^2 + y^2## that satisfy the inequality, and that there are no solutions.
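The bound ##\sin^4{x} + \cos^4{x} \in [\frac{1}{2},1]## used above follows from the identity ##\sin^4 x+\cos^4 x = 1-\tfrac12\sin^2(2x)##; a minimal numerical sanity check (plain Python, nothing thread-specific assumed):

```python
import math

# sin^4 x + cos^4 x = (sin^2 x + cos^2 x)^2 - 2 sin^2 x cos^2 x
#                   = 1 - (1/2) sin^2(2x), which ranges over [1/2, 1]
def g(x):
    return math.sin(x) ** 4 + math.cos(x) ** 4

xs = [i * 2 * math.pi / 1000 for i in range(1000)]
assert all(0.5 - 1e-12 <= g(x) <= 1 + 1e-12 for x in xs)
assert abs(g(0) - 1.0) < 1e-12            # maximum, attained at multiples of pi/2
assert abs(g(math.pi / 4) - 0.5) < 1e-12  # minimum, attained at odd multiples of pi/4
```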

• Mayhem
Mentor
2022 Award
That would seem to imply that there are no possible values of ##x^2+y^2## that satisfy the inequality, and that there are no solutions...
... if only you had worked properly ...

Or to say it in chess speak: Sauber setzen! ("move cleanly!")
Sorry, the rhythm doesn't work in English.

• etotheipi
ItsukaKitto
For the first problem in the high school section, I think one can use induction (I don't know if this has been done already). Noting that for ##n=1## the statement is trivial, i.e. ##a_1^2 + b_1^2 = a^2 + b^2##, and assuming the statement is true for ##n = m##,
##(a_1^2 + b_1^2)...(a_m^2 + b_m^2) = (a^2 + b^2)##, we can prove
##(a_1^2 + b_1^2)...(a_m^2 + b_m^2)(a_{m+1}^2 + b_{m + 1}^2) = (a^2 + b^2)(a_{m+1}^2 + b_{m+1}^2 ) = (aa_{m+1} + bb_{m+1} )^2 + (ab_{m+1} - ba_{m+1})^2 ##
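The induction step rests on the Brahmagupta-Fibonacci identity ##(a^2+b^2)(c^2+d^2)=(ac+bd)^2+(ad-bc)^2##, which is easy to spot-check exhaustively on a small grid; a minimal sketch:

```python
import itertools

def identity_holds(a, b, c, d):
    # Brahmagupta-Fibonacci: a product of two sums of two squares
    # is again a sum of two squares
    lhs = (a * a + b * b) * (c * c + d * d)
    rhs = (a * c + b * d) ** 2 + (a * d - b * c) ** 2
    return lhs == rhs

# exhaustive check on a small integer grid
assert all(identity_holds(a, b, c, d)
           for a, b, c, d in itertools.product(range(-5, 6), repeat=4))
```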

• fresh_42
Mentor
2022 Award
For the first problem in the high school section, I think one can use induction (I don't know if this has been done already). Noting that for ##n=1## the statement is trivial, i.e. ##a_1^2 + b_1^2 = a^2 + b^2##, and assuming the statement is true for ##n = m##,
##(a_1^2 + b_1^2)...(a_m^2 + b_m^2) = (a^2 + b^2)##, we can prove
##(a_1^2 + b_1^2)...(a_m^2 + b_m^2)(a_{m+1}^2 + b_{m + 1}^2) = (a^2 + b^2)(a_{m+1}^2 + b_{m+1}^2 ) = (aa_{m+1} + bb_{m+1} )^2 + (ab_{m+1} - ba_{m+1})^2 ##
Another possibility is to write ##a_k^2+b_k^2 =\det \begin{pmatrix}a_k&-b_k\\b_k&a_k\end{pmatrix}##, show that this form is preserved under matrix multiplication, and then use that the determinant respects matrix multiplication.
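The determinant idea can be illustrated concretely (a rough sketch; the helper names are mine): matrices of the shape ##\begin{pmatrix}a&-b\\b&a\end{pmatrix}## multiply to a matrix of the same shape, and the determinant ##a^2+b^2## is multiplicative:

```python
def mat(a, b):
    # represent a^2 + b^2 as det of [[a, -b], [b, a]]
    return [[a, -b], [b, a]]

def matmul(m, n):
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

a, b, c, d = 3, 7, 2, 5
p = matmul(mat(a, b), mat(c, d))
# the product keeps the same shape: [[x, -y], [y, x]]
assert p[0][0] == p[1][1] and p[0][1] == -p[1][0]
# and det is multiplicative: (a^2+b^2)(c^2+d^2) is again a sum of two squares
assert det(p) == (a * a + b * b) * (c * c + d * d)
```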

• ItsukaKitto
Mayhem
Can we prove problem 11 using vector algebra? Let ##a_k = \binom{a_k}{a_k}##, then we can rewrite the question as
$$\prod_{k = 1}^{n} \mathbf{a}_k\cdot\mathbf{b}_k = \mathbf{a}\cdot\mathbf{b}$$
We know that the scalar product returns a scalar, which would mean that a product of ##n## scalar products would also return a scalar, namely ##\mathbf{a}\cdot\mathbf{b}##.

And since the scalar product for a 2-d vector is by definition ##a_1a_2+b_1b_2##, then for ##a_1 = a_2## and ##b_1 = b_2##, the resulting scalar ##\mathbf{a}\cdot\mathbf{b}## must be a sum of two integer squares, given the initial conditions for ##\mathbf{a}_k##.

Maybe I'm using circular logic/assuming the conclusion. I'm quite new to proofs.

ItsukaKitto
I know one way to solve the trigonometric problem, only that it happens to be just etotheipi's approach, but done... (I hope) correctly:
Adding the equations, we get an inequality in ##A = x^2 + y^2##, namely
##\cos^4(x) + \sin^4(x) = A(A-4) + 5 \leq 1##
But the greatest value of ##\cos^4(x) + \sin^4(x)## is ##1##, so we have equality when it is equal to ##1##; then we solve ##A^2 - 4A + 4 = 0##, getting ##A = 2##.
Now, we also see that ##\cos^4(x) + \sin^4(x) = 1##, which occurs when ##x = 0, \pm\pi/2##.
So the values of ##y## can be found from the relation ##A = 2##, to be ##y = \pm\sqrt{2},\ \pm\sqrt{2 - \pi^2/4}##

Mentor
2022 Award
Can we prove problem 11 using vector algebra? Let ##a_k = \binom{a_k}{a_k}##, then we can rewrite the question as
$$\prod_{k = 1}^{n} \mathbf{a}_k\cdot\mathbf{b}_k = \mathbf{a}\cdot\mathbf{b}$$
We know that the scalar product returns a scalar, which would mean that a product of ##n## scalar products would also return a scalar, namely ##\mathbf{a}\cdot\mathbf{b}##.

And since the scalar product for a 2-d vector is by definition ##a_1a_2+b_1b_2##, then for ##a_1 = a_2## and ##b_1 = b_2##, the resulting scalar ##\mathbf{a}\cdot\mathbf{b}## must be a sum of two integer squares, given the initial conditions for ##\mathbf{a}_k##.

Maybe I'm using circular logic/assuming the conclusion. I'm quite new to proofs.
I guess you meant ##\mathbf{a}_k=\mathbf{b}_k=\begin{bmatrix}a_k\\b_k\end{bmatrix}## in which case we have
$$(a_1^2+b_1^2)\cdot \ldots \cdot (a_n^2+b_n^2)= \prod_{k = 1}^{n} \mathbf{a}_k\cdot\mathbf{b}_k = \prod_{k = 1}^{n} \mathbf{a}_k\cdot\mathbf{a}_k = \prod_{k = 1}^{n} \|\mathbf{a}_k\|_2^2$$
And at this point you have to insert the argument why the result is again of the requested form. Do you know this argument?

• Mayhem
Mayhem
I guess you meant ##\mathbf{a}_k=\mathbf{b}_k=\begin{bmatrix}a_k\\b_k\end{bmatrix}## in which case we have
$$(a_1^2+b_1^2)\cdot \ldots \cdot (a_n^2+b_n^2)= \prod_{k = 1}^{n} \mathbf{a}_k\cdot\mathbf{b}_k = \prod_{k = 1}^{n} \mathbf{a}_k\cdot\mathbf{a}_k = \prod_{k = 1}^{n} \|\mathbf{a}_k\|_2^2$$
And at this point you have to insert the argument why the result is again of the requested form. Do you know this argument?
Not on the top of my head, no. And yes, I messed up the definition of ##\mathbf{a}_k##. Thanks for noticing that.

Mentor
2022 Award
Not on the top of my head, no. And yes, I messed up the definition of ##\mathbf{a}_k##. Thanks for noticing that.
The scalar product of a vector with itself is the squared Euclidean norm of this vector, its length squared. Now which properties do norms have?

Mentor
2022 Award
I know one way to solve the trigonometric problem, only that it happens to be just etotheipi's approach, but done... (I hope) correctly:
Adding the equations, we get an inequality in ##A = x^2 + y^2##, namely
##\cos^4(x) + \sin^4(x) = A(A-4) + 5 \leq 1##
But the greatest value of ##\cos^4(x) + \sin^4(x)## is ##1##, so we have equality when it is equal to ##1##; then we solve ##A^2 - 4A + 4 = 0##, getting ##A = 2##.
Now, we also see that ##\cos^4(x) + \sin^4(x) = 1##, which occurs when ##x = 0, \pm\pi/2##.
So the values of ##y## can be found from the relation ##A = 2##, to be ##y = \pm\sqrt{2},\ \pm\sqrt{2 - \pi^2/4}##
Have you checked which of all these numbers is actually a solution?

Mayhem
The scalar product of a vector with itself is the squared Euclidean norm of this vector, its length squared. Now which properties do norms have?
Ah, so ##||\mathbf{a}||## is the norm? In high school we used ##|\mathbf{a}|##, but I should have guessed that it was the same thing.

The norm ##|\mathbf{a}|## of a vector ##\mathbf{a}##, that is to say its "length", is defined as

$$|\mathbf{a}|=\sqrt{a_1^2+a_2^2}$$

If we square both sides, we get ##|\mathbf{a}|^2 = a_1^2+a_2^2##. Generalize that to ##|\mathbf{a}_k|## and we get what we wanted. I see! Thank you.

$$([x^2 + y^2] -2)^2 < 0$$
Sorry, this should be a non-strict inequality.

Mentor
2022 Award
Ah, so ##||\mathbf{a}||## is the norm? In high school we used ##|\mathbf{a}|##, but I should have guessed that it was the same thing.

The norm ##|\mathbf{a}|## of a vector ##\mathbf{a}##, that is to say its "length", is defined as

$$|\mathbf{a}|=\sqrt{a_1^2+a_2^2}$$

If we square both sides, we get ##|\mathbf{a}|^2 = a_1^2+a_2^2##. Generalize that to ##|\mathbf{a}_k|## and we get what we wanted. I see! Thank you.
Yes, but you still need a property of the norm.

Mayhem
Yes, but you still need a property of the norm.
That it is a scalar?

ItsukaKitto
Have you checked which of all these numbers is actually a solution?
Ah yes, they're only a list of permissible values; the actual solutions, I think, must be a subset of these...
##(0,\sqrt{2})## and ##(0, -\sqrt{2})## work, but the other values don't seem to work.
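It may be worth noting why the ##x=\pm\pi/2## candidates fail without even substituting back: ##2-\pi^2/4## is negative, so no real ##y## exists there. A quick numerical sketch of the two necessary conditions derived above, ##x^2+y^2=2## and ##\cos^4 x+\sin^4 x=1## (the original system itself isn't restated in this thread, so this checks only the derived conditions):

```python
import math

# candidate x values with cos^4 x + sin^4 x = 1
for x in (0.0, math.pi / 2, -math.pi / 2):
    assert abs(math.cos(x) ** 4 + math.sin(x) ** 4 - 1) < 1e-12
    y_squared = 2 - x * x  # forced by x^2 + y^2 = 2
    if y_squared < 0:
        # 2 - pi^2/4 < 0, so x = +/- pi/2 admits no real y
        print(f"x = {x:+.4f}: no real y")
    else:
        print(f"x = {x:+.4f}: y = +/-{math.sqrt(y_squared):.4f}")
```

Only ##x=0## leaves a nonnegative ##y^2##, which matches the observation that just ##(0,\pm\sqrt{2})## survive.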

• fresh_42
Mentor
2022 Award
That it is a scalar?
The problem we still have is whether the product of the squared norms is still of the form ##\|\mathbf{a}\|_2^2##. I don't see this. So maybe I was wrong and the norm doesn't work as easily as I thought. We still need additional information. I guess the determinant idea is the closest one to that approach.

Mayhem
The problem we still have is whether the product of the squared norms is still of the form ##\|\mathbf{a}\|_2^2##. I don't see this. So maybe I was wrong and the norm doesn't work as easily as I thought. We still need additional information. I guess the determinant idea is the closest one to that approach.
So in conclusion, setting it up as a product of scalar products probably isn't particularly useful. I really ought to brush up on my linear algebra. Unfortunately calculus is more fun.

Mentor
2022 Award
So in conclusion, setting it up as a product of scalar products probably isn't particularly useful. I really ought to brush up on my linear algebra. Unfortunately calculus is more fun.
13.) and 12.c.) from last month are calculus, basically.

• Mayhem
Lament
Given,
\begin{align*}
f(xy)&=f(x)f(y)-f(x)-f(y)+2 ...(1)\\
f(x+y)&=f(x)+f(y)+2xy-1 ...(2)\\
f(1)&=2 ...(3)
\end{align*}
Put ##y=0## in ##(2)##, gives ##f(0)=1##
Put ##y=1## in ##(2)##, gives
$$f(x+1)=f(x)+f(1)+2x-1$$
or,
\begin{align*}
f(x+1)-f(x)&=2x+1 ...(4)
\end{align*}
Hence,
\begin{align*}
[f(x+1)-f(x)]\\+[f(x)-f(x-1)]\\+...\\+[f(3)-f(2)]\\+[f(2)-f(1)]&=[2x+1]+[2(x-1)+1]+...+[2\cdot 2+1]+[2\cdot 1+1]\\
\Rightarrow f(x+1)-f(1)&=2\cdot\frac {x(x+1)} {2}+x \\
\Rightarrow f(x+1)&=x^2+2x+2=(x+1)^2+1 ...(5)\\
\end{align*}
or,
\begin{align*}
f(x)&=x^2+1 ...(6)
\end{align*}
It is easy to check ##(6)## satisfies ##(1)## & ##(2)##.
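The closing check, that ##(6)## satisfies ##(1)## and ##(2)##, can be automated on an integer grid; a minimal sketch:

```python
import itertools

def f(x):
    # candidate solution (6): f(x) = x^2 + 1
    return x * x + 1

for x, y in itertools.product(range(-10, 11), repeat=2):
    # (1): f(xy) = f(x)f(y) - f(x) - f(y) + 2
    assert f(x * y) == f(x) * f(y) - f(x) - f(y) + 2
    # (2): f(x+y) = f(x) + f(y) + 2xy - 1
    assert f(x + y) == f(x) + f(y) + 2 * x * y - 1
assert f(1) == 2  # matches (3)
```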

• fresh_42
If you add them, using ##\sin^4{x} + \cos^4{x} \in [\frac{1}{2},1]##, you can get the inequality $$x^4 + y^4 + 2x^2 y^2 -4(x^2 + y^2) < -4$$ $$(x^2 + y^2)^2 -4(x^2 + y^2) + 4 < 0$$ $$([x^2 + y^2] -2)^2 < 0$$That would seem to imply that there are no possible values of ##x^2 + y^2## that satisfy the inequality, and that there are no solutions.
I think, pal, you missed the equality. The first expression should be

$$x^4 + y^4 + 2x^2 y^2 -4(x^2 + y^2) \leq -4$$
because 1 is allowed. Hence, your last expression becomes
$$([x^2 + y^2] -2)^2 \leq 0$$
since we want only real solutions, equality must hold (a real square cannot be negative). So, we have
$$[(x^2 +y^2)-2]^2= 0$$
$$x^2 + y^2 = 2$$
With this value we can put it into our original given equations to get the real solutions.

(Pal, I'm not alluding to our discussion in that thread; we are friends and I respect that.)

• etotheipi
I think, pal, you missed the equality. The first expression should be

You're right, see #83.

You're right, see #83.
Sorry, missed that. Actually, there were so many posts, and even the tab "there are more posts to display", but I didn't click on it.

Homework Helper
Spoiler: Remarks/hints:

Problem #1 taught me that I did not really know what the weak topology is, i.e. what the basic open sets are. Nice problem.

Problem #9 taught me that you can reverse the inequality and still get the same conclusion, even without assuming the map is onto (essentially the same proof). Another instructive problem.

Problem #2 I don't know much about, but I would start by finding their Lie algebras. (Since, as you probably know, the Lie algebra is the tangent space to the Lie group at the identity, and transversality is related to the relative position of tangent spaces.)

Gold Member
@mathwonk Yes, computing their Lie algebras is the right way to go. And in general that should be enough, because if ##G,G'\subset GL_n## are Lie groups, and ##g\in G\cap G'##, then ##G## and ##G'## intersect transversely at ##g## iff they do at ##1## (since ##T_gG=g_* \left(T_1G\right)## and similarly for ##G'##).

you can reverse the inequality and still get the same conclusion, even without assuming the map is onto (essentially the same proof). Another instructive problem.
Sure, recalling the Poincaré recurrence theorem is also instructive in this regard

Homework Helper
I'm afraid I don't know that theorem of Poincaré. OK, I googled it; yes, nice connection.
Also one gets from the reverse result that every (necessarily injective) isometry of a compact metric space into itself is also surjective, a possibly useful fact, analogous to results about linear maps of finite dimensional vector spaces, and algebraic maps of irreducible projective varieties, and of course functions on finite sets, another illustration of the principle that "compact" generalizes "finite".

And hopefully the Lie algebra result gives what we want, since then SU(n) has a nice Lie algebra structure compatible with the other two!

And in looking at continuous functions on the interval, I was able to find a sequence of constant norm that converges to zero weakly, (using the known structure of the dual of C[0,1] due to Riesz in 1909, as functions of bounded variation), but so far that only illustrates problem #1, apparently not whether C[0,1] is a dual. I thought maybe I could use the theorem that the unit ball in a dual is weak star compact, but have not seen how to do that yet. I.e. even if the ball were not weakly compact, which is not at all clear either, it seems it might still be weak-star compact. Some people say the key is to look at "extreme points", i.e. maybe convexity is involved? (i.e. Krein Milman as well as Alaoglu.) Haven't written anything down yet but if the picture in my head is close, among positive valued functions in the unit ball in C[0,1], there seems to be only one extreme point, hence altogether only two? Just tossing out guesses.

theorem that the unit ball in a dual is weak star compact,
Krein Milman
the key words have been spoken :)

Since ##\dim V \leq \dim V'## always holds, the space ##V'## must be infinite dimensional. Assume for a contradiction that ##\sigma (V,V')## is normable. The dual space of a normed space is a Banach space, so it is enough to show that the vector space ##V'## admits a countable basis, which is impossible for Banach spaces.

Let ##\|\cdot\|## be the inducing norm. I will attempt to justify that we may assume a countable neighborhood basis of ##0## which consists of kernels of functionals in ##V'##. Consider a neighborhood basis of ##0##, for instance ##T_n = \{x\in V \mid \|x\| < 1/n\},\quad n\in\mathbb N.##
Fix ##n\in\mathbb N##. By Hahn-Banach (the point separation corollary), take ##\varphi _n \in V'## such that ##\overline{T_n}\subseteq \mathrm{Ker}\varphi _n##. Then the sequence of kernels constitutes a neighborhood basis of ##0##. In fact,
$$\left\{\bigcap _{k=1}^n\varphi _k^{-1}(B(0,\varepsilon)) \mid n\in\mathbb N, \varepsilon >0\right\}$$
is a neighborhood basis of ##0## w.r.t. ##\sigma (V,V')##. Take ##\varphi \in V'##. For every ##\varepsilon >0## we have ##N_\varepsilon \in\mathbb N## s.t.
$$\bigcap _{k=1} ^{N_\varepsilon} \varphi _{k}^{-1}(B(0,\varepsilon)) \subseteq \varphi ^{-1}(B(0,\varepsilon))$$
Put ##N := \min _{\varepsilon }N_{\varepsilon}##. Then ##x\in \bigcap _{k=1}^N \mathrm{Ker}\varphi _k## implies ##x\in\mathrm{Ker}\varphi##, thus ##\varphi \in \mathrm{span}(\varphi _1,\ldots, \varphi _N)##.
I think Hahn-Banach is overkill and I also think a similar argument passes assuming ##\sigma (V,V')## is metrisable. I can't put my finger on it at the moment. It's likely something stupidly simple.

Since ##\dim V \leq \dim V'## always holds, the space ##V'## must be infinite dimensional. Assume for a contradiction that ##\sigma (V,V')## is normable. The dual space of a normed space is a Banach space, so it is enough to show that ##V'## admits a countable basis, which is impossible for Banach spaces.

Let ##\|\cdot\|## be the inducing norm. I will attempt to justify that we may assume a countable neighborhood basis of ##0## which consists of kernels of functionals in ##V'##. Consider a neighborhood basis of ##0##, for instance ##T_n = \{x\in V \mid \|x\| < 1/n\},\quad n\in\mathbb N.##
Fix ##n\in\mathbb N##. By Hahn-Banach (the point separation corollary), take ##\varphi _n \in V'## such that ##\overline{T_n}\subseteq \mathrm{Ker}\varphi _n##. Then the sequence of kernels constitutes a neighborhood basis of ##0##. In fact,
$$\mathcal N := \left\{\bigcap _{k=1}^n\varphi _k^{-1}(B(0,\varepsilon)) \mid n\in\mathbb N, \varepsilon >0\right\}$$
is a neighborhood basis of ##0## w.r.t. ##\sigma (V,V')##. Take ##\varphi \in V'##. For every ##\varepsilon >0## we have ##N_\varepsilon \in\mathbb N## s.t.
$$\bigcap _{k=1} ^{N_\varepsilon} \varphi _{k}^{-1}(B(0,\varepsilon)) \subseteq \varphi ^{-1}(B(0,\varepsilon))$$
Put ##N := \min _{\varepsilon }N_{\varepsilon}##. Then ##x\in \bigcap _{k=1}^N \mathrm{Ker}\varphi _k## implies ##x\in\mathrm{Ker}\varphi##, thus ##\varphi \in \mathrm{span}(\varphi _1,\ldots, \varphi _N)##.
I think Hahn-Banach is overkill and I also think a similar argument passes assuming ##\sigma (V,V')## is metrisable. I can't put my finger on it at the moment. It's likely something stupidly simple.

I didn't read it in detail yet, but what do you mean by "basis"? A topological basis? In that case ##l^2(\Bbb{N})## is separable and thus has a countable basis, yet has infinite dimension. So I don't quite see how you would get a contradiction.

I didn't read it in detail yet, but what do you mean by "basis"? A topological basis? In that case ##l^2(\Bbb{N})## is separable and thus has a countable basis, yet has infinite dimension. So I don't quite see how you would get a contradiction.
Now you're confusing me. The ##\varphi_n## make a countable basis of the vector space ##V'##, which is impossible for Banach spaces. IIRC it's a consequence of the Baire category theorem... now that I think about it, maybe P1 more generally can also be shown with BCT.

I edited the post to emphasise it's a basis in the sense of vector spaces.

Now you're confusing me. The ##\varphi_n## make a countable basis of the vector space ##V'##, which is impossible for Banach spaces. IIRC it's a consequence of the Baire category theorem... now that I think about it, maybe P1 more generally can also be shown with BCT.

I edited the post to emphasise it's a basis in the sense of vector spaces.

Ah yes, I see now. Your idea was to show that ##V^*## admits a countable Hamel basis by showing that ##V^*## is spanned by countably many functionals. Interesting! I will look at the details soon and get back to you.

Since ##\dim V \leq \dim V'## always holds, the space ##V'## must be infinite dimensional. Assume for a contradiction that ##\sigma (V,V')## is normable. The dual space of a normed space is a Banach space, so it is enough to show that the vector space ##V'## admits a countable basis, which is impossible for Banach spaces.

Let ##\|\cdot\|## be the inducing norm. I will attempt to justify that we may assume a countable neighborhood basis of ##0## which consists of kernels of functionals in ##V'##. Consider a neighborhood basis of ##0##, for instance ##T_n = \{x\in V \mid \|x\| < 1/n\},\quad n\in\mathbb N.##
Fix ##n\in\mathbb N##. By Hahn-Banach (the point separation corollary), take ##\varphi _n \in V'## such that ##\overline{T_n}\subseteq \mathrm{Ker}\varphi _n##. Then the sequence of kernels constitutes a neighborhood basis of ##0##. In fact,
$$\left\{\bigcap _{k=1}^n\varphi _k^{-1}(B(0,\varepsilon)) \mid n\in\mathbb N, \varepsilon >0\right\}$$
is a neighborhood basis of ##0## w.r.t. ##\sigma (V,V')##. Take ##\varphi \in V'##. For every ##\varepsilon >0## we have ##N_\varepsilon \in\mathbb N## s.t.
$$\bigcap _{k=1} ^{N_\varepsilon} \varphi _{k}^{-1}(B(0,\varepsilon)) \subseteq \varphi ^{-1}(B(0,\varepsilon))$$
Put ##N := \min _{\varepsilon }N_{\varepsilon}##. Then ##x\in \bigcap _{k=1}^N \mathrm{Ker}\varphi _k## implies ##x\in\mathrm{Ker}\varphi##, thus ##\varphi \in \mathrm{span}(\varphi _1,\ldots, \varphi _N)##.
I think Hahn-Banach is overkill and I also think a similar argument passes assuming ##\sigma (V,V')## is metrisable. I can't put my finger on it at the moment. It's likely something stupidly simple.

I'm not sure how you apply Hahn-Banach, but even if its use is justified there seems to be a problem:

You have ##T_n \subseteq \ker(\varphi_n)## and consequently ##V=\operatorname{span}(T_n) \subseteq \ker(\varphi_n)## so we have ##\varphi_n = 0## for all ##n##, so it is impossible that these functionals span the dual space.

Somewhere you must have made a mistake.

Oops, I'm an idiot...