Intermediate Math Challenge - May 2018

Summary
The Intermediate Math Challenge for May 2018 invites participants to solve a variety of mathematical problems, emphasizing the importance of providing full proofs for solutions. Participants are encouraged to use resources like Google and WolframAlpha, but direct searches for problem solutions are prohibited. Mentors and homework helpers are asked to refrain from posting solutions until the 16th of each month to allow others the opportunity to engage with the challenges. Several problems have been successfully solved, showcasing a range of mathematical concepts from integrals to properties of prime numbers. The challenge aims to foster a collaborative environment for mathematical exploration and learning.
  • #61
julian said:
I think I have solved problem 1:

I split the proof into the parts:

Part (a) A few facts about real skew symmetric matrices.
Part (b): Proof for ##n## even.
(i) Looking at case ##n = 2##.
(ii) Proving a key inequality. This will prove case ##n=2##.
(iii) Proving case for general even ##n## (then easy).
Part (c) Case of odd ##n##.
(i) Proving case for ##n = 3## (easy because of part (b)).
(ii) Proving case for general odd ##n## (easy because of part (b)).

I went through it fairly granularly and did not see any flaws.

A couple of thoughts:

1.) If you are so inclined, in your first spoiler you may make use of the rule
QuantumQuest said:
2) It is fine to use nontrivial results without proof as long as you cite them and as long as it is "common knowledge to all mathematicians". Whether the latter is satisfied will be decided on a case-by-case basis.
I would be happy to accept, via spectral theory, the basic result that the eigenvalues of a real skew symmetric matrix have zero real part. On the other hand, your workthrough may be more instructive for third parties reading it who don't know the underlying spectral theory – so both approaches have merit.
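For anyone who wants a quick numerical illustration of that spectral fact before digging into the proof, here is a small sanity check (just an illustration, assuming NumPy is available; it is not a substitute for the cited result):

```python
# Sanity check: the eigenvalues of a real skew-symmetric matrix have zero real part.
import numpy as np

rng = np.random.default_rng(0)

for n in (2, 3, 4, 7):
    B = rng.standard_normal((n, n))
    A = B - B.T                              # real skew-symmetric: A^T = -A
    eigenvalues = np.linalg.eigvals(A)
    # The real parts should vanish up to floating-point noise.
    assert np.max(np.abs(eigenvalues.real)) < 1e-10, (n, eigenvalues)

print("real parts of all eigenvalues are ~0 for every tested size")
```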

2.) The key insight for this problem, in my view, is figuring out the ##n = 2## case; everything can be built off of it. The other insight is relating it to ##\text{GM} \leq \text{AM}## in some way. I think you essentially re-created Cauchy's forward-backward induction proof of ##\text{GM} \leq \text{AM}## in Part (b), albeit for superadditivity rather than for vanilla ##\text{GM} \leq \text{AM}##. Since we are at month end, I will share another, much simpler, idea: the fact that 'regular' ##\text{GM} \leq \text{AM}## implies this result.

My take is that in Part (b)(ii), when you are seeking to prove:

##(\theta^2 + x_1^2) \dots (\theta^2 + x_k^2) \geq [\theta^2 + (x_1^2 \dots x_k^2)^{1/k}]^k##

or equivalently

##\Big((\theta^2 + x_1^2) \dots (\theta^2 + x_k^2)\Big)^{1/k} \geq \theta^2 + (x_1^2 \dots x_k^2)^{1/k}##
multiply each side by

##\big(\theta^2\big)^{-1}##

(which is positive and doesn't change the direction of the inequality; note that ##\theta^{-2} = \big(\theta^{-2k}\big)^{1/k}##, so it can be pulled inside each ##k##-th root as one factor of ##\theta^{-2}## per term) and define
##z_i := \frac{x_i^2}{\theta^2} > 0##

The relationship is thus:

##\Big(\prod_{i=1}^k (1 + z_i)\Big)^{1/k}= \Big((1 + z_1) \dots (1 + z_k)\Big)^{1/k} \geq 1 + (z_1 \dots z_k)^{1/k} = \Big(\prod_{i=1}^k 1\Big)^{1/k} + \Big(\prod_{i=1}^k z_i\Big)^{1/k}##

which is true by the superadditivity of the geometric mean (which, incidentally, was a past challenge problem, but since it is not this challenge problem I think it is fine to treat it as common knowledge to mathematicians).
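As a quick spot check of that inequality (only a random numerical probe, assuming NumPy; it does not replace the superadditivity argument):

```python
# Spot check:  (prod_i (1 + z_i))^(1/k)  >=  1 + (prod_i z_i)^(1/k)   for z_i > 0,
# i.e. superadditivity of the geometric mean applied to (1,...,1) and (z_1,...,z_k).
import numpy as np

rng = np.random.default_rng(1)

for _ in range(10_000):
    k = int(rng.integers(1, 10))
    z = rng.uniform(1e-6, 100.0, size=k)      # arbitrary positive z_i
    lhs = np.prod(1.0 + z) ** (1.0 / k)
    rhs = 1.0 + np.prod(z) ** (1.0 / k)
    assert lhs >= rhs - 1e-9, (z, lhs, rhs)

print("the inequality held in all random trials")
```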
- - - -
To cover the case of eigenvalues equal to zero, we can verify that the inequality holds with equality there, and chain this onto the above.
- - - -
I have a soft spot for proving this via ##2^r## for ##r \in \{1, 2, 3, \dots\}## and then filling in the gaps. Really well done. Forward-backward induction is a very nice technique, but it involves a lot of book-keeping!
 
  • #62
Here is the solution to the last open problem #9.

For a given real Lie algebra ##\mathfrak{g}##, we define
$$
\mathfrak{A(g)} = \{\,\alpha \, : \,\mathfrak{g}\longrightarrow \mathfrak{g}\,\,: \,\,[\alpha(X),Y]=-[X,\alpha(Y)]\text{ for all }X,Y\in \mathfrak{g}\,\}\quad (1)
$$
The Lie algebra multiplication is defined by
  • ##(2)## anti-commutativity: ##[X,X]=0##
  • ##(3)## Jacobi-identity: ##[X,[Y,Z]]+[Y,[Z,X]]+[Z,[X,Y]]=0##
a) ##\mathfrak{A(g)}\subseteq \mathfrak{gl}(\mathfrak{g})## is a Lie subalgebra of the Lie algebra of all linear transformations of ##\mathfrak{g}##, with the commutator as Lie product ##[\alpha, \beta]= \alpha \beta -\beta \alpha \quad (4)##, because
\begin{align*}
[[\alpha,\beta]X,Y]&\stackrel{(4)}{=}[\alpha \beta X,Y] - [\beta\alpha X,Y]\\
&\stackrel{(1)}{=}[X,\beta \alpha Y]-[X,\alpha \beta Y]\\
&\stackrel{(4)}{=}[X,[\beta,\alpha]Y]\\
&\stackrel{(2)}{=}-[X,[\alpha,\beta]Y]
\end{align*}
b) The smallest non-Abelian Lie algebra ##\mathfrak{g}## with trivial center is ##\mathfrak{g}=\langle X,Y\,: \,[X,Y]=Y\rangle\,.## It is easy to verify that ##\mathfrak{A(g)} \cong \mathfrak{sl}(2,\mathbb{R})\,##, the Lie algebra of ##2 \times 2## real matrices with trace zero.

##\mathfrak{g}=\mathfrak{B}(\mathfrak{sl}(2,\mathbb{R}))## is the maximal solvable subalgebra of ##\mathfrak{sl}(2,\mathbb{R})##, a so-called Borel subalgebra.
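For readers who like to see part b) in coordinates, here is a small numerical sketch (assuming NumPy; `bracket` and `in_A` are just my own helper names). On the basis ##(X,Y)## it checks that the standard traceless generators satisfy the defining relation ##(1)## while a map with nonzero trace does not, and that these maps are closed under the commutator, as in part a):

```python
# Sanity check of part b) in the concrete example g = <X, Y : [X, Y] = Y>.
# Elements of g are stored as coordinate vectors (u1, u2) meaning u1*X + u2*Y,
# so the bracket is [u, v] = (u1*v2 - u2*v1) * Y.
import itertools
import numpy as np

def bracket(u, v):
    """Lie bracket of g = <X, Y : [X, Y] = Y> in coordinates w.r.t. (X, Y)."""
    return np.array([0.0, u[0] * v[1] - u[1] * v[0]])

def in_A(M, tol=1e-12):
    """Does the linear map M satisfy [M u, v] = -[u, M v]?
    Checking on basis pairs suffices, by bilinearity."""
    basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    return all(
        np.allclose(bracket(M @ u, v), -bracket(u, M @ v), atol=tol)
        for u, v in itertools.product(basis, repeat=2)
    )

# The standard traceless generators (w.r.t. the basis (X, Y)) all lie in A(g) ...
e = np.array([[0.0, 1.0], [0.0, 0.0]])
f = np.array([[0.0, 0.0], [1.0, 0.0]])
h = np.array([[1.0, 0.0], [0.0, -1.0]])
assert in_A(e) and in_A(f) and in_A(h)

# ... while a map with nonzero trace does not:
assert not in_A(np.eye(2))

# Part a) in this example: the traceless maps are closed under the commutator.
rng = np.random.default_rng(2)
for _ in range(100):
    a, b = rng.standard_normal((2, 2)), rng.standard_normal((2, 2))
    a -= 0.5 * np.trace(a) * np.eye(2)        # project onto traceless matrices
    b -= 0.5 * np.trace(b) * np.eye(2)
    assert in_A(a @ b - b @ a)

print("A(g) behaves as claimed in this example")
```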

c) To show that ##\mathfrak{g} \rtimes \mathfrak{A(g)}## is a semidirect product given by $$[X,\alpha]:=[\operatorname{ad}X,\alpha]=\operatorname{ad}X\,\alpha - \alpha\,\operatorname{ad}X\quad (5)$$ we have to show that this multiplication makes ##\mathfrak{A(g)}## an ideal in ##\mathfrak{g} \rtimes \mathfrak{A(g)}## and a ##\mathfrak{g}##-module.
\begin{align*}
[[X,\alpha]Y,Z]&\stackrel{(5)}{=}[[X,\alpha Y],Z] - [\alpha[X,Y],Z]\\
&\stackrel{(3),(1)}{=}-[[\alpha Y,Z],X]-[[Z,X],\alpha Y]+[[X,Y],\alpha Z]\\
&\stackrel{(3),(1)}{=}[[Y,\alpha Z],X]+[\alpha[Z,X],Y]\\&-[[Y,\alpha Z],X]-[[\alpha Z,X],Y]\\
&\stackrel{(2)}{=}[Y,\alpha[X,Z]]-[Y,[X,\alpha Z]]\\
&\stackrel{(5)}{=}-[Y,[X,\alpha Z]]
\end{align*}
so ##\mathfrak{A(g)}## is an ideal in ##\mathfrak{g} \rtimes \mathfrak{A(g)}##. It is also a ##\mathfrak{g}##-module, because ##\operatorname{ad}## is a Lie algebra homomorphism ##(6)## and therefore
\begin{align*}
[[X,Y],\alpha]&\stackrel{(5)}{=}[\operatorname{ad}[X,Y],\alpha]\\
&\stackrel{(6)}{=}[[\operatorname{ad}X,\operatorname{ad}Y],\alpha]\\
&\stackrel{(3)}{=}-[[\operatorname{ad}Y,\alpha],\operatorname{ad}X]-[[\alpha,\operatorname{ad}X],\operatorname{ad}Y]\\
&\stackrel{(2)}{=}[\operatorname{ad}X,[\operatorname{ad}Y,\alpha]]-[\operatorname{ad}Y,[\operatorname{ad}X,\alpha]]\\
&\stackrel{(5)}{=} [X,[Y,\alpha]]-[Y,[X,\alpha]]
\end{align*}
d) For the last equation with ##\alpha \in \mathfrak{A(g)}## and ##X,Y,Z \in \mathfrak{g} ##
$$[\alpha(X),[Y,Z]]+[\alpha(Y),[Z,X]]+[\alpha(Z),[X,Y]] =0\quad (7)$$
we have
\begin{align*}
[\alpha(X),[Y,Z]]&\stackrel{(3)}{=}-[Y,[Z,\alpha(X)]]-[Z,[\alpha(X),Y]]\\
&\stackrel{(1)}{=} [Y,[\alpha(Z),X]]+[Z,[X,\alpha(Y)]]\\
&\stackrel{(3)}{=} -[\alpha(Z),[X,Y]]-[X,[Y,\alpha(Z)]]\\
&-[X,[\alpha(Y),Z]]-[\alpha(Y),[Z,X]]\\
&\stackrel{(1)}{=}-[\alpha(Y),[Z,X]]-[\alpha(Z),[X,Y]]
\end{align*}
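Finally, a numerical sanity check of identity ##(7)##, again only in the small concrete example ##\mathfrak{g}=\langle X,Y\,:\,[X,Y]=Y\rangle## from part b) with ##\alpha## traceless (assuming NumPy; this merely probes the identity in that example, it does not test the general statement):

```python
# Sanity check of identity (7) in g = <X, Y : [X, Y] = Y>,
# with alpha ranging over traceless (i.e. A(g)) maps.
import numpy as np

def bracket(u, v):
    """[u, v] = (u1*v2 - u2*v1) * Y in coordinates w.r.t. (X, Y)."""
    return np.array([0.0, u[0] * v[1] - u[1] * v[0]])

rng = np.random.default_rng(3)
for _ in range(1000):
    alpha = rng.standard_normal((2, 2))
    alpha -= 0.5 * np.trace(alpha) * np.eye(2)   # alpha in A(g): traceless
    X, Y, Z = rng.standard_normal((3, 2))
    lhs = (bracket(alpha @ X, bracket(Y, Z))
           + bracket(alpha @ Y, bracket(Z, X))
           + bracket(alpha @ Z, bracket(X, Y)))
    assert np.allclose(lhs, 0.0)

print("identity (7) held in all random trials")
```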
 
