# Intermediate Math Challenge - August 2018

Mentor
2021 Award
Summer is coming and brings a new intermediate math challenge! Enjoy! If you find the problems difficult to solve, don't be disappointed! Just check our other basic level math challenge thread!

RULES:

1) In order for a solution to count, a full derivation or proof must be given. Answers with no proof will be ignored. Solutions will be posted around the 15th of the following month.
2) It is fine to use nontrivial results without proof as long as you cite them and as long as it is "common knowledge to all mathematicians". Whether the latter is satisfied will be decided on a case-by-case basis.
3) If you have seen the problem before and remember the solution, you cannot participate in the solution to that problem.
4) You are allowed to use google, wolframalpha or any other resource. However, you are not allowed to search the question directly. So if the question was to solve an integral, you are allowed to obtain numerical answers from software, you are allowed to search for useful integration techniques, but you cannot type in the integral in wolframalpha to see its solution.
5) Mentors, advisors and homework helpers are kindly requested not to post solutions, not even in spoiler tags, for the challenge problems, until the 16th of each month. This gives the opportunity to other people including but not limited to students to feel more comfortable in dealing with / solving the challenge problems. In case of an inadvertent posting of a solution the post will be deleted.

QUESTIONS:

1. (solved by @Citan Uzuki ) Let ##R## be a ring with identity element ##1## and ##r \in R## an element without left inverse but with at least one right inverse ##r\cdot a_0=1##. Prove that there are infinitely many right inverses to ##r##.
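A concrete model to keep in mind (an illustrative sketch, not required for the proof, and all names are mine): among the linear operators on infinite sequences, the left shift has the right shift as a right inverse but no left inverse — and prepending any constant gives a whole family of right inverses.

```python
# r = left shift, a_c = "prepend c" right shift; product = composition.
def left_shift(seq):            # plays the role of r
    return lambda n: seq(n + 1)

def right_shift_c(c):           # a family a_c of candidate right inverses
    def a(seq):
        return lambda n: c if n == 0 else seq(n - 1)
    return a

s = lambda n: n + 1             # sample sequence 1, 2, 3, ...

for c in range(3):
    # r . a_c = 1: shifting left after prepending c restores the sequence
    ra = left_shift(right_shift_c(c)(s))
    assert all(ra(n) == s(n) for n in range(10))

# a_0 . r != 1: this composition forgets the first entry,
# so r cannot have a left inverse
ar = right_shift_c(0)(left_shift(s))
assert ar(0) == 0 and s(0) == 1
```

Each choice of the prepended constant gives a different right inverse, which is exactly the infinite-family phenomenon the problem asks about in general rings.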

2. Consider the Lie algebra of skew-Hermitian ##2\times 2## matrices ##\mathfrak{g}:=\mathfrak{su}(2,\mathbb{C})## and the Pauli matrices (note that Pauli matrices are not a basis!)
$$\sigma_1=\begin{bmatrix}0&1\\1&0\end{bmatrix}\, , \,\sigma_2=\begin{bmatrix}0&-i\\i&0\end{bmatrix}\, , \,\sigma_3=\begin{bmatrix}1&0\\0&-1\end{bmatrix}$$
Now we define an operation on ##V:=\mathbb{C}_2[x,y]##, the vector space of all complex polynomials of degree less than three in the variables ##x,y## by
\begin{align*}
\varphi(\alpha_1\sigma_1 +\alpha_2\sigma_2+\alpha_3\sigma_3)&.(a_0+a_1x+a_2x^2+a_3y+a_4y^2+a_5xy)= \\
&= x(-i \alpha_1 a_3 +\alpha_2 a_3 - \alpha_3 a_1 )+\\
&+ x^2(2i\alpha_1 a_5 +2 \alpha_2 a_5 + 2\alpha_3 a_2 )+\\
&+ y(-i\alpha_1 a_1 -\alpha_2 a_1 +\alpha_3 a_3 )+\\
&+ y^2(2i\alpha_1 a_5 -2\alpha_2 a_5 -2\alpha_3 a_4 )+\\
&+ xy(-i\alpha_1 a_2 -i\alpha_1 a_4 +\alpha_2 a_2 -\alpha_2 a_4 )
\end{align*}
Show that
• an adjusted ##\varphi## defines a representation of ##\mathfrak{su}(2,\mathbb{C})## on ##\mathbb{C}_2[x,y]##
• Determine its irreducible components.
• Compute a vector of maximal weight for each of the components.
Hint: ##\mathfrak{su}(2,\mathbb{C}) \cong \mathfrak{sl}(2,\mathbb{C})## leads to a more obvious basis for the weight spaces.
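For orientation on problem 2, the commutation relations of the Pauli matrices above can be checked by hand or in a few lines of code; this sketch uses plain Python nested lists as ##2\times 2## complex matrices (helper names are illustrative).

```python
# Verify [sigma_a, sigma_b] = 2i * sigma_c for cyclic (a, b, c).
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]

def mul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def comm(A, B):
    # commutator AB - BA
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(2)] for i in range(2)]

def scal(c, A):
    # scalar multiple c * A
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

assert comm(s1, s2) == scal(2j, s3)
assert comm(s2, s3) == scal(2j, s1)
assert comm(s3, s1) == scal(2j, s2)
```

These relations are what a map like ##\varphi## must preserve in order to define a Lie algebra representation.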

3. (solved by @Infrared ) Let ##(X\;,\;||\,.\,||\,)## be a normed vector space. Prove that ##X## is complete if and only if for each sequence with ##\sum_{n=1}^{\infty}||x_n|| < \infty## the series ##\sum_{n=1}^{\infty}x_n## converges as well in ##X##.

4. (solved by @Infrared ) Gauß' Divergence Theorem: ##\iiint_V (\nabla \cdot F)\,dV = \iint_{\partial V}(F\cdot N)\,d(\partial V)##.
See https://www.physicsforums.com/insights/pantheon-derivatives-part-v/

a.) Let ##B=B_1(0)## be the closed unit ball in ##\mathbb{R}^3## and consider the vector field
$$F(x)=\begin{bmatrix}(x_2^4+2x_2^2x_3^2)x_1\\(x_3^4+2x_1^2x_3^2)x_2 \\(x_1^4+2x_1^2x_2^2)x_3 \end{bmatrix}$$
and calculate the integral ##\int_{\partial B}F\cdot N \,d\mathbb{S}^2##
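A numerical sanity check for 4a (an editorial sketch; the sample point, step sizes, and helper names are all illustrative choices): finite differences suggest ##\operatorname{div}F(x)=|x|^4##, and then the divergence theorem reduces the flux to a one-dimensional radial integral.

```python
import math

def F(x1, x2, x3):
    return ((x2**4 + 2*x2**2*x3**2) * x1,
            (x3**4 + 2*x1**2*x3**2) * x2,
            (x1**4 + 2*x1**2*x2**2) * x3)

def div_F(x1, x2, x3, h=1e-5):
    # central finite differences for the three diagonal partials
    return ((F(x1+h, x2, x3)[0] - F(x1-h, x2, x3)[0])
            + (F(x1, x2+h, x3)[1] - F(x1, x2-h, x3)[1])
            + (F(x1, x2, x3+h)[2] - F(x1, x2, x3-h)[2])) / (2*h)

# spot-check div F = |x|^4 at a sample point
p = (0.3, -0.5, 0.7)
assert abs(div_F(*p) - sum(c*c for c in p)**2) < 1e-6

# flux = integral of r^4 over the ball, via spherical shells (midpoint rule)
n = 2000
flux = sum(((k+0.5)/n)**4 * 4*math.pi*((k+0.5)/n)**2 / n for k in range(n))
assert abs(flux - 4*math.pi/7) < 1e-5
```

The closed form ##4\pi/7## falls out of ##\int_0^1 r^4\cdot 4\pi r^2\,dr##, matching the quadrature above.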

b.) Let ##U \subseteq \mathbb{R}^n## be open and ##h\in C^1(U)\; , \;F\in C^1(U,\mathbb{R}^n)\,.## Show that on ##U## we have
$$\operatorname{div}(hF)=h \operatorname{div}F + \nabla h\cdot F$$
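The product rule in 4b can be spot-checked numerically at a single point; this is a hedged illustration in ##\mathbb{R}^2## (the functions ##h##, ##F## and the sample point are arbitrary choices, not part of the problem).

```python
import math

h = lambda x, y: math.sin(x) * math.exp(y)
F = lambda x, y: (x * y, math.cos(x))

def partial(f, i, p, eps=1e-6):
    # central-difference partial derivative of a scalar function
    q, r = list(p), list(p)
    q[i] += eps
    r[i] -= eps
    return (f(*q) - f(*r)) / (2 * eps)

def div(vec, p):
    # divergence as the sum of diagonal partials
    return sum(partial(lambda *u: vec(*u)[i], i, p) for i in range(len(p)))

p = (0.4, -0.8)
hF = lambda x, y: tuple(h(x, y) * c for c in F(x, y))

lhs = div(hF, p)
rhs = h(*p) * div(F, p) + sum(partial(h, i, p) * F(*p)[i] for i in range(2))
assert abs(lhs - rhs) < 1e-6
```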
c.) Let ##B^n \subseteq \mathbb{R}^n## be the closed unit ball and ##f,g \in C^2(B^n)\,.## Show that with the unit normal vector field ##N##
$$\int_{B^n} f \Delta g \,dB^n = -\int_{B^n} \nabla f \cdot \nabla g \,\,dB^n + \int_{\partial B^n} f \nabla g \cdot N \,d\mathbb{S}^{n-1}$$
5. Let ##f\, : \,(0,1)\longrightarrow \mathbb{R}## be Lebesgue integrable and $$Y := \{\,(x_1,x_2)\in\mathbb{R}^2\,|\,x_1,x_2\geq 0\, , \,x_1+x_2\leq 1\,\}$$
Show that for any ##\alpha_1\, , \,\alpha_2 > 0##
$$\int_Y f(x_1+x_2)x_1^{\alpha_1}x_2^{\alpha_2}\,d\lambda(x_1,x_2) = \left[\int_0^1 f(u)u^{\alpha_1+\alpha_2+1}\,d\lambda(u) \right]\cdot \left[\int_0^1 v^{\alpha_1}(1-v)^{\alpha_2}\,d\lambda(v) \right]$$
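The identity in problem 5 can be sanity-checked numerically for one concrete choice of data (an editorial sketch, not a proof; ##f(u)=u##, ##\alpha_1=1##, ##\alpha_2=2## are illustrative picks).

```python
f = lambda u: u
a1, a2 = 1, 2
n = 400
h = 1.0 / n

# left side: midpoint rule over the triangle x1, x2 >= 0, x1 + x2 <= 1
lhs = sum(f(x1 + x2) * x1**a1 * x2**a2 * h * h
          for i in range(n) for j in range(n)
          for x1, x2 in [((i + 0.5) * h, (j + 0.5) * h)]
          if x1 + x2 <= 1)

# right side: product of the two one-dimensional integrals
I1 = sum(f((k + 0.5) * h) * ((k + 0.5) * h)**(a1 + a2 + 1) * h
         for k in range(n))
I2 = sum(((k + 0.5) * h)**a1 * (1 - (k + 0.5) * h)**a2 * h
         for k in range(n))
rhs = I1 * I2

assert abs(lhs - rhs) < 1e-3
```

The agreement (up to quadrature error at the triangle's hypotenuse) is consistent with the substitution ##x_1=uv,\;x_2=u(1-v)## that underlies the identity.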
6. (solved by @Infrared ) Finite Groups.
• Let ##U\subsetneq G## be a proper subgroup of a finite group.
Show that ##\bigcup_{g\in G}gUg^{-1}\subsetneq G## is a proper subset.
• Let ##G\neq \{\,1\,\}## be a finite group which operates transitively on a set ##X## with at least two elements, ##|X|>1\,.## Transitive means that every element of ##X## can be reached by the group operation from a single ##x\in X\,.## Show that there is a group element ##g\in G## such that ##g.x \neq x## for all ##x\in X\,.##
7. Let
\begin{align*}
O_n(\mathbb{R})&=\{\,A\in \mathbb{M}(n,\mathbb{R})\,|\,
\langle Av,Aw\rangle = \langle v,w \rangle \text{ for all }v,w\in \mathbb{R}^n\,\}\\
&=\{\,A\in \mathbb{M}(n,\mathbb{R})\,|\,A^\tau A = A A^\tau =1\,\}
\end{align*}
be the orthogonal group of ##n\times n## matrices, which operates by matrix multiplication on ##\mathbb{R}^n\;(n\in \mathbb{N})\,.##
• (solved by @lpetrich ) Determine the orbit of ##x\in \mathbb{R}^n## under ##O_n(\mathbb{R})\,.##
• (solved by @lpetrich ) Determine the stabilizer ##\operatorname{Stab}_x(O_n(\mathbb{R}))=\{\,A\in O_n(\mathbb{R})\,|\,A.x=x\,\}## of ##x = (0,0,\ldots,1)\in \mathbb{R}^n## in ##O_n(\mathbb{R})\,.##
• (solved by @Infrared ) Determine a bijection ##\mathbb{S}^{n-1} \stackrel{1:1}{\longleftrightarrow} O_n(\mathbb{R})/O_{n-1}(\mathbb{R})## between the unit sphere in ##\mathbb{R}^n## and the factor of two consecutive orthogonal groups.
8. We define an equivalence relation on the topological two-dimensional unit sphere ##\mathbb{S}^2\subseteq \mathbb{R}^3## by ##x \sim y \Longleftrightarrow x \in \{\,\pm y\,\}## and the projection ##q\, : \,\mathbb{S}^2 \longrightarrow \mathbb{S}^2/\sim \,.## Furthermore we consider the homeomorphism ##\tau \, : \,\mathbb{S}^2 \longrightarrow \mathbb{S}^2## defined by ##\tau (x)=-x\,.## Note that for ##A \subseteq \mathbb{S}^2## we have ##q^{-1}(q(A))= A \cup \tau(A)\,.##
Show that
• ##q## is open and closed.
• ##\mathbb{S}^2/\sim ## is compact, i.e. Hausdorff and covering compact.
• Let ##U_x=\{\,y\in \mathbb{S}^2\,:\,||y-x||<1\,\}## be an open neighborhood of ##x \in \mathbb{S}^2\,.## Show that ##U_x \cap U_{-x} = \emptyset \; , \;U_{-x}=\tau(U_x)\; , \;q(U_x)=q(U_{-x})## and ##q|_{U_{x}}## is injective. Conclude that ##q## is a covering.
9. A function ##|\,.\,|\, : \,\mathbb{F}\longrightarrow \mathbb{R}_{\geq 0}## on a field ##\mathbb{F}## is called a value function if
\begin{align*}
&|x|=0 \Longleftrightarrow x=0 \\
&|xy| = |x|\;|y|\\
&|x+y| \leq |x|+|y|
\end{align*}
It is called Archimedean, if for any two elements ##a,b\,\,(a\neq 0)## there is a natural number ##n## such that ##|na|>|b|\,.## We consider the rational numbers. The usual absolute value
$$|x| = \begin{cases} x &\text{ if }x\geq 0 \\ -x &\text{ if }x<0\end{cases}$$
is Archimedean, whereas the trivial value
$$|x|_0 = \begin{cases} 0 &\text{ if }x = 0 \\ 1 &\text{ if }x\neq 0\end{cases}$$
is not.
Determine all non-trivial and non-Archimedean value functions on ##\mathbb{Q}\,.##
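For context on the definitions in problem 9 (a hedged illustration only — it verifies the axioms for one standard candidate, it does not show these exhaust the answer): the ##p##-adic absolute value ##|x|_p = p^{-v_p(x)}## on ##\mathbb{Q}## is non-trivial, and it even satisfies the stronger ultrametric inequality, which rules out the Archimedean property. All function names below are mine.

```python
from fractions import Fraction

def vp(x, p):
    # p-adic valuation of a nonzero rational: power of p in x
    x = Fraction(x)
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def abs_p(x, p):
    return 0.0 if x == 0 else float(p) ** (-vp(x, p))

p = 5
xs = [Fraction(50), Fraction(3, 25), Fraction(-7, 10)]
for x in xs:
    for y in xs:
        # multiplicativity |xy| = |x||y|
        assert abs(abs_p(x * y, p) - abs_p(x, p) * abs_p(y, p)) < 1e-12
        if x + y != 0:
            # ultrametric inequality, stronger than the triangle inequality
            assert abs_p(x + y, p) <= max(abs_p(x, p), abs_p(y, p)) + 1e-12
```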

10. (solved by @nuuskur ) For a set ##X## let $$\mathcal{B}(X) = \{\,f\, : \,X\longrightarrow \mathbb{R}\,:\,\sup_{x\in X}\{\,|f(x)|\,\}=:||f||_\infty <\infty\,\}$$
be the space of all bounded functions on ##X\,.## We define a metric on ##\mathcal{B}(X)## by ##d(f,g)=||f-g||_\infty\,.##
• Show that ##(\mathcal{B}(X)\, , \,d)## is complete.
• Let ##(X,d)## be a metric space and ##a\in X\,.## Prove that the function
$$\phi_a\, : \,X \longrightarrow \mathcal{B}(X)\, , \,\phi_a(x)=d(x,.)-d(a,.)$$
is an isometry of ##X## into ##\mathcal{B}(X)\,.##
• Show that the closure of ##\operatorname{im}(\phi_a)## is a completion of ##X \cong \phi_a(X)\,.##
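A finite toy version of the isometry claim in problem 10 (an illustrative sketch; the point set is arbitrary and `math.dist` is the Euclidean metric): on a finite metric space, ##\sup_z |d(x,z)-d(y,z)|## equals ##d(x,y)##, and the base point ##a## cancels out of the difference ##\phi_a(x)-\phi_a(y)##.

```python
import itertools
import math

pts = [(0, 0), (1, 0), (0, 2), (3, 4)]
d = lambda p, q: math.dist(p, q)  # Euclidean metric on the sample points

def phi_dist(x, y):
    # ||phi_a(x) - phi_a(y)||_inf over the finite space; a drops out
    return max(abs(d(x, z) - d(y, z)) for z in pts)

for x, y in itertools.combinations(pts, 2):
    # triangle inequality gives <=, and z = x attains equality
    assert abs(phi_dist(x, y) - d(x, y)) < 1e-12
```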


## Answers and Replies

Citan Uzuki
I have a solution for problem 1:

Let $r \in R$ and let $a$ be a right-inverse of $r$. Let $X = \{x \in R: rx=0\}$. $X$ is a right ideal of $R$. Suppose $X$ is finite. For each $y\in R$, let $\phi_y: X \rightarrow X : x \mapsto xy$. Observe that $\phi_a \circ \phi_r = \phi_{ra} = \mathrm{id}_X$, so $\phi_a$ is surjective. Since $X$ is finite, this implies that $\phi_a$ is also injective. Now, $ar-1\in X$ and $\phi_a (ar-1) = (ar-1)a = a-a = 0$, so by the injectivity of $\phi_a$, it follows that $ar-1 = 0$, and hence $ar=1$. But if $r$ has no left inverse, $ar \neq 1$, so $X$ is infinite and the set $\{a+x : x \in X\}$ is an infinite family of right inverses of $a$.

Mentor
2021 Award
Very elegant. It would have been perfect without the typo on the very last letter. My proof constructs from ##N## right inverses an ##(N+1)##st with basically the same trick, but yours is nicer.

The theorem is named after Kaplansky.

lpetrich
Solution of Problem 9
A value function is a map ##|\,.\,|\, : \,F \longrightarrow \mathbb{R}_{\geq 0}## from the field ##F## to the nonnegative real numbers. It satisfies
$$|x| = 0 \leftrightarrow x = 0 \\ |x y| = |x| |y| \\ |x + y| \leq |x| + |y|$$
For the second property, we have ##|x^2| = |x|^2## and likewise for higher powers: ##|x^p| = |x|^p## for all positive-integer ##p##.

For ##x = 1##, ##|1| = |1|^p## for all positive-integer ##p##, meaning that ##|1| = 1##. Likewise, for ##x = -1## and ##p = 2##, we have ##|1| = |-1|^2##. Since the value function is a nonnegative real number, ##|-1| = 1\,.##

Zero value of ##p##? ##|x^0| = |x|^0## gives ##1 = |x|^0##, so the identity holds there.

Negative value of ##p##? Consider reciprocals. Let ##y## satisfy ##xy = 1##. Then, ##|xy| = 1 = |x||y|##, meaning that ##|y| = |x|^{-1}##. This result is easily extended to other negative-integer powers, so ##|x^p| = |x|^p## for all integers ##p##.

Applying the identity to products of powers gives ## |x^p \cdot y^q| = |x|^p \cdot |y|^q##.

Every nonzero rational number can be expressed in the form ##q = \pm \prod_i (p_i)^{m_i}## where the p's are primes and the m's integers. For integers, the m's must be nonnegative integers, and dividing gives m = m(numerator) - m(denominator). Thus,
$$|q| = |\pm \prod_i (p_i)^{m_i}| = \prod_i |p_i|^{m_i}$$

For real numbers, it can easily be shown that ##|x| = (\text{abs}(x))^c## for some power ##c##.

Let us now consider the third identity. For real numbers, it is easy: ##|2x| \leq 2|x|##, meaning that the power ##c \leq 1##. For rational numbers, it is more difficult. Consider adding a number ##q## to itself ##p_x## times, where ##p_x## is one of its primes. Then,
$$|p_x q| = |p_x| ^{m_x+1} \prod_{i \neq x} |p_i|^{m_i} \leq p_x |p_x| ^{m_x} \prod_{i \neq x} |p_i|^{m_i}$$
Thus, ##|p| \leq p## for every prime p. The third identity gives some additional constraints, like ##|3| \leq |2| + 1## and ##|5| \leq |3| + |2|##. However, I am unable to proceed further.

So to be nontrivial, in the real case, ##c \ne 1##, and in the rational case, for all primes p, at least one value of ##|p|## must be different from 1.

-

Now the Archimedean property of the value function. For nonzero a and b in F, there is some positive integer n such that ##|nb| > |a|##. Since in a field, division is always possible for nonzero divisors, we find ##|n| > |a/b|##, or, for simplicity, ##|n| > |a|##. For real numbers, ##|x|## for nonzero x ranges over the entire range of positive real numbers, except when ##c = 0##. This means that ##|n|## must be capable of being arbitrarily large, and thus ##c > 0## for being Archimedean. For rational numbers, it is more complicated. If at least one of the ##|p|## values, for primes p, is not 1, then ##|x|## can range over the entire range of real numbers, since x can have both positive and negative powers of primes. To make ##|n|## arbitrarily large, then at least one of the ##|p|##'s must be greater than 1. Thus, to be non-Archimedean, rational numbers must have ##|p| \leq 1## for all primes p.

Mentor
2021 Award
I'm not sure you understood the problem correctly. At least I couldn't match all of what you wrote. As a general hint, now that I've read a couple of your proofs, it helps a lot if you insert lines which state what you will do next and why before you start an argument. Otherwise your readers will always need a piece of paper to understand you, which is a bit annoying. Once you have finished a proof, structure it with lines like: "Next we will show ..." or "... is even true for ... because ...". Especially on a platform like the internet, people aren't caught up in the world you're in when you write your proofs, so help them. And on the internet, people are notoriously impatient and lazy to think, i.e. the audience is a different one than if you wrote a paper for university. And you always write for those who will read it. Of course you don't need to follow this advice, but I think it makes life easier. That's another advantage we offer here: practice for free!

The statement of #9 is:

Given a function ##v\, : \,\mathbb{Q} \longrightarrow \mathbb{R}^+_0## with the properties of a value function, determine (explicitly) all those functions ##v## which are neither trivial nor Archimedean.

You have found major steps of the proof, but with some unnecessary bypasses. The arguments don't require the use of the fundamental theorem of arithmetic and are thus shorter. Just translate non-Archimedean and non-trivial into equations / conditions. Also, where do you get real ##x## from? And one of your findings isn't restricted to primes.

lpetrich
My solution for the first part of Question 7:
The orbit of a vector v under a matrix group G is the set of all vectors ##M.v## for ##M \in G##. So let us first find all the possible orbits of real-number vectors under the group ##O_n(R)##, and once we have done that, let us find the orbit that contains ##x = (0, 0, 0, \dots, 1)##.

From the definition of that group, for all ##A \in O_n(R)##, and for all vectors v and w in ##R^n##, we have ##(Av) \cdot (Aw) = v \cdot w##. The orbit of v is the set of all vectors ##Av##. The defining constraint of the group's elements suggests calculating the magnitudes of all the vectors in the orbit: ##(Av) \cdot (Av)##. So in that constraint, we set w equal to v, and we find that ##(Av) \cdot (Av) = v \cdot v##, or ##|(Av)| = |v|## for all A. Thus, all vectors in an orbit under ##O_n(R)## have the same magnitude.

This does not mean that all vectors with the same magnitude are in one orbit, and I will settle that issue by construction. For every v in ##R^n##, I will construct an A in ##O_n(R)## such that ##Av = z(|v|)## where vector ##z(x) = (0, 0, 0, \dots, x)##. This means that the orbit of v contains z(|v|). It is easy to prove that if the orbit of a contains b, then the orbit of b contains a. Thus, the orbit of z(w) contains every vector v such that |v| = w.

For a one-dimensional space, the construction is rather trivial, since ##O_1## = {((1)),((-1))}. Thus, vector (x) has orbit {(x),(-x)}.

For more dimensions, I will demonstrate the method of solution with a two-dimensional space. Find A such that ##A.v = z(w)##, where A is orthogonal and v is not identically zero. I will use a two-dimensional rotation matrix:
$$A = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$
where ##\theta## is the angle of rotation. This gives us
$$v_1 \cos\theta - v_2 \sin\theta = 0 ,\ v_1 \sin\theta +v_2 \cos\theta = w$$
One can easily solve the first equation for ##\theta##: ##\tan\theta = v_1 / v_2##, and selecting the domain of ##\theta## to make w nonnegative gives
$$A = \frac{1}{\sqrt{v_1{}^2 + v_2{}^2}} \begin{pmatrix} v_2 & -v_1 \\ v_1 & v_2 \end{pmatrix}$$
and ##w = \sqrt{v_1{}^2 + v_2{}^2}##. If both v components are zero, then this matrix is arbitrary.

I now extend this result to more dimensions. For that purpose, I define matrix ##A(k,a,b)## to have the 2D matrix for a and b at indices k and k+1, and identity-matrix values elsewhere. This matrix is for zeroing out the kth component of a vector with appropriate choices of a and b. For at least one of a and b nonzero,
$$A_{k,k} = A_{k+1,k+1} = \frac{b}{\sqrt{a^2+b^2}} ,\ - A_{k,k+1} = A_{k+1,k} = \frac{a}{\sqrt{a^2+b^2}} ,\ A_{other\ i,j} = \delta_{ij}$$
Otherwise, this matrix is made the identity matrix for convenience. This matrix is rather obviously a member of ##O_n(R)##.

Applying these matrices to zeroing out components of vector v,
$$A(1, v_1, v_2) v = (0, \sqrt{v_1{}^2 + v_2{}^2}, v_3, v_4, ..., v_n) \\ A(2, \sqrt{v_1{}^2 + v_2{}^2}, v_3) A(1, v_1, v_2) v = (0, 0, \sqrt{v_1{}^2 + v_2{}^2 + v_3{}^2}, v_4, ..., v_n)$$
etc. The next matrix to be applied is ##A(3, \sqrt{v_1{}^2 + v_2{}^2 + v_3{}^2}, v_4)##, and the sequence ends with ##A(n-1, \sqrt{v_1{}^2 + v_2{}^2 + \cdots + v_{n-1}{}^2}, v_n)##. The product of these matrices, ##A(n-1,\ldots)A(n-2,\ldots)\cdots A(2,\ldots)A(1,\ldots)##, I will call ##A(v)##. Only the last component of v survives, and
$$A(v) v = z(|v|)$$
Since A(v) is in ##O_n(R)##, this is the result that I had wanted earlier, and this implies that all vectors in ##R^n## with the same magnitude are in the same orbit under ##O_n(R)##. Since I had used rotations, this result is also true for ##SO_n(R)## for n > 1. For ##SO_1##, however, every vector is in its own orbit.
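The successive zeroing above can be sketched in a few lines of code (a minimal, hedged version of the ##A(k,a,b)## construction; all names are mine): each step applies the ##2\times 2## block ##\frac{1}{\sqrt{a^2+b^2}}\begin{pmatrix}b&-a\\a&b\end{pmatrix}## to components ##k, k+1##, sending ##(a,b)## to ##(0,\sqrt{a^2+b^2})##.

```python
import math

def rotate_to_last(v):
    # Successively rotate in the (k, k+1)-plane so component k becomes 0
    # and the accumulated length moves into component k+1.
    w = list(v)
    for k in range(len(w) - 1):
        a, b = w[k], w[k + 1]
        r = math.hypot(a, b)
        if r == 0:
            continue  # rotation is arbitrary here; use the identity
        w[k], w[k + 1] = 0.0, r
    return w

v = [3.0, -4.0, 12.0]
z = rotate_to_last(v)
assert all(abs(c) < 1e-12 for c in z[:-1])
assert abs(z[-1] - math.sqrt(sum(c * c for c in v))) < 1e-12
```

For the sample vector the steps are ##(3,-4,12)\to(0,5,12)\to(0,0,13)##, so indeed ##A(v)v=z(|v|)##.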

Finally, I consider the orbit of x = (0,0,0,...,1) under ##O_n(R)##. It is easy to show that |x| = 1, and thus the orbit of x is all unit vectors in ##R^n##.

I tried to be more verbose and descriptive here.

Mentor
2021 Award
My solution for the first part of Question 7:

The orbit of a vector v under a matrix group G is the set of all vectors ##M.v## for ##M \in G##. So let us first find all the possible orbits of real-number vectors under the group ##O_n(R)##, and once we have done that, let us find the orbit that contains ##x = (0, 0, 0, \dots, 1)##.

From the definition of that group, for all ##A \in O_n(R)##, and for all vectors v and w in ##R^n##, we have ##(Av) \cdot (Aw) = v \cdot w##. The orbit of v is the set of all vectors ##Av##. The defining constraint of the group's elements suggests calculating the magnitudes of all the vectors in the orbit: ##(Av) \cdot (Av)##, So in that constraint, we set w equal to v, and we find that ##(Av) \cdot (Av) = v \cdot v##, or ##|(Av)| = |v|## for all A. Thus, all vectors in an orbit under ##O_n(R)## have the same magnitude.

This does not mean that all vectors with the same magnitude are in one orbit, and I will settle that issue by construction. For every v in ##R^n##, I will construct an A in ##O_n(R)## such that ##Av = z(|v|)## where vector ##z(x) = (0, 0, 0, \dots, x)##. This means that the orbit of v contains z(|v|). It is easy to prove that if the orbit of a contains b, then the orbit of b contains a.
And now I need a machete.
Thus, the orbit of z(w) contains every vector v such that |v| = w.
What role does ##w## now play? You introduced ##v## then ##(0,\ldots ,0,x)##, next ##z(x)## then ##a## and ##b## and now ##w##??? You have "proved" symmetry (the ##a##-##b##-argument) within an orbit by "easy to prove" which I can accept as we have a group. But now you conclude transitivity by a simple "thus" from symmetry, which I have difficulties with, as these are two independent properties which are not connectable by "thus".

What you meant was probably the following:
Given our (still to be proven) Lemma: ##(0,\ldots,0,|v|) \in O_n(\mathbb{R}).v## then there are matrices ##A_v,A_w \in O_n(\mathbb{R})## such that
$$A_w.w=(0,\ldots,0,|w|)=(0,\ldots,0,|v|)=A_v.v \text{ for all } v,w \in \mathbb{R}^n \text{ with } |v|=|w|$$
and all vectors of equal length are within the same orbit, which implies transitivity on the circle with radius ##|v|\,.##

So you had all what's needed, but the jungle of ##a,b,v,w,x,z## made it unnecessarily difficult to read.
For a one-dimensional space, the construction is rather trivial, since ##O_1## = {((1)),((-1))}. Thus, vector (x) has orbit {(x),(-x)}.

For more dimensions, I will demonstrate the method of solution with a two-dimensional space. Find A such that ##A.v = z(w)##, where A is orthonormal and v is not identically zero. I will use a two-dimensional rotation matrix:
$$A = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$
where ##\theta## is the angle of rotation. This gives us
$$v_1 \cos\theta - v_2 \sin\theta = 0 ,\ v_1 \sin\theta +v_2 \cos\theta = w$$
One can easily solve the first equation for ##\theta##: ##\tan\theta = v_1 / v_2##, and selecting the domain of ##\theta## to make w nonnegative gives
$$A = \frac{1}{\sqrt{v_1{}^2 + v_2{}^2}} \begin{pmatrix} v_2 & -v_1 \\ v_1 & v_2 \end{pmatrix}$$
and ##w = \sqrt{v_1{}^2 + v_2{}^2}##. If both v components are zero, then this matrix is arbitrary.

I now extend this result to more dimensions. For that purpose, I define matrix ##A(k,a,b)## to have the 2D matrix for a and b at indices k and k+1, and identity-matrix values elsewhere. This matrix is for zeroing out the kth component of a vector with appropriate choices of a and b. For at least one of a and b nonzero,
$$A_{k,k} = A_{k+1,k+1} = \frac{b}{\sqrt{a^2+b^2}} ,\ - A_{k,k+1} = A_{k+1,k} = \frac{a}{\sqrt{a^2+b^2}} ,\ A_{other\ i,j} = \delta_{ij}$$
Otherwise, this matrix is made the identity matrix for convenience. This matrix is rather obviously a member of ##O_n(R)##.

Applying these matrices to zeroing out components of vector v,
$$A(1, v_1, v_2) v = (0, \sqrt{v_1{}^2 + v_2{}^2}, v_3, v_4, ..., v_n) \\ A(2, \sqrt{v_1{}^2 + v_2{}^2}, v_3) A(1, v_1, v_2) v = (0, 0, \sqrt{v_1{}^2 + v_2{}^2 + v_3{}^2}, v_4, ..., v_n)$$
etc. The next matrix to be applied is ##A(3, \sqrt{v_1{}^2 + v_2{}^2 + v_3{}^2}, v_4)##, and the sequence ends with ##A(n-1, \sqrt{v_1{}^2 + v_2{}^2 + \cdots + v_{n-1}{}^2}, v_n)##. The product of these matrices, A(n-1,...).A(n-2,...)... A(2,...).A(1,...) I will call A(v). Only the last component of v survives, and
$$A(v) v = z(|v|)$$
Since A(v) is in ##O_n(R)##, this is the result that I had wanted earlier, and this implies that all vectors in ##R^n## with the same magnitude are in the same orbit under ##O_n(R)##. Since I had used rotations, this result is also true for ##SO_n(R)## for n > 1. For ##SO_1##, however, every vector is in its own orbit.

Finally, I consider the orbit of x = (0,0,0,...,1) under ##O_n(R)##. It is easy to show that |x| = 1, and thus the orbit of x is all unit vectors in ##R^n##.

I tried to be more verbose and descriptive here.
Not really. Descriptive would have been:

##O_n(\mathbb{R}).v \subseteq |v|\cdot \mathbb{S}^{n-1}##

Now we show the other inclusion:
For ##n=1## we are done. The other inclusion ##"\supseteq "## also holds, as ##O_n(\mathbb{R})## operates transitively on ##\mathbb{S}^{n-1}\,:## For ##v \neq w## in ##||x||\,\cdot \,\mathbb{S}^{n-1}##, i.e. ##||v||=||x||=||w||##, we consider the plane ##\operatorname{span}_\mathbb{R}\{\,v,w\,\}\; , \;n>1\,##. With a rotation axis in its origin and perpendicular to the plane, we can rotate ##v## into ##w## by an appropriate element of ##O_n(\mathbb{R})##, and hence ##v,w## are in the same orbit, which is the orbit of ##x## - just select ##w=x\,.##

That would have been descriptive. You still have the two dimensional case which you've shown, but instead of complicated coordinates, you simply could have chosen the coordinate system in a way, that the higher dimensions are just the two dimensional case you have shown.

And I still don't know what the orbit is. It is somewhere in your calculations, and everybody can search for it, but don't you think a simple final line ##O_n(\mathbb{R}).v = |v|\cdot \mathbb{S}^{n-1}## would have been nice?

In my opinion there are far too many "easys", "obviouses" and "trivials", especially since you used ten times the space which was actually needed. If you are so detailed, then save the "clearlys" and write those few lines as well. The first three paragraphs are all in
$$\langle Av,Aw \rangle = (Av)^{\tau} (Aw) = v^{\tau}A^{\tau}A w = v^{\tau}(A^{\tau}A) w = v^{\tau} w = \langle v,w \rangle$$
and an inspection of it does the job.

Gold Member
Problem 3: Suppose $X$ is complete. Let $s_n=\sum_{i=1}^n x_i$. Fix $\varepsilon>0$ and let $N\in\mathbb{N}$ be such that $\sum_{i=N}^\infty |x_i|<\varepsilon$ (to do this, just take $N$ large enough that the sum of the first $N-1$ terms is within $\varepsilon$ of the value of the full sum). Then for $n,m>N$, we have $|s_n-s_m|<\varepsilon$ so the sequence $(s_n)$ is Cauchy and hence converges. The convergent value is the sum $\sum_{i=1}^\infty x_i$.

For the other direction, suppose $(y_n)$ is a Cauchy sequence. Let $y_{n_1},y_{n_2},\ldots$ be a subsequence such that $|y_{n_{k+1}}-y_{n_k}|<2^{-k}$. To do this, let $N_k$ be such that $|y_n-y_m|<2^{-k}$ for $n,m>N_k$. Then choose $n_k>N_k$ for each $k$. Then the sum $\sum |y_{n_{k+1}}-y_{n_k}|$ converges, so the sum $\sum y_{n_{k+1}}-y_{n_k}$ does too. This tells us that the subsequence $(y_{n_k})$ converges (to the sum minus $y_{n_1}$). By easy epsilonics, any Cauchy sequence with a convergent subsequence also converges.

Problem 6a Let $n$ be the number of distinct conjugate subgroups $gUg^{-1}$. We see $n\leq[G:U]$ because if $x,g$ are in the same coset of $U$, i.e.$x^{-1}g\in U$, then $x^{-1}gU(x^{-1}g)^{-1}=U$, which is equivalent to $gUg^{-1}=xUx^{-1}$. Since the identity element is in each conjugate subgroup, we have $\left|\bigcup_{g\in G}gUg^{-1}\right|\leq n|U|-n+1\leq |G|-n+1$. If $n>1$, then this quantity is smaller than $|G|$ and we're done. Otherwise, if $n=1$, then the quantity $n|U|-n+1$ is equal to $|U|$, which is smaller than $|G|$ by assumption.

6b: Fix $x\in X$. Let $H\subset G$ be the stabilizer of $x$. Note that $H$ must be a strict subset of $G$ since by transitivity there is some element of $G$ taking $x$ to a different element of $X$. Next, note that $gHg^{-1}$ is the stabilizer of $gx$. So, $\bigcup_{g\in G}gHg^{-1}$ is the set of elements of $G$ which stabilize some element in the orbit of $x$, which is all of $X$. By part a, this union is not all of $G$. By the above, any element of $G$ outside of this union fixes no points of $X$.
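The counting argument in 6a can be brute-force checked in the smallest interesting case (an editorial sketch; the group ##S_3## and the subgroup of order two are illustrative choices).

```python
from itertools import permutations

# S_3 as tuples a with a[i] = image of i
G = list(permutations(range(3)))

def mul(a, b):
    # composition (a*b)(i) = a(b(i))
    return tuple(a[b[i]] for i in range(len(a)))

def inv(a):
    r = [0] * len(a)
    for i, ai in enumerate(a):
        r[ai] = i
    return tuple(r)

# a proper subgroup: Z/2 generated by the transposition (0 1)
U = {tuple(range(3)), (1, 0, 2)}

union = {mul(mul(g, u), inv(g)) for g in G for u in U}
assert union < set(G)   # the union of all conjugates is a proper subset
assert len(union) == 4  # identity plus the three transpositions
```

This matches the bound ##n|U| - n + 1## from the argument above: here ##n=3##, ##|U|=2##, giving ##4 < 6##.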

Mentor
2021 Award
Correct.

As this is a completion criterion which has some value in its own right, let me add a proof, which is mainly the same as yours, but - which I find - due to structure a bit easier to read:

Assume ##X## is complete and ##\sum_{n=1}^\infty ||x_n|| < \infty\,.## Then the partial sums ##s_m := \sum_{n=1}^m x_n## form a Cauchy sequence, since for ##k<m##
$$||s_m-s_k|| \leq \sum_{n=k+1}^m ||x_n|| \leq \sum_{n=k+1}^\infty ||x_n|| \longrightarrow 0\,,$$
and as ##X## is complete, the series ##\sum_{n=1}^\infty x_n## converges in ##X##, with ##||\sum_{n=1}^\infty x_n|| \leq \sum_{n=1}^\infty ||x_n||\,.##

Let on the other hand ##(x_n)_{n\in \mathbb{N}}\subseteq X## be a Cauchy sequence. Then we set ##a_n := x_{N(n+1)}-x_{N(n)}##, where the indices ##N(1)<N(2)<\ldots## are chosen such that ##||x_m-x_k||< \varepsilon_n:=2^{-n}## for all ##m,k\geq N(n)\,.## Since we now have
$$\sum_{n=1}^\infty ||a_n|| \leq \sum_{n=1}^\infty 2^{-n}=1$$
the series ##\sum_{n=1}^\infty a_n## converges in ##X## by our assumption. Hence
$$\sum_{n=1}^\infty a_n =\lim_{M \to \infty}\sum_{n < M}a_n = \lim_{M \to \infty} \left(-x_{N(1)}+x_{N(M)}\right)=-x_{N(1)}+\lim_{M\to \infty}x_{N(M)}$$
so the subsequence ##(x_{N(M)})_{M\in\mathbb{N}}## converges in ##X##. Since a Cauchy sequence with a convergent subsequence converges, the limit of ##(x_n)_{n\in \mathbb{N}}## exists in ##X##, and ##(X\;,\;||\,.\,||\,)## is complete.
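To make the index choice concrete, here is a small Python sanity check (my own illustration, not part of the proof): for the Cauchy sequence of alternating harmonic partial sums in ##\mathbb{R}##, the indices ##N(n)=2^{n+1}## satisfy the ##2^{-n}## tail bound, the ##a_n## telescope as claimed, and the subsequence is already close to the limit ##\ln 2##.

```python
import math

# x_n: partial sums of the alternating harmonic series (a Cauchy sequence in R)
def x(n):
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

# Alternating-series bound: |x_m - x_k| <= 2/(N+1) for m, k >= N,
# so N(n) = 2^(n+1) guarantees |x_m - x_k| < 2^(-n) for all m, k >= N(n).
N = lambda n: 2 ** (n + 1)

M = 10
a = [x(N(n + 1)) - x(N(n)) for n in range(1, M)]   # a_n = x_{N(n+1)} - x_{N(n)}

# absolute convergence: sum |a_n| is dominated by sum 2^{-n} <= 1
assert sum(abs(t) for t in a) <= 1.0
# telescoping: sum_{n<M} a_n = -x_{N(1)} + x_{N(M)}
assert abs(sum(a) - (-x(N(1)) + x(N(M)))) < 1e-12
# the subsequence x_{N(M)} is already close to the limit ln 2
assert abs(x(N(M)) - math.log(2)) < 2 ** -M
```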

Mentor
2021 Award
Problem 6a: Let $n$ be the number of distinct conjugate subgroups $gUg^{-1}$. We see $n\leq[G:U]$ because if $x,g$ are in the same coset of $U$, i.e. $x^{-1}g\in U$, then $x^{-1}gU(x^{-1}g)^{-1}=U$, which is equivalent to $gUg^{-1}=xUx^{-1}$. Since the identity element is in each conjugate subgroup, we have $\left|\bigcup_{g\in G}gUg^{-1}\right|\leq n|U|-n+1\leq |G|-n+1$. If $n>1$, then this quantity is smaller than $|G|$ and we're done. Otherwise, if $n=1$, then the quantity $n|U|-n+1$ is equal to $|U|$, which is smaller than $|G|$ by assumption.
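The counting argument can be verified in the smallest nonabelian example. The following Python sketch (all names are mine) takes ##G=S_3## and ##U## generated by a transposition: there are ##[G:U]=3## conjugates sharing only the identity, so the union has ##3(|U|-1)+1=4<6## elements.

```python
# Sanity check of 6a in G = S_3 with the proper subgroup U = {e, (0 1)}.
# Permutations are stored as tuples p with p[i] = image of i.
from itertools import permutations

def compose(p, q):
    """(p ∘ q)[i] = p[q[i]]"""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = set(permutations(range(3)))          # S_3, order 6
U = {(0, 1, 2), (1, 0, 2)}               # {identity, transposition (0 1)}

union_of_conjugates = set()
for g in G:
    union_of_conjugates |= {compose(compose(g, u), inverse(g)) for u in U}

# 3 conjugates sharing only the identity: 3*(2-1)+1 = 4 < 6 = |G|
assert union_of_conjugates != G
assert len(union_of_conjugates) == 4
print(len(G), len(union_of_conjugates))  # prints: 6 4
```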

6b: Fix $x\in X$. Let $H\subset G$ be the stabilizer of $x$. Note that $H$ must be a strict subset of $G$ since by transitivity there is some element of $G$ taking $x$ to a different element of $X$. Next, note that $gHg^{-1}$ is the stabilizer of $gx$. So, $\bigcup_{g\in G}gHg^{-1}$ is the set of elements of $G$ which stabilize some element in the orbit of $x$, which is all of $X$. By part a, this union is not all of $G$. By the above, any element of $G$ outside of this union fixes no points of $X$.
Part (b) is a bit short in my opinion; I find that physicists, who use group operations a lot, should be firm in the terminology, and the exercise was meant to practice this. Nevertheless it is correct and contains all the crucial arguments.

Gold Member
Wow. You guys are so smart... Correct.

As this is a completeness criterion which has some value in its own right, let me add a proof which is essentially the same as yours but, I find, a bit easier to read due to its structure:

Assume ##X## is complete and ##\sum_{n=1}^\infty ||x_n|| < \infty\,.## Then the partial sums ##s_m := \sum_{n=1}^m x_n## form a Cauchy sequence, since for ##k<m##
$$||s_m-s_k|| \leq \sum_{n=k+1}^m ||x_n|| \leq \sum_{n=k+1}^\infty ||x_n|| \longrightarrow 0\,,$$
and as ##X## is complete, the series ##\sum_{n=1}^\infty x_n## converges in ##X##, with ##||\sum_{n=1}^\infty x_n|| \leq \sum_{n=1}^\infty ||x_n||\,.##

Let on the other hand ##(x_n)_{n\in \mathbb{N}}\subseteq X## be a Cauchy sequence. Then we set ##a_n := x_{N(n+1)}-x_{N(n)}##, where the indices ##N(1)<N(2)<\ldots## are chosen such that ##||x_m-x_k||< \varepsilon_n:=2^{-n}## for all ##m,k\geq N(n)\,.## Since we now have
$$\sum_{n=1}^\infty ||a_n|| \leq \sum_{n=1}^\infty 2^{-n}=1$$
the series ##\sum_{n=1}^\infty a_n## converges in ##X## by our assumption. Hence
$$\sum_{n=1}^\infty a_n =\lim_{M \to \infty}\sum_{n < M}a_n = \lim_{M \to \infty} \left(-x_{N(1)}+x_{N(M)}\right)=-x_{N(1)}+\lim_{M\to \infty}x_{N(M)}$$
so the subsequence ##(x_{N(M)})_{M\in\mathbb{N}}## converges in ##X##. Since a Cauchy sequence with a convergent subsequence converges, the limit of ##(x_n)_{n\in \mathbb{N}}## exists in ##X##, and ##(X\;,\;||\,.\,||\,)## is complete.

Isn't this a standard functional analysis fact? It was the first theorem we proved about Banach spaces and I had it on an exam this year.

lpetrich
I will now solve part two of Question 7.
The solution will have these parts:

(1) Show that a stabilizer is a group, (2) find the stabilizer of ##O_n(R)## under ##x = (0,0,0,\dots,0,1) \in R^n##, a stabilizer which I will call SSX, (3) show that SSX is isomorphic to a subgroup of ##O_{n-1}(R)##, (4) show that ##O_{n-1}(R)## is isomorphic to a subgroup of SSX, and (5) show that ##O_{n-1}(R)## is isomorphic to SSX.

(1) Show that a stabilizer is a group.

Consider a group G and a set of entities X, where X is an orbit over G. That is, for all g in G and x in X, ##x' = gx## is in X, and also ##(gh)x = g(hx)## for all g, h in G and x in X.

Let the stabilizer for element x of X be S, a subset of G. I will now show that it has all the group properties.
• Closure: for s, t in S, ##(st)x = s(tx) = sx = x##. Thus, st is also in S.
• Associativity: follows from the associativity of G.
• Identity. The identity of G, e, satisfies ##ex = x## for all x in X. Thus, e is a member of every stabilizer.
• Inverse. Consider ##(s^{-1} s) x##. In that form, it gives x. But from the second orbit property, ##(s^{-1} s) x = s^{-1} (sx) = s^{-1} x##. Thus, the inverse of every element of S is also in S.
Thus showing that S is a group, a subgroup of G. This means that in this problem, SSX is a group.

(2) find the stabilizer of ##O_n(R)## under x.

For every element A of SSX, the condition ##Ax = x## forces the last column: ##A_{in} = 0## for ##i < n##, and ##A_{nn} = 1##.

This may seem like a very limited constraint, but since SSX is a group, we can derive another one: ##A^{-1} x = x##, and as ##A^{-1} = A^T## for orthogonal matrices, also ##A^T x = x##. Applied to the transpose of A, this gives ##A_{ni} = 0## for ##i < n## and ##A_{nn} = 1##.

The matrices A are thus block-diagonal, with the first ##n-1## indices forming a matrix that I shall call B, and the last index carrying a 1:
$$A = \begin{pmatrix} B & 0 \\ 0 & 1 \end{pmatrix}$$

(3) Show that SSX is isomorphic to a subgroup of ##O_{n-1}(R)##.

For that, we need to evaluate the orthonormality relations on the elements of SSX. That gives us
• ##(AA^T)_{ij} = (BB^T)_{ij} = (A^TA)_{ij} = (B^TB)_{ij} = \delta_{ij}##
• ##(AA^T)_{in} = (A^TA)_{in} = 0##, ##(AA^T)_{nj} = (A^TA)_{nj} = 0##
• ##(AA^T)_{nn} = (A^TA)_{nn} = 1##
for i and j < n. This means that the B's are also orthogonal, and thus in ##O_{n-1}(R)##.

(4) Show that ##O_{n-1}(R)## is isomorphic to a subgroup of SSX.

With the construction of the SSX matrices mentioned earlier,
$$A = \begin{pmatrix} B & 0 \\ 0 & 1 \end{pmatrix}$$
for each element B in ##O_{n-1}(R)##, construct element A of SSX. That means that SSX contains a set of elements that has a bijection to ##O_{n-1}(R)##, and thus that ##O_{n-1}(R)## is isomorphic to a subgroup of SSX.

(5) Show that ##O_{n-1}(R)## is isomorphic to SSX.

This result follows from results (3) and (4), and thus, the elements of SSX have the form
$$\begin{pmatrix} \text{element of } O_{n-1}(R) & 0 \\ 0 & 1 \end{pmatrix}$$
for every element of ##O_{n-1}(R)##.

More generally, for sets A and B, if ##A \subseteq B## and ##B \subseteq A##, then ##A = B##.
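The block construction can be sanity-checked numerically. The following plain-Python sketch (helper names are mine) embeds a ##2\times 2## rotation ##B\in O_2(\mathbb{R})## as the block matrix ##A## and confirms that ##A## is orthogonal and fixes ##x=(0,0,1)##.

```python
import math

def matmul(A, B):
    """Naive matrix product, enough for this small check."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

def close(A, B, tol=1e-12):
    return all(abs(a - b) < tol for ra, rb in zip(A, B) for a, b in zip(ra, rb))

# B in O(2): a rotation by 0.7 radians
c, s = math.cos(0.7), math.sin(0.7)

# Embed as the block matrix A = [[B, 0], [0, 1]] in O(3)
A = [[c, -s, 0], [s, c, 0], [0, 0, 1]]

I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
assert close(matmul(A, transpose(A)), I3)    # A is orthogonal
assert [row[2] for row in A] == [0, 0, 1]    # A fixes x = (0, 0, 1), i.e. Ax = x
```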

Mentor
2021 Award
Isn't this a standard functional analysis fact? It was the first theorem we proved about Banach spaces and I had it on an exam this year.
IIRC I took it from an exam. One of my goals with these problems is to pass on some useful knowledge, too. It might not always work, but I think it provides more value than a clever trick which will be forgotten again. This way it is more like practice in standard proofs, so we can certainly discuss this point of view. But students here can practice without the risk of a bad grade and see where their weaknesses lie, e.g. in notation.

Mentor
2021 Award
I will now solve part two of Question 7.
The solution will have these parts:

(1) Show that a stabilizer is a group, (2) find the stabilizer of ##O_n(R)## under ##x = (0,0,0,\dots,0,1) \in R^n##...,
the stabilizer of ##x = (0,0,0,\dots,0,1) \in \mathbb{R}^n## under ##O_n(\mathbb{R})##
...
a stabilizer which I will call SSX, (3) show that SSX is isomorphic to a subgroup of ##O_{n-1}(R)##, (4) show that ##O_{n-1}(R)## is isomorphic to a subgroup of SSX, and (5) show that ##O_{n-1}(R)## is isomorphic to SSX.

(1) Show that a stabilizer is a group.

Consider a group G and a set of entities X, where X is an orbit over G. That is, for all g in G and x in X, ##x' = gx## is in X, and also ##(gh)x = g(hx)## for all g, h in G and x in X.

Let the stabilizer for element x of X be S, a subset of G. I will now show that it has all the group properties.
• Closure: for s, t in S, ##(st)x = s(tx) = sx = x##. Thus, st is also in S.
• Associativity: follows from the associativity of G.
• Identity.
• This is part of the definition of an operation and not automatically true.
The identity of G, e, satisfies ##ex = x## for all x in X. Thus, e is a member of every stabilizer.
• Inverse. Consider ##(s^{-1} s) x##. In that form, it gives x. But from the second orbit property, ##(s^{-1} s) x = s^{-1} (sx) = s^{-1} x##. Thus, the inverse of every element of S is also in S.
Thus showing that S is a group, a subgroup of G. This means that in this problem, SSX is a group.

(2) find the stabilizer of ##O_n(R)## under x.
See above: of ##x## under [the operation of] ##G##.
For every element A of SSX, the condition ##Ax = x## forces the last column: ##A_{in} = 0## for ##i < n##, and ##A_{nn} = 1##.

This may seem like a very limited constraint, but since SSX is a group, we can derive another one: ##A^{-1} x = x##, and as ##A^{-1} = A^T## for orthogonal matrices, also ##A^T x = x##. Applied to the transpose of A, this gives ##A_{ni} = 0## for ##i < n## and ##A_{nn} = 1##.
Not that it makes a lot of a difference, but why did you change from the natural mapping ##(A,x) \longmapsto A.x=Ax## to ##(A,x) \longmapsto xA\,?##
The matrices A are thus block-diagonal, with the first ##n-1## indices forming a matrix that I shall call B, and the last index carrying a 1:
$$A = \begin{pmatrix} B & 0 \\ 0 & 1 \end{pmatrix}$$

(3) Show that SSX is isomorphic to a subgroup of ##O_{n-1}(R)##.

For that, we need to evaluate the orthonormality relations on the elements of SSX. That gives us
• ##(AA^T)_{ij} = (BB^T)_{ij} = (A^TA)_{ij} = (B^TB)_{ij} = \delta_{ij}##
• ##(AA^T)_{in} = (A^TA)_{in} = 0##, ##(AA^T)_{nj} = (A^TA)_{nj} = 0##
• ##(AA^T)_{nn} = (A^TA)_{nn} = 1##
for i and j < n. This means that the B's are also orthogonal, and thus in ##O_{n-1}(R)##.

(4) Show that ##O_{n-1}(R)## is isomorphic to a subgroup of SSX.

With the construction of the SSX matrices mentioned earlier, ...
This is wrong, as it is what was shown under paragraph (3): ##\operatorname{Stab}_x(O_n)\subseteq O_{n-1}##.
In the other direction, you are not allowed to use any information about the stabilizer other than ##A.x=x##. What you have to show is that for ##B\in O_{n-1}##
$$\begin{bmatrix}B&0\\0&1\end{bmatrix}.\begin{bmatrix}0\\ \vdots \\ x_n \end{bmatrix}=\begin{bmatrix}0\\ \vdots \\ x_n \end{bmatrix}$$
which is trivial; however, you must not refer to "earlier", since earlier established the necessary condition and we now deal with its sufficiency.
$$A = \begin{pmatrix} B & 0 \\ 0 & 1 \end{pmatrix}$$
for each element B in ##O_{n-1}(R)##, construct element A of SSX. That means that SSX contains a set of elements that has a bijection to ##O_{n-1}(R)##, and thus that ##O_{n-1}(R)## is isomorphic to a subgroup of SSX.

(5) Show that ##O_{n-1}(R)## is isomorphic to SSX.

This result follows from results (3) and (4), and thus, the elements of SSX have the form
$$\begin{pmatrix} \text{element of } O_{n-1}(R) & 0 \\ 0 & 1 \end{pmatrix}$$
for every element of ##O_{n-1}(R)##.
More generally, for sets A and B, if ##A \subseteq B## and ##B \subseteq A##, then ##A = B##.
This is correct up to (4), if a little too detailed, but I guess this is my fault. E.g. (1), (3) and (4) are more or less trivial, and a remark that matrix multiplication works blockwise would have done it. We're now in the intermediate challenge thread and the audience is a bit different. That a stabilizer is a group, which is btw. usually denoted ##\operatorname{Stab}_x(G)##, can almost be seen immediately. The embedding ##O_{n-1}\subseteq O_n## is a standard one; the only question is whether the new rotation axis is inserted first or last.

Epsilonics inbound!
Per definition of completeness we must show every Cauchy sequence ##f_n\in \mathcal B(X), n\in\mathbb N ## converges in ##\mathcal B(X)##.

Let ##x\in X##, then ##f_n(x)\in\mathbb R, n\in\mathbb N ## is a Cauchy sequence in ##\mathbb R##, therefore it converges, since ##\mathbb R ## is complete.
Define ##f : X\to \mathbb R,\quad x\mapsto \lim_{n\to\infty }f_n(x) ##. Clearly, ##f_n\xrightarrow[n\to\infty]{}f## pointwise.

We show now that ##f\in\mathcal B(X) ##. Pick ##M ## such that
$$m,n>M\implies \| f_n-f_m\|_\infty < 1.$$
Note that ##\|f-f_M\|_\infty \leq 1 ## since for every ##x\in X ##
$$|f(x) - f_M(x)| = |\lim _{n\to\infty} f_n(x) - f_M(x)| = \lim_{n\to\infty} |f_n(x)- f_M(x)| \leq 1 \implies \sup_{x\in X} |f(x)- f_M(x)| = \|f-f_M\|_\infty \leq 1.$$
Applying the triangle inequality we have
$$|f(x)| \leq |f(x)-f_M(x)| + | f_M(x)| \leq \|f-f_M\|_\infty + \|f_M\|_\infty <\infty\quad (x\in X).$$

Finally, we need to show ##f_n\xrightarrow[n\to\infty]{}f ## w.r.t. the metric.
Given ##\varepsilon >0 ##, we have ## M## with ##m,n>M \implies \|f_m - f_n\|_\infty \leq\varepsilon##. For every ##m>M ## we have
$$|f(x) - f_m(x)| = \lim_{n\to\infty }|f_n(x) - f_m(x)| \leq \varepsilon \quad (x\in X),$$
therefore ##\|f-f_m\|_\infty \leq\varepsilon## as desired.
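As a concrete illustration of the argument (my own sketch, not part of the proof): the partial sums of the exponential series are uniformly Cauchy on ##[0,1]## in the sup-norm, and their pointwise limit ##\exp## is recovered with the uniform error bound used above.

```python
import math

# f_n(x) = sum_{k=0}^n x^k / k!  on [0,1]; the sequence (f_n) is uniformly
# Cauchy and converges in the sup-norm to the bounded function exp.
xs = [i / 100 for i in range(101)]          # sample grid for the sup-norm

def f(n, x):
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

def sup_norm(g):
    return max(abs(g(x)) for x in xs)

# Taylor remainder: ||f_n - exp||_∞ <= e / (n+1)!  →  0, so f_n → exp uniformly
for n in range(3, 10):
    err = sup_norm(lambda x: f(n, x) - math.exp(x))
    assert err <= math.e / math.factorial(n + 1)
```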
Let ##a\in X## and define ##\varphi _a : X\to\mathcal B(X)## by setting, for each ##x\in X##,
$$\varphi _a(x) (y) = d(x,y) - d(a,y)\quad (y\in X)$$
We show that the mapping ##\varphi _a## is an isometry, i.e. ##\|\varphi _a(x) - \varphi _a(y)\|_\infty = d(x,y)##. Note that
$$\|\varphi _a(x) - \varphi _a(y)\|_\infty = \sup_{z\in X} | d(x,z) - d(a,z) - d(y,z) + d(a,z)| = \sup_{z\in X} |d(x,z) - d(y,z)| = d(x,y),$$
because ##|d(x,z) - d(y,z)| \leq d(x,y)## by the reverse triangle inequality (use symmetry of the metric if necessary), and the supremum is attained at ##z=x## or ##z=y##, where one of the distances vanishes.
Every isometry is injective. (##f(x) = f(y) \implies 0 = d_Y(f(x), f(y)) = d_X(x,y)##)
We've shown that ##X ## and ##\varphi _a[X]## are isometric (i.e the isometry ##\varphi _a : X\to \varphi _a[X]## is bijective). Since ##\mathcal B(X) ## is complete, the subspace ##\mbox{cl} (\varphi _a[X]) ## is also complete, thus ##\mbox{cl} (\varphi _a[X])## is the completion of ##X## (and of ##\varphi_a[X]## for that matter).
A completion is uniquely determined up to isometricity.
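The isometry property of ##\varphi_a## can be checked numerically on a finite metric space, where ##\mathcal B(X)## is just a space of finite vectors with the sup-norm. A Python sketch (all names are mine):

```python
import math, itertools

# Finite metric space: four points in the plane with the Euclidean metric.
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, 2.0), (-1.0, 1.0)]
d = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])

a = pts[0]
# φ_a(x) represented as the vector of values d(x,y) - d(a,y) over y ∈ X
phi = {x: [d(x, y) - d(a, y) for y in pts] for x in pts}

def supdist(f, g):
    return max(abs(u - v) for u, v in zip(f, g))

# Isometry: ||φ_a(x) - φ_a(y)||_∞ = d(x,y) for all pairs
for x, y in itertools.product(pts, pts):
    assert abs(supdist(phi[x], phi[y]) - d(x, y)) < 1e-12

# Boundedness: ||φ_a(x)||_∞ <= d(x,a), so each φ_a(x) lies in B(X)
for x in pts:
    assert max(abs(v) for v in phi[x]) <= d(x, a) + 1e-12
```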

Mentor
2021 Award
Epsilonics inbound!
Per definition of completeness we must show every Cauchy sequence ##f_n\in \mathcal B(X), n\in\mathbb N ## converges in ##\mathcal B(X)##.

Let ##x\in X##, then ##f_n(x)\in\mathbb R, n\in\mathbb N ## is a Cauchy sequence in ##\mathbb R##, therefore it converges, since ##\mathbb R ## is complete.
Define ##f : X\to \mathbb R,\quad x\mapsto \lim_{n\to\infty }f_n(x) ##. Clearly, ##f_n\xrightarrow[n\to\infty]{}f## pointwise.

We show now that ##f\in\mathcal B(X) ##. Pick ##M ## such that
$$m,n>M\implies \| f_n-f_m\|_\infty < 1.$$
Note that ##\|f-f_M\|_\infty \leq 1 ## since for every ##x\in X ##
$$|f(x) - f_M(x)| = |\lim _{n\to\infty} f_n(x) - f_M(x)| = \lim_{n\to\infty} |f_n(x)- f_M(x)| \leq 1 \implies \sup_{x\in X} |f(x)- f_M(x)| = \|f-f_M\|_\infty \leq 1.$$
Applying the triangle inequality we have
$$|f(x)| \leq |f(x)-f_M(x)| + | f_M(x)| \leq \|f-f_M\|_\infty + \|f_M\|_\infty <\infty\quad (x\in X).$$

Finally, we need to show ##f_n\xrightarrow[n\to\infty]{}f ## w.r.t. the metric.
Given ##\varepsilon >0 ##, we have ## M## with ##m,n>M \implies \|f_m - f_n\|_\infty \leq\varepsilon##. For every ##m>M ## we have
$$|f(x) - f_m(x)| = \lim_{n\to\infty }|f_n(x) - f_m(x)| \leq \varepsilon \quad (x\in X),$$
therefore ##\|f-f_m\|_\infty \leq\varepsilon## as desired.
Let ##a\in X## and define ##\varphi _a : X\to\mathcal B(X)## by setting, for each ##x\in X##,
$$\varphi _a(x) (y) = d(x,y) - d(a,y)\quad (y\in X)$$
We show that the mapping ##\varphi _a## is an isometry, i.e. ##\|\varphi _a(x) - \varphi _a(y)\|_\infty = d(x,y)##. Note that
$$\|\varphi _a(x) - \varphi _a(y)\|_\infty = \sup_{z\in X} | d(x,z) - d(a,z) - d(y,z) + d(a,z)| = \sup_{z\in X} |d(x,z) - d(y,z)| = d(x,y),$$
because ##|d(x,z) - d(y,z)| \leq d(x,y)## by the reverse triangle inequality (use symmetry of the metric if necessary), and the supremum is attained at ##z=x## or ##z=y##, where one of the distances vanishes.
Every isometry is injective. (##f(x) = f(y) \implies 0 = d_Y(f(x), f(y)) = d_X(x,y)##)
Yes, but why is ##\varphi_a(x) \in \mathcal{B}(X)\,##?
We've shown that ##X ## and ##\varphi _a[X]## are isometric (i.e the isometry ##\varphi _a : X\to \varphi _a[X]## is bijective). Since ##\mathcal B(X) ## is complete, the subspace ##\mbox{cl} (\varphi _a[X]) ## is also complete, thus ##\mbox{cl} (\varphi _a[X])## is the completion of ##X## (and of ##\varphi_a[X]## for that matter).
A completion is uniquely determined up to isometricity.
Well done. Except for this one missing inequality.

@fresh_42 Right, the triangle inequality
The norm of ##\varphi _a(x)## has a clear bound since ##a,x\in X## are fixed: by the reverse triangle inequality,
$$\| \varphi _a(x)\|_\infty = \sup_{y\in X} |d(x,y) - d(a,y)| \leq d(x,a)<\infty\,.$$

Gold Member
4a) We have $(\text{div }F)(x_1,x_2,x_3)=x_2^4+2x_2^2x_3^2+x_3^4+2x_1^2x_3^2+x_1^4+2x_1^2x_2^2=(x_1^2+x_2^2+x_3^2)^2$. So, by the divergence theorem, our integral is $\int_{B} (x_1^2+x_2^2+x_3^2)^2 dV$. Changing to spherical coordinates, the volume element becomes $r^2\sin\theta d\phi \ d\theta \ dr$, the integrand becomes $r^4$ and the bounds are $r\in[0,1],\theta\in [0,\pi],\phi\in[0,2\pi]$. Thus, our integral becomes
$$\int_0^1\int_0^\pi\int_0^{2\pi}r^6\sin(\theta)d\phi \ d\theta \ dr=2\pi\int_0^1\int_0^\pi r^6\sin(\theta)d\theta \ dr=4\pi\int_0^1r^6dr=4\pi/7.$$
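As a quick numerical sanity check of the value ##4\pi/7## (my own sketch, not part of the solution), a Monte Carlo estimate of the volume integral from the divergence theorem agrees with the exact answer:

```python
import math, random

# Monte Carlo estimate of ∫_B (x₁²+x₂²+x₃²)² dV over the unit ball,
# sampling uniformly in the cube [-1,1]³ (volume 8); exact value is 4π/7.
random.seed(0)
n, acc = 200_000, 0.0
for _ in range(n):
    x, y, z = (random.uniform(-1, 1) for _ in range(3))
    r2 = x * x + y * y + z * z
    if r2 <= 1.0:                 # keep only points inside the ball
        acc += r2 * r2            # integrand r⁴
estimate = 8.0 * acc / n          # cube volume times the sample mean
exact = 4 * math.pi / 7           # ≈ 1.7952
assert abs(estimate - exact) / exact < 0.02
```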

4b) Let $F_1,\ldots,F_n$ be the components of $F$ (so $F=(F_1,\ldots,F_n)$). Then,
$$\text{div }(hF)=\sum_{i=1}^n\frac{\partial (hF_i)}{\partial x_i}=\sum_{i=1}^n \left(h\frac{\partial F_i}{\partial x_i}+F_i\frac{\partial h}{\partial x_i}\right)=h\sum_{i=1}^n \frac{\partial F_i}{\partial x_i}+\sum_{i=1}^nF_i\frac{\partial h}{\partial x_i}=h\text{div }(F)+(\nabla h)\cdot F.$$
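The product rule in 4b can also be verified pointwise with finite differences; the following Python sketch uses an arbitrary example of my own choosing for ##h## and ##F##:

```python
import math

# Finite-difference check of div(hF) = h·div F + (∇h)·F at one point,
# for the (hypothetical) example h(x) = x₁x₂ + x₃², F = (sin x₁, x₁x₃, x₂²).
h = lambda x: x[0] * x[1] + x[2] ** 2
F = [lambda x: math.sin(x[0]),
     lambda x: x[0] * x[2],
     lambda x: x[1] ** 2]

def partial(f, x, i, eps=1e-6):
    """Central-difference approximation of ∂f/∂x_i at x."""
    xp, xm = list(x), list(x)
    xp[i] += eps
    xm[i] -= eps
    return (f(xp) - f(xm)) / (2 * eps)

def div(vec, x):
    return sum(partial(vec[i], x, i) for i in range(3))

p = [0.3, -1.2, 0.8]
hF = [lambda x, i=i: h(x) * F[i](x) for i in range(3)]

lhs = div(hF, p)
rhs = h(p) * div(F, p) + sum(partial(h, p, i) * F[i](p) for i in range(3))
assert abs(lhs - rhs) < 1e-6
```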

4c) Consider the vector field $f\nabla g$. Since $f,g$ are $C^2$, this product is $C^1(B^n,\mathbb{R}^n)$. Its divergence is $\text{div }(f\nabla g)=f\text{div }(\nabla g)+(\nabla f)\cdot(\nabla g)=f\Delta g+(\nabla f)\cdot(\nabla g)$, by part b. So, the divergence theorem gives
$$\int_{B^n} \left(f\Delta g+(\nabla f)\cdot(\nabla g)\right) dB^n=\int_{B^n} \text{div }(f\nabla g) dB^n=\int_{\partial B^n}(f\nabla g)\cdot N dS^{n-1}.$$

Rearranging gives the desired result.

7c) In general, given a group $G$ acting on a set $X$ and an element $x\in X$, the orbit-stabilizer theorem gives us a bijection between $G/Stab(x)$ and the orbit of $x$ (take the coset $g Stab(x)$ to $g\cdot x$). In our case, $G=O(n)$ and $X=\mathbb{R}^n$. Taking $x=(0,\ldots,0,1)$, the previous parts tell us that the orbit of $x$ is $S^{n-1}$ and that the stabilizer of $x$ is a subgroup of $O(n)$ that we may identify with $O(n-1)$. The orbit-stabilizer theorem then gives the needed bijection.
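The orbit-stabilizer counting behind this bijection is easy to see in a finite example; a minimal Python sketch (my own illustration):

```python
from itertools import permutations

# Orbit-stabilizer in the smallest example: S_3 acting on {0,1,2}.
# |orbit(x)| · |Stab(x)| = |G| mirrors the bijection O(n)/O(n-1) ↔ S^{n-1}.
G = list(permutations(range(3)))
x = 2
orbit = {g[x] for g in G}                 # all of {0,1,2}, by transitivity
stab = [g for g in G if g[x] == x]        # copy of S_2 inside S_3
assert len(orbit) * len(stab) == len(G)   # 3 · 2 = 6
```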

Mentor
2021 Award
7c) In general, given a group $G$ acting on a set $X$ and an element $x\in X$, the orbit-stabilizer theorem gives us a bijection between $G/Stab(x)$ and the orbit of $x$ (take the coset $g Stab(x)$ to $g\cdot x$). In our case, $G=O(n)$ and $X=\mathbb{R}^n$. Taking $x=(0,\ldots,0,1)$, the previous parts tell us that the orbit of $x$ is $S^{n-1}$ and that the stabilizer of $x$ is a subgroup of $O(n)$ that we may identify with $O(n-1)$. The orbit-stabilizer theorem then gives the needed bijection.
Correct, and I finally learnt the English name of the formula. Here it is simply called the "orbit formula".
In equations it is
\begin{align*}
O_n(\mathbb{R}) / O_{n-1}(\mathbb{R})&\cong O_n(\mathbb{R})/\operatorname{Stab}_{(0,0,\ldots ,1)}(O_n(\mathbb{R}))\\
&\cong O_n(\mathbb{R}).(0,0,\ldots ,1) \\
&=||(0,0,\ldots ,1)||\,\cdot \, \mathbb{S}^{n-1}\\
&= \mathbb{S}^{n-1}\\
&\cong SO_n(\mathbb{R}) / SO_{n-1}(\mathbb{R})
\end{align*}
in case someone wants to remember it, as all the objects involved (orthogonal groups, spheres, group actions) appear quite often in physics. The last isomorphism is most easily seen via the second isomorphism theorem, or directly, as only the reflections are cancelled.

(I'll check the rest tomorrow, it's too late to say something smart.)

Mentor
2021 Award
4a) We have $(\text{div }F)(x_1,x_2,x_3)=x_2^4+2x_2^2x_3^2+x_3^4+2x_1^2x_3^2+x_1^4+2x_1^2x_2^2=(x_1^2+x_2^2+x_3^2)^2$. So, by the divergence theorem, our integral is $\int_{B} (x_1^2+x_2^2+x_3^2)^2 dV$. Changing to spherical coordinates, the volume element becomes $r^2\sin\theta d\phi \ d\theta \ dr$, the integrand becomes $r^4$ and the bounds are $r\in[0,1],\theta\in [0,\pi],\phi\in[0,2\pi]$. Thus, our integral becomes
$$\int_0^1\int_0^\pi\int_0^{2\pi}r^6\sin(\theta)d\phi \ d\theta \ dr=2\pi\int_0^1\int_0^\pi r^6\sin(\theta)d\theta \ dr=4\pi\int_0^1r^6dr=4\pi/7.$$

4b) Let $F_1,\ldots,F_n$ be the components of $F$ (so $F=(F_1,\ldots,F_n)$). Then,
$$\text{div }(hF)=\sum_{i=1}^n\frac{\partial (hF_i)}{\partial x_i}=\sum_{i=1}^n \left(h\frac{\partial F_i}{\partial x_i}+F_i\frac{\partial h}{\partial x_i}\right)=h\sum_{i=1}^n \frac{\partial F_i}{\partial x_i}+\sum_{i=1}^nF_i\frac{\partial h}{\partial x_i}=h\text{div }(F)+(\nabla h)\cdot F.$$

4c) Consider the vector field $f\nabla g$. Since $f,g$ are $C^2$, this product is $C^1(B^n,\mathbb{R}^n)$. Its divergence is $\text{div }(f\nabla g)=f\text{div }(\nabla g)+(\nabla f)\cdot(\nabla g)=f\Delta g+(\nabla f)\cdot(\nabla g)$, by part b. So, the divergence theorem gives
$$\int_{B^n} \left(f\Delta g+(\nabla f)\cdot(\nabla g)\right) dB^n=\int_{B^n} \text{div }(f\nabla g) dB^n=\int_{\partial B^n}(f\nabla g)\cdot N dS^{n-1}.$$

Rearranging gives the desired result.
Correct, although you should have added that the ball has a smooth boundary, which is required for Gauß' divergence theorem, your first (unmentioned) step: ##\int_{\partial B}F\cdot N \,d\mathbb{S}^2 = \int_B \operatorname{div}F \,dV##

(Simply in the hope that we have readers who are not yet firm in it.)