# Math Challenge - September 2020

• Challenge
• Featured
Homework Helper
Can you elaborate your argument, especially why you can apply Cauchy's integral formula? How do you deal with the poles at ##z=\pm i\pi##, which are inside ##U_5(0)##?

Wouldn't it just be quicker to say "...by the residue theorem"? You always get me with a shorter proof, back at you!

So I haven't been getting how #7 parts a) and b) are related; the only way I can think to solve b) is to use CR-eqns, but I figure you're going to say "Why did you do all that calculus when you could've just applied Theorem X [which relates part a) to b)]?". So should I thumb through theorems and exercises until I find Theorem X? Or do I have your blessing to go ahead and write out the calculus?

Mentor
2022 Award
Wouldn't it just be quicker to say "...by the residue theorem"? You always get me with a shorter proof, back at you!
Cauchy's formula was correct, but you have to manage holomorphy. The trick is to find appropriate integration paths, i.e. regions plus Stokes. I haven't checked whether the residue theorem works. I guess counting correctly is of equivalent difficulty.
So I haven't been getting how #7 parts a) and b) are related; the only way I can think to solve b) is to use CR-eqns, but I figure you're going to say "Why did you do all that calculus when you could've just applied Theorem X [which relates part a) to b)]?". So should I thumb through theorems and exercises until I find Theorem X? Or do I have your blessing to go ahead and write out the calculus?
The connection is complex differentiability. CR is ok, but you can make life a lot easier if you do not directly consider ##f(z)##.

Gold Member
Problem #9
The matrix

##
A=
\begin{pmatrix}
5&-4&2\\
-4&7&-8\\
1&-4&6
\end{pmatrix}
##

has the obvious eigenvector:

##
\begin{pmatrix}
-2\\
0\\
1
\end{pmatrix}
##

with eigenvalue 4. We have:

##
\det
\begin{pmatrix}
5 - \lambda&-4&2\\
-4&7 - \lambda&-8\\
1&-4&6 - \lambda
\end{pmatrix}
##
##
= - \lambda^3 + 18 \lambda^2 -57 \lambda + 4 = 0
##

Write

##
(\lambda - 4) (-\lambda^2 + b \lambda -1) = - \lambda^3 + 18 \lambda^2 -57 \lambda + 4
##

This implies that ##b = 14##. Solving ##\lambda^2 - 14 \lambda + 1 = 0## gives the other two eigenvalues:

##
7 + 4 \sqrt{3} , \quad 7- 4 \sqrt{3} .
##
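As a quick numerical sanity check (just a sketch, assuming numpy is available; not part of the argument), the spectrum can be confirmed directly:

```python
import numpy as np

A = np.array([[5, -4, 2],
              [-4, 7, -8],
              [1, -4, 6]], dtype=float)

# Eigenvalues should be 4, 7 + 4*sqrt(3), 7 - 4*sqrt(3)
expected = sorted([4.0, 7 + 4*np.sqrt(3), 7 - 4*np.sqrt(3)])
computed = sorted(np.linalg.eigvals(A).real)
print(np.allclose(computed, expected))  # True
```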

We wish to find the remaining eigenvectors.

##
\begin{pmatrix}
5&-4&2\\
-4&7&-8\\
1&-4&6
\end{pmatrix}
\begin{pmatrix}
x\\
y\\
z
\end{pmatrix}
=
(7 + 4 \sqrt{3})
\begin{pmatrix}
x\\
y\\
z
\end{pmatrix}
##

or

##
\begin{pmatrix}
-2 - 4 \sqrt{3}&-4&2\\
-4&-4 \sqrt{3}&-8\\
1&-4&-1 - 4 \sqrt{3}
\end{pmatrix}
\begin{pmatrix}
x\\
y\\
z
\end{pmatrix}
= 0
##

Guess that

##
\begin{pmatrix}
x\\
y\\
z
\end{pmatrix}
=
\begin{pmatrix}
1\\
y\\
1
\end{pmatrix}
##

gives

##
\begin{pmatrix}
- 4 \sqrt{3}-4y\\
-12 -4 \sqrt{3}y\\
- 4 \sqrt{3} - 4y
\end{pmatrix}
= 0
\qquad \text{implying } y = - \sqrt{3}
##

so the eigenvector is

##
\begin{pmatrix}
1\\
- \sqrt{3}\\
1
\end{pmatrix} .
##

Similarly the other eigenvector is

##
\begin{pmatrix}
1\\
\sqrt{3}\\
1
\end{pmatrix} .
##
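Each claimed eigenpair can be checked the same way (again a numpy sketch, not part of the argument):

```python
import numpy as np

A = np.array([[5, -4, 2], [-4, 7, -8], [1, -4, 6]], dtype=float)
s3 = np.sqrt(3)

# The three claimed eigenpairs (lambda, v); each should satisfy A v = lambda v
pairs = [(4.0,      np.array([-2.0, 0.0, 1.0])),
         (7 + 4*s3, np.array([1.0, -s3, 1.0])),
         (7 - 4*s3, np.array([1.0,  s3, 1.0]))]

checks = [np.allclose(A @ v, lam * v) for lam, v in pairs]
print(checks)  # [True, True, True]
```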

Consider the matrix whose columns are the eigenvectors:

##
S=
\begin{pmatrix}
1&-2&1\\
-\sqrt{3}&0&\sqrt{3}\\
1&1&1
\end{pmatrix}
##

The eigenvectors are linearly-independent (as the eigenvalues are distinct) but they are not orthogonal, so the inverse matrix, ##S^{-1}##, won't be proportional to the transpose of ##S##. However, with a slight modification to the transpose we find the inverse matrix is

##
S^{-1} =
{1 \over 6}
\begin{pmatrix}
1&-\sqrt{3}&2\\
-2&0&2\\
1&\sqrt{3}&2
\end{pmatrix}
##
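The claimed inverse is easy to check numerically (a numpy sketch):

```python
import numpy as np

s3 = np.sqrt(3)
S = np.array([[1, -2, 1], [-s3, 0, s3], [1, 1, 1]])
Sinv = np.array([[1, -s3, 2], [-2, 0, 2], [1, s3, 2]]) / 6

# S times its claimed inverse should be the identity
print(np.allclose(S @ Sinv, np.eye(3)))  # True
```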

The diagonalised matrix is

##
D = S^{-1} A S =
\begin{pmatrix}
7 + 4 \sqrt{3}&0&0\\
0&4&0\\
0&0&7 - 4 \sqrt{3}
\end{pmatrix}
##

Note

##
(S \sqrt{D} S^{-1})^2 = S \sqrt{D} S^{-1} S \sqrt{D} S^{-1} = S D S^{-1} = A
##

Therefore

##
\sqrt{A} = S \sqrt{D} S^{-1}
##

So that

##
\sqrt{A} = S \sqrt{D} S^{-1} =
{1 \over 6}
\begin{pmatrix}
1&-2&1\\
-\sqrt{3}&0&\sqrt{3}\\
1&1&1
\end{pmatrix}
\begin{pmatrix}
\pm \sqrt{7 + 4 \sqrt{3}}&0&0\\
0&\pm 2&0\\
0&0&\pm \sqrt{7- 4 \sqrt{3}}
\end{pmatrix}
\begin{pmatrix}
1&-\sqrt{3}&2\\
-2&0&2\\
1&\sqrt{3}&2
\end{pmatrix}
##

Note there are ##2^3## distinct square-roots.
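The square-root identity can be spot-checked numerically for a few of the ##2^3## sign choices (a numpy sketch; the particular sign tuples below are arbitrary):

```python
import numpy as np

s3 = np.sqrt(3)
A = np.array([[5, -4, 2], [-4, 7, -8], [1, -4, 6]], dtype=float)
S = np.array([[1, -2, 1], [-s3, 0, s3], [1, 1, 1]])
Sinv = np.array([[1, -s3, 2], [-2, 0, 2], [1, s3, 2]]) / 6

# Any sign choice on the diagonal of sqrt(D) yields a square root of A
roots_ok = []
for signs in [(1, 1, 1), (1, -1, 1), (-1, 1, -1)]:
    sqrtD = np.diag([signs[0] * np.sqrt(7 + 4*s3),
                     signs[1] * 2.0,
                     signs[2] * np.sqrt(7 - 4*s3)])
    R = S @ sqrtD @ Sinv
    roots_ok.append(np.allclose(R @ R, A))
print(roots_ok)  # [True, True, True]
```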

We turn to the inverse matrix, ##A^{-1}##. Note

##
(S D^{-1} S^{-1}) A = (S D^{-1} S^{-1}) (S D S^{-1}) = S D^{-1} S^{-1} S D S^{-1} = S \mathbb{1} S^{-1} = \mathbb{1}
##

Therefore

##
A^{-1} = S D^{-1} S^{-1} =
{1 \over 6}
\begin{pmatrix}
1&-2&1\\
-\sqrt{3}&0&\sqrt{3}\\
1&1&1
\end{pmatrix}
\begin{pmatrix}
{1 \over 7 + 4 \sqrt{3}}&0&0\\
0&{1 \over 4}&0\\
0&0&{1 \over 7- 4 \sqrt{3}}
\end{pmatrix}
\begin{pmatrix}
1&-\sqrt{3}&2\\
-2&0&2\\
1&\sqrt{3}&2
\end{pmatrix}
##
##
= {1 \over 6}
\begin{pmatrix}
1&-2&1\\
-\sqrt{3}&0&\sqrt{3}\\
1&1&1
\end{pmatrix}
\begin{pmatrix}
7 - 4 \sqrt{3}&0&0\\
0&{1 \over 4}&0\\
0&0&7+ 4 \sqrt{3}
\end{pmatrix}
\begin{pmatrix}
1&-\sqrt{3}&2\\
-2&0&2\\
1&\sqrt{3}&2
\end{pmatrix}
##
##
= {1 \over 4}
\begin{pmatrix}
10&16&18\\
16&28&32\\
9&16&19
\end{pmatrix}
##
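The final result can be verified directly (a numpy sketch, not part of the solution):

```python
import numpy as np

A = np.array([[5, -4, 2], [-4, 7, -8], [1, -4, 6]], dtype=float)
# The claimed inverse (1/4) * adjugate
Ainv = np.array([[10, 16, 18], [16, 28, 32], [9, 16, 19]]) / 4

print(np.allclose(A @ Ainv, np.eye(3)))  # True
```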

fresh_42
Homework Helper
I'll circle back to 7 a) later, I think I got this...

So, don't consider ##f(z)## directly? I posit that the exponential part of ##f(z)## is entire so that leaves the square of the conjugate as the problem child: let ##h(z):=\bar{z} ^2## then for real ##u,v## define ##h(x+iy)=u(x,y)+i v(x,y)## so that ##u(x,y)=x^2-y^2## and ##v(x,y)=-2xy## so we may check the CR-eqns for complex differentiability: ##u_x=2x=-2x=v_y\implies x:=0## and ##u_y=-2y=-(-2y)=-v_x\implies y:=0##, hence ##f(z)## is complex differentiable at the origin.

Last edited:
Mentor
2022 Award
Can you give at least one square root explicitly? The hint that we look for a Cartan matrix of a simple Lie algebra narrows it down!

Mentor
2022 Award
I'll circle back to 7 a) later, I think I got this...

So, don't consider ##f(z)## directly? I posit that the exponential part of ##f(z)## is entire so that leaves the square of the conjugate as the problem child: let ##h(z):=\bar{z} ^2## then for real ##u,v## define ##h(x+iy)=u(x,y)+i v(x,y)## so that ##u(x,y)=x^2+y^2## and ##v(x,y)=-2xy## so we may check the CR-eqns for complex differentiability: ##u_x=2x=-2x=v_y\implies x:=0## and ##u_y=2y=-(-2y)=-v_x\implies y\in\mathbb{R}##, hence ##f(z)## is complex differentiable along the imaginary axis.


Bon Jovi, uhm I mean, slippery when wet, uhm I mean: calculation error.

Gold Member
Can you give at least one square root explicitly? The hint that we look for a Cartan matrix of a simple Lie algebra narrows it down!
Just noticed that ##(2 + \sqrt{3})^2 = 7 + 4 \sqrt{3}## and ##(2 - \sqrt{3})^2 = 7 - 4 \sqrt{3}##. Just a second.

Gold Member
##
\sqrt{A} = S \sqrt{D} S^{-1} =
{1 \over 6}
\begin{pmatrix}
1&-2&1\\
-\sqrt{3}&0&\sqrt{3}\\
1&1&1
\end{pmatrix}
\begin{pmatrix}
2 + \sqrt{3}&0&0\\
0&2&0\\
0&0&2- \sqrt{3}
\end{pmatrix}
\begin{pmatrix}
1&-\sqrt{3}&2\\
-2&0&2\\
1&\sqrt{3}&2
\end{pmatrix}
##
##
=
\begin{pmatrix}
2&-1&0\\
-1&2&-2\\
0&-1&2
\end{pmatrix}
##
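Since every entry here is an integer, this can be verified in exact arithmetic (a small numpy sketch):

```python
import numpy as np

# The claimed square root (the B3 Cartan matrix), squared, should give A exactly
C = np.array([[2, -1, 0], [-1, 2, -2], [0, -1, 2]])
A = np.array([[5, -4, 2], [-4, 7, -8], [1, -4, 6]])

print((C @ C == A).all())  # True
```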

fresh_42
Mentor
2022 Award
Yes. This is the Cartan matrix of the simple Lie algebra of type ##B_3## which is the ##21## dimensional orthogonal Lie algebra ##\mathfrak{o}(7,\mathbb{R})=\mathfrak{so}(7,\mathbb{R}).##

Homework Helper
Bon Jovi, uhm I mean, slippery when wet, uhm I mean: calculation error.

Oh duh, here:

So, don't consider ##f(z)## directly? I posit that the exponential part of ##f(z)## is entire so that leaves the square of the conjugate as the problem child: let ##h(z):=\bar{z} ^2## then for real ##u,v## define ##h(x+iy)=u(x,y)+i v(x,y)## so that ##u(x,y)=x^2-y^2## and ##v(x,y)=-2xy## so we may check the CR-eqns for complex differentiability: ##u_x=2x=-2x=v_y\implies x:=0## and ##u_y=-2y=-(-2y)=-v_x\implies y:=0##, hence ##f(z)## is complex differentiable at the origin.

Homework Helper
7. a)

$$\begin{gathered}\int_{|z|=5}\tfrac{e^z}{z^2+\pi ^2}\, dz=\tfrac{1}{2\pi i}\int_{|z|=5}e^z\left(\tfrac{1}{z-i\pi }-\tfrac{1}{z+i\pi }\right) dz \\ = \lim_{z\to i \pi}\left[ (z-i\pi )\cdot \tfrac{e^z}{z-i\pi}\right]-\lim_{z\to -i \pi}\left[ (z+i\pi )\cdot \tfrac{e^z}{z+i\pi}\right] \\ =e^{i\pi}-e^{-i\pi}=0 \\ \end{gathered}$$
by Residue Theorem
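The value can also be spot-checked by brute force, parametrising ##|z|=5## and summing (a numerical sketch assuming numpy; no substitute for the residue computation itself):

```python
import numpy as np

# Numerically integrate e^z/(z^2 + pi^2) over the circle |z| = 5;
# both poles z = +/- i*pi lie inside this contour
N = 20000
t = np.linspace(0, 2*np.pi, N, endpoint=False)
z = 5 * np.exp(1j * t)
dz_dt = 5j * np.exp(1j * t)
integral = np.sum(np.exp(z) / (z**2 + np.pi**2) * dz_dt) * (2*np.pi / N)

print(abs(integral) < 1e-8)  # True
```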

Mentor
2022 Award
7. a)

$$\begin{gathered}\int_{|z|=5}\tfrac{e^z}{z^2+\pi ^2}\, dz=\tfrac{1}{2\pi i}\int_{|z|=5}e^z\left(\tfrac{1}{z-i\pi }-\tfrac{1}{z+i\pi }\right) dz \\ = \lim_{z\to i \pi}\left[ (z-i\pi )\cdot \tfrac{e^z}{z-i\pi}\right]-\lim_{z\to -i \pi}\left[ (z+i\pi )\cdot \tfrac{e^z}{z+i\pi}\right] \\ =e^{i\pi}-e^{-i\pi}=0 \\ \end{gathered}$$
by Residue Theorem
Residue theorem: https://en.wikipedia.org/wiki/Residue_theorem
Residues of simple roots: https://en.wikipedia.org/wiki/Residue_(complex_analysis)#Simple_poles

Homework Helper
@fresh_42 I'd like to ask if #2 can hope to be solved with only basic modulo arithmetic, because I don't remember much number theory from the one NT-ish class I took, whose text was Concrete Mathematics (I do recall liking the text though). I did play with it and found some relation to binary sequences and multiplication by integers in base 2, but IDK what the heck I'm doing lol

Mentor
2022 Award
@fresh_42 I'd like to ask if #2 can hope to be solved with only basic modulo arithmetic, because I don't remember much number theory from the one NT-ish class I took, whose text was Concrete Mathematics (I do recall liking the text though). I did play with it and found some relation to binary sequences and multiplication by integers in base 2, but IDK what the heck I'm doing lol
A standard result of number theory involving Legendre symbols helps a lot. Whether this counts as basic modular arithmetic depends on whom you ask. It is not sophisticated, though.

Gold Member
Problem #4

a)

We have

\begin{align*}
a^2=
\dfrac{1}{2}\cdot
\begin{bmatrix}
-1&\sqrt{3}&0&0\\
-\sqrt{3}&-1&0&0\\
0&0&-1&-\sqrt{3}\\
0&0&\sqrt{3}&-1
\end{bmatrix}
\end{align*}

and

\begin{align*}
a^3=
\begin{bmatrix}
-1&0&0&0\\
0&-1&0&0\\
0&0&-1&0\\
0&0&0&-1
\end{bmatrix} .
\end{align*}

Implying:

\begin{align*}
a^6=
\begin{bmatrix}
1&0&0&0\\
0&1&0&0\\
0&0&1&0\\
0&0&0&1
\end{bmatrix}
\end{align*}

and so ##p=6##.

We have

\begin{align*}
b=\begin{bmatrix}
0&0&1&0\\
0&0&0&1\\
-1&0&0&0\\
0&-1&0&0
\end{bmatrix}
\end{align*}

and

\begin{align*}
b^2=\begin{bmatrix}
-1&0&0&0\\
0&-1&0&0\\
0&0&-1&0\\
0&0&0&-1
\end{bmatrix} .
\end{align*}

Implying

\begin{align*}
b^4=\begin{bmatrix}
1&0&0&0\\
0&1&0&0\\
0&0&1&0\\
0&0&0&1
\end{bmatrix}
\end{align*}

So that ##q=4##.

We have

\begin{align*}
aba & =
\dfrac{1}{4}\cdot
\begin{bmatrix}
1&\sqrt{3}&0&0\\
-\sqrt{3}&1&0&0\\
0&0&1&-\sqrt{3}\\
0&0&\sqrt{3}&1
\end{bmatrix}
\begin{bmatrix}
0&0&1&0\\
0&0&0&1\\
-1&0&0&0\\
0&-1&0&0
\end{bmatrix}
\begin{bmatrix}
1&\sqrt{3}&0&0\\
-\sqrt{3}&1&0&0\\
0&0&1&-\sqrt{3}\\
0&0&\sqrt{3}&1
\end{bmatrix}
\\
& =
\begin{bmatrix}
0&0&1&0\\
0&0&0&1\\
-1&0&0&0\\
0&-1&0&0
\end{bmatrix}
\\
& = b .
\end{align*}

So that ##r=1##. We obviously have ##a^3 = b^2##. In summary we have the presentation ##(6,4,1,3,2)##. That is,

##
G = \langle a,b | a^6 = b^4 = {\bf 1} , (aba) = b , a^3 = b^2 \rangle .
##
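The relations in the summary ##(6,4,1,3,2)## can be spot-checked numerically from the matrices themselves (a numpy sketch; the matrix for ##a## is read off as half the matrix appearing in the ##aba## computation below):

```python
import numpy as np

s3 = np.sqrt(3)
a = 0.5 * np.array([[1, s3, 0, 0], [-s3, 1, 0, 0],
                    [0, 0, 1, -s3], [0, 0, s3, 1]])
b = np.array([[0, 0, 1, 0], [0, 0, 0, 1],
              [-1, 0, 0, 0], [0, -1, 0, 0]], dtype=float)
I = np.eye(4)
mp = np.linalg.matrix_power

rels = [np.allclose(mp(a, 6), I),         # a^6 = 1
        np.allclose(mp(b, 4), I),         # b^4 = 1
        np.allclose(a @ b @ a, b),        # aba = b
        np.allclose(mp(a, 3), mp(b, 2))]  # a^3 = b^2
print(rels)  # [True, True, True, True]
```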

The group multiplication table (aided by using ##a^6 = b^2 = {\bf 1}## and ##ba^j = a^{6-j}b##) is:

\begin{align*}
\begin{array}{c|cccccc|cccccc}
& 1 & a & a^2 & a^3 & a^4 & a^5 & b & ab & a^2 b & a^3 b & a^4 b & a^5 b \\
\hline 1 & 1 & a & a^2 & a^3 & a^4 & a^5 & b & ab & a^2b & a^3b & a^4b & a^5b \\
\hline a & a & a^2 & a^3 & a^4 & a^5 & 1 & ab & a^2b & a^3b & a^4b & a^5b & b \\
\hline a^2 & a^2 & a^3 & a^4 & a^5 & 1 & a & a^2b & a^3b & a^4b & a^5b & b & ab \\
\hline a^3 & a^3 & a^4 & a^5 & 1 & a & a^2 & a^3b & a^4b & a^5b & b & ab & a^2b \\
\hline a^4 & a^4 & a^5 & 1 & a & a^2 & a^3 & a^4b & a^5b & b & ab & a^2b & a^3b \\
\hline a^5 & a^5 & 1 & a & a^2 & a^3 & a^4 & a^5b & b & ab & a^2b & a^3b & a^4b \\
\hline b & b & a^5b & a^4b & a^3b & a^2b & ab & 1 & a^5 & a^4 & a^3 & a^2 & a \\
\hline ab & ab & b & a^5b & a^4b & a^3b & a^2b & a & 1 & a^5 & a^4 & a^3 & a^2 \\
\hline a^2b & a^2b & ab & b & a^5b & a^4b & a^3b & a^2 & a & 1 & a^5 & a^4 & a^3 \\
\hline a^3b & a^3b & a^2b & ab & b & a^5b & a^4b & a^3 & a^2 & a & 1 & a^5 & a^4 \\
\hline a^4b & a^4b & a^3b & a^2b & ab & b & a^5b & a^4 & a^3 & a^2 & a & 1 & a^5 \\
\hline a^5b & a^5b & a^4b & a^3b & a^2b & ab & b & a^5 & a^4 & a^3 & a^2 & a & 1 \\
\end{array}
\end{align*}

We check associativity:

##
((a^i) (a^j)) (a^k) = (a^i) ((a^j) (a^k)) ;
##
##
((a^i) (a^j)) (a^k b) = (a^i) ((a^j) (a^k b)) ;
##
##
((a^i) (a^j)) (a^k b) = (a^{i + j}) (a^k b) = (a^{i + j + k} b) = (a^i) (a^{j + k} b) = (a^i) ((a^j) (a^k b)) ;
##
##
((a^i) (a^j b)) (a^k) = (a^{i + j} b) (a^k) = (a^{i + j - k} b) = (a^i) (a^{j - k} b) = (a^i) ((a^j b) (a^k)) ;
##
##
((a^i b) (a^j )) (a^k) = (a^{i - j} b) (a^k) = (a^{i - j - k} b) = (a^i b) (a^{j + k}) = (a^i b) ((a^j) (a^k)) ;
##
##
((a^i) (a^j b)) (a^k b) = (a^{i + j} b) (a^k b) = (a^{i + j - k}) = (a^i) (a^{j - k}) = (a^i) ((a^j b) (a^k b)) ;
##
##
((a^ib) (a^j)) (a^k b) = (a^{i - j} b) (a^k b) = (a^{i - j - k}) = (a^i b) (a^{j + k} b) = (a^i b) ((a^j) (a^k b)) ;
##
##
((a^i b) (a^j b)) (a^k) = (a^{i - j}) (a^k) = (a^{i - j + k}) = (a^i b) (a^{j - k} b) = (a^i b) ((a^j b) (a^k)) ;
##
##
((a^i b) (a^j b)) (a^k b) = (a^{i - j}) (a^{k} b) = (a^{i - j + k} b) = (a^i b) (a^{j - k}) = (a^i b) ((a^j b) (a^k b)).
##

It is indeed a non-abelian group as the table is not symmetric about the main diagonal.

The group ##G## is the subgroup of ##S_{12}## generated by the permutations:

##
a = (1 \; 2 \; 3 \; 4 \; 5 \; 6) (7 \; 8 \; 9 \; 10 \; 11 \; 12) \quad \text{and} \quad b = (1 \; 7 \; 4 \; 10) (2 \; 12 \; 5 \; 9) (3 \; 11 \; 6 \; 8) .
##

Note that, if ##\epsilon## denotes the identity permutation, then;

##
a^6 = (1 \; 2 \; 3 \; 4 \; 5 \; 6)^6 (7 \; 8 \; 9 \; 10 \; 11 \; 12)^6 = \epsilon
##

and

##
b^4 = (1 \; 7 \; 4 \; 10)^4 (2 \; 12 \; 5 \; 9)^4 (3 \; 11 \; 6 \; 8)^4 = \epsilon .
##

Writing

\begin{align*}
aba & = (1 \; 2 \; 3 \; 4 \; 5 \; 6) (7 \; 8 \; 9 \; 10 \; 11 \; 12) (1 \; 7 \; 4 \; 10) (2 \; 12 \; 5 \; 9) (3 \; 11 \; 6 \; 8) (1 \; 2 \; 3 \; 4 \; 5 \; 6) (7 \; 8 \; 9 \; 10 \; 11 \; 12)
\end{align*}

we read off that ##aba (1) = 7##, ##aba (2) = 12##, ##aba (3) = 11##, ##aba (4) = 10##, ##aba (5) = 9##, ##aba (6) = 8##, ##aba (7) = 4##, ##aba (8) = 3##, ##aba (9) = 2##, ##aba (10) = 1##, ##aba (11) = 6##, and ##aba (12) = 5##.

We read off that ##b (1) = 7##, ##b (2) = 12##, ##b (3) = 11##, ##b (4) = 10##, ##b (5) = 9##, ##b (6) = 8##, ##b (7) = 4##, ##b (8) = 3##, ##b (9) = 2##, ##b (10) = 1##, ##b (11) = 6##, ##b (12) = 5##.

Thus:

##
aba = b .
##

b) It is the dihedral group ##D_{12}##. The elements of the dihedral group ##D_{2n}## can be thought of as the symmetry operations on a regular ##n##-gon. An ##n##-gon has ##n## rotational symmetries, through multiples of ##360^\circ / n##, and ##n## reflections about lines of symmetry, so ##2n## in total. If ##a## corresponds to a ##360^\circ / 6## rotation and ##b## corresponds to a reflection about one line of symmetry, these can obviously be used to generate the 12 symmetry operations on a regular ##6##-gon. It is also easy to verify that ##(aba) = b## and ##a^6 = b^2##. Therefore, we have that:

##
H = \langle a,b | a^6 = b^2 = {\bf 1} , (aba) = b \rangle .
##

c) The group ##L## is the alternating group ##A_4## of degree ##4## (as ##|A_n| = n!/2## for ##n \geq 2##, we have ##|A_4| = 12##), also known as the tetrahedral group. A regular tetrahedron has 12 rotational symmetries: the identity ##{\bf 1}##, three ##180^\circ## rotations, four ##120^\circ## rotations, and four ##240^\circ## rotations (the squares of the ##120^\circ## ones), so twelve elements in total.

Every element of ##A_4##, by definition, is a product of an even number of transpositions (the identity element is taken to be even). Any such product can be expressed as a product of 3-cycles because:

##
(ij)(ik) = (ikj) , \qquad (ij)(kl) = (ikj)(ikl) \quad \text{where } i,j,k,l \text{ are all distinct.}
##

If one vertex of a tetrahedron is fixed, the other three can only be rotated cyclically, thus the tetrahedral group contains all possible 3-cycles, hence it contains ##A_4##. But since its order is the same as that of ##A_4##, the tetrahedral group must be equal to ##A_4## (isomorphic, that is).

We are motivated to consider the group ##M = \langle a,b \rangle \leq A_4## where ##a^2 = b^3 = {\bf 1}## and ##ab \not = ba##. We see that ##M## has at least 6 distinct elements: ##{\bf 1}##, ##a##, ##b##, ##b^2##, ##ab##, ##ba##. By Lagrange's theorem ##|M|## divides ##12##, and ##A_4## has no subgroup of order ##6##, so ##M## must be equal to the whole group ##A_4##. It is easy to see that ##ab## corresponds to a ##120^\circ## rotation distinct from ##b##. We have a presentation:

##
L = \langle a,b | a^2 = b^3 = {\bf 1} , (ab)^3 = {\bf 1} \rangle.
##

If ##b## is a ##120^\circ## counterclockwise rotation that leaves a certain vertex invariant and ##a## is a counterclockwise ##180^\circ## rotation, then ##ba## is a ##120^\circ## counterclockwise rotation distinct from ##b##, ##ab## is a ##120^\circ## counterclockwise rotation distinct from ##b## and ##ba##, and ##aba## is the fourth ##120^\circ## counterclockwise rotation. Combining a ##120^\circ## rotation that leaves one vertex invariant with a ##240^\circ## rotation that leaves a different vertex invariant gives a ##180^\circ## rotation. This way all rotations are generated.

Is it possible to have a presentation where ##a## and ##b## have different meanings and ##aba = b^r##?

Last edited:
fresh_42
Mentor
2022 Award
Is it possible to have a presentation where ##a## and ##b## have different meanings and ##aba = b^r##?
##a=(123),b=(234)##

Gold Member
Problem #6

The extended complex plane and the Riemann sphere

The Riemann sphere is the geometric object for representing complex numbers in a way that treats infinity on par with other complex values.

We first define the mapping between the complex plane ##\mathbb{C}## and the Riemann sphere, ##S##, with its north pole removed, i.e. ##S \setminus \{ N \}##. See figure below. The Riemann sphere is a sphere of radius ##1## whose centre is located at the origin of the complex plane. Stereographic projection sets up a one-to-one correspondence between points of ##\mathbb{C}## and points of ##S \setminus \{ N \}##. Geometrically, the line from any point ##x## of ##\mathbb{C}## to ##N## intersects the sphere in precisely one point ##x'##, and for every point ##x'## of ##S \setminus \{ N \}##, the line through ##N## and ##x'## meets the complex plane ##\mathbb{C}## in a unique point ##x##.

The north pole of ##S## provides a natural concrete representative for “the point at infinity”: as ##x## moves off to infinity, the line from ##x## to ##N## becomes parallel to the complex plane.

Proving ##\chi(x,y)## is a metric:

It is easy to show that

##
\chi (x,y) = 0 \text{ iff } x=y
##

and

##
\chi (x,y) = \chi (y,x) .
##

We now turn to the triangle inequality. We must show:

\begin{align*}
\chi (x,z) \leq \chi(x,y) + \chi (y,z) \quad \text{for } x,y,z \in \mathbb{C}
\\
\chi (x,y) \leq \chi(x,\infty) + \chi (\infty,y) \quad \text{for } x,y \in \mathbb{C}
\\
\chi (x,\infty) \leq \chi(x,y) + \chi (y,\infty) \quad \text{for } x,y \in \mathbb{C}
\end{align*}

The first case is ##x,y,z \not= \infty##:

We will prove that

\begin{align*}
\frac{2 \| x - y \|_2}{\sqrt{1 + \| x \|_2^2} \sqrt{1 + \| y \|_2^2}}
\end{align*}

is equal to the Euclidean distance between the pair of points ##x′## and ##y′## on the Riemann sphere which correspond to the pair of points ##x## and ##y## in the complex plane. It will then be obvious why the triangle inequality holds: this is because ##2 \chi (x,y)##, ##2 \chi (y,z)##, and ##2 \chi (x,z)## are the side lengths of a plane triangle.

Let us proceed to prove that the above formula is equal to the distance between the two points ##x'## and ##y'## on the Riemann sphere. See the figure above. We introduce Euclidean coordinates with origin at the centre of the sphere: ##X## and ##Y## are parallel to the real and imaginary axes respectively, with ##Z## as the vertical coordinate.

The complex coordinate, denoted ##x## with real part ##x_R## and imaginary part ##x_I##, labels the point ##P′## in the complex plane.

We wish to transform from complex coordinates for point ##P′## to Euclidean coordinates for the corresponding point ##P## on the sphere. Note that the triangles ##\bigtriangleup NPS## and ##\bigtriangleup NOP'## are similar and so we have:

\begin{align*}
\frac{PN}{NS} = \frac{NS/2}{P'N} .
\end{align*}

We have

\begin{align*}
\frac{X}{x_R} = \frac{PQ}{P'S} = \frac{PN}{P'N} = \frac{1}{2} \frac{(NS)^2}{(P'N)^2} = \frac{2}{(P'N)^2}
\end{align*}

(as ##NS = 2##) so

\begin{align*}
X = \frac{2}{(P'N)^2} x_R .
\end{align*}

We obviously have:

\begin{align*}
(P'N)^2 = x_R^2 + x_I^2 + 1^2
\end{align*}

we obtain

\begin{align*}
X = \frac{2}{x_R^2 + x_I^2 + 1} x_R .
\end{align*}

Similarly,

\begin{align*}
Y = \frac{2}{x_R^2 + x_I^2 + 1} x_I .
\end{align*}

Using that ##Z^2=1−X^2−Y^2## together with the above equations for ##X## and ##Y##, we obtain

\begin{align*}
Z = \frac{x_R^2 + x_I^2 - 1}{x_R^2 + x_I^2 + 1} .
\end{align*}

(note this is negative when ##x_R^2+x_I^2 < 1## in accordance with the diagram). So the coordinates of the point ##P## on the sphere are

\begin{align*}
\frac{(2 x_R , 2 x_I , (x_R^2 + x_I^2 - 1))}{x_R^2 + x_I^2 + 1}
\end{align*}

or

\begin{align*}
\frac{\left( x + \overline{x} , \; (x - \overline{x})/i , \; \| x \|_2^2 - 1 \right)}{\| x \|_2^2 + 1}
\end{align*}

Consider two complex numbers ##x## and ##y##. Take the dot product of the vectors of the corresponding points on the sphere:

\begin{align*}
x' \cdot y ' = \cos \theta & = \frac{ (x + \overline{x}) (y + \overline{y}) - (x - \overline{x}) (y - \overline{y}) + (\| x \|_2^2 - 1) (\| y \|_2^2 - 1) }{(1 + \| x \|_2^2) (1 + \| y \|_2^2)}
\\
& = \frac{ 2 x \overline{y} + 2 \overline{x} y + \| x \|_2^2 \| y \|_2^2 - \| x \|_2^2 - \| y \|_2^2 + 1 }{(1 + \| x \|_2^2) (1 + \| y \|_2^2)}
\\
& = 1 - \frac{2 \| x - y \|_2^2}{(1 + \| x \|_2^2) (1 + \| y \|_2^2)}
\end{align*}

By the cosine rule ##d^2 = 2 - 2 \cos \theta##

\begin{align*}
d (x,y) = 2 \chi (x,y) = \sqrt{2 - 2 \cos \theta} = \frac{2 \| x - y \|_2}{\sqrt{1 + \| x \|_2^2} \sqrt{1 + \| y \|_2^2}}
\end{align*}
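The identification of ##2\chi## with the chord length can be spot-checked numerically (a numpy sketch with randomly sampled points):

```python
import numpy as np

def proj(x):
    # Stereographic image of complex x on the unit sphere (north pole = infinity)
    n = abs(x)**2 + 1
    return np.array([2*x.real, 2*x.imag, abs(x)**2 - 1]) / n

def chord(x, y):
    # The claimed chordal distance 2*chi(x, y)
    return 2*abs(x - y) / np.sqrt((1 + abs(x)**2) * (1 + abs(y)**2))

rng = np.random.default_rng(0)
ok = True
for _ in range(100):
    x, y = [complex(*rng.normal(size=2)) for _ in range(2)]
    ok &= np.isclose(np.linalg.norm(proj(x) - proj(y)), chord(x, y))
print(ok)  # True
```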

Three distinct complex numbers define a (nondegenerate) triangle where the vertices of the triangle are on the sphere. Seeing as ##2 \chi (x,y)## is the Euclidean length of a side of such a triangle, we have the usual triangle inequality:

\begin{align*}
\chi (x,z) \leq \chi (x,y) + \chi (y,z) .
\end{align*}

We now turn to the cases:

\begin{align*}
\chi (x,y) \leq \chi(x,\infty) + \chi (\infty,y) \quad \text{for } x,y \in \mathbb{C}
\\
\chi (x,\infty) \leq \chi(x,y) + \chi (y,\infty) \quad \text{for } x,y \in \mathbb{C}
\end{align*}

We prove that

\begin{align*}
\frac{2}{\sqrt{1 + \| x \|_2^2}}
\end{align*}

is the Euclidean length of the line from the north pole of the sphere to the point on the sphere corresponding to the complex number ##x##. The components of the vector pointing to the north pole are ##(0,0,1)##. Taking the dot product of the vectors of the corresponding points on the sphere gives

\begin{align*}
\cos \theta = \frac{\| x \|_2^2 - 1}{\| x \|_2^2 + 1} .
\end{align*}

By the cosine rule ##d^2 = 2 - 2 \cos \theta##

\begin{align*}
d (x , \infty) = 2 \chi (x, \infty) = \sqrt{2 - 2 \cos \theta} = \frac{2}{\sqrt{1 + \| x \|_2^2}} .
\end{align*}

Two distinct complex numbers ##x,y \not= \infty## and the point ##\infty## define a (nondegenerate) triangle whose vertices are on the sphere. Seeing as ##2 \chi (x,\infty)## is the Euclidean length of the side of such a triangle between the north pole and the point on the sphere corresponding to the complex number ##x##, we have the usual triangle inequalities

\begin{align*}
\chi (x,y) \leq \chi(x,\infty) + \chi (\infty,y) \quad \text{for } x,y \in \mathbb{C}
\end{align*}

and

\begin{align*}
\chi (x,\infty) \leq \chi(x,y) + \chi (y,\infty) \quad \text{for } x,y \in \mathbb{C} .
\end{align*}

Thus we have proved that ##\chi (x,y)## defines a metric!
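A brute-force spot-check of the triangle inequality on ##\mathbb{C}_\infty## (a numpy sketch; `None` stands in for the point at infinity):

```python
import numpy as np

def chi(x, y):
    # Chordal metric on C_infinity; None represents the point at infinity
    if x is None and y is None:
        return 0.0
    if x is None:
        x, y = y, x
    if y is None:
        return 1 / np.sqrt(1 + abs(x)**2)
    return abs(x - y) / np.sqrt((1 + abs(x)**2) * (1 + abs(y)**2))

rng = np.random.default_rng(1)
pts = [None] + [complex(*rng.normal(size=2, scale=3)) for _ in range(20)]
ok = all(chi(p, r) <= chi(p, q) + chi(q, r) + 1e-12
         for p in pts for q in pts for r in pts)
print(ok)  # True
```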

Proving ##\mathcal{C} = (\mathbb{C}_\infty , \chi)## is a compact topological space:

A topology is induced by the metric ##d## on a metric space ##X##. The open sets are all subsets that can be realized as the unions of open balls:

\begin{align*}
B_r (x,d) = \{ y \in X : d (x,y) < r \}
\end{align*}

where ##x \in X## and ##r > 0##.

Note. We have two metrics on ##\mathbb{C}##; namely the metric induced by the modulus, ##d'(x,y) = |x-y|##, which we are denoting ##\| x - y \|_2##, and the metric ##\chi (x,y)## defined on ##\mathbb{C}_\infty## (treating ##\mathbb{C}## as a metric subspace of ##\mathbb{C}_\infty##).

We consider the topology induced by the metric ##\tilde{\chi} := 2 \chi## on the space ##\mathbb{C}_\infty##. This is exactly the same topology as that induced by ##\chi##, as they both have exactly the same collection of open balls. The open sets, as mentioned above, are all subsets that can be realized as the unions of open balls:

\begin{align*}
B_r (x,2\chi) = \{ y \in \mathbb{C}_\infty : 2\chi (x,y) < r \}
\end{align*}

where ##x \in \mathbb{C}_\infty## and ##r > 0##.

We first elucidate the topology induced by ##\tilde{\chi}## on ##\mathbb{C}##. The balls ##B_r (x,2\chi)## have a nice interpretation. Before we give this interpretation we state a result. Each circle on the sphere (not intersecting the north-pole) is mapped to a distinct circle in the complex plane. Conversely, each circle in the complex plane maps to a distinct circle on the sphere (not intersecting the north-pole). This can be proved by considering the intersection of a plane

\begin{align*}
aX + bY + cZ = d
\end{align*}

with the Riemann sphere (not intersecting the north-pole). One finds that circles on the Riemann sphere (not intersecting the north-pole) are mapped to circles in ##\mathbb{C}## (circles that intersect the north-pole are mapped to lines instead of circles). This is all proved in the spoiler below. In the following we only consider circles that don't intersect the north-pole.

Let us return to the interpretation of the balls ##B_r (x,2\chi)##. The boundary of the ball ##B_r (x,2\chi)## is an equal distance away from the point ##x## with respect to the metric ##\tilde{\chi} (x,y)## in ##\mathbb{C}##. This means that the image of this boundary on the sphere must be a circle with centre ##x'## (##x'## being the point on the sphere corresponding to ##x##), because ##\tilde{\chi} (x,y)## has the interpretation of the chord distance. This implies that the boundary of the ball ##B_r (x,2\chi)## in the complex plane is a circle, and so the ball is an open disc (note however that the "centre" of the ball is not an equal distance away from the boundary with respect to the usual metric ##\| x - y \|_2## on ##\mathbb{C}##, but the distance from the centre to the boundary with respect to ##\| x - y \|_2## is always non-zero).

We write the equation of a plane as ##aX+bY+cZ=d##, where we take ##a^2+b^2+c^2=1##. Recall that the vector ##\langle a,b,c \rangle## is perpendicular to the plane, so ##|d|## is the distance from the origin to the plane. The plane therefore intersects the Riemann sphere exactly when ##-1 \leq d \leq 1##.

We substitute the formulas for ##X##, ##Y##, and ##Z## into ##aX+bY+cZ=d##:

\begin{align*}
a \frac{2x_R}{x_R^2 + x_I^2 + 1} + b \frac{2x_I}{x_R^2 + x_I^2 + 1} + c \frac{x_R^2 + x_I^2 - 1}{x_R^2 + x_I^2 + 1} = d
\end{align*}

which becomes:

\begin{align*}
(d - c) (x_R^2 + x_I^2) - 2 a x_R - 2 b x_I + (d + c) = 0
\end{align*}

which corresponds to a circle in the complex plane when ##d \not= c## and a line when ##d = c##. When ##c = d##, the point ##\langle X,Y,Z \rangle = \langle 0,0,1 \rangle## lies on the plane; in other words, when ##c=d## the plane passes through the north pole.

Does every circle in the complex plane map to a circle on the Riemann sphere? The following argument uses the key fact that a circle is uniquely defined by three noncollinear points. Take any circle in the complex plane and choose three distinct points on the circle ##x,y,## and ##z##. These uniquely correspond to three noncollinear points ##x',y'##, and ##z'## on the Riemann sphere; these in turn define a plane, and the intersection of this plane with the Riemann sphere defines a circle on the sphere. This circle maps to a circle in the complex plane that passes through the points ##x,y,## and ##z##, but seeing as three points uniquely determine a circle, this circle must coincide with the original circle. This establishes that every circle in the complex plane maps to a circle on the Riemann sphere.

Let us gain some intuition as to what happens when we vary ##d##, with ##a, b,## and ##c## fixed. Write the equation of a circle in the complex plane, ##|x - C|^2 = r^2##, in terms of the real and imaginary parts of its centre ##(C_R, C_I)## and its radius ##r##:

\begin{align*}
x_R^2 + x_I^2 - 2 C_R x_R - 2 C_I x_I + C_R^2 + C_I^2 = r^2
\end{align*}

Comparing coefficients with the earlier equation (divided through by ##d - c##) gives:

\begin{align*}
C_R = \frac{a}{d-c} , \quad C_I = \frac{b}{d-c} , \quad r^2 = C_R^2 + C_I^2 + \frac{c+d}{c-d} .
\end{align*}

Substituting the expressions for ##C_R## and ##C_I## and using ##a^2 + b^2 + c^2 = 1##:

\begin{align*}
r^2 & = \frac{a^2 + b^2 + c^2 - d^2}{(c - d)^2}
\\
& = \frac{1 - d^2}{(c - d)^2}
\end{align*}
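These closed forms can be verified numerically as well (an illustrative sketch, with the plane and projection set up as before): the projected points of a spherical circle should all lie at distance ##r = \sqrt{(1-d^2)/(c-d)^2}## from the claimed centre ##C = (a/(d-c),\, b/(d-c))##.

```python
import numpy as np

# Check C_R = a/(d-c), C_I = b/(d-c), r^2 = (1 - d^2)/(c - d)^2 by
# projecting sample points of a spherical circle and measuring their
# distance to the claimed centre.
a, b, c = np.array([2.0, -1.0, 2.0]) / 3.0   # unit normal, chosen arbitrarily
d = -0.3

n = np.array([a, b, c])
u = np.cross(n, [0.0, 0.0, 1.0]); u /= np.linalg.norm(u)
v = np.cross(n, u)
rho = np.sqrt(1.0 - d * d)                   # radius of the spherical circle

C = complex(a / (d - c), b / (d - c))        # claimed centre in C
r = np.sqrt((1.0 - d * d) / (c - d) ** 2)    # claimed radius

dists = []
for t in np.linspace(0.0, 2 * np.pi, 40, endpoint=False):
    X, Y, Z = d * n + rho * (np.cos(t) * u + np.sin(t) * v)
    x = complex(X, Y) / (1.0 - Z)            # stereographic projection
    dists.append(abs(x - C))
```

All distances agree with ##r## to floating-point accuracy.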

which is zero when ##d = \pm 1## (as would be expected, since in this case the plane is tangent to the sphere). Now keep ##c## fixed and allow ##d## to vary. Taking the derivative with respect to ##d## gives:

\begin{align*}
\frac{d r^2}{dd} = \frac{2 (1 - cd)}{(c - d)^3}
\end{align*}

First take ##c > 0##; then the derivative is negative for ##c < d \leq 1##. Thus ##r^2## increases monotonically as you reduce ##d## from ##1##, and becomes arbitrarily large as ##d## approaches ##c##. For ##-1 \leq d < c## the derivative is positive. Thus ##r^2## increases monotonically as you increase ##d## from ##-1##, and becomes arbitrarily large as ##d## approaches ##c##.

Now take ##c = 0##; then the derivative is negative for ##0 < d \leq 1##, so again ##r^2## increases monotonically as you reduce ##d## from ##1## and becomes arbitrarily large as ##d## approaches ##0##. For ##-1 \leq d < 0## the derivative is positive, so ##r^2## increases monotonically as you increase ##d## from ##-1## and becomes arbitrarily large as ##d## approaches ##0##. (The case ##c < 0## follows from the case ##c > 0## by replacing ##(c,d)## with ##(-c,-d)##, which leaves ##r^2## unchanged.)
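A finite-difference sketch confirms the derivative formula away from the pole at ##d = c## (values of ##c## and the grid are arbitrary choices for illustration):

```python
import numpy as np

# Compare d(r^2)/dd = 2(1 - c d)/(c - d)^3 against a central difference
# of r^2 = (1 - d^2)/(c - d)^2.
def r2(c, d):
    return (1.0 - d * d) / (c - d) ** 2

def dr2(c, d):
    return 2.0 * (1.0 - c * d) / (c - d) ** 3

errs = []
h = 1e-6
for c in (-0.5, 0.0, 0.5):
    for d in np.linspace(-0.95, 0.95, 20):
        if abs(c - d) < 0.1:
            continue  # r^2 blows up as d -> c; skip the neighbourhood
        num = (r2(c, d + h) - r2(c, d - h)) / (2 * h)  # central difference
        errs.append(abs(num - dr2(c, d)))
```

The maximum discrepancy is of the order of the truncation error of the central difference, i.e. negligibly small.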

As you vary ##d##, the centre of the circle in the complex plane moves according to

\begin{align*}
C_R = \frac{a}{d-c} , \quad C_I = \frac{b}{d-c} .
\end{align*}

Above we established a bijection between circles on the sphere that do not pass through the north pole and circles in the complex plane. We do not bother to write down the expressions for ##(a,b,c,d)## in terms of ##(C_R,C_I,r)##.

In the spoiler we established a bijection between circles on the sphere that do not pass through the north pole and circles in the complex plane. (We also found that circles coming arbitrarily close to the north pole correspond to arbitrarily large circles in the complex plane: the radius diverges as ##d \rightarrow c##.)

So we have elucidated the topology induced by ##\tilde\chi## on ##\mathbb{C}##. It actually coincides with the usual topology on ##\mathbb{C}##, as we will now explain.

Definition: Two metrics ##d_1## and ##d_2## defined on a space ##X## are called equivalent if they induce the same metric topology on ##X##. This is the case if and only if, for every point ##x## of ##X##, every ball with centre at ##x## defined with respect to ##d_1##:

\begin{align*}
B_{r_1} (x,d_1) = \{ y \in X : d_1 (x,y) < r_1 \}
\end{align*}

contains a ball with centre ##x## with respect to ##d_2##:

\begin{align*}
B_{r_2} (x,d_2) = \{ y \in X : d_2 (x,y) < r_2 \}
\end{align*}

and conversely.

The two metrics ##\tilde{\chi} (x,y)## and ##\| x - y \|_2## on ##\mathbb{C}## are equivalent: every ##\tilde{\chi}##-ball is an open disc, as shown above, so each ball in one metric contains a ball in the other. As such, they induce the same metric topology on ##\mathbb{C}##, and so the two metrics ##\chi (x,y)## and ##\| x - y \|_2## induce the same metric topology on ##\mathbb{C}##.
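As a concrete illustration (a sketch only, assuming the standard chordal-metric formula ##\chi(x,y) = 2|x-y|/\sqrt{(1+|x|^2)(1+|y|^2)}## from earlier in the thread), the two-sided comparison ##\chi \leq 2\|x-y\|_2## everywhere and ##\|x-y\|_2 \leq K\chi## on a bounded set, which underlies the equivalence, can be spot-checked numerically:

```python
import numpy as np

# Assumed chordal metric on C (standard formula; defined earlier in the
# thread, not in this excerpt).
def chi(x, y):
    return 2 * abs(x - y) / np.sqrt((1 + abs(x) ** 2) * (1 + abs(y) ** 2))

rng = np.random.default_rng(0)
R = 10.0                    # bounded region |x| <= R
K = (1 + R ** 2) / 2        # since sqrt((1+|x|^2)(1+|y|^2)) <= 1 + R^2 there

# Random sample points with |x| <= R
pts = (rng.uniform(-R, R, 200) + 1j * rng.uniform(-R, R, 200)) / np.sqrt(2)
ok = all(
    chi(x, y) <= 2 * abs(x - y) + 1e-12 and abs(x - y) <= K * chi(x, y) + 1e-12
    for x in pts[:20] for y in pts[20:40]
)
```

Both inequalities hold on the sample, as the elementary bounds on the denominator predict.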

We have a topology on ##\mathbb{C}_\infty## defined by the open sets:

(a) the open sets of ##\mathbb{C}##, regarded as subset of ##\mathbb{C}_\infty##;
(b) the complements in ##\mathbb{C}_\infty## of closed, bounded subsets of ##\mathbb{C}##;
(c) the full space ##\mathbb{C}_\infty##.

Recall that, by the Heine-Borel theorem, a subset of a Euclidean space (coordinate space together with the usual norm, here ##\| \cdot \|_2##) is compact if and only if it is closed and bounded. The same holds for ##(\mathbb{C}, \| x - y \|_2)##, which is basically ##\mathbb{R}^2## with the usual norm ##\| x \|^2 = x_R^2 + x_I^2##.

If ##\mathbb{R}^n## is not equipped with the usual metric, it may fail to have the Heine-Borel property. A moment ago we established that the relative topology on ##\mathbb{C}## (induced by the metric topology of ##(\mathbb{C}_\infty , \chi)##) coincides with the topology of ##(\mathbb{C} , \| x - y \|_2)##. Therefore we can apply the Heine-Borel theorem, and condition (b) can be replaced with:

(b)' the complements in ##\mathbb{C}_\infty## of compact subsets of ##\mathbb{C}##.

We now prove that ##\mathbb{C}_\infty## is compact (every open cover has a finite subcover). Let ##\{ U_i \}## be an open cover of ##\mathbb{C}_\infty##.

If the open cover contains ##\mathbb{C}_\infty## itself, then ##\{ U_ i \}## immediately has a finite subcover, namely ##\{ \mathbb{C}_\infty \}##.

We now assume that each ##U_i## is of type (a) or type (b)'. To be a cover, at least one ##U_i## must contain ##\infty##, and this set must be of type (b)'. Denote it ##U_0##. Its complement ##U_0'## is a compact subspace of ##\mathbb{C}## contained in the union of the open subsets of ##\mathbb{C}## of the form ##U_i \cap \mathbb{C}##. As ##U_0'## is compact, it is contained in a finite subclass of these sets, say

##
\{ U_1 \cap \mathbb{C} , U_2 \cap \mathbb{C} , \dots , U_n \cap \mathbb{C} \}
##

It is now easy to see that ##\{ U_0 , U_1 , U_2 , \dots , U_n \}## is a finite subcover of our initial open cover of ##\mathbb{C}_\infty##, and so ##\mathbb{C}_\infty## is compact.

fresh_42
Mentor
2022 Award
Problem #6

The extended complex plane and the Riemann sphere

The Riemann sphere is the geometric object for representing complex numbers in a way that treats infinity on par with other complex values. ...
Looks fine, even though ...
Sorry it's so long.

It contains all the necessary ideas. My solution is a bit more structured and formal, but the calculations are long, too. I recommend that the interested reader look it up in the solution manual in