Math Challenge - November 2021

fresh_42
Summary: Analysis. Projective Geometry. ##C^*##-algebras. Group Theory. Markov Processes. Manifolds. Topology. Galois Theory. Linear Algebra. Commutative Algebra.

1.a. (solved by @nuuskur ) Let ##C\subseteq \mathbb{R}^n## be compact and ##f\, : \,C\longrightarrow \mathbb{R}^n## continuous and injective. Show that the inverse ##g=f^{-1}\, : \,f(C)\longrightarrow \mathbb{R}^n## is continuous.

1.b. (solved by @nuuskur ) Let ##S:=\{x+tv\,|\,t\in (0,1)\}## with ##x,v\in \mathbb{R}^n,## and ##f\in C^0(\mathbb{R}^n)## differentiable for all ##y\in S.## Show that there is a ##z\in S## such that
$$
f(x+v)-f(x)=\nabla f(z)\cdot v\,.
$$
1.c. (solved by @MathematicalPhysicist ) Let ##\gamma \, : \,[0,\pi]\longrightarrow \mathbb{R}^3## be given as
$$
\gamma(t):=\begin{pmatrix}
\cos(t)\sin(t)\\ \sin^2(t)\\ \cos(t)
\end{pmatrix}\, , \,t\in [0,\pi].
$$
Show that the length ##L(\gamma )>\pi.##

2. (solved by @mathwonk ) Let ##g,h## be two skew lines in a three-dimensional projective space ##\mathcal{P}=\mathcal{P}(V)##, and ##P## a point that is neither on ##g## nor on ##h##. Prove that there is exactly one straight line through ##P## that intersects ##g## and ##h.##

3. (solved by @QuantumSpace ) Let ##(\mathcal{A},e)## be a unital ##C^*##-algebra. A self-adjoint element ##a\in \mathcal{A}## is called positive if its spectrum satisfies
$$
\sigma(a) :=\{\lambda \in \mathbb{C}\,|\,a-\lambda e \text{ is not invertible }\}\subseteq \mathbb{R}^+:=[0,\infty).
$$
The set of all positive elements is written ##\mathcal{A}_+\,.## A linear functional ##f\, : \,\mathcal{A}\longrightarrow \mathbb{C}## is called positive, if ##f(a)\in \mathbb{R}^+## for all positive ##a\in \mathcal{A}_+\,.##

Prove that a positive functional is continuous.

4. Prove that the following groups ##F_1,F_2## are free groups:

4.a. (solved by @nuuskur ) Consider the functions ##\alpha ,\beta ## on ##\mathbb{C}\cup \{\infty \}## defined by the rules
$$
\alpha(x)=x+2 \text{ and }\beta(x)=\dfrac{x}{2x+1}.
$$
The symbol ##\infty ## is subject to such formal rules as ##1/0=\infty ## and ##\infty /\infty =1.## Then ##\alpha ,\beta ## are bijections with inverses
$$
\alpha^{-1}(x)=x-2\text{ and }\beta^{-1}(x)=\dfrac{x}{1-2x}.
$$
Thus ##\alpha ## and ##\beta ## generate a group of permutations ##F_1## of ##\mathbb{C}\cup \{\infty \}.##

4.b. (solved by @martinbn and @mathwonk ) Define the group ##F_2:=\langle A,B \rangle ## with
$$
A:=\begin{bmatrix}1&2\\0&1 \end{bmatrix} \text{ and }
B:=\begin{bmatrix}1&0\\2&1 \end{bmatrix}
$$

5. We model the move of a chess piece on a chessboard as a time-homogeneous Markov chain with the ##64## squares as state space and the position of the piece at a certain (discrete) point in time as a state. The transition matrix is given by the assumption that all possible next states are equally probable. Determine whether these Markov chains ##M(\text{piece})## are irreducible and aperiodic for (a) king, (b) bishop, (c) pawn, and (d) knight.

6. Prove that an ##n##-dimensional manifold ##X## is orientable if and only if
(a) there is an atlas for which all chart changes respect orientation, i.e. have a positive functional determinant,
(b) there is a continuous ##n##-form which nowhere vanishes on ##X.##

7. (solved by @nuuskur ) A topological vector space ##E## over ##\mathbb{K}\in \{\mathbb{R},\mathbb{C}\}## is normable if and only if it is Hausdorff and possesses a bounded convex neighborhood of ##\vec{0}.##

8.a. (solved by @kmitza ) Determine the minimal polynomial of ##\pi + e\cdot i## over the reals.

8.b. (solved by @jbstemp ) Show that ##\mathbb{F}:=\mathbb{F}_7[T]/(T^3-2)## is a field, calculate the number of its elements, and determine ##(T^2+2T+4)\cdot (2T^2+5),## and ##(T+1)^{-1}.##

8.c. (solved by @mathwonk ) Consider ##P(X):=X^{7129}+105X^{103}+15X+45\in \mathbb{F}[X]## and determine whether it is irreducible in case
$$
\mathbb{F} \in \{\mathbb{Q},\mathbb{R},\mathbb{F}_2,\mathbb{Q}[T]/(T^{7129}+105T^{103}+15T+45)\}
$$
8.d. (solved by @mathwonk ) Determine the matrix of the Frobenius endomorphism in ##\mathbb{F}_{25}## for a suitable basis.

9. (solved by @mathwonk ) Let ##V## and ##W## be finite-dimensional vector spaces over the field ##\mathbb{F}## and ##f\, : \,V\otimes_\mathbb{F}W\longrightarrow \mathbb{F}## a linear mapping such that
\begin{align*}
\forall \,v\in V-\{0\}\quad \exists \,w\in W\, &: \,f(v\otimes w)\neq 0\\
\forall \,w\in W-\{0\}\quad \exists \,v\in V\, &: \,f(v\otimes w)\neq 0
\end{align*}
Show that ##V\cong_\mathbb{F} W.##

10. (solved by @mathwonk ) Let ##R:=\mathbb{C}[X,Y]/(Y^2-X^2)##. Describe ##V_\mathbb{R}(Y^2-X^2)\subseteq \mathbb{R}^2,## determine whether ##\operatorname{Spec}(R)## is finite, calculate the Krull-dimension of ##R,## and determine whether ##R## is Artinian.



High Schoolers only
11. Let ##a\not\in\{-1,0,1\}## be a real number. Solve
$$
\dfrac{(x^4+1)(x^4+6x^2+1)}{x^2(x^2-1)^2}=\dfrac{(a^4+1)(a^4+6a^2+1)}{a^2(a^2-1)^2}\,.
$$

12. Define a sequence ##a_1,a_2,\ldots,a_n,\ldots ## of real numbers by
$$
a_1=1\, , \,a_{n+1}=2a_n+\sqrt{3a_n^2+1}\quad(n\in \mathbb{N})\,.
$$
Determine all sequence elements that are integers.

13. For ##n\in \mathbb{N}## define
$$
f(n):=\sum_{k=1}^{n^2}\dfrac{n-\left[\sqrt{k-1}\right]}{\sqrt{k}+\sqrt{k-1}}\,.
$$
Determine a closed form for ##f(n)## without summation. The bracket means: ##[x]=m\in \mathbb{Z}## if ##m\leq x <m+1.##

14. Solve over the real numbers
\begin{align*}
&(1)\quad\quad x^4+x^2-2x&\geq 0\\
&(2)\quad\quad 2x^3+x-1&<0\\
&(3)\quad\quad x^3-x&>0
\end{align*}

15. Let ##f(x):=x^4-(x+1)^4-(x+2)^4+(x+3)^4.## Determine whether there is a smallest function value if ##f(x)## is defined ##(a)## for integers, and ##(b)## for real numbers. Which is it?
 
My solution for exercise 3.

We don't need the assumption that ##\mathcal{A}## is unital, so we will not use this.

Note that every element ##a## in a ##C^*##-algebra ##\mathcal{A}## can be written as ##a = p_1 - p_2 + i(p_3-p_4)## where ##p_1, p_2,p_3, p_4## are positive elements with ##\|p_i\| \le \|a\|##.

Thus, it suffices to show that
$$\sup_{a \in \mathcal{A}_+, \|a\| \le 1} \|f(a)\| < \infty$$
in order to conclude that ##\|f\| < \infty##. Suppose to the contrary that this supremum equals ##\infty##. Then we find a sequence ##\{a_n\}_{n=1}^\infty## of positive elements in the unit ball of ##\mathcal{A}## with the property that ##\|f(a_n)\|\ge 4^n##. Define ##a:= \sum_{n=1}^\infty 2^{-n} a_n##, where the series converges in the norm topology because it is absolutely convergent. For all ##n \ge 1##, we have ##a \ge 2^{-n} a_n## and thus by positivity ##f(a) \ge 2^{-n} f(a_n)## for all ##n \ge 1##. Taking norms, we obtain
$$\|f(a)\| \ge 2^{-n}\|f(a_n)\| \ge 2^{-n} 4^n = 2^n$$
and letting ##n\to \infty## yields a contradiction. Hence, the claim follows.

Remark: the same proof works to show that any positive map between ##C^*##-algebras is bounded. That's why I denote the absolute value on ##\mathbb{C}## by ##\|\cdot\|## as well.
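For reference, the decomposition used at the start of this post is the standard one via real/imaginary parts and the continuous functional calculus; a sketch (not spelled out in the post): write
$$a = \underbrace{\tfrac{1}{2}(a+a^*)}_{=:h_1} + i\,\underbrace{\tfrac{1}{2i}(a-a^*)}_{=:h_2},\qquad h_j=h_j^*,\quad \|h_j\|\le\|a\|,$$
and split each self-adjoint part as ##h_j=(h_j)_+-(h_j)_-## with ##(h_j)_\pm\ge 0## and ##\|(h_j)_\pm\|\le\|h_j\|,## which yields the four positive elements ##p_1,\ldots,p_4## with ##\|p_i\|\le\|a\|.##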
 
My solution to 1.c:
##L(\gamma)=\int_0^\pi |\dot{\gamma}(t)|\,dt = \int_0^\pi\sqrt{\cos^2(2t)+\sin^2(2t)+\sin^2(t)}\,dt=\int_0^\pi \sqrt{1+\sin^2(t)}\,dt\ge \int_0^\pi 1\,dt = \pi.## The inequality is in fact strict, since ##\sqrt{1+\sin^2(t)}>1## for all ##t\in(0,\pi),## so ##L(\gamma)>\pi.##
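A quick numerical sanity check of this bound (a sketch in plain Python; the step count is arbitrary):
```python
import math

# Midpoint-rule approximation of L(gamma) = integral_0^pi sqrt(1 + sin^2 t) dt
N = 100_000
dt = math.pi / N
length = sum(
    math.sqrt(1.0 + math.sin((k + 0.5) * dt) ** 2) * dt
    for k in range(N)
)
print(length)   # roughly 3.82, comfortably larger than pi = 3.14159...
```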
 
Let ##V:=V_\mathbb{K}## be a topological VS with topology ##\tau##. If ##V## is normable, then its closed unit ball is a bounded convex neighborhood of zero. The space ##V## is automatically Hausdorff, because there are balls of arbitrarily small radii.

Conversely, let ##C## be a convex bounded NH of zero. Then there exists a balanced NH of zero ##B## such that ##B\subseteq C##. Then ##A := \mathrm{Cl}(\mathrm{conv}\, B)## is a closed, absolutely convex bounded NH of zero. Its Minkowski functional ##p_A## is therefore a seminorm. Check that ##p_A## is actually a norm, i.e. that ##p_A(x) = 0## implies ##x=0##.

For every NH of zero ##W## we can choose ##t>0## such that ##tA \subseteq W## (because ##A## is bounded). This implies ##\{tA \mid t>0\}## is a NH basis of zero. By assumption ##V## is Hausdorff, thus ##\bigcap \{tA \mid t>0\} = \{0\}##.

Suppose ##x\neq 0##. Then there exists ##t_0>0## such that ##x\notin t_0A##, which means ##p_A(x) \neq 0##. Thus ##(V,p_A)## is a normed space with unit ball ##A##, and since the ##tA## are a ##\tau##-NH basis of zero, the ##p_A##-induced topology coincides with ##\tau##.
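For readers who want the definition that is used but not restated above: the Minkowski functional of an absorbing, absolutely convex set ##A## is
$$p_A(x) := \inf\{\,t>0 \;:\; x\in tA\,\},$$
and for such ##A## one has ##\{x : p_A(x)<1\}\subseteq A\subseteq\{x : p_A(x)\le 1\};## since ##A## is closed here, ##A=\{x : p_A(x)\le 1\}## is exactly the ##p_A##-unit ball.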
 
nuuskur said:
Let ##V:=V_\mathbb{K}## be a topological VS with topology ##\tau##. [...]
Wow. You concentrated my proof from 48 to 8 lines! Is that already a compactification?

My more detailed solution will be published on 2/1/2022 in
https://www.physicsforums.com/threads/solution-manuals-for-the-math-challenges.977057/
or here in case someone wants to read it prior to that.
 
Some ideas for 4a
Let ##F_1 = \langle\alpha,\beta \rangle##. By definition
$$F_1 = \{\gamma _n\circ \gamma _{n-1} \circ\ldots\circ \gamma _1 \mid \gamma _i \in \{\alpha,\beta,\alpha^{-1},\beta ^{-1}\},\ n\in\mathbb N\}.$$
Since ##\alpha,\beta,\alpha^{-1},\beta^{-1}## are maps, in principle it could happen that two different reduced compositions are still the same. So we need to check that this does not happen. None of the maps is idempotent (because an idempotent bijection must be the identity) and the different types of maps do not commute. E.g. one can work out ##\alpha\beta (x) = \frac{5x+2}{2x+1}## and ##\beta\alpha (x) = \frac{x+2}{2x+5}##. Similarly, ##\alpha\beta ^{-1} \neq \beta ^{-1}\alpha## and ##\alpha^{-1}\beta \neq \beta\alpha^{-1}##. Also, ##\alpha\alpha \neq \beta\beta##.

Is the following true? If a composition is non-empty and reduced, then it is not the identity. Suppose this is true for all reduced compositions of length ##n\geqslant 2##. Let ##\omega := \gamma _{n+1}\gamma _{n}\ldots \gamma _1\in F_1## be a reduced composition of length ##n+1##. Denote ##\sigma := \gamma _n\ldots\gamma _1##. Suppose ##\gamma _{n+1}\sigma## is the identity. Then ##\sigma ^{-1} = \gamma _{n+1}##, but then ##\omega## reduces to the identity, a contradiction... ?!?

Something feels wrong, though. There are matrices, for instance, that satisfy ##A^n = E## for some (possibly large) ##n##. This is true by Cayley-Hamilton, even. Take any matrix with characteristic equation ##x^n -1 =0##, then ##A^n-E=0##.

There must be something specific about these ##\alpha,\beta## ..
 
True in very general circumstances. Let ##f:X\to Y## be continuous and suppose ##X## is compact. Then the image is also compact. Suppose ##f(X) \subseteq \bigcup V_i##, where the ##V_i## are an open cover. Then continuity implies ##X\subseteq \bigcup f^{-1}(V_i)##, where the ##f^{-1}(V_i)## are open. By compactness there must be a finite subcover, so ##f(X)## also has a finite subcover.

Further, if ##X## is compact and ##A\subseteq X## is closed, then taking an open cover ##A\subseteq \bigcup U_i## we have an open cover ##X \subseteq(X\setminus A) \cup \bigcup U_i##. So ##A## must also have a finite subcover and so is compact.

Thirdly, in Hausdorff spaces compact implies closed. Let ##X## be Hausdorff and ##A\subseteq X## compact. It suffices to show ##X\setminus A## is open. Take ##b\in X\setminus A##; then for every ##a\in A##, one can pick disjoint open sets ##U_a, V_a## such that ##a\in U_a## and ##b\in V_a##. We have an open cover ##A\subseteq \bigcup \{U_a \mid a\in A\}##. Suppose ##A\subseteq \bigcup \{U_a \mid a\in F\}## for some finite subset ##F\subseteq A##. Then ##b\in\bigcap _{a\in F} V_a \subseteq X\setminus A##. So ##b\in \mathrm{Int}\,(X\setminus A)##.

Now suppose ##f:X\to Y## is a continuous injection with ##X## compact. All we have to assume is that the spaces are Hausdorff. Then closed subsets are mapped to compact ones, which are closed, and so ##f:X\to f(X)## is a homeomorphism.
 
nuuskur said:
Some ideas for 4a [...]
Hint: Consider what the powers of ##\alpha ## and the powers of ##\beta ## do geometrically.
 
Good lord, I have ##o(e^{-n})## understanding of geometry :oldgrumpy:
 
  • #10
nuuskur said:
Good lord, I have ##o(e^{-n})## understanding of geometry :oldgrumpy:
Ok, then use topology and what you know about the 1-sphere and the 2-disc.
 
  • #11
The crucial point is, that all powers of ##\alpha ## map the interior of the unit circle to the exterior, and all powers of ##\beta ## map the exterior to the interior with ##0## removed. Now consider a non-trivial reduced word that equals ##1## and conclude by the universal property that its non-existence is sufficient to have an isomorphism to the free group.
 
  • #12
Oh, I didn't think of that, that's a neat little trick
So, by definition
$$F_1 = \{\gamma _n\gamma _{n-1} \ldots \gamma _1 \mid \gamma _i \in\{\alpha,\beta,\alpha^{-1},\beta^{-1}\},\ n\in\mathbb N\}.$$
Call a composition reduced if it does not contain strings of type ##\ldots\gamma\gamma ^{-1}\ldots## and ##\ldots\gamma ^{-1}\gamma\ldots##. The goal is to show that non-empty reduced words are not the identity map.

Clearly, ##\alpha ^n(x) = x+2n##. Note that ##\beta ^2(x) = \frac{x}{2\cdot 2x+1}##. Suppose ##\beta ^n(x) = \frac{x}{2nx + 1}##, then
$$\beta ^n ( \beta (x)) = \frac{\beta (x)}{2n\beta (x)+1} = \frac{x}{2(n+1)x +1}.$$
Note the following. Let ##0<|z|<1##. Immediately one has ##|\alpha ^n(z)| > 1##. Also
$$\beta ^n(1/z) = \frac{1/z}{2n(1/z)+1} = \frac{1}{z+2n} = \frac{1}{\alpha ^n(z)},$$
which implies ##\beta ^n## maps the exterior of the unit ball into the unit ball, never attaining zero. Now it suffices to show that non-empty non-constant reduced words don't map zero to zero. By a constant word I mean something like ##\beta^n##; we know they are not the identity.

So a typical reduced word is of the form
$$\left ( \beta ^{u_1} \right )^{n_1}\left (\alpha^{u_2}\right )^{n_2}\left (\beta^{u_3}\right )^{n_3}\ldots \left (\alpha^{u_k}\right )^{n_k}$$
where ##u_i \in \{-1,1\}##. The word may end with powers of ##\alpha^{\pm1}## or ##\beta^{\pm 1}##, but we may assume the word begins with powers of ##\alpha ^{\pm 1}##, because powers of ##\beta^{\pm 1}## map zero to zero. If we begin with powers of ##\alpha ^{\pm 1}## we land outside the unit ball, then powers of ##\beta ^{\pm 1}## do not map to zero and we just start oscillating.

So this means two reduced words are equal as maps if and only if they are the same word. Let ##F## be the free group with generators ##a,b##. Then ##\alpha \mapsto a## and ##\beta \mapsto b## extends to a well defined map, which is obviously an isomorphism.
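For reference, the same argument in the standard packaging of the ping-pong (table-tennis) lemma; the sets ##D,E## are my labels and this is only a sketch: with
$$D:=\{z : 0<|z|<1\},\qquad E:=\{z : |z|>1\}\cup\{\infty\},$$
the computations above say ##\alpha^n(D)\subseteq E## and ##\beta^n(E)\subseteq D## for all ##n\neq 0.## Since ##D,E## are disjoint and nonempty and ##\alpha,\beta## have infinite order, the ping-pong lemma gives ##\langle\alpha,\beta\rangle\cong\langle\alpha\rangle *\langle\beta\rangle\cong\mathbb{Z}*\mathbb{Z},## i.e. ##F_1## is free on ##\alpha,\beta.##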
 
  • #13
Lagrange MVT, essentially.
It is well known that the directional derivative along ##\mathbf{u}\neq 0## can be computed as
$$\frac{\partial}{\partial \mathbf{u}} f(\mathbf{a}) = \langle \nabla f(\mathbf{a}), \overline{\mathbf{u}} \rangle$$
where ##\overline{\mathbf{u}}## is the unit vector in the direction of ##\mathbf{u}##.

Let ##f:\mathbb R^n \to \mathbb R## be continuous and differentiable in ##S##. Define ##h:[0,1] \to S\cup \{x,x+v\}## by
$$h(t) := x+tv.$$
Then ##f\circ h## satisfies the assumptions of the MVT. So ##(f\circ h)'(t_0) = (f\circ h)(1) - (f\circ h)(0)## for some ##t_0\in (0,1)##. Put ##h(t_0) = z##; then by the chain rule ##\langle \nabla f(z), v\rangle = f(x+v)-f(x)##.
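For completeness, the chain-rule step used above, written out: since ##h'(t)=v## on ##(0,1),##
$$(f\circ h)'(t) = \nabla f(h(t))\cdot h'(t) = \nabla f(x+tv)\cdot v,$$
so ##(f\circ h)'(t_0)=\nabla f(z)\cdot v,## which is exactly the claimed identity.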
 
  • #14
In reference to #2, probably none of us in the US had a course in elementary geometry, even though we may have seen topological vector spaces, advanced calculus, field extension theory, tensor products, and free groups! So this is a review of simple geometric facts about projective 3 space.

Recall that the points of a projective 3 space P^3 are the lines through the origin (one diml subspaces) of a 4 diml vector space V. A line in P^3 is a 2 diml subspace of V, and a plane in P^3 is a 3 diml subspace of V.
So this problem is also a problem in linear algebra, if you prefer. In particular, you can use linear algebra to prove these geometric facts:

If two distinct lines in P^3 meet, they meet in exactly one point, and lie on exactly one plane.
Given a line and a plane in P^3, either the line lies in the plane or else meets it in exactly one point.
A line in P^3 and a point not on that line, together lie on exactly one plane.
Two distinct planes in P^3 meet in a unique line.
Two distinct lines in a plane meet in exactly one point.

If you assume these facts, they suffice to solve the problem as stated. In fact the solution then requires no further mathematical knowledge, only logic, hence a layperson could do it. Or you might prefer to prove the linear algebra version: Given two 2 diml subspaces F,G of the 4 diml vector space V, intersecting only at the origin, and a one diml subspace E not lying in either F or G, prove there is a unique 2 diml subspace H containing E and meeting each of F,G in a one diml subspace. If you choose this approach, you really should then go back and give the projective geometric argument. I only make this alternate suggestion since some of us may have more linear algebra intuition than projective geometric intuition.
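In the linear-algebra picture, all of these facts reduce to the dimension formula for subspaces ##U,W\le V## with ##\dim V=4## (a sketch, not part of the original post):
$$\dim(U+W)+\dim(U\cap W)=\dim U+\dim W.$$
For example, two distinct lines (##2##-dimensional subspaces) ##U,W## meet in ##\mathcal{P}## iff ##\dim(U\cap W)=1,## iff ##\dim(U+W)=3,## i.e. iff they span a common plane; skew lines correspond to ##U\cap W=\{0\},## i.e. ##U+W=V.##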
 
  • #15
I pretty much bruteforced 8a but here is my entry until someone posts a nice one:

So we start by setting ##a = \pi + e\cdot i##, then by squaring both sides we get ##a^2 = \pi^2 +2ei\pi -e^2##. Now we put all real terms on one side and square again to get $$(a^2 - \pi^2 +e^2 )^2 = -4e^2\pi^2.$$ From here some simple algebra shows that ##a## is a root of our first candidate $$f(x) = x^4 +2x^2e^2 -2x^2\pi^2 +e^4 + 2e^2\pi^2 + \pi^4.$$ Of course this need not be irreducible, and to check we suppose that there is a factorization $$(x^2+bx+ c)(x^2 + dx + f) = f(x).$$ Simple manipulation gives us ##d+b = 0 \implies d = -b,## then ##f +db +c = 2e^2 - 2\pi^2,## ##bf + dc = b(f-c) = 0 \implies b= 0## or ##f=c,## and ##cf = (e^2 +\pi^2)^2.## We will consider the case where ##f=c##, so from here we get that ##c=f= \pi^2 + e^2## and ##b = 2\pi##, so our factorization is $$f(x) = (x^2 +2x\pi + e^2 + \pi^2 )(x^2 - 2x\pi + e^2 + \pi^2) = g(x)h(x).$$ Finally it is easy to see that ##h(x)## has no real zeros and is hence irreducible, and it has ##\pi + ei## as one of its zeros. Hence ##h(x)## is the minimal polynomial.
 
  • #16
kmitza said:
I pretty much bruteforced 8a but here is my entry until someone posts a nice one:

So we start by setting $$a = \pi + e*i$$ then by squaring both sides we get $$a^2 = \pi^2 +2ei\pi -e^2 $$ now we put all real terms on one side and square again to get $$ (a^2 - \pi^2 +e^2 )^2 = -4e^2\pi^2 $$ from here some simple algebra gets us our first candidate: $$f(a) = a^4 +2a^2e^2 -2a^2\pi^2 +e^4 + e^2\pi^2 + \pi^4 $$ of course this doesn't seem irreducible and to check we suppose that there is a factorization $$(a^2+ba+ c)(a^2 + da + f) = f(x)$$ simple manipulation gives us: $$ d+b = 0 \implies d = -b$$ $$f +db +c = 2e^2 - 2\pi^2 $$ $$bf + dc = b(f-c) = 0 \implies b= 0 \or f=c$$ $$cf = (e^2 +\pi^2)^2$$ we will consider the case where f=c so from here we get that $$c=f= \pi^2 + e^2$$ and $$b = 2\pi$$ so our factorization is $$f(a) = (x^2 +2a\pi + e^2 + \pi^2 )(x^2 - 2a\pi^2 + e^2 + \pi^2) = g(a)h(a)$$ finally it is easy to see that h(a) has no real zeros and is hence irreducible and it has $$\pi + ei$$ as one of the zeros. Hence h(a) is the minimal polynomial
I assume there is a typo somewhere. Could you please write your solution as an element of ##\mathbb{R}[x]?## I'm a bit lost within your alphabet.
 
  • #17
fresh_42 said:
I assume there is a typo somewhere. Could you please write your solution as an element of ##\mathbb{R}[x]?## I'm a bit lost within your alphabet.
Yeah sorry my notation wasn't good, bad choice using a as a variable... I think I fixed it now
 
  • #18
kmitza said:
Yeah sorry my notation wasn't good, bad choice using a as a variable... I think I fixed it now
Nope. Hint: Compare ##g(x)## and ##h(x)##. One is wrong. Unfortunately ##h(x).##
 
  • #19
Um, I am not sure what you mean. I double checked with Wolfram and by hand just now, and the one that has ##\pi + ei## as a root is ##h(x) = x^2 -2x\pi + e^2 + \pi^2##; the other one has ##-\pi + ie## and its conjugate as roots, right?
 
  • #20
kmitza said:
Um I am not sure what you mean I double checked with wolfram and hand just now and the one that has $$\pi + ei$$ as a root is $$h(x) = x^2 -2x\pi + e^2 + \pi^2$$ the other one has $$-\pi + ie$$ and it's conjugate as roots, right?
Yes, but this is not what you wrote earlier! You squared ##\pi ## in the second term in your original answer. Here it is gone!

Here is the (slightly) more elegant proof:

It is easy to guess the conjugate, second root. Now ##(\pi+ e\cdot i)(\pi - e\cdot i)=\pi^2+e^2 \in \mathbb{R}## and ##(\pi+ e\cdot i)+(\pi - e\cdot i)=2\pi \in \mathbb{R}## so we get by Vieta's formulas ##X^2-2\pi X+\pi^2+e^2\in \mathbb{R}[X].##
 
  • #21
fresh_42 said:
Here is the (slightly) more elegant proof: [...]
Oh my god, that's so much simpler, thank you for showing me. As for the square, it was an honest mistake; I didn't see that I had added it.
 
  • #22
Hint for #9: To show two vector spaces are isomorphic, of course we first try to find a linear map from one to the other, and then hopefully it is bijective. We don't quite have that here, but we have something close. I.e. remember the defining property of the tensor product, Hom(VtensW, X) ≈ Bil(VxW,X) ≈ Hom(V,Hom(W, X)), (where Hom means linear maps and Bil means bilinear maps). What does that give here, and how does that help?

Hints for #10: This is an (affine) algebraic geometry problem, the study of the relation between subsets of affine space and the ideals of polynomials vanishing on them. If f(X,Y) is a polynomial over a field k, with no multiple factors, and C = {(a,b): f(a,b) = 0} is the subset of the plane where it vanishes, then the family of all polynomials vanishing on this set equals the ideal (f) generated by f in k[X,Y], hence k[X,Y]/(f) = R is the ring of polynomial functions restricted to C.

The fundamental fact relating points and ideals, is that if p = (a,b) is a point of C, then the evaluation function k[X,Y]-->k, sending a polynomial g to its value g(p), at p, has as its kernel a maximal ideal of k[X,Y] which contains the ideal (f), hence defines a maximal, hence prime, ideal Mp of R. (Prove this.) What are the generators of Mp?

This technique lets you produce certain elements of Spec(R) = {set of all prime ideals of R}. There are however, also other prime ideals in R. Can you find them? (They correspond to "points" of C, i.e. solutions (a,b) of f(X,Y) = 0, but with coefficients (a,b) not in k, but in certain field extensions of k.)

If you want to explore the prime (or maximal) spectrum of the ring of real polynomial functions restricted to the real locus, you may consult my answer to this question on Math StackExchange:
https://math.stackexchange.com/ques...y/2844259?noredirect=1#comment5865953_2844259
 
  • #23
Question on #6:
Pardon me for ignorance, but I wonder just what is wanted in problem 6, since I thought 6a is sometimes taken as a definition of an orientable differentiable manifold. Of course it is possible to define the orientability of any continuous manifold, in terms of continuous sections of the orientation bundle, a certain 2 sheeted cover of the manifold constructed from local integral homology groups, in which setting the problem does not quite make sense, i.e. if no differentiable structure is given. Does the problem ask for a proof that the definition of orientability as a continuous manifold, i.e. in terms of the orientation bundle, is equivalent to the two given conditions, in the presence of differentiable structure?

Of course for starters one could just prove that 6a and 6b are equivalent for differentiable manifolds. But then what? Thank you.
 
  • #24
mathwonk said:
Question on #6:
Pardon me for ignorance, but I wonder just what is wanted in problem 6, since I thought 6a is sometimes taken as a definition of an orientable differentiable manifold.
I meant the following more basic definition via coordinate charts. I thought that was closest to how physicists consider an orientation; basically, a positive functional determinant ##\det D(\varphi_\alpha \circ \varphi^{-1}_\beta )>0## of chart changes:

Orientations of a vector space are elements from either of the two possible equivalence classes of ordered bases, i.e. ##\det T \gtrless 0## where ##T## is the transformation matrix between bases.

An orientation ##\mu## of ##M## is a choice of orientations ##\mu_x## for every tangent space ##T_x(M),## such that for all ##x_0\in M## there is an open neighborhood ##x_0\in U\subseteq M## and differentiable vector fields ##\xi_1,\ldots,\xi_n## on ##U## with
$$
\left[\left(\xi_1\right)_x,\ldots,\left(\xi_n\right)_x\right]=\mu_x
$$
for all ##x\in U.## The manifold ##M## is called orientable, if an orientation for ##M## can be chosen.

Let ##\mu## be an orientation on ##M.## A chart ##(U,\varphi )## with coordinates ##x_1,\ldots,x_n## is called positively oriented, if for all ##x\in U##
$$
\left[\left. \dfrac{\partial }{\partial x_1}\right|_{x},\ldots,\left. \dfrac{\partial }{\partial x_n}\right|_{x}\right]=\mu_x
$$
 
  • #25
Suggestion for #8b:
First prove that F7 is a field, then try to use the same proof to show F7[T]/(T^3-2) is a field.

Here is an example of the (easier) problem of finding an inverse, this time of T:
since T^3 = 2, then 4T^3 = (4T^2)T = 8 ≡ 1 (mod 7), so T^-1 = 4T^2.

For a reprise of basic facts about computations in extensions of this type, one may consult pages 21-30 of chapter 2 of Galois Theory, by Emil Artin, available here for free download, under open access; (this is where I first encountered these ideas in about 1963, and is still the clearest explanation I have seen since):
https://projecteuclid.org/ebooks/no...d-Theory/ndml/1175197045?tab=ArticleFirstPage
 
  • #26
I'll take a go at 8b.

Let ##p(T) = T^3-2##. Notice that ##p## has no roots over ##\mathbb{F}_7##, which can be seen by simply evaluating ##p## at each element of ##\mathbb{F}_7##.

We claim then that ##p## is irreducible. If not, then ##p## can be factored as ##p = f g## where ##f##, ##g## are irreducible and ##deg(f) + deg(g) = 3##. Since ##f## and ##g## are irreducible, ##deg(f), deg(g) > 0## and so either ##deg(f) = 1## or ##deg(g) = 1##. Suppose without loss of generality that ##deg(f) = 1##; then ##f## is linear and hence has a root in ##\mathbb{F}_7##, which means that ##p## has a root over ##\mathbb{F}_7##, which is a contradiction.

Since ##p## is irreducible, ##(p)## must be maximal. Otherwise suppose there exists some ##q \in \mathbb{F}_7[T]## such that ##(p) \subset (q)##. Hence there exists an ##f \in \mathbb{F}_7[T]## such that ##p = qf##. Since ##p## is irreducible either ##q## is a unit (in which case ##(q) = \mathbb{F}_7[T]##) or ##f## is a unit (in which case ##(f) = (q)##). So indeed ##(p)## is maximal.

We now use the fact that given any ring ##R## with a maximal ideal ##M##, ##R/M## is a field. This follows from one of the standard isomorphism theorems which states that there is a one-to-one correspondence between the ideals of ##R/M## and the ideals in ##R## containing ##M##. So if ##M## is maximal, ##R/M## can not contain any proper ideals, and so must be a field.

To compute the number of elements in ##\mathbb{F}_7[T]/(T^3-2)## we note that since ##\deg p = 3##, each coset can be represented by an element of the form ## c_1 + c_2 T + c_3 T^2## where ##c_i \in \mathbb{F}_7##, and there are ##7^3 = 343## such elements.

To compute ##(T^2+2T+4)(2T^2+5)## in ##\mathbb{F}_7[T]/(T^3-2)## we first note that ##(T^2+2T+4)(2T^2+5) = 2T^4 + 2T^3 + 6T^2 + 3T + 6## in ##\mathbb{F}_7## then reducing ##mod (T^3-2)## gives ## 2(2)T + 2(2) + 6T^2 + 3T + 6## and so $$(T^2+2T+4)(2T^2+5) = 6T^2 + 3$$

To compute ##\frac{1}{T+1}## you can do a sort of "brute force" method. Let ##\frac{1}{T+1} = P(T) = a_0 + a_1T + a_2T^2##; then ##(T+1)P(T) = (a_1+a_2)T^2 + (a_1 + a_0)T + (a_0 + 2a_2) = 1##, which gives the system $$a_1 + a_2 = 0, \quad a_1 + a_0 = 0, \quad a_0 + 2a_2 = 1$$
which has the solution ##a_0 = 5, a_1 = 2, a_2 = 5##. So $$\frac{1}{T+1} =5 + 2T + 5T^2$$
 
  • #27
jbstemp said:
I'll take a go at 8b. [...]
Almost perfect! Only ##(T^2+2T+4)(2T^2+5)=6T^2.## The coefficient at ##T^3## is ##4,## not ##2.##
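A quick computational cross-check of the corrected product and of the inverse of ##T+1## (a sketch in plain Python; the helper names are mine, not from the thread):
```python
# Arithmetic in F_7[T]/(T^3 - 2): elements are coefficient lists [c0, c1, c2] mod 7.
P = 7

def reduce_mod(poly):
    """Reduce a coefficient list modulo 7 and modulo the relation T^3 = 2."""
    poly = poly[:]
    while len(poly) > 3:
        top = poly.pop()                             # coefficient of T^k with k >= 3
        poly[len(poly) - 3] = (poly[len(poly) - 3] + 2 * top) % P   # T^k = 2 * T^(k-3)
    return [c % P for c in poly] + [0] * (3 - len(poly))

def mul(a, b):
    """Multiply two elements given as coefficient lists."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % P
    return reduce_mod(prod)

# (T^2 + 2T + 4)(2T^2 + 5) should come out as 6T^2:
print(mul([4, 2, 1], [5, 0, 2]))     # -> [0, 0, 6]

# Brute-force the inverse of T + 1; expect 5 + 2T + 5T^2:
one = [1, 0, 0]
inverse = next([c0, c1, c2]
               for c0 in range(P) for c1 in range(P) for c2 in range(P)
               if mul([1, 1, 0], [c0, c1, c2]) == one)
print(inverse)                       # -> [5, 2, 5]
```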
 
  • #28
Nice work! Linear equations can always be used to find inverses as you do. Another trick is to find the "minimal polynomial" of an element, since if X^3 + aX^2 + bX + c = 0, then X^3 + aX^2 + bX = -c, and so X(X^2 + aX + b) = -c, and since we know the inverse of -c in the field of coefficients, we can divide by it. (In particular an element has an inverse iff its minimal poly has non zero constant term. This works in linear algebra too, where we recall the constant term of the characteristic polynomial is the determinant.)

This is brute force, but for linear polynomials like T+a, the force needed is minimal, since we can always substitute T = (T+a)-a into the given equation and expand. E.g. T+1 = S gives T^3 = (S-1)^3 = S^3 - 3S^2 + 3S -1 = 2, so S^3 - 3S^2 + 3S = S(S^2-3S+3) = 3, so since 3·5 = 1, we get S(5S^2 - S + 1) = 1, and thus 1/(T+1) = 1/S = 5S^2 -S + 1 = 5(T+1)^2 -(T+1) + 1 = 5T^2 + 2T +5.

Or just play with it, looking for a multiple of T+1 that is constant: E.g. here T^3 = 2, so T^3 + 1 = 3, so T^3 + 1 = (T+1)(T^2-T+1) = 3, and since 1/3 = 5, thus 1/(T+1) = 5(T^2-T+1) = 5T^2 -5T +5 = 5T^2 +2T +5.

I tried to use linear equations to show this ring is a field, by checking that we can always solve the system for an inverse but had trouble showing the determinant is non zero. For a general element a + bT + cT^2, I got determinant a^3 + 2b^3 + 4c^3 + abc, which apparently has no solutions mod 7, except a=b=c= 0. I could show this at least when abc = 0, i.e. when at least one coefficient is zero, (which includes the case of T+1), using the fact that mod 7 the only numbers having cube roots are 0,1,-1, and also when abc≠0 and all the cubes are equal, but did not pursue all other cases in general.

Your abstract method using maximal ideals of course works beautifully in general. You can also use abstract linear algebra by first showing that the product of two non zero polynomials of the form a+bT+cT^2 cannot be divisible by T^3-2, (either directly as Artin does, following Gauss, or using unique factorization of polynomials), hence the map from our ring (a finite dimensional vector space) to itself defined by multiplication by a non zero polynomial a+bT+cT^2 is a linear injection, hence also surjective, hence multiplies some polynomial into 1.

Congratulations!

(What do you think of 8c? For 8d, I myself will need to recall Galois' construction of the field with 25 elements, a field I have never used. ... Oh yes, fresh_42 just showed us how to construct the field of order 7^3 so we can do likewise for 5^2. I also need to know what the Frobenius map of F25 is, ... ok it raises each element to the 5th power. Ah yes! it is additive, multiplicative, and fixes the subfield F5, hence is an F5 - linear map!...after much faulty calculation, I seem to have accidentally chosen a nice simple eigenbasis. By the way, in hindsight, what happens when you compose the Frobenius with itself?)

Hint: For 8c:

If f·g = P, all with integer coefficients, what would be true mod 5? Is that possible?

By the way, as usual these are very nice problems. I think #8 in particular is wonderful. The technical difficulties are minimal, and the instructional value is high. It is not easy to come up with problems like this that are not routine, do not overwhelm, and teach you a lot.
 
  • #29
SPOILER, solutions of 8c, 8d:

8c: To be reducible it suffices to have a root, say X=a, since then by the root factor theorem, (X-a) is a factor. In each of the last three fields the polynomial has a root: in the reals, by the intermediate value theorem every odd degree polynomial takes both signs hence has a root; in the field with 2 elements X=1 is visibly a root since each of the 4 terms is congruent to 1, mod 2; in the quotient ring, the meaning of the notation is that the polynomial in the bottom is set equal to zero, hence T itself is a root. Over the integers, the polynomial is irreducible, since it is congruent to a positive power of X, mod 5, but then both factors must also be congruent to a positive power of X, mod 5, which implies both have constant term divisible by 5, contradicting the fact that the constant term is not divisible by 25. By Gauss's lemma, an integer polynomial which factors with rational coefficients, has also integer coefficient factors, so it is also irreducible over the rationals.

8d: The non zero elements of a finite field form a cyclic group, so all 25 elements of the field F25 satisfy the polynomial X^25 = X, but not all satisfy X^5 = X. Since by definition the Frobenius map F takes an element a to F(a) = a^5, that means F^2 = Id, so F satisfies the minimal and characteristic polynomial X^2-1 = 0. Hence its eigenvalues are 1,-1, and it is diagonalizable as a 2x2 matrix, in a suitable basis, with those eigenvalues on the diagonal.
 
  • #30
SPOILER: solution for 9:

By definition of the tensor product, or by use of its basic property, a linear map out of VtensW is equivalent to a bilinear map out of VxW, which then is equivalent both to a linear map from V to W*, and to a linear map from W to V*. In particular, given f as in the problem, define a linear map V-->W* by sending v to the linear functional that sends w to f(vtensw). By what is given, no non zero v maps to the zero functional, i.e. this injects V linearly into W*. Hence dim(V) ≤ dim(W*) = dim(W). Similarly, W embeds in V*, so also dim(W) ≤ dim(V). Since V,W are finite dimensional of the same dimension, they are isomorphic, (but not naturally).
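A compact way to write the maps used above (the names ##\Phi,\Psi## are mine):
$$\Phi:V\to W^*,\quad \Phi(v)(w):=f(v\otimes w),\qquad \Psi:W\to V^*,\quad \Psi(w)(v):=f(v\otimes w).$$
The two hypotheses say exactly that ##\Phi## and ##\Psi## are injective, so ##\dim V\le\dim W^*=\dim W## and ##\dim W\le\dim V,## hence ##\dim V=\dim W## and ##V\cong_\mathbb{F}W.##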
 
  • #31
SPOILER: solution of #10:

By definition, V(Y^2-X^2) denotes the set of points in (X,Y) space whose coordinates satisfy the polynomial Y^2-X^2. Hence the real points consist of the points on the two lines X=Y and X+Y = 0, in the real plane. By definition Spec(R) is the set of prime ideals in the ring R, where an ideal J in R is prime if and only if the quotient ring R/J is a "domain", i.e. R/J has no non trivial divisors of zero. Equivalently, if and only if for elements a,b in R, the product ab belongs to J if and only if at least one of a or b, or both, belongs to J.

If (a,b) is a solution of Y^2-X^2 = 0, the evaluation map C[X,Y]-->C, sending f(X,Y) to f(a,b), defines a surjective homomorphism C[X,Y]-->C with kernel the maximal ideal (X-a,Y-b). Y^2-X^2 belongs to this ideal as one sees by setting Y = (Y-b)+b and X=(X-a)+a and expanding Y^2-X^2. Hence (X-a,Y-b) defines an ideal J in R = C[X,Y]/(Y^2-X^2), such that R/J ≈ C, a field. Thus J is a prime ideal in R. Moreover J determines the point (a,b) as the only common zero of all elements of J. Hence there is an injection from the infinite set of (real, hence also complex) points of V(Y^2-X^2) into Spec(R), so that spectrum is infinite.

By definition, the Krull dimension of R is the length of the longest strict chain of prime ideals in R, ordered by containment, where the chain P0 < P1 <...< Pn has "length" = n. Since (X-Y) < (X-1,Y-1) is a chain of prime ideals in R of length 1, the Krull dimension is at least one. It already follows that R is not Artinian, since a ring is Artinian if and only if it is Noetherian, (which R is), and of Krull dimension zero, which R is not.

Now R actually has dimension one, since (X-Y) is a principal, hence minimal, prime in R, and when we mod out by it we get the principal ideal domain C[X,Y]/(X-Y) ≈ C[X]. In this ring an ideal is prime if and only if it is generated by an irreducible element, hence no non - zero prime ideal can be contained in another, since one of the irreducible generators would divide the other, contradicting the meaning of irreducible. Thus only one prime ideal, at most, can contain (X-Y). Similarly at most one can contain (X+Y). Thus any chain starting from either (X-Y) or (X+Y) has length at most one. But by definition of a prime ideal, any prime in R must contain zero, i.e. the product (X-Y)(X+Y), hence contains at last one of them.
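The chain used for the dimension bound, written out (everything read in ##R=\mathbb{C}[X,Y]/(Y^2-X^2)##):
$$(X-Y)\;\subsetneq\;(X-1,\,Y-1)\;\subset\;R.$$
Here ##(X-Y)## is prime because ##Y^2-X^2=(Y-X)(Y+X)\in(X-Y),## so ##R/(X-Y)\cong\mathbb{C}[X,Y]/(X-Y)\cong\mathbb{C}[X]## is a domain, while ##(X-1,Y-1)## is maximal with ##R/(X-1,Y-1)\cong\mathbb{C};## the containment holds since ##X-Y=(X-1)-(Y-1),## and it is strict because the two quotients differ.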
 
  • #32
mathwonk said:
SPOILER, solutions of 8c, 8d:

8c: To be reducible it suffices to have a root, say X=a, since then by the root factor theorem, (X-a) is a factor. In each of the last three fields the polynomial has a root: in the reals, by the intermediate value theorem every odd degree polynomial takes both signs hence has a root; in the field with 2 elements X=1 is visibly a root since each of the 4 terms is congruent to 1, mod2; in the quotient ring, the meaning of the notation is that the polynomial in the bottom is set equal to zero, hence T itself is a root.

mathwonk said:
Over the integers, the polynomial is irreducible, since it is congruent to a positive power of X, mod 5, but then both factors must also be congruent to a positive power of X, mod 5, which implies both have constant term divisible by 5, contradicting the fact that the constant term is not divisible by 25. By Gauss's lemma, an integer polynomial which factors with rational coefficients, has also integer coefficient factors, so it is also irreducible over the rationals.
Or short with Eisenstein and ##5##.
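Spelled out, the Eisenstein check at ##p=5## for ##P(X)=X^{7129}+105X^{103}+15X+45##:
$$5\nmid 1,\qquad 5\mid 105,\quad 5\mid 15,\quad 5\mid 45,\qquad 25\nmid 45,$$
so ##P## is irreducible over ##\mathbb{Q}## (equivalently over ##\mathbb{Z}##, by Gauss).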
mathwonk said:
8d: The non zero elements of a finite field form a cyclic group, so all 25 terms of the field F25 satisfy the polynomial X^25 = X. Since by definition the Frobenius map F takes an element a to F(a) = a^5, that means F^2 = Id, so F satisfies the minimal and characteristic polynomial X^2-1 = 0. Hence its eigenvalues are 1,-1, and it is diagonalizable as a 2x2 matrix, in a suitable basis, with those eigenvalues on the diagonal.
... which results in ##\begin{bmatrix}1&0\\0&4\end{bmatrix}##
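For concreteness, here is one explicit model in which that matrix appears; the choice ##\mathbb{F}_{25}\cong\mathbb{F}_5[T]/(T^2-2)## is mine (##2## is a non-square mod ##5##, and any non-residue would do). In the basis ##\{1,T\}##:
$$\operatorname{Frob}(a+bT)=(a+bT)^5=a+b\,T^5=a+b\,(T^2)^2T=a+4b\,T,$$
since ##a^5=a## and ##b^5=b## for ##a,b\in\mathbb{F}_5,## which is exactly the matrix ##\begin{bmatrix}1&0\\0&4\end{bmatrix}.##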
 
  • #33
SPOILER: solution for 2:

Let the two disjoint lines be L and M, and p the point not on either of them. Then there is a unique plane A spanned by L and p, and a unique plane B spanned by p and M. Then A and B are distinct planes, since one of them contains L, the other contains M, and L and M cannot be in the same plane since they do not meet. Hence the planes A and B meet along a unique common line K. Since K consists of all points common to A and B, it contains p. Since K also lies in both planes A and B, it meets both L and M. If R were some other such line, R would contain p as well as a point of L, hence would meet A in 2 points, hence would lie in A. Similarly it would lie in B, so the only possible such line R is the unique line K of intersection of A and B.

That leaves 4b, 5 and 6. Are we stumped?!
 
  • #34
mathwonk said:
SPOILER: solution for 2: [...]
Doh! I was just working on sketching out a logical proof based on your axioms of P^3:
1) the two skew lines (g, h) do not meet, so they are in distinct planes, A and B
1a) g is a line on A
1b) h is a line on B
2) A and B are distinct so they meet in a unique line, k != h or g
2a) h intersects A at exactly one point
2b) g intersects B at exactly one point
2c) k is a line in both planes
3) the point p is neither on g or h
3a) p and g lie in exactly one plane, C
3b) p and h lie in exactly one plane, D
4) C and D are distinct, so they meet in a unique line, l
5) I'm not sure here... but I think:
4a) C and A intersect along the line g (g is in both planes)
4b) D and B intersect along the line h (h is in both planes)
6) l intersects with both g and h
6a) since both l and g are on C, they meet at a unique point
6b) since both l and h are on D, they meet at a unique point
6c) The point p is on both C and D, so it must be also be on l

If that is more or less correct, I guess 2) becomes irrelevant, but let me know if I got anything wrong.
 
  • #35
valenumr said:
If that is more or less correct, I guess 2) becomes irrelevant, but let me know if I got anything wrong.
This is too vague and looks too Euclidean. How do you construct ##A## and ##B## with those properties? Anyway, the correct solution has been given in post #33.
 
  • #36
fresh_42 said:
This is too vague and looks too Euclidean. How do you construct ##A## and ##B## with those properties? Anyway, the correct solution has been given in post #33.
Yeah, I wasn't finished when mathwonk posted the answer. I was working from the axioms in post #14. After review, it looks like statements 3, 4, and 6 are pretty close.
 
  • #37
mathwonk said:
That leaves 4b, 5 and 6. Are we stumped?!
What is the difference between 4.a and 4.b? These are the matrices of the corresponding Moebius' transformations ##\alpha## and ##\beta##, and a composition of transformations has the product of the matrices.
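Spelled out, with a matrix acting as the fractional linear map ##\begin{bmatrix}a&b\\c&d\end{bmatrix}\cdot x=\frac{ax+b}{cx+d}##:
$$A=\begin{bmatrix}1&2\\0&1\end{bmatrix}:\ x\mapsto x+2=\alpha(x),\qquad B=\begin{bmatrix}1&0\\2&1\end{bmatrix}:\ x\mapsto \frac{x}{2x+1}=\beta(x).$$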
 
  • #38
Nice! I was thinking they should be equivalent, but was trying to do it via 4b acting on the real plane, the complement of infinity. Your idea seems to use the realization of the riemann sphere as the projective complex line. Maybe you could give a little more detail.

SPOILER: #6:
as for 6, some hints are: an n form determines a b-orientation, and then to get an a-orientation, choose any atlas whose domain sets are connected. Then each chart will determine a pullback of the standard n form which is everywhere in that chart either a positive or negative multiple of the given one. If negative, compose the chart with an orientation reversing linear isomorphism of n space. The resulting family of charts, all of which pull the standard form back to the given one on M, is an a-orientation.

To go the other way, use an oriented atlas of charts to pull back copies of the standard form, then paste them together with a partition of unity, and use the property that the atlas is oriented, to check that at each point, the constructed form is a sum of positive multiples of one of them, hence non zero.

an a -orientation gives an "orientation" by using the standard vector fields defined by the charts in an oriented atlas. To go the other way, choose an atlas made up of charts subordinate to the domains of the vector fields verifying the orientation property. Then maybe wedge them together to give a non zero field of n -vectors, hence by duality an n form in that chart, and glue them into an n form with a partition of unity?
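The gluing step in the direction (a) ⟹ (b) can be written as a single formula (a sketch; ##\{\rho_i\}## is a partition of unity subordinate to the oriented atlas ##\{(U_i,\varphi_i)\}##):
$$\omega:=\sum_i \rho_i\,\varphi_i^{*}\!\left(dx^1\wedge\cdots\wedge dx^n\right).$$
At each point every non-vanishing summand is a positive multiple of any fixed one, because the chart changes have positive Jacobian determinant, so ##\omega## vanishes nowhere.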
 
  • #39
mathwonk said:
Nice! I was thinking they should be equivalent, but was trying to do it via 4b acting on the real plane, the complement of infinity. Your idea seems to use the realization of the riemann sphere as the projective complex line. Maybe you could give a little more detail.
What I meant is that the two groups are isomorphic. For fractional linear transformations, composition is given by the multiplication of the corresponding matrices, and similarly for taking inverses. So mapping the two generators of 4b to those of 4a and extending it multiplicatively will be well defined and an isomorphism.
 
  • #40
martinbn said:
What I meant is that the two groups are isomorphic. For fractional linear transformations composition is given by the multiplication of the corresponding matrices. Similarly for takinf inverse. So maming the two generators of 4b to those of 4a and extending it multiplicatively will be well defined and an isomorphism.
There is a reason why I put them into one problem and not two. Problems of the same number should be solved as a whole. Unfortunately, members torpedoed this plan - pretty much from the beginning.
 
  • #41
martinbn, I understood you meant they were isomorphic, but I think you still have to prove it. One way to prove it is to give a bijection between the riemann sphere and the projective line under which the two actions correspond. Perhaps you were taking all that for granted. (As a naive person like me might attempt, of course one cannot so easily define a homomorphism just from the matrix group in 4b to the group in 4a, by saying what it does to the generators, unless we know already the matrix group they generate is free.)
 
  • #42
I seem to be on a slightly different page from others, or just further behind, so let me explain my viewpoint. I apologize if this is more algebro - geometric than desired, and/or too elementary to be useful.

I was trying to relate problems 4a and 4b by letting a matrix act on the riemann sphere, by having the matrix with rows [1 0], [2 1], send x+iy to x + (2x+y)i, and the matrix with rows [1 2], [0 1], send x+iy to (x+2y) + iy, both of which of course send infinity to itself. I.e. I was letting x+iy be the point of real 2 space with coordinates (x,y) and then letting the (real) matrix act on this vector. So, obviously this didn’t work, since the given fractional transformations do not both fix infinity.

Now martinbn gave the correct action, which is the natural linear action of a complex 2x2 matrix on the projective complex line. I.e. consider non zero vectors [u,v] and [z,w] in complex 2 space as equivalent if they are proportional, i.e. [u,v], with u,v, both complex, is equivalent to any vector of form [tu,tv] with t≠0. Then a vector [u,v] with v≠0 is equivalent to a unique vector of form [z,1], namely [u,v] ≈ [u/v,1], so take z = u/v. All vectors [u,v] with v=0 are equivalent to each other, in particular [u,0] ≈ [1,0].

The resulting identification space, the complex projective line, is in natural bijective correspondence with the riemann sphere by letting [u,v] correspond to [u/v,1] <—-> z = u/v, if v ≠ 0, and letting [1, 0] <—-> infinity. In the other direction, a finite complex number z corresponds to [z,1], and infinity corresponds to [1,0] in the projective line.

Now a complex matrix with rows [a b], [c d], sends the vector [u, v] to [au+bv, cu+dv], and it sends the equivalent vector [tu, tv] to an equivalent vector, namely [t(au+bv), t(cu+dv)]. Hence a complex 2x2 matrix acts on the projective line, and hence also on the riemann sphere.

If z is a finite point of the riemann sphere, corresponding to the vector [z,1], then [a b], [c d], sends it the vector [az+b, cz+d], which corresponds to the point (az+b)/(cz+d) of the riemann sphere. I.e. this is a finite point iff cz+d is not zero. If we want to act on the point at infinity, we take the corresponding vector [1,0], and it then goes to the vector [a, c], which corresponds to the finite point a/c if c≠0, and to infinity if c=0. Oh yes, since we want the action on the riemann sphere to be a bijection, we assume that ad-bc ≠ 0.

So this shows how a complex matrix acts on the riemann sphere in a way that exactly corresponds to the action of a fractional transformation. Moreover, matrix multiplication corresponds to composition of mappings of complex 2 space, hence also of the projective line, and of the riemann sphere. Note however that the matrix [a b], [c d], acts in the same way as the matrix [ta tb], [tc td], for t≠0. So the matrix group is not isomorphic to the group of fractional transformations, since any diagonal matrix with equal diagonal entries corresponds to the identity fractional transformation, i.e. two matrices define the same transformation iff their entries are proportional.

We can however restrict to the subgroup of matrices with determinant one, and then the map from matrices to transformations is still a surjective homomorphism, and has kernel of order two, consisting of the identity matrix and minus the identity matrix. Now restrict this map to the subgroup generated by the given matrices in problem 4b. It follows that this is a surjective homomorphism from the subgroup defined in 4b, to the subgroup of fractional transformations defined in 4a. We must still show this is injective, and hence an isomorphism, since if minus the identity matrix were in the subgroup generated by the matrices in 4b, the map would be 2 to 1 instead of an isomorphism. But since 4a has been solved, that subgroup is free, and hence there is a map back, which is inverse to this map on generators, hence everywhere.

This may all be standard, but for me it is complex enough to deserve some details. At any rate, this is just an elaboration of martinbn's cogent observation.
 
  • #43
mathwonk said:
martinbn, I understood you meant they were isomorphic, but I think you still have to prove it. One way to prove it is to give a bijection between the riemann sphere and the projective line under which the two actions correspond. Perhaps you were taking all that for granted. (As a naive person like me might attempt, of course one cannot so easily define a homomorphism just from the matrix group in 4b to the group in 4a, by saying what it does to the generators, unless we know already the matrix group they generate is free.)
We don't need to know that it is free. My point was that multiplication and inverse of the matrices corresponds to composition and inverse of the linear fractional transformations. For example the inverse of ##f(x)=\frac{ax+b}{cx+d}## is ##f^{-1}(x)=\frac{a'x+b'}{c'x+d'}##, where ##\begin{bmatrix}a'&b'\\c'&d' \end{bmatrix}## is the inverse of ##\begin{bmatrix}a&b\\c&d \end{bmatrix}##

So, if we have a relation in group 4.b, say ##A^{n_1}B^{m_1}\dots A^{n_k}B^{m_k}=I ##, then the element ##A^{n_1}B^{m_1}\dots A^{n_k}B^{m_k} ## is mapped to ##\alpha^{n_1}\beta^{m_1}\dots \alpha^{n_k}\beta^{m_k}##, which is going to be a linear fractional transformation whose matrix is ##A^{n_1}B^{m_1}\dots A^{n_k}B^{m_k} ##, so it will be the identity. So the map is well defined.
 
