Math Challenge - May 2019

SUMMARY

The forum discussion centers on various mathematical challenges from the May 2019 Math Challenge, including Lie algebra representations, group theory, and properties of linear operators. Key problems include calculating cohomology groups for the Lie algebra ##\mathfrak{su}(2,\mathbb{C})##, analyzing the dihedral group ##D_{12}##, and exploring the properties of self-adjoint operators in Hilbert spaces. Solutions were provided for several problems, particularly by users @Periwinkle and @Couchyam, demonstrating a strong grasp of advanced mathematical concepts.

PREREQUISITES
  • Understanding of Lie algebras, specifically ##\mathfrak{su}(2,\mathbb{C})##
  • Familiarity with cohomology theory and the Chevalley-Eilenberg complex
  • Knowledge of group theory, particularly finite reflection groups
  • Proficiency in linear algebra and properties of self-adjoint operators
NEXT STEPS
  • Study the cohomology of Lie algebras using the Chevalley-Eilenberg complex
  • Explore the properties of finite reflection groups and their applications
  • Learn about the spectral theorem for self-adjoint operators in Hilbert spaces
  • Investigate the implications of the Zariski topology in algebraic geometry
USEFUL FOR

Mathematicians, graduate students in mathematics, and anyone interested in advanced topics in algebra, topology, and functional analysis.

  • #31
fresh_42 said:
You're on the right track and it has to do with ##x##. But the proof is technically correct. It is the same limit, since ##\xi## is chosen accordingly. It simply uses an assumption which is not explicitly mentioned. Which one?
I've been thinking a lot about this question. If we want to be that precise, then the hypotheses of the theorem could be supplemented with the condition that ##a \lt b##.
 
  • #32
Periwinkle said:
I've been thinking a lot about this question. If we want to be that precise, then the hypotheses of the theorem could be supplemented with the condition that ##a \lt b##.
No, it's more subtle than this and uses a 'tool' you wouldn't have expected in calculus.
 
  • #33
fresh_42 said:
No, it's more subtle than this and uses a 'tool' you wouldn't have expected in calculus.

Then it is just the Axiom of Choice. In that case, I'll explain it.
 
  • #34
Periwinkle said:
Then it is just the Axiom of Choice. In that case, I'll explain it.
Yes, ##\xi## is selected from an interval which depends on ##x##, and since this is done for all ##x## we have silently used a selection function ##x \longmapsto \xi(x)##.

Do you also have an idea how it can be avoided?
 
  • #35
fresh_42 said:
Yes, ##\xi## is selected from an interval which depends on ##x##, and since this is done for all ##x## we have silently used a selection function ##x \longmapsto \xi(x)##.

Do you also have an idea how it can be avoided?
The question described above is discussed on pages 74-77 of this book. The reverse is questionable.
 
  • #36
Periwinkle said:
The question described above is discussed on pages 74-77 of this book. The reverse is questionable.
O.k., but we don't have to fall back on the rationals here. An epsilon-delta argument and continuity will do.
 
  • #37
fresh_42 said:
O.k., but we don't have to fall back on the rationals here. An epsilon-delta argument and continuity will do.

##\xi(x) = (\min\{x,x_0\}+\max\{x,x_0\})/2##

That's fair. There is no need for exact equality.

 
  • #38
Periwinkle said:
##\xi(x) = (\min\{x,x_0\}+\max\{x,x_0\})/2##

I can't see why this avoids AC. Formally we did this:
$$
\Lambda(x):=\left\{ \xi \in (\min\{x,x_0\},\max\{x,x_0\}) \, : \, \dfrac{f(x)-f(x_0)}{x-x_0}=f\,'(\xi)\right\}
$$
The mean value theorem guarantees us that all ##\Lambda(x)\neq \emptyset##, but we need more: namely a function $$\xi \, : \, [a,b]-\{x_0\}\longrightarrow \bigcup_{x\in [a,b]-\{x_0\}} \Lambda(x)$$
Narrowing the interval doesn't change the argument.
 
  • #39
fresh_42 said:
I can't see why this avoids AC. Formally we did this:
$$
\Lambda(x):=\left\{ \xi \in (\min\{x,x_0\},\max\{x,x_0\}) \, : \, \dfrac{f(x)-f(x_0)}{x-x_0}=f\,'(\xi)\right\}
$$
The mean value theorem guarantees us that all ##\Lambda(x)\neq \emptyset##, but we need more: namely a function $$\xi \, : \, [a,b]-\{x_0\}\longrightarrow \bigcup_{x\in [a,b]-\{x_0\}} \Lambda(x)$$
Narrowing the interval doesn't change the argument.

Suppose the limit ##c:=\lim_{x \to x_0}f\,'(x)\,## exists.

Then for every ##\epsilon > 0## there is a ##\delta > 0## such that ##|f\,'(x)-c| \lt \epsilon## whenever ##0 \lt |x-x_0| \lt \delta##.

By the mean value theorem, ##\dfrac{f(x)-f(x_0)}{x-x_0}\,## equals ##f\,'(\xi)## for some ##\xi## with ##\left| \xi-x_0 \right| \lt \delta##.

Therefore, without actually choosing a particular ##\xi##, we know that ##\left| \dfrac{f(x)-f(x_0)}{x-x_0} -c \right| \lt \epsilon## whenever ##0 \lt |x-x_0| \lt \delta##.
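
A quick numerical illustration of the limit statement above (a minimal sketch; the concrete choice ##f(x)=x^3## and ##x_0=1## is mine, purely for illustration):

```python
# For f(x) = x^3 and x0 = 1 the difference quotient (f(x)-f(x0))/(x-x0) and
# f'(x) approach the same limit c = 3 as x -> x0, exactly as the
# epsilon-delta argument above predicts.
def f(x):
    return x**3

def fprime(x):
    return 3 * x**2

x0 = 1.0
for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    x = x0 + h
    dq = (f(x) - f(x0)) / (x - x0)   # difference quotient
    print(f"h={h:.0e}  difference quotient={dq:.6f}  f'(x)={fprime(x):.6f}")
# Both columns tend to 3 = lim_{x -> x0} f'(x).
```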
 
  • #40
We have still chosen a ##\xi##, but only one element from a single non-empty set ##\Lambda(x)##, and then we use ##|\xi -x_0|<|x-x_0|< \delta\,.##
 
  • #41
Let ##n:=i_\ell i_{\ell-1}\dots i_0## be the decimal representation of ##n\in\mathbb N## (that is, ##n=\sum_{j=0}^\ell i_j 10^j##.) Since ##n\geq 10^\ell##, ##\frac{1}{n}\leq 10^{-\ell}##. Now, for a fixed ##\ell##, there are no more than ##9^{\ell+1}## numbers between ##10^\ell## and ##10^{\ell+1}## whose decimal expansions do not contain the digit ##9## (this is because each number in this range has a unique decimal expansion with at most ##\ell+1## digits, and the set of length ##\ell+1## strings composed of the digits ##\{0,...,8\}## has size ##9^{\ell+1}##). Hence,
\begin{align*}
\sum_{n=1}^\infty \frac{\epsilon_n}{n}=\sum_{\ell=0}^\infty \bigg(\sum_{10^\ell\leq n<10^{\ell+1}}\frac{\epsilon_n}{n}\bigg)<\sum_{\ell=0}^\infty \frac{9^{\ell+1}}{10^\ell}=90<\infty
\end{align*}
(Heuristically, the convergence of the series is related to the Cantor-set-like support of the coefficients ##\frac{\epsilon_n}{n}##, in the sense that as ##n## grows, the gaps between regions where ##\epsilon_n## is nonzero expand proportionally.)
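
For anyone who wants to see the bound numerically, here is a minimal sketch (my own illustration, assuming, as the counting argument suggests, that ##\epsilon_n=1## exactly when the decimal expansion of ##n## contains no digit ##9## and ##\epsilon_n=0## otherwise):

```python
# Partial sums of sum eps_n / n, where eps_n = 1 iff the decimal expansion of n
# contains no digit 9.  Each block of l-digit numbers is compared with the
# bound 9^(l+1) / 10^l used in the argument above.
total = 0.0
for ell in range(6):                      # numbers with 1 to 6 digits
    block = sum(1.0 / n
                for n in range(10**ell, 10**(ell + 1))
                if '9' not in str(n))
    total += block
    bound = 9**(ell + 1) / 10**ell
    print(f"l={ell}: block={block:.4f} <= bound={bound:.4f}, running total={total:.4f}")
# The running total stays far below 90; the full series (a Kempner-type series)
# is known to converge to roughly 22.92.
```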
 
  • #42
Attempt at Problem 4

Part a) follows directly from the fundamental property of self-adjoint operators:
$$ (\hat T{\mathbf a})\cdot{\mathbf b} = {\mathbf a}\cdot(\hat T{\mathbf b})$$
where ##\hat T## is a self-adjoint linear operator and ##{\mathbf a},{\mathbf b}## are two vectors in a given vector space.
In the following proof ##{\mathbf a},{\mathbf b}## will be taken as eigenvectors of ##\hat T## with eigenvalues:
$$\hat T {\mathbf a} = \alpha{\mathbf a}\\
\hat T {\mathbf b} = \beta{\mathbf b}$$
First show that eigenvalues of a self-adjoint operator are real:
$$
(\hat T{\mathbf a})\cdot{\mathbf a} = {\mathbf a}\cdot(\hat T{\mathbf a})\\
(\alpha{\mathbf a})\cdot{\mathbf a} = {\mathbf a}\cdot(\alpha{\mathbf a})\\
\alpha^{*}({\mathbf a}\cdot{\mathbf a}) = \alpha({\mathbf a}\cdot{\mathbf a})\\
\alpha^{*} = \alpha
$$
Thus ##\alpha## is real.
Now show that eigenvectors of ##\hat T## with distinct eigenvalues are orthogonal
$$
(\hat T{\mathbf a})\cdot{\mathbf b} = {\mathbf a}\cdot(\hat T{\mathbf b})\\
(\alpha{\mathbf a})\cdot{\mathbf b} = {\mathbf a}\cdot(\beta{\mathbf b})\\
\alpha^{*}({\mathbf a}\cdot{\mathbf b}) = \beta({\mathbf a}\cdot{\mathbf b})\\
\alpha({\mathbf a}\cdot{\mathbf b}) = \beta({\mathbf a}\cdot{\mathbf b})
$$
where the last line follows from the fact that the eigenvalues must be real.
Since the eigenvectors ##{\mathbf a},{\mathbf b}## have distinct eigenvalues this means ##\alpha\neq\beta## and so necessarily ##{\mathbf a}\cdot{\mathbf b} = 0##.
Thus, by definition ##{\mathbf a},{\mathbf b}## are orthogonal.

Part b)
If I understand correctly, for a given function ##f(t)## in the Hilbert space ##\mathcal {H} =L_{2}([0,1])## the linear operator ##T_g## simply multiplies ##f(t)## by the function ##g(t)##:
$$T_{g}(f)(t):=g(t)f(t)$$
If so, then the eigenvalue problem requires
$$g(t)f(t) = \lambda_{g}f(t)$$
The trivial possibility is that the function ##g(t)## is a constant, in which case ##\lambda_{g} = m = M##.
If ##g(t)## is not a constant, then I am not so sure how to proceed; the only path forward I see is to take delta functions as our eigenfunctions, in which case the eigenvalue spectrum is continuous over the interval ##[m,M]##.
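
A small finite-dimensional sanity check of Part a) (my own sketch, not part of the problem statement): a random Hermitian matrix has numerically real eigenvalues, and eigenvectors belonging to distinct eigenvalues come out orthogonal.

```python
import numpy as np

# Illustration only, not a proof: build a random Hermitian (self-adjoint) matrix
# and check that its eigenvalues are real and its eigenvectors are pairwise
# orthogonal, using the generic (non-Hermitian) eigensolver on purpose.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = A + A.conj().T                       # H equals its conjugate transpose

vals, vecs = np.linalg.eig(H)            # generic solver, no Hermitian shortcut
print("largest |imaginary part| of eigenvalues:", np.abs(vals.imag).max())  # ~ 1e-15

gram = vecs.conj().T @ vecs              # matrix of inner products <v_i, v_j>
off_diag = gram - np.diag(np.diag(gram))
print("largest |<v_i, v_j>| for i != j:", np.abs(off_diag).max())           # ~ 1e-15
```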
 
  • #43
SpinFlop said:
Attempt at Problem 4

Part a) follows directly from the fundamental property of self-adjoint operators:
$$ (\hat T{\mathbf a})\cdot{\mathbf b} = {\mathbf a}\cdot(\hat T{\mathbf b})$$
where ##\hat T## is a self-adjoint linear operator and ##{\mathbf a},{\mathbf b}## are two vectors in a given vector space.
In the following proof ##{\mathbf a},{\mathbf b}## will be taken as eigenvectors of ##\hat T## with eigenvalues:
$$\hat T {\mathbf a} = \alpha{\mathbf a}\\
\hat T {\mathbf b} = \beta{\mathbf b}$$
First show that eigenvalues of a self-adjoint operator are real:
$$
(\hat T{\mathbf a})\cdot{\mathbf a} = {\mathbf a}\cdot(\hat T{\mathbf a})\\
(\alpha{\mathbf a})\cdot{\mathbf a} = {\mathbf a}\cdot(\alpha{\mathbf a})\\
\alpha^{*}({\mathbf a}\cdot{\mathbf a}) = \alpha({\mathbf a}\cdot{\mathbf a})\\
\alpha^{*} = \alpha
$$
Thus ##\alpha## is real.
Now show that eigenvectors of ##\hat T## with distinct eigenvalues are orthogonal
$$
(\hat T{\mathbf a})\cdot{\mathbf b} = {\mathbf a}\cdot(\hat T{\mathbf b})\\
(\alpha{\mathbf a})\cdot{\mathbf b} = {\mathbf a}\cdot(\beta{\mathbf b})\\
\alpha^{*}({\mathbf a}\cdot{\mathbf b}) = \beta({\mathbf a}\cdot{\mathbf b})\\
\alpha({\mathbf a}\cdot{\mathbf b}) = \beta({\mathbf a}\cdot{\mathbf b})
$$
where the last line follows from the fact that the eigenvalues must be real.
Since the eigenvectors ##{\mathbf a},{\mathbf b}## have distinct eigenvalues this means ##\alpha\neq\beta## and so necessarily ##{\mathbf a}\cdot{\mathbf b} = 0##.
Thus, by definition ##{\mathbf a},{\mathbf b}## are orthogonal.

Part b)
If I understand correctly, for a given function ##f(t)## in the Hilbert space ##\mathcal {H} =L_{2}([0,1])## the linear operator ##T_g## simply multiplies ##f(t)## by the function ##g(t)##:
$$T_{g}(f)(t):=g(t)f(t)$$
If so, then the eigenvalue problem requires
$$g(t)f(t) = \lambda_{g}f(t)$$
The trivial possibility is that the function ##g(t)## is a constant, in which case ##\lambda_{g} = m = M##.
If ##g(t)## is not a constant, then I am not so sure how to proceed; the only path forward I see is to take delta functions as our eigenfunctions, in which case the eigenvalue spectrum is continuous over the interval ##[m,M]##.
What you wrote is correct so far. But the question was to determine the spectrum, i.e. the complement of the resolvent set, not the point spectrum of eigenvalues.
 
  • #44
$$ (a(x),b(x)) =\int_{-\pi /2}^{\pi/2} (11\sin(x) + 8\cos(x))\cdot (4\sin(x) + 13\cos(x)) \, dx = \int_{-\pi /2}^{\pi/2} (44\sin^2(x) + 175 \cos(x) \sin(x) + 104 \cos^2(x)) \, dx = 44 \pi /2 +104 \pi/2 = 148 \pi /2. $$

$$ \left|a \right| ^2 = (a(x),a(x)) = \int_{-\pi /2}^{\pi/2} (121\sin^2(x) + 176 \cos(x) \sin(x) + 64 \cos^2(x)) \, dx = 121 \pi /2 +64 \pi/2 = 185 \pi /2.$$

$$ \left|b \right| ^2 = (b(x),b(x)) = \int_{-\pi /2}^{\pi/2} (16\sin^2(x) + 104 \cos(x) \sin(x) + 169 \cos^2(x)) \, dx = 16 \pi /2 + 169 \pi/2 = 185 \pi /2.$$

$$ \cos(\phi) = \frac {(a(x),b(x))} {\left|a \right| \left|b \right| } = \frac {148 \pi /2} { 185 \pi /2 }= 0.8. $$

$$ \phi = \arccos(0.8) \approx 0.6435. $$
 
  • #45
Periwinkle said:
$$ (a(x),b(x)) =\int_{-\pi /2}^{\pi/2} (11\sin(x) + 8\cos(x))\cdot (4\sin(x) + 13\cos(x)) \, dx = \int_{-\pi /2}^{\pi/2} (44\sin^2(x) + 175 \cos(x) \sin(x) + 104 \cos^2(x)) \, dx = 44 \pi /2 +104 \pi/2 = 148 \pi /2. $$

$$ \left|a \right| ^2 = (a(x),a(x)) = \int_{-\pi /2}^{\pi/2} (121\sin^2(x) + 176 \cos(x) \sin(x) + 64 \cos^2(x)) \, dx = 121 \pi /2 +64 \pi/2 = 185 \pi /2.$$

$$ \left|b \right| ^2 = (b(x),b(x)) = \int_{-\pi /2}^{\pi/2} (16\sin^2(x) + 104 \cos(x) \sin(x) + 169 \cos^2(x)) \, dx = 16 \pi /2 + 169 \pi/2 = 185 \pi /2.$$

$$ \cos(\phi) = \frac {(a(x),b(x))} {\left|a \right| \left|b \right| } = \frac {148 \pi /2} { 185 \pi /2 }= 0.8. $$

$$ \phi = \arccos(0.8) \approx 0.6435. $$
That will take me a while, since one of us has made a mistake and I have to figure out who and where.

Correction: We were both right. I forgot a square root in the denominator.

How do you find my solution?

We define ##f(x)=\sin(x)-6\cos(x)\; , \;g(x)=6\sin(x)+\cos(x)## and observe that ##\{\,f,g\,\}## is an orthogonal basis for a two-dimensional subspace of ##L^2\left( \left[ -\frac{\pi}{2},+\frac{\pi}{2} \right] \right)## with ##\gamma :=|f|=|g|=\sqrt{\dfrac{37 \pi}{2}}##. As we are only interested in an angle, we won't have to bother with the lengths of our coordinate vectors, i.e. we do not need to normalize them. Now we have ##a=-f+2g\, , \,b=-2f+g## and
\begin{align*}
\cos \varphi &= \cos (\sphericalangle (a,b))\\
&= \cos(\sphericalangle (-f+2g,-2f+g))\\
&= \dfrac{\langle -f+2g,-2f+g \rangle}{|-f+2g|\cdot |-2f+g|}\\
&= 2 \; \dfrac{\langle f,f\rangle + \langle g,g \rangle}{\sqrt{\left( |f|^2+4|g|^2\right)} \cdot \sqrt{\left( 4|f|^2+|g|^2 \right)}}\\
&= 2\; \dfrac{\gamma^2+\gamma^2}{\sqrt{5\gamma^2 \cdot 5\gamma^2}}\\
&= \dfrac{4}{5}
\end{align*}
and ##\varphi \approx 36.87° \approx 0.2 \pi##
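
A quick numerical cross-check of both computations (my own sketch, not part of either solution):

```python
import numpy as np

# The angle between a(x) = 11 sin x + 8 cos x and b(x) = 4 sin x + 13 cos x in
# L^2([-pi/2, pi/2]) should satisfy cos(phi) = 4/5, i.e. phi ~ 0.6435 rad ~ 36.87 deg.
x = np.linspace(-np.pi / 2, np.pi / 2, 1_000_001)
dx = x[1] - x[0]
a = 11 * np.sin(x) + 8 * np.cos(x)
b = 4 * np.sin(x) + 13 * np.cos(x)

inner = lambda u, v: np.sum(u * v) * dx          # simple Riemann approximation
cos_phi = inner(a, b) / np.sqrt(inner(a, a) * inner(b, b))
print(cos_phi, np.arccos(cos_phi), np.degrees(np.arccos(cos_phi)))
# ~ 0.8, ~ 0.6435, ~ 36.87
```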
 
  • #46
fresh_42 said:
That will take me a while, since one of us has made a mistake and I have to figure out who and where.

Correction: We were both right. I forgot a square root in the denominator.

How do you find my solution?

We define ##f(x)=\sin(x)-6\cos(x)\; , \;g(x)=6\sin(x)+\cos(x)## and observe that ##\{\,f,g\,\}## is an orthogonal basis for a two-dimensional subspace of ##L^2\left( \left[ -\frac{\pi}{2},+\frac{\pi}{2} \right] \right)## with ##\gamma :=|f|=|g|=\sqrt{\dfrac{37 \pi}{2}}##. As we are only interested in an angle, we won't have to bother with the lengths of our coordinate vectors, i.e. we do not need to normalize them. Now we have ##a=-f+2g\, , \,b=-2f+g## and
\begin{align*}
\cos \varphi &= \cos (\sphericalangle (a,b))\\
&= \cos(\sphericalangle (-f+2g,-2f+g))\\
&= \dfrac{\langle -f+2g,-2f+g \rangle}{|-f+2g|\cdot |-2f+g|}\\
&= 2 \; \dfrac{\langle f,f\rangle + \langle g,g \rangle}{\sqrt{\left( |f|^2+4|g|^2\right)} \cdot \sqrt{\left( 4|f|^2+|g|^2 \right)}}\\
&= 2\; \dfrac{\gamma^2+\gamma^2}{\sqrt{5\gamma^2 \cdot 5\gamma^2}}\\
&= \dfrac{4}{5}
\end{align*}
and ##\varphi \approx 36.87° \approx 0.2 \pi##

I only had the inner product of three-dimensional Euclidean space before my eyes, which has the same meaning in a Hilbert space.
 
  • #47
Question 4
The eigenvalues of a self-adjoint linear operator are real
$$ \lambda (x,x) = (\lambda x, x) = (Ax,x) = (x, Ax) = (x,\lambda x) = \bar \lambda (x,x) $$ However, in an inner product space ##(x,x) = 0## implies ##x = 0##; since an eigenvector ##x## is nonzero, ##(x,x) \neq 0## and so ##\lambda = \bar \lambda##.

The eigenvectors belonging to different eigenvalues are orthogonal
$$ (Ax, y) = (\lambda _1 x, y) = \lambda_1 (x,y) $$ $$ (Ax, y) = (x, Ay) = (x, \lambda _2 y) = \bar \lambda_2 (x,y) = \lambda_2 (x,y) $$ Because ##\lambda_1## and ##\lambda_2## are different, ##(x,y)## must equal ##0##.

The spectrum of the ##T_g## operator is the interval ##[m,M]##

The regular values of the operator ##T_g## are those numbers ##\lambda## for which the operator ##(T_g-\lambda I)^{-1}## is defined on the whole space. The other values of ##\lambda## form the spectrum of the operator. The inverse operator is given by the formula $$ (T_g-\lambda I)^{-1} f(t) = \frac 1 {g(t) -\lambda} f(t). $$ It must be shown that for each number ##\lambda## in the interval ##[m,M]## there is an element ##s(t)## of ##L^2([0,1])## that is not mapped to an element of ##L^2([0,1])## by the above inverse operator. It follows from the continuity of the function ##g(t)## that if ##\lambda## is a point of the interval ##[m, M]##, then ##g(t_\lambda) = \lambda## at some point ##t_{\lambda}##. Also, provided that ##t_\lambda \neq 0##, there is a monotonically increasing sequence ##t_1, t_2, \dots, t_i, \dots## in ##[0,1]## such that if ##t_i \leq t \lt t_{\lambda}##, then $$\frac 1 {|g(t) - \lambda|} \gt i.$$ If ##t_\lambda = 0##, there is a similar monotonically decreasing sequence approaching ##t_\lambda## from the right. The appropriate element ##s(t)## of ##L^2([0,1])## is constructed as follows.

The value of the function ##s(t)## on the interval ##[t_i, t_{i+1})## is $$ \frac 1 {i\sqrt {t_{i+1} - t_i}}. $$ At points outside all of the intervals ##[t_i, t_{i+1})##, ##s(t)## is ##0##. The integral of the function ##s^2(t)## equals $$ \sum_{i=1}^\infty {(t_{i+1} - t_i)} {\left( \frac 1 {i \sqrt {t_{i+1} - t_i}} \right)^2} = \sum_{i=1}^\infty \frac 1 {i^2}.$$ However, on each interval ##[t_i, t_{i+1})## the function $$S(t) = \frac 1 {g(t) -\lambda} s(t) $$ is greater than ##i\, s(t)##, therefore the integral of ##S^2(t)## is divergent, and thus the transformed function cannot belong to ##L^2([0,1])##.

For ##\lambda## outside the interval ##[m,M]## the values are regular. In this case the values of $$\left|\frac 1 {g(t) -\lambda}\right|$$ have an upper bound ##K##. However, if ##s^2(t)## is integrable, then so is ##K^2s^2(t)##, and hence the smaller function $$\frac 1 {(g(t) -\lambda)^2} s^2(t)$$ is also integrable. So in this case the operator ##(T_g-\lambda I)^{-1}## is defined on the whole space.
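
A concrete numerical instance of this construction (my own choice of data, purely illustrative): take ##g(t)=t## on ##[0,1]##, ##\lambda=\tfrac12## (so ##t_\lambda=\tfrac12##) and ##t_i=\tfrac12-\tfrac1{i+1}##. Then ##\int s^2## converges, while the integral of ##S^2## gains at least ##1## on every interval and diverges.

```python
import math

# g(t) = t on [0,1], lambda = 1/2, t_i = 1/2 - 1/(i+1), so 1/|g(t) - lambda| > i
# on [t_i, t_lambda).  On [t_i, t_{i+1}) we set s(t) = 1/(i * sqrt(t_{i+1} - t_i)).
t = lambda i: 0.5 - 1.0 / (i + 1)

int_s2 = 0.0   # integral of s^2 over the first N intervals   (stays bounded)
int_S2 = 0.0   # integral of S^2 = s^2 / (g - lambda)^2       (blows up)
N = 2000
for i in range(1, N + 1):
    dt = t(i + 1) - t(i)                                  # = 1/((i+1)(i+2))
    int_s2 += dt * (1.0 / (i * math.sqrt(dt))) ** 2       # contributes 1/i^2
    # exact, since an antiderivative of 1/(1/2 - u)^2 is 1/(1/2 - u):
    int_S2 += (1.0 / (i**2 * dt)) * (1.0 / (0.5 - t(i + 1)) - 1.0 / (0.5 - t(i)))

print("integral of s^2 over the first", N, "intervals:", round(int_s2, 4))  # ~ pi^2/6
print("integral of S^2 over the first", N, "intervals:", round(int_S2, 1))  # ~ N and growing
```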
 
  • #48
Periwinkle said:
Question 4
The eigenvalues of a self-adjoint linear operator are real
$$ \lambda (x,x) = (\lambda x, x) = (Ax,x) = (x, Ax) = (x,\lambda x) = \bar \lambda (x,x) $$ However, in an inner product space ##(x,x) = 0## implies ##x = 0##; since an eigenvector ##x## is nonzero, ##(x,x) \neq 0## and so ##\lambda = \bar \lambda##.

The eigenvectors belonging to different eigenvalues are orthogonal
$$ (Ax, y) = (\lambda _1 x, y) = \lambda_1 (x,y) $$ $$ (Ax, y) = (x, Ay) = (x, \lambda _2 y) = \bar \lambda_2 (x,y) = \lambda_2 (x,y) $$ Because ##\lambda_1## and ##\lambda_2## are different, ##(x,y)## must equal ##0##.

The spectrum of the ##T_g## operator is the interval ##[m,M]##
...
Correct. It could be put a bit more briefly if we don't specify a potential inverse:

From the bounds on ##g## we get that ##m## and ##M## are a lower, resp. an upper bound of ##T_g\,.## Hence ##\sigma(T_g) \subseteq [m,M]##. By the intermediate value theorem for continuous functions, ##g## takes every value in ##[m,M]## at least once, i.e. for every ##\mu \in [m,M]## there is a real number ##t_\mu \in [0,1]## such that ##g(t_\mu)=\mu\,.## Thus $$T_g(f)(t_\mu)=g(t_\mu)f(t_\mu)=\mu\cdot f(t_\mu)$$
and ##T_g-\mu## isn't boundedly invertible, hence ##\mu \in \sigma(T_g)## and ##\sigma(T_g)=[m,M]\,.##
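
A small numerical illustration of this argument (a sketch with my own choice of ##g##, not part of the solution): for ##\mu\in[m,M]##, indicator functions concentrated around ##t_\mu## make ##\|(T_g-\mu)f\|/\|f\|## as small as we like, so ##T_g-\mu## cannot have a bounded inverse.

```python
import numpy as np

# Illustration with g(t) = t^2 + 0.1 on [0,1] (so m = 0.1, M = 1.1) and
# mu = 0.5 in [m, M]; g(t_mu) = mu at t_mu = sqrt(0.4).
g = lambda t: t**2 + 0.1
mu, t_mu = 0.5, np.sqrt(0.4)

ts = np.linspace(0.0, 1.0, 200_001)
h = ts[1] - ts[0]
for width in [1e-1, 1e-2, 1e-3]:
    f = (np.abs(ts - t_mu) < width / 2).astype(float)      # bump of the given width
    num = np.sqrt(np.sum(((g(ts) - mu) * f) ** 2) * h)     # ||(T_g - mu) f||
    den = np.sqrt(np.sum(f ** 2) * h)                      # ||f||
    print(f"width={width:.0e}  ratio={num / den:.2e}")
# The ratio shrinks with the width, so T_g - mu is not boundedly invertible
# and mu lies in the spectrum sigma(T_g).
```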
 
  • #49
I noticed my own mistake. Correctly: the function $$ |S(t)| = \left| \frac 1 {g(t) -\lambda} s(t) \right|$$ is greater than ##i\,|s(t)|\,.##

Tomorrow I will consider the above solution.
 
