Math Challenge - September 2020

  • #26
fresh_42
I thought that there was a method to control each pair ##(x;y)## in the two fields using the same couple, in order to demonstrate the AM-GM inequality.
No. There is nothing special about scalar or vector fields. They are just another way to look at functions, either scalar-valued or vector-valued ones.

Now why are they considered at all? This comes from the fact that they occur in physics all the time: measurements at the locations of the domain (phase space), and the change of these measurements as you follow a path (flow) through these fields. The vector fields are typically tangent spaces at a certain location, collected in a set over all possible locations. E.g. the path of an orbiter through a gravitational field is such a flow; the same as driving a car, you always follow the tangent vectors until you apply a force to change direction. Vector fields are thus a convenient notation to handle all these different tangent spaces (one at each location) at the same time and to describe flows through such fields.
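To make the "follow the tangent vectors" picture concrete, here is a minimal Python sketch (an illustration with a made-up example field, not part of the challenge) that traces a flow line by repeatedly stepping along the tangent vector at the current location:

```python
import numpy as np

def field(p):
    """Toy vector field F(x, y) = (-y, x): its flow lines are circles."""
    x, y = p
    return np.array([-y, x])

# Follow the flow: start somewhere and keep stepping along the tangent vector.
p = np.array([1.0, 0.0])
dt = 0.001
path = [p]
for _ in range(int(2 * np.pi / dt)):
    p = p + dt * field(p)   # Euler step: move a little along the tangent at p
    path.append(p)

# After one full period the path has (approximately) traced the unit circle.
print(np.linalg.norm(path[-1] - path[0]))  # small, up to Euler-step error
```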
 
  • #27
##y = x +\ln{y}##
##y-\ln{y} = x##
##\frac{dx}{dy} = 1-\frac{1}{y}##
##\frac{dx}{dy} = \frac{y-1}{y}##
##\frac{dy}{dx} = \frac{y}{y-1}##
##\frac{dy}{dx} = 1+ \frac{1}{y-1}##

2nd part
##\frac{dx}{dy} = 1-\frac{1}{y}##
##\frac{d^2x}{dy^2} = \frac{1}{y^2}##
##\frac{d^2y}{dx^2} = y^2##
 
  • #28
fresh_42
##y = x +\ln{y}##
##y-\ln{y} = x##
##\frac{dx}{dy} = 1-\frac{1}{y}##
##\frac{dx}{dy} = \frac{y-1}{y}##
##\frac{dy}{dx} = \frac{y}{y-1}##
##\frac{dy}{dx} = 1+ \frac{1}{y-1}##

2nd part
##\frac{dx}{dy} = 1-\frac{1}{y}##
##\frac{d^2x}{dy^2} = \frac{1}{y^2}##
##\frac{d^2y}{dx^2} = y^2##
Not sure what you did there, but if I had to guess, I'd say it is wrong.
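For comparison, here is what the inverse-function rule ##\frac{d^2y}{dx^2} = -\,x''(y)/x'(y)^3## gives, checked with a quick sympy sketch (added for illustration):

```python
import sympy as sp

y = sp.symbols('y', positive=True)
x = y - sp.log(y)                 # from y = x + ln(y)

dxdy = sp.diff(x, y)              # 1 - 1/y
dydx = sp.simplify(1 / dxdy)      # y/(y - 1), as derived above

# Inverse-function rule for second derivatives: d2y/dx2 = -x''(y) / x'(y)**3
d2ydx2 = sp.simplify(-sp.diff(x, y, 2) / dxdy**3)
print(dydx, d2ydx2)               # y/(y - 1)  and  -y/(y - 1)**3, not y^2
```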
 
  • #29
Wait, what is the 2nd question asking?
 
  • #30
fresh_42
Wait, what is the 2nd question asking?
It asks for a proof of the equivalence and a proof of why ##3## is primitive.
 
  • #31
mathwonk
hint/spoiler for #1,#4, #10:

1. Sometimes it is easier to prove a certain subgroup is actually invariant under all automorphisms, not just inner ones. Try this on this problem at least in the case of a finite group, the advantage being it lets you use the order of the Frattini subgroup.

4. Mike Artin classifies all groups of order 12 around page 212, or maybe 206 of his algebra book, and I don't think this quite gives it all away; i.e. you still have to do some work.

10. Try showing all nilpotent elements do belong to the nilradical ##N(R)##, and then use the axiom of choice, maybe in the form of the existence of maximal ideals in all commutative rings, to deduce the converse. It is very helpful to learn about local rings and localization of rings "at" a prime ideal, for this and also the later part of this problem, i.e. finding a ring whose nilpotent elements are a lot smaller than its Jacobson radical.

Homework: read the definitions of an Artin ring and a local ring, and of the localization of a ring at a prime ideal.
 
  • #32
mathwonk
possible suggestions for #6, #9:

In #6 aren't you tempted to try stereographic projection to the sphere?

In #9, I don't know what a Lie algebra or a Cartan subgroup are either, but I do know that if I want a square root of a matrix I should try to diagonalize it. And anyone can find the inverse of a matrix by whatever they call it, Gauss elimination?
 
  • #33
julian
Problem #10 (a)
Let ##r## be any nilpotent element, and let ##P## be any prime ideal. We have ##r^n = 0## for some ##n \geq 1##, and hence ##r^n \in P##. If ##n = 1## then ##r \in P## immediately, so say ##n > 1##. Assume, for contradiction, that ##r \notin P##. Then as ##r \cdot r^{n-1} \in P## and ##P## is a prime ideal, we have ##r^{n-1} \in P##. This in turn implies ##r^{n-2} \in P## by the same reasoning. Continuing in this way we arrive at ##r^2 \in P##, which gives ##r \in P##, contradicting our assumption. Therefore ##r \in P##. Thus every nilpotent element is in the intersection of all prime ideals, ##N(R)##.


Next we need to show that ##N(R)## contains only nilpotent elements. We do this by proving that each non-nilpotent element does not lie in some prime ideal (and hence does not lie in the intersection of all prime ideals).

Let ##r## be any non-nilpotent element. Consider the set ##S## of all ideals ##I## in ##R## that don't contain any positive power of ##r##:

##
S = \{ I \subseteq R \text{ an ideal} : r^n \notin I \text{ for all } n \geq 1 \}
##

The zero ideal does not contain any positive power of ##r## because ##r## is not nilpotent, and so ##S## is non-empty. Partially order ##S## by inclusion. Let ##\{ I_\alpha \}_{\alpha \in A}## be a totally ordered set of ideals in ##S##. Consider their union ##I_\cup = \cup_{\alpha \in A} I_\alpha##. Is ##I_\cup## an ideal? If ##x## and ##y## are in ##I_\cup##, then ##x \in I_\alpha## and ##y \in I_\beta## for two of the ideals ##I_\alpha## and ##I_\beta##. Since this set of ideals is totally ordered, ##I_\alpha \subset I_\beta## or ##I_\beta \subset I_\alpha##; WLOG we take ##I_\alpha \subset I_\beta##. Then ##x, y \in I_\beta##, so ##x \pm y \in I_\beta \subset I_\cup## and ##xy \in I_\beta \subset I_\cup##. If ##x \in I_\cup## then ##x \in I_\alpha## for some ideal ##I_\alpha##, so ##sx \in I_\alpha \subset I_\cup## for any ##s \in R##. Therefore we have verified that ##I_\cup## is an ideal.

Is ##I_\cup \in S##? Since no ##I_\alpha## contains a positive power of ##r##, their union does not contain a positive power of ##r##. Therefore ##I_\cup \in S##. Because ##I_\cup## contains every ##I_\alpha##, it is an upper bound for the totally ordered subset ##\{ I_\alpha \}_{\alpha \in A}##. Thus every totally ordered subset of ##S## has an upper bound in ##S##.

By Zorn's lemma there is a maximal element of ##S##. This is an ideal ##P## that does not contain any positive power of ##r## and is maximal for this property (with respect to inclusion). We now show that ##P## is a prime ideal. (Reminder: ##(x)## denotes the intersection of all ideals that contain the element ##x##; this is itself an ideal, called the principal ideal of ##R## generated by ##x##.) Suppose ##x,y \in R## and ##xy \in P##. We need to prove ##x \in P## or ##y \in P##. To do so we assume otherwise. In this case the ideals ##(x) + P## and ##(y) + P## are both strictly larger than ##P##, and so they can't lie in ##S##. So ##r^m \in (x) + P## and ##r^n \in (y) + P## for some positive integers ##m## and ##n##. Write

##
r^m = a x + p_1 \in (x) + P , \qquad r^n = b y + p_2 \in (y) + P
##

where ##a,b \in R## and ##p_1, p_2 \in P##. Taking the product of these two expressions,

##
r^{m+n} = ab xy + axp_2 + byp_1 + p_1 p_2 .
##

Since ##xy \in P## and ##P## is an ideal, every term on the RHS lies in ##P##. But then ##r^{m+n} \in P##, which contradicts that ##P## contains no positive power of ##r##. Hence ##x \in P## or ##y \in P##, so ##P## is prime. We have thus shown that our arbitrary non-nilpotent element is not in some prime ideal.
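As a concrete sanity check of the statement (a toy example, not part of the argument above): in ##\mathbb{Z}/72\mathbb{Z}## the prime ideals are ##(2)## and ##(3)##, so the result predicts that the nilpotent elements are exactly the multiples of ##6##:

```python
n = 72  # Z/72Z, with 72 = 2^3 * 3^2

# Nilpotent elements: r with r^k = 0 (mod n) for some k >= 1 (k = 3 suffices here).
nilpotent = {r for r in range(n) if any(pow(r, k, n) == 0 for k in range(1, 4))}

# Intersection of the prime ideals (2) and (3): the multiples of 6.
intersection = {r for r in range(n) if r % 2 == 0 and r % 3 == 0}

print(nilpotent == intersection)  # True: N(Z/72Z) = (6)
```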
 
  • #34
benorin
7. a) ##\int_{|z|=5}\tfrac{e^z}{z^2+\pi ^2}\, dz=\tfrac{1}{2\pi i}\int_{|z|=5}e^z\left(\tfrac{1}{z-i\pi }-\tfrac{1}{z+i\pi }\right) dz=e^{i\pi}-e^{-i\pi}=0##
by Cauchy’s Integral Formula
 
  • #35
fresh_42
7. a) ##\int_{|z|=5}\tfrac{e^z}{z^2+\pi ^2}\, dz=\tfrac{1}{2\pi i}\int_{|z|=5}e^z\left(\tfrac{1}{z-i\pi }-\tfrac{1}{z+i\pi }\right) dz=e^{i\pi}-e^{-i\pi}=0##
by Cauchy’s Integral Formula
Can you elaborate on your argument, especially why you can apply Cauchy's integral formula? How do you deal with the poles at ##z=\pm i\pi##, which are inside ##U_5(0)##?
 
  • #36
benorin
Can you elaborate on your argument, especially why you can apply Cauchy's integral formula? How do you deal with the poles at ##z=\pm i\pi##, which are inside ##U_5(0)##?
Wouldn't it just be quicker to say "...by the Residue Theorem"? You always get me with a shorter proof; back at you!

Looking ahead:
So I've not been getting how #7 parts a) and b) are related; the only way I can think to solve b) is to use the CR-eqns, but I figure you're gonna say "Why did you do all that calculus when you could've just applied Theorem X [which relates part a) to b)]?" So should I thumb through theorems and exercises until I find Theorem X? Or do I have your blessing to go ahead and write out the calculus?
 
  • #37
fresh_42
Wouldn't it just be quicker to say "...by the Residue Theorem"? You always get me with a shorter proof; back at you!
Cauchy's formula was correct, but you have to manage holomorphy. The trick is to find appropriate integration paths, i.e. regions plus Stokes. I haven't checked whether the residue theorem works. I guess counting correctly is of equivalent difficulty.
Looking ahead:
So I've not been getting how #7 parts a) and b) are related; the only way I can think to solve b) is to use the CR-eqns, but I figure you're gonna say "Why did you do all that calculus when you could've just applied Theorem X [which relates part a) to b)]?" So should I thumb through theorems and exercises until I find Theorem X? Or do I have your blessing to go ahead and write out the calculus?
The connection is complex differentiability. CR is ok, but you can make life a lot easier if you do not directly consider ##f(z)##.
 
  • #38
julian
Problem #9
The matrix

##
A=
\begin{pmatrix}
5&-4&2\\
-4&7&-8\\
1&-4&6
\end{pmatrix}
##

has the obvious eigenvector:

##
\begin{pmatrix}
-2\\
0\\
1
\end{pmatrix}
##

with eigenvalue 4. We have:

##
\det
\begin{pmatrix}
5 - \lambda&-4&2\\
-4&7 - \lambda&-8\\
1&-4&6 - \lambda
\end{pmatrix}
##
##
= - \lambda^3 + 18 \lambda^2 -57 \lambda + 4 = 0
##

Write

##
(\lambda - 4) (-\lambda^2 + b \lambda -1) = - \lambda^3 + 18 \lambda^2 -57 \lambda + 4
##

This implies that ##b = 14##. Solving ##\lambda^2 - 14 \lambda + 1 = 0## gives the other two eigenvalues:

##
7 + 4 \sqrt{3} , \quad 7- 4 \sqrt{3} .
##

We wish to find the remaining eigenvectors.

##
\begin{pmatrix}
5&-4&2\\
-4&7&-8\\
1&-4&6
\end{pmatrix}
\begin{pmatrix}
x\\
y\\
z
\end{pmatrix}
=
(7 + 4 \sqrt{3})
\begin{pmatrix}
x\\
y\\
z
\end{pmatrix}
##

or

##
\begin{pmatrix}
-2 - 4 \sqrt{3}&-4&2\\
-4&-4 \sqrt{3}&-8\\
1&-4&-1 - 4 \sqrt{3}
\end{pmatrix}
\begin{pmatrix}
x\\
y\\
z
\end{pmatrix}
= 0
##

Guess that

##
\begin{pmatrix}
x\\
y\\
z
\end{pmatrix}
=
\begin{pmatrix}
1\\
y\\
1
\end{pmatrix}
##

gives

##
\begin{pmatrix}
- 4 \sqrt{3}-4y\\
-12 -4 \sqrt{3}y\\
- 4 \sqrt{3} - 4y
\end{pmatrix}
= 0
\qquad \text{implying } y = - \sqrt{3}
##

so the eigenvector is

##
\begin{pmatrix}
1\\
- \sqrt{3}\\
1
\end{pmatrix} .
##

Similarly the other eigenvector is

##
\begin{pmatrix}
1\\
\sqrt{3}\\
1
\end{pmatrix} .
##

Consider the matrix whose columns are the eigenvectors:

##
S=
\begin{pmatrix}
1&-2&1\\
-\sqrt{3}&0&\sqrt{3}\\
1&1&1
\end{pmatrix}
##

The eigenvectors are linearly independent (as the eigenvalues are distinct) but they are not orthogonal, so the inverse matrix ##S^{-1}## won't be proportional to the transpose of ##S##. However, with a slight modification of the transpose we find the inverse matrix is

##
S^{-1} =
{1 \over 6}
\begin{pmatrix}
1&-\sqrt{3}&2\\
-2&0&2\\
1&\sqrt{3}&2
\end{pmatrix}
##

The diagonalised matrix is

##
D = S^{-1} A S =
\begin{pmatrix}
7 + 4 \sqrt{3}&0&0\\
0&4&0\\
0&0&7 - 4 \sqrt{3}
\end{pmatrix}
##

Note

##
(S \sqrt{D} S^{-1})^2 = S \sqrt{D} S^{-1} S \sqrt{D} S^{-1} = S D S^{-1} = A
##

Therefore

##
\sqrt{A} = S \sqrt{D} S^{-1}
##

So that

##
\sqrt{A} = S \sqrt{D} S^{-1} =
{1 \over 6}
\begin{pmatrix}
1&-2&1\\
-\sqrt{3}&0&\sqrt{3}\\
1&1&1
\end{pmatrix}
\begin{pmatrix}
\pm \sqrt{7 + 4 \sqrt{3}}&0&0\\
0&\pm 2&0\\
0&0&\pm \sqrt{7 - 4 \sqrt{3}}
\end{pmatrix}
\begin{pmatrix}
1&-\sqrt{3}&2\\
-2&0&2\\
1&\sqrt{3}&2
\end{pmatrix}
##

Note there are ##2^3## distinct square roots.


We turn to the inverse matrix, ##A^{-1}##. Note

##
(S D^{-1} S^{-1}) A = (S D^{-1} S^{-1}) (S D S^{-1}) = S D^{-1} S^{-1} S D S^{-1} = S \mathbb{1} S^{-1} = \mathbb{1}
##

Therefore

##
A^{-1} = S D^{-1} S^{-1} =
{1 \over 6}
\begin{pmatrix}
1&-2&1\\
-\sqrt{3}&0&\sqrt{3}\\
1&1&1
\end{pmatrix}
\begin{pmatrix}
{1 \over 7 + 4 \sqrt{3}}&0&0\\
0&{1 \over 4}&0\\
0&0&{1 \over 7 - 4 \sqrt{3}}
\end{pmatrix}
\begin{pmatrix}
1&-\sqrt{3}&2\\
-2&0&2\\
1&\sqrt{3}&2
\end{pmatrix}
##
##
= {1 \over 6}
\begin{pmatrix}
1&-2&1\\
-\sqrt{3}&0&\sqrt{3}\\
1&1&1
\end{pmatrix}
\begin{pmatrix}
7 - 4 \sqrt{3}&0&0\\
0&{1 \over 4}&0\\
0&0&7 + 4 \sqrt{3}
\end{pmatrix}
\begin{pmatrix}
1&-\sqrt{3}&2\\
-2&0&2\\
1&\sqrt{3}&2
\end{pmatrix}
##
##
= {1 \over 4}
\begin{pmatrix}
10&16&18\\
16&28&32\\
9&16&19
\end{pmatrix}
##
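A quick numerical cross-check of all of this (a numpy sketch, added for verification; the all-plus branch of the square root is taken):

```python
import numpy as np

A = np.array([[5., -4., 2.],
              [-4., 7., -8.],
              [1., -4., 6.]])

s3 = np.sqrt(3.0)
S = np.array([[1., -2., 1.],      # columns: the eigenvectors found above
              [-s3, 0., s3],
              [1., 1., 1.]])
S_inv = np.array([[1., -s3, 2.],
                  [-2., 0., 2.],
                  [1., s3, 2.]]) / 6.0

D = S_inv @ A @ S                                  # diag(7+4*sqrt(3), 4, 7-4*sqrt(3))
sqrtA = S @ np.diag(np.sqrt(np.diag(D))) @ S_inv   # all-plus square root
A_inv = S @ np.diag(1.0 / np.diag(D)) @ S_inv

print(np.allclose(S_inv @ S, np.eye(3)))     # True
print(np.allclose(sqrtA @ sqrtA, A))         # True
print(np.allclose(A_inv @ A, np.eye(3)))     # True
```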
 
  • #39
benorin
I'll circle back to 7 a) later, I think I got this...

So, don't consider ##f(z)## directly? I posit that the exponential part of ##f(z)## is entire, so that leaves the square of the conjugate as the problem child: let ##h(z):=\bar{z}^2##, then for real ##u,v## define ##h(x+iy)=u(x,y)+iv(x,y)##, so that ##u(x,y)=x^2-y^2## and ##v(x,y)=-2xy##, and we may check the CR-eqns for complex differentiability: ##u_x=2x=-2x=v_y\implies x:=0## and ##u_y=-2y=-(-2y)=-v_x\implies y:=0##, hence ##f(z)## is complex differentiable only at the origin.
 
  • #40
fresh_42
Problem #9
The matrix

##
A=
\begin{pmatrix}
5&-4&2\\
-4&7&-8\\
1&-4&6
\end{pmatrix}
##

[...]

##
\sqrt{A} = S \sqrt{D} S^{-1} =
{1 \over 6}
\begin{pmatrix}
1&-2&1\\
-\sqrt{3}&0&\sqrt{3}\\
1&1&1
\end{pmatrix}
\begin{pmatrix}
\pm \sqrt{7 + 4 \sqrt{3}}&0&0\\
0&\pm 2&0\\
0&0&\pm \sqrt{7 - 4 \sqrt{3}}
\end{pmatrix}
\begin{pmatrix}
1&-\sqrt{3}&2\\
-2&0&2\\
1&\sqrt{3}&2
\end{pmatrix}
##

Note there are ##2^3## distinct square roots.
Can you give at least one square root explicitly? The hint that we look for a Cartan matrix of a simple Lie algebra narrows it down!
 
  • #41
fresh_42
I'll circle back to 7 a) later, I think I got this...

So, don't consider ##f(z)## directly? I posit that the exponential part of ##f(z)## is entire so that leaves the square of the conjugate as the problem child: let ##h(z):=\bar{z} ^2## then for real ##u,v## define ##h(x+iy)=u(x,y)+i v(x,y)## so that ##u(x,y)=x^2+y^2## and ##v(x,y)=-2xy## so we may check the CR-eqns for complex differentiability: ##u_x=2x=-2x=v_y\implies x:=0## and ##u_y=2y=-(-2y)=-v_x\implies y\in\mathbb{R}##, hence the ##f(z)## is complex differentiable along the imaginary axis.

Bon Jovi, uhm I mean, slippery when wet, uhm I mean: calculation error.
 
  • #42
julian
Can you give at least one square root explicitly? The hint that we look for a Cartan matrix of a simple Lie algebra narrows it down!
Just noticed that ##(2 + \sqrt{3})^2 = 7 + 4 \sqrt{3}## and ##(2 - \sqrt{3})^2 = 7 - 4 \sqrt{3}##. Just a second.
 
  • #43
julian
##
\sqrt{A} = S \sqrt{D} S^{-1} =
{1 \over 6}
\begin{pmatrix}
1&-2&1\\
-\sqrt{3}&0&\sqrt{3}\\
1&1&1
\end{pmatrix}
\begin{pmatrix}
2 + \sqrt{3}&0&0\\
0&2&0\\
0&0&2 - \sqrt{3}
\end{pmatrix}
\begin{pmatrix}
1&-\sqrt{3}&2\\
-2&0&2\\
1&\sqrt{3}&2
\end{pmatrix}
##
##
=
\begin{pmatrix}
2&-1&0\\
-1&2&-2\\
0&-1&2
\end{pmatrix}
##
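Squaring it back confirms the result (a one-line numpy check, added for verification):

```python
import numpy as np

M = np.array([[2., -1., 0.], [-1., 2., -2.], [0., -1., 2.]])
A = np.array([[5., -4., 2.], [-4., 7., -8.], [1., -4., 6.]])
print(np.allclose(M @ M, A))  # True
```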
 
  • #44
fresh_42
##
\sqrt{A} = S \sqrt{D} S^{-1} =
{1 \over 6}
\begin{pmatrix}
1&-2&1\\
-\sqrt{3}&0&\sqrt{3}\\
1&1&1
\end{pmatrix}
\begin{pmatrix}
2 + \sqrt{3}&0&0\\
0&2&0\\
0&0&2 - \sqrt{3}
\end{pmatrix}
\begin{pmatrix}
1&-\sqrt{3}&2\\
-2&0&2\\
1&\sqrt{3}&2
\end{pmatrix}
##
##
=
\begin{pmatrix}
2&-1&0\\
-1&2&-2\\
0&-1&2
\end{pmatrix}
##
Yes. This is the Cartan matrix of the simple Lie algebra of type ##B_3##, which is the ##21##-dimensional orthogonal Lie algebra ##\mathfrak{o}(7,\mathbb{R})=\mathfrak{so}(7,\mathbb{R})## (its dimension is ##\binom{7}{2}=21##).
 
  • #45
benorin
Bon Jovi, uhm I mean, slippery when wet, uhm I mean: calculation error.
Oh duh, here:

So, don't consider ##f(z)## directly? I posit that the exponential part of ##f(z)## is entire, so that leaves the square of the conjugate as the problem child: let ##h(z):=\bar{z}^2##, then for real ##u,v## define ##h(x+iy)=u(x,y)+iv(x,y)##, so that ##u(x,y)=x^2-y^2## and ##v(x,y)=-2xy##, and we may check the CR-eqns for complex differentiability: ##u_x=2x=-2x=v_y\implies x:=0## and ##u_y=-2y=-(-2y)=-v_x\implies y:=0##, hence ##f(z)## is complex differentiable only at the origin.
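These CR computations can also be checked symbolically (a sympy sketch, added for verification):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
h = sp.expand(sp.conjugate(x + sp.I*y)**2)   # (x - i y)^2
u, v = sp.re(h), sp.im(h)                    # u = x^2 - y^2, v = -2 x y

# Cauchy-Riemann equations: u_x = v_y and u_y = -v_x
cr = [sp.Eq(sp.diff(u, x), sp.diff(v, y)),
      sp.Eq(sp.diff(u, y), -sp.diff(v, x))]
print(sp.solve(cr, [x, y]))                  # {x: 0, y: 0}: only at the origin
```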
 
  • #46
benorin
7. a)

$$\begin{gathered}\int_{|z|=5}\tfrac{e^z}{z^2+\pi ^2}\, dz=\tfrac{1}{2\pi i}\int_{|z|=5}e^z\left(\tfrac{1}{z-i\pi }-\tfrac{1}{z+i\pi }\right) dz \\ = \lim_{z\to i \pi}\left[ (z-i\pi )\cdot \tfrac{e^z}{z-i\pi}\right]-\lim_{z\to -i \pi}\left[ (z+i\pi )\cdot \tfrac{e^z}{z+i\pi}\right] \\ =e^{i\pi}-e^{-i\pi}=0 \\ \end{gathered} $$
by the Residue Theorem
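The value can also be confirmed numerically by parametrizing the circle ##|z|=5## (a short numpy sketch, added for verification, not a substitute for the proof):

```python
import numpy as np

# Parametrize |z| = 5 as z(t) = 5 e^{it} and integrate f(z(t)) z'(t) dt.
n = 20000
t = np.linspace(0.0, 2*np.pi, n, endpoint=False)
z = 5*np.exp(1j*t)
dz = 5j*np.exp(1j*t)                     # z'(t)
f = np.exp(z) / (z**2 + np.pi**2)
integral = np.sum(f * dz) * (2*np.pi/n)  # Riemann sum over one full period
print(abs(integral))                     # ~1e-12, consistent with the value 0
```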
 
  • #47
fresh_42
7. a)

$$\begin{gathered}\int_{|z|=5}\tfrac{e^z}{z^2+\pi ^2}\, dz=\tfrac{1}{2\pi i}\int_{|z|=5}e^z\left(\tfrac{1}{z-i\pi }-\tfrac{1}{z+i\pi }\right) dz \\ = \lim_{z\to i \pi}\left[ (z-i\pi )\cdot \tfrac{e^z}{z-i\pi}\right]-\lim_{z\to -i \pi}\left[ (z+i\pi )\cdot \tfrac{e^z}{z+i\pi}\right] \\ =e^{i\pi}-e^{-i\pi}=0 \\ \end{gathered} $$
by the Residue Theorem
It would have been helpful if you had said how you calculated the residues. For all readers of this thread:
Residue theorem: https://en.wikipedia.org/wiki/Residue_theorem
Residues of simple roots: https://en.wikipedia.org/wiki/Residue_(complex_analysis)#Simple_poles
 
  • #48
benorin
@fresh_42 I'd like to ask if #2 can be solved with only basic modular arithmetic, because I don't remember much number theory from the one NT-ish class I took, whose text was Concrete Mathematics (I do recall liking the text, though). I did play with it and found some relation to binary sequences and multiplication by integers in base 2, but IDK what the heck I'm doing lol
 
  • #49
fresh_42
@fresh_42 I'd like to ask if #2 can be solved with only basic modular arithmetic, because I don't remember much number theory from the one NT-ish class I took, whose text was Concrete Mathematics (I do recall liking the text, though). I did play with it and found some relation to binary sequences and multiplication by integers in base 2, but IDK what the heck I'm doing lol
A standard result of number theory involving Legendre symbols helps a lot. Whether that counts as basic modular arithmetic or not depends on whom you ask. It is not sophisticated, though.
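For anyone who wants to experiment numerically, the Legendre symbol is easy to compute via Euler's criterion ##\left(\tfrac{a}{p}\right) \equiv a^{(p-1)/2} \pmod{p}## (a small Python sketch, illustration only, not a solution to #2):

```python
def legendre(a, p):
    """Legendre symbol (a/p) for an odd prime p, via Euler's criterion."""
    r = pow(a, (p - 1) // 2, p)
    return -1 if r == p - 1 else r   # 0 if p divides a, otherwise +1 or -1

# Example: 3 is a quadratic residue mod 11, since 5^2 = 25 = 3 (mod 11).
print(legendre(3, 11))   # 1
```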
 
