Intermediate Math Challenge - June 2018

Summer is coming and brings a new intermediate math challenge! Enjoy! If you find the problems difficult to solve, don't be disappointed; just check our basic-level math challenge thread!
  • #36
Fred Wright said:
Let ##y'=y+2, x'=x+1, dx'=dx, dy'=dy##, and the differential equation becomes
##(2x′−4y′)dx′+(x′+y′)dy′=0##​

I don't think that the substitution is carried out right.

EDIT: The result of this problem is not ugly at all. It just needs some different thinking about simplification in the beginning in order to manage things properly.
 
  • #37
QuantumQuest said:
How is this justified?

That should be ##y \le x^4 \le 8y##, which follows from ##0 \le z^2 \le x^4 \le 4z^2## and ##0\le y\le z^2 \le 2y##.
Consequently the solution was unfortunately wrong. It should be:
$$\begin{cases}z^2 - y \geq 0\\ x^2 - z \geq 0\\ y^2 - x \geq 0\\ z^2 \leq 2y\\ x^2 \leq 2z\\ y^2 \leq 2x
\end{cases} \Rightarrow
\begin{cases}0 \le z \le x^2 \le 2z\\ 0 \le x \le y^2 \le 2x\\ 0 \le y\le z^2 \le 2y \end{cases} \Rightarrow
\begin{cases}0 \le z^2 \le x^4 \le 4z^2 \\ 0 \le y \le x^4 \le 8y \\ 0 \le y^2 \le x^8 \le 64y^2 \\ 0 \le x \le x^8 \le128 x \end{cases} \Rightarrow
x=0 \lor 1\le x^7 \le 128 \Rightarrow
x=0\lor 1\le x\le 2
$$
With ##x=0## it follows that ##y=z=0##, meaning this won't give any volume.
So we are left with (with limits from the 2nd step):
$$\text{Volume S} = \int_1^{2}\int_{\sqrt x}^{\sqrt{2x}}\int_{\sqrt y}^{\sqrt{2y}} 1 \,dz\,dy\,dx
= \int_1^{2}\int_{\sqrt x}^{\sqrt{2x}} (\sqrt{2}-1)\sqrt y \,dy\,dx
= \int_1^{2} (\sqrt{2}-1)\frac 23 y^{3/2}\Big|_{\sqrt x}^{\sqrt{2x}} \,dx \\
= \int_1^{2} (\sqrt{2}-1)\frac 23 \left[(2x)^{3/4}-x^{3/4}\right] \,dx
= \int_1^{2} (\sqrt{2}-1)\frac 23 (2^{3/4}-1)x^{3/4} \,dx
= (\sqrt{2}-1)\frac 23 (2^{3/4}-1)\frac 47 x^{7/4} \Big|_1^{2} \\
= (\sqrt{2}-1)\frac 23 (2^{3/4}-1)\frac 47 \Big(2^{7/4} - 1\Big)
$$
 
  • #38
I like Serena said:
That should be ##y \le x^4 \le 8y##, which follows from ##0 \le z^2 \le x^4 \le 4z^2## and ##0\le y\le z^2 \le 2y##.
Consequently the solution was unfortunately wrong. It should be:

The volume you find is not correct. I'll just ask how you found / deduced

##0 \le x \le x^8 \le128 x##, because I cannot see it. I don't know exactly how you solve all these inequalities, but if it is of any help: the bounds you give for ##x##, i.e. ##1\le x\le 2##, are the right bounds, but not for ##x## alone. You must find some other expression(s) from the givens of the problem that is / are bounded this way.
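A quick numerical cross-check supports this (my own Monte Carlo sketch, not part of the original argument; it assumes only the bounding cube ##[0,2]^3## that follows from the bounds derived in the previous post):

```python
import random

# Monte Carlo estimate of the volume of the solid
#   y <= z^2 <= 2y,  z <= x^2 <= 2z,  x <= y^2 <= 2x
# by sampling the bounding cube [0, 2]^3 (volume 8).
def in_region(x, y, z):
    return (y <= z*z <= 2*y) and (z <= x*x <= 2*z) and (x <= y*y <= 2*x)

def mc_volume(n=200_000, seed=0):
    rng = random.Random(seed)
    hits = sum(in_region(rng.uniform(0, 2), rng.uniform(0, 2), rng.uniform(0, 2))
               for _ in range(n))
    return 8.0 * hits / n

print(mc_volume())
```

The estimate can then be compared against any proposed closed-form volume.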
 
  • #39
QuantumQuest said:
As an extra hint I would point you towards the direction of making some transformation(s) and seeing if the notion of a determinant is any useful in this case.
We don't seem to be making much progress on this problem, so let's take what we've got so far and try to come up with an approach to solve it.

I can tell you that doing the integration in the given cartesian coordinates results in a bunch of triple integrals to account for all of the combinations of constraints. I think it is solvable that way, but tedious in the extreme. It would be difficult to know if you arrived at the correct answer with that approach because you could never be sure there was no mistake somewhere unless you automated the whole process, and in that case you might as well use Mathematica.

@QuantumQuest gives us a couple of hints. The idea of using a matrix determinant suggests transforming the solid defined by the constraints into a parallelepiped. The volume of a parallelepiped is equal to the determinant of a 3×3 matrix constructed using four of its vertices. This idea seems credible. The volume in question resembles a parallelepiped in that it has six faces, each of which forms an edge with four of the other faces. The faces and edges are warped and twisted, though, so a transformation would be needed to make them linear. I have not been able to coax Mathematica into drawing a plot of this shape, but you can get an idea of it by looking at the constraints:

##y \leq z^2 \leq 2y##
##z \leq x^2 \leq 2z##
##x \leq y^2 \leq 2x##

A negative value of ##x, y,## or ##z## would violate a constraint, so we can assume they are non-negative. Then we can bound their values further to within the cube ##1\leq x \leq 2##, ##1\leq y \leq 2##, ##1\leq z \leq 2##. This follows from a chain of inequalities:
##~~~~~~~~~~~~~~~~~~~y \leq z^2 \leq 2y##
##~~~~~~~~~x\leq y^2 \leq z^4 \leq 4y^2 \leq 8x##
##z\leq x^2\leq y^4 \leq z^8 \leq 16y^4 \leq 64x^2\leq 128z##

##1\leq z^7 \leq 128 \Rightarrow 1\leq z \leq 2##

The other variables can be bounded similarly. It is also clear that both ##(x,y,z) = (1,1,1)## and ##(x,y,z) = (2,2,2)## satisfy all of the constraints.

Now consider one face, say ##y=z^2##. Within the cube, this face cannot intersect ##z^2=2y##, but it does intersect each of the other four faces. For example, its intersection with ##z=x^2## is described by the parametric curve ##(x, x^4, x^2)##.

It is not obvious what kind of transformation would make that curve (and each of the five other edges) look linear. I tried what I thought was the simplest possibility. Since the volume is trilaterally symmetric about the line ##w\vec S## where ##w## is a real-valued parameter and ##\vec S = \frac 1 {\sqrt 3}(1, 1, 1)##, it would be really nice if the distance between ##\vec T(x) \equiv (x-1, x^4-1, x^2-1)## and ##w \vec S## varied linearly. (I subtract 1 from each coordinate because the curve passes through the point (1,1,1), at which point its distance from ##w \vec S## is 0.)

Specifically, if ##|\vec T(x) -(\vec T(x) \cdot \vec S) \vec S| = a(x-1)## where ##a## is some constant (and assuming the same transformation worked for the other edges) we could just apply a rotation depending on the values of ##x, y,## and ##z## and turn our shape into a parallelepiped. It is easy to factor ##x-1## out of this expression, but the other factor looks nothing like a constant because the coefficients of ##x^n## terms for ##n>0## fail to cancel out. So this transformation would not work. Does anyone have an idea for a transformation that would work?
 
  • #40
tnich said:
We don't seem to be making much progress on this problem, so let's take what we've got so far and try to come up with an approach to solve it.

Although the post of @tnich does not ask me for an answer or a hint, I regard it fair to answer myself as an appreciation to the efforts put from all members who tried this particular problem. I see that, particularly, both you @tnich, and @I like Serena put good efforts to solve the problem, so I owe some further and better hint. A first observation is that the solid is surrounded by the cylindrical surfaces ##y = z^2##, ##z = x^2##, ##x = y^2##, ##2y = z^2##, ##2z = x^2##, ##2x = y^2##. Now, try the transformation ##\frac{z^2}{y} = u##, ##\frac{x^2}{z} = v##, ##\frac{y^2}{x} = w##.
 
  • #41
QuantumQuest said:
Although the post of @tnich does not ask me for an answer or a hint, I regard it fair to answer myself as an appreciation to the efforts put from all members who tried this particular problem. I see that, particularly, both you @tnich, and @I like Serena put good efforts to solve the problem, so I owe some further and better hint. A first observation is that the solid is surrounded by the cylindrical surfaces ##y = z^2##, ##z = x^2##, ##x = y^2##, ##2y = z^2##, ##2z = x^2##, ##2x = y^2##. Now, try the transformation ##\frac{z^2}{y} = u##, ##\frac{x^2}{z} = v##, ##\frac{y^2}{x} = w##.
It looks obvious now that you have shown it to us. The Jacobian determinant of that transformation would be
##\begin{vmatrix}
0 &-\frac {z^2} {y^2} & \frac {2z} y \\
\frac {2x} z & 0 & -\frac {x^2} {z^2} \\
-\frac {y^2} {x^2} & \frac {2y} x & 0
\end{vmatrix} = 7##

So ##dx~dy~dz = \frac 1 7 du~dv~dw## and the volume of the solid is

##\int_1^2\int_1^2\int_1^2 \frac 1 7 du~dv~dw = \frac 1 7##
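That determinant can be cross-checked symbolically (a sympy sketch of my own, not part of the original post):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)

# The transformation from the hint: u = z^2/y, v = x^2/z, w = y^2/x
u = z**2 / y
v = x**2 / z
w = y**2 / x

# Jacobian of (u, v, w) with respect to (x, y, z)
J = sp.Matrix([u, v, w]).jacobian(sp.Matrix([x, y, z]))
detJ = sp.simplify(J.det())
print(detJ)  # simplifies to 7, so dx dy dz = (1/7) du dv dw
```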
 
  • #42
(5) looks like a standard differential equation. When my last exam is done tomorrow, I will attempt it.

Here is already a sketch of what I would attempt:

- Calculate the intersection point of the lines in the DE.
- Substitute ##u = x- x_0, v = y - y_0 ## with ##(x_0,y_0)## the intersection point.
- The DE becomes homogeneous: divide everything by ##u##, and substitute ##z:= v/u##
- The DE becomes solvable by separation of variables.

and then we are done.
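The first step of this sketch can be checked directly (my own sympy snippet, taking problem (5)'s equation ##(2x-4y+6)dx + (x+y-3)dy = 0## as stated later in the thread):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Intersection of the two lines appearing in the DE's coefficients
sol = sp.solve([sp.Eq(2*x - 4*y + 6, 0), sp.Eq(x + y - 3, 0)], [x, y])
print(sol)  # {x: 1, y: 2}
```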
 
  • #43
QuantumQuest said:
##10.## b) Show that there are infinitely many units (invertible elements) in ##\mathbb{Z}[\sqrt{3}]##.
Any pair of integers ##(x, y)## such that ##x^2 - 3 y^2 = 1## forms two invertible elements ##x \pm \sqrt 3 y##. For the first few pairs with this property ##(1,0), (2,1), (7,4), (26,15)##, both the ##x_n## and the ##y_n## satisfy the linear difference equation ##z_n=4z_{n-1}-z_{n-2}##. The general solution of this difference equation is ##z_n = a(2+\sqrt 3)^n + b(2-\sqrt 3)^n##. For ##x_n##, ##a_x = b_x = \frac 1 2##. For ##y_n##, ##a_y = -b_y = \frac 1 {2\sqrt 3}##.

Claim. Let ##x_n = \frac 1 2 [(2+\sqrt 3)^n + (2-\sqrt 3)^n]##, and ##y_n = \frac 1 {2\sqrt 3} [(2+\sqrt 3)^n - (2-\sqrt 3)^n]##. Then for all ##n \in \mathbb Z^+##, ##x_n, y_n \in \mathbb Z## and ##x_n^2 - 3 y_n^2 = 1##.

Proof.
##\forall n \in \mathbb Z^+, x_n \in \mathbb Z##:
$$\begin{align}x_n & = \frac 1 2 [(2+\sqrt 3)^n + (2-\sqrt 3)^n] \nonumber\\
&= \frac 1 2 \left [\sum_{j=0}^n \binom n j 2^{n-j}\sqrt3^j + \sum_{j=0}^n \binom n j 2^{n-j}(-\sqrt3)^j\right]\nonumber
\end{align}$$
For odd ##j##, the terms in the two sums cancel, leaving
$$x_n = \sum_{k=0}^{\lfloor \frac n 2 \rfloor} \binom n {2k} 2^{n-2k}3^k \in \mathbb Z$$
##\forall n \in \mathbb Z^+, y_n \in \mathbb Z##:
$$\begin{align}y_n & = \frac 1 {2\sqrt 3} [(2+\sqrt 3)^n - (2-\sqrt 3)^n] \nonumber\\
&= \frac 1 {2\sqrt 3} \left [\sum_{j=0}^n \binom n j 2^{n-j}\sqrt3^j - \sum_{j=0}^n \binom n j 2^{n-j}(-\sqrt3)^j\right]\nonumber
\end{align}$$
For even ##j##, the terms in the two sums cancel, leaving
$$y_n = \sum_{k=0}^{\lfloor \frac {n-1} 2 \rfloor} \binom n {2k+1} 2^{n-2k-1}3^k \in \mathbb Z$$
##\forall n \in \mathbb Z^+##, ##x_n^2 - 3 y_n^2 = 1##:
First note that ##x_1 = 2, y_1 = 1## and ##x_1^2-3y_1^2 = (2+\sqrt 3)(2-\sqrt 3)=1##.
Now
$$\begin{align}x_n^2 - 3 y_n^2 & = \left(\frac 1 2 [(2+\sqrt 3)^n + (2-\sqrt 3)^n]\right)^2-3\left(\frac 1 {2\sqrt 3} [(2+\sqrt 3)^n - (2-\sqrt 3)^n]\right)^2\nonumber \\
& = \frac 1 4 [(2+\sqrt 3)^{2n} + 2+(2-\sqrt 3)^{2n}]-3\frac 1 {12} [(2+\sqrt 3)^{2n} - 2+(2-\sqrt 3)^{2n}]\nonumber \\
& = 1\nonumber \\
\end{align}$$
This shows that there is an infinite number of invertible elements in ##\mathbb Z[\sqrt 3]## formed from pairs ##(x_n, y_n)## for which ##(x_n+\sqrt 3 y_n)(x_n-\sqrt 3 y_n) = 1##.
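The claimed pattern can be verified numerically (my own sketch): generate the pairs from the recurrence ##z_n = 4z_{n-1} - z_{n-2}## and check ##x_n^2 - 3y_n^2 = 1## for each pair:

```python
# Generate (x_n, y_n) from the linear recurrence z_n = 4 z_{n-1} - z_{n-2}
# with seeds (x_0, x_1) = (1, 2) and (y_0, y_1) = (0, 1),
# and check that each pair gives a unit: x^2 - 3 y^2 = 1.
def pell_pairs(count):
    x0, x1 = 1, 2
    y0, y1 = 0, 1
    pairs = [(x0, y0), (x1, y1)]
    for _ in range(count - 2):
        x0, x1 = x1, 4*x1 - x0
        y0, y1 = y1, 4*y1 - y0
        pairs.append((x1, y1))
    return pairs

for x, y in pell_pairs(10):
    assert x*x - 3*y*y == 1

print(pell_pairs(5))  # [(1, 0), (2, 1), (7, 4), (26, 15), (97, 56)]
```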
 
  • #44
tnich said:
Any pair of integers ##(x, y)## such that ##x^2 - 3 y^2 = 1## forms two invertible elements ##x \pm \sqrt 3 y##. For the first few pairs with this property ##(1,0), (2,1), (7,4), (26,15)##, both the ##x_n## and the ##y_n## satisfy the linear difference equation ##z_n=4z_{n-1}-z_{n-2}##. The general solution of this difference equation is ##z_n = a(2+\sqrt 3)^n + b(2-\sqrt 3)^n##. For ##x_n##, ##a_x = b_x = \frac 1 2##. For ##y_n##, ##a_y = -b_y = \frac 1 {2\sqrt 3}##.

Claim. Let ##x_n = \frac 1 2 [(2+\sqrt 3)^n + (2-\sqrt 3)^n]##, and ##y_n = \frac 1 {2\sqrt 3} [(2+\sqrt 3)^n - (2-\sqrt 3)^n]##. Then for all ##n \in \mathbb Z^+##, ##x_n, y_n \in \mathbb Z## and ##x_n^2 - 3 y_n^2 = 1##.

Proof.
##\forall n \in \mathbb Z^+, x_n \in \mathbb Z##:
$$\begin{align}x_n & = \frac 1 2 [(2+\sqrt 3)^n + (2-\sqrt 3)^n] \nonumber\\
&= \frac 1 2 \left [\sum_{j=0}^n \binom n j 2^{n-j}\sqrt3^j + \sum_{j=0}^n \binom n j 2^{n-j}(-\sqrt3)^j\right]\nonumber
\end{align}$$
For odd ##j##, the terms in the two sums cancel, leaving
$$x_n = \sum_{k=0}^{\lfloor \frac n 2 \rfloor} \binom n {2k} 2^{n-2k}3^k \in \mathbb Z$$
##\forall n \in \mathbb Z^+, y_n \in \mathbb Z##:
$$\begin{align}y_n & = \frac 1 {2\sqrt 3} [(2+\sqrt 3)^n - (2-\sqrt 3)^n] \nonumber\\
&= \frac 1 {2\sqrt 3} \left [\sum_{j=0}^n \binom n j 2^{n-j}\sqrt3^j - \sum_{j=0}^n \binom n j 2^{n-j}(-\sqrt3)^j\right]\nonumber
\end{align}$$
For even ##j##, the terms in the two sums cancel, leaving
$$y_n = \sum_{k=0}^{\lfloor \frac {n-1} 2 \rfloor} \binom n {2k+1} 2^{n-2k-1}3^k \in \mathbb Z$$
##\forall n \in \mathbb Z^+##, ##x_n^2 - 3 y_n^2 = 1##:
First note that ##x_1 = 2, y_1 = 1## and ##x_1^2-3y_1^2 = (2+\sqrt 3)(2-\sqrt 3)=1##.
Now
$$\begin{align}x_n^2 - 3 y_n^2 & = \left(\frac 1 2 [(2+\sqrt 3)^n + (2-\sqrt 3)^n]\right)^2-3\left(\frac 1 {2\sqrt 3} [(2+\sqrt 3)^n - (2-\sqrt 3)^n]\right)^2\nonumber \\
& = \frac 1 4 [(2+\sqrt 3)^{2n} + 2+(2-\sqrt 3)^{2n}]-3\frac 1 {12} [(2+\sqrt 3)^{2n} - 2+(2-\sqrt 3)^{2n}]\nonumber \\
& = 1\nonumber \\
\end{align}$$
This shows that there is an infinite number of invertible elements in ##\mathbb Z[\sqrt 3]## formed from pairs ##(x_n, y_n)## for which ##(x_n+\sqrt 3 y_n)(x_n-\sqrt 3 y_n) = 1##.
Yes, this is correct, although it is a bit too much work. With your definition of ##x_n## and ##y_n## you immediately have
$$x_n-\sqrt{3}y_n\; , \;x_n+\sqrt{3}y_n \in \mathbb{Z}[\sqrt{3}]$$
and, by your first line and the calculation at the end (three lines), that these are units. You didn't need to prove that ##x_n,y_n## are integers.

So the proof reduces to:
  • 1st line
  • definition of ##x_n,y_n##
    but without the scaling factors: ##\pm (2 \pm \sqrt{3})^n## do the job
  • last five lines to prove they are units
So your proof is correct, just a bit too complicated.
 
  • #45
fresh_42 said:
You didn't need to prove ##x_n,y_n## are integers.

fresh_42 said:
  • definition of ##x_n,y_n## but without the scaling factors: ##\pm (2 \pm \sqrt{3})^n## do the job
Even better, noting that ##(2+\sqrt 3)(2-\sqrt 3) = 1##, it follows directly that ##(2+\sqrt 3)^n(2-\sqrt 3)^n = 1~ \forall n \in \mathbb Z^+##. So ##\{(2 \pm \sqrt 3)^n: n \in \mathbb Z^+\}## is an infinite set of units in ##\mathbb Z[\sqrt 3]##.
 
  • #46
tnich said:
Even better, noting that ##(2+\sqrt 3)(2-\sqrt 3) = 1##, it follows directly that ##(2+\sqrt 3)^n(2-\sqrt 3)^n = 1~ \forall n \in \mathbb Z^+##. So ##\{(2 \pm \sqrt 3)^n: n \in \mathbb Z^+\}## is an infinite set of units in ##\mathbb Z[\sqrt 3]##.
Yep. I think that with a factor of ##\pm 1## these are in fact all of the units, but I haven't checked. So if you're interested, go ahead.
 
  • #47
QuantumQuest said:
##10.## a) Give an example of an integral domain (no field), which has common divisors, but doesn't have greatest common divisors.
b) (solved by @tnich ) Show that there are infinitely many units (invertible elements) in ##\mathbb{Z}[\sqrt{3}]##.
c) Determine the units of ##\{\,\frac{1}{2}a+ \frac{1}{2}b\sqrt{-3}\,\vert \,a+b \text{ even }\}##.
d) The ring ##R## of integers in ##\mathbb{Q}(\sqrt{-19})## is the ring of all elements, which are roots of monic polynomials with integer coefficients. Show that ##R## is built by all elements of the form ##\frac{1}{2}a+\frac{1}{2}b\sqrt{-19}## where ##a,b\in \mathbb{Z}## and both are either even or both are odd. ##\space## ##\space## (by @fresh_42)

a) The Eisenstein Integers ##\mathbb Z[\sqrt{-3}]## (same as in (c)). For instance, the numbers ##a=4=2\cdot 2=(1+\sqrt{-3})(1-\sqrt{-3})## and ##b=2(1+\sqrt{-3})## have both ##2## and ##1+\sqrt{-3}## as maximal common divisors. Since ##2## and ##1+\sqrt{-3}## have the same norm ##N=2^2=1^2+3\cdot 1^2=4##, neither divides the other, so ##a## and ##b## do not have a greatest common divisor.

b) Skipping as @tnich already solved it.

c) Every unit has norm ##\pm 1##. So we must have ##a^2+3b^2=\pm 1##. It follows that ##|a|\le 1## and ##|b|\le 1##. When enumerating the possibilities we find the six units ##\pm 1## and ##\pm1 \pm \sqrt{-3}##.

d) Let ##p+q\sqrt{-19}## be an element in ##R## with rational ##p## and ##q##. Then ##p-q\sqrt{-19}## is also in ##R##, and ##(x-(p+q\sqrt{-19}))(x-(p-q\sqrt{-19}))=x^2-2px + (p^2+19 q^2)## must be a monic polynomial with integer coefficients.
It follows that ##p## must be of the form ##\frac 12 a## where ##a## is an integer.
If ##p## is an integer, then ##19q^2## must be an integer as well, and since ##19## is prime, ##q## must be an integer.
If ##p=\frac 12 a## with odd ##a##, then ##p^2+19 q^2=(\frac 12 a)^2+19 q^2## must be an integer as well; it follows that ##a^2+19(2q)^2 \equiv 1 + 19(2q)^2 \equiv 0 \pmod 4 \Rightarrow (2q)^2 \equiv 19^{-1}\cdot(-1) \equiv 1 \pmod 4##. Therefore ##2q## must be an odd integer as well.
Thus any element in ##R## is of the form ##\frac{1}{2}a+\frac{1}{2}b\sqrt{-19}## where ##a,b\in \mathbb{Z}## and both are either even or both are odd.
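The parity condition can be sanity-checked mechanically (my own sketch): for ##r=\frac12(a+b\sqrt{-19})## the monic polynomial above is ##x^2-ax+\frac14(a^2+19b^2)##, so ##r## is an algebraic integer exactly when ##4 \mid a^2+19b^2##:

```python
# For r = (a + b*sqrt(-19))/2 the monic polynomial is
#   x^2 - a*x + (a^2 + 19*b^2)/4,
# so r is an algebraic integer iff 4 divides a^2 + 19*b^2.
def is_integral(a, b):
    return (a*a + 19*b*b) % 4 == 0

for a in range(-6, 7):
    for b in range(-6, 7):
        # integrality should hold exactly when a and b have the same parity
        assert is_integral(a, b) == (a % 2 == b % 2)

print("parity condition verified on a small range")
```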
 
  • #48
I like Serena said:
a) The Eisenstein Integers ##\mathbb Z[\sqrt{-3}]## (same as in (c)). For instance, the numbers ##a=4=2\cdot 2=(1+\sqrt{-3})(1-\sqrt{-3})## and ##b=2(1+\sqrt{-3})## have both ##2## and ##1+\sqrt{-3}## as maximal common divisors. Since ##2## and ##1+\sqrt{-3}## have the same norm ##N=2^2=1^2+3\cdot 1^2=4##, neither divides the other, so ##a## and ##b## do not have a greatest common divisor.
Correct. It also works for other rings of the form ##\mathbb{Z}[\sqrt{-p}]##. But it is not the example in c).
b) Skipping as @tnich already solved it.

c) Every unit has norm ##\pm 1##. So we must have ##a^2+3b^2=\pm 1##. It follows that ##|a|\le 1## and ##|b|\le 1##. When enumerating the possibilities we find the six units ##\pm 1## and ##\pm1 \pm \sqrt{-3}##.
Correct ... in a way, but you could have been a bit less sloppy here. As the elements are defined as ##\dfrac{1}{2}a+\dfrac{1}{2}b\sqrt{-3}## with ##a+b## even, this ring is not ##\mathbb{Z}[\sqrt{-3}]## as you have suggested in a) and apparently assumed here, so the norm of units is actually ##a^2+3b^2=+4## (or ##a^2+3b^2=+1## in ##\mathbb{Z}[\sqrt{-3}]##, it is not ##\pm 1\,!##).

Therefore the possibilities are ##|a|=2\; , \;b=0## or ##|a|=|b|=1##. The six units are then ##\left(\dfrac{1}{2}\left( 1+\sqrt{-3}\right)\right)^n## for ##n=0,\ldots ,5##.

I will take your solution as solved, since the basic idea is the same as for the ring I actually used, and I'd have to post a solution today anyway; which I herewith did.
d) Let ##p+q\sqrt{-19}## be an element in ##R## with rational p and q. Then ##p-q\sqrt{-19}## is also in ##R##,...
Why does ##p-q\sqrt{-19}## have to be in ##R\,?## The reason is, that the integers are, ergo ##p## is and thus ##p-q\sqrt{-19}##. So ##\mathbb{Z}\subseteq R## is crucial here, which means it deserves mention.
... and ##(x-(p+q\sqrt{-19}))(x-(p-q\sqrt{-19}))=x^2-2px + (p^2+19 q^2)## must be a monic polynomial with integer coefficients.
It follows that p must be of the form ##\frac 12 a## where ##a## is an integer.
You should have mentioned how you use ##a## and ##b## here, since it is not clear which quantifiers apply in your proof. This has to do with the fact that you didn't clearly say which direction of the proof you're in. This makes the entire proof unnecessarily harder to read.
If p is an integer, then ##19q^2## must be an integer as well, and since 19 is prime, q must be an integer.
... and ##p \equiv q \mod 2## which follows immediately from the fact that ##19## is prime.
If ##p=\frac 12 a## with odd a, then ##p^2+19 q^2=(\frac 12 a)^2+19 q^2## must be an integer as well, it follows that ##a^2+19(2q)^2 \equiv 1 + 19(2q)^2 \equiv 0 \pmod 4 \Rightarrow (2q)^2 \equiv 19^{-1}\cdot -1 \equiv 1 \pmod 4##. Therefore 2q must be an odd integer as well.
... and ##b=2q\,.##

What happened to the case that both ##a## and ##b## are even? As it is part of the statement, it does need a reason. That integers are in ##R## is somehow trivial, but ##\mathbb{Z}+\mathbb{Z}[\sqrt{-19}]## needs a little argument. Both could (should?) have been done at the beginning. Also, the modulo calculations could have been a bit more explicit, since you always have to think about whom you write those lines for - and it is not me!
Thus any element in R is of the form ##\frac{1}{2}a+\frac{1}{2}b\sqrt{-19}## where ##a,b\in \mathbb{Z}## and both are either even or both are odd.
Formally you have only shown a necessary condition, not sufficiency, i.e. that those elements are indeed within ##R##. I know that it is no big deal and is implicitly covered by what you actually wrote, but I think we should be less sloppy here, since sloppiness must be earned. Only when you can write a proof properly, such that everybody can follow it, are you allowed to be sloppy. Students should not get used to it unless they have the ability to do it cleanly.

So although your proof contains all necessary parts (somewhere), let me give an example of what I meant:

We recall that ##R## is defined as all integers in ##\mathbb{Q}(\sqrt{-19})##, i.e. all elements of ##\mathbb{Q}(\sqrt{-19})## with a minimal polynomial in ##\mathbb{Z}[x]##.

##"\Rightarrow "## : Elements of the stated form are integers in ##\mathbb{Q}(\sqrt{-19})##.

For ##r=\frac{1}{2}(a+b\sqrt{-19})\in R## we have ##\frac{1}{2}(a+b\sqrt{-19})\cdot \frac{1}{2}(a-b\sqrt{-19})=\frac{1}{4}(a^2+19b^2)\in \mathbb{Z}##, because ##a+b\equiv 0 (2)##. So $$(x-\frac{1}{2}(a+b\sqrt{-19}))(x-\frac{1}{2}(a-b\sqrt{-19}))=x^2-ax+\frac{1}{4}(a^2+19b^2)\in \mathbb{Z}[x]$$

##"\Leftarrow "## : Integers in ##\mathbb{Q}(\sqrt{-19})## are of the stated form.

If we have ##r \in \mathbb{Q}(\sqrt{-19})## an integer, then ##r^2+ar+b=0## for ##a,b \in \mathbb{Z}##, i.e. ##2r=-a\pm \sqrt{a^2-4b}##. As ##r\in \mathbb{Q}(\sqrt{-19})## we have ##\mathbb{Z} \ni a^2-4b=(\alpha + \beta \sqrt{-19})^2=\alpha^2 -19\beta^2 +2\alpha \beta\sqrt{-19}## and thus ##\alpha \beta=0##.

Case 1: ##\beta = 0##. Then ##b=\frac{a-\alpha}{2}\cdot \frac{a+\alpha}{2}\in \mathbb{Z}## and ##r_{1,2}=-\frac{1}{2}(a \pm \alpha)## and ##a\pm \alpha \equiv 0(2)## which means ##r_{1,2}\in R##.

Case 2: ##\alpha =0##. Then ##b=\frac{1}{4}(a^2+19\beta^2)\in \mathbb{Z}## and ##r_{1,2}=-\frac{1}{2}(a \pm \beta \sqrt{-19})##, and we have to show that ##-a\pm \beta \equiv 0 (2)##. Let us assume this is not the case, i.e. ##\beta^2=(a+2k+1)^2## for some ##k\in\mathbb{Z}##. Then
$$
4\,|\,(a^2+19\beta^2)=20a^2+76\cdot(k^2+ak+k)+38a+19 \not\equiv 0(4)
$$
which is a contradiction. Therefore we have again ##r_{1,2}\in R##.
 
  • #49
Here is my attempt for 5.

We want to solve the differential equation ##(2x-4y+6)dx + (x+y-3)dy = 0##

Perform the substitution ##u = x-1, v = y- 2, du = dx, dv = dy##.

The DE becomes: ##(2(u+1)-4(v+2) +6)du + (u+1 + v+2 - 3)dv = (2u - 4v)du + (u+v)dv=0##

This implies that ##(2-4v/u)du + (1+v/u)dv = 0##

Substitute ##z:= v/u, dv/du = d(zu)/du = dz/du u + z##

The DE becomes: ##(2-4z) + (1+z)(z'u + z) = 0##

Or equivalently:

##dz \frac{1+z}{-z^2 + 3z -2} = 1/u du##

We integrate both sides, and find:

##2\log|1-z| - 3\log|2-z| = \log|u| + C##

Thus, the solution is given by:

##2 \log|1 - v/u| - 3\log|2-v/u| = \log|u| + C##

or

##2 \log|1-(y-2)/(x-1)| - 3 \log|2 - (y-2)/(x-1)| = \log|x-1| + C##

A quick calculation leads to:

##\log\frac{(x-y+1)^2}{(2x-y)^3} = C \implies \frac{(x-y+1)^2}{(2x-y)^3} = K, K > 0##

assuming all the absolute values fall away (this gives conditions on the domain on which the solution is defined).
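One can verify the implicit solution symbolically (a sympy sketch of my own): ##F(x,y) = (x-y+1)^2/(2x-y)^3## is constant along solutions of ##(2x-4y+6)\,dx + (x+y-3)\,dy = 0## exactly when ##(2x-4y+6)F_y = (x+y-3)F_x## holds identically:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Candidate first integral of (2x - 4y + 6) dx + (x + y - 3) dy = 0
F = (x - y + 1)**2 / (2*x - y)**3

# Along a level curve of F:  dy/dx = -F_x / F_y.
# The ODE gives:             dy/dx = -(2x - 4y + 6) / (x + y - 3).
# These agree iff the residual below vanishes identically.
residual = sp.simplify((2*x - 4*y + 6)*F.diff(y) - (x + y - 3)*F.diff(x))
print(residual)  # 0
```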
 
  • #50
Math_QED said:
Integrate both sides and substitute back the variables, we obtain the solution.

Could you do that?
 
  • #51
QuantumQuest said:
Could you do that?

I edited my post. The solution is now implicitly given. I hope that I didn't make any mistakes, as I have to admit I was too lazy to check my steps.

I also hope an implicit solution is sufficient, and that my solution is simplified enough.
 
  • #52
fresh_42 said:
Correct. It also works for other rings of the form ##\mathbb{Z}[\sqrt{-p}]##. But it is not the example in c).

I admit to being sloppy.
I should have written ##\mathbb Q[\sqrt{-3}]##, which are the Eisenstein Integers, instead of ##\mathbb Z[\sqrt{-3}]##.
Then it is the example of (c) isn't it?

fresh_42 said:
Correct ... in a way, but you could have been a bit less sloppy here.

True. I made a mistake there - I'm not used to a and b being divided by 2 and wrote as if they weren't.

fresh_42 said:
I know, that it is no big deal and implicitly covered by what you actually wrote, but I think we should be less sloppy here, since sloppiness must be earned.

Consider me on the road of trying to earn my sloppiness.
 
  • #53
I like Serena said:
I admit to being sloppy.
I should have written ##\mathbb Q[\sqrt{-3}]##, which are the Eisenstein Integers, instead of ##\mathbb Z[\sqrt{-3}]##.
Then it is the example of (c) isn't it?
No. The example in c) is all numbers ##\frac{1}{2}(a+b\sqrt{-3})## with an even sum ##a+b## which can also be odd+odd, and the only denominator is ##2##. O.k. you caught me being sloppy, as I didn't mention ##a,b \in \mathbb{Z}##. I thought this was clear by demanding ##a+b## being even and the entire thing being a ring.
##\mathbb{Q}[\sqrt{-3}]## is a field and all elements apart from zero are units.
True. I made a mistake there - I'm not used to a and b being divided by 2 and wrote as if they weren't.
Yes, it's a bit of an unusual construction, which I found charming because of it.
Consider me on the road of trying to earn my sloppiness.
You're welcome. :smile:
By reading your proofs, I get the impression that many things are obvious to you where others need to think a bit. That's good and promising, but don't forget the less gifted among us who are still learning. Meanwhile I have the impression that algebraic structures alone are considered difficult on PF, even if they are not. Rings, groups and algebras are apparently not ranked high on physicists' schedules.
 
  • #54
Math_QED said:
I edited my post. The solution is now implicitely given. I hope that I didn't make any mistakes as I have to admit I was too lazy to check my steps.

I also hope an implicit solution is sufficient, and that my solution is simplified enough.

Your solution is basically correct but there are a few missing things. So, I'll give my solution too, in order to demonstrate these points.

We can do the transformation ##2x - 4y + 6 = u##, ##x + y - 3 = v##. Differentiating
##2dx - 4dy = du##, ##dx + dy = dv##.
From these we have ##dx = \frac{4dv + du}{6}##, ##dy = \frac{2dv - du}{6}##.
So, the initial differential equation which we are asked to solve becomes ##u\frac{4dv + du}{6} + v\frac{2dv - du}{6} = 0## or ##(u - v)du + (2v + 4u)dv = 0##. This last equation is homogeneous and can be written as ##(\frac{u}{v} - 1)du + (4\frac{u}{v} + 2)dv = 0##. Now, we utilize the transformation ##\frac{u}{v} = w## or ##u = wv## so ##du = wdv + vdw##. So, our last equation above becomes (after doing some math),

##(w - 1)vdw = -(w +1)(w + 2)dv## or ##\frac{w - 1}{(w + 1)(w + 2)}dw = - \frac{dv}{v} + c_1## (##w \neq -1##,##w \neq -2##).
So, ##\int_{}^{} \frac{w - 1}{(w + 1)(w + 2)}dw = - \int_{}^{} \frac{dv}{v} + c_1##. We calculate the integrals and we have

##\ln{\lvert \frac{(w + 2)^3}{(w + 1)^2}\rvert} = \ln{\lvert \frac{1}{c_2 v} \rvert}## ##\space\space\space## (##c_1 = -\ln{c_2}##) or
##\lvert c_2 v(w + 2)^3 \rvert = (w + 1)^2##. Substituting back for ##w## we have

##\lvert c_2 v(\frac{u}{v} + 2)^3 \rvert = (\frac{u}{v} + 1)^2## or
##\lvert c_2 (u + 2v)^3 \rvert = (u + v)^2## so
##\lvert \frac{8}{9}c_2(2x - y)^3 \rvert = (x - y + 1)^2 ## ##(1)##
If ##w = -1## or ##w = -2## we have ##u = -v##, ##u = -2v## respectively so ##x - y + 1 = 0##, ##2x - y = 0## ##(2)##.
##(1)## and ##(2)## are the set of solutions of the initial differential equation we are asked to solve.
 
  • #55
Since the month is up and problem 1 has been solved by @tnich who also inquired about my solution, I've dropped it in below. The main idea is to interpret the matrix in terms of a directed graph and build a basis off an easily interpreted walk. That's it.

It looks a lot longer than that because there's a bit of book-keeping at the end.

First, solve the easier problem:

##\mathbf X:= \mathbf A + \mathbf I##

We can view this directly as a change of variables for the minimal polynomial

## \mathbf A^{m} + \binom{m-1}{1}\mathbf A^{m-1} + \binom{m-1}{2}\mathbf A^{m-2} + \binom{m-1}{3}\mathbf A^{m-3} +...+ \mathbf A = \Big(\big(\mathbf A + \mathbf I\big) - \mathbf I \Big)\Big(\mathbf A +\mathbf I\Big)^{m-1} = \Big(\mathbf X - \mathbf I \Big)\Big(\mathbf X\Big)^{m-1}##

## = \Big(\mathbf X\Big)^{m-1} \Big(\mathbf X - \mathbf I \Big) ##

The rest of this problem ignores ##\mathbf A## and talks in terms of ##\mathbf X##.

So, we focus on

##\mathbf X = \left[\begin{matrix}
1-p & 1-p & 1-p & \dots & 1-p & 1-p & 1-p
\\ p & 0 & 0 & \dots & 0 & 0 & 0
\\ 0 & p & 0 & \dots & 0 & 0 & 0
\\ 0 & 0 & p & \dots & 0 & 0 & 0
\\ 0 & 0 & 0 & \ddots & 0 & 0 & 0
\\ 0 & 0 & 0 & \dots & p & 0 & 0
\\ 0 & 0 & 0 & \dots & 0 & p & p
\end{matrix}\right]##

notice that ##\mathbf X## is the matrix associated with a (column stochastic) Markov chain.
- - - -

Plan of Attack:

Step 1:
Easiest thing first: a quick scan of the diagonal tells us that ##\text{trace}\big(\mathbf X\big) = (1-p) + p = 1##. The second step will confirm that the only possible eigenvalues of ##\mathbf X## are zeros and ones, hence the trace tells us that the algebraic multiplicity of the eigenvalue ##1## is one, and the algebraic multiplicity of the eigenvalue ##0## is ##m -1##.

Step 2:
show ##\mathbf X## is annihilated by a polynomial whose only roots are zeros and ones -- hence those are the only possible eigenvalues of ##\mathbf X##. In particular we are interested in the polynomial

##\mathbf X^m - \mathbf X^{m-1} = \mathbf X^{m-1}\big(\mathbf X - \mathbf I\big) = \mathbf 0##

with a slight refinement at the end we show that the above annihilating polynomial must in fact be the minimal polynomial.
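Both steps can be checked numerically before diving into the proof (my own numpy sketch, using ##m=7## and an arbitrary ##p## as an assumption): build ##\mathbf X## as above, confirm ##\mathbf X^m = \mathbf X^{m-1}##, and confirm that no lower power already works:

```python
import numpy as np

def build_X(m, p):
    """Column-stochastic matrix X = A + I described in the post."""
    X = np.zeros((m, m))
    X[0, :] = 1 - p            # first row: all entries 1 - p
    for j in range(m - 1):
        X[j + 1, j] = p        # subdiagonal: p
    X[m - 1, m - 1] = p        # bottom-right corner: p
    return X

m, p = 7, 0.3
X = build_X(m, p)
assert np.allclose(X.sum(axis=0), 1.0)        # column stochastic

Xm1 = np.linalg.matrix_power(X, m - 1)
assert np.allclose(Xm1 @ X, Xm1)              # X^m == X^(m-1)
assert not np.allclose(Xm1, np.linalg.matrix_power(X, m - 2))  # no lower power works

print("x^(m-1) (x - 1) annihilates X for m =", m)
```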

- - - - -
Proof of the annihilating polynomial

note that ##\mathbf X## is a (column stochastic) matrix for a Markov chain. The idea here is to make use of the graph structure and a well chosen basis.

We seek to prove here that

##\mathbf X^m - \mathbf X^{m-1} = \mathbf X^{m-1}\big(\mathbf X - \mathbf I\big) = \big(\mathbf X -0\mathbf I\big)^{m-1}\big(\mathbf X - \mathbf I\big) = \mathbf 0##

or equivalently that
##\mathbf X^m = \mathbf X^{m-1}##

We proceed to do this one vector at a time by showing
## \mathbf s_k^T \mathbf X^m = \mathbf s_k^T \mathbf X^{m-1}##
for ##m## well chosen row vectors.

first, consider
##\mathbf s_m := \mathbf 1##

the ones vector is a left eigenvector of ##\mathbf X##, with eigenvalue of 1, which gives us

##\mathbf 1^T= \mathbf 1^T \mathbf X = \mathbf 1^T \mathbf X^2 = \mathbf 1^T \mathbf X^3 = ... = \mathbf 1^T \mathbf X^{m-1} = \mathbf 1^T \mathbf X^{m}##

This is so easy to work with, it suggests that we may be able to build an entire proof by using it as a base case of sorts.

The key insight is to then view everything in terms of the underlying directed graph. Let's consider reversing the transition diagram associated with the Markov chain, and ignore the re-normalization that would typically be done when trying to reverse a Markov chain. (Equivalently, we're just transposing our matrix and no longer calling it a Markov chain.)
- - - -
edit: the right way to state this is to say we are not using our matrix as a transition matrix but instead as an expectations operator (which is naturally given by the transpose).
- - - -

for illustrative purposes, consider the ##m :=7## case. Literally interpreted our backwards walk has a transition diagram that looks like this:
[Image: transition diagram of the reversed walk for ##m=7##]


(note for graphics formatting, ##1-p## was not rendering properly, so ##q:= 1-p## has been used in the above)

But there is some special structure about dealing with the ones vector (equivalently, being in a uniform un-normalized distribution). And looking at the above diagram we can see that node 7 has rather different behavior than the others, so let's ignore it for the moment.

With just a small bit of insight, we can re-interpret the interesting parts of the state diagram as
[Image: the re-interpreted state diagram]


If this is not clear to people, I'm happy to discuss more. If one 'gets' this state diagram, and recognizes that the above states can form a basis, the argument that follows for how long it takes to 'wipe out' the embedded nilpotent matrix becomes stunningly obvious. The below symbol manipulation is needed to make the argument complete, but really the above picture is the argument for the minimal polynomial.
- - - - -
Some bookkeeping to finish it off:

With these ideas in mind, now consider standard basis vectors ##\mathbf e_i##

##\mathbf s_i := \mathbf e_i## for ##i \in \{1, 2, ..., m-1\}##

the end goal is to evaluate each vector in

##\mathbf S^T =
\bigg[\begin{array}{c|c|c|c|c}
\mathbf e_1 & \mathbf e_2 &\cdots & \mathbf e_{m-1} & \mathbf 1
\end{array}\bigg]^T##

(recall: we have already done it for the ones vector)

This ##m \times m## matrix is triangular with ones on the diagonal and hence invertible. After iterating through the following argument, we'll have ##\mathbf S^T \mathbf X^{m} = \mathbf S^T \mathbf X^{m-1}##
or
##\mathbf X^{m} = \big(\mathbf S^T\big)^{-1} \mathbf S^T \mathbf X^{m} = \big(\mathbf S^T\big)^{-1}\mathbf S^T \mathbf X^{m-1} = \mathbf X^{m-1}##

The remaining steps are thus to confirm that the relation holds for each ##\mathbf e_i##.

for ##i=1## we have

## \mathbf e_1^T \mathbf X = (1-p) \mathbf 1^T##

and

## \mathbf e_1^T \mathbf X^2 = (1-p) \mathbf 1^T\mathbf X = (1-p) \mathbf 1^T##

hence

##\mathbf e_1^T \mathbf X = \mathbf e_1^T \mathbf X^2##

Multiplying each side by ##\mathbf X^{m-2}## gives the desired result.

To chain toward the end result, following the graph, for ##2 \leq r \leq m-1## we have

##\mathbf e_{r}^T \mathbf X = (p) \mathbf e_{r-1}^T##

hence
##\mathbf e_{r}^T \mathbf X^2 = (p)^2 \mathbf e_{r-2}^T##
##\vdots##
##\mathbf e_{r}^T \mathbf X^{r-1} = (p)^{r-1} \mathbf e_{1}^T##
##\mathbf e_{r}^T \mathbf X^r = (p)^{r-1}(1-p) \mathbf 1^T##

and

##\mathbf e_{r}^T \mathbf X^{r +1} = \big(\mathbf e_{r}^T \mathbf X^r\big) \mathbf X = \big((p)^{r-1}(1-p) \mathbf 1^T\big) \mathbf X = (p)^{r-1}(1-p) \big(\mathbf 1^T \mathbf X\big) = \big((p)^{r-1}(1-p) \mathbf 1^T\big) = \mathbf e_{r}^T \mathbf X^r ##

If ##r = m-1##, we have the desired equality.
- - - -
Now consider the ##r \lt m-1## case:
we right-multiply each side by ##\mathbf X^{(m-1) -r}## and get

## \mathbf e_{r}^T \mathbf X^{m}= \big(\mathbf e_{r}^T \mathbf X^{r+1}\big)\mathbf X^{(m-1) -r} = \big(\mathbf e_{r}^T \mathbf X^{r}\big)\mathbf X^{(m-1) -r} = \mathbf e_{r}^T \mathbf X^{m-1}##

Collecting all these relationships gives us

##\begin{bmatrix}
\mathbf e_1^T \\
\mathbf e_2^T \\
\vdots\\
\mathbf e_{m-1}^T \\
\mathbf 1^T
\end{bmatrix}\mathbf X^m = \begin{bmatrix}
\mathbf e_1^T \\
\mathbf e_2^T \\
\vdots\\
\mathbf e_{m-1}^T \\
\mathbf 1^T
\end{bmatrix}\mathbf X^{m-1}##
which proves the stated annihilating polynomial.
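The annihilating relation is also easy to sanity-check numerically. The sketch below (plain Python; helper names are mine) builds ##\mathbf X## from the row relations derived above; the only added assumption is the last row, ##\mathbf e_m^T \mathbf X = p(\mathbf e_{m-1}+\mathbf e_m)^T##, which is the unique choice consistent with ##\mathbf 1^T \mathbf X = \mathbf 1^T##.

```python
# Numerical sanity check of X^m = X^(m-1) for the chain described above.
# Rows of X follow the relations derived in the proof:
#   e_1^T X = (1-p) 1^T,   e_r^T X = p e_{r-1}^T  (2 <= r <= m-1),
# and (an assumption here) the last row e_m^T X = p (e_{m-1} + e_m)^T,
# the unique choice making every column sum to 1.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(A, k):
    n = len(A)
    R = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    for _ in range(k):
        R = matmul(R, A)
    return R

def build_X(m, p):
    X = [[0.0] * m for _ in range(m)]
    X[0] = [1.0 - p] * m            # row 1: (1-p) * ones
    for r in range(1, m - 1):       # rows 2 .. m-1: p * e_{r-1}
        X[r][r - 1] = p
    X[m - 1][m - 2] = p             # last row: columns must sum to 1
    X[m - 1][m - 1] = p
    return X

def close(A, B, tol=1e-9):
    return all(abs(x - y) <= tol for ra, rb in zip(A, B) for x, y in zip(ra, rb))

m, p = 7, 0.3
X = build_X(m, p)
assert all(abs(sum(X[i][j] for i in range(m)) - 1.0) < 1e-9 for j in range(m))
assert close(matpow(X, m), matpow(X, m - 1))          # X^m = X^(m-1)
assert not close(matpow(X, m - 1), matpow(X, m - 2))  # degree m-1 is not enough
```

The last assertion is the content of the refinement below: ##\mathbf X^{m-2} \neq \mathbf X^{m-1}##, so no lower-degree polynomial annihilates ##\mathbf X##.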

Combined with our knowledge of the trace (and Cayley-Hamilton), we know that the below is the characteristic polynomial of ##\mathbf X##:

##p\big(\mathbf X\big) = \mathbf X^m - \mathbf X^{m-1} = \mathbf X^{m-1}\big(\mathbf X - \mathbf I\big) = \mathbf 0##
- - - -
A slight refinement: it is worth remarking here that there are some additional insights to be gained from the ##r = m-1## case. In particular we can see the imprint of the minimal polynomial as this ##r = m-1## case takes longest for the implicit walk on the graph to 'get to' the uniform state.

That is (and again, considering the picture of the graph is highly instructive here), if we consider the case of

##\mathbf e_{r}^T \mathbf X^{r-1} = (p)^{r-1} \mathbf e_{1}^T ##

and set ##r := m-1##, then we have

##\mathbf e_{m-1}^T \mathbf X^{m-2} = (p)^{m-2} \mathbf e_{1}^T \neq (p)^{m-2}(1-p) \mathbf 1^T = \mathbf e_{m-1}^T \mathbf X^{m-1}##

we have thus found
##\mathbf e_{m-1}^T \mathbf X^{m-2}\neq \mathbf e_{m-1}^T \mathbf X^{m-1}##

which means in general

## \mathbf X^{m-2} \neq \mathbf X^{m-1}##

(i.e. if the above were an equality it would have to hold for all vectors including ##\mathbf e_{m-1}^T##)

This also means that ## \mathbf X^{m-2}\big(\mathbf X - \mathbf I\big)= \mathbf X^{m-1} - \mathbf X^{m-2} \neq \mathbf 0##, which rules out a minimal polynomial of degree ##m-1##; it also means that an even lower-degree polynomial cannot annihilate ##\mathbf X##. This point alone confirms that the degree of the minimal polynomial must match that of the characteristic polynomial, which completes the problem.
 

  • #56
Solution to #6.

##6.## Consider ##\mathfrak{su}(3)=\operatorname{span}\{\,T_3,Y,T_{\pm},U_{\pm},V_{\pm}\,\}## given by the basis elements
##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## ##\space## $$
\begin{align*}
T_3&=\frac{1}{2}\lambda_3\; , \;Y=\frac{1}{\sqrt{3}}\lambda_8\; ,\\
T_{\pm}&=\frac{1}{2}(\lambda_1\pm i\lambda_2)\; , \;U_{\pm}=\frac{1}{2}(\lambda_6\pm i\lambda_7)\; , \;V_{\pm}=\frac{1}{2}(\lambda_4\pm i\lambda_5)
\end{align*}$$

(cp. https://www.physicsforums.com/insights/representations-precision-important) where the ##\lambda_i## are the Gell-Mann matrices, and its maximal solvable Borel subalgebra ##\mathfrak{B}:=\langle T_3,Y,T_+,U_+,V_+ \rangle##.

Now ##\mathfrak{A(B)}=\{\,\alpha: \mathfrak{g} \to \mathfrak{g}\, : \,[X,\alpha(Y)]=[Y,\alpha(X)]\,\,\forall \,X,Y\in \mathfrak{B}\,\}## is the one-dimensional Lie algebra spanned by ##\operatorname{ad}(V_+)## because ##\mathbb{C}V_+## is a one-dimensional ideal in ##\mathfrak{B}## (Proof?). Then ##\mathfrak{g}:=\mathfrak{B}\ltimes \mathfrak{A(B)}## is again a Lie algebra by the multiplication ##[X,\alpha]=[\operatorname{ad}X,\alpha]## for all ##X\in \mathfrak{B}\; , \;\alpha \in \mathfrak{A(B)}##. (For a proof see problem 9 in https://www.physicsforums.com/threads/intermediate-math-challenge-may-2018.946386/ )

a) Determine the center of ##\mathfrak{g}## , and whether ##\mathfrak{g}## is semisimple, solvable, nilpotent or neither.
b) Show that ##(X,Y) \mapsto \alpha([X,Y])## defines another Lie algebra structure on ##\mathfrak{B}## , which one?
c) Show that ##\mathfrak{A(g)}## is at least two-dimensional.

We have the following multiplication table
\begin{align*}
[T_3,Y]&=[T_+,Y]=[T_+,V_+]=[U_+,V_+]=0\\
[T_3,T_+]&=T_+\; , \;[T_3,U_+]=-\frac{1}{2}U_+\; , \;[T_3,V_+]=\frac{1}{2}V_+ \\
[U_+,T_+]&=-V_+\; , \;[Y,U_+]=U_+\; , \;[Y,V_+]=V_+ \\
\end{align*}
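The table is quick to verify with explicit matrices. A minimal plain-Python check (helper names are mine), using ##T_+=E_{12}##, ##U_+=E_{23}##, ##V_+=E_{13}##, ##T_3=\operatorname{diag}(\tfrac12,-\tfrac12,0)## and ##Y=\operatorname{diag}(\tfrac13,\tfrac13,-\tfrac23)##, which follow from the Gell-Mann combinations above:

```python
# Check of the multiplication table for the basis of the Borel subalgebra B.
# E_ij denotes the 3x3 matrix unit with a single 1 in row i, column j.
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def comm(A, B):  # commutator [A, B] = AB - BA
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(3)] for i in range(3)]

def scal(c, A):
    return [[c * A[i][j] for j in range(3)] for i in range(3)]

def eq(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) <= tol for i in range(3) for j in range(3))

Tp = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]         # T_+ = E_12
Up = [[0, 0, 0], [0, 0, 1], [0, 0, 0]]         # U_+ = E_23
Vp = [[0, 0, 1], [0, 0, 0], [0, 0, 0]]         # V_+ = E_13
T3 = [[0.5, 0, 0], [0, -0.5, 0], [0, 0, 0]]    # T_3 = lambda_3 / 2
Y  = [[1/3, 0, 0], [0, 1/3, 0], [0, 0, -2/3]]  # Y = lambda_8 / sqrt(3)
ZERO = [[0] * 3 for _ in range(3)]

# vanishing brackets
assert eq(comm(T3, Y), ZERO) and eq(comm(Tp, Y), ZERO)
assert eq(comm(Tp, Vp), ZERO) and eq(comm(Up, Vp), ZERO)
# non-vanishing brackets
assert eq(comm(T3, Tp), Tp)
assert eq(comm(T3, Up), scal(-0.5, Up)) and eq(comm(T3, Vp), scal(0.5, Vp))
assert eq(comm(Up, Tp), scal(-1, Vp))
assert eq(comm(Y, Up), Up) and eq(comm(Y, Vp), Vp)
```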
With ##\mathfrak{A(B)}=\mathbb{C}\cdot \alpha\; , \;\alpha(Z)=\operatorname{ad}V_+(Z)=[V_+,Z]## we get
$$
[X,\alpha]=X.\alpha=[\operatorname{ad}X,\operatorname{ad}V_+]=\operatorname{ad}[X,V_+] \sim \operatorname{ad}V_+ \sim \alpha
$$
and ##\operatorname{span}\{\,T_+,U_+,V_+\,\} \subseteq \operatorname{ker}\alpha = \operatorname{ker}\operatorname{ad}V_+=\mathfrak{C}_\mathfrak{B}(V_+)##, so
\begin{align*}
\mathfrak{g}^{(0)}&=\mathfrak{g}=\mathfrak{B}\oplus \mathfrak{A(B)}\\
\mathfrak{g}^{(1)}&=[\mathfrak{g},\mathfrak{g}]=[\mathfrak{B},\mathfrak{B}]\oplus \mathfrak{A(B)}= \langle T_+,U_+,V_+\rangle \oplus \mathfrak{A(B)}\\
\mathfrak{g}^{(2)}&=[\mathfrak{g}^{(1)},\mathfrak{g}^{(1)}]=\mathbb{C}V_+ \oplus \{\,0\,\}\\
\mathfrak{g}^{(3)}&=[\mathfrak{g}^{(2)},\mathfrak{g}^{(2)}]= \{\,0\,\}
\end{align*}
Therefore ##\mathfrak{g}=\mathfrak{B}\ltimes \mathfrak{A(B)}## is solvable, and not semisimple. If we take a central element ##Z=aT_3+bY+cT_++dU_++eV_++f\alpha \in \mathfrak{Z(g)}## and solve successively
$$
[Z,U_+]=0 \to [Z,V_+]=0 \to [Z,Y]=0
$$
then we find that all coefficients have to be zero, i.e. ##\mathfrak{Z(g)}=\{\,0\,\}##, and ##\mathfrak{g}## cannot be nilpotent. It also shows that ##\alpha([X,Y])=0## is an Abelian structure on ##\mathfrak{B}##.
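For the record, the successive elimination can be spelled out. From the table above and ##[X,\alpha]=\operatorname{ad}[X,V_+]## we get ##[T_3,\alpha]=\tfrac{1}{2}\alpha\,,\;[Y,\alpha]=\alpha## and ##[T_+,\alpha]=[U_+,\alpha]=[V_+,\alpha]=0##, hence
$$
\begin{align*}
0=[Z,U_+]&=\big(b-\tfrac{a}{2}\big)U_+ + cV_+ \;\Rightarrow\; c=0\; , \;b=\tfrac{a}{2}\\
0=[Z,V_+]&=\big(\tfrac{a}{2}+b\big)V_+ \;\Rightarrow\; a=b=0\\
0=[Z,Y]&=-dU_+-eV_+-f\alpha \;\Rightarrow\; d=e=f=0
\end{align*}
$$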
For a one-dimensional ideal ##\mathfrak{I}=\langle V_0 \rangle## of any Lie algebra ##\mathfrak{h}## we have ##[X,V_0]=\mu(X)V_0## for all ##X\in \mathfrak{h}## and some linear form ##\mu \in \mathfrak{h}^*##. With ##\alpha(X):=\operatorname{ad}(V_0)(X)=-\mu(X)V_0## we always get a non-trivial antisymmetric transformation of ##\mathfrak{h}##. Therefore ##\beta_1(X+b\alpha):=-\mu(X)V_+## defines a non-trivial antisymmetric transformation of ##\mathfrak{g}=\mathfrak{B}\ltimes \mathfrak{A(B)}##, since ##\mathfrak{I}=\mathbb{C}\cdot V_+ \triangleleft \mathfrak{g}## is a one-dimensional ideal. However, ##\mathbb{C}\cdot \alpha = \mathfrak{A(B)}## is also a one-dimensional ideal of ##\mathfrak{g}##, so ##\beta_2(X+b\alpha):=\mu(X)\alpha## is antisymmetric, too, and linearly independent of ##\beta_1##. Thus
$$
\dim \mathfrak{A(g)}=\dim \mathfrak{A}(\mathfrak{B}\ltimes \mathfrak{A(B)}) \ge 2
$$
 
