Math Challenge - April 2021

In summary, the conversation discussed a differential equation motivated by Kepler's third law, ##T'(a)=\gamma\sqrt[3]{T(a)}## with ##T(0)=0##, where ##a## is the length of the semi-major axis and ##\gamma## is a positive constant. The equation was solved for all values of ##a##, and it was determined that the solution is not unique: the right-hand side fails to be Lipschitz continuous at ##T=0##, so infinitely many piecewise solutions satisfy the initial condition.
  • #1
fresh_42
Mentor
Summary: Differential Equations, Linear Algebra, Topology, Algebraic Geometry, Number Theory, Functional Analysis, Integrals, Hilbert Spaces, Algebraic Topology, Calculus.

1. (solved by @etotheipi ) Let ##T## be a planet's orbital period, ##a## the length of the semi-major axis of its orbit. Then
$$
T'(a)=\gamma \sqrt[3]{T(a)}\, , \,T(0)=0
$$
with a constant proportionality factor ##\gamma >0.## Solve this equation for all ##a\in \mathbb{R}## and determine whether the solution is unique, and why.

2. (solved by @Office_Shredder ) Show that the Hadamard (elementwise) product of two positive definite complex matrices is again positive definite.

3. A function ##f\, : \,S^k\longrightarrow X##, where ##X\subseteq \mathbb{R}^m## is any topological subspace, is called antipodal if it is continuous and ##f(-x)=-f(x)## for all ##x\in S^k.## Show that the following statements are equivalent:

(a) For every antipodal map ##f: S^n\longrightarrow \mathbb{R}^n## there is a point ##x\in S^n## satisfying ##f(x)=0.##
(b) There is no antipodal map ##f: S^n\longrightarrow S^{n-1}.##
(c) There is no continuous mapping ##f : B^n\longrightarrow S^{n-1}## that is antipodal on the boundary.

Assume the conditions hold. Prove Brouwer's fixed point theorem:
Any continuous map ##f : B^n\longrightarrow B^n## has a fixed point.

4. (solved by @Office_Shredder ) Let ##Y## be an affine, complex variety. Prove that ##Y## is irreducible if and only if ##I(Y)## is a prime ideal.

5. Let ##p>5## be a prime number. Show that
$$
\left(\dfrac{6}{p}\right) = 1 \Longleftrightarrow p\equiv k \,(24) \text{ with } k\in \{1,5,19,23\}.
$$
The parentheses denote the Legendre symbol.

6. (solved by @benorin ) Let ## f\in L^2 ( \mathbb{R} ) ## and ## g : \mathbb{R} \longrightarrow \overline{\mathbb{R}} ## be given as
$$
g(t):=t\int_\mathbb{R} \chi_{[t,\infty )}(|x|)\exp(-t^2(|x|+1))f(x)\,dx
$$
Show that ##g\in L^1(\mathbb{R}).##

7.(a) (solved by @etotheipi and @Office_Shredder ) Let ##V## be the pyramid with vertices ##(0,0,1),(0,1,0),(1,0,0)## and ##(0,0,0).## Calculate
$$
\int_V \exp(x+y+z) \,dV
$$
7.(b) (solved by @graphking and @julian ) Let ##A\in \operatorname{GL}(d,\mathbb{R}).## Calculate
$$
\int_{\mathbb{R}^d}\exp(-\|Ax\|_2^2)\,dx
$$

8. (solved by @etotheipi ) Consider the Hilbert space ##H=L^2([0,1])## and its subspace ##K:=\operatorname{span}_\mathbb{C}\{x,1\}##. Let ##\pi^\perp\, : \,H\longrightarrow K## be the orthogonal projection. Give an explicit formula for ##\pi^\perp## and calculate ##\pi^\perp(e^x).##

9. (solved by @Office_Shredder ) Prove ##\pi_1(S^n;x)=\{e\}## for ##n\geq 2.##

10. (solved by @Office_Shredder ) Let ##U\subseteq \mathbb{R}^{2n}## be an open set, ##f\in C^2(U,\mathbb{R})## a twice continuously differentiable function, and ##\vec{a}\in U## a point. Prove that if ##f## has a critical point at ##\vec{a}## and the Hessian matrix ##Hf(\vec{a})## has a negative determinant, then ##f## has neither a local maximum nor a local minimum at ##\vec{a}.##


High Schoolers only (until 26th)

11. Show that every non-negative real polynomial ##p(x)## can be written as ##p(x)=a(x)^2+b(x)^2## with ##a(x),b(x)\in \mathbb{R}[x].##

12. (solved by @Not anonymous ) Show that all Pythagorean triples ##x^2+y^2=z^2## can be found by
$$
(x,y,z)=d\cdot (u^2-v^2,2uv,u^2+v^2) \text{ with }u,v\in \mathbb{N}\, , \,u>v \quad (*)
$$
and that a triple is primitive (no common divisor of ##x,y,z##) if and only if ##u,v## are coprime and one of them is odd and the other even.

(*) corrected statement

13. (solved by @Not anonymous ) Write
$$
\sqrt[8]{2207-\dfrac{1}{2207-\dfrac{1}{2207-\dfrac{1}{2207-\ldots}}}}
$$
as ##\dfrac{a+b\sqrt{c}}{d}.##

14. (solved by @Not anonymous ) To each positive integer with ##n^2## decimal digits, we associate the determinant of the matrix obtained by writing
the digits in order across the rows. For example, for ##n =2##, to the integer ##8617## we associate ##\det\left(\begin{bmatrix}
8&6\\1&7 \end{bmatrix}\right)=50.## Find, as a function of ##n##, the sum of all the determinants associated with ##n^2##-digit integers. Leading digits are assumed to be nonzero; for example, for ##n = 2##, there are ##9000## determinants: ##\displaystyle{f(2)=\sum_{1000\leq N\leq 9999}\det(N)}.##

15. (solved by @Not anonymous ) All squares on a chessboard are labeled from ##1## to ##64## in reading order (from left to right, row by row, top-down). Then someone places ##8## rooks on the board such that none threatens any other. Let ##S## be the sum of the labels of all squares that carry a rook. List all possible values of ##S.##
 
  • #2
For 7a, I think it's just
$$
\begin{align*}
\int_V \mathrm{d}V \, e^x e^y e^z &= \int_0^1 \mathrm{d}z \int_0^{1-z} \mathrm{d}y \int_0^{1-y-z} \mathrm{d}x \, e^x e^y e^z \\
&= \int_0^{1} \mathrm{d}z \int_0^{1-z} \mathrm{d}y \, (e - e^y e^z) \\
&= \int_0^1 \mathrm{d}z \, (e^z - ez) \\
&= \frac{1}{2} e - 1
\end{align*}
$$
Also @fresh_42, may I ask what the notation ##\|Ax\|_2^2## means?
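A quick numeric cross-check of this value, as a sketch (it assumes SciPy is available; scipy.integrate.tplquad expects the integrand arguments innermost-first):

```python
# Numeric sanity check for 7(a): integrate exp(x+y+z) over the simplex
# x, y, z >= 0, x + y + z <= 1, and compare with e/2 - 1.
import math
from scipy.integrate import tplquad

val, err = tplquad(
    lambda z, y, x: math.exp(x + y + z),         # integrand, innermost first
    0.0, 1.0,                                    # outer: x in [0, 1]
    lambda x: 0.0, lambda x: 1.0 - x,            # middle: y in [0, 1 - x]
    lambda x, y: 0.0, lambda x, y: 1.0 - x - y,  # inner: z in [0, 1 - x - y]
)
print(val, math.e / 2 - 1)  # both approximately 0.3591409...
```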
 
  • #3
etotheipi said:
For 7a, I think it's just
$$
\begin{align*}
\int_V \mathrm{d}V \, e^x e^y e^z &= \int_0^1 \mathrm{d}z \int_0^{1-z} \mathrm{d}y \int_0^{1-y-z} \mathrm{d}x \, e^x e^y e^z \\
&= \int_0^{1} \mathrm{d}z \int_0^{1-z} \mathrm{d}y \, (e - e^y e^z) \\
&= \int_0^1 \mathrm{d}z \, (e^z - ez) \\
&= \frac{1}{2} e - 1
\end{align*}
$$
Also @fresh_42, may I ask what the notation ##\|Ax\|_2^2## means?
The ##2##-norm (Euclidean norm) squared.
 
  • Like
Likes etotheipi
  • #4
A fun alternate solution for 7a is to integrate along the axis ##(t/3,t/3,t/3)## for ##t\in (0,1)##: slicing by ##t=x+y+z## turns the integral into ##\int_{0}^1 A(t)e^t \,dt##, where ##A(t)## is the effective cross-sectional area contributed by the intersection of the plane ##x+y+z=t## with the pyramid. This intersection is a triangle whose proportions grow linearly, hence whose area grows proportionally to ##t^2##, with ##A(0)=0##. At ##t=1## the slice is the equilateral triangle with vertices ##(1,0,0),(0,1,0),(0,0,1)##, side length ##\sqrt{2}## and planar area ##\sqrt{3}/2##; since neighboring planes ##x+y+z=t## and ##x+y+z=t+dt## lie a distance ##dt/\sqrt{3}## apart, the effective area is ##A(t)=t^2/2.##

This turns the whole integral into ##\frac{1}{2} \int_0^{1} t^2 e^t \,dt##, which you can solve by integrating by parts twice.
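For completeness, a symbolic spot-check of this reduction, as a sketch assuming SymPy:

```python
# Verify that (1/2) * Integral(t**2 * exp(t), (t, 0, 1)) equals e/2 - 1,
# matching the triple integral computed in post #2.
import sympy as sp

t = sp.symbols('t')
half_integral = sp.Rational(1, 2) * sp.integrate(t**2 * sp.exp(t), (t, 0, 1))
print(sp.simplify(half_integral - (sp.E / 2 - 1)))  # prints 0
```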
 
  • #5
Problem 1
Let ##f## be the area of a sector bounded by two neighboring radius vectors and an element of the path of the orbit. By Kepler's second law (equal areas are swept in equal times) we have
$$
\dot f=\gamma
$$
where ##\gamma## is a positive constant. Integrating we find
$$
\pi ab=\gamma T
$$
where ##\pi ab## is the area of the ellipse and ##T## is the period. I express the semi-major axis ##a## and semi-minor axis ##b## in terms of the semi-latus rectum ##p =a(1-e^2)##:
$$
a=\frac{p}{1-e^2}
$$
$$
b=\frac{p}{\sqrt{1-e^2}}
$$
where ##e## is the eccentricity of the orbit. Observe that for ##p=1## we have ##b=\sqrt{a}##, and thus
$$
T(a)=\frac{\pi a^{\frac{3}{2}}}{\gamma}
$$
$$
T(0)=0
$$
$$
T'=\frac{3}{2}\frac{\pi a^{\frac{1}{2}}}{\gamma}=\frac{3}{2}\left [ \frac{\pi}{\gamma}\right ]^{\frac{2}{3}}\left [ T(a) \right ]^{\frac{1}{3}}
$$
The solution is unique because of the constraint ##p=1##.
 
  • #6
Is the condition for 3(a) correct? Isn't f(x)=x a counterexample? I suspect you meant ##S^n\to \mathbb{R}^{n-1}##, mostly because I think I'm able to prove that equivalence with that one 😁
 
  • #7
Fred Wright said:
Problem 1
Let ##f## be the area of a sector bounded by two neighboring radius vectors and an element of the path of the orbit. By Kepler's second law (equal areas are swept in equal times) we have
$$
\dot f=\gamma
$$
where ##\gamma## is a positive constant. Integrating we find
$$
\pi ab=\gamma T
$$
where ##\pi ab## is the area of the ellipse and ##T## is the period. I express the semi-major axis ##a## and semi-minor axis ##b## in terms of the semi-latus rectum ##p =a(1-e^2)##:
$$
a=\frac{p}{1-e^2}
$$
$$
b=\frac{p}{\sqrt{1-e^2}}
$$
where ##e## is the eccentricity of the orbit. Observe that for ##p=1## we have ##b=\sqrt{a}##, and thus
$$
T(a)=\frac{\pi a^{\frac{3}{2}}}{\gamma}
$$
$$
T(0)=0
$$
$$
T'=\frac{3}{2}\frac{\pi a^{\frac{1}{2}}}{\gamma}=\frac{3}{2}\left [ \frac{\pi}{\gamma}\right ]^{\frac{2}{3}}\left [ T(a) \right ]^{\frac{1}{3}}
$$
The solution is unique because of the constraint ##p=1##.
This is not the most general solution.
 
  • #8
Office_Shredder said:
Is the condition for 3(a) correct? Isn't f(x)=x a counterexample? I suspect you meant ##S^n\to \mathbb{R}^{n-1}##, mostly because I think I'm able to prove that equivalence with that one 😁
No typo.
 
  • #9
fresh_42 said:
No typo.
What about his counterexample?
 
  • #10
martinbn said:
What about his counterexample?
I don't know; I haven't seen the proof of equivalence. The statements are about all antipodal maps, so I don't see how a single map can prove something about an equivalence for all of them. And ##f(x)=x=0## does not occur in situation (a).
 
  • #11
I'm confused, do you think that (a) is a true statement? I think ##f(x)=x## proves that (a) is not true, since it is antipodal and there is no point on the sphere for which ##f(x)=0##.

Unless you intend the question to be: prove that these three statements are equivalent, never mind that they would all be false (since (a) is false). There's a decent chance I just don't understand the question, in which case I'll just wait to see what other people write about it.
 
  • #12
Office_Shredder said:
I'm confused, do you think that (a) is a true statement? I think ##f(x)=x## proves that (a) is not true, since it is antipodal and there is no point on the sphere for which ##f(x)=0##.

Unless you intend the question to be: prove that these three statements are equivalent, never mind that they would all be false (since (a) is false). There's a decent chance I just don't understand the question, in which case I'll just wait to see what other people write about it.
As I understand it, we have ##S^{n}## naturally embedded in ##\mathbb{R}^{n+1}##. How do you define the identity map from a circle to a line?
 
  • #13
Oh jeez, I'm dumb. Thank you.

Somehow I had convinced myself that ##S^n## is embedded in ##\mathbb{R}^n##, which is why you would obviously need to drop the dimension by one.
 
  • #14
Office_Shredder said:
Somehow I had convinced myself that ##S^n## is embedded in ##\mathbb{R}^n##, which is why you would obviously need to drop the dimension by one.
Hmm, what shall I say: I'm happy that I'm not the only one who regularly stumbles over the ##n## in ##S^n##, or as a fellow student of mine used to put it: misery seeks companions. :biggrin:
 
  • #15
10. Use Taylor expansion, together with the continuity of each entry of the Hessian over ##U##. We know that when the point ##a## is a local maximum/minimum, the Hessian matrix is negative/positive semidefinite. Since we are in ##\mathbb{R}^{2n}##, the dimension is even, so either condition would force the determinant of the Hessian to be non-negative, contradicting the assumption that it is negative.
 
  • #16
7b. The answer is
$$
\frac{\pi^{d/2}}{|\det A|}.
$$
First, assume ##A## is the identity. Then the calculation reduces to a classical integral on ##\mathbb{R}^1##: the Riemann integral of ##e^{-x^2}## from ##-\infty## to ##+\infty## is ##\pi^{1/2}##, which is most easily evaluated (as far as my knowledge reaches) by computing its square, i.e. the integral of ##e^{-x^2-y^2}## over ##\mathbb{R}^2##; a polar-coordinate transformation reduces this to a constant times the integral of ##r e^{-r^2}## from ##0## to ##+\infty##.
For any invertible ##A##, change variables via the linear substitution ##x=A^{-1}y##; the change-of-variables formula then gives the answer.

By the way, I am always curious why the Jordan measure of the image of a cube under a linear map such as ##A^{-1}## equals the ##|\det|## of the transform times the measure of the original cube, which one can check directly for ##n=1,2,3##.
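A numeric spot-check of the claimed value in low dimension, as a sketch assuming NumPy and SciPy; the matrix below is just an arbitrary invertible example:

```python
# Check 7(b) for d = 2: integrate exp(-||Ax||_2^2) over a large box and
# compare with pi^(d/2) / |det A|.
import numpy as np
from scipy.integrate import dblquad

A = np.array([[2.0, 1.0],
              [0.0, 1.5]])  # invertible, det A = 3

f = lambda y, x: np.exp(-np.linalg.norm(A @ np.array([x, y])) ** 2)
val, _ = dblquad(f, -10.0, 10.0, lambda x: -10.0, lambda x: 10.0)
print(val, np.pi / abs(np.linalg.det(A)))  # both approximately 1.0472
```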
 
  • #17
graphking said:
7b. The answer is
$$
\frac{\pi^{d/2}}{|\det A|}.
$$
First, assume ##A## is the identity. Then the calculation reduces to a classical integral on ##\mathbb{R}^1##: the Riemann integral of ##e^{-x^2}## from ##-\infty## to ##+\infty## is ##\pi^{1/2}##, which is most easily evaluated (as far as my knowledge reaches) by computing its square, i.e. the integral of ##e^{-x^2-y^2}## over ##\mathbb{R}^2##; a polar-coordinate transformation reduces this to a constant times the integral of ##r e^{-r^2}## from ##0## to ##+\infty##.
For any invertible ##A##, change variables via the linear substitution ##x=A^{-1}y##; the change-of-variables formula then gives the answer.

By the way, I am always curious why the Jordan measure of the image of a cube under a linear map such as ##A^{-1}## equals the ##|\det|## of the transform times the measure of the original cube, which one can check directly for ##n=1,2,3##.
Please write down your suggestions in more detail so that more readers can see what you actually mean. And please use LaTeX; here is an explanation of how to type formulas on PF: https://www.physicsforums.com/help/latexhelp/
 
  • #18
fresh_42 said:
1. Let ##T## be a planet's orbital period, ##a## the length of the semi-major axis of its orbit. Then
$$
T'(a)=\gamma \sqrt[3]{T(a)}\, , \,T(0)=0
$$
with a constant proportionality factor ##\gamma >0.## Solve this equation for all ##a\in \mathbb{R}## and determine whether the solution is unique, and why.

Solving by separation of variables,
$$
\begin{align*}
\int \frac{dT}{T^{1/3}} &= \int \gamma \,da \\
\frac{3}{2} T^{2/3} &= \gamma a + c \implies T_{\pm}(a) = \pm \left(\frac{2}{3} \gamma a + c' \right)^{3/2}
\end{align*}
$$
With ##T(0) = 0## you have ##T_{\pm}(a) = \pm \left(\frac{2}{3}\gamma a \right)^{3/2}##.

And because ##\frac{\gamma}{3} T^{-2/3}## blows up at ##T=0##, there's not a unique solution in any open interval containing ##a=0##... for example, ##T(a) = 0## is also a solution.
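A quick symbolic confirmation of the nontrivial branch, as a sketch assuming SymPy:

```python
# Check that T(a) = (2*gamma*a/3)**(3/2) satisfies T'(a) = gamma * T(a)**(1/3);
# the zero function T = 0 satisfies the equation trivially.
import sympy as sp

a, gamma = sp.symbols('a gamma', positive=True)
T = (sp.Rational(2, 3) * gamma * a) ** sp.Rational(3, 2)
print(sp.simplify(sp.diff(T, a) - gamma * T ** sp.Rational(1, 3)))  # prints 0
```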
 
  • #19
etotheipi said:
Solving by separation of variables,
$$
\begin{align*}
\int \frac{dT}{T^{1/3}} &= \int \gamma \,da \\
\frac{3}{2} T^{2/3} &= \gamma a + C \implies T(a) = \pm \left(\frac{2}{3} \gamma a + D \right)^{3/2}
\end{align*}
$$
With ##T(0) = 0## you have ##T(a) = \pm \left(\frac{2}{3}\gamma a \right)^{3/2}##.

And because ##\frac{\gamma}{3} T^{-2/3}## blows up at ##T=0##, there's not a unique solution in any open interval containing ##a=0##... for example, ##T(a) = 0## is also a solution.
Better than post #5, but still not the full solution. You even said it(!) but failed to combine the two facts. Hint: There is another reason for the lack of uniqueness.
 
  • #20
fresh_42 said:
Better than post #5, but still not the full solution. You even said it(!) but failed to combine the two facts. Hint: One can resolve the discontinuity problem, which means there has to be another reason for the lack of uniqueness.

Hmm, I'm not sure I understand completely. Do you mean that you can combine the ##T(a)=0## solution and the ##T_{\pm}(a) = \pm \left(\frac{2}{3}\gamma a \right)^{3/2}## one shifted a little to the right, e.g. ##T(a) = 0## if ##a < \xi## and ##T_{\pm}(a) =\pm \left(\frac{2}{3}\gamma (a - \xi) \right)^{3/2}## if ##a \geq \xi## for some ##\xi \geq 0##, and then you have another solution satisfying the ICs?
 
  • #21
etotheipi said:
Hmm, I'm not sure I understand completely. Do you mean that you can combine the ##y=0## solution and the ##T(a) = \pm \left(\frac{2}{3}\gamma a \right)^{3/2}## one shifted a little to the right, e.g. ##T(a) = 0## if ##a < \xi## and ##T(a) =\pm \left(\frac{2}{3}\gamma (a - \xi) \right)^{3/2}## if ##a \geq \xi## for some ##\xi \geq 0##, and then you have another solution satisfying the ICs?
Yes. Infinitely many ##\xi## give infinitely many solutions. All conditions of the Picard-Lindelöf theorem hold, except the Lipschitz continuity of ##f(a,T)=\gamma \sqrt[3]{T}## at ##T=0.## The function ##x\longmapsto \sqrt[3]{x}## isn't Lipschitz continuous in any neighborhood of ##0.## This example shows that Lipschitz continuity is crucial for the uniqueness part of the Picard-Lindelöf theorem, in both the local and the global version.
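A small numeric illustration of this one-parameter family, as a sketch assuming NumPy; the shifts ##\xi## below are arbitrary:

```python
# Each shifted function T_xi (identically zero up to xi, then the 3/2-power
# branch) satisfies T' = gamma * T**(1/3) with T(0) = 0, so uniqueness fails.
import numpy as np

gamma = 1.0

def T(a, xi):
    """Piecewise solution: zero up to xi, power branch afterwards."""
    return (2.0 * gamma * np.maximum(a - xi, 0.0) / 3.0) ** 1.5

a = np.linspace(0.0, 5.0, 100001)
for xi in (0.0, 1.0, 2.5):
    Ta = T(a, xi)
    residual = np.gradient(Ta, a) - gamma * Ta ** (1.0 / 3.0)
    print(xi, np.max(np.abs(residual)))  # small, up to discretization error
```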
 
  • #22
That's way better than my solution, which was: it's not unique because of the plus or minus...
 
  • #23
Problem 2 attempt. There are... so many ways to rewrite the inner and outer products of Hadamard products. Or my proof is wrong :)

We will use ##X\times Y## to denote the Hadamard product of ##X## and ##Y##. If ##X## and ##Y## are positive definite, we can write them as
$$X=\sum_{i=1}^n \lambda_i x_i x_i^t,$$
$$Y=\sum_{j=1}^n \gamma_j y_j y_j^t$$
where ##x_i## are orthonormal eigenvectors (written as column vectors) of ##X## with eigenvalue ##\lambda_i>0##, and similar for ##Y##. The Hadamard product is distributive, so
$$X\times Y = \sum_{i,j} \lambda_i \gamma_j (x_i x_i^t)\times (y_j y_j^t)$$

If ##x=(x^1,...,x^n)^t## and ##y=(y^1,...,y^n)^t## are column vectors, the matrix ##x x^t## has as its ##(i,j)## entry ##x^i x^j##, so the ##(i,j)## entry of ##(x x^t)\times (y y^t)## is ##(x^i y^i x^j y^j)##, which means ##(x x^t)\times (y y^t) = (x\times y)(x\times y)^t##.

So we can rewrite ##X\times Y## as
$$X\times Y = \sum_{i,j} \lambda_i \gamma_j (x_i\times y_j)(x_i\times y_j)^t$$.
This is obviously positive semidefinite as it's a sum of positive semidefinite matrices. To prove it is positive definite, we only need to show there is no nonzero vector ##v## such that ##v^t (X\times Y)v = 0##. It suffices to show there is no nonzero vector ##v## orthogonal to every ##x_i\times y_j##. But ##\left<v,x_i\times y_j\right> = \sum_{k=1}^n v^k x_i^k y_j^k## where ##v^k## is the ##k##th component of ##v## (and similarly for ##x_i## and ##y_j##), which we can rewrite as ##\left<v\times y_j,x_i\right>##.

For any fixed ##j##, ##\left<v\times y_j,x_i\right> = 0## for all ##i## can only happen if ##v\times y_j=0##. So the only possible ##v## is one for which ##v\times y_j = 0## for all ##j##. But if ##v\times y_j## is the zero vector, then in particular the sum of its components are zero, which means ##\left<v,y_j\right>=0## for all ##j##. Since the ##y_j## are a basis of the full space to begin with, this can only happen if ##v=0##, hence proving that ##X\times Y## is positive definite.
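A numeric spot-check of the statement, as a sketch assuming NumPy. The problem concerns complex matrices, so the example builds random Hermitian positive definite ones:

```python
# The Hadamard (elementwise) product of two Hermitian positive definite
# matrices should again have strictly positive spectrum.
import numpy as np

rng = np.random.default_rng(1)
n = 5
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
C = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
X = B @ B.conj().T + 1e-3 * np.eye(n)  # Hermitian positive definite
Y = C @ C.conj().T + 1e-3 * np.eye(n)  # Hermitian positive definite

H = X * Y  # '*' on arrays is elementwise: the Hadamard product
print(np.linalg.eigvalsh(H).min() > 0)  # True
```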
 
  • #24
Office_Shredder said:
Problem 2 attempt. There are... so many ways to rewrite the inner and outer products of hadamard products. Or my proof is wrong :)

We will use ##X\times Y## to denote the Hadamard product of ##X## and ##Y##. If ##X## and ##Y## are positive definite, we can write them as
$$X=\sum_{i=1}^n \lambda_i x_i x_i^t,$$
$$Y=\sum_{j=1}^n \gamma_j y_j y_j^t$$
where ##x_i## are orthonormal eigenvectors (written as column vectors) of ##X## with eigenvalue ##\lambda_i>0##, and similar for ##Y##. The Hadamard product is distributive, so
$$X\times Y = \sum_{i,j} \lambda_i \gamma_j (x_i x_i^t)\times (y_j y_j^t)$$

If ##x=(x^1,...,x^n)^t## and ##y=(y^1,...,y^n)^t## are column vectors, the matrix ##x x^t## has as its ##(i,j)## entry ##x^i x^j##, so the ##(i,j)## entry of ##(x x^t)\times (y y^t)## is ##(x^i y^i x^j y^j)##, which means ##(x x^t)\times (y y^t) = (x\times y)(x\times y)^t##.

So we can rewrite ##X\times Y## as
$$X\times Y = \sum_{i,j} \lambda_i \gamma_j (x_i\times y_j)(x_i\times y_j)^t$$.
This is obviously positive semidefinite as it's a sum of positive semidefinite matrices. To prove it is positive definite, we only need to show there is no nonzero vector ##v## such that ##v^t (X\times Y)v = 0##. It suffices to show there is no nonzero vector ##v## orthogonal to every ##x_i\times y_j##. But ##\left<v,x_i\times y_j\right> = \sum_{k=1}^n v^k x_i^k y_j^k## where ##v^k## is the ##k##th component of ##v## (and similarly for ##x_i## and ##y_j##), which we can rewrite as ##\left<v\times y_j,x_i\right>##.

For any fixed ##j##, ##\left<v\times y_j,x_i\right> = 0## for all ##i## can only happen if ##v\times y_j=0##. So the only possible ##v## is one for which ##v\times y_j = 0## for all ##j##. But if ##v\times y_j## is the zero vector, then in particular the sum of its components are zero, which means ##\left<v,y_j\right>=0## for all ##j##. Since the ##y_j## are a basis of the full space to begin with, this can only happen if ##v=0##, hence proving that ##X\times Y## is positive definite.
Can you help me understand why ##X\times Y,## resp. ##(x_i\times y_j)(x_i\times y_j)^t## are positive semidefinite?
 
  • #25
fresh_42 said:
Can you help me understand why ##X\times Y,## resp. ##(x_i\times y_j)(x_i\times y_j)^t## are positive semidefinite?

Any matrix of the form ##xx^t## is positive semidefinite since ##v^t(xx^t)v = (v^tx)(x^tv)= \left<v,x\right>^2 \geq 0##. So ##v^t (X\times Y) v = \sum \lambda_i \gamma_j \left<v,x_i\times y_j\right>^2##. This is a sum of nonnegative numbers so is always nonnegative, and then the last step shows at least one term in this sum is positive.
 
  • #26
Office_Shredder said:
Any matrix of the form ##xx^t## is positive semidefinite since ##v^t(xx^t)v = (v^tx)(x^tv)= \left<v,x\right>^2 \geq 0##. So ##v^t (X\times Y) v = \sum \lambda_i \gamma_j \left<v,x_i\times y_j\right>^2##. This is a sum of nonnegative numbers so is always nonnegative, and then the last step shows at least one term in this sum is positive.
Do you mean ##xx^t=x\otimes x## as a rank-one matrix? I have difficulties with your notation.
 
  • #27
Yes, ##x## is a column matrix and ##x^t## is a row matrix, and ##xx^t## is an ##n\times n## matrix of rank 1 computed by doing normal matrix multiplication.
 
  • #28
Some definitions for #4:

An ideal J in C[X1,...,Xn] defines a complex affine variety V(J) = the set of points p in C^n such that every function in J vanishes at p. Thus p is in V(J) if and only if for all f in J, f(p) = 0.

A subset V of C^n is an affine variety if and only if there is some ideal J in C[X1,...,Xn] such that V = V(J). This J is usually not unique.

If V is such a variety, V defines a unique ideal I(V) = all functions g in C[X1,...,Xn] such that for all p in V, g(p) = 0.

Thus J is contained in I(V(J)) but J may be smaller than I(V(J)). E.g. if J = (X1^2), then I(V(J)) = (X1). Indeed I(V) is the largest ideal I such that V = V(I).

If V is a variety, then p belongs to V if and only if, for all f in I(V), f(p) = 0, and p does not belong to V if and only if for some f in I(V), f(p) ≠ 0.

A non empty variety V is reducible if and only if there exist non empty varieties U,W, both different from V, with V = U union W.

As a warmup, one might begin by showing the affine subvarieties of C^n satisfy the axioms for the closed sets of a topology, i.e. the intersection of any collection of affine subvarieties is an affine subvariety, as is the union of any two.

(People differ over whether to call the empty variety irreducible, even though it cannot be written as a union of two varieties different from itself, but in this problem, that would require calling the unit ideal (1) = C[X1,...,Xn] a prime ideal, which most people do not do, with the one notable exception perhaps of Zariski and Samuel.)

CHALLENGE: prove that the ring R of complex-valued functions on the 2-point space consisting of (0,1) and (1,0) in the complex x,y plane is not an integral domain. I.e. there exist nonzero functions f,g in R such that f.g = 0.

If V is the union of the x and y axes in C^2, show V is an affine variety, compute I(V), and show C[x,y]/I(V) is not a domain.

Then generalize your answer to solve problem #4, i.e. show that if V is a reducible affine variety in C^n, then C[x1,...,xn]/I(V) is not an integral domain.
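One possible worked instance of the middle step, sketched in the notation above: for ##V## the union of the two axes, ##V=V(xy)## and ##I(V)=(xy)##. In ##\mathbb{C}[x,y]/(xy)## the classes ##\bar{x}## and ##\bar{y}## are nonzero, since ##x## does not vanish at all points of the ##x##-axis and ##y## does not vanish at all points of the ##y##-axis, yet ##\bar{x}\,\bar{y}=\overline{xy}=0##. So ##\mathbb{C}[x,y]/I(V)## has zero divisors, i.e. ##(xy)## is not prime: ##xy\in (xy)## while ##x\notin (xy)## and ##y\notin (xy)##, matching the decomposition ##V=V(x)\cup V(y)## into two strictly smaller varieties.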
 
  • #29
What is the ##\chi_{[t,\infty )}(x)## function in 6? I am guessing it's an indicator function that the argument is in the interval, but wanted to check.
 
  • #30
Office_Shredder said:
What is the ##\chi_{[t,\infty )}(x)## function in 6? I am guessing it's an indicator function that the argument is in the interval, but wanted to check.
It is the indicator function.
 
  • #31
For number 10
short proof:
The Hessian matrix is symmetric with real eigenvalues. There's a theorem that if all its eigenvalues are negative then you have a local maximum, if they are all positive then it's a local minimum, and if there's at least one of each sign then it's neither a minimum nor a maximum.

Since the dimension of the space is even, the determinant is the product of an even number of real numbers, and since that gives us a negative result there must be at least one positive and one negative eigenvalue; hence ##a## is neither a maximum nor a minimum.

Here's a full proof with details, using Taylor polynomials in multiple dimensions: we write down the second-degree Taylor polynomial with its error term. ##(Hf)(a)## is the ##2n\times 2n## matrix of second-order partial derivatives at ##a##, and ##(Df)(a)## is a ##1\times 2n## row vector of the first-order partial derivatives. ##x## and ##a## are column vectors, and everything below is regular matrix multiplication. The following holds generally for twice continuously differentiable functions:

$$f(x)= f(a)+\left((Df)(a)\right)(x-a) + \frac{1}{2}(x-a)^t\left((Hf)(a)\right)(x-a)+ (x-a)^t h(x)(x-a)$$

Where ##h(x)## is a matrix with entries ##h_{ij}(x)## that are continuous and ##\lim_{x\to a}h_{ij}(x)=0##.

For the specific situation in #10, ##(Df)(a)=0##, and the determinant of ##(Hf)(a)## is negative. Since ##(Hf)(a)## is a symmetric matrix, its eigenvalues are all real numbers. Furthermore since it's symmetric it is diagonalizable with the diagonal entries being the eigenvalues. There are an even number of them, so for the product of them to be a negative number there must be both a positive and a negative eigenvalue.

Suppose ##v## is a unit eigenvector with eigenvalue ##\lambda## and consider ##f(a+\epsilon v)## for ##\epsilon >0##. Let's also write ##H=(Hf)(a)##. Then we get
$$f(a+\epsilon v) = f(a) + \frac{\epsilon^2}{2} v^t H v + \epsilon^2 v^t h(a+\epsilon v) v = f(a) + \epsilon^2 \left(\frac{\lambda}{2}+v^t h(a+\epsilon v) v\right).$$

We will now show that if ##\epsilon## is small enough, ##|v^t h(a+\epsilon v) v| < |\lambda|/4##. In general, for a matrix ##M## whose entries are at most ##\kappa## in magnitude, and for any unit vector ##v##,

$$|v^t M v| = |\sum_{i,j} M_{ij} v_i v_j | \leq \sum_{ij} |M_{ij} v_i v_j|$$

We use the fact that ##|M_{ij}|\leq \kappa## and ##|v_k|\leq 1## for all ##k## since ##v## is a unit vector (obviously this is a crude bound, I think you can probably do ##\sqrt{n}## better but we don't care) to get (remember ##M## is ##2n\times 2n##)
$$|v^t M v| \leq \sum_{i,j} \kappa = 4n^2 \kappa$$.

We know that the entries of the error term ##h(a+\epsilon v)## go to 0 as ##\epsilon## goes to zero, so we can pick ##\epsilon## small enough that all its entries are smaller in magnitude than ##\frac{|\lambda|}{16n^2}##, and then ##|v^t h(a+\epsilon v) v| \leq |\lambda|/4##. Let ##g(\epsilon)= v^t h(a+\epsilon v) v##, and restrict ourselves to ##\epsilon## small enough that ##|g(\epsilon)| \leq |\lambda|/4##. So we have ##f(a+\epsilon v)= f(a)+\epsilon^2(\lambda/2 +g(\epsilon))##. If we pick ##v## to be an eigenvector with positive eigenvalue ##\lambda##, then ##f(a+\epsilon v) \geq f(a) + \epsilon^2 \lambda/4 > f(a)##. If we pick ##v## to be an eigenvector with a negative eigenvalue, which I'll write as ##-\lambda## for ##\lambda > 0##, then ##f(a+\epsilon v) \leq f(a) - \epsilon^2 \lambda/4 < f(a)##. So ##a## is neither a local maximum nor a local minimum.
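A concrete instance of the statement, as a sketch: with ##n=1##, ##f(x,y)=x^2-y^2## has a critical point at the origin, ##\det Hf(0)=-4<0##, and the origin is indeed a saddle:

```python
# Saddle check for f(x, y) = x**2 - y**2 at the origin: f increases along
# the positive-eigenvalue direction and decreases along the negative one.
f = lambda x, y: x**2 - y**2
eps = 1e-3
print(f(eps, 0.0) > f(0.0, 0.0))  # True: no local maximum at the origin
print(f(0.0, eps) < f(0.0, 0.0))  # True: no local minimum at the origin
```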
 
  • #32
Office_Shredder said:
For number 10
short proof:
The Hessian matrix is symmetric with real eigenvalues. There's a theorem that if all its eigenvalues are negative then you have a local maximum, if they are all positive then it's a local minimum, and if there's at least one of each sign then it's neither a minimum nor a maximum.

Since the dimension of the space is even, the determinant is the product of an even number of real numbers, and since that gives us a negative result there must be at least one positive and one negative eigenvalue; hence ##a## is neither a maximum nor a minimum.
It's also short if we use the integral form of the remainder of the Taylor series (at a critical point):
$$
f(\vec{a}+\vec{h})= f(\vec{a})+\int_0^1(1-t)\langle \vec{h},Hf(\vec{a}+t\vec{h})\vec{h} \rangle\,dt
$$
 
  • #33
fresh_42 said:
It's also short if we use the integral form of the remainder of the Taylor series (at a critical point):
$$
f(\vec{a}+\vec{h})= f(\vec{a})+\int_0^1(1-t)\langle \vec{h},Hf(\vec{a}+t\vec{h})\vec{h} \rangle\,dt
$$

Oohh, that would have been really smart.
 
  • #34
For question 10, I would like to mention that the Hessian matrix is
fresh_42 said:
It's also short if we use the integral remainder of the Taylor seriess:
$$
f(\vec{a}+\vec{h})= f(\vec{a})+\int_0^1(1-t)\langle \vec{h},Hf(\vec{a}+t\vec{h})\vec{h} \rangle\,dt
$$
I think the Peano form or the integral form of the remainder would be just as efficient.
 
  • #35
If anyone feels intimidated by #8 because they're not sure what the definitions are: it only requires a small amount of linear algebra and calculus; see the spoiler for more.

##L^2([0,1])## is the set of all functions ##f(x)## on ##[0,1]## such that ##\int_0^1 |f(x)|^2 \,dx## is finite. This is a vector space, and it has an inner product defined by ##\left<f,g\right> = \int_0^1 f(x)\overline{g(x)}\,dx##. Then ##1## and ##x## are both in ##L^2([0,1])## and ##\left<1,x\right>=\int_0^1 1 \cdot x \,dx =1/2##. So these vectors are not orthogonal.
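Building on that hint, a numeric sketch (assuming NumPy) of the projection asked for in #8; solving the ##2\times 2## Gram system reproduces the closed form ##(4e-10)+(18-6e)x## that one obtains by solving the system exactly:

```python
# Project e^x onto K = span{1, x} in L^2([0,1]) by solving the Gram system
# G c = b, where G collects inner products of the basis vectors.
import numpy as np

G = np.array([[1.0, 0.5],          # <1,1>, <1,x>
              [0.5, 1.0 / 3.0]])   # <x,1>, <x,x>
b = np.array([np.e - 1.0, 1.0])    # <e^x,1> = e - 1, <e^x,x> = 1

c = np.linalg.solve(G, b)
print(c)  # approx [0.8731, 1.6903], i.e. (4e - 10) + (18 - 6e) x
```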
 
