Math Challenge - April 2021

fresh_42
Summary: Differential Equations, Linear Algebra, Topology, Algebraic Geometry, Number Theory, Functional Analysis, Integrals, Hilbert Spaces, Algebraic Topology, Calculus.

1. (solved by @etotheipi ) Let ##T## be a planet's orbital period, ##a## the length of the semi-major axis of its orbit. Then
$$
T'(a)=\gamma \sqrt[3]{T(a)}\, , \,T(0)=0
$$
with a constant proportionality factor ##\gamma >0.## Solve this equation for all ##a\in \mathbb{R}## and determine whether the solution is unique, and why.

2. (solved by @Office_Shredder ) Show that the Hadamard (elementwise) product of two positive definite complex matrices is again positive definite.

3. A function ##f\, : \,S^k\longrightarrow X## into a topological space ##X\subseteq \mathbb{R}^m## is called antipodal if it is continuous and ##f(-x)=-f(x)## for all ##x\in S^k.## Show that the following statements are equivalent:

(a) For every antipodal map ##f: S^n\longrightarrow \mathbb{R}^n## there is a point ##x\in S^n## satisfying ##f(x)=0.##
(b) There is no antipodal map ##f: S^n\longrightarrow S^{n-1}.##
(c) There is no continuous mapping ##f : B^n\longrightarrow S^{n-1}## that is antipodal on the boundary.

Assume the conditions hold. Prove Brouwer's fixed point theorem:
Any continuous map ##f : B^n\longrightarrow B^n## has a fixed point.

4. (solved by @Office_Shredder ) Let ##Y## be an affine, complex variety. Prove that ##Y## is irreducible if and only if ##I(Y)## is a prime ideal.

5. Let ##p>5## be a prime number. Show that
$$
\left(\dfrac{6}{p}\right) = 1 \Longleftrightarrow p\equiv k \,(24) \text{ with } k\in \{1,5,19,23\}.
$$
The parentheses denote the Legendre symbol.

6. (solved by @benorin ) Let ## f\in L^2 ( \mathbb{R} ) ## and ## g : \mathbb{R} \longrightarrow \overline{\mathbb{R}} ## be given as
$$
g(t):=t\int_\mathbb{R} \chi_{[t,\infty )}(|x|)\exp(-t^2(|x|+1))f(x)\,dx
$$
Show that ##g\in L^1(\mathbb{R}).##

7.(a) (solved by @etotheipi and @Office_Shredder ) Let ##V## be the pyramid with vertices ##(0,0,1),(0,1,0),(1,0,0)## and ##(0,0,0).## Calculate
$$
\int_V \exp(x+y+z) \,dV
$$
7.(b) (solved by @graphking and @julian ) Let ##A\in \operatorname{GL}(d,\mathbb{R}).## Calculate
$$
\int_{\mathbb{R}^d}\exp(-\|Ax\|_2^2)\,dx
$$

8. (solved by @etotheipi ) Consider the Hilbert space ##H:=L^2([0,1])## and its subspace ##K:=\operatorname{span}_\mathbb{C}\{x,1\}##. Let ##\pi^\perp\, : \,H\longrightarrow K## be the orthogonal projection. Give an explicit formula for ##\pi^\perp## and calculate ##\pi^\perp(e^x).##

9. (solved by @Office_Shredder ) Prove ##\pi_1(S^n;x)=\{e\}## for ##n\geq 2.##

10. (solved by @Office_Shredder ) Let ##U\subseteq \mathbb{R}^{2n}## be an open set, ##f\in C^2(U,\mathbb{R})## a twice continuously differentiable function, and ##\vec{a}\in U## a point. Prove that if ##f## has a critical point at ##\vec{a}## and the Hessian matrix ##Hf(\vec{a})## has a negative determinant, then ##f## has neither a local maximum nor a local minimum at ##\vec{a}.##


High Schoolers only (until 26th)

11. Show that every non-negative real polynomial ##p(x)##, i.e. ##p(x)\geq 0## for all ##x\in\mathbb{R}##, can be written as ##p(x)=a(x)^2+b(x)^2## with ##a(x),b(x)\in \mathbb{R}[x].##

12. (solved by @Not anonymous ) Show that all Pythagorean triples ##x^2+y^2=z^2## can be found by
$$
(x,y,z)=d\cdot (u^2-v^2,2uv,u^2+v^2) \text{ with }u,v\in \mathbb{N}\, , \,u>v \quad (*)
$$
and that they are primitive (no common divisor of ##x,y,z##) if and only if ##u,v## are coprime and one of them is odd and the other even.

(*) corrected statement

13. (solved by @Not anonymous ) Write
$$
\sqrt[8]{2207-\dfrac{1}{2207-\dfrac{1}{2207-\dfrac{1}{2207-\ldots}}}}
$$
as ##\dfrac{a+b\sqrt{c}}{d}.##

14. (solved by @Not anonymous ) To each positive integer with ##n^2## decimal digits, we associate the determinant of the matrix obtained by writing
the digits in order across the rows. For example, for ##n =2##, to the integer ##8617## we associate ##\det\left(\begin{bmatrix}
8&6\\1&7 \end{bmatrix}\right)=50.## Find, as a function of ##n##, the sum of all the determinants associated with ##n^2-##digit integers. Leading digits are assumed to be nonzero; for example, for ##n = 2##, there are ##9000## determinants: ##\displaystyle{f(2)=\sum_{1000\leq N\leq 9999}\det(N)}.##

15. (solved by @Not anonymous ) All squares on a chessboard are labeled from ##1## to ##64## in reading order (from left to right, row by row top-down). Then someone places ##8## rooks on the board such that none threatens any other. Let ##S## be the sum of the labels of all squares which carry a rook. List all possible values of ##S.##
 
For 7a, I think it's just$$
\begin{align*}

\int_V \mathrm{d}V \, e^x e^y e^z &= \int_0^1 \mathrm{d}z \int_0^{1-z} \mathrm{d}y \int_0^{1-y-z} \mathrm{d}x \, e^x e^y e^z \\

&= \int_0^{1} \mathrm{d}z \int_0^{1-z} \mathrm{d}y \, (e - e^y e^z) \\

&= \int_0^1 \mathrm{d}z \, (e^z - ez) \\

&= \frac{1}{2} e - 1

\end{align*}$$ Also @fresh_42, may I ask what the notation ##\|Ax\|_2^2## means?
 
etotheipi said:
For 7a, I think it's just$$
\begin{align*}

\int_V \mathrm{d}V \, e^x e^y e^z &= \int_0^1 \mathrm{d}z \int_0^{1-z} \mathrm{d}y \int_0^{1-y-z} \mathrm{d}x \, e^x e^y e^z \\

&= \int_0^{1} \mathrm{d}z \int_0^{1-z} \mathrm{d}y \, (e - e^y e^z) \\

&= \int_0^1 \mathrm{d}z \, (e^z - ez) \\

&= \frac{1}{2} e - 1

\end{align*}$$ Also @fresh_42, may I ask what the notation ##\|Ax\|_2^2## means?
The ##2-##norm (Euclidean norm) squared.
 
A fun alternate solution for 7a is to integrate along the axis ##(t/3,t/3,t/3)## for ##t\in (0,1).## Then you get ##\frac{1}{\sqrt{3}}\int_{0}^1 A(t)e^t dt,## where ##A(t)## is the area of the intersection of the plane ##x+y+z=t## with the pyramid and the factor ##1/\sqrt{3}## is the spacing ##dt/\sqrt{3}## between the planes ##x+y+z=t## and ##x+y+z=t+dt.## This intersection is a triangle whose proportions grow linearly, so its area grows proportionally to ##t^2##, with ##A(0)=0## and ##A(1)## the area of the equilateral triangle with vertices ##(1,0,0),(0,1,0),(0,0,1)##, which has side length ##\sqrt{2}## and hence area ##\sqrt{3}/2.##

This turns the whole integral into ##\frac{1}{2} \int_0^{1} t^2 e^t dt##, which you can solve by doing integration by parts twice.
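For readers who want a quick numerical sanity check of ##\frac{1}{2}e-1\approx 0.3591##, here is a small Monte Carlo sketch (my own illustration, not part of the original posts; the sample count and seed are arbitrary choices):

```python
import numpy as np

# Estimate the integral of exp(x+y+z) over the pyramid x,y,z >= 0, x+y+z <= 1
# by sampling uniformly in the unit cube (volume 1) and keeping points inside V.
rng = np.random.default_rng(0)
N = 2_000_000
pts = rng.random((N, 3))
inside = pts.sum(axis=1) <= 1.0
estimate = np.exp(pts[inside].sum(axis=1)).sum() / N

print(estimate, np.e / 2 - 1)   # both should be approximately 0.359
```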
 
Problem 1
Let ##f## be the area of a sector bounded by two neighboring radius vectors and an element of the path of the orbit. By Kepler's second law (equal areas are swept in equal times) we have
$$
\dot f=\gamma
$$
where ##\gamma## is a positive constant. Integrating we find
$$
\pi ab=\gamma T
$$
where ##\pi ab ## is the area of the ellipse and ##T## is the period. I express the semi-major axis ##a## and the semi-minor axis ##b## in terms of the semi-latus rectum ##p =a(1-e^2)##
$$
a=\frac{p}{1-e^2}
$$
$$
b=\frac{p}{\sqrt{1-e^2}}
$$
where ##e## is the eccentricity of the orbit. Observe that for ##p=1## we have ##b=\sqrt{a},## and thus
$$
T(a)=\frac{\pi a^{\frac{3}{2}}}{\gamma}
$$
$$
T(0)=0
$$
$$
T'=\frac{3}{2}\frac{\pi a^{\frac{1}{2}}}{\gamma}=\frac{3}{2}\left [ \frac{\pi}{\gamma}\right ]^{\frac{2}{3}}\left [ T(a) \right ]^{\frac{1}{3}}
$$
The solution is unique because of the constraint ##p=1##.
 
Is the condition for 3(a) correct? Isn't f(x)=x a counterexample? I suspect you meant ##S^n\to \mathbb{R}^{n-1}##, mostly because I think I'm able to prove that equivalence with that one 😁
 
Fred Wright said:
Problem 1
Let ##f## be the area of a sector bounded by two neighboring radius vectors and an element of the path of the orbit. By Kepler's second law (equal areas are swept in equal times) we have
$$
\dot f=\gamma
$$
where ##\gamma## is a positive constant. Integrating we find
$$
\pi ab=\gamma T
$$
where ##\pi ab ## is the area of the ellipse and ##T## is the period. I express the semi-major axis ##a## and the semi-minor axis ##b## in terms of the semi-latus rectum ##p =a(1-e^2)##
$$
a=\frac{p}{1-e^2}
$$
$$
b=\frac{p}{\sqrt{1-e^2}}
$$
where ##e## is the eccentricity of the orbit. Observe that for ##p=1## we have ##b=\sqrt{a},## and thus
$$
T(a)=\frac{\pi a^{\frac{3}{2}}}{\gamma}
$$
$$
T(0)=0
$$
$$
T'=\frac{3}{2}\frac{\pi a^{\frac{1}{2}}}{\gamma}=\frac{3}{2}\left [ \frac{\pi}{\gamma}\right ]^{\frac{2}{3}}\left [ T(a) \right ]^{\frac{1}{3}}
$$
The solution is unique because of the constraint ##p=1##.
This is not the most general solution.
 
Office_Shredder said:
Is the condition for 3(a) correct? Isn't f(x)=x a counterexample? I suspect you meant ##S^n\to \mathbb{R}^{n-1}##, mostly because I think I'm able to prove that equivalence with that one 😁
No typo.
 
fresh_42 said:
No typo.
What about his counter example?
 
  • #10
martinbn said:
What about his counter example?
I don't know, I haven't seen the proof of equivalence. The statements are for any antipodal maps, so I don't see how a single one can prove something about an equivalence for any. And ##f(x)=x=0## does not occur in the situation (a).
 
  • #11
I'm confused, do you think that (a) is a true statement? I think f(x)=x proves that (a) is not true, since it is antipodal and there is no point on the sphere for which ##f(x)=0##.

Unless you intend for the question to be: prove these three statements are equivalent, never mind the fact that they would all be false (since (a) would be false). There's a decent chance I just don't understand the question, in which case I'll just wait to see what other people write about it.
 
  • #12
Office_Shredder said:
I'm confused, do you think that (a) is a true statement? I think f(x)=x proves that (a) is not true, since it is antipodal and there is no point on the sphere for which ##f(x)=0##.

Unless you intend for the question to be: prove these three statements are equivalent, never mind the fact that they would all be false (since (a) would be false). There's a decent chance I just don't understand the question, in which case I'll just wait to see what other people write about it.
As I understand it, we have ##S^{n}## naturally embedded in ##\mathbb{R}^{n+1}##. How do you define the identity map from a circle to a line?
 
  • #13
Oh jeez, I'm dumb. Thank you.

Somehow I had convinced myself that ##S^n## is embedded in ##\mathbb{R}^n## and hence why you obviously need to drop the dimension by one.
 
  • #14
Office_Shredder said:
Somehow I had convinced myself that ##S^n## is embedded in ##\mathbb{R}^n## and hence why you obviously need to drop the dimension by one.
Hmm, what shall I say: I'm happy that I'm not the only one who regularly stumbles upon this ##n## at ##S^n##, or as a fellow student of mine used to put it: Misery seeks companions. :biggrin:
 
  • #15
10. Use Taylor expansion, together with the continuity of each entry of the Hessian over ##U##. We know that if the point ##a## were a local max/min point, the Hessian matrix would be negative/positive semidefinite. And we are in ##\mathbb{R}^{2n}##, so the dimension is even; either condition would force the determinant of the Hessian matrix to be non-negative, contradicting the assumption that it is negative.
 
  • #16
7b. The answer is:
pi^(d/2)/det(A)
First, assume A is the identity. Then the calculation reduces to a classic integral on R^1: you may know that the Riemann integral of e^(-x^2) from -infinity to +infinity is pi^(1/2), which is best solved (as far as my knowledge reaches) by computing its square, the Riemann integral over R^2 of e^(-x^2-y^2); using a polar coordinate transformation this is simply a constant times the integral of e^(-r^2)*r from 0 to +infinity...
For any A that has an inverse, use the linear transform X=A^-1*Y to change the variable; using the change-of-variables formula, you get the answer.

By the way, I am always curious why the Jordan measure of something like A^-1*y (the linear image of a cube) is (the det of the transform)*m(the original cube), which seems really correct when n=1,2,3
 
  • #17
graphking said:
7b. The answer is:
pi^(d/2)/det(A)
First, assume A is the identity. Then the calculation reduces to a classic integral on R^1: you may know that the Riemann integral of e^(-x^2) from -infinity to +infinity is pi^(1/2), which is best solved (as far as my knowledge reaches) by computing its square, the Riemann integral over R^2 of e^(-x^2-y^2); using a polar coordinate transformation this is simply a constant times the integral of e^(-r^2)*r from 0 to +infinity...
For any A that has an inverse, use the linear transform X=A^-1*Y to change the variable; using the change-of-variables formula, you get the answer.

By the way, I am always curious why the Jordan measure of something like A^-1*y (the linear image of a cube) is (the det of the transform)*m(the original cube), which seems really correct when n=1,2,3
Please write down your suggestions in more detail so that more readers can see what you actually mean. And please use LaTeX; here is how you can type formulas on PF: https://www.physicsforums.com/help/latexhelp/
 
  • #18
fresh_42 said:
1. Let ##T## be a planet's orbital period, ##a## the length of the semi-major axis of its orbit. Then
$$
T'(a)=\gamma \sqrt[3]{T(a)}\, , \,T(0)=0
$$
with a constant proportional factor ##\gamma >0.## Solve this equation for all ##a\in \mathbb{R}## and determine whether the solution is unique and why.

Solving by separation of variables,$$
\begin{align*}
\int \frac{dT}{T^{1/3}} &= \int \gamma da \\

\frac{3}{2} T^{2/3} &= \gamma a + c \implies T_{\pm}(a) = \pm \left(\frac{2}{3} \gamma a + c' \right)^{3/2}

\end{align*}$$With ##T(0) = 0## you have ##T_{\pm}(a) = \pm \left(\frac{2}{3}\gamma a \right)^{3/2}##.

And because ##\frac{\gamma}{3} T^{-2/3}## is discontinuous at ##a=0##, there's not a unique solution in any open interval containing ##a=0##...for example, ##T(a) = 0## is also a solution.
 
  • #19
etotheipi said:
Solving by separation of variables,$$
\begin{align*}
\int \frac{dT}{T^{1/3}} &= \int \gamma da \\

\frac{3}{2} T^{2/3} &= \gamma a + C \implies T(a) = \pm \left(\frac{2}{3} \gamma a + D \right)^{3/2}

\end{align*}$$With ##T(0) = 0## you have ##T(a) = \pm \left(\frac{2}{3}\gamma a \right)^{3/2}##.

And because ##\frac{\gamma}{3} T^{-2/3}## is discontinuous at ##a=0##, there's not a unique solution in any open interval containing ##a=0##...for example, ##T(a) = 0## is also a solution.
Better than post #5, but still not the full solution. You even said it(!) but failed to combine the two facts. Hint: There is another reason for the lack of uniqueness.
 
  • #20
fresh_42 said:
Better than post #5, but still not the full solution. You even said it(!) but failed to combine the two facts. Hint: One can resolve the discontinuity problem, which means there has to be another reason for the lack of uniqueness.

Hmm, I'm not sure comprendo completely. Do you mean that you can combine together the ##T(a)=0## solution and the ##T_{\pm}(a) = \pm \left(\frac{2}{3}\gamma a \right)^{3/2}## but shifted a little to the right, e.g. ##T(a) = 0## if ##a < \xi## and ##T_{\pm}(a) =\pm \left(\frac{2}{3}\gamma (a - \xi) \right)^{3/2}## if ##a \geq \xi## for some ##\xi \geq 0##, and then you have another solution satisfying the ICs?
 
  • #21
etotheipi said:
Hmm, I'm not sure comprendo completely. Do you mean that you can combine together the ##y=0## solution and the ##T(a) = \pm \left(\frac{2}{3}\gamma a \right)^{3/2}## but shifted a little to the right, e.g. ##T(a) = 0## if ##a < \xi## and ##T(a) =\pm \left(\frac{2}{3}\gamma (a - \xi) \right)^{3/2}## if ##a \geq \xi## for some ##\xi \geq 0##, and then you have another solution satisfying the ICs?
Yes. Infinitely many ##\xi## give infinitely many solutions. All conditions for the theorem of Picard-Lindelöf hold, except the Lipschitz continuity of ##f(a,T)=\gamma \sqrt[3]{T(a)}## at ##T(0)=0.## The function ##x\longmapsto \sqrt[3]{x}## isn't Lipschitz continuous in any neighborhood of ##0.## This example shows that Lipschitz continuity is crucial for the uniqueness part in the (local and global version) of the theorem of Picard-Lindelöf.
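A small numerical illustration of this family of solutions (my own sketch, not from the thread; the value of ##\gamma## and the grid are arbitrary): each shifted function ##T_\xi(a)=\left(\tfrac{2}{3}\gamma(a-\xi)\right)^{3/2}## for ##a\ge \xi## and ##T_\xi(a)=0## otherwise satisfies ##T'=\gamma T^{1/3}## with ##T(0)=0.##

```python
import numpy as np

gamma = 2.0
a = np.linspace(0.0, 5.0, 100_001)
da = a[1] - a[0]

def T(xi):
    # Shifted solution: zero up to xi, then (2*gamma*(a - xi)/3)^(3/2)
    return np.where(a >= xi, (2.0 * gamma * np.clip(a - xi, 0.0, None) / 3.0) ** 1.5, 0.0)

for xi in (0.0, 1.0, 2.5):
    Txi = T(xi)
    residual = np.gradient(Txi, da) - gamma * Txi ** (1.0 / 3.0)   # T' - gamma*T^(1/3)
    print(xi, Txi[0], np.abs(residual[1:-1]).max())   # initial value 0, residual close to 0
```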
 
  • #22
That's way better than my solution, which is it's not unique because it's plus or minus...
 
  • #23
Problem 2 attempt. There are... so many ways to rewrite the inner and outer products of hadamard products. Or my proof is wrong :)

We will use ##X\times Y## to denote the Hadamard product of ##X## and ##Y##. If ##X## and ##Y## are positive definite, we can write them as
$$X=\sum_{i=1}^n \lambda_i x_i x_i^t,$$
$$Y=\sum_{j=1}^n \gamma_j y_j y_j^t$$
where ##x_i## are orthonormal eigenvectors (written as column vectors) of ##X## with eigenvalue ##\lambda_i>0##, and similar for ##Y##. The Hadamard product is distributive, so
$$X\times Y = \sum_{i,j} \lambda_i \gamma_j (x_i x_i^t)\times (y_j y_j^t)$$

If ##x=(x^1,...,x^n)^t## and ##y=(y^1,...,y^n)^t## are column vectors, the matrix ##x x^t## has as its ##(i,j)## entry ##x^i x^j##, so the ##(i,j)## entry of ##(x x^t)\times (y y^t)## is ##(x^i y^i x^j y^j)##, which means ##(x x^t)\times (y y^t) = (x\times y)(x\times y)^t##.

So we can rewrite ##X\times Y## as
$$X\times Y = \sum_{i,j} \lambda_i \gamma_j (x_i\times y_j)(x_i\times y_j)^t$$.
This is obviously positive semidefinite as it's a sum of positive semidefinite matrices. To prove it is positive definite, we only need to show there is no nonzero vector ##v## such that ##v^t (X\times Y)v = 0##. It suffices to show there is no nonzero vector ##v## orthogonal to every ##x_i\times y_j##. But ##\left<v,x_i\times y_j\right> = \sum_{k=1}^n v^k x_i^k y_j^k## where ##v^k## is the ##k##th component of ##v## (and similar for ##x_i## and ##y_j##), which we can rewrite as ##\left<v\times y_j,x_i\right>##.

For any fixed ##j##, ##\left<v\times y_j,x_i\right> = 0## for all ##i## can only happen if ##v\times y_j=0##. So the only possible ##v## is one for which ##v\times y_j = 0## for all ##j##. But if ##v\times y_j## is the zero vector, then in particular the sum of its components are zero, which means ##\left<v,y_j\right>=0## for all ##j##. Since the ##y_j## are a basis of the full space to begin with, this can only happen if ##v=0##, hence proving that ##X\times Y## is positive definite.
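As a quick empirical sanity check of the statement (my own sketch, not part of the proof; the dimension, seed, and construction of the test matrices are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
for _ in range(100):
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    X = B.conj().T @ B + np.eye(n)   # Hermitian positive definite
    Y = C.conj().T @ C + np.eye(n)   # Hermitian positive definite
    H = X * Y                        # Hadamard (elementwise) product
    assert np.linalg.eigvalsh(H).min() > 0

print("all Hadamard products had strictly positive eigenvalues")
```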
 
  • #24
Office_Shredder said:
Problem 2 attempt. There are... so many ways to rewrite the inner and outer products of hadamard products. Or my proof is wrong :)

We will use ##X\times Y## to denote the Hadamard product of ##X## and ##Y##. If ##X## and ##Y## are positive definite, we can write them as
$$X=\sum_{i=1}^n \lambda_i x_i x_i^t,$$
$$Y=\sum_{j=1}^n \gamma_j y_j y_j^t$$
where ##x_i## are orthonormal eigenvectors (written as column vectors) of ##X## with eigenvalue ##\lambda_i>0##, and similar for ##Y##. The Hadamard product is distributive, so
$$X\times Y = \sum_{i,j} \lambda_i \gamma_j (x_i x_i^t)\times (y_j y_j^t)$$

If ##x=(x^1,...,x^n)^t## and ##y=(y^1,...,y^n)^t## are column vectors, the matrix ##x x^t## has as its ##(i,j)## entry ##x^i x^j##, so the ##(i,j)## entry of ##(x x^t)\times (y y^t)## is ##(x^i y^i x^j y^j)##, which means ##(x x^t)\times (y y^t) = (x\times y)(x\times y)^t##.

So we can rewrite ##X\times Y## as
$$X\times Y = \sum_{i,j} \lambda_i \gamma_j (x_i\times y_j)(x_i\times y_j)^t$$.
This is obviously positive semidefinite as it's a sum of positive semidefinite matrices. To prove it is positive definite, we only need to show there is no nonzero vector ##v## such that ##v^t (X\times Y)v = 0##. It suffices to show there is no nonzero vector ##v## orthogonal to every ##x_i\times y_j##. But ##\left<v,x_i\times y_j\right> = \sum_{k=1}^n v^k x_i^k y_j^k## where ##v^k## is the ##k##th component of ##v## (and similar for ##x_i## and ##y_j##), which we can rewrite as ##\left<v\times y_j,x_i\right>##.

For any fixed ##j##, ##\left<v\times y_j,x_i\right> = 0## for all ##i## can only happen if ##v\times y_j=0##. So the only possible ##v## is one for which ##v\times y_j = 0## for all ##j##. But if ##v\times y_j## is the zero vector, then in particular the sum of its components are zero, which means ##\left<v,y_j\right>=0## for all ##j##. Since the ##y_j## are a basis of the full space to begin with, this can only happen if ##v=0##, hence proving that ##X\times Y## is positive definite.
Can you help me understand why ##X\times Y,## resp. ##(x_i\times y_j)(x_i\times y_j)^t## are positive semidefinite?
 
  • #25
fresh_42 said:
Can you help me understand why ##X\times Y,## resp. ##(x_i\times y_j)(x_i\times y_j)^t## are positive semidefinite?

Any matrix of the form ##xx^t## is positive semidefinite since ##v^t(xx^t)v = (v^tx)(x^tv)= \left<v,x\right>^2 \geq 0##. So ##v^t (X\times Y) v = \sum \lambda_i \gamma_j \left<v,x_i\times y_j\right>^2##. This is a sum of nonnegative numbers so is always nonnegative, and then the last step shows at least one term in this sum is positive.
 
  • #26
Office_Shredder said:
Any matrix of the form ##xx^t## is positive semidefinite since ##v^t(xx^t)v = (v^tx)(x^tv)= \left<v,x\right>^2 \geq 0##. So ##v^t (X\times Y) v = \sum \lambda_i \gamma_j \left<v,x_i\times y_j\right>^2##. This is a sum of nonnegative numbers so is always nonnegative, and then the last step shows at least one term in this sum is positive.
Do you mean ##xx^t=x\otimes x## as a rank one matrix? I have difficulties with your notations.
 
  • #27
Yes, ##x## is a column matrix and ##x^t## is a row matrix, and ##xx^t## is an ##n\times n## matrix of rank 1 computed by doing normal matrix multiplication.
 
  • #28
Some definitions for #4:

An ideal J in C[X1,...,Xn] defines a complex affine variety V(J) = the set of points p in C^n such that every function in J vanishes at p. Thus p is in V(J) if and only if for all f in J, f(p) = 0.

A subset V of C^n is an affine variety if and only if there is some ideal J in C[X1,...,Xn] such that V = V(J). This J is usually not unique.

If V is such a variety, V defines a unique ideal I(V) = all functions g in C[X1,...,Xn] such that for all p in V, g(p) = 0.

Thus J is contained in I(V(J)) but J may be smaller than I(V(J)). E.g. if J = (X1^2), then I(V(J)) = (X1). Indeed I(V) is the largest ideal I such that V = V(I).

If V is a variety, then p belongs to V if and only if, for all f in I(V), f(p) = 0, and p does not belong to V if and only if for some f in I(V), f(p) ≠ 0.

A non empty variety V is reducible if and only if there exist non empty varieties U,W, both different from V, with V = U union W.

As a warmup, one might begin by showing the affine subvarieties of C^n satisfy the axioms for the closed sets of a topology, i.e. the intersection of any collection of affine subvarieties is an affine subvariety, as is the union of any two.

(People differ over whether to call the empty variety irreducible, even though it cannot be written as a union of two varieties different from itself, but in this problem, that would require calling the unit ideal (1) = C[X1,...,Xn] a prime ideal, which most people do not do, with the one notable exception perhaps of Zariski and Samuel.)

CHALLENGE: prove the ring R of complex valued functions on the 2 point space consisting of (0,1) and (1,0) in the complex x,y plane, is not an integral domain. I.e. there exist non zero functions f,g in R such that f.g = 0.

If V is the union of the x and y axes in C^2, show V is an affine variety, compute I(V), and show C[x,y]/I(V) is not a domain.

Then generalize your answer to solve problem #4, i.e. show that if V is a reducible affine variety in C^n, then C[x1,...,xn]/I(V) is not an integral domain.
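The warm-up challenge can be seen very concretely (a tiny sketch of my own, not mathwonk's; the lambdas simply restrict the polynomials x and y to the two points):

```python
# On V = {(0,1), (1,0)}, the restrictions of x and y are nonzero functions,
# yet their product vanishes at both points, so the function ring is not a domain.
points = [(0, 1), (1, 0)]
f = lambda x, y: x
g = lambda x, y: y

print([f(*p) for p in points])           # [0, 1] -> f is not identically zero on V
print([g(*p) for p in points])           # [1, 0] -> g is not identically zero on V
print([f(*p) * g(*p) for p in points])   # [0, 0] -> f*g is identically zero on V
```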
 
  • #29
What is the ##\chi_{[t,\infty]}(x)## function in 6? I am guessing it's an indicator function that the argument is in the interval but wanted to check.
 
  • #30
Office_Shredder said:
What is the ##\chi_{[t,\infty]}(x)## function in 6? I am guessing it's an indicator function that the argument is in the interval but wanted to check.
It is the indicator function.
 
  • #31
For number 10
short proof:
The Hessian matrix is symmetric with real eigenvalues. There's a theorem that if all its eigenvalues are negative then you have a local maximum, if they are all positive then it's a local minimum, and if there's at least one of each sign then it's neither a minimum nor a maximum.

Since the dimension of the space is even, the determinant is the product of an even number of real numbers, and since that gives us a negative result there must be at least one positive and one negative eigenvalue, and hence ##a## is neither a maximum nor a minimum.

Here's a full proof with details, using Taylor polynomials in multiple dimensions: we write down the second degree Taylor polynomial with its error term. ##(Hf)(a)## is the ##2n\times 2n## matrix of second degree partial derivatives at ##a##, and ##(Df)(a)## is a ##1\times 2n## row vector of the first order partial derivatives. ##x## and ##a## are column vectors, and everything below is regular matrix multiplication. Then the following holds generally for twice continuously differentiable functions:

$$f(x)= f(a)+\left((Df)(a)\right)(x-a) + \frac{1}{2}(x-a)^t\left((Hf)(a)\right)(x-a)+ (x-a)^t h(x)(x-a)$$

Where ##h(x)## is a matrix with entries ##h_{ij}(x)## that are continuous and ##\lim_{x\to a}h_{ij}(x)=0##.

For the specific situation in #10, ##(Df)(a)=0##, and the determinant of ##(Hf)(a)## is negative. Since ##(Hf)(a)## is a symmetric matrix, its eigenvalues are all real numbers. Furthermore since it's symmetric it is diagonalizable with the diagonal entries being the eigenvalues. There are an even number of them, so for the product of them to be a negative number there must be both a positive and a negative eigenvalue.

Suppose ##v## is a unit eigenvector with eigenvalue ##\lambda## and consider ##f(a+\epsilon v)## for ##\epsilon >0##. Let's also write ##H=(Hf)(a)##. Then we get
$$f(a+\epsilon v) = f(a) + \frac{\epsilon^2}{2} v^t H v + \epsilon^2 v^t h(a+\epsilon v) v = f(a) + \epsilon^2 \left(\frac{\lambda}{2}+v^t h(a+\epsilon v) v\right).$$

We will now show that if ##\epsilon## is small enough, ##|v^t h(a+\epsilon v) v| < |\lambda|/4##. In general for a matrix ##M## whose largest entry is ##\kappa##, and for any unit vector ##v##,

$$|v^t M v| = |\sum_{i,j} M_{ij} v_i v_j | \leq \sum_{ij} |M_{ij} v_i v_j|$$

We use the fact that ##|M_{ij}|\leq \kappa## and ##|v_k|\leq 1## for all ##k## since ##v## is a unit vector (obviously this is a crude bound, I think you can probably do ##\sqrt{n}## better but we don't care) to get (remember ##M## is ##2n\times 2n##)
$$|v^t M v| \leq \sum_{i,j} \kappa = 4n^2 \kappa$$.

We know that the entries of the error term ##h(a+\epsilon v)## go to ##0## as ##\epsilon## goes to zero, so if we pick ##\epsilon## small enough that all its entries are smaller in magnitude than ##\frac{|\lambda|}{16n^2}##, then ##|v^t h(a+\epsilon v) v| \leq |\lambda|/4##. Let's let ##g(\epsilon)= v^t h(a+\epsilon v) v##, and restrict ourselves to ##\epsilon## small enough that ##|g(\epsilon)| \leq |\lambda|/4##. So we have ##f(a+\epsilon v)= f(a)+\epsilon^2(\lambda/2 +g(\epsilon))##. If we pick ##v## to be an eigenvector with positive eigenvalue ##\lambda##, then ##f(a+\epsilon v) \geq f(a) + \epsilon^2 \lambda/4 > f(a)##. If we pick ##v## to be an eigenvector with a negative eigenvalue, which I'll write as ##-\lambda## for ##\lambda > 0##, then ##f(a+\epsilon v) \leq f(a) - \epsilon^2 \lambda/4 < f(a)##. So ##a## is neither a local maximum nor a local minimum.
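A concrete instance of the statement for ##2n=2## (my own illustration, not part of the proof; the function and step size are arbitrary): ##f(x,y)=x^2-y^2## has a critical point at the origin with ##\det Hf(0,0)=-4<0,## and sampling along the two eigenvector directions shows values both above and below ##f(0,0).##

```python
def f(x, y):
    return x**2 - y**2   # critical point at (0,0), Hessian diag(2, -2), determinant -4

eps = 1e-3
print(f(0.0, 0.0))        # 0
print(f(eps, 0.0))        # +eps^2 > 0, so (0,0) is not a local maximum
print(f(0.0, eps))        # -eps^2 < 0, so (0,0) is not a local minimum
```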
 
  • #32
Office_Shredder said:
For number 10
short proof:
The Hessian matrix is symmetric with real eigenvalues. There's a theorem that if all its eigenvalues are negative then you have a local maximum, if they are all positive then it's a local minimum, and if there's at least one of each sign then it's neither a minimum nor a maximum.

Since the dimension of the space is even, the determinant is the product of an even number of real numbers, and since that gives us a negative result there must be at least one positive and one negative eigenvalue, and hence ##a## is neither a maximum nor a minimum.
It's also short if we use the integral remainder of the Taylor series:
$$
f(\vec{a}+\vec{h})= f(\vec{a})+\int_0^1(1-t)\langle \vec{h},Hf(\vec{a}+t\vec{h})\vec{h} \rangle\,dt
$$
 
  • #33
fresh_42 said:
It's also short if we use the integral remainder of the Taylor series:
$$
f(\vec{a}+\vec{h})= f(\vec{a})+\int_0^1(1-t)\langle \vec{h},Hf(\vec{a}+t\vec{h})\vec{h} \rangle\,dt
$$

Oohh, that would have been really smart.
 
  • #34
for question 10, I would like to mention that the hessian matrix is
fresh_42 said:
It's also short if we use the integral remainder of the Taylor series:
$$
f(\vec{a}+\vec{h})= f(\vec{a})+\int_0^1(1-t)\langle \vec{h},Hf(\vec{a}+t\vec{h})\vec{h} \rangle\,dt
$$
I think the Peano or the integral form of the remainder would be just as efficient.
 
  • #35
If anyone feels intimidated by #8 because you're not sure what the definitions are, it only requires a small amount of linear algebra and calculus, see the spoiler for more.

##L^2([0,1])## is the set of all functions ##f(x)## on ##[0,1]## such that ##\int_0^1 |f(x)|^2 dx## exists. This is a vector space, and has an inner product defined by ##\left<f,g\right> = \int_0^1 f(x)\bar{g(x)}dx ##. Then ##1## and ##x## are both in ##L^2([0,1])## and##\left<1,x\right>=\int_0^1 1 \times x dx =1/2##. So these vectors are not orthogonal.
 
  • #36
probably wrong for #8 but I'll try anyway; by gram schmidt we can come up with an orthogonal basis ##(v_1, v_2)## for ##K## by setting ##v_1 = 1## and ##v_2 = x - \langle x, 1 \rangle = x - 1/2##. then it's just
$$\pi^{\bot}(v) = \frac{\langle v, x - \frac{1}{2} \rangle}{|x - \frac{1}{2}|^2} (x - \frac{1}{2}) + \langle v , 1 \rangle$$then$$\begin{align*}
\int_0^1 (x-\frac{1}{2}) e^x dx &= \frac{3-e}{2} \\

\int_0^1 (x-\frac{1}{2})^2 dx &= \frac{1}{12} \\

\int_0^1 e^x dx = e-1
\end{align*}$$so you just get$$\pi^{\bot}(e^x) = 6(3-e)(x-\frac{1}{2}) + (e-1)$$lol idk if that's right, but it's 3am so cba to check atm :smile:
 
  • #37
etotheipi said:
probably wrong for #8 but I'll try anyway; by gram schmidt we can come up with an orthogonal basis ##(v_1, v_2)## for ##K## by setting ##v_1 = 1## and ##v_2 = x - \langle x, 1 \rangle = x - 1/2##. then it's just
$$\pi^{\bot}(v) = \frac{\langle v, x - \frac{1}{2} \rangle}{|x - \frac{1}{2}|^2} (x - \frac{1}{2}) + \langle v , 1 \rangle$$then$$\begin{align*}
\int_0^1 (x-\frac{1}{2}) e^x dx &= \frac{3-e}{2} \\

\int_0^1 (x-\frac{1}{2})^2 dx &= \frac{1}{12} \\

\int_0^1 e^x dx = e-1
\end{align*}$$so you just get$$\pi^{\bot}(e^x) = 6(3-e)(x-\frac{1}{2}) + (e-1)$$lol idk if that's right, but it's 3am so cba to check atm :smile:
It is correct. And here are the sorted results:
$$
\pi^\perp(v) = \langle v,1 \rangle 1 + 12 \;\langle v,x-\dfrac{1}{2}\rangle \left(x-\dfrac{1}{2}\right)
$$
$$
\pi^\perp(e^x)=6x(3-e) +4e-10
$$
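As a numerical cross-check of these formulas (my own sketch, not from the thread; the grid size is an arbitrary choice), one can solve the normal equations for the best ##L^2([0,1])## approximation of ##e^x## in ##\operatorname{span}\{1,x\}## and compare with the coefficients ##4e-10## and ##6(3-e)##:

```python
import numpy as np

N = 200_000
x = (np.arange(N) + 0.5) / N          # midpoint grid on [0, 1]

def inner(f, g):
    # Midpoint-rule approximation of the L^2([0,1]) inner product
    return float(np.sum(f * g) / N)

basis = [np.ones_like(x), x]          # the (non-orthogonal) basis {1, x} of K
G = np.array([[inner(a, b) for b in basis] for a in basis])   # Gram matrix
rhs = np.array([inner(np.exp(x), b) for b in basis])
coeffs = np.linalg.solve(G, rhs)      # projection coefficients w.r.t. {1, x}

e = np.e
print(coeffs)                          # approximately [0.873, 1.690]
print(4 * e - 10, 6 * (3 - e))         # 0.873..., 1.690...
```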
 
  • #38
Another way of doing 7 b:

\begin{align*}
\| A x \|_2^2 &= x^T A^T A x
\end{align*}

Define ##M = A^T A##. Obviously, ##M^T = M##. As ##M## is real and symmetric there exists a matrix ##R## whose columns are eigenvectors of ##M##, such that ##R^T R = \mathbb{1}## and

\begin{align*}
M = R D R^T , \quad
D=
\begin{pmatrix}
\lambda_1 & & & \\
& \lambda_2 & & 0 \\
0 & & \ddots & \\
& & & \lambda_d
\end{pmatrix}
\end{align*}

where ##\lambda_i## are the eigenvalues. We make the following change of variables with unit Jacobian:

\begin{align*}
x' = R^T x \quad (|\det R^T| = 1)
\end{align*}

in

\begin{align*}
\int \prod_{i=1}^d dx_i \exp ( -x^T M x) &= \int \prod_{i=1}^d dx_i' \exp ( -x^{'T} D x')
\\
&= (\pi)^{d/2} / (\prod_{i=1}^d \lambda_i)^{1/2}
\\
&= (\pi)^{d/2} / (\det M)^{1/2}
\\
&= (\pi)^{d/2} / \det A .
\end{align*}
 
  • #39
fresh_42 said:
6. Let ## f\in L^2 ( \mathbb{R} ) ## and ## g : \mathbb{R} \longrightarrow \overline{\mathbb{R}} ## be given as
$$
g(t):=t\int_\mathbb{R} \chi_{[t,\infty )}(|x|)\exp(-t^2(|x|+1))f(x)\,dx
$$
Show that ##g\in L^1(\mathbb{R}).##

Reference for this definition is Papa Rudin pg. 65:

Definition: If ##0<p<\infty## and if ##f## is a complex measurable function on the measure space ##X##, define

$$\| f \|_{p}:=\left\{\int_{X} |f|^p\, d\mu\right\}^{\tfrac{1}{p}}$$

and let ##L^p( \mu )## consist of all ##f## for which ##\| f\|_{p}<\infty##.

Work:
$$\begin{align} \| g\|_{1} & =\int_{\mathbb{R}}\left| t \int_{\mathbb{R}}\chi_{\left[ t,\infty \right)}(|x|) \exp (-t^2(|x|+1)) f(x)\, dx \right| \, dt \\ & =
\begin{cases}
\int_{\mathbb{R}}\left| t \left( \int_{-\infty}^{-t}+\int_{t}^{\infty}\right) \exp (-t^2(|x|+1)) f(x)\, dx \right| \, dt & \text{if } t \geq 0 \\
\int_{\mathbb{R}}\left| t \int_{-\infty}^{\infty} \exp (-t^2(|x|+1)) f(x)\, dx \right| \, dt & \text{if } t < 0
\end{cases} \\ & = 4\int_{0}^{\infty} t\left| \int_{\max\left\{ t,0 \right\} }^{\infty} \exp (-t^2(|x|+1)) f(x)\, dx \right| \, dt \\ & \leq 4\int_{0}^{\infty} t \int_{\max\left\{ t,0 \right\} }^{\infty} \exp (-t^2(|x|+1))\left| f(x)\right| \, dx \, dt \\ & \leq \underbrace{4\int_{0}^{\infty} t e^{-t^2}\, dt}_{=2}\cdot \int_{\max\left\{ t,0 \right\} }^{\infty} |f(x)|\, dx \\ & \leq 2\left\{\left[ \int_{\max\left\{ t,0 \right\} }^{\infty} |f(x)|^2\, dx\right]^{\tfrac{1}{2}}\right\} ^{2} < \infty \\ \end{align}$$

where the last inequality (finiteness) follows from the hypothesis that ##f\in L^{2}(\mathbb{R} )##, and this was to be shown.
 
  • #40
benorin said:
Reference for this definition is Papa Rudin pg. 65:

Definition: If ##0<p<\infty## and if ##f## is a complex measurable function on the measure space ##X##, define

$$\| f \|_{p}:=\left\{\int_{X} |f|^p\, d\mu\right\}^{\tfrac{1}{p}}$$

and let ##L^p( \mu )## consist of all ##f## for which ##\| f\|_{p}<\infty##.

Work:
$$\begin{align} \| g\|_{1} & =\int_{\mathbb{R}}\left| t \int_{\mathbb{R}}\chi_{\left[ t,\infty \right)}(|x|) \exp (-t^2(|x|+1)) f(x)\, dx \right| \, dt \\ & =
\begin{cases}
\int_{\mathbb{R}}\left| t \left( \int_{-\infty}^{-t}+\int_{t}^{\infty}\right) \exp (-t^2(|x|+1)) f(x)\, dx \right| \, dt & \text{if } t \geq 0 \\
\int_{\mathbb{R}}\left| t \int_{-\infty}^{\infty} \exp (-t^2(|x|+1)) f(x)\, dx \right| \, dt & \text{if } t < 0
\end{cases} \\ & = 4\int_{0}^{\infty} t\left| \int_{\max\left\{ t,0 \right\} }^{\infty} \exp (-t^2(|x|+1)) f(x)\, dx \right| \, dt \\ & \leq 4\int_{0}^{\infty} t \int_{\max\left\{ t,0 \right\} }^{\infty} \exp (-t^2(|x|+1))\left| f(x)\right| \, dx \, dt \\ & \leq \underbrace{4\int_{0}^{\infty} t e^{-t^2}\, dt}_{=2}\cdot \int_{\max\left\{ t,0 \right\} }^{\infty} |f(x)|\, dx \\ & \leq 2\left\{\left[ \int_{\max\left\{ t,0 \right\} }^{\infty} |f(x)|^2\, dx\right]^{\tfrac{1}{2}}\right\} ^{2} < \infty \\ \end{align}$$

where the last inequality (finiteness) follows from the hypothesis that ##f\in L^{2}(\mathbb{R} )##, and this was to be shown.
The last inequality's name is Hölder.
 
  • #41
Don't the outermost square and square root cancel, and then it looks a lot to me like the second to last step is just asserting that
$$\int_{\max(t,0)}^\infty |f(x)| dx \leq \int_{\max(t,0)}^{\infty} |f(x)|^2 dx$$

I must be missing something because I don't think that's true.
 
  • #42
Office_Shredder said:
Don't the outermost square and square root cancel, and then it looks a lot to me like the second to last step is just asserting that
$$\int_{\max(t,0)}^\infty |f(x)| dx \leq \int_{\max(t,0)}^{\infty} |f(x)|^2 dx$$

I must be missing something because I don't think that's true.
If we do not separate the factors, then we have Hölder (##1=1/2 +1/2##)
$$
\int_\mathbb{R} |u(x)f(x)|\,dx =\|uf\|_1\leq \|u\|_2\|f\|_2 < \infty
$$
where ##u## is the exponential factor.
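Assembled in one line (my own sketch of how the hint completes the estimate, not a post from the thread): for ##t\neq 0,##
$$
|g(t)|\leq |t|\,e^{-t^2}\int_\mathbb{R} e^{-t^2|x|}\,|f(x)|\,dx \leq |t|\,e^{-t^2}\,\big\|e^{-t^2|x|}\big\|_2\,\|f\|_2 = |t|\,e^{-t^2}\cdot\frac{1}{|t|}\cdot\|f\|_2 = e^{-t^2}\,\|f\|_2,
$$
using ##\int_\mathbb{R} e^{-2t^2|x|}\,dx = 1/t^2,## so ##\|g\|_1\leq \sqrt{\pi}\,\|f\|_2<\infty.##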
 
  • #43
julian said:
Another way of doing 7 b:

\begin{align*}
\| A x \|_2^2 &= x^T A^T A x
\end{align*}

Define ##M = A^T A##. Obviously, ##M^T = M##. As ##M## is real and symmetric there exists a matrix ##R## whose columns are eigenvectors of ##M##, such that ##R^T R = \mathbb{1}## and

\begin{align*}
M = R D R^T , \quad
D=
\begin{pmatrix}
\lambda_1 & & & \\
& \lambda_2 & & 0 \\
0 & & \ddots & \\
& & & \lambda_d
\end{pmatrix}
\end{align*}

where ##\lambda_i## are the eigenvalues. We make the following change of variables with unit Jacobian:

\begin{align*}
x' = R^T x \quad (|\det R^T| = 1)
\end{align*}

in

\begin{align*}
\int \prod_{i=1}^d dx_i \exp ( -x^T M x) &= \int \prod_{i=1}^d dx_i' \exp ( -x^{'T} D x')
\\
&= (\pi)^{d/2} / (\prod_{i=1}^d \lambda_i)^{1/2}
\\
&= (\pi)^{d/2} / (\det M)^{1/2}
\\
&= (\pi)^{d/2} / \det A .
\end{align*}
Here's my proof again. I should mention that in my proof I used Sylvester's criterion. In particular the theorem:

"A real-symmetric matrix ##M## has non-negative eigenvalues if and only if ##M## can be factored as ## M = A^TA##, and all eigenvalues are positive if and only if ##A## is non-singular".

Also, I think the answer is supposed to be ##\pi^{d/2} / |\det A|##.
 
  • #44
julian said:
I should mention that in my proof I used Sylvester's criterion. In particular the theorem:

"A real-symmetric matrix ##M## has non-negative eigenvalues if and only if ##M## can be factored as ## M = A^TA##, and all eigenvalues are positive if and only if ##A## is non-singular".

Also, I think the answer is supposed to be ##\pi^{d/2} / |\det A|##.
The shortest version is probably by the transformation theorem for integrals. ##\varphi (x)=Ax## is ##C^1## and ##D\varphi =A.##
 
  • #45
fresh_42 said:
If we do not separate the factors, then we have Hölder (##1=1/2 +1/2##)
$$
\int_\mathbb{R} |u(x)f(x)|\,dx =\|uf\|_1\leq \|u\|_2\|f\|_2 < \infty
$$
where ##u## is the exponential factor.
Look at you giving mana from the prof: I had no idea this was so useful here... ;)

@Office_Shredder Yes I indeed reasoned like you had and must have assumed ##|f(x)|\geq 1## in my head lol
 
  • #46
I should have just said, that in my proof of 7 b, I used: Given that ##M = A^T A##, the eigenvalues of ##M## are non-negative because

\begin{align*}
\lambda = \frac{v^T M v}{v^T v} = \frac{v^T A^T A v}{v^T v} = \frac{\| A v \|_2^2}{\| v \|_2^2} \geq 0 .
\end{align*}

And the eigenvalues must be non-zero because ##M## is non-singular if ##A## is non-singular. That would have been more helpful to people. This is the reverse implication of the theorem:

"A real-symmetric matrix ##M## has non-negative eigenvalues if and only if ##M## can be factored as ## M = A^TA##, and all eigenvalues are positive if and only if ##A## is non-singular"

And, at the very end of my calculation (post #38) I should have written down ##\pi^{d/2} / |\det A|## because ##(\det M)^{1/2}## is a positive number.
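A quick Monte Carlo sanity check of ##\pi^{d/2}/|\det A|## (my own sketch, not from the thread; the matrix ##A## and the sample size are arbitrary test choices, with ##A## taken well-conditioned so the importance-sampling ratio stays bounded):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])       # invertible, det A = 8, smallest singular value 1
d = A.shape[0]

rng = np.random.default_rng(3)
N = 1_000_000
x = rng.standard_normal((N, d))       # samples from the standard Gaussian density phi

phi = (2 * np.pi) ** (-d / 2) * np.exp(-0.5 * np.sum(x**2, axis=1))
integrand = np.exp(-np.sum((x @ A.T) ** 2, axis=1))
estimate = np.mean(integrand / phi)   # importance-sampling estimate of the integral

print(estimate, np.pi ** (d / 2) / abs(np.linalg.det(A)))   # both approximately 0.696
```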
 
  • #47
fresh_42 said:
The shortest version is probably by the transformation theorem for integrals. ##\varphi (x)=Ax## is ##C^1## and ##D\varphi =A.##

So you are taking a vector valued function ##\varphi (x)## and considering the derivative of it. So in component form ##(D \varphi)_{ij} = \partial_j \varphi_i (x) = A_{ij}##. Is that right? Where do you go from there?
 
  • #50
fresh_42 said:
13. Write
A doubt, possibly silly, about question 13: are ##a, b, c, d## required to be integers?
 
