Math Challenge - July 2020

  • #51
@lavinia Manifolds are CW complexes by Morse theory, and a CW complex must be finite to be compact (otherwise picking an interior point from each cell gives an infinite discrete subspace), so I think that part of the argument is fine. The asphericity assumption looks like the main issue to me.

Your second argument has the same idea as my argument in post 46. You can take a look there to see one way to sharpen it.
 
  • #52
Infrared said:
@lavinia Manifolds are CW complexes by Morse theory, and a CW complex must be finite to be compact (otherwise picking an interior point from each cell gives an infinite discrete subspace), so I think that part of the argument is fine. The asphericity assumption looks like the main issue to me.

Your second argument has the same idea as my argument in post 46. You can take a look there to see one way to sharpen it.
The asphericity assumption is weaker than what the problem demanded; it is used for getting a contradiction. The argument is correct.

Morse theory, I think, works for smooth manifolds, but I don't know what works for other manifolds.
 
  • #53
I see, I didn't read carefully. It looks fine then.
 
  • #54
"Morse theory I think works for smooth manifolds but I don't know what works for other manifolds."

Yup. PL manifolds are immediately CW complexes. I'm not sure about the non-PL, not even triangulable, manifolds that are now known to exist in all dimensions ≥ 4. (I just read somewhere that an argument of Milnor shows that all topological manifolds have the homotopy type of a CW complex.)
 
  • #55
fresh_42 said:
Summary:: Functional Analysis, Topology, Differential Geometry, Analysis, Physics
Authors: Math_QED (MQ), Infrared (IR), Wrobel (WR), fresh_42 (FR).

(solved by @Isaac0427 with complex numbers, completely real solution still possible)
I'll leave that for someone else to answer-- I thought of the complex solution first for some reason, but I see that there's a much easier way to go about it.
 
  • #56
Isaac0427 said:
I'll leave that for someone else to answer-- I thought of the complex solution first for some reason, but I see that there's a much easier way to go about it.
The solution I have in mind is basically only a real version of your solution, but it uses a function which helps here. That's why I wrote that remark. Something left to learn.
 
  • #57
fresh_42 said:
The solution I have in mind is basically only a real version of your solution, but it uses a function which helps here. That's why I wrote that remark. Something left to learn.
The other one I had in mind is an inductive proof.
 
  • #58
Isaac0427 said:
The other one I had in mind is an inductive proof.
I think formally it is always an induction. Even your pairing is formally an induction, since commutativity is only defined for two factors. My solution works with matrices, but of course it is just a different formalism from yours.
 
  • #59
Another method for Question 15.

By Stirling’s approximation we have ##n! =\sqrt{2\pi n} \left( \frac{n}{e}\right)^n ##.
Therefore,
$$
(n!)^2= 2\pi n \left(\frac{n}{e}\right)^{2n}
$$
And
$$
(2n)!= 2 \sqrt{\pi n} 4^n\left( \frac{n}{e}\right)^{2n}$$
$$\frac{(2n)!}{(n!)^2}= \frac{4^n}{\sqrt{\pi n}}
$$
So we have to compare ##\frac{4^n}{\sqrt{\pi n}}## and ##\frac{4^n}{n+1}##.
My claim: ##\sqrt{\pi n} \lt n+1##,
$$
\pi n \lt n^2 + 1 + 2n
$$ The above statement is true for ##n=2## and ##n=3## by trial and error. For ##n \gt 3## we have
$$
\pi n \lt n^2 \\
\therefore
\pi n \lt n^2 +1 + 2n
$$
Hence, ##\sqrt{\pi n} \lt (n+1)##, therefore
$$
\frac{4^n}{\sqrt {\pi n}} \gt \frac{4^n}{n+1}
$$
 
  • #60
Adesh said:
Another method for Question 15.

By Stirling’s approximation we have ##n! =\sqrt{2\pi n} \left( \frac{n}{e}\right)^n ##.
Therefore,
$$
(n!)^2= 2\pi n \left(\frac{n}{e}\right)^{2n}
$$
And
$$
(2n)!= 2 \sqrt{\pi n} 4^n\left( \frac{n}{e}\right)^{2n}$$
$$\frac{(2n)!}{(n!)^2}= \frac{4^n}{\sqrt{\pi n}}
$$
So we have to compare ##\frac{4^n}{\sqrt{\pi n}}## and ##\frac{4^n}{n+1}##.
My claim: ##\sqrt{\pi n} \lt n+1##,
$$
\pi n \lt n^2 + 1 + 2n
$$ The above statement is true for ##n=2## and ##n=3## by trial and error. For ##n \gt 3## we have
$$
\pi n \lt n^2 \\
\therefore
\pi n \lt n^2 +1 + 2n
$$
Hence, ##\sqrt{\pi n} \lt (n+1)##, therefore
$$
\frac{4^n}{\sqrt {\pi n}} \gt \frac{4^n}{n+1}
$$
Nice idea! The problem with Stirling is that your equalities are only approximations. So in order for the proof to count, you have to take the error margins into account. Especially the division needs watching, since a small error could theoretically turn into something large if you divide by it. So you need to use expressions with upper and lower bounds.
 
  • Like
Likes Adesh and member 587159
  • #61
mathwonk said:
I am interested in your thoughts on prob. #8. The only solution I know is via nontrivial properties of Eilenberg-MacLane spaces, i.e. if a K(G,1) has a finite-dimensional CW structure, then G is torsion-free. It might be enough to know the cohomology of (infinite) lens space.
Another way to phrase your argument is to invoke the theorem that the manifold ##M## together with the covering ##M^{*}→M## must be the universal classifying space for principal discrete ##π_1(M)## bundles. ##M## then has the cohomology of its fundamental group, and since ##π_1(M)## is finite and non-trivial it has non-zero cohomology in unbounded dimensions. One can get away with a cyclic subgroup of prime order. For all of this I think you need ##M^{*}## to be weakly contractible, which is a stronger result than is necessary to answer the question. This is the same as @zinq 's argument, I think.

BTW: If one argues by contradiction that the manifold has no nontrivial homotopy groups in dimensions 1 through n, then the manifold cannot be closed, for then Hurewicz's theorem would say that the nth homotopy group of the universal covering manifold is isomorphic to ##\mathbb{Z}##. You can also argue from the exact homology sequence of the pair that the boundary can have only one connected component. I tried taking this idea further but didn't succeed.

One thought was that a contractible compact manifold with boundary reminds one of Brouwer's fixed point theorem and one can ask when such manifolds can be made convex with respect to some geometry.
 
Last edited:
  • Like
Likes mathwonk
  • #62
fresh_42 said:
Nice idea! The problem with Stirling is that your equalities are only approximations. So in order for the proof to count, you have to take the error margins into account. Especially the division needs watching, since a small error could theoretically turn into something large if you divide by it. So you need to use expressions with upper and lower bounds.
Oh, thank you for pointing that out! Yes, for small ##n## we must take care of the errors.
 
  • #63
fresh_42 said:
Nice idea! The problem with Stirling is that your equalities are only approximations. So in order for the proof to count, you have to take the error margins into account. Especially the division needs watching, since a small error could theoretically turn into something large if you divide by it. So you need to use expressions with upper and lower bounds.
Okay, in this article the error bounds for ##n!## are given as
$$
\sqrt{2\pi n} \left( \frac{n}{e} \right)^n \leq n! \leq \sqrt{2\pi n} \left(\frac{n}{e}\right)^n e^{\frac{1}{12n}}$$
So, working with error bounds we have
$$
\frac{1}{2\pi n \left( \frac{n}{e} \right)^{2n} e^{\frac{1}{6n}}} \leq \frac{1}{(n!)^2} \leq \cdots $$
$$2\sqrt{\pi n} ~4^n ~\left(\frac{n}{e}\right)^{2n} \leq (2n)! \leq \cdots $$

$$\frac{4^n}{\sqrt{\pi n}\, e^{\frac{1}{6n}}} \leq \frac{(2n)!}{(n!)^2} \leq \cdots$$
Now it is left to show that ##\sqrt{\pi n}\, e^{\frac{1}{6n}} \lt n+1##, which we can easily prove by noting that we have to compare

$$\pi n\, e^{\frac{1}{3n}} \quad\text{and}\quad n^2 +1 +2n.$$

if ##n\gt 3~~ (n\in \mathbb N)## then we have

$$\pi e^{ \frac{1}{3n}} n \lt n^2 $$
$$\implies ~~\pi e^{ \frac{1}{3n}} n \lt n^2 +1 + 2n $$
For ##n=2 , 3## we can do trial and error.

Hopefully, we are done. Please point out any errors, or a more elegant way of doing this.
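As an aside, here is a small numerical sanity check (illustration only, not part of the proof) comparing the Stirling-based lower bound ##\frac{4^n}{\sqrt{\pi n}\,e^{1/(6n)}}##, the target ##\frac{4^n}{n+1}##, and the exact value ##\binom{2n}{n}## for small ##n##:

```python
from math import comb, exp, pi, sqrt

# Compare the Stirling-based lower bound for (2n)!/(n!)^2 with 4^n/(n+1)
# and with the exact central binomial coefficient, for small n.
for n in range(2, 11):
    lower = 4**n / (sqrt(pi * n) * exp(1 / (6 * n)))   # lower bound from the error estimates
    target = 4**n / (n + 1)
    exact = comb(2 * n, n)                             # exact value of (2n)!/(n!)^2
    print(n, round(lower, 2), round(target, 2), exact, lower > target)
```

Every row should print `True` in the last column, consistent with the inequality argued above.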
 
  • #64
Adesh said:
Hopefully, we are done. Please point if there are some errors, or you have more elegant way for doing this.
No, that's ok. Maybe you should have mentioned that you can only do this because the numbers are positive. And one doesn't say "trial and error" here; better say "we check for ...". But you could have simply said that it is true for ##n=2##: ##\pi \cdot 2 \cdot e^{\frac{1}{6}} \approx 7.423 < 8 < 9 = 2^2+1+2\cdot 2##, and then observed that the RHS grows faster than the LHS.
 
  • Informative
Likes Adesh
  • #65
In reference to post #51, does a CW complex which is homotopy equivalent to a compact manifold have to be compact as well? I.e., does one know that a compact topological ##n##-manifold has the structure of a finite CW complex? (The paper of Milnor I have seen referenced apparently proved only that there is a countable such CW complex.)
 
  • #66
@mathwonk It certainly doesn't have to be compact (e.g. ##\mathbb{R}## is a CW complex homotopy equivalent to a point) but I did find a reference that you can always find a homotopy equivalent finite CW complex to a compact manifold (boundary allowed): http://people.math.harvard.edu/~lurie/281notes/Lecture34-Part3.pdf I haven't tried to read it. There's also a sketch here: https://math.stackexchange.com/questions/1648250/spaces-homotopy-equivalent-to-finite-cw-complexes (again I haven't tried to read it)
 
  • Like
Likes mathwonk
  • #67
Thanks for the link to the nice, if brief, notes. They do not actually prove the desired result, referring to a "non-trivial" theorem of Chapman. I have no hope of getting through the works of Chapman, but the (first) link you gave states a very nice result: apparently, having the homotopy type of a finite CW complex is equivalent to having the homotopy type of a compact manifold (possibly with boundary)!

(I had also seen the second link but found it somewhat unclear.)
 
Last edited:
  • #68
Let ##f : \mathbb{R} \to \mathbb{R}## be continuous such that for all ##x##, ##f(x+1) = f(x)## and ##f(x) > 0##.

Let ##a = p/q## be a rational number in lowest terms.

Then
$$\int_0^1 \left( \frac{f(x + p/q)}{f(x)} \cdot \frac{f(x + 2p/q)}{f(x + p/q)} \cdots \frac{f(x + qp/q)}{f(x + (q-1)p/q)} \right)^{1/q} dx = \int_0^1 1\, dx = 1$$

because by periodicity the integrand is the constant function ##1^{1/q} = 1## for all ##x##.

Since any arithmetic mean is ≥ the corresponding geometric mean, we get
$$\frac{1}{q} \int_0^1 \left( \frac{f(x + p/q)}{f(x)} + \cdots + \frac{f(x + qp/q)}{f(x + (q-1)p/q)} \right) dx \geq 1.$$

But the integrals from 0 to 1 of all the summands are equal by periodicity. Hence
$$\int_0^1 \frac{f(x+p/q)}{f(x)}\, dx \geq 1.$$

But
$$G(a) = \int_0^1 \frac{f(x+a)}{f(x)}\, dx$$
is a continuous function of ##a##, and the rationals are dense in ##\mathbb{R}##. Hence by taking a sequence of rationals ##p/q## converging to an arbitrary ##a##, we get that
$$G(a) \geq 1$$
for all ##a##.
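As an aside, a quick numerical sanity check of the conclusion ##G(a)\geq 1## (illustration only, with a sample continuous, positive, 1-periodic function, here ##f(x)=2+\sin(2\pi x)##, and a midpoint Riemann sum):

```python
import numpy as np

# Sanity check of G(a) = ∫_0^1 f(x+a)/f(x) dx >= 1 for a sample
# continuous, positive, 1-periodic f; here f(x) = 2 + sin(2*pi*x).
def f(x):
    return 2.0 + np.sin(2 * np.pi * x)

def G(a, n=200_000):
    # midpoint Riemann sum over [0, 1]
    x = (np.arange(n) + 0.5) / n
    return np.mean(f(x + a) / f(x))

for a in [0.0, 0.1, 1 / 3, 0.5, np.sqrt(2), np.pi]:
    print(f"a = {a:.4f}, G(a) = {G(a):.6f}")  # every value should be >= 1
```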
 
  • Like
Likes nuuskur, wrobel and Adesh
  • #69
3 has been solved by @zinq.
Another solution is by Jensen's inequality:
$$\ln\int_0^1\frac{f(x+a)}{f(x)}\,dx\ge\int_0^1\left(\ln f(x+a)-\ln f(x)\right) dx=0$$
 
  • Informative
  • Like
Likes nuuskur, member 587159 and etotheipi
  • #70
I don't know if you're allowed to answer/give hints, but for problem 14, I had this thought:

We have two equations and two unknowns. This is a good start. However, I don't see any way we can solve for ##x## in terms of ##y## (I've really tried), which makes it hard to make progress. I do recognize that ##\sin^4{(x+\frac{\pi}{2})} = \cos^4{(x)}##. Does that mean that, in order for there to be any solutions at all, $$\sin^4{\left(x+\frac{\pi}{2}\right)} - \cos^4{(x)} = y^4+\left(x + \frac{\pi}{2} \right)^2y^2-4y^2+4 - \left(x^4+x^2y^2-4x^2+1\right) = 0$$

I might be completely wrong in my reasoning, but I was just wondering.

Disclaimer: I've never done a "find all solutions x,y for a function" type problem.
 
  • #71
Mayhem said:
I don't know if you're allowed to answer/give hints, but for problem 14, I had this thought:

We have two equations and two unknowns. This is a good start. However, I don't see any way we can solve for ##x## in terms of ##y## (I've really tried), which makes it hard to make progress. I do recognize that ##\sin^4{(x+\frac{\pi}{2})} = \cos^4{(x)}##. Does that mean that, in order for there to be any solutions at all, $$\sin^4{\left(x+\frac{\pi}{2}\right)} - \cos^4{(x)} = y^4+\left(x + \frac{\pi}{2} \right)^2y^2-4y^2+4 - \left(x^4+x^2y^2-4x^2+1\right) = 0$$

I might be completely wrong in my reasoning, but I was just wondering.

Disclaimer: I've never done a "find all solutions x,y for a function" type problem.
You cannot solve it directly. The trick is to get rid of what disturbs most; this is always a good plan. You can eliminate the trig functions at the cost that the equality turns into an inequality.
 
  • #72
fresh_42 said:
You cannot solve it directly. The trick is to get rid of what disturbs most; this is always a good plan. You can eliminate the trig functions at the cost that the equality turns into an inequality.

If you add them, using ##\sin^4{x} + \cos^4{x} \in [\frac{1}{2},1]##, you can get the inequality $$x^4 + y^4 + 2x^2 y^2 -4(x^2 + y^2) < -4$$ $$(x^2 + y^2)^2 -4(x^2 + y^2) + 4 < 0$$ $$([x^2 + y^2] -2)^2 < 0$$That would seem to imply that there are no possible values of ##x^2 + y^2## that satisfy the inequality, and that there are no solutions.
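As a side note, a quick numerical sketch (not part of the proof) confirming the range claim ##\sin^4{x} + \cos^4{x} \in [\frac{1}{2},1]## used above:

```python
import numpy as np

# Sample s(x) = sin(x)**4 + cos(x)**4 over one period and confirm
# that its values stay inside [1/2, 1], as used in the inequality above.
x = np.linspace(0, 2 * np.pi, 100_001)
s = np.sin(x) ** 4 + np.cos(x) ** 4
print(s.min(), s.max())  # expect approximately 0.5 and 1.0
```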
 
  • Like
Likes Mayhem
  • #73
etotheipi said:
That would seem to imply that there are no possible values of ##x^2+y^2## that satisfy the inequality, and that there are no solutions...
... if only you had worked properly ...

Or to say it in chess speak: Sauber setzen! ("Move cleanly!")
Sorry, the rhythm doesn't work in English.
 
  • Like
Likes etotheipi
  • #74
For the first problem in the high school section, I think one can use induction (I don't know if this has been done already). Noting that for ##n=1## the statement is trivial, i.e. ##a_1^2 + b_1^2 = a^2 + b^2## with ##a = a_1## and ##b = b_1##, and assuming the statement is true for ##n = m##,
##(a_1^2 + b_1^2)\cdots(a_m^2 + b_m^2) = a^2 + b^2## , we can prove
##(a_1^2 + b_1^2)\cdots(a_m^2 + b_m^2)(a_{m+1}^2 + b_{m + 1}^2) = (a^2 + b^2)(a_{m+1}^2 + b_{m+1}^2 ) = (aa_{m+1} + bb_{m+1} )^2 + (ab_{m+1} - ba_{m+1})^2 ##
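For what it's worth, here is a tiny numerical sketch (illustration only, not part of the proof) of the two-square identity that drives the induction step:

```python
import random

# Check the two-square identity
#   (a^2 + b^2)(c^2 + d^2) = (ac + bd)^2 + (ad - bc)^2
# on random integer inputs; this is the identity used in the induction step.
for _ in range(5):
    a, b, c, d = (random.randint(-50, 50) for _ in range(4))
    lhs = (a * a + b * b) * (c * c + d * d)
    rhs = (a * c + b * d) ** 2 + (a * d - b * c) ** 2
    print(a, b, c, d, lhs == rhs)  # expect True every time
```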
 
  • Like
Likes fresh_42
  • #75
ItsukaKitto said:
For the first problem in the high school section, I think one can use induction (I don't know if this has been done already). Noting that for ##n=1## the statement is trivial, i.e. ##a_1^2 + b_1^2 = a^2 + b^2## with ##a = a_1## and ##b = b_1##, and assuming the statement is true for ##n = m##,
##(a_1^2 + b_1^2)\cdots(a_m^2 + b_m^2) = a^2 + b^2## , we can prove
##(a_1^2 + b_1^2)\cdots(a_m^2 + b_m^2)(a_{m+1}^2 + b_{m + 1}^2) = (a^2 + b^2)(a_{m+1}^2 + b_{m+1}^2 ) = (aa_{m+1} + bb_{m+1} )^2 + (ab_{m+1} - ba_{m+1})^2 ##
Another possibility is to write ##a_k^2+b_k^2 =\det \begin{pmatrix}a_k&-b_k\\b_k&a_k\end{pmatrix}##, show that this form is preserved under matrix multiplication, and then use that the determinant is multiplicative.
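A small numerical illustration of this determinant viewpoint (just a sketch using numpy): the product of two matrices of this form again has this form, and its determinant is the product of the determinants.

```python
import numpy as np

def m(a, b):
    # 2x2 matrix of the form [[a, -b], [b, a]]; its determinant is a^2 + b^2
    return np.array([[a, -b], [b, a]])

A, B = m(3, 4), m(5, 12)
P = A @ B
print(P)  # still of the form [[a, -b], [b, a]], here with a = -33, b = 56
print(np.linalg.det(A), np.linalg.det(B), np.linalg.det(P))  # ≈ 25, 169, 4225 = 25 * 169
```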
 
Last edited:
  • Like
Likes ItsukaKitto
  • #76
Can we prove problem 11 using vector algebra? Let ##a_k = \binom{a_k}{a_k}##, then we can rewrite the question as
$$\prod_{k = 1}^{n} \mathbf{a}_k\cdot\mathbf{b}_k = \mathbf{a}\cdot\mathbf{b}$$
We know that the scalar product returns a scalar, which would mean that a product of ##n## scalar products would also return a scalar, namely ##\mathbf{a}\cdot\mathbf{b}##.

And since the scalar product of two 2-d vectors is by definition ##a_1a_2+b_1b_2##, then for ##a_1 = a_2## and ##b_1 = b_2##, the resulting scalar ##\mathbf{a}\cdot\mathbf{b}## must be a sum of two integer squares, given the initial conditions for ##\mathbf{a}_k##.

Maybe I'm using circular logic/assuming the conclusion. I'm quite new to proofs.
 
  • #77
I know one way to solve the trigonometric problem, only it happens to be just etotheipi's approach, but done... (I hope) correctly:
Adding the equations, we get an inequality in ## A = x^2 + y^2 ## , namely
##\cos^4 (x) + \sin^4(x) = A(A-4) + 5 \leq 1 ##
But ##A(A-4)+5 = (A-2)^2 + 1 \geq 1##, so we must have equality; solving ##A^2 - 4A + 4 = 0## gives ##A = 2##.
Now, we also need ##\cos^4(x) + \sin^4(x) = 1##, which occurs when ##x = \pm \pi/2, 0##.
So the values of ##y## can be found from the relation ##A = 2## to be ## y = \pm \sqrt{2}, \pm \sqrt{2 - \pi^2/4 } ##.
 
  • #78
Mayhem said:
Can we prove problem 11 using vector algebra? Let ##a_k = \binom{a_k}{a_k}##, then we can rewrite the question as
$$\prod_{k = 1}^{n} \mathbf{a}_k\cdot\mathbf{b}_k = \mathbf{a}\cdot\mathbf{b}$$
We know that the scalar product returns a scalar, which would mean that a product of ##n## scalar products would also return a scalar, namely ##\mathbf{a}\cdot\mathbf{b}##.

And since the scalar product of two 2-d vectors is by definition ##a_1a_2+b_1b_2##, then for ##a_1 = a_2## and ##b_1 = b_2##, the resulting scalar ##\mathbf{a}\cdot\mathbf{b}## must be a sum of two integer squares, given the initial conditions for ##\mathbf{a}_k##.

Maybe I'm using circular logic/assuming the conclusion. I'm quite new to proofs.
I guess you meant ##\mathbf{a}_k=\mathbf{b}_k=\begin{bmatrix}a_k\\b_k\end{bmatrix}## in which case we have
$$
(a_1^2+b_1^2)\cdot \ldots \cdot (a_n^2+b_n^2)= \prod_{k = 1}^{n} \mathbf{a}_k\cdot\mathbf{b}_k = \prod_{k = 1}^{n} \mathbf{a}_k\cdot\mathbf{a}_k = \prod_{k = 1}^{n} \|\mathbf{a}_k\|_2^2
$$
And at this point you have to insert the argument why the result is again of the requested form. Do you know this argument?
 
  • Like
Likes Mayhem
  • #79
fresh_42 said:
I guess you meant ##\mathbf{a}_k=\mathbf{b}_k=\begin{bmatrix}a_k\\b_k\end{bmatrix}## in which case we have
$$
(a_1^2+b_1^2)\cdot \ldots \cdot (a_n^2+b_n^2)= \prod_{k = 1}^{n} \mathbf{a}_k\cdot\mathbf{b}_k = \prod_{k = 1}^{n} \mathbf{a}_k\cdot\mathbf{a}_k = \prod_{k = 1}^{n} \|\mathbf{a}_k\|_2^2
$$
And at this point you have to insert the argument why the result is again of the requested form. Do you know this argument?
Not off the top of my head, no. And yes, I messed up the definition of ##\mathbf{a}_k##. Thanks for noticing that.
 
  • #80
Mayhem said:
Not off the top of my head, no. And yes, I messed up the definition of ##\mathbf{a}_k##. Thanks for noticing that.
The scalar product of a vector with itself is the squared Euclidean norm of this vector, its length squared. Now which properties do norms have?
 
  • #81
ItsukaKitto said:
I know one way to solve the trigonometric problem, only it happens to be just etotheipi's approach, but done... (I hope) correctly:
Adding the equations, we get an inequality in ## A = x^2 + y^2 ## , namely
##\cos^4 (x) + \sin^4(x) = A(A-4) + 5 \leq 1 ##
But ##A(A-4)+5 = (A-2)^2 + 1 \geq 1##, so we must have equality; solving ##A^2 - 4A + 4 = 0## gives ##A = 2##.
Now, we also need ##\cos^4(x) + \sin^4(x) = 1##, which occurs when ##x = \pm \pi/2, 0##.
So the values of ##y## can be found from the relation ##A = 2## to be ## y = \pm \sqrt{2}, \pm \sqrt{2 - \pi^2/4 } ##.
Have you checked which of all these numbers are actually solutions?
 
  • #82
fresh_42 said:
The scalar product of a vector with itself is the squared Euclidean norm of this vector, its length squared. Now which properties do norms have?
Ah, so ##||\mathbf{a}||## is the norm? During high school we used ##|\mathbf{a}|##, but I should have guessed that it was the same thing.

The norm ##|\mathbf{a}|## of a vector ##\mathbf{a}##, that is to say its "length", is defined as

$$|\mathbf{a}|=\sqrt{a_1^2+a_2^2}$$

If we square both sides, we get ##|\mathbf{a}|^2 = a_1^2+a_2^2##. Generalize that to ##|\mathbf{a}_k|## and we get what we wanted. I see! Thank you.
 
  • #83
etotheipi said:
$$([x^2 + y^2] -2)^2 < 0$$
Sorry, this should be a non-strict inequality.
 
  • #84
Mayhem said:
Ah, so ##||\mathbf{a}||## is the norm? During high school we used ##|\mathbf{a}|##, but I should have guessed that it was the same thing.

The norm ##|\mathbf{a}|## of a vector ##\mathbf{a}##, that is to say its "length", is defined as

$$|\mathbf{a}|=\sqrt{a_1^2+a_2^2}$$

If we square both sides, we get ##|\mathbf{a}|^2 = a_1^2+a_2^2##. Generalize that to ##|\mathbf{a}_k|## and we get what we wanted. I see! Thank you.
Yes, but you still need a property of the norm.
 
  • #85
fresh_42 said:
Yes, but you still need a property of the norm.
That it is a scalar?
 
  • #86
fresh_42 said:
Have you checked which of all these numbers are actually solutions?
Ah yes, they're only a list of permissible values; the actual solutions, I think, must be a subset of these...
##(0,\sqrt{2})## and ##(0, -\sqrt{2})## work, but the other values don't seem to work.
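As a purely illustrative numerical check (the system in problem 14 is assumed here to read ##\sin^4 x = y^4+x^2y^2-4y^2+4## and ##\cos^4 x = x^4+x^2y^2-4x^2+1##, as reconstructed from the expressions quoted in posts #70 and #72), one can plug the candidates back in:

```python
import math

# Candidate solutions from the discussion above, checked against the
# (reconstructed) system:  sin^4 x = y^4 + x^2*y^2 - 4*y^2 + 4
#                          cos^4 x = x^4 + x^2*y^2 - 4*x^2 + 1
def residuals(x, y):
    r1 = math.sin(x) ** 4 - (y**4 + x**2 * y**2 - 4 * y**2 + 4)
    r2 = math.cos(x) ** 4 - (x**4 + x**2 * y**2 - 4 * x**2 + 1)
    return r1, r2

for x, y in [(0.0, math.sqrt(2)), (0.0, -math.sqrt(2))]:
    print((x, y), residuals(x, y))  # both residuals should be (numerically) zero

# For x = ±pi/2, the relation x^2 + y^2 = 2 would require y^2 = 2 - pi^2/4 < 0,
# so those candidates are not real solutions.
print(2 - math.pi**2 / 4)  # negative
```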
 
  • Like
Likes fresh_42
  • #87
Mayhem said:
That it is a scalar?
The problem we still have is whether the product of the squared norms is still of the form ##\|\mathbf{a}\|_2^2##. I don't see this. So maybe I was wrong and the norm doesn't work as easily as I thought. We will still need additional information. I guess the determinant idea is the closest one to that approach.
 
  • #88
fresh_42 said:
The problem we still have is whether the product of the squared norms is still of the form ##\|\mathbf{a}\|_2^2##. I don't see this. So maybe I was wrong and the norm doesn't work as easily as I thought. We will still need additional information. I guess the determinant idea is the closest one to that approach.
So in conclusion, setting it up as a product of scalar products probably isn't particularly useful. I really ought to brush up on my linear algebra. Unfortunately calculus is more fun.
 
  • #89
Mayhem said:
So in conclusion, setting it up as a product of scalar products probably isn't particularly useful. I really ought to brush up on my linear algebra. Unfortunately calculus is more fun.
13.) and 12.c.) from last month are basically calculus.
 
  • Like
Likes Mayhem
  • #90
Given,
\begin{align*}
f(xy)&=f(x)f(y)-f(x)-f(y)+2 \qquad (1)\\
f(x+y)&=f(x)+f(y)+2xy-1 \qquad (2)\\
f(1)&=2 \qquad (3)
\end{align*}
Putting ##y=0## in ##(2)## gives ##f(0)=1##.
Putting ##y=1## in ##(2)## gives
$$f(x+1)=f(x)+f(1)+2x-1,$$
or,
\begin{align*}
f(x+1)-f(x)&=2x+1 \qquad (4)
\end{align*}
Hence, summing ##(4)## telescopically for a positive integer ##x##,
\begin{align*}
[f(x+1)-f(x)]+[f(x)-f(x-1)]+\dots+[f(3)-f(2)]+[f(2)-f(1)]&=[2x+1]+[2(x-1)+1]+\dots+[2\cdot 2+1]+[2\cdot 1+1]\\
\Rightarrow f(x+1)-f(1)&=2\cdot\frac {x(x+1)} {2}+x \\
\Rightarrow f(x+1)&=x^2+2x+2=(x+1)^2+1 \qquad (5)
\end{align*}
or,
\begin{align*}
f(x)&=x^2+1 \qquad (6)
\end{align*}
It is easy to check that ##(6)## satisfies ##(1)## and ##(2)##.
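A quick numerical check (illustration only) that ##f(x)=x^2+1## indeed satisfies ##(1)##, ##(2)## and ##(3)## on random real inputs:

```python
import random

# Verify that f(t) = t^2 + 1 satisfies
#   (1) f(xy)  = f(x)f(y) - f(x) - f(y) + 2
#   (2) f(x+y) = f(x) + f(y) + 2xy - 1
#   (3) f(1)   = 2
f = lambda t: t * t + 1

ok = True
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    ok &= abs(f(x * y) - (f(x) * f(y) - f(x) - f(y) + 2)) < 1e-9  # (1)
    ok &= abs(f(x + y) - (f(x) + f(y) + 2 * x * y - 1)) < 1e-9    # (2)
print(ok, f(1) == 2)  # expect: True True
```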
 
  • Like
Likes fresh_42
  • #91
etotheipi said:
If you add them, using ##\sin^4{x} + \cos^4{x} \in [\frac{1}{2},1]##, you can get the inequality $$x^4 + y^4 + 2x^2 y^2 -4(x^2 + y^2) < -4$$ $$(x^2 + y^2)^2 -4(x^2 + y^2) + 4 < 0$$ $$([x^2 + y^2] -2)^2 < 0$$That would seem to imply that there are no possible values of ##x^2 + y^2## that satisfy the inequality, and that there are no solutions.
I think, pal, you missed the equality. The first expression should be

$$x^4 + y^4 + 2x^2 y^2 -4(x^2 + y^2) \leq -4$$
because 1 is allowed. Hence, your last expression becomes
$$([x^2 + y^2] -2)^2 \leq 0$$
Since we want only real solutions, equality is what we want (a real square cannot be negative). So we have
$$
[(x^2 +y^2)-2]^2= 0$$
$$
x^2 + y^2 = 2 $$
With this value we can go back to the original equations to get the real solutions.

(Pal, I'm not alluding to our discussion in that thread; we are friends and I respect that.)
 
  • Like
Likes etotheipi
  • #92
Adesh said:
I think, pal, you missed the equality. The first expression should be

You're right, see #83 :wink:
 
  • Like
Likes Adesh
  • #93
etotheipi said:
You're right, see #83 :wink:
Sorry, I missed that. Actually, there were so many posts, and there was even the tab “there are more posts to display”, but I didn’t click on it :biggrin:
 
  • #94
SPOILER Remarks/hints:
Problem #1 taught me that I did not really know what the weak topology is, i.e. what the basic open sets are. Nice problem.

Problem #9 taught me that you can reverse the inequality and still get the same conclusion, even without assuming the map is onto (essentially the same proof). Another instructive problem.

Problem #2, I don't know much about, but I would start by finding their Lie algebras. (Since, as you probably know, the Lie algebra is the tangent space to the Lie group at the identity, and transversality is related to the relative position of tangent spaces.)
 
Last edited:
  • #95
@mathwonk Yes, computing their Lie algebras is the right way to go. And in general that should be enough, because if ##G,G'\subset GL_n## are Lie groups, and ##g\in G\cap G'##, then ##G## and ##G'## intersect transversely at ##g## iff they do at ##1## (since ##T_gG=g_* \left(T_1G\right)## and similarly for ##G'##).
 
  • #96
mathwonk said:
you can reverse the inequality and still get the same conclusion, even without assuming the map is onto (essentially the same proof). Another instructive problem.
Sure, recalling the Poincaré recurrence theorem is also instructive in this connection.
 
  • #97
I'm afraid I don't know that theorem of Poincaré. OK, I googled it; yes, nice connection.
Also, one gets from the reverse result that every (necessarily injective) isometry of a compact metric space into itself is also surjective, a possibly useful fact, analogous to results about linear maps of finite dimensional vector spaces, algebraic maps of irreducible projective varieties, and of course functions on finite sets: another illustration of the principle that "compact" generalizes "finite". And hopefully the Lie algebra result gives what we want, since then SU(n) has a nice Lie algebra structure compatible with the other two!
Possible spoiler comments on #4:
In looking at continuous functions on the interval, I was able to find a sequence of constant norm that converges to zero weakly (using the known structure of the dual of C[0,1], due to Riesz in 1909, as functions of bounded variation), but so far that only illustrates problem #1, apparently not whether C[0,1] is a dual. I thought maybe I could use the theorem that the unit ball in a dual is weak-star compact, but I have not seen how to do that yet. I.e. even if the ball were not weakly compact, which is not at all clear either, it seems it might still be weak-star compact. Some people say the key is to look at "extreme points", i.e. maybe convexity is involved? (I.e. Krein-Milman as well as Alaoglu.) I haven't written anything down yet, but if the picture in my head is close, among positive-valued functions in the unit ball of C[0,1] there seems to be only one extreme point, hence altogether only two? Just tossing out guesses.
 
Last edited:
  • #98
mathwonk said:
theorem that the unit ball in a dual is weak-star compact,
mathwonk said:
Krein-Milman
the key words have been pronounced :)
 
  • #99
Since ##\dim V \leq \dim V'## always holds, the space ##V'## must be infinite dimensional. Assume for a contradiction that ##\sigma (V,V')## is normable. The dual space of a normed space is a Banach space, so it is enough to show that the vector space ##V'## admits a countable basis, which is impossible for Banach spaces.

Let ##\|\cdot\|## be the inducing norm. I will attempt to justify that we may assume a countable neighborhood basis of ##0## which consists of kernels of functionals in ##V'##. Consider a neighborhood basis of ##0##, for instance ##T_n = \{x\in V \mid \|x\| \lt 1/n\},\quad n\in\mathbb N##.
Fix ##n\in\mathbb N##. By Hahn-Banach (the point separation corollary), take ##\varphi _n \in V'## such that ##\overline{T_n}\subseteq \mathrm{Ker}\,\varphi _n##. Then the sequence of kernels constitutes a neighborhood basis of ##0##. In fact,
$$\left\{\bigcap _{k=1}^n\varphi _k^{-1}(B(0,\varepsilon)) \,\middle|\, n\in\mathbb N,\ \varepsilon \gt 0\right\}$$
is a neighborhood basis of ##0## w.r.t. ##\sigma (V,V')##. Take ##\varphi \in V'##. For every ##\varepsilon \gt 0## we have ##N_\varepsilon \in\mathbb N## such that
$$\bigcap _{k=1} ^{N_\varepsilon} \varphi _{k}^{-1}(B(0,\varepsilon)) \subseteq \varphi ^{-1}(B(0,\varepsilon)).$$
Put ##N := \min _{\varepsilon }N_{\varepsilon}##. Then ##x\in \bigcap _{k=1}^N \mathrm{Ker}\,\varphi _k## implies ##x\in\mathrm{Ker}\,\varphi##, thus ##\varphi \in \mathrm{span}(\varphi _1,\ldots, \varphi _N)##.
I think Hahn-Banach is overkill, and I also think a similar argument goes through assuming ##\sigma (V,V')## is metrisable. I can't put my finger on it at the moment. It's likely something stupidly simple.
 
Last edited:
  • #100
nuuskur said:
Since ##\dim V \leq \dim V'## always holds, the space ##V'## must be infinite dimensional. Assume for a contradiction that ##\sigma (V,V')## is normable. The dual space of a normed space is a Banach space, so it is enough to show that ##V'## admits a countable basis, which is impossible for Banach spaces.

Let ##\|\cdot\|## be the inducing norm. I will attempt to justify that we may assume a countable neighborhood basis of ##0## which consists of kernels of functionals in ##V'##. Consider a neighborhood basis of ##0##, for instance ##T_n = \{x\in V \mid \|x\| \lt 1/n\},\quad n\in\mathbb N##.
Fix ##n\in\mathbb N##. By Hahn-Banach (the point separation corollary), take ##\varphi _n \in V'## such that ##\overline{T_n}\subseteq \mathrm{Ker}\,\varphi _n##. Then the sequence of kernels constitutes a neighborhood basis of ##0##. In fact,
$$\mathcal N := \left\{\bigcap _{k=1}^n\varphi _k^{-1}(B(0,\varepsilon)) \,\middle|\, n\in\mathbb N,\ \varepsilon \gt 0\right\}$$
is a neighborhood basis of ##0## w.r.t. ##\sigma (V,V')##. Take ##\varphi \in V'##. For every ##\varepsilon \gt 0## we have ##N_\varepsilon \in\mathbb N## such that
$$\bigcap _{k=1} ^{N_\varepsilon} \varphi _{k}^{-1}(B(0,\varepsilon)) \subseteq \varphi ^{-1}(B(0,\varepsilon)).$$
Put ##N := \min _{\varepsilon }N_{\varepsilon}##. Then ##x\in \bigcap _{k=1}^N \mathrm{Ker}\,\varphi _k## implies ##x\in\mathrm{Ker}\,\varphi##, thus ##\varphi \in \mathrm{span}(\varphi _1,\ldots, \varphi _N)##.
I think Hahn-Banach is overkill, and I also think a similar argument goes through assuming ##\sigma (V,V')## is metrisable. I can't put my finger on it at the moment. It's likely something stupidly simple.

I didn't read it in detail yet, but what do you mean by "basis"? A topological basis? In that case ##l^2(\Bbb{N})## is separable and thus has a countable basis, yet has infinite dimension. So I don't quite see how you would get a contradiction.
 
