Math Challenge - October 2021

In summary, the conversation covers various mathematical topics such as functional analysis, project management, set theory, group theory, Lie theory, countability, Banach algebras, stochastics, function theory, and calculus. The problems discussed include proving the compactness of a function space, finding a partition that all workers agree upon in project management, using the axiom schema of separation to prove the cardinality of a set is less than that of its power set, showing the linear independence of homomorphisms into a field, proving the nilpotency of general Heisenberg algebras, establishing the bijectivity of a polynomial function, proving the non-emptiness of the spectrum in a complex Banach algebra, and using probability to bound the tail of a sum of independent bounded random variables.
  • #36
Problem 10.

A way of proving that ##\pi## is irrational is considering the integral

\begin{align*}
I_n = \frac{1}{n!} \int_0^\pi C^n [(\pi - x) x]^n \sin x \, dx .
\end{align*}

You show that ##0 < I_n < 1## for ##n## large enough. Then you show, for all ##n \geq 0##, that ##I_n## is an integer if ##\pi = a/b## and we choose ##C = b##, obtaining a contradiction.

It is trivial to use this method to prove that ##\pi^2## is irrational.

Let us quickly run through the details of this argument for proving ##\pi## to be irrational. We want to obtain a contradiction by putting

\begin{align*}
\pi = \frac{a}{b} .
\end{align*}

Write

\begin{align*}
I_n = \frac{1}{n!} \int_0^\pi C^n [(\pi - x) x]^n \sin x \, dx
\end{align*}

The polynomial ##C^n [(\pi - x) x]^n## is non-negative on ##[0, \pi]##, symmetric about ##x = \pi / 2##, and attains its maximum ##(C \pi^2 /4)^n## there. This implies ##0 < I_n < \pi \frac{(C \pi^2 / 4)^n}{n!}##, and the right-hand side can be made arbitrarily small by choosing ##n## large enough.

Integrating ##I_n## by parts twice gives

\begin{align*}
I_n = (4n - 2) C I_{n-1} - C^2 \pi^2 I_{n-2} \qquad (*)
\end{align*}

Also we have

\begin{align*}
I_0 = 2 \quad \text{and} \quad I_1 = 4C \qquad (**)
\end{align*}

By choosing ##C = b## we have, by our original assumption, that ##C^2 \pi^2 = b^2 \pi^2 = a^2##, an integer. Then ##(*)## and ##(**)## show that all the ##I_n## are integers, in contradiction with the fact that ##0 < I_n < 1## for ##n## large enough.

Now to prove that ##\pi^2## is irrational. We obtain a contradiction by putting

\begin{align*}
\pi^2 = \frac{p}{q} .
\end{align*}

By choosing ##C = q## we have that ##C^2 \pi^2 = q^2 \pi^2 = pq##, an integer. Then ##(*)## and ##(**)## show that all the ##I_n## are integers, in contradiction with the fact that ##0 < I_n < 1## for ##n## large enough. As easy as pie.
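
For anyone who wants to double-check the base cases ##(**)## and the recursion ##(*)##, here is a small sympy sketch (sympy assumed available) that verifies them symbolically for ##n = 2, 3##, keeping ##C## symbolic so the check is exact:

```python
import sympy as sp

x, C = sp.symbols('x C', positive=True)

def I(n):
    # I_n = (1/n!) * integral_0^pi C^n [(pi - x) x]^n sin(x) dx, computed exactly
    return sp.integrate(C**n * ((sp.pi - x) * x)**n * sp.sin(x), (x, 0, sp.pi)) / sp.factorial(n)

assert sp.simplify(I(0) - 2) == 0          # (**): I_0 = 2
assert sp.simplify(I(1) - 4*C) == 0        # (**): I_1 = 4C
for n in (2, 3):                           # (*): I_n = (4n-2) C I_{n-1} - C^2 pi^2 I_{n-2}
    assert sp.simplify(I(n) - (4*n - 2)*C*I(n - 1) + C**2 * sp.pi**2 * I(n - 2)) == 0
print("(*) and (**) verified for n = 2, 3")
```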
 
  • #37
fresh_42 said:
We have only two conditions: the work has to be done ##\cup X_i = S## and nothing to do at all ##x_i = 0## will be accepted.

Are those 2 conditions really sufficient to guarantee that there will exist a partition acceptable to all workers? Consider a simple 2-worker case where worker 1 accepts only partitions where ##x_1 \in \mathbb{Q} \cap [0,1)## (note that ##x_1 = 1## is not acceptable to worker 1 in this case) and worker 2 accepts only partitions where ##x_2 \in \{0\} \cup ((\mathbb{R}\setminus \mathbb{Q}) \cap [0,1])##. Then, every ##(x_1 \in \mathbb{R}_{\geq 0}, x_2 \in \mathbb{R}_{\geq 0})## pair that sums to 1 will be acceptable to one of the two workers, but none will be acceptable to both workers, since when ##x_1 + x_2 = 1##, either both values are rational or both are irrational. So I guess there must be some additional conditions implicitly assumed by the question. What are those? Or am I missing something obvious?
 
  • #38
Not anonymous said:
Are those 2 conditions really sufficient to guarantee that there will exist a partition acceptable to all workers? Consider a simple 2-worker case where worker 1 accepts only partitions where ##x_1 \in \mathbb{Q} \cap [0,1)## (note that ##x_1 = 1## is not acceptable to worker 1 in this case) and worker 2 accepts only partitions where ##x_2 \in \{0\} \cup ((\mathbb{R}\setminus \mathbb{Q}) \cap [0,1])##. Then, every ##(x_1 \in \mathbb{R}_{\geq 0}, x_2 \in \mathbb{R}_{\geq 0})## pair that sums to 1 will be acceptable to one of the two workers, but none will be acceptable to both workers, since when ##x_1 + x_2 = 1##, either both values are rational or both are irrational. So I guess there must be some additional conditions implicitly assumed by the question. What are those? Or am I missing something obvious?
##(0,1),(1,0) \in S.## But you are right insofar as we need a metric and its induced topology.
 
  • #39
8. Let ##X_1,\ldots,X_n## be independent random variables such that almost certainly ##a_i\leq X_i-E(X_i)\leq b_i##, and let ##0<c\in \mathbb{R}##. Prove that
$$
\operatorname{Pr}\left(\sum_{i=1}^n (X_i-E (X_i))\geq c\right)\leq \exp\left(\dfrac{-2c^2}{\sum_{i=1}^n(b_i-a_i)^2}\right).
$$

I've been struggling with this. As no one else has tried, here is what I have been trying; I think there are some issues with the final argument.
First, set ##n=1## to eliminate the summation:

##\operatorname{Pr}\left((X-E (X))\geq c\right)\leq \exp\left(\dfrac{-2c^2}{(b-a)^2}\right)##

Set ##c=k\sigma## and ##(b-a)=\lambda\sigma##, where ##\sigma^2## is the variance of ##X## (since ##X-E(X)## almost surely lies within ##[a,b]##, ##X## has finite variance):

##\operatorname{Pr}\left((X-E (X))\geq k\sigma\right)\leq \exp\left(\dfrac{-2k^2\sigma^2}{\lambda^2\sigma^2}\right)##

cancel the variance term

##\operatorname{Pr}\left((X-E (X))\geq k\sigma\right)\leq \exp\left(\dfrac{-2k^2}{\lambda^2}\right)##

Note Chebyshev's inequality:
##\operatorname{Pr}\left((X-E (X))\geq k\sigma\right)\leq \frac{1}{k^2}##

As ##a\leq X-E(X)\leq b## almost certainly, ##\lambda \gg 3##, as per Chebyshev:
##\operatorname{Pr}\left((X-E (X))\geq 3\sigma\right)\leq \frac{1}{9}##

so two cases:
1) ##k\geq \lambda##.
As ##(b-a)=\lambda\sigma## and ##k\geq \lambda##, and ##\operatorname{Pr}(\lambda\sigma < X-E(X)) = 0##, then

##\operatorname{Pr}\left((X-E (X))\geq k\sigma\right)\leq \exp\left(\dfrac{-2k^2}{\lambda^2}\right)##

2) where ##k<\lambda##
##\exp\left(\dfrac{-2k^2}{\lambda^2}\right) \geq \exp(-2)##

per Chebyshev
##\operatorname{Pr}\left((X-E (X))\geq k\sigma\right)\leq \frac{1}{k^2}##

for ##k\geq3##,
##\frac{1}{k^2} < \exp(-2)##

for ##k<3## (this argument is pretty weak, got stuck here)
as ##k<3<\lambda## and ##\lambda \gg 3## then

##\frac{-2k^2}{\lambda^2} \to 0##

so ##\exp\left(\dfrac{-2k^2}{\lambda^2}\right) \to 1##

then I can wave my hands, bring back the summation, and it should likewise hold.
 
  • #40
BWV said:
8. Let ##X_1,\ldots,X_n## be independent random variables such that almost certainly ##a_i\leq X_i-E(X_i)\leq b_i##, and let ##0<c\in \mathbb{R}##. Prove that
$$
\operatorname{Pr}\left(\sum_{i=1}^n (X_i-E (X_i))\geq c\right)\leq \exp\left(\dfrac{-2c^2}{\sum_{i=1}^n(b_i-a_i)^2}\right).
$$

I've been struggling with this. As no one else has tried, here is what I have been trying; I think there are some issues with the final argument.
First, set ##n=1## to eliminate the summation:

##\operatorname{Pr}\left((X-E (X))\geq c\right)\leq \exp\left(\dfrac{-2c^2}{(b-a)^2}\right)##

Set ##c=k\sigma## and ##(b-a)=\lambda\sigma##, where ##\sigma^2## is the variance of ##X## (since ##X-E(X)## almost surely lies within ##[a,b]##, ##X## has finite variance):

##\operatorname{Pr}\left((X-E (X))\geq k\sigma\right)\leq \exp\left(\dfrac{-2k^2\sigma^2}{\lambda^2\sigma^2}\right)##

cancel the variance term

##\operatorname{Pr}\left((X-E (X))\geq k\sigma\right)\leq \exp\left(\dfrac{-2k^2}{\lambda^2}\right)##

Note Chebyshev's inequality:
##\operatorname{Pr}\left((X-E (X))\geq k\sigma\right)\leq \frac{1}{k^2}##

As ##a\leq X-E(X)\leq b## almost certainly, ##\lambda \gg 3##, as per Chebyshev:
##\operatorname{Pr}\left((X-E (X))\geq 3\sigma\right)\leq \frac{1}{9}##

so two cases:
1) ##k\geq \lambda##.
As ##(b-a)=\lambda\sigma## and ##k\geq \lambda##, and ##\operatorname{Pr}(\lambda\sigma < X-E(X)) = 0##, then

##\operatorname{Pr}\left((X-E (X))\geq k\sigma\right)\leq \exp\left(\dfrac{-2k^2}{\lambda^2}\right)##

2) where ##k<\lambda##
##\exp\left(\dfrac{-2k^2}{\lambda^2}\right) \geq \exp(-2)##

per Chebyshev
##\operatorname{Pr}\left((X-E (X))\geq k\sigma\right)\leq \frac{1}{k^2}##

for ##k\geq3##,
##\frac{1}{k^2} < \exp(-2)##

for ##k<3## (this argument is pretty weak, got stuck here)
as ##k<3<\lambda## and ##\lambda \gg 3## then

##\frac{-2k^2}{\lambda^2} \to 0##

so ##\exp\left(\dfrac{-2k^2}{\lambda^2}\right) \to 1##

then I can wave my hands, bring back the summation, and it should likewise hold.
My proof uses the exponential version of Chebyshev
\begin{align*}
\operatorname{Pr}(X\geq a)&=\operatorname{Pr}(\exp(X)\geq \exp(a))\\
&\leq \inf_{p>0} \dfrac{1}{\exp(ap)}\int_\mathbb{R}\exp(pX)\,d\operatorname{Pr}=\inf_{p>0} \dfrac{E(\exp(pX))}{\exp(pa)}
\end{align*}
on ##Y_i:=X_i-E(X_i)## and the convexity of the exponential function.
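
To get a feel for how much room the bound leaves, here is a quick Monte Carlo sketch (numpy assumed; the uniform variables and the parameters ##n=10##, ##c=1.5## are my own illustrative choices, not part of the problem):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, c = 10, 200_000, 1.5

# X_i uniform on [0,1]: E(X_i) = 1/2, so the centered variables lie in [-1/2, 1/2]
X = rng.uniform(0.0, 1.0, size=(trials, n))
S = (X - 0.5).sum(axis=1)                 # sum of the centered variables
empirical = (S >= c).mean()
bound = np.exp(-2 * c**2 / n)             # sum_i (b_i - a_i)^2 = n * 1^2 here
print(f"empirical tail {empirical:.4f} <= Hoeffding bound {bound:.4f}")
```

The empirical tail probability comes out well below the bound, as expected.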
 
  • #41
fresh_42 said:
11. (open) Find all functions f,g such that

[attached image: 23-10-2021.jpg]

[attachment: 23-10-2021.pdf]
  • #43
fresh_42 said:
Right. But could you write it out here instead of uploading a pdf?
How to type formulas on PF is explained here: https://www.physicsforums.com/help/latexhelp/
It looks as if you had already used TeX to write it, so you can just copy and paste it here. The double $$ remains the same, and the single $ must be replaced by ##.
I used Texmacs to write the answer... I thought it would work out because I was able to convert the file into .html, .pdf, .tex, and .png, but I was unable to find a way to paste the answer into the comments... so maybe I have to try Lyx next...
 
  • #44
ANB3010 said:
I used Texmacs to write the answer... I thought it would work out because I was able to convert the file into .html, .pdf, .tex, and .png, but I was unable to find a way to paste the answer into the comments... so maybe I have to try Lyx next...
Simpler than that: mark your text (Ctrl+A), copy it (Ctrl+C), and paste it here (Ctrl+V).
 
  • #45
fresh_42 said:
11. (open) Find all functions ##f,g## such that $$
f,g : \mathbb{R}\setminus\{-1,0,1\} \rightarrow \mathbb{R}
$$
$$
xf(x) = 1 + \frac{1}{x} g(\frac{1}{x}) \text{ and } \frac{1}{x^2}f(\frac{1}{x}) = x^2 g(x)
$$
##xf(x) = 1 + \frac{1}{x} g(\frac{1}{x}) \Rightarrow f(x) = \frac{1}{x} + \frac{1}{x^2} g(\frac{1}{x})##
(1.1)​
##\Rightarrow f(\frac{1}{x}) = x + {x^2} g(x)##
(1.2)​

Substituting equation (1.2) in the other given condition in the question, i.e. ##\frac{1}{x^2}f(\frac{1}{x}) = x^2 g(x)##, we get the solution for function ##g##: $$
\frac{1}{x} + g(x) = x^2 g(x) \Rightarrow g(x) = \frac{1}{x(x^2-1)}
$$
(2)​

Using equation (2) in equation (1.1) gives the solution for ##f##: $$
f(x) = \frac{1}{x} + \frac{1}{x^2} \dfrac{1}{\frac{1}{x}(\frac{1}{x^2}-1)} = \frac{1}{x} + \frac{x}{1-x^2} = \frac{1}{x(1-x^2)}
$$
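
A quick sympy check (sympy assumed) confirms that these two functions satisfy both of the given equations:

```python
import sympy as sp

x = sp.symbols('x')
f = 1 / (x * (1 - x**2))
g = 1 / (x * (x**2 - 1))

# x f(x) = 1 + (1/x) g(1/x)
assert sp.simplify(x*f - 1 - (1/x) * g.subs(x, 1/x)) == 0
# (1/x^2) f(1/x) = x^2 g(x)
assert sp.simplify(f.subs(x, 1/x) / x**2 - x**2 * g) == 0
print("both functional equations hold")
```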
 
  • #46
fresh_42 said:
14. (open) Solve the following equation system for real numbers:
(1)##\,\,\, x + xy + xy^2 = -21##
(2)## \,\,\, y + xy + x^2 y = 14##
(3)## \,\,\, x + y = -1##

Eq. (3) implies that ##y = -(1+x)##. Substituting for ##y## based on this in Eq. (1) gives:
##x(1 + y + y^2) = x(1 -1 - x + 1 + 2x + x^2) = x(1+x+x^2) = -21##
(4)​

Similarly substituting for ##y## in Eq. (2) gives:
##y(1+x+x^2) = -(1+x)(1+x+x^2) = 14 ##
(5)​

From equations (4) and (5), we get:$$
\dfrac{-(1+x)(1+x+x^2)}{x(1+x+x^2)} = \dfrac{14}{-21} = - \dfrac{2}{3}
$$
(6)​

##(1+x+x^2)## cannot be zero for any real-valued ##x## since ##1+x+x^2 = 0## has no real-valued roots. Therefore that expression can be canceled from the numerator and denominator of Eq. (6) which then yields the solution for ##x##: $$
\dfrac{-(1+x)}{x} = - \dfrac{2}{3} \Rightarrow \dfrac{1}{x} + 1 = \dfrac{2}{3} \Rightarrow x = -3
$$

Substituting this value of ##x## in Eq. (3) gives the solution for ##y##: $$
y = -(1+x) = -(1 - 3) = 2
$$
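
The solution is easy to cross-check with sympy (assumed available); ##(x,y)=(-3,2)## is the only real solution of the system:

```python
import sympy as sp

x, y = sp.symbols('x y')
solutions = sp.solve([x + x*y + x*y**2 + 21,     # Eq. (1) in '= 0' form
                      y + x*y + x**2*y - 14,     # Eq. (2)
                      x + y + 1], [x, y], dict=True)
print(solutions)   # expect {x: -3, y: 2}; the remaining pairs are complex
```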
 
  • #47
When searching for the definition of linearly independent homomorphisms, I found a proof of a 'special' case of this problem. But I think it is nearly the exact same proof for this problem, so feel free to delete my post.

Proof: ##(\Longrightarrow):## Let ##P(n)## be the statement that ##\sigma_1, \dots, \sigma_n## are linearly independent if they are distinct and suppose ##\sigma_1, \dots, \sigma_n## are distinct. For the base case, suppose ##n = 1## and that there exists ##c \in \mathbb{F}## such that for all ##g \in G## we have ##c\sigma_1(g) = 0##. Since ##\sigma_1(g)## is nonzero, we must have ##c = 0##. We proceed by induction.

Suppose there exist ##c_1, \dots, c_n \in \mathbb{F}## such that for all ##g \in G## we have $$c_1\sigma_1(g) + \dots + c_n\sigma_n(g) = 0 \qquad (1)$$

Since these homomorphisms are distinct, we can find ##g_0 \in G## such that ##\sigma_1(g_0) \neq \sigma_n(g_0)##. Then we have

$$c_1\sigma_1(gg_0) + \dots + c_n\sigma_n(gg_0) = 0$$

which can be rewritten as

$$c_1\sigma_1(g)\sigma_1(g_0) + \dots + c_n\sigma_n(g)\sigma_n(g_0) = 0 \qquad (2)$$

Lastly, we multiply equation (1) by ##\sigma_n(g_0)##. This gives

$$c_1\sigma_1(g)\sigma_n(g_0) + \dots + c_n\sigma_n(g)\sigma_n(g_0) = 0 \qquad (3)$$

Subtracting (3) from (2) gives

$$c_1\sigma_1(g)(\sigma_1(g_0) - \sigma_n(g_0)) + \dots + c_{n-1}\sigma_{n-1}(g)(\sigma_{n-1}(g_0) - \sigma_{n}(g_0)) + 0 = 0$$

By ##P(n-1)## (the distinct homomorphisms ##\sigma_1, \dots, \sigma_{n-1}## are linearly independent), every coefficient above vanishes; in particular ##c_1(\sigma_1(g_0) - \sigma_n(g_0)) = 0##. Since ##\sigma_1(g_0) - \sigma_n(g_0)## is nonzero, we have ##c_1 = 0##. Plugging this into (1), we have

$$0 + c_2\sigma_2(g) + \dots + c_n\sigma_n(g) = 0 $$

By ##P(n-1)##, we have ##c_2 = \dots = c_n = 0##. This completes the induction.

##(\Longleftarrow):## Suppose ##\sigma_1, \dots, \sigma_n## are not all distinct. WLOG, suppose ##\sigma_1 = \sigma_2##. Then choose ##c_1 = 1, c_2 = -1, c_3 = \dots = c_n = 0##. We see for all ##g \in G##,

$$c_1\sigma_1(g) + \dots + c_n\sigma_n(g) = \sigma_1(g) - \sigma_2(g) = 0$$

which shows ##\sigma_1, \dots, \sigma_n## are not linearly independent. ##\blacksquare##
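
As a toy numeric illustration of the lemma (numpy assumed; the setup is my own example, not part of the proof): the distinct characters ##\sigma_j(g) = a_j^g## of ##(\mathbb{Z},+)## into ##\mathbb{R}^\times## produce a Vandermonde matrix of full rank, so only the zero combination of them vanishes:

```python
import numpy as np

a = np.array([2.0, 3.0, 5.0])        # pairwise distinct bases -> distinct characters
g = np.arange(3)                     # sample group elements g = 0, 1, 2
M = a[None, :] ** g[:, None]         # M[i, j] = sigma_j(g_i), a Vandermonde matrix
print(np.linalg.matrix_rank(M))      # 3: the characters are linearly independent
```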
 
  • #48
fresh_42 said:
I'm not sure whether this is right. In my example, we have the point ##(0.1,0.1,0.8)## as the only solution that all three workers agree upon. A solution ##T=(0.3,0.3,0.4)## isn't possible even if workers agree on less workload. Only ##Z## would have less work; ##X,Y## would have more and so wouldn't agree on ##T##.
But your diagram seems to suggest that X,Y accept workloads up to 0.3 when Z's workload is 0. So I am still confused as to why ##T=(0.3,0.3,0.4)## is unacceptable to workers X, Y in that example. However, I do agree the solution I had posted wouldn't be applicable to the more general setting you mentioned.
 
  • #49
Not anonymous said:
But your diagram seems to suggest that X,Y accept workloads up to 0.3 when Z's workload is 0. So I am still confused as to why ##T=(0.3,0.3,0.4)## is unacceptable to workers X, Y in that example. However, I do agree the solution I had posted wouldn't be applicable to the more general setting you mentioned.
We are looking for a point where all three (n) areas (polyhedrons) intersect. Such a point exists, otherwise, the problem statement would be mean. My picture only illustrates why your proof doesn't work.
 
  • #50
fresh_42 said:
We are looking for a point where all three (n) areas (polyhedrons) intersect. Such a point exists, otherwise, the problem statement would be mean. My picture only illustrates why your proof doesn't work.

But ##(0.3, 0.3, 0.4)## also lies in the region where the 3 areas intersect. Assuming that your diagram was illustrating the case where workers 1, 2 and 3 accept workloads in the ranges ##[0,0.3], [0,0.3], [0,0.8]## respectively, the intersection of the 3 polyhedrons (each worker's "acceptance" region would be a trapezium lying on the plane defined by ##x+y+z=1## in this example) would be an area, not just a single point. In the image I have attached, the intersection region is colored pink. The green area is the acceptance region of worker 3 (##z \in [0,0.8]## and lying on ##x+y+z=1##). The region acceptable to workers 1 and 3 is the green area lying within the red rectangle (the red rectangle represents ##x## bounded in ##[0,0.3]##), while the region acceptable to workers 2 and 3 is the green area lying within the blue rectangle (the blue rectangle represents ##y## bounded in ##[0,0.3]##). It can be seen that the pink region meets the acceptance criteria of all 3 workers. The point ##(0.3, 0.3, 0.4)## happens to lie on the boundaries of the red and blue rectangles while also lying on the green area. The point you mention, ##(0.1, 0.1, 0.8)##, also lies in the pink region; it is on the boundary of the green area but inside the bounds of the red and blue rectangles.
[attached: plot1.png]
  • #51
The maximal values in my example are ##w_1=w_x=0.3\, , \,w_2=w_y=0.3\, , \,w_3=w_z=0.8## and the unique solution is ##(0.1\, , \,0.1\, , \,0.8).## Your algorithm doesn't find that.
 
  • #52
fresh_42 said:
The maximal values in my example are ##w_1=w_x=0.3\, , \,w_2=w_y=0.3\, , \,w_3=w_z=0.8## and the unique solution is ##(0.1\, , \,0.1\, , \,0.8).## Your algorithm doesn't find that.
My question in the previous post was not about the algorithm, as I already agreed that it does not work in the general case. Rather, I am trying to make sense of the diagram in post #26. That diagram does not seem to conform to the second point of post #18. Is that condition too not applicable to the original question? I ask because if a worker accepts a partition based only on the workload assigned to him and does not bother about the workloads of others, then ##(0.1, 0.1, 0.8)## will be a solution but not a unique one. Instead, all points in the pink-colored region of the diagram in my previous post will be acceptable to all 3 workers. However, your diagram shows two unconnected green triangles, suggesting that worker 2 accepts a workload of 0.25 (or any value in ##[0,0.3]##, for that matter) only in a more restricted sense. For example, the point ##(0.75, 0.25, 0)## lies in the bottom green triangle in your diagram (the region labeled X_y), but ##(0, 0.25, 0.75)## appears to belong only to X_z (the grey region) and is not indicated as also belonging to X_y.

fresh_42 said:
Every worker has his own set of feasible points regardless of the others. These sets don't have to be convex or even connected. The crucial point is that solutions are part of these sets of feasible partitions.
Does this imply that in a 3-worker case (for example), it is possible and valid for worker 1 to accept the partition ##(a, b, c)## without necessarily accepting the partition ##(a, c, b)## (where ##a,b,c \in [0,1]## and ##a+b+c=1##) even though his workload is the same in both partitions? In other words, the feasible points of a worker are not determined by a single axis? If the answer is yes, then that would resolve my confusion regarding your diagram and it then becomes clear why ##(0.1, 0.1, 0.8)## is the unique solution for partitions acceptable to all three workers.
 
  • #53
You can forget the coordinate system and only draw the triangle. I don't know whether it contradicts your question about the shapes of the ##X_i##; however, it does not contradict the problem statement. Maybe I didn't answer your question about the shapes precisely enough.

I do not know an algorithm and I am not sure whether there is an easy one since the shapes can be rather wild, especially in higher dimensions. I can only prove the existence of a solution by analytic means. Otherwise, I would have posted it as an example for the simplex algorithm in the HS section.
 
  • #54
fresh_42 said:
Extra: (open) Prove that both sequences converge to ##0##
Does "both sequences" here refer to (1) ##(x_n)_{n \in \mathbb{N}}## and ##(x_{n^2})_{n \in \mathbb{N}}##, or to (2) ##(\frac{x_n}{n})_{n \in \mathbb{N}}## and ##(\frac{x_{n^2}}{n})_{n \in \mathbb{N}}##?
 
  • #55
fresh_42 said:
I am not sure whether there is an easy one since the shapes can be rather wild, especially in higher dimensions
True. Definitely not easy for me at least. Even with just 3 dimensions, I can imagine shapes that seem wild! Jigsaw puzzle piece shapes, for instance (wild in a fun way) :smile:
 
  • #56
Not anonymous said:
Does "both sequences" here refer to (1) ##(x_n)_{n \in \mathbb{N}}## and ##(x_{n^2})_{n \in \mathbb{N}}##, or to (2) ##(\frac{x_n}{n})_{n \in \mathbb{N}}## and ##(\frac{x_{n^2}}{n})_{n \in \mathbb{N}}##?
Show that the sequence ##\dfrac{x_n}{n}## converges to zero.
Hint: You must rule out that the sequence ##x_n## is bounded from below (other than by ##0##).

The second case ##\dfrac{x_{n^2}}{n}## follows from the first because we can define ##y_n:=x_{n^2}## and are in the first case.
 
  • #57
Not anonymous said:
True. Definitely not easy for me at least. Even with just 3 dimensions, I can imagine shapes that seem wild! Jigsaw puzzle piece shapes, for instance (wild in a fun way) :smile:
Yes. My method uses Brouwer's fixed point theorem and the trick is to create a situation where it can be applied.
 
  • #58
fresh_42 said:
Show that the sequence ##\dfrac{x_n}{n}## converges to zero.
Since it is given that ##\sum\limits_{i=1}^{n} \dfrac{x_{i^2}}{i} \leq 1## for all ##n \in \mathbb{N}## and that all ##x_{i}##'s are positive, it is clear that ##x_1 < 1## (otherwise the sum would exceed 1 either from ##n=1##, if ##x_1 > 1##, or from ##n=2##, if ##x_1 = 1##). And since it is given that ##x_1 > x_2 > x_3 > \ldots##, it follows that ##\left(\dfrac{x_n}{n}\right)_{n \in \mathbb{N}}## also forms a strictly decreasing sequence and that ##\dfrac{x_n}{n} < 1## for all ##n##.
(1)​

Suppose the sequence ##\left(\dfrac{x_n}{n}\right)_{n \in \mathbb{N}}## does not converge to 0. Then there must be some ##0 < \delta < 1## such that ##\dfrac{x_n}{n} > \delta## for all ##n \in \mathbb{N}##.
(2)​

Let ##K \equiv \left\lceil\dfrac{1}{\delta}\right\rceil##. Now consider the value of ##\dfrac{x_{(K+1)n}}{(K+1)n}##, an element which belongs to the same sequence.

Since ##(K+1)n > n## we must have ##x_{(K+1)n} < x_n##. Therefore, ##\dfrac{x_{(K+1)n}}{(K+1)n} < \dfrac{x_n}{(K+1)n} < \dfrac{1}{K} \dfrac{x_n}{n}##
(3)​

By definition of ##K##, ##\dfrac{1}{K} \leq \delta##. And from (1), we know that ##\dfrac{x_n}{n} < 1##. Therefore, ##\dfrac{1}{K} \dfrac{x_n}{n} < \delta##. Applying this in (3) gives ##\dfrac{x_{(K+1)n}}{(K+1)n} < \delta##, but this contradicts (2). Therefore the assumption that the sequence does not converge to 0 must be wrong, proving that it must converge to 0.
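
To see the statement on a concrete example (the sequence ##x_n = \tfrac{1}{4} n^{-1/4}## is my own choice, picked so that ##\sum_i \tfrac{x_{i^2}}{i} = \tfrac{1}{4}\sum_i i^{-3/2} \leq 1## holds):

```python
# x_n = (1/4) n^(-1/4): positive, strictly decreasing, and
# sum_i x_{i^2}/i = (1/4) sum_i i^(-3/2) < zeta(3/2)/4 ~ 0.653 < 1
x = lambda n: 0.25 * n ** -0.25

print(sum(x(i * i) / i for i in range(1, 1_000_000)))   # ~0.653, stays below 1
for n in (10, 1000, 1_000_000):
    print(n, x(n) / n)                                  # x_n / n visibly tends to 0
```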
 
  • #59
Not anonymous said:
Since it is given that ##\sum\limits_{i=1}^{n} \dfrac{x_{i^2}}{i} \leq 1## for all ##n \in \mathbb{N}## and that all ##x_{i}##'s are positive, it is clear that ##x_1 < 1## (otherwise the sum would exceed 1 either from ##n=1##, if ##x_1 > 1##, or from ##n=2##, if ##x_1 = 1##). And since it is given that ##x_1 > x_2 > x_3 > \ldots##, it follows that ##\left(\dfrac{x_n}{n}\right)_{n \in \mathbb{N}}## also forms a strictly decreasing sequence and that ##\dfrac{x_n}{n} < 1## for all ##n##.
(1)​

Suppose the sequence ##\left(\dfrac{x_n}{n}\right)_{n \in \mathbb{N}}## does not converge to 0. Then there must be some ##0 < \delta < 1## such that ##\dfrac{x_n}{n} > \delta## for all ##n \in \mathbb{N}##.
(2)​

Let ##K \equiv \left\lceil\dfrac{1}{\delta}\right\rceil##. Now consider the value of ##\dfrac{x_{(K+1)n}}{(K+1)n}##, an element which belongs to the same sequence.

Since ##(K+1)n > n## we must have ##x_{(K+1)n} < x_n##. Therefore, ##\dfrac{x_{(K+1)n}}{(K+1)n} < \dfrac{x_n}{(K+1)n} < \dfrac{1}{K} \dfrac{x_n}{n}##
(3)​

By definition of ##K##, ##\dfrac{1}{K} \leq \delta##. And from (1), we know that ##\dfrac{x_n}{n} < 1##. Therefore, ##\dfrac{1}{K} \dfrac{x_n}{n} < \delta##. Applying this in (3) gives ##\dfrac{x_{(K+1)n}}{(K+1)n} < \delta##, but this contradicts (2). Therefore the assumption that the sequence does not converge to 0 must be wrong, proving that it must converge to 0.
Two issues. A minor one: you may only assume ##\dfrac{x_n}{n}>\delta ## for all ##n\geq N## for some ##N\in \mathbb{N},## but this is only a technical point. Your argument remains valid if the inequality does not hold for small ##n.## Another question is: Why does the sequence converge at all? (Heuristic is sufficient.)
 
  • #60
fresh_42 said:
Two issues. A minor one: you may only assume ##\dfrac{x_n}{n}>\delta ## for all ##n\geq N## for some ##N\in \mathbb{N},## but this is only a technical point. Your argument remains valid if the inequality does not hold for small ##n.## Another question is: Why does the sequence converge at all? (Heuristic is sufficient.)

I had mentioned ##\dfrac{x_n}{n}>\delta \,\, \forall n \in \mathbb{N}## (instead of for ##n > N## for some ##N\in \mathbb{N}##) because ##\dfrac{x_n}{n}## is a monotonically decreasing sequence with even its first element less than 1, and therefore ##\dfrac{x_N}{N}>\delta \Rightarrow \dfrac{x_{N-1}}{N-1}>\delta \Rightarrow \dfrac{x_{N-2}}{{N-2}}>\delta \Rightarrow \ldots \Rightarrow \dfrac{x_1}{1}>\delta##, provided ##0 < \delta < 1## as was assumed. But I agree that the proof needn't have assumed that and could have started instead with "##n > N## for some ##N##", which would be the correct way to go about it in general, even if the sequence weren't strictly decreasing.

Why does the sequence converge at all? I believe it must converge because the sequence is strictly decreasing and contains only positive values. When a sequence is strictly decreasing, its successive elements must either decrease without bound (and so tend to ##-\infty##) or decrease toward some finite bound. Since the sequence contains only positive values, it cannot touch or drop below zero, meaning 0 is a lower bound for the limit toward which such a sequence can tend.
 
  • #61
Not anonymous said:
Why does the sequence converge at all? I believe it must converge because the sequence is strictly decreasing and contains only positive values.

When a sequence is strictly decreasing, its successive elements must either decrease without bound (and so tend to ##-\infty##) or decrease toward some finite bound. Since the sequence contains only positive values, it cannot touch or drop below zero, meaning 0 is a lower bound for the limit toward which such a sequence can tend.

Convergence needs a certain property of the real numbers that should be mentioned. I don't want to hear the technical term but the principle. That's why I wrote that a heuristic is sufficient.
 
  • #62
fresh_42 said:
Extra: (open)

[attached figure: the two elliptic curves]

Consider the two elliptic curves and observe that one has two connection components and the other one has only one. Determine the constant ##c \in [1,3]## in ##y^2 = x^3 - 2x + c## where this behavior exactly changes. What is the left-most point of this curve?

In ##y^2 = x^3 -2x + 1##, there are two components (disconnected from one another) because the curve is defined only when ##x^3 -2x + 1 \geq 0## (##y## being the square root of that polynomial in ##x##), and there is a range of ##x##-values (incidentally including ##x=0##) on which ##x^3 -2x + 1## is negative. This range is the "gap" between the ##x##-ranges of the two components: the polynomial is non-negative for all ##x## greater than this range and also on some range to its left.

Now, to find the value of the constant ##c \in [1,3]## where curves defined by ##y^2 = x^3 - 2x + c## change from containing one component to two, we use the fact that the change happens exactly when the local minimum value of ##x^3 - 2x + c## crosses zero.

Let us find the local extrema of the polynomial ##f(x) = x^3 -2x + c##. At a local minimum, ##f'(x) = 0## and ##f''(x) > 0##.
##f'(x) = 3x^2 - 2 = 0 \Rightarrow x = \pm \sqrt{\dfrac{2}{3}}##
##f''(x) = 6x \Rightarrow f''(x) > 0 \iff x > 0##

Therefore ##x=\sqrt{\dfrac{2}{3}}## must correspond to a local minimum while ##x=-\sqrt{\dfrac{2}{3}}## must correspond to a local maximum.

For the curve ##y^2 = f(x)## to consist of 2 components, the local minimum value ##f\left(\sqrt{\dfrac{2}{3}}\right)## must be negative: otherwise ##f(x) < 0## only for ##x < x_1##, for some ##x_1 < - \sqrt{\dfrac{2}{3}}##, so the curve would be defined for all ##x \geq x_1## and would not have 2 separate components.

##f(\sqrt{\dfrac{2}{3}}) < 0 \Rightarrow \dfrac{2}{3}\sqrt{\dfrac{2}{3}} - 2\sqrt{\dfrac{2}{3}} + c < 0 \Rightarrow c < \dfrac{4}{3}\sqrt{\dfrac{2}{3}}##.
Thus, ##c_0 = \dfrac{4}{3}\sqrt{\dfrac{2}{3}} \approx 1.088662108## is the value of the constant where the change in the nature of the curve (two components vs. one component) happens.
(1)​

At the left-most point of the curve ##y^2 = x^3 -2x+c_0##, we must have ##y = 0##. We solve for ##x## accordingly.

##x^3 - 2x + c_0 = 0 \Rightarrow x(x^2-2) = -c_0 = -\dfrac{4}{3}\sqrt{\dfrac{2}{3}}##.
(2)​

##x=\sqrt{\dfrac{2}{3}}## is obviously a solution of this equation (it is the double root where the local minimum touches zero), but it is not the leftmost point. With some trial and error, or informed guesswork followed by validation, one finds that ##x=-2\sqrt{\dfrac{2}{3}}## is the other solution of equation (2); indeed the three roots must sum to ##0## since the quadratic term is absent. Therefore, ##\left(-2\sqrt{\dfrac{2}{3}},\, 0\right)## is the left-most point of the curve ##y^2 = x^3 - 2x + \dfrac{4}{3}\sqrt{\dfrac{2}{3}}##.
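
A sympy cross-check (sympy assumed) of both answers: the transition value ##c_0## is where the discriminant of the cubic vanishes (double root), and the roots at ##c_0## confirm the left-most point:

```python
import sympy as sp

x, c = sp.symbols('x c', real=True)
f = x**3 - 2*x + c

# the two components merge exactly when f acquires a double root
c0 = [s for s in sp.solve(sp.discriminant(f, x), c) if 1 <= s <= 3][0]
print(c0, float(c0))                # 4*sqrt(6)/9 = (4/3)*sqrt(2/3) ~ 1.0887
print(sp.solve(f.subs(c, c0), x))   # [-2*sqrt(6)/3, sqrt(6)/3]; the latter is the double root
```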
 
