Math Challenge - October 2021

  • Challenge
  • Thread starter fresh_42
  • #51
fresh_42
The maximal values in my example are ##w_1=w_x=0.3\, , \,w_2=w_y=0.3\, , \,w_3=w_z=0.8## and the unique solution is ##(0.1\, , \,0.1\, , \,0.8).## Your algorithm doesn't find that.
 
  • #52
The maximal values in my example are ##w_1=w_x=0.3\, , \,w_2=w_y=0.3\, , \,w_3=w_z=0.8## and the unique solution is ##(0.1\, , \,0.1\, , \,0.8).## Your algorithm doesn't find that.
My question in the previous post was not about the algorithm, as I already agreed that it does not work in the general case. Rather, I am trying to make sense of the diagram in post #26. That diagram does not seem to conform to the second point of post #18. Is that condition also not applicable to the original question? I ask because if a worker accepts a partition based only on the workload assigned to him and does not bother about the workloads of others, then ##(0.1, 0.1, 0.8)## will be a solution but not a unique one. Instead, all points in the pink-colored region of the diagram in my previous post will be acceptable to all 3 workers. However, your diagram shows two unconnected green triangles, suggesting that worker 2 accepts a workload of 0.25 (or any value in ##[0,0.3]## for that matter) only in a more restricted sense. For example, the point ##(0.75, 0.25, 0)## lies in the bottom green triangle in your diagram (the region labeled ##X_y##), but ##(0, 0.25, 0.75)## appears to belong only to ##X_z## (the grey region) and is not indicated as also belonging to ##X_y##.

Every worker has his own set of feasible points regardless of the others. These sets don't have to be convex or even connected. The crucial point is that the solutions are part of these sets of feasible partitions.
Does this imply that in a 3-worker case (for example), it is possible and valid for worker 1 to accept the partition ##(a, b, c)## without necessarily accepting the partition ##(a, c, b)## (where ##a,b,c \in [0,1]## and ##a+b+c=1##) even though his workload is the same in both partitions? In other words, the feasible points of a worker are not determined by a single axis? If the answer is yes, then that would resolve my confusion regarding your diagram and it then becomes clear why ##(0.1, 0.1, 0.8)## is the unique solution for partitions acceptable to all three workers.
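To make this point concrete, here is a minimal brute-force sketch over the 2-simplex. The three membership tests below are invented purely for illustration (they are not the sets from fresh_42's example); they only demonstrate that a worker's acceptance may depend on the whole partition rather than on his own share alone, and that the common region of three such sets can then be quite small.

```python
# Minimal sketch: scan a grid on the 2-simplex {(x, y, z) : x + y + z = 1}
# and keep the partitions that lie in all three (hypothetical) feasible sets.
# The sets X1, X2, X3 below are made up for illustration only.

def X1(x, y, z):
    # worker 1: accepts only if his own share is small AND worker 3 carries most of the load
    return x <= 0.3 and z >= 0.6

def X2(x, y, z):
    # worker 2: accepts only if his share is small AND worker 1 gets no more than worker 3
    return y <= 0.3 and x <= z

def X3(x, y, z):
    # worker 3: accepts any share up to 0.8
    return z <= 0.8

N = 20  # grid resolution: steps of 1/20 = 0.05
common = []
for i in range(N + 1):
    for j in range(N + 1 - i):
        k = N - i - j
        x, y, z = i / N, j / N, k / N
        if X1(x, y, z) and X2(x, y, z) and X3(x, y, z):
            common.append((x, y, z))

print(common)  # grid points acceptable to all three hypothetical workers
```

With sets like these, worker 1 can indeed accept ##(a, b, c)## without accepting ##(a, c, b)## even though his own share ##a## is the same in both, which is exactly the situation asked about above.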
 
  • #53
fresh_42
You can forget the coordinate system and only draw the triangle. I don't know whether it contradicts your question about the shapes of the ##X_i##; however, it does not contradict the problem statement. Maybe I didn't answer your question about the shapes precisely enough.

I do not know an algorithm and I am not sure whether there is an easy one since the shapes can be rather wild, especially in higher dimensions. I can only prove the existence of a solution by analytic means. Otherwise, I would have posted it as an example for the simplex algorithm in the HS section.
 
  • #54
Extra: (open) Prove that both sequences converge to ##0##
Does "both sequences" here refer to (1) ##(x_n)_{n \in \mathbb{N}}## and ##(x_{n^2})_{n \in \mathbb{N}}##, or to (2) ##(\frac{x_n}{n})_{n \in \mathbb{N}}## and ##(\frac{x_{n^2}}{n})_{n \in \mathbb{N}}##?
 
  • #55
I am not sure whether there is an easy one since the shapes can be rather wild, especially in higher dimensions
True. Definitely not easy for me at least. Even with just 3 dimensions, I can imagine shapes that seem wild! Jigsaw puzzle piece shapes, for instance (wild in a fun way) :smile:
 
  • #56
fresh_42
Does "both sequences" here refer to (1) ##(x_n)_{n \in \mathbb{N}}## and ##(x_{n^2})_{n \in \mathbb{N}}##, or to (2) ##(\frac{x_n}{n})_{n \in \mathbb{N}}## and ##(\frac{x_{n^2}}{n})_{n \in \mathbb{N}}##?
Show that the sequence ##\dfrac{x_n}{n}## converges to zero.
Hint: You must rule out that the sequence ##x_n## is bounded from below (other than by ##0##).

The second case ##\dfrac{x_{n^2}}{n}## follows from the first because we can define ##y_n:=x_{n^2}## and are then back in the first case.
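In more detail, the substitution works because ##(y_n)## again satisfies all the hypotheses: it is positive and strictly decreasing (as ##n \mapsto n^2## is strictly increasing), and
$$\sum_{i=1}^{n} \frac{y_{i^2}}{i} \;=\; \sum_{i=1}^{n} \frac{x_{i^4}}{i} \;\leq\; \sum_{i=1}^{n} \frac{x_{i^2}}{i} \;\leq\; 1,$$
using ##x_{i^4} \leq x_{i^2}## because ##(x_n)## is decreasing and ##i^4 \geq i^2##. The first case applied to ##(y_n)## then gives ##\dfrac{y_n}{n} = \dfrac{x_{n^2}}{n} \to 0##.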
 
  • #57
fresh_42
True. Definitely not easy for me at least. Even with just 3 dimensions, I can imagine shapes that seem wild! Jigsaw puzzle piece shapes, for instance (wild in a fun way) :smile:
Yes. My method uses Brouwer's fixed point theorem and the trick is to create a situation where it can be applied.
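For reference, the standard statement being invoked here: Brouwer's fixed point theorem says that every continuous map ##f\colon K \to K## of a nonempty compact convex set ##K \subset \mathbb{R}^n## into itself has a fixed point, i.e. some ##x^* \in K## with ##f(x^*) = x^*##. The set of all partitions ##\{(x_1,\dots,x_n) : x_i \geq 0,\ x_1 + \dots + x_n = 1\}## is such a set (a simplex), which is what makes the theorem a natural candidate for this problem.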
 
  • #58
Show that the sequence ##\dfrac{x_n}{n}## converges to zero.
Since it is given that ##\sum\limits_{i=1}^{n} \dfrac{x_{i^2}}{i} \leq 1## for all ##n \in \mathbb{N}## and that all ##x_{i}## are positive, it follows that ##x_1 < 1## (since otherwise the partial sums would already exceed 1, either from ##n=1## if ##x_1 > 1##, or from ##n=2## if ##x_1 = 1##). And since it is given that ##x_1 > x_2 > x_3 > \ldots##, it follows that ##\left(\dfrac{x_n}{n}\right)_{n \in \mathbb{N}}## also forms a strictly decreasing sequence and that ##\dfrac{x_n}{n} < 1## for all ##n##.
(1)

Suppose the sequence ##\left(\dfrac{x_n}{n}\right)_{n \in \mathbb{N}}## does not converge to 0. Then there must be some ##0 < \delta < 1## such that ##\dfrac{x_n}{n} > \delta## for all ##n \in \mathbb{N}##.
(2)

Let ##K \equiv \left\lceil\dfrac{1}{\delta}\right\rceil##. Now consider the value of ##\dfrac{x_{(K+1)n}}{(K+1)n}##, an element which belongs to the same sequence.

Since ##(K+1)n > n## we must have ##x_{(K+1)n} < x_n##. Therefore, ##\dfrac{x_{(K+1)n}}{(K+1)n} < \dfrac{x_n}{(K+1)n} < \dfrac{1}{K} \dfrac{x_n}{n}##
(3)

By definition of ##K##, ##\dfrac{1}{K} \leq \delta##. And from (1), we know that ##\dfrac{x_n}{n} < 1##. Therefore, ##\dfrac{1}{K} \dfrac{x_n}{n} < \delta##. Applying this in (3) gives ##\dfrac{x_{(K+1)n}}{(K+1)n} < \delta##, but this contradicts (2). Therefore the assumption that the sequence does not converge to 0 must be wrong, proving that it must converge to 0.
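As a purely numerical sanity check (not part of the proof), one can pick a concrete sequence satisfying the hypotheses and watch ##\dfrac{x_n}{n}## decay. The particular choice ##x_n = 0.15\,n^{-0.1}## below is an assumption made just for this illustration; the code verifies empirically that its partial sums ##\sum_{i\le n} x_{i^2}/i## stay below ##1##.

```python
# Purely numerical illustration (not a proof): a concrete strictly decreasing,
# positive sequence whose partial sums sum_{i<=n} x_{i^2}/i stay below 1,
# and for which x_n / n visibly tends to 0.

def x(n: int) -> float:
    return 0.15 / n ** 0.1          # strictly decreasing and positive, x_1 = 0.15 < 1

# hypothesis check: partial sums of x_{i^2} / i
s = 0.0
for i in range(1, 100_001):
    s += x(i * i) / i
print(f"partial sum up to n = 100000: {s:.3f}")   # stays below 1

# the quantity from the problem statement
for n in (1, 10, 100, 1_000, 10_000, 100_000):
    print(f"x_n/n for n = {n}: {x(n) / n:.3e}")
```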
 
  • #59
fresh_42
Since it is given that ##\sum\limits_{i=1}^{n} \dfrac{x_{i^2}}{i} \leq 1## for all ##n \in \mathbb{N}## and that all ##x_{i}## are positive, it follows that ##x_1 < 1## (since otherwise the partial sums would already exceed 1, either from ##n=1## if ##x_1 > 1##, or from ##n=2## if ##x_1 = 1##). And since it is given that ##x_1 > x_2 > x_3 > \ldots##, it follows that ##\left(\dfrac{x_n}{n}\right)_{n \in \mathbb{N}}## also forms a strictly decreasing sequence and that ##\dfrac{x_n}{n} < 1## for all ##n##.
(1)

Suppose the sequence ##\left(\dfrac{x_n}{n}\right)_{n \in \mathbb{N}}## does not converge to 0. Then there must be some ##0 < \delta < 1## such that ##\dfrac{x_n}{n} > \delta## for all ##n \in \mathbb{N}##.
(2)

Let ##K \equiv \left\lceil\dfrac{1}{\delta}\right\rceil##. Now consider the value of ##\dfrac{x_{(K+1)n}}{(K+1)n}##, an element which belongs to the same sequence.

Since ##(K+1)n > n## we must have ##x_{(K+1)n} < x_n##. Therefore, ##\dfrac{x_{(K+1)n}}{(K+1)n} < \dfrac{x_n}{(K+1)n} < \dfrac{1}{K} \dfrac{x_n}{n}##
(3)

By definition of ##K##, ##\dfrac{1}{K} \leq \delta##. And from (1), we know that ##\dfrac{x_n}{n} < 1##. Therefore, ##\dfrac{1}{K} \dfrac{x_n}{n} < \delta##. Applying this in (3) gives ##\dfrac{x_{(K+1)n}}{(K+1)n} < \delta##, but this contradicts (2). Therefore the assumption that the sequence does not converge to 0 must be wrong, proving that it must converge to 0.
Two issues. A minor one: you may only assume ##\dfrac{x_n}{n}>\delta ## for all ##n\geq N## for some ##N\in \mathbb{N},## but this is only a technical point. Your argument remains valid if the inequality does not hold for small ##n.## Another question is: Why does the sequence converge at all? (Heuristic is sufficient.)
 
  • #60
Two issues. A minor one: you may only assume ##\dfrac{x_n}{n}>\delta ## for all ##n\geq N## for some ##N\in \mathbb{N},## but this is only a technical point. Your argument remains valid if the inequality does not hold for small ##n.## Another question is: Why does the sequence converge at all? (Heuristic is sufficient.)

I had mentioned ##\dfrac{x_n}{n}>\delta \,\, \forall n \in \mathbb{N}## (instead of for ##n > N## for some ##N\in \mathbb{N}##) because ##\left(\dfrac{x_n}{n}\right)## is a monotonically decreasing sequence with even its first element required to be less than 1, and therefore ##\dfrac{x_N}{N}>\delta \Rightarrow \dfrac{x_{N-1}}{N-1}>\delta \Rightarrow \dfrac{x_{N-2}}{N-2}>\delta \Rightarrow \ldots \Rightarrow \dfrac{x_1}{1}>\delta##, provided ##0 < \delta < 1## as was assumed. But I agree that the proof needn't have assumed that and could instead have started with "##n > N## for some ##N##", which would be the correct way to go about it in general, even if the sequence weren't strictly decreasing.

Why does the sequence converge at all? I believe it must converge because the sequence is strictly decreasing and contains only positive values. When a sequence is strictly decreasing, its terms must either keep decreasing without bound (and so tend to ##-\infty##) or keep decreasing towards some finite bound. Since this sequence contains only positive values, it cannot touch or drop below zero, so ##0## acts as a lower bound for the limit towards which such a sequence can tend.
 
  • #61
fresh_42
Why does the sequence converge at all? I believe it must converge because the sequence is strictly decreasing and contains only positive values.

When a sequence is strictly decreasing, its terms must either keep decreasing without bound (and so tend to ##-\infty##) or keep decreasing towards some finite bound. Since this sequence contains only positive values, it cannot touch or drop below zero, so ##0## acts as a lower bound for the limit towards which such a sequence can tend.

Convergence needs a certain property of the real numbers that should be mentioned. I don't want to hear the technical term but the principle. That's why I wrote that a heuristic is sufficient.
 
  • #62
Extra: (open)

[Image: plots of the two elliptic curves]

Consider the two elliptic curves and observe that one has two connected components and the other one has only one. Determine the constant ##c \in [1,3]## in ##y^2 = x^3 - 2x + c## where this behavior exactly changes. What is the left-most point of this curve?

In ##y^2 = x^3 - 2x + 1## there are two components (disconnected from one another) because the curve is defined only where ##x^3 - 2x + 1 \geq 0## (##y## being a square root of that polynomial in ##x##), and there is a range of ##x##-values over which ##x^3 - 2x + 1## is negative (this is the "gap" between the ##x##-ranges of the two components), while the polynomial is non-negative for all ##x##-values greater than this range and also over an interval to the left of the gap.
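For concreteness, the cubic for ##c=1## factors, which pins down this gap exactly:
$$x^3 - 2x + 1 = (x - 1)\left(x^2 + x - 1\right),$$
so its roots are ##x = 1## and ##x = \dfrac{-1 \pm \sqrt{5}}{2} \approx 0.618,\ -1.618##. Hence ##x^3 - 2x + 1 \geq 0## exactly on ##\left[\dfrac{-1-\sqrt{5}}{2},\ \dfrac{\sqrt{5}-1}{2}\right] \cup [1, \infty)##: the bounded interval carries the oval component, ##[1,\infty)## carries the unbounded branch, and the gap between the two components is ##\left(\dfrac{\sqrt{5}-1}{2},\ 1\right)##.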

Now, to find the value of the constant ##c \in [1,3]## at which curves defined by ##y^2 = x^3 - 2x + c## change from consisting of two components to just one, we use the fact that this change happens exactly when the local minimum value of ##x^3 - 2x + c## reaches ##0##.

Let us find the local extrema of the polynomial ##f(x) = x^3 - 2x + c##. At a local minimum, ##f'(x) = 0## and ##f''(x) > 0##.
##f'(x) = 3x^2 - 2 = 0 \Rightarrow x = \pm \sqrt{\dfrac{2}{3}}##
##f''(x) = 6x \Rightarrow f''(x) > 0 \iff x > 0##

Therefore ##x=\sqrt{\dfrac{2}{3}}## must correspond to a local minimum point while ##x=-\sqrt{\dfrac{2}{3}}## must correspond to a local maximum point.

In the case where the curve ##y^2 = f(x)## consists of 2 components, the local minimum value ##f\left(\sqrt{\dfrac{2}{3}}\right)## must be negative, because otherwise ##f(x) < 0## only for ##x < x_1## for some ##x_1 < - \sqrt{\dfrac{2}{3}}##, and so the curve would be defined for all ##x \geq x_1## and would not have 2 separate components.

##f(\sqrt{\dfrac{2}{3}}) < 0 \Rightarrow \dfrac{2}{3}\sqrt{\dfrac{2}{3}} - 2\sqrt{\dfrac{2}{3}} + c < 0 \Rightarrow c < \dfrac{4}{3}\sqrt{\dfrac{2}{3}}##.
Thus, ##c_0 = \dfrac{4}{3}\sqrt{\dfrac{2}{3}} \approx 1.088662108## is the value of the constant at which the nature of the curve changes (from consisting of two components to a single component).
(1)

At the left-most point of the curve ##y^2 = x^3 - 2x + c_0## we must have ##y = 0##, since the left-most admissible ##x## is the smallest real root of the cubic, where ##y^2 = 0##. We solve for ##x## accordingly.

##x^3 - 2x + c_0 = 0 \Rightarrow x(x^2-2) = -c_0 = -\dfrac{4}{3}\sqrt{\dfrac{2}{3}}##.
(2)

##x=\sqrt{\dfrac{2}{3}}## is obviously a solution of this equation (in fact a double root, since the local minimum of the cubic just touches zero there), but it is not the left-most point. With some trial and error, or informed guesswork followed by validation, I found that ##x=-2\sqrt{\dfrac{2}{3}}## is the remaining solution of equation (2). Therefore, ##\left(-2\sqrt{\dfrac{2}{3}},\, 0\right)## is the left-most point of the curve ##y^2 = x^3 - 2x + \dfrac{4}{3}\sqrt{\dfrac{2}{3}}##.
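As a quick numerical cross-check of both results (a sanity check only; the use of numpy here is my own choice), one can feed the coefficients of the cubic to a polynomial root finder:

```python
import numpy as np

# Cross-check: with c0 = (4/3)*sqrt(2/3), the cubic x^3 - 2x + c0 should have
# a double root at sqrt(2/3) (where the local minimum touches zero) and a
# simple root at -2*sqrt(2/3), the left-most point of the curve.

c0 = (4.0 / 3.0) * np.sqrt(2.0 / 3.0)
print("c0 ≈", c0)                                   # ≈ 1.0886621079

roots = np.sort(np.roots([1.0, 0.0, -2.0, c0]).real)
print("roots of x^3 - 2x + c0:", roots)             # ≈ [-1.63299, 0.81650, 0.81650]
print("expected:", -2 * np.sqrt(2.0 / 3.0), np.sqrt(2.0 / 3.0))
```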
 
