
- Challenge
- Thread starter fresh_42
- Start date
- Featured

- #51


- #52


My question in the previous post was not about the algorithm, as I already agreed that it does not work in the general case. Rather, I am trying to make sense of the diagram in post #26. That diagram does not seem to conform to the second point of post #18. Is that condition too not applicable to the original question? I ask because if a worker accepts a partition based only on the workload assigned to him and does not bother about the workloads of others, then ##(0.1, 0.1, 0.8)## will be a solution but not a unique one. Instead, all points in the pink-colored region of the diagram in my previous post will be acceptable to all 3 workers. However, your diagram shows two unconnected green triangles, suggesting that worker 2 accepts a workload of 0.25 (or any value in ##[0,0.3]##, for that matter) only in a more restricted sense. For example, the point ##(0.75, 0.25, 0)## lies in the bottom green triangle in your diagram (the region labeled ##X_y##), but ##(0, 0.25, 0.75)## appears to belong only to ##X_z## (grey region) and is not indicated as also belonging to ##X_y##.

Does this imply that in a 3-worker case (for example), it is possible and valid for worker 1 to accept the partition ##(a, b, c)## without necessarily accepting the partition ##(a, c, b)## (where ##a,b,c \in [0,1]## and ##a+b+c=1##), even though his workload is the same in both partitions? In other words, the feasible points of a worker are not determined by a single axis? If the answer is yes, then that would resolve my confusion regarding your diagram, and it then becomes clear why ##(0.1, 0.1, 0.8)## is the unique solution for partitions acceptable to all three workers.

Every worker has his own set of feasible points regardless of the others. These sets don't have to be convex or even connected. The crucial point is that are part of these sets of feasible partitions.

- #53


I do not know an algorithm and I am not sure whether there is an easy one since the shapes can be rather wild, especially in higher dimensions. I can only prove the existence of a solution by analytic means. Otherwise, I would have posted it as an example for the simplex algorithm in the HS section.

- #54


Does "both sequences" here refer to (1) ##(x_n)_{n \in \mathbb{N}}## and ##(x_{n^2})_{n \in \mathbb{N}}##, or to (2) ##(\frac{x_n}{n})_{n \in \mathbb{N}}## and ##(\frac{x_{n^2}}{n})_{n \in \mathbb{N}}##?

Extra: (open) Prove that both sequences converge to ##0##.

- #55


True. Definitely not easy for me at least. Even with just 3 dimensions, I can imagine shapes that seem wild! Jigsaw puzzle piece shapes, for instance (wild in a fun way).

I am not sure whether there is an easy one since the shapes can be rather wild, especially in higher dimensions

- #56


Show that the sequence ##\dfrac{x_n}{n}## converges to zero.

Does "both sequences" here refer to (1) ##(x_n)_{n \in \mathbb{N}}## and ##(x_{n^2})_{n \in \mathbb{N}}##, or to (2) ##(\frac{x_n}{n})_{n \in \mathbb{N}}## and ##(\frac{x_{n^2}}{n})_{n \in \mathbb{N}}##?

Hint: You must rule out that the sequence ##x_n## is bounded from below (other than by ##0##).

The second case ##\dfrac{x_{n^2}}{n}## follows from the first because we can define ##y_n:=x_{n^2}## and are in the first case.
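The substitution can be sanity-checked numerically. A short sketch (the sample sequence ##x_n = 1 + 1/n## is my own assumption, chosen only because it is strictly decreasing and positive):

```python
# Illustrative check with an assumed sample sequence x_n = 1 + 1/n.
# Defining y_n := x_{n^2} turns x_{n^2}/n into y_n/n, so the first
# case applies to the sequence y as well.
def x(n):
    return 1 + 1 / n

def y(n):
    return x(n * n)  # y_n = x_{n^2}

# y is again strictly decreasing and positive ...
ys = [y(n) for n in range(1, 100)]
assert all(a > b for a, b in zip(ys, ys[1:]))
assert all(v > 0 for v in ys)

# ... and y_n/n shrinks toward 0 just like x_n/n does
assert y(1000) / 1000 < y(100) / 100 < y(10) / 10
```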

- #57


Yes. My method uses Brouwer's fixed point theorem and the trick is to create a situation where it can be applied.

True. Definitely not easy for me at least. Even with just 3 dimensions, I can imagine shapes that seem wild! Jigsaw puzzle piece shapes, for instance (wild in a fun way)

- #58


Show that the sequence ##\dfrac{x_n}{n}## converges to zero.

(1)

Suppose the sequence ##\left(\dfrac{x_n}{n}\right)_{n \in \mathbb{N}}## does not converge to 0. Then there must be some ##0 < \delta < 1## such that ##\dfrac{x_n}{n} > \delta## for all ##n \in \mathbb{N}##.

(2)

Let ##K \equiv \left\lceil\dfrac{1}{\delta}\right\rceil##. Now consider the value of ##\dfrac{x_{(K+1)n}}{(K+1)n}##, an element which belongs to the same sequence.

Since ##(K+1)n > n## we must have ##x_{(K+1)n} < x_n##. Therefore, ##\dfrac{x_{(K+1)n}}{(K+1)n} < \dfrac{x_n}{(K+1)n} < \dfrac{1}{K} \dfrac{x_n}{n}##

(3)

By definition of ##K##, ##\dfrac{1}{K} \leq \delta##. And from (1), we know that ##\dfrac{x_n}{n} < 1##. Therefore, ##\dfrac{1}{K} \dfrac{x_n}{n} < \delta##. Applying this in the inequality from (2) gives ##\dfrac{x_{(K+1)n}}{(K+1)n} < \delta##, but this contradicts the assumption in (1). Therefore the assumption that the sequence does not converge to 0 must be wrong, proving that it must converge to 0.
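The key estimate in step (2) can be sanity-checked numerically. This is not a proof, just an illustration; the sample sequence and the value of ##\delta## below are assumptions chosen for the check:

```python
import math

# Assumed sample: a strictly decreasing positive sequence.
def x(n):
    return 1 + 1 / n

delta = 0.05
K = math.ceil(1 / delta)  # K = ceil(1/delta) = 20

# Check x_{(K+1)n} / ((K+1)n) < (1/K) * x_n / n for a few n:
# it holds because x_{(K+1)n} < x_n and 1/((K+1)n) < 1/(Kn).
for n in (1, 7, 50):
    lhs = x((K + 1) * n) / ((K + 1) * n)
    rhs = (1 / K) * x(n) / n
    assert lhs < rhs
```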

- #59


Two issues. A minor one: you may only assume ##\dfrac{x_n}{n}>\delta ## for all ##n\geq N## for some ##N\in \mathbb{N},## but this is only a technical point. Your argument remains valid if the inequality does not hold for small ##n.## Another question is: Why does the sequence converge at all? (Heuristic is sufficient.)

(1)

Suppose the sequence ##\left(\dfrac{x_n}{n}\right)_{n \in \mathbb{N}}## does not converge to 0. Then there must be some ##0 < \delta < 1## such that ##\dfrac{x_n}{n} > \delta## for all ##n \in \mathbb{N}##.

(2)

Let ##K \equiv \left\lceil\dfrac{1}{\delta}\right\rceil##. Now consider the value of ##\dfrac{x_{(K+1)n}}{(K+1)n}##, an element which belongs to the same sequence.

Since ##(K+1)n > n## we must have ##x_{(K+1)n} < x_n##. Therefore, ##\dfrac{x_{(K+1)n}}{(K+1)n} < \dfrac{x_n}{(K+1)n} < \dfrac{1}{K} \dfrac{x_n}{n}##

(3)

By definition of ##K##, ##\dfrac{1}{K} \leq \delta##. And from (1), we know that ##\dfrac{x_n}{n} < 1##. Therefore, ##\dfrac{1}{K} \dfrac{x_n}{n} < \delta##. Applying this in the inequality from (2) gives ##\dfrac{x_{(K+1)n}}{(K+1)n} < \delta##, but this contradicts the assumption in (1). Therefore the assumption that the sequence does not converge to 0 must be wrong, proving that it must converge to 0.


- #60


Two issues. A minor one: you may only assume ##\dfrac{x_n}{n}>\delta ## for all ##n\geq N## for some ##N\in \mathbb{N},## but this is only a technical point. Your argument remains valid if the inequality does not hold for small ##n.## Another question is: Why does the sequence converge at all? (Heuristic is sufficient.)

I had mentioned ##\dfrac{x_n}{n}>\delta \,\, \forall n \in \mathbb{N}## (instead of for ##n > N## for some ##N\in \mathbb{N}##) because ##\dfrac{x_n}{n}## is a monotonically decreasing sequence with even its first element required to be less than 1, and therefore ##\dfrac{x_N}{N}>\delta \Rightarrow \dfrac{x_{N-1}}{N-1}>\delta \Rightarrow \dfrac{x_{N-2}}{N-2}>\delta \Rightarrow \ldots \Rightarrow \dfrac{x_1}{1}>\delta##, provided ##0 < \delta < 1## as was assumed. But I agree that the proof needn't have assumed that and could instead have started with "##n > N## for some ##N##", which would be the correct way to go about it in general, even if the sequence weren't strictly decreasing.

Why does the sequence converge at all? I believe it must converge because the sequence is strictly decreasing and contains only positive values.

- #61


Why does the sequence converge at all? I believe it must converge because the sequence is strictly decreasing and it contains only positive values.

When the sequence is strictly decreasing, its successive values must either keep decreasing without bound (and so tend to ##-\infty##) or decrease indefinitely toward some finite bound. Since the sequence contains only positive values, it cannot touch or drop below zero, so ##0## acts as a lower bound for the limit toward which such a sequence can tend.

Convergence needs a certain property of the real numbers that should be mentioned. I don't want to hear the technical term but the principle. That's why I wrote that a heuristic is sufficient.
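Heuristically, a decreasing sequence that is bounded below settles onto its greatest lower bound. A quick numeric sketch (the sample sequence ##x_n = 1/2 + 1/n## is an assumption chosen for illustration):

```python
# Assumed sample: x_n = 1/2 + 1/n is strictly decreasing, positive,
# and bounded below by 1/2, so it settles toward that bound.
xs = [0.5 + 1 / n for n in range(1, 10001)]

assert all(a > b for a, b in zip(xs, xs[1:]))  # strictly decreasing
assert all(v > 0 for v in xs)                  # positive throughout
assert abs(xs[-1] - 0.5) < 1e-3                # approaching the infimum 1/2
```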

- #62


Extra: (open)

Consider the two elliptic curves and observe that one has two connected components and the other one has only one. Determine the constant ##c \in [1,3]## in ##y^2 = x^3 - 2x + c## where this behavior exactly changes. What is the left-most point of this curve?

Now, to find the value of the constant ##c \in [1,3]## at which curves defined by ##y^2 = x^3 - 2x + c## change from containing one component to two components, we use the fact that the curve splits into two connected components precisely when the cubic on the right-hand side has three real roots, i.e. when its local minimum value is negative.

Let us find the minima of the polynomial ##f(x) = x^3 - 2x + c##. At a minimum, ##f'(x) = 0## and ##f''(x) > 0##.

##f'(x) = 3x^2 - 2 = 0 \Rightarrow x = \pm \sqrt{\dfrac{2}{3}}##

##f''(x) = 6x \Rightarrow f''(x) > 0 \iff x > 0##

Therefore ##x=\sqrt{\dfrac{2}{3}}## must correspond to a local minimum point, while ##x=-\sqrt{\dfrac{2}{3}}## must correspond to a local maximum point.

In the case where the curve ##y^2 = f(x)## consists of 2 components, the minimal value ##f\left(\sqrt{\dfrac{2}{3}}\right)## must be negative, because otherwise ##f(x) < 0## only for ##x < x_1## for some ##x_1 < - \sqrt{\dfrac{2}{3}}##, so the curve would be defined for all ##x \geq x_1## and would not have 2 separate components.

##f(\sqrt{\dfrac{2}{3}}) < 0 \Rightarrow \dfrac{2}{3}\sqrt{\dfrac{2}{3}} - 2\sqrt{\dfrac{2}{3}} + c < 0 \Rightarrow c < \dfrac{4}{3}\sqrt{\dfrac{2}{3}}##.

Thus, ##c_0 = \dfrac{4}{3}\sqrt{\dfrac{2}{3}} \approx 1.088662108## is the value of the constant where the change in the nature of the curve (change between consisting of two components vs. one component alone) happens.
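The computed threshold can be checked numerically; the following is only a verification of the arithmetic above:

```python
import math

c0 = (4 / 3) * math.sqrt(2 / 3)       # the claimed threshold c_0
x_min = math.sqrt(2 / 3)              # location of the local minimum

def f(x):
    return x**3 - 2 * x + c0

assert abs(c0 - 1.088662108) < 1e-8   # matches the decimal value above
assert abs(3 * x_min**2 - 2) < 1e-12  # f'(x_min) = 0
assert abs(f(x_min)) < 1e-12          # at c = c_0 the minimum just touches 0
```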

(2)

At the left-most point of the curve ##y^2 = x^3 -2x+c_0##, we must have ##y^2 = y = 0##. We solve for ##x## accordingly.

##x^3 - 2x + c_0 = 0 \Rightarrow x(x^2-2) = -c_0 = -\dfrac{4}{3}\sqrt{\dfrac{2}{3}}## ... (1)

##x=\sqrt{\dfrac{2}{3}}## is obviously a solution of this equation, but it is not the left-most point. With some trial and error, or informed guesswork followed by validation, I found that ##x=-2\sqrt{\dfrac{2}{3}}## is the other solution of equation (1). Therefore, ##\left(x=-2\sqrt{\dfrac{2}{3}}, y=0\right)## is the left-most point of the curve ##y^2 = x^3 - 2x + \dfrac{4}{3}\sqrt{\dfrac{2}{3}}##
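A quick numeric check that the claimed root is correct, again just verifying the arithmetic above:

```python
import math

c0 = (4 / 3) * math.sqrt(2 / 3)
x_left = -2 * math.sqrt(2 / 3)        # claimed left-most x with y = 0

# x_left is a root of x^3 - 2x + c_0, so (x_left, 0) lies on the curve
assert abs(x_left**3 - 2 * x_left + c0) < 1e-12
# and it lies to the left of the double root at +sqrt(2/3)
assert x_left < math.sqrt(2 / 3)
```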
