Math Challenge - October 2021

  • Challenge
  • Thread starter fresh_42
  • #26
Thanks for the clarifications. I am attempting to solve the problem based on these. I am not sure if the proof is rigorous enough to qualify as an acceptable solution, but I hope that at least the logic I use is not way off the mark.

Let ##w_i## denote the maximum workload accepted by worker ##i##, i.e. ##w_i = \max\limits_{x \in X_i} x_i \, \forall i \in \{1, 2, ..., n\}##.

Now consider the partition ##T \in S## defined as follows:
##T := (w_1, \min (w_2, 1-w_1), \min (w_3, 1-(w_1+w_2)), ..., T_n = 1-\sum_{i=1}^{n-1} T_i)## with
##T_i = \min (w_i, 1-\sum\limits_{j=1}^{i-1} T_j) \, \forall i = 2, 3, ..., n-1## where ##T_i## is the workload assigned to worker ##i## by partition ##T##.

You have to prove ##T\in S## because it is not true without additional assumptions. Consider the following situation with ##n=3##:

[Image: fresh_42's sketch of the three workers' acceptance regions for the ##n=3## example]


Set ##(w_1,w_2,w_3):=(0.3\, , \,0.8\, , \,0.3)## then ##T=(0.3\, , \,0.7\, , \,-0.1) \not\in S.## But even in case ##T\in S,## e.g. for ##(w_1,w_2,w_3):=(0.3\, , \,0.3\, , \,0.8)## with ##T=(0.3\, , \,0.3\, , \,0.4)\in S## you do not get the solution, which is the black dot at ##(0.1\, , \,0.1\, , \,0.8).##


In simple terms, ##T## assigns to worker 1 the maximum workload acceptable to him, then sequentially distributes the remaining workload among workers 2 to ##n-1## so that each of those workers gets either the whole of the workload still pending at that point (i.e. the balance remaining after the assignments to the previous workers) or the maximum workload acceptable to him, whichever is smaller (so that the assigned workload is accepted by all of them), and finally assigns the entire remaining balance to the last worker, worker ##n##.

Based on the way it is defined, it should be obvious that ##T \in X_i \, \forall i=1, 2, ..., n-1##, i.e. the partition is accepted by workers ##1## to ##n-1##. We need to show that ##T \in X_n## too.

Suppose ##T \notin X_n##. Then ##T_n > 0## (since otherwise worker ##n## would have accepted it, as every worker accepts a partition where he is assigned no work).

##T_n > 0 \Rightarrow \sum\limits_{i=1}^{n-1} T_i < 1 \Rightarrow \sum\limits_{i=1}^{k} T_i < 1 \, \, , \forall k = 1, 2, ..., n-1##
(1)

But (1) implies ##w_k < 1 - \sum_{i=1}^{k-1} T_i \, \, , \forall k = 1, 2, ..., n-1##
(2)

because otherwise there would exist some ##k \in \{1, 2, .., n-1\}## such that ##w_k \geq 1 - \sum\limits_{i=1}^{k-1} T_i## and this would imply
##T_k = \min (w_k, 1 - \sum\limits_{i=1}^{k-1} T_i) = 1 - \sum\limits_{i=1}^{k-1} T_i##, which in turn leads to
##\sum_{i=1}^{k} T_i = \sum\limits_{i=1}^{k-1} T_i + \left(1 - \sum\limits_{i=1}^{k-1} T_i\right) = 1##, and this contradicts condition (1).

Therefore partition ##T## must be equivalent to ##\left(w_1, w_2, ..., w_{n-1}, T_n \right)## with ##T_n = (1 - \sum\limits_{i=1}^{n-1} w_i)##
(3)​

##T \notin X_n \Rightarrow T_n > w_n \Rightarrow (T_n - w_n) > 0##
(4)​

Now consider another partition ##T' \in S## defined in terms of ##T## as follows:
##T' = (w_1 + \dfrac{\delta}{n-1}, w_2 + \dfrac{\delta}{n-1}, \ldots, w_{n-1}+ \dfrac{\delta}{n-1}, T_n - \delta)## where ##\delta## is any arbitrarily chosen real number such that ##0 < \delta < (T_n - w_n)##.

Clearly, ##T' \notin X_i## for any ##i \in \{1, 2, ..., n-1, n\}##, since ##T'_i = w_i + \dfrac{\delta}{n-1} > w_i## makes the partition unacceptable to workers ##1## to ##n-1##, and ##T'_n = T_n - \delta > w_n## makes it unacceptable to worker ##n## too. But this contradicts the fact/precondition that ##\bigcup_{i=1}^n X_i = S##. Thus our assumption that ##T \notin X_n## must be incorrect. It therefore follows that ##T \in X_i \, \forall i \in \{1, 2, ..., n\}##, i.e. we have at least one partition in ##S## that is acceptable to all workers.

I do not know an algorithm, but my indirect solution per Brouwer's fixed point theorem is a bit tricky, so I assume that it is difficult to find a constructive proof for any dimension.
 
  • #27
Office_Shredder
I'm a bit confused by #2. You said in post 18 of the thread that a worker will accept any partition based only on the workload assigned to them (which the original problem description didn't seem to include at all). So if ##(w_1,w_2,w_3)=(0.3,0.3,0.8)##, it sounds like ##(0.3,0.3,0.4)## is in fact an acceptable solution, along with a literal infinitude of other solutions. Are you sure this clarification is correct? I feel like it makes the problem trivially easy, and it is not needed in two dimensions at least.

I'm not really sure how to read the picture you drew, but I suspect it does not conform with post #18 (and that would make me happy to learn #18 is wrong)
 
  • #28
Set ##(w_1, w_2, w_3) := (0.3, 0.8, 0.3)## then ##T = (0.3, 0.7, -0.1) \notin S##.
Sorry, there was a mistake in the first part of the definition of ##T## in my previous post, more specifically in the value of ##T_3##, which is not in accordance with the definition of ##T_i## given subsequently. Please read that as:
##T := (w_1, \min (w_2, 1-w_1), \min (w_3, 1-(T_1+T_2)), ..., T_n = 1 - \sum\limits_{i=1}^{n-1} T_i)## with ##T_i = \min (w_i, 1-\sum\limits_{j=1}^{i-1} T_j) \, \, \forall i = 2, 3, \ldots, n-1##. The verbal explanation I gave for ##T## in that same post is also in accordance with the correct definition. With the correct definition of ##T##, ##(w_1, w_2, w_3) := (0.3, 0.8, 0.3)## yields ##T = (0.3, 0.7, 0) \in S##, not ##T = (0.3, 0.7, -0.1)##. I apologize for the silly mistake that caused the confusion.
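For concreteness, here is a minimal Python sketch of this (corrected) greedy assignment; the function name is mine, and it assumes the interval reading used in this exchange, namely that worker ##i## accepts exactly the workloads in ##[0, w_i]##:

```python
def greedy_partition(w):
    # Sketch of the corrected greedy assignment, assuming worker i accepts
    # exactly the workloads in the interval [0, w[i]].
    n = len(w)
    T = [w[0]]                       # worker 1 gets his maximum acceptable workload
    for i in range(1, n - 1):        # workers 2, ..., n-1
        remaining = 1 - sum(T)
        T.append(min(w[i], remaining))
    T.append(1 - sum(T))             # worker n gets whatever is left
    return T

print([round(t, 10) for t in greedy_partition([0.3, 0.8, 0.3])])  # [0.3, 0.7, 0.0]
print([round(t, 10) for t in greedy_partition([0.3, 0.3, 0.8])])  # [0.3, 0.3, 0.4]
```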

But even in case ##T \in S##, e.g. for ##(w_1, w_2, w_3) := (0.3, 0.3, 0.8)## with ##T = (0.3, 0.3, 0.4) \in S## you do not get the solution, which is the black dot at ##(0.1, 0.1, 0.8)##.

Sorry, I do not understand why only ##(0.1, 0.1, 0.8)## is a solution but not ##(0.3, 0.3, 0.4)##. One of the doubts I asked before attempting to solve this question was whether it could be assumed that "if a worker accepts a partition where his workload is some ##w > 0##, then he will accept any other partition where his workload is less than ##w##" and you replied with a yes. So if ##(w_1, w_2, w_3) := (0.3, 0.3, 0.8)##, why wouldn't ##T = (0.3, 0.3, 0.4) \in S## be a partition that is acceptable to all 3 workers? After all, the workload assignment of ##T## equals but does not exceed the maximum acceptable workloads for workers 1 and 2 and is below the max workload for worker 3, so why would any of them reject ##T##? I don't see why there would always be only one partition that is acceptable to all workers. So for ##(w_1, w_2, w_3) := (0.3, 0.3, 0.8)##, I view both ##(0.3, 0.3, 0.4)## and ##(0.1, 0.1, 0.8)## as partitions acceptable to all workers (and there would be countless more such acceptable-to-all-workers partitions in this case, such as ##(0.15, 0.15, 0.7) \in S##, ##(0.299, 0.299, 0.402) \in S##, ##(0.2991, 0.2991, 0.4018) \in S## and so on).
 
  • #29
I'm a bit confused by #2. You said in post 18 of the thread that a worker will accept any partition based only on the workload assigned to them (which the original problem description didn't seem to include at all).
Yes, the third point in post #18 is wrong. Every worker has his own set of feasible points regardless of the others. These sets don't have to be convex or even connected. The crucial point is that the partitions with ##x_i=0## are part of these sets of feasible partitions.
3. If the answer to the previous doubt is yes, then is it also implicitly implied that if a worker accepts a partition where his workload is some ##w>0##, then he will accept any other partition where his workload is less than ##w##? If this is indeed the case, then I think I can see how intervals are involved.
This is not necessary.

We have only two conditions: the work has to be done (##\cup X_i=S##), and doing nothing at all (##x_i=0##) will always be accepted.

So if ##(w_1,w_2,w_3)=(0.3,0.3,0.8)##, it sounds like ##(0.3,0.3,0.4)## is in fact an acceptable solution, along with a literal infinitude of other solutions. Are you sure this clarification is correct? I feel like it makes the problem trivially easy, and it is not needed in two dimensions at least.

I'm not really sure how to read the picture you drew, but I suspect it does not conform with post #18 (and that would make me happy to learn #18 is wrong)
##(0.3,0.3,0.4) \not\in X_1\cup X_2##.
 
  • #30
Yes, the third point in post #18 is wrong. Every worker has his own set of feasible points regardless of the others. These sets don't have to be convex or even connected.
Okay, that makes the problem harder and I will need to think afresh to come up with an answer, if it is solvable at all using only concepts I have learned up to high school and just a little beyond.
 
  • #31
Okay, that makes the problem harder and I will need to think afresh to come up with an answer. Thanks again for clarifying.
I'm not sure whether this is right. In my example, we have the point ##(0.1,0.1,0.8)## as the only solution that all three workers agree upon. A solution ##T=(0.3,0.3,0.4)## isn't possible even if workers agree on less workload: only ##Z## would have less work, ##X## and ##Y## would have more and so wouldn't agree on ##T##.
 
  • #33
Does the '(open)' label mean that anybody can have a go?
No. It means that the problems for high schoolers are relatively easy and I wanted to add a bonus round for them with a bit more complicated questions. It was an attempt to attract more kids.

The label 'open' only indicates which parts still have to be solved (by high schoolers). It was necessary because people could solve one part but not the 'Extra' part, and theoretically vice versa.
 
  • #34
13. (open) If ##(x_n)_{n \in \mathbb{N}} \subseteq \mathbb{R}_{>0}## is a monotone decreasing sequence of positive real numbers such that for every ##n \in \mathbb{N}## $$
\dfrac{x_1}{1} + \dfrac{x_4}{2} + \dfrac{x_9}{3} + ... + \dfrac{x_{n^2}}{n} \leq 1
$$
prove that for every ##n \in \mathbb{N}## $$
\dfrac{x_1}{1} + \dfrac{x_2}{2} + \dfrac{x_3}{3} + ... + \dfrac{x_n}{n} \leq 3
$$

Let us denote the 2 sequence summations by ##S_n## and ##T_n## for any ##n \in \mathbb{N}##, i.e. $$S_n := \dfrac{x_1}{1} + \dfrac{x_4}{2} + \dfrac{x_9}{3} + ... + \dfrac{x_{n^2}}{n}$$ and
$$T_n := \dfrac{x_1}{1} + \dfrac{x_2}{2} + \dfrac{x_3}{3} + ... + \dfrac{x_n}{n}$$

Then, $$
T_{(n+1)^2-1} = \sum_{i=1}^{n} \sum_{j=i^2}^{(i+1)^2-1} \dfrac{x_j}{j} = \sum_{i=1}^{n} \dfrac{x_{i^2}}{i^2} + \sum_{i=1}^{n} \sum_{j=i^2+1}^{(i+1)^2-1} \dfrac{x_j}{j}
$$
(1)​

$$
\sum_{i=1}^{n} \dfrac{x_{i^2}}{i^2} \leq \sum_{i=1}^{n} \dfrac{x_{i^2}}{i} = S_n
$$
(2)​

Since ##(x_n)_{n \in \mathbb{N}}## is a monotone decreasing sequence of positive numbers, we have ##x_j \leq x_{i^2}## and hence ##\dfrac{x_j}{j} < \dfrac{x_{i^2}}{i^2}## for any ##i, j \in \mathbb{N}## with ##j > i^2##. Therefore,
$$
\sum_{j=i^2+1}^{(i+1)^2-1} \dfrac{x_j}{j} < \sum_{j=i^2+1}^{(i+1)^2-1} \dfrac{x_{i^2}}{i^2} = ((i+1)^2-1-i^2)\dfrac{x_{i^2}}{i^2} = 2i \dfrac{x_{i^2}}{i^2} = \dfrac{2x_{i^2}}{i}

\Rightarrow \sum_{i=1}^{n} \sum_{j=i^2+1}^{(i+1)^2-1} \dfrac{x_j}{j} < 2S_n
$$
(3)​

Using inequalities (2) and (3) in equation (1), we get ##T_{(n+1)^2-1} < S_n + 2S_n = 3S_n## for any positive integer ##n##. And since ##T_j > T_i## for any ##j > i## (because ##T_n## is a sum of positive terms), it follows that ##T_n < T_{(n+1)^2-1} < 3S_n## for any ##n \in \mathbb{N}##.
(4)​

And since it is given that ##S_n \leq 1## for any ##n##, (4) implies that ##T_n < 3## for any ##n##, which is equivalent to saying that ##\dfrac{x_1}{1} + \dfrac{x_2}{2} + \dfrac{x_3}{3} + ... + \dfrac{x_n}{n} < 3## for any ##n##. Hence proved.
 
  • #35
And since it is given that ##S_n \leq 1## for any ##n##, (4) implies that ##T_n < 3## for any ##n##, which is equivalent to saying that ##\dfrac{x_1}{1} + \dfrac{x_2}{2} + \dfrac{x_3}{3} + ... + \dfrac{x_n}{n} < 3## for any ##n##. Hence proved.
Right. But let me add a shorter version (with the same ideas):

For every natural number ##n## there is a number ##k\in \mathbb{N}## such that ##k^2\leq n<(k+1)^2.## Hence
\begin{align*}
\sum_{i=1}^n \dfrac{x_i}{i}&\leq\sum_{i=2}^{k+1} \sum_{j=(i-1)^2}^{i^2-1}\dfrac{x_j}{j}\leq \sum_{i=2}^{k+1} \left(2i-1\right)\dfrac{x_{(i-1)^2}}{(i-1)^2}\\&=\sum_{i=1}^k (2i+1)\dfrac{x_{i^2}}{i^2}
\leq 3\sum_{i=1}^k \dfrac{x_{i^2}}{i}\leq 3\cdot 1 =3
\end{align*}
by the given condition.
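Not part of either proof, but a quick numerical sanity check with a concrete admissible sequence; the test sequence ##x_n = 0.8/n## below is my own choice, picked so that the hypothesis ##S_n \leq 1## holds for every ##n##:

```python
# Sanity check of problem 13 with the test sequence x_n = 0.8/n (my choice:
# it is positive, monotone decreasing, and satisfies S_n <= 1 for all n).
def x(n):
    return 0.8 / n

N = 1000
S_N = sum(x(i * i) / i for i in range(1, N + 1))   # hypothesis side
T_N = sum(x(i) / i for i in range(1, N + 1))       # conclusion side
print(f"S_{N} = {S_N:.4f} (should be <= 1),  T_{N} = {T_N:.4f} (should be <= 3)")
```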
 
  • #36
julian
Problem 10.

A way of proving that ##\pi## is irrational is considering the integral

\begin{align*}
I_n = \frac{1}{n!} \int_0^\pi C^n [(\pi - x) x]^n \sin x dx .
\end{align*}

You show that ##0 < I_n < 1## for ##n## large enough. Then you show that, for all ##n \geq 0##, ##I_n## is an integer if ##\pi = a/b## and we choose ##C = b##, obtaining a contradiction.

It is trivial to use this method to prove that ##\pi^2## is irrational.

Let us quickly run through the details of this argument for proving ##\pi## to be irrational. We want to obtain a contradiction by putting

\begin{align*}
\pi = \frac{a}{b} .
\end{align*}

Write

\begin{align*}
I_n = \frac{1}{n!} \int_0^\pi C^n [(\pi - x) x]^n \sin x dx
\end{align*}

The polynomial ##C^n [(\pi - x) x]^n## is non-negative over ##[0, \pi]##, symmetric about the point ##\pi / 2##, and attains its maximum ##(C \pi^2 /4)^n## there. Since ##0 \leq \sin x \leq 1## on ##[0, \pi]## and ##\sin x > 0## on the interior, this implies ##0 < I_n < \pi \frac{(C \pi^2 / 4)^n}{n!}##, and ##\pi \frac{(C \pi^2 / 4)^n}{n!}## can be made arbitrarily small by choosing ##n## large enough.

Integrating ##I_n## by parts twice gives

\begin{align*}
I_n = (4n - 2) C I_{n-1} - C^2 \pi^2 I_{n-2} \qquad (*)
\end{align*}

Also we have

\begin{align*}
I_0 = 2 \quad \text{and} \quad I_1 = 4C \qquad (**)
\end{align*}

By choosing ##C = b## we have, by our original assumption, that ##C^2 \pi^2 = b^2 \pi^2 = a^2##, an integer. We then have by ##(*)## and ##(**)## that all the ##I_n## are integers, in contradiction with the fact that ##0 < I_n < 1## for ##n## large enough.

Now to prove that ##\pi^2## is irrational. We obtain a contradiction by putting

\begin{align*}
\pi^2 = \frac{p}{q} .
\end{align*}

By choosing ##C = q## we have that ##C^2 \pi^2 = q^2 \pi^2 = pq##, an integer. We then have by ##(*)## and ##(**)## that all the ##I_n## are integers, in contradiction with the fact that ##0 < I_n < 1## for ##n## large enough. As easy as pie.
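A numerical cross-check of ##(*)## and ##(**)## (not part of the argument; the quadrature call and the choice ##C=1## are my own, purely for illustration):

```python
from math import factorial, pi, sin
from scipy.integrate import quad

# Check I_0 = 2, I_1 = 4C and the recurrence (*) numerically, for C = 1.
C = 1.0

def I(n):
    integrand = lambda x: (C * (pi - x) * x) ** n * sin(x) / factorial(n)
    return quad(integrand, 0, pi)[0]

print(round(I(0), 6), round(I(1), 6))          # expect 2.0 and 4.0 (= 4C)
for n in range(2, 6):
    rhs = (4 * n - 2) * C * I(n - 1) - C**2 * pi**2 * I(n - 2)
    print(n, round(I(n), 6), round(rhs, 6))    # the two columns should agree
```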
 
  • #37
137
55
We have only two conditions: the work has to be done (##\cup X_i=S##), and doing nothing at all (##x_i=0##) will always be accepted.

Are those 2 conditions really sufficient to guarantee that there will exist a partition acceptable to all workers? Consider a simple 2-worker case where worker 1 accepts only partitions where ##x_1 \in \mathbb{Q} \cap [0,1)## (note that ##x_1 = 1## is not acceptable to worker 1 in this case) and worker 2 accepts only partitions where ##x_2 \in \{0\} \cup ((\mathbb{R}\setminus \mathbb{Q}) \cap [0,1])##. Then every pair ##(x_1, x_2) \in \mathbb{R}_{\geq 0} \times \mathbb{R}_{\geq 0}## that sums to 1 will be acceptable to one of the two workers, but none of these will be acceptable to both workers, since when ##x_1 + x_2 = 1##, either both values are rational or both are irrational. So I guess there must be some additional conditions implicitly assumed by the question. What are those? Or am I missing something obvious?
 
  • #38
Are those 2 conditions really sufficient to guarantee that there will exist a partition acceptable to all workers? Or am I missing something obvious?
##(0,1),(1,0) \in S.## But you are right insofar as we need a metric and its induced topology.
 
  • #39
BWV
8. Let ##X_1,\ldots,X_n## be independent random variables, such that almost surely ##a_i\leq X_i-E(X_i)\leq b_i##, and let ##0<c\in \mathbb{R}##. Prove that
$$
\operatorname{Pr}\left(\sum_{i=1}^n (X_i-E (X_i))\geq c\right)\leq \exp\left(\dfrac{-2c^2}{\sum_{i=1}^n(b_i-a_i)^2}\right).
$$

I've been struggling with this. As no one else has tried, here is what I have been trying, with, I think, some issues in the final argument.
So first, set ##n=1## to eliminate the summation:

##\operatorname{Pr}\left((X-E (X))\geq c\right)\leq \exp\left(\dfrac{-2c^2}{(b-a)^2}\right)##

Set ##c=k\sigma## and ##(b-a)=\lambda\sigma##, where ##\sigma^2## is the variance of ##X## (since almost surely ##X-E(X)## lies within ##[a,b]##, ##X## has finite variance).

##\operatorname{Pr}\left((X-E (X))\geq k\sigma\right)\leq \exp\left(\dfrac{-2k^2\sigma^2}{\lambda^2\sigma^2}\right)##

cancel the variance term

##\operatorname{Pr}\left((X-E (X))\geq k\sigma\right)\leq \exp\left(\dfrac{-2k^2}{\lambda^2}\right)##

Note Chebyshev's theorem:
##\operatorname{Pr}\left((X-E (X))\geq k\sigma\right)\leq \frac{1}{k^2}##

Since almost surely ##a_i\leq X_i-E(X_i)\leq b_i##, ##\lambda \gg 3##, as per Chebyshev:
##\operatorname{Pr}\left((X-E (X))\geq 3\sigma\right)\leq \frac{1}{9}##

so two cases:
1)##k\geq \lambda##.
As ##(b-a)=\lambda\sigma##, ##k\geq \lambda##, and ##\operatorname{Pr}\left(X-E(X) > \lambda\sigma\right) = 0## almost surely, then

##\operatorname{Pr}\left((X-E (X))\geq k\sigma\right)\leq \exp\left(\dfrac{-2k^2}{\lambda^2}\right)##

2) where ##k<\lambda##
##\exp\left(\dfrac{-2k^2}{\lambda^2}\right) \geq \exp(-2)##

per Chebyshev
##\operatorname{Pr}\left((X-E (X))\geq k\sigma\right)\leq \frac{1}{k^2}##

for ##k\geq3##,
##\frac{1}{k^2} < \exp(-2)##

For ##k<3## (this argument is pretty weak, got stuck here): as ##k<3<\lambda## and ##\lambda \gg 3##, then

##\frac{-2k^2}{\lambda^2} \to 0##

so ##\exp\left(\dfrac{-2k^2}{\lambda^2}\right) \approx 1##

Then I can wave my hands, bring the summation back in, and it should likewise hold.
 
  • #40
I've been struggling with this. As no one else has tried, here is what I have been trying, with, I think, some issues in the final argument. [...]
My proof uses the exponential version of Chebyshev
\begin{align*}
\operatorname{Pr}(X\geq a)&=\operatorname{Pr}(\exp(X)\geq \exp(a))\\
&\leq \inf_{p>0} \dfrac{1}{\exp(ap)}\int_\mathbb{R}\exp(pX)\,d\operatorname{Pr}=\inf_{p>0} \dfrac{E(\exp(pX))}{\exp(pa)}
\end{align*}
on ##Y_i:=X_i-E(X_i)## and the convexity of the exponential function.
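Not a proof, but a quick Monte-Carlo sanity check of the stated bound in one concrete case; the uniform variables and the values of ##n## and ##c## below are my own choices, purely for illustration:

```python
import numpy as np

# Monte-Carlo sanity check of the bound in problem 8.
# Assumption (mine): X_i ~ Uniform(0, 1), so X_i - E(X_i) lies in
# [a_i, b_i] = [-1/2, 1/2] and (b_i - a_i)^2 = 1 for every i.
rng = np.random.default_rng(0)
n, c, trials = 10, 1.0, 200_000

samples = rng.random((trials, n))            # rows of (X_1, ..., X_n)
deviations = samples.sum(axis=1) - 0.5 * n   # sum_i (X_i - E(X_i))

empirical = np.mean(deviations >= c)
bound = np.exp(-2 * c**2 / n)                # here sum_i (b_i - a_i)^2 = n
print(f"empirical Pr = {empirical:.4f}  <=  bound = {bound:.4f}")
```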
 
  • #41

[Attachment: 23-10-2021.pdf]
  • #43
Right. But could you write it out here instead of uploading a pdf?
Here is an explanation of how you can type formulas on PF: https://www.physicsforums.com/help/latexhelp/
It looks as if you had already used TeX to write it, so you can just copy and paste it here. The double $$ remains the same, and the single $ must be replaced by ##.
I used Texmacs to write the answer... I thought it would work out because I was able to convert the file into .html, .pdf, .tex and .png, but I was unable to find a way to paste the answer into the comments... so maybe I have to try Lyx next...
 
  • #44
I used Texmacs to write the answer... I thought it would work out because I was able to convert the file into .html, .pdf, .tex and .png, but I was unable to find a way to paste the answer into the comments... so maybe I have to try Lyx next...
Simpler than that: mark your text (Ctrl+A), copy it (Ctrl+C), and paste it here (Ctrl+V).
 
  • #45
11. (open) Find all functions ##f,g## such that $$
f,g : \mathbb{R} \setminus \{-1,0,1\} \rightarrow \mathbb{R}
$$
$$
xf(x) = 1 + \frac{1}{x} g(\frac{1}{x}) \text{ and } \frac{1}{x^2}f(\frac{1}{x}) = x^2 g(x)
$$
##xf(x) = 1 + \frac{1}{x} g(\frac{1}{x}) \Rightarrow f(x) = \frac{1}{x} + \frac{1}{x^2} g(\frac{1}{x})##
(1.1)​
##\Rightarrow f(\frac{1}{x}) = x + {x^2} g(x)##
(1.2)​

Substituting equation (1.2) in the other given condition in the question, i.e. ##\frac{1}{x^2}f(\frac{1}{x}) = x^2 g(x)##, we get the solution for function ##g##: $$
\frac{1}{x} + g(x) = x^2 g(x) \Rightarrow g(x) = \frac{1}{x(x^2-1)}
$$
(2)​

Using equation (2) in equation (1.1) gives the solution for ##f##: $$
f(x) = \frac{1}{x} + \frac{1}{x^2} \dfrac{1}{\frac{1}{x}(\frac{1}{x^2}-1)} = \frac{1}{x} + \frac{x}{1-x^2} = \frac{1}{x(1-x^2)}
$$
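A quick exact-arithmetic check of the result (my own verification, not part of the derivation), confirming that both functional equations hold at a few sample points outside ##\{-1,0,1\}##:

```python
from fractions import Fraction

# Check that f(x) = 1/(x(1-x^2)) and g(x) = 1/(x(x^2-1)) satisfy both equations.
def f(x):
    return 1 / (x * (1 - x**2))

def g(x):
    return 1 / (x * (x**2 - 1))

for x in (Fraction(1, 2), Fraction(-3), Fraction(5, 7)):
    assert x * f(x) == 1 + (1 / x) * g(1 / x)
    assert (1 / x**2) * f(1 / x) == x**2 * g(x)
print("both functional equations hold at the sample points")
```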
 
  • #46
14. (open) Solve the following equation system for real numbers:
(1)##\,\,\, x + xy + xy^2 = -21##
(2)## \,\,\, y + xy + x^2 y = 14##
(3)## \,\,\, x + y = -1##

Eq. (3) implies that ##y = -(1+x)##. Substituting for ##y## based on this in Eq. (1) gives:
##x(1 + y + y^2) = x(1 -1 - x + 1 + 2x + x^2) = x(1+x+x^2) = -21##
(4)​

Similarly substituting for ##y## in Eq. (2) gives:
##y(1+x+x^2) = -(1+x)(1+x+x^2) = 14 ##
(5)​

From equations (4) and (5), we get:$$
\dfrac{-(1+x)(1+x+x^2)}{x(1+x+x^2)} = \dfrac{14}{-21} = - \dfrac{2}{3}
$$
(6)​

##(1+x+x^2)## cannot be zero for any real-valued ##x## since ##1+x+x^2 = 0## has no real-valued roots. Therefore that expression can be cancelled from the numerator and denominator of Eq. (6) which then yields the solution for ##x##: $$
\dfrac{-(1+x)}{x} = - \dfrac{2}{3} \Rightarrow \dfrac{1}{x} + 1 = \dfrac{2}{3} \Rightarrow x = -3
$$

Substituting this value of ##x## in Eq. (3) gives the solution for ##y##: $$
y = -(1+x) = -(1 - 3) = 2
$$
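For completeness, a direct substitution check of the solution (nothing beyond redoing the arithmetic):

```python
# Plug (x, y) = (-3, 2) back into the original system.
x, y = -3, 2
print(x + x*y + x*y**2)   # expect -21
print(y + x*y + x**2*y)   # expect 14
print(x + y)              # expect -1
```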
 
  • #47
When searching for the definition of linearly independent homomorphisms, I found a proof of a 'special' case of this problem. But I think it is nearly the exact same proof for this problem; so feel free to delete my post.

Proof: ##(\Longrightarrow):## Let ##P(n)## be the statement that ##\sigma_1, \dots, \sigma_n## are linearly independent if they are distinct, and suppose ##\sigma_1, \dots, \sigma_n## are distinct. For the base case, suppose ##n = 1## and that there exists ##c \in \mathbb{F}## such that ##c\sigma_1(g) = 0## for all ##g \in G##. Since ##\sigma_1(g)## is nonzero, we must have ##c = 0##. We proceed by induction.

Suppose there exist ##c_1, \dots, c_n \in \mathbb{F}## such that for all ##g \in G## we have $$c_1\sigma_1(g) + \dots + c_n\sigma_n(g) = 0 \qquad (1)$$

Since these homomorphisms are distinct, we can find ##g_0 \in G## such that ##\sigma_1(g_0) \neq \sigma_n(g_0)##. Then we have

$$c_1\sigma_1(gg_0) + \dots + c_n\sigma_n(gg_0) = 0$$

which can be rewritten as

$$c_1\sigma_1(g)\sigma_1(g_0) + \dots + c_n\sigma_n(g)\sigma_n(g_0) = 0 \qquad (2)$$

Lastly, we multiply equation (1) by ##\sigma_n(g_0)##. This gives

$$c_1\sigma_1(g)\sigma_n(g_0) + \dots + c_n\sigma_n(g)\sigma_n(g_0) = 0 \qquad (3)$$

Subtracting (3) from (2) gives

$$c_1\sigma_1(g)(\sigma_1(g_0) - \sigma_n(g_0)) + \dots + c_{n-1}\sigma_{n-1}(g)(\sigma_{n-1}(g_0) - \sigma_{n}(g_0)) + 0 = 0$$

By ##P(n-1)##, we have ##c_1(\sigma_1(g_0) - \sigma_n(g_0)) = 0##. Since ##\sigma_1(g_0) - \sigma_n(g_0)## is nonzero, we have ##c_1 = 0##. Plugging this into (1), we have

$$0 + c_2\sigma_2(g) + \dots + c_n\sigma_n(g) = 0 $$

By ##P(n-1)##, we have ##c_2 = \dots = c_n = 0##. This completes the induction.


##(\Longleftarrow):## Suppose ##\sigma_1, \dots, \sigma_n## are not all distinct. WLOG, suppose ##\sigma_1 = \sigma_2##. Then choose ##c_1 = 1, c_2 = -1, c_3 = \dots = c_n = 0##. We see for all ##g \in G##,

$$c_1\sigma_1(g) + \dots + c_n\sigma_n(g) = \sigma_1(g) - \sigma_2(g) = 0$$

which shows ##\sigma_1, \dots, \sigma_n## are not linearly independent. []
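A small numerical illustration of the statement (my own example, not part of the proof): the ##n## distinct characters ##\sigma_k(g)=e^{2\pi i kg/n}## of ##G=\mathbb{Z}/n\mathbb{Z}## into ##\mathbb{C}^\times## form a matrix of full rank, so no nontrivial linear combination of them vanishes on all of ##G##:

```python
import numpy as np

# M[k, g] = sigma_k(g) = exp(2*pi*i*k*g/n) for G = Z/5Z and its 5 distinct characters.
n = 5
k = np.arange(n)
g = np.arange(n)
M = np.exp(2j * np.pi * np.outer(k, g) / n)

# Full rank <=> the only coefficients with sum_k c_k * sigma_k(g) = 0 for all g
# are c_1 = ... = c_n = 0, i.e. the characters are linearly independent.
print(np.linalg.matrix_rank(M))   # expect 5
```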
 
  • #48
I'm not sure whether this is right. In my example, we have the point ##(0.1,0.1,0.8)## as the only solution that all three workers agree upon. A solution ##T=(0.3,0.3,0.4)## isn't possible even if workers agree on less workload: only ##Z## would have less work, ##X## and ##Y## would have more and so wouldn't agree on ##T##.
But your diagram seems to suggest that X,Y accept workloads up to 0.3 when Z's workload is 0. So I am still confused as to why ##T=(0.3,0.3,0.4)## is unacceptable to workers X, Y in that example. However, I do agree the solution I had posted wouldn't be applicable to the more general setting you mentioned.
 
  • #49
But your diagram seems to suggest that X,Y accept workloads up to 0.3 when Z's workload is 0. So I am still confused as to why ##T=(0.3,0.3,0.4)## is unacceptable to workers X, Y in that example. However, I do agree the solution I had posted wouldn't be applicable to the more general setting you mentioned.
We are looking for a point where all three (n) areas (polyhedrons) intersect. Such a point exists, otherwise, the problem statement would be mean. My picture only illustrates why your proof doesn't work.
 
  • #50
We are looking for a point where all three (n) areas (polyhedrons) intersect. Such a point exists, otherwise, the problem statement would be mean. My picture only illustrates why your proof doesn't work.

But ##(0.3, 0.3, 0.4)## also lies in the region where the 3 areas intersect. Assuming that your diagram was illustrating the case where workers 1, 2 and 3 accept workloads in the ranges ##[0,0.3], [0,0.3], [0, 0.8]## respectively, the intersection of the 3 polyhedrons (each worker's "acceptance" region would be a trapezium lying on the plane defined by ##x+y+z=1## in this example) would be an area, not just a single point. In the image I have attached, the intersection region is colored pink. The green area is the acceptance region of worker 3 (##z \in [0,0.8]## and lying on ##x+y+z=1##). The region acceptable to workers 1 and 3 is the green area lying within the red rectangle (the red rectangle represents ##x## bounded in ##[0,0.3]##), while the region acceptable to workers 2 and 3 is the green area lying within the blue rectangle (the blue rectangle represents ##y## bounded in ##[0,0.3]##). It can be seen that the pink region meets the acceptance criteria of all 3 workers. Point ##(0.3, 0.3, 0.4)## happens to be the point lying on the boundaries of the red and blue rectangles while also lying on the green area. The point you mention, ##(0.1, 0.1, 0.8)##, also lies in the pink region; it is on the boundary of the green area but inside the bounds of the red and blue rectangles.
[Image: plot1.png — the plane ##x+y+z=1## with the three workers' acceptance regions; their intersection is shaded pink]
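To make the membership claim concrete, a tiny check under the interval assumption used in this post (this assumption is exactly the point under discussion):

```python
# Under the interval reading (worker i accepts workloads in [0, w_i] with
# (w_1, w_2, w_3) = (0.3, 0.3, 0.8)), both points below lie in all three
# acceptance regions and on the plane x + y + z = 1.
bounds = (0.3, 0.3, 0.8)

for point in ((0.3, 0.3, 0.4), (0.1, 0.1, 0.8)):
    ok = all(0 <= p <= b for p, b in zip(point, bounds)) and abs(sum(point) - 1) < 1e-12
    print(point, "->", ok)
```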
 

