Basic Math Challenge - August 2018

  • Thread starter fresh_42

Math_QED

If I made no mistakes ##\lambda = \ln(2)##

So?

Could you give a quotation of this "basic result"?
I thought it was basic because it was mentioned in my university's probability theory book, but it does mention that the proof is not trivial. So maybe a basic theorem, but certainly not a basic proof. I also can't seem to find a good source to back it up. When university starts again, I will ask my probability professor for a reference.
 

fresh_42

If I made no mistakes ##\lambda = \ln(2)##
Yes.
$$
F(x) =
\begin{cases}
0 & \text{if } x < 0 \\
1-\dfrac{1}{2}e^{-(\log 2)\,x} & \text{if } x \geq 0
\end{cases}
$$
I thought it was basic because it was mentioned in my university's probability theory book, but it does mention that the proof is not trivial. So maybe a basic theorem, but certainly not a basic proof. I also can't seem to find a good source to back it up. When university starts again, I will ask my probability professor for a reference.
If we take the Wikipedia definition, then we have (Lebesgue) integrability and we are led into measure theory. So it all depends on whether continuity is required or not. As it commonly is, the answer should be no.
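As a numerical cross-check (not part of the original solution): the ##F## above is the distribution function of the mixture ##\frac{1}{2}\delta_0 + \frac{1}{2}\operatorname{Exp}(\ln 2)##, which a short Monte Carlo sketch can confirm. The helper names below are made up for the illustration.

```python
import bisect
import math
import random

random.seed(1)
LAMBDA = math.log(2)

def F(x):
    """Distribution function from the solution above."""
    return 0.0 if x < 0 else 1.0 - 0.5 * math.exp(-LAMBDA * x)

def sample():
    """One draw from the mixture 1/2 * delta_0 + 1/2 * Exp(ln 2)."""
    return 0.0 if random.random() < 0.5 else random.expovariate(LAMBDA)

N = 200_000
data = sorted(sample() for _ in range(N))

# Compare the empirical CDF with F at a few test points.
for x in (-1.0, 0.0, 0.5, 1.0, 3.0):
    emp = bisect.bisect_right(data, x) / N
    print(f"x = {x:4.1f}   empirical = {emp:.3f}   F(x) = {F(x):.3f}")
```

The empirical values agree with ##F## up to Monte Carlo noise, including the jump from ##0## to ##\frac{1}{2}## at ##x=0##.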
 

fresh_42

Solution for problem #4:

Since @lpetrich's solution doesn't contain the calculations for the integrals, uses confusingly different letters, and I made a typo when defining the paths, I now add a complete solution for problem #4, which also gives a more explicit reason for the question about a possible potential of the two vector fields:

In the first step we parameterize the two curves:
\begin{align*}
c_1\, &: \,\left[ -\frac{\pi}{2},\frac{\pi}{2} \right] \longrightarrow \gamma_1 \text{ with } c_1(t)=(\cos t,\sin t) \text{ for } \gamma_1 \text{ and } \\
c_{2,1}\, &: \,[0,1] \longrightarrow\gamma_2 \text{ with } c_{2,1}(t)=(t,t-1) \text{ and }\\
c_{2,2}\, &: \,[0,1] \longrightarrow\gamma_2 \text{ with } c_{2,2}(t)=(1-t,t) \text{ for } \gamma_2
\end{align*}
and get ##\dot{c}_{1}(t)=(-\sin t, \cos t)\; , \;\dot{c}_{2,1}(t)=(1,1)\; , \;\dot{c}_{2,2}(t)=(-1,1)\,.## Thus
\begin{align*}
\int_{\gamma_1} v\,ds &= \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} v(c_1(t)) \cdot \dot{c}_1(t) \,dt \\
&=\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \begin{bmatrix}
\sin t \\ \cos t -\sin t
\end{bmatrix} \cdot \begin{bmatrix}
-\sin t \\ \cos t
\end{bmatrix}\, dt\\
&=\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} -\sin^2 t + \cos^2 t - \sin t \cos t \,dt\\
&=\left[ \sin t \cos t + \frac{1}{2}\cos^2 t \right]_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\\
&=0
\end{align*}
\begin{align*}
\int_{\gamma_2} v\,ds &= \int_0^1 v(c_{2,1}(t)) \cdot \dot{c}_{2,1}(t) \,dt + \int_0^1 v(c_{2,2}(t)) \cdot \dot{c}_{2,2}(t) \,dt \\
&= \int_0^1 \begin{bmatrix}
t-1\\1 \end{bmatrix} \cdot \begin{bmatrix} 1 \\ 1 \end{bmatrix} + \begin{bmatrix} t \\1-2t \end{bmatrix} \cdot \begin{bmatrix} -1\\1 \end{bmatrix} \,dt \\
&= \int_0^1 1-2t \,dt\\
&= \left[ t-t^2 \right]_0^1 \\
&= 0
\end{align*}
\begin{align*}
\int_{\gamma_1} w\,ds &= \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} w(c_1(t)) \cdot \dot{c}_1(t) \,dt \\
&=\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \begin{bmatrix}
\sin t - \cos t \\ -\sin t
\end{bmatrix} \cdot \begin{bmatrix}
-\sin t \\ \cos t
\end{bmatrix}\, dt\\
&=\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} -\sin^2 t \,dt\\
&=\left[ -\frac{1}{2} (t - \sin t \cos t ) \right]_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\\
&=-\frac{\pi}{2}
\end{align*}
\begin{align*}
\int_{\gamma_2} w\,ds &= \int_0^1 w(c_{2,1}(t)) \cdot \dot{c}_{2,1}(t) \,dt + \int_0^1 w(c_{2,2}(t)) \cdot \dot{c}_{2,2}(t) \,dt \\
&= \int_0^1 \begin{bmatrix} -1 \\ 1-t
\end{bmatrix} \cdot \begin{bmatrix} 1 \\ 1 \end{bmatrix} + \begin{bmatrix} 2t-1 \\ -t \end{bmatrix} \cdot \begin{bmatrix} -1\\1 \end{bmatrix} \,dt \\
&= \int_0^1 1-4t \,dt\\
&= \left[ t-2t^2 \right]_0^1 \\
&= -1
\end{align*}
So the vector field ##v## is apparently path independent whereas ##w## is not. We verify this by computing their curl.
$$\operatorname{rot}\vec{F}(x,y) = \operatorname{curl}\vec{F}(x,y) =
\dfrac{\partial \vec{F}_y}{\partial x} - \dfrac{\partial \vec{F}_x}{\partial y} =
\begin{cases}
0 & \text{if } \vec{F} = v \\
-1 & \text{if } \vec{F} = w
\end{cases}$$
So ##v## has a potential and thus is path independent, and ##w## has none.
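As a sanity check, the four line integrals can also be approximated numerically. The sketch below (all helper names hypothetical) reads off ##v(x,y)=(y,\;x-y)## and ##w(x,y)=(y-x,\;-y)## from the integrands above and evaluates the parameterized integrals with a midpoint rule:

```python
import math

def v(x, y):
    """Vector field v, read off from the integrands above."""
    return (y, x - y)

def w(x, y):
    """Vector field w, read off from the integrands above."""
    return (y - x, -y)

def line_integral(field, c, cdot, t0, t1, n=100_000):
    """Midpoint-rule approximation of int field(c(t)) . c'(t) dt."""
    h = (t1 - t0) / n
    total = 0.0
    for i in range(n):
        t = t0 + (i + 0.5) * h
        fx, fy = field(*c(t))
        dx, dy = cdot(t)
        total += (fx * dx + fy * dy) * h
    return total

# gamma_1: right half circle; gamma_2: the two straight segments.
c1 = lambda t: (math.cos(t), math.sin(t))
c1dot = lambda t: (-math.sin(t), math.cos(t))
c21 = lambda t: (t, t - 1.0)
c21dot = lambda t: (1.0, 1.0)
c22 = lambda t: (1.0 - t, t)
c22dot = lambda t: (-1.0, 1.0)

for name, field in (("v", v), ("w", w)):
    g1 = line_integral(field, c1, c1dot, -math.pi / 2, math.pi / 2)
    g2 = (line_integral(field, c21, c21dot, 0.0, 1.0)
          + line_integral(field, c22, c22dot, 0.0, 1.0))
    print(f"{name}: gamma_1 = {g1:.6f}, gamma_2 = {g2:.6f}")
```

The output reproduces ##0,\,0## for ##v## and ##-\frac{\pi}{2},\,-1## for ##w## up to discretization error.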
 

Math_QED

So?

Could you give a quotation of this "basic result"?
But anyway, this is the "Basic" challenge, so we do not talk about measure theory, and integrability means Riemann, not Lebesgue. However, since you pointed out the critical point ##x=0##, it is a correct answer; in basic math terms there is no density function because of the discontinuity at ##x=0\,.##

Summary:

Basic: Because the distribution function isn't continuous (at ##x=0##) and thus not differentiable, it cannot result from a density function.
(acceptable answer in this case)

Advanced mathematics: The discontinuity at ##x=0## alone isn't a satisfactory reason to rule out a density function, which becomes obvious if we draw the graph. Overcoming such obstacles is the main reason why probability theory nowadays is based on measure theory and Lebesgue integration rather than combinatorics and Riemann integrability. So in case you encounter ##\sigma-##algebras somewhere, this is the reason: they make sets measurable (i.e. assign them a volume) in a meaningful way, although the sets might contain a few problematic points. It means: make the best out of the misery, as long as we can still get a well-defined theory. It's like having a ball minus a point or minus a two-dimensional slice, in which case the ball still has the volume ##\frac{4}{3}\pi r^3\,.## That's the basic idea behind measure theory and the related ##\sigma-##algebras, so don't be scared.
Revisiting this thread, I feel like some things should be cleared up for future readers of these posts.

From a measure-theoretic viewpoint, it is necessary that a distribution function ##F## is continuous if we want an absolutely continuous distribution for a random variable ##X##. This is because ##F## is continuous at ##x## if and only if ##P(X = x) = 0##, and if our distribution is absolutely continuous, we can write
##\mu(A) = \int_A f \,d \lambda## where ##\mu(]-\infty, x]) = F(x)##. In particular, ##P(X = x)## must be ##0## for all ##x##.

So in any case there is no density, not even from the measure-theoretic viewpoint.

The result I quoted is the following:

Let ##\mu: \mathcal{R} \to [0,1]## be a probability measure on the Borel sets of the real numbers such that the distribution function ##F(x) = \mu(]-\infty,x])## is continuous, and differentiable everywhere except on an at most countable set. Then ##\mu## is absolutely continuous, i.e. the distribution function ##F## has a density. The result appears in most measure theory books that treat differentiation, though maybe not in the probability-theoretic form I wrote down here.

Because of the discontinuity at ##0##, it was not applicable.
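The point can be illustrated in a few lines: the jump of ##F## at ##0## equals ##P(X=0)##, which is ##\frac{1}{2}\neq 0##, so by the continuity criterion above no density can exist. A minimal sketch, with ##F## taken from the earlier solution:

```python
import math

def F(x):
    """Distribution function from the earlier solution."""
    return 0.0 if x < 0 else 1.0 - 0.5 * math.exp(-math.log(2) * x)

left_limit = F(-1e-12)       # F(0-) = 0, since F vanishes on x < 0
jump = F(0.0) - left_limit   # equals P(X = 0)
print(f"P(X = 0) = F(0) - F(0-) = {jump}")  # prints 0.5, not 0, so no density
```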
 
Solution for problem #1:
Let $$A=\sum 2^{n}a_{2^{n}}\,,\qquad B=\sum a_{n}\,,\qquad C=\sum 2^{n-1}a_{2^{n}}.$$
Assume there were an ##(a_{n})## such that ##A## diverges and ##B## converges; we derive a contradiction:

Because ##(a_{n})## is monotone decreasing, it is easy to see that $$A>B>C$$ and because ##B## converges, ##C## must converge as well.

But $$A-C=C\,,\qquad \text{i.e.}\qquad A=2C\,.$$

This is a contradiction, because a constant times a convergent series cannot be a divergent series.

Hence there is no ##(a_{n})## that makes ##A## diverge and ##B## converge.
Finally, the convergence of ##B## implies the convergence of ##A##, and vice versa.
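The comparison ##A>B>C## and the identity ##A=2C## can be illustrated with partial sums for a concrete decreasing sequence, e.g. ##a_n = 1/n^2## (a hypothetical test case, not part of the proof):

```python
# a_n = 1/n^2: a non-negative, monotone decreasing test sequence.
def a(n):
    return 1.0 / (n * n)

M = 14  # use dyadic blocks up to 2^M
A = sum(2**n * a(2**n) for n in range(M + 1))        # condensed series
B = sum(a(n) for n in range(1, 2**(M + 1)))          # original series
C = sum(2**(n - 1) * a(2**n) for n in range(M + 1))  # half-condensed series

print(f"A = {A:.6f}   B = {B:.6f}   C = {C:.6f}")    # A > B > C and A = 2C
```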
 

fresh_42

because ##(a_{n})## is monotone decreasing, it is easy to see that
$$A>B>C$$
This is the crucial point and "easy to see" shouldn't occur in the Basic Challenge.
 
1. Given a non-negative, monotone decreasing sequence ##(a_n)_{n \in \mathbb{N}}\subseteq \mathbb{R}\,.## Prove that ##\sum_{n \in \mathbb{N}}a_n## converges if and only if ##\sum_{n \in \mathbb{N}_0}2^na_{2^n}## converges.
$$S = \sum_{n \in \mathbb{N}}a_n \\ W = \sum_{n \in \mathbb{N}_0}2^na_{2^n} $$
If ##~W## converges then ##S## also converges
$$ a_1 \leq a_1 \\ a_2 + a_3 \leq 2a_2 \\ a_4 + a_5+a_6 + a_7 \leq 4a_4 \\ \dots \\a_{2^n} + a_{2^n+1}+ \dots + a_{2^{n+1}-1} \leq 2^n a_{2^n} \\ \dots$$
If ##~S## converges then ##W## also converges
$$ 2a_1 \gt a_1 \\ 2a_2 \geq 2 a_2 \\ 2(a_3+a_4) \geq 4 a_4 \\ 2(a_5+a_6+a_7+a_8) \geq 8a_8 \\ \dots \\ 2(a_{2^n+1}+a_{2^n+2}+ \dots +a_{2^{n+1}}) \geq 2^{n+1}a_{2^{n+1}} \\ \dots$$
 

fresh_42

$$S = \sum_{n \in \mathbb{N}}a_n \\ W = \sum_{n \in \mathbb{N}_0}2^na_{2^n} $$
If ##~W## converges then ##S## also converges
$$ a_1 \leq a_1 \\ a_2 + a_3 \leq 2a_2 \\ a_4 + a_5+a_6 + a_7 \leq 4a_4 \\ \dots \\a_{2^n} + a_{2^n+1}+ \dots + a_{2^{n+1}-1} \leq 2^n a_{2^n} \\ \dots$$
If ##~S## converges then ##W## also converges
$$ 2a_1 \gt a_1 \\ 2a_2 \geq 2 a_2 \\ 2(a_3+a_4) \geq 4 a_4 \\ 2(a_5+a_6+a_7+a_8) \geq 8a_8 \\ \dots \\ 2(a_{2^n+1}+a_{2^n+2}+ \dots +a_{2^{n+1}}) \geq 2^{n+1}a_{2^{n+1}} \\ \dots$$
This is correct, as it is in here (I know you didn't see it earlier):
And you deserve the same comment, too :wink::
Your proof is correct, although I think that working with partial sums instead of infinitely many dots would have been a bit more professional. Here is what I mean (same argument, just written a bit differently):

Cauchy's Condensation criterion.
Let's assume ##\sum_{n \in \mathbb{N}}a_n## converges. We set ##S_{n}=\sum_{k=1}^{n}a_k## and calculate
\begin{align*}
S_{2^n} &\geq a_1+a_2+2a_4+4a_8+ \ldots +2^{n-1}a_{2^n}\\
&\geq \frac{1}{2}\left(a_1+2a_2+4a_4+8a_8+\ldots +2^na_{2^n}\right)\\
&=\frac{1}{2}\sum_{k=0}^{n}2^ka_{2^k}
\end{align*}
Since ##\sum_{k=1}^{\infty} a_k## converges, so does the sequence ##(S_n)_{n \in \mathbb{N}}## of partial sums and thus also the subsequence ##(S_{2^n})_{n \in \mathbb{N}}##. But twice this subsequence bounds the non-negative partial sums ##\sum_{k=0}^{n}2^ka_{2^k}## from above, i.e. ##\sum_{k=0}^{\infty}2^ka_{2^k}## converges.

Let now ##n<2^{m+1}-1\,.## Then
\begin{align*}
\sum_{k=1}^{n}a_k&\leq \sum_{k=1}^{2^{m+1}-1}a_k\\
&\leq a_1 + (a_2+a_2)+(a_4+a_4+a_4+a_4)+(a_8+\ldots)+(a_{2^m}+\ldots)\\
&=\sum_{k=0}^{m}2^ka_{2^k}
\end{align*}
If ##\sum_{k=0}^{\infty}2^ka_{2^k}## converges, then the partial sums of ##\sum_{k=1}^{\infty}a_k## are bounded and the series converges, too.
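Both partial-sum bounds of this proof can be checked numerically for a concrete sequence such as ##a_k = 1/k## (a hypothetical test case; an illustration only, not part of the proof):

```python
# a_k = 1/k: a non-negative, monotone decreasing test sequence.
def a(k):
    return 1.0 / k

def S(n):
    """Partial sum S_n = a_1 + ... + a_n."""
    return sum(a(k) for k in range(1, n + 1))

def T(n):
    """Condensed partial sum: sum_{k=0}^{n} 2^k a_{2^k}."""
    return sum(2**k * a(2**k) for k in range(n + 1))

for n in range(1, 11):
    assert S(2**n) >= 0.5 * T(n)      # first bound of the proof
    assert S(2**(n + 1) - 1) <= T(n)  # second bound of the proof
print("both partial-sum bounds hold for n = 1..10")
```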
 
