MHB Unsolved statistics questions from other sites....

  • Thread starter: chisigma
  • Start date
  • Tags: Statistics
  • #51
chisigma said:
Posted on 06 15 2012 on www.mathhelpforum.com by the member saravananbs and not yet solved…

… if x and y are independent random variables such that $f(x)= e^{-x},\ x \ge 0$ and $g(y)= 3\ e^{-3\ y},\ y \ge 0$, find the probability distribution function of z = x/y; how can it be taken forward?…

In post #31 of this thread we found that, if X and Y are r.v.'s with p.d.f.'s $\displaystyle f_{x}(*)$ and $\displaystyle f_{y} (*)$, then the r.v. $U=\frac{X}{Y}$ has p.d.f. ...

$\displaystyle f_{u}(u)= \int_{- \infty}^{+ \infty} |v|\ f_{x\ y} (u v,v)\ dv$ (1)

Now for $f_{x}(x)=e^{-x},\ x>0$ and $f_{y}(y)=3\ e^{-3\ y},\ y>0$ we have...

$\displaystyle f_{u}(u)= \int_{0}^{\infty} 3\ v\ e^{-(u+3)\ v}\ dv = \frac{3}{(u+3)^{2}},\ u>0$ (2)
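A quick Monte Carlo check of (2) [a sketch in Python, assuming numpy; the seed and sample size are arbitrary choices]: integrating (2) gives the c.d.f. $F(u)=\frac{u}{u+3}$, which we compare with the empirical one...

```python
# Monte Carlo check of (2): a sketch, assuming numpy is available.
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
x = rng.exponential(scale=1.0, size=n)        # f_x(x) = e^{-x}
y = rng.exponential(scale=1.0 / 3.0, size=n)  # f_y(y) = 3 e^{-3y}
u = x / y

# (2) integrates to the c.d.f. F(u) = u/(u+3)
for t in (0.5, 1.0, 3.0, 10.0):
    print(t, (u < t).mean(), t / (t + 3.0))   # the two columns agree
```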

Kind regards

$\chi$ $\sigma$
 
  • #52
Posted on 06 15 2012 on www.mathhelpforum.com by the member saravananbs and not yet solved…

… if x and y are independent random variables such that

$ \displaystyle f(x)= \frac{1}{\pi\ (1+x^{2})},\ |x|<1$

... and...

$\displaystyle g(y)= y\ e^{- \frac{y^{2}}{2}},\ y>0$

...find the joint density function of z and w, where z = xy and w = x…

Kind regards

$\chi$ $\sigma$
 
  • #53
chisigma said:
Posted on 06 15 2012 on www.mathhelpforum.com by the member saravananbs and not yet solved…

… if x and y are independent random variables such that

$ \displaystyle f(x)= \frac{1}{\pi\ (1+x^{2})},\ |x|<1$

... and...

$\displaystyle g(y)= y\ e^{- \frac{y^{2}}{2}},\ y>0$

...find the joint density function of z and w, where z = xy and w = x…


In post #31 of this thread we found that, if X and Y are r.v.'s with p.d.f.'s $\displaystyle f_{x}(*)$ and $\displaystyle f_{y}(*)$, then the r.v. $U= X\ Y$ has p.d.f. ...

$\displaystyle f_{u}(u)= \int_{- \infty}^{+ \infty} \frac{1}{|v|}\ f_{x,y} (\frac{u}{v}, v)\ dv$ (1)

Now for $\displaystyle f_{x}(x)= \frac{1}{\pi\ (1+x^{2})},\ |x|<1$ and $\displaystyle f_{y}(y)= y\ e^{- \frac{y^{2}}{2}},\ y>0$ we have...

$\displaystyle f_{u} (u)= \frac{1}{\pi}\ \int_{|u|}^{\infty} \frac{e^{- \frac{v^{2}}{2}}}{1+\frac{u^{2}}{v^{2}}}\ dv$ (2)

... where the lower limit is $|u|$ because the constraint $|x|<1$ forces $v>|u|$. The integral (2), however, is not very comfortable and some more study is necessary...
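A numerical look at (2) [a sketch in Python, assuming numpy/scipy] shows part of what needs study: the given $f(x)$ integrates to $\frac{1}{2}$ on $(-1,1)$, so (2) cannot be a proper p.d.f. either...

```python
# Numerical look at (2): a sketch, assuming numpy and scipy are available.
# Note: the question's f(x) has total mass 1/2 on (-1,1), so the 'p.d.f.'
# in (2) also integrates to 1/2 rather than 1.
import numpy as np
from scipy import integrate

def f_u(u):
    integrand = lambda v: np.exp(-v**2 / 2) / (1 + u**2 / v**2)
    val, _ = integrate.quad(integrand, abs(u), np.inf)
    return val / np.pi

total, _ = integrate.quad(f_u, -np.inf, np.inf)
print(total)   # ~0.5, not 1
```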

Kind regards

$\chi$ $\sigma$
 
  • #54
Posted on 06 15 2012 on www.matematicamente.it by the member Edwavit [original in Italian…] and not yet solved…

Hello boys!... a question I can’t solve: X is a Gaussian r.v. with $\mu = 10$ and $\sigma=4$ and $P(X>C)=.05$. Find C…

Kind regards

$\chi$ $\sigma$
 
  • #55
chisigma said:
Posted on 06 15 2012 on www.matematicamente.it by the member Edwavit [original in Italian…] and not yet solved…

Hello boys!... a question I can’t solve: X is a Gaussian r.v. with $\mu = 10$ and $\sigma=4$ and $P(X>C)=.05$. Find C…

Setting $x=\frac{C-\mu}{\sigma}$, we have...

$\displaystyle P\{X<C\}= \frac{1}{2}\ \{1+ \text{erf}\ (\frac{x}{\sqrt{2}})\} = .95$ (1)

... where...

$\displaystyle \text{erf}\ (t)= \frac{2}{\sqrt{\pi}}\ \int_{0}^{t} e^{- \xi^{2}}\ d \xi$ (2)

... so that from (1) we derive...

$\displaystyle x= \sqrt{2}\ \text{erf}\ ^{-1} (.9)$ (3)

Now of course the problem is the computation of the function $\text{erf}\ ^{-1} (*)$. In...

http://www.mathhelpboards.com/threads/1223-erf

… it has been explained how to find the coefficients of the Maclaurin expansion of the function $\text{erf}\ ^{-1} (*)$, a task described as 'tedious but not very difficult'. In …

http://mathworld.wolfram.com/InverseErf.html


… we discover that the 'tedious task' was done by somebody years ago, and the result is the series expansion…

$\displaystyle \text{erf}\ ^{-1} (\frac{2\ x}{\sqrt{\pi}}) = \sum_{n=0}^{\infty} a_{n}\ x^{2n+1}$ (4)

... where...

$\displaystyle a_{n}=\frac{c_{n}}{2n+1}$ (5)

... where $c_{n}$ is the solution of the difference equation...

$\displaystyle c_{n}= \sum_{k=0}^{n-1} \frac{c_{k}\ c_{n-k-1}}{(k+1)\ (2\ k+1)},\ c_{0}=1$ (6)

The first $a_{n}$ are $a_{0}=1$, $a_{1}= \frac{1}{3}$, $a_{2}= \frac {7}{30}$, $a_{3}= \frac{127}{630}$, ... The remaining computation is relatively 'comfortable' and is left to the reader...
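For completeness, a small Python sketch [assuming scipy is available; the helper name a_coeffs is mine] that reproduces the coefficients via (5)-(6) and finishes the computation of C...

```python
# Sketch: coefficients of erf^{-1} via (5)-(6), then C for mu=10, sigma=4.
from fractions import Fraction
import math
from scipy.special import erfinv

def a_coeffs(n_terms):
    """a_n = c_n/(2n+1), with c_n from the recurrence (6)."""
    c = [Fraction(1)]
    for n in range(1, n_terms):
        c.append(sum(c[k] * c[n - k - 1] / ((k + 1) * (2 * k + 1))
                     for k in range(n)))
    return [ck / (2 * n + 1) for n, ck in enumerate(c)]

print(a_coeffs(4))                 # [1, 1/3, 7/30, 127/630]

# scipy has already done the 'tedious task' for us:
x = math.sqrt(2) * erfinv(0.9)     # eq. (3): x ~ 1.6449
print(x, 10 + 4 * x)               # C = mu + sigma*x ~ 16.58
```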

Kind regards

$\chi$ $\sigma$
 
  • #56
Posted on 06 16 2012 on www.talkstat.com by the member Ramirez and not yet solved…

A light bulb manufacturer advertises that 'the average life of our new light bulb is 50,000 seconds. An immediate adjustment will be made on any bulb that does not last 50,000 seconds'. You purchased four of these bulbs. What is the probability that all four bulbs will last more than 50,000 seconds?...

Kind regards

$\chi$ $\sigma$
 
  • #57
chisigma said:
Posted on 06 16 2012 on www.talkstat.com by the member Ramirez and not yet solved…

A light bulb manufacturer advertises that 'the average life of our new light bulb is 50,000 seconds. An immediate adjustment will be made on any bulb that does not last 50,000 seconds'. You purchased four of these bulbs. What is the probability that all four bulbs will last more than 50,000 seconds?...

Assuming, as is usual for this kind of problem, that the life time is exponentially distributed, the life T of a manufactured article with mean life time $\tau$ has p.d.f. ...

$\displaystyle f(t)= \frac{1}{\tau}\ e^{- \frac{t}{\tau}},\ t>0$ (1)

... so that the probability that the life time is greater than $\tau$ is...

$\displaystyle P\{T>\tau\} = \frac{1}{\tau}\ \int_{\tau}^{\infty} e^{-\frac{t}{\tau}}\ dt = e^{-1}$ (2)

For 4 manufactured articles the probability that all life times are greater than $\tau$ is $P=e^{-4}$...
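Numerically $e^{-4} \approx .0183$, which a simulation confirms [a sketch in Python, assuming numpy]...

```python
# Monte Carlo check of P = e^{-4}: a sketch, assuming numpy is available.
import numpy as np

rng = np.random.default_rng(1)
tau = 50_000.0
lives = rng.exponential(scale=tau, size=(10**6, 4))   # 4 bulbs per trial
print((lives > tau).all(axis=1).mean(), np.exp(-4))   # both ~0.0183
```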

Kind regards

$\chi$ $\sigma$
 
  • #58
Posted on 06 21 2012 on www.matematicamente.it by the member sairaki87 [original in Italian language...] and not yet solved...

There are two fellows A and B and an urn with 50 white balls and 1 black ball. A and B alternately extract a ball, and the winner is the one who extracts the black ball. A extracts first. What is the probability for A and for B to be the winner?...

Kind regards

$\chi$ $\sigma$
 
  • #59
chisigma said:
Posted on 06 21 2012 on www.matematicamente.it by the member sairaki87 [original in Italian language...] and not yet solved...

There are two fellows A and B and an urn with 50 white balls and 1 black ball. A and B alternately extract a ball, and the winner is the one who extracts the black ball. A extracts first. What is the probability for A and for B to be the winner?...

Of course it is sufficient to compute the probability $P_{A}$ that A is the winner; the probability that B is the winner is then $P_{B}=1-P_{A}$. If n is the overall number of balls [n-1 white and 1 black...], the probability that the black ball is extracted at the k-th extraction is...

$\displaystyle P_{k}=\frac{n-1}{n}\ \frac{n-2}{n-1}\ ... \frac{n-k+1}{n-k+2}\ \frac{1}{n-k+1}=\frac{1}{n}$ (1)

Now we have two possibilities...

a) n is even, so that $\displaystyle P_{A}= \sum_{k\ \text{odd}} \frac{1}{n}= \frac{1}{2}$

b) n is odd, so that $\displaystyle P_{A}= \sum_{k\ \text{odd}} \frac{1}{n}= \frac{n+1}{2\ n}$

Here n=51 is odd, so that $\displaystyle P_{A}= \frac{26}{51}$ and $\displaystyle P_{B}= \frac{25}{51}$...
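A simulation agrees [a sketch in Python, assuming numpy; the seed is arbitrary]...

```python
# Simulation of the urn game (n = 51 balls, A draws first): a sketch.
import numpy as np

rng = np.random.default_rng(2)
n, trials = 51, 10**6
# by (1) the position of the black ball is uniform on 1..n
k = rng.integers(1, n + 1, size=trials)
print((k % 2 == 1).mean(), (n + 1) / (2 * n))   # both ~0.5098 = 26/51
```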

Kind regards

$\chi$ $\sigma$
 
  • #60
Posted on 04 23 2012 on http://www.scienzematematiche.it/ by the user whitefang [original in Italian…] and not yet properly solved…

We are shooting at a target over a two-dimensional plane. The horizontal and vertical distances of the hits with respect to the target are normal r.v.'s with $\mu=0$ and $\sigma=4$. R is the distance between the hit and the target. Find $E\{R\}$...

Kind regards

$\chi$ $\sigma$
 
  • #61
chisigma said:
Posted on 04 23 2012 on http://www.scienzematematiche.it/ by the user whitefang [original in Italian…] and not yet properly solved…

We are shooting at a target over a two-dimensional plane. The horizontal and vertical distances of the hits with respect to the target are normal r.v.'s with $\mu=0$ and $\sigma=4$. R is the distance between the hit and the target. Find $E\{R\}$...

That is material for a basic course in probability. If X and Y are independent normal r.v.'s with mean 0 and standard deviation $\sigma$, then $R=\sqrt{X^{2}+Y^{2}}$ is Rayleigh distributed, i.e. its p.d.f. is...

$\displaystyle f(r) =\frac{r}{\sigma^{2}}\ e^{- \frac{r^{2}}{2\ \sigma^{2}}}$ (1)

... and the expected value of R is...

$\displaystyle E\{R\}= \sigma\ \sqrt{\frac{\pi}{2}}$ (2)

See for more details...

http://mathworld.wolfram.com/RayleighDistribution.html
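A Monte Carlo check of (2) [a sketch in Python, assuming numpy; for $\sigma=4$ we expect $E\{R\} \approx 5.013$]...

```python
# Monte Carlo check of E{R} = sigma*sqrt(pi/2): a sketch with numpy.
import numpy as np

rng = np.random.default_rng(3)
sigma = 4.0
x = rng.normal(0.0, sigma, 10**6)
y = rng.normal(0.0, sigma, 10**6)
print(np.hypot(x, y).mean(), sigma * np.sqrt(np.pi / 2))   # both ~5.013
```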

Kind regards

$\chi$ $\sigma$
 
  • #62

Posted on 04 19 2012 on www.mathhelpforum.com by the member cjtdevil and not yet solved…

How do you find a generating function for S(n,2) {S is the 2nd Stirling function} as a ratio of polynomials?...

That is not properly a probability question, even if it has been posted in the 'Advanced Statistics' section. Anyway it is interesting...

Kind regards

$\chi$ $\sigma$
 
  • #63
chisigma said:

Posted on 04 19 2012 on www.mathhelpforum.com by the member cjtdevil and not yet solved…

How do you find a generating function for S(n,2) {S is the 2nd Stirling function} as a ratio of polynomials?...

That is not properly a probability question, even if it has been posted in the 'Advanced Statistics' section. Anyway it is interesting...

Kind regards

$\chi$ $\sigma$


We can start from the definition of Second Kind Stirling Numbers...

$\displaystyle S(n,k)= \frac{1}{k!}\ \sum_{i=0}^{k} (-1)^{k-i}\ \binom{k}{i}\ i^{n}$ (1)

... which obey the recursive relation...

$\displaystyle S(n+1,k)= k\ S(n,k)+ S(n,k-1)$ (2)

From (1) we derive $\displaystyle S(n,1)= 1-\delta_{n}$ and $\displaystyle S(n,2)= 2^{n-1} - 1 + \frac{\delta_{n}}{2}$, where $\delta_{n}=1$ for $n=0$ and $\delta_{n}=0$ otherwise. For $n \ge 1$ the same result follows from (2), since...

$\displaystyle S(n+1,2)= 2\ S(n,2) + 1 \implies S(n,2)= 2^{n-1}-1$ (3)

... and the generating function is...

$\displaystyle f(x)=\sum_{n=1}^{\infty} (2^{n-1}-1)\ x^{n}= \frac{x}{1-2\ x} - \frac{x}{1-x} = \frac{x^{2}}{(1-x)\ (1-2\ x)}$ (4)

... which is the requested ratio of polynomials.
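A quick cross-check of (3) against the recursion (2), as a Python sketch...

```python
# Check S(n,2) = 2^{n-1} - 1 against the recursion (2): a sketch.
from functools import lru_cache

@lru_cache(maxsize=None)
def S(n, k):
    """Stirling numbers of the second kind, built from the recursion (2)."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * S(n - 1, k) + S(n - 1, k - 1)

for n in range(1, 8):
    print(n, S(n, 2), 2**(n - 1) - 1)   # the two columns agree
```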

Kind regards

$\chi$ $\sigma$
 
  • #64
Posted on 05 10 2012 on www.talkstat.com by the member rogersticks and not yet solved…

A game is played as follows: a pile contains 1 dollar and a coin is flipped. Each time a head occurs, the amount in the pile is doubled, and if a tail appears, the pile is given to the player. How much money should be paid to play this game?...

Kind regards

$\chi$ $\sigma$
 
  • #65
chisigma said:
Posted on 05 10 2012 on www.talkstat.com by the member rogersticks and not yet solved…

A game is played as follows: a pile contains 1 dollar and a coin is flipped. Each time a head occurs, the amount in the pile is doubled, and if a tail appears, the pile is given to the player. How much money should be paid to play this game?...

The probability that a tail appears after n-1 consecutive heads is $\displaystyle p_{n}= 2^{-n}$, and in that case the player's win is $\displaystyle c_{n}= 2^{n-1}-1$, so that the expected win is...

$\displaystyle C= \sum_{n=1}^{\infty} c_{n}\ p_{n} = \sum_{n=1}^{\infty} (\frac{1}{2} - 2^{-n})$ (1)

Now the series (1) diverges, so that the expected win is unbounded and no finite entry fee is 'fair': that is the classical 'Saint Petersburg Paradox'...
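An empirical illustration [a sketch in Python, assuming numpy]: the sample mean of the winnings tends to keep growing with the number of games, since there is no finite expected value to converge to...

```python
# St. Petersburg-type game: the sample mean never settles. A sketch.
import numpy as np

rng = np.random.default_rng(4)

def play():
    pile = 1
    while rng.random() < 0.5:   # heads: the pile doubles
        pile *= 2
    return pile                 # tails: the player takes the pile

for games in (10**3, 10**4, 10**5, 10**6):
    print(games, np.mean([play() for _ in range(games)]))
```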

Kind regards

$\chi$ $\sigma$
 
  • #66
Posted on 06 30 2012 on www.matematicamente.it by superfox [original in Italian…] and not yet properly solved…

How to demonstrate that, given independent r.v.'s X and Y uniformly distributed in (0,1), it is $\displaystyle P \{X^{2}+Y^{2} <1 \}= \frac{\pi}{4}$ ?...

Kind regards

$\chi$ $\sigma$
 
  • #67
chisigma said:
Posted on 06 30 2012 on www.matematicamente.it by superfox [original in Italian…] and not yet properly solved…

How to demonstrate that, given independent r.v.'s X and Y uniformly distributed in (0,1), it is $\displaystyle P \{X^{2}+Y^{2} <1 \}= \frac{\pi}{4}$ ?...

If $X$ is uniformly distributed in (0,1), then the p.d.f. $f(x)$ of $X^{2}$ can be found as follows...

$\displaystyle P\{X^{2}<x \}= \int_{0}^{\sqrt{x}} d \xi = \sqrt{x} \implies f(x)=\begin{cases}\frac{1}{2\ \sqrt{x}} &\text{if}\ 0<x<1\\ 0 &\text{otherwise} \end{cases} $ (1)

The Laplace Transform of (1) is...

$\displaystyle F(s)= \mathcal {L}\{f(x) \}= \int_{0}^{1} \frac{e^{-s\ x}}{2\ \sqrt{x}}\ dx = \frac{\sqrt{\pi}}{2}\ \frac{\text{erf}\ (\sqrt{s})}{\sqrt{s}}$ (2)

... so that the Laplace Transform of the p.d.f. $g(x)$ of the r.v. $Z=X^{2}+Y^{2}$ is...

$\displaystyle G(s)=F^{2}(s)= \frac{\pi}{4}\ \frac{\text{erf}^{2}\ (\sqrt{s})}{s}$ (3)

Now we are interested in the integral of $g(x)$ from 0 to 1, that is...

$\displaystyle P\{Z<1\}= \mathcal{L}^{-1} \{\frac{G(s)}{s}\}_{x=1} = \frac{\pi}{4}$ (4)

... even if the effective inversion in (4) is not comfortable and requires some more study...
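In the meantime, a Monte Carlo check agrees with (4) [a sketch in Python, assuming numpy; the seed is arbitrary]...

```python
# Monte Carlo check of P{X^2 + Y^2 < 1} = pi/4: a sketch with numpy.
import numpy as np

rng = np.random.default_rng(5)
x, y = rng.random(10**6), rng.random(10**6)
print((x**2 + y**2 < 1).mean(), np.pi / 4)   # both ~0.7854
```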

Kind regards

$\chi$ $\sigma$
 
  • #68
chisigma said:
If $X$ is uniformly distributed in (0,1), then the p.d.f. $f(x)$ of $X^{2}$ can be found as follows...

$\displaystyle P\{X^{2}<x \}= \int_{0}^{\sqrt{x}} d \xi = \sqrt{x} \implies f(x)=\begin{cases}\frac{1}{2\ \sqrt{x}} &\text{if}\ 0<x<1\\ 0 &\text{otherwise} \end{cases} $ (1)

The Laplace Transform of (1) is...

$\displaystyle F(s)= \mathcal {L}\{f(x) \}= \int_{0}^{1} \frac{e^{-s\ x}}{2\ \sqrt{x}}\ dx = \frac{\sqrt{\pi}}{2}\ \frac{\text{erf}\ (\sqrt{s})}{\sqrt{s}}$ (2)

... so that the Laplace Transform of the p.d.f. $g(x)$ of the r.v. $Z=X^{2}+Y^{2}$ is...

$\displaystyle G(s)=F^{2}(s)= \frac{\pi}{4}\ \frac{\text{erf}^{2}\ (\sqrt{s})}{s}$ (3)

Now we are interested in the integral of $g(x)$ from 0 to 1, that is...

$\displaystyle P\{Z<1\}= \mathcal{L}^{-1} \{\frac{G(s)}{s}\}_{x=1} = \frac{\pi}{4}$ (4)

... even if the effective inversion in (4) is not comfortable and requires some more study...

Kind regards

$\chi$ $\sigma$

Making hard work of a trivial problem: the question asks for the area of the part of the unit disc in the first quadrant, so without calculation the answer must be \(\pi/4\).

CB
 
  • #69
Posted on 07 05 2012 on www.mathhelpforum.com by the member Len and not yet solved…

Let $X_{1}$, $X_{2}$ and $X_{3}$ be i.i.d. r.v.'s with common probability density function...

$ f(x)=\begin{cases} e^{-x} &\text{if}\ x \ge 0\\ 0 &\text{otherwise} \end{cases} $

Find $\displaystyle P \{ X_{1}<X_{3} -X_{2} \}$...

Kind regards

$\chi$ $\sigma$
 
  • #70
chisigma said:
Posted on 07 05 2012 on www.mathhelpforum.com by the member Len and not yet solved…

Let $X_{1}$, $X_{2}$ and $X_{3}$ be i.i.d. r.v.'s with common probability density function...

$ f(x)=\begin{cases} e^{-x} &\text{if}\ x \ge 0\\ 0 &\text{otherwise} \end{cases} $

Find $\displaystyle P \{ X_{1}<X_{3} -X_{2} \}$...



The requested probability is of course $\displaystyle P \{X_{1} + X_{2} - X_{3} <0 \}$. The r.v.'s $X_{1}$ and $X_{2}$ have p.d.f. $ f(x)=\begin{cases} e^{-x} &\text{if}\ x \ge 0\\ 0 &\text{otherwise} \end{cases}$ and the r.v. $-X_{3}$ has p.d.f. $f(x)=\begin{cases} e^{x} &\text{if}\ x \le 0\\ 0 &\text{otherwise} \end{cases}$, so that the Fourier Transform of the p.d.f. of the r.v. $Z=X_{1} + X_{2} - X_{3}$ is, after a partial fraction expansion...

$\displaystyle F(i\ \omega) = \frac{1}{4}\ \frac{1}{1+i\ \omega} + \frac{1}{2}\ \frac{1}{(1+i\ \omega)^{2}} + \frac{1}{4}\ \frac{1}{1-i\ \omega}$ (1)

From (1) we derive immediately that $P\{X_{1} + X_{2} - X_{3} <0 \}= \frac{1}{4}$, because only the term $\frac{1}{4}\ \frac{1}{1-i\ \omega}$ contributes to the p.d.f. of Z for negative arguments, where the p.d.f. equals $\frac{1}{4}\ e^{z}$...
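A Monte Carlo check [a sketch in Python, assuming numpy]...

```python
# Monte Carlo check of P{X1 < X3 - X2} = 1/4: a sketch with numpy.
import numpy as np

rng = np.random.default_rng(6)
x1, x2, x3 = rng.exponential(1.0, size=(3, 10**6))
print((x1 < x3 - x2).mean())   # ~0.25
```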

Kind regards

$\chi$ $\sigma$
 
  • #71
Posted on 07 06 2012 on www.matematicamente.it by the member simeon [original in Italian language...] and not yet solved...

In 80000 trials we have 50000 'favorable results'... what is the 'error' if we estimate the probability as $p=\frac{50000}{80000}$?...

Although not 'rigorously expressed' that is a 'top probability problem' ...

Kind regards

$\chi$ $\sigma$
 
  • #72
chisigma said:
Posted on 07 06 2012 on www.matematicamente.it by the member simeon [original in Italian language...] and not yet solved...

In 80000 trials we have 50000 'favorable results'... what is the 'error' if we estimate the probability as $p=\frac{50000}{80000}$?...

Although not 'rigorously expressed' that is a 'top probability problem' ...

The probability of k 'successes' in n trials is given by...

$\displaystyle P_{n,k} = \binom{n}{k}\ p^{k}\ (1-p)^{n-k}$ (1)

... and for n 'large enough', according to the De Moivre-Laplace theorem, it is...

$\displaystyle P_{n,k} \sim \frac{1}{\sqrt {2\ \pi\ n\ p\ (1-p)}}\ e^{- \frac{(k-n\ p)^{2}}{2\ n\ p\ (1-p)}}$ (2)

If we consider the r.v. $X= \frac{k}{n} -p$, then for n 'large enough' (2) shows that X is normally distributed with $\mu=0$ and $\sigma^{2}= \frac{p\ (1-p)}{n}$. With the estimate $p=\frac{5}{8}$ and $n=80000$, the standard deviation is $\sigma \approx .0017$, which quantifies the 'error' of the estimate...
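The numbers, as a plain Python sketch...

```python
# The 'error' as a standard error of the estimated probability: a sketch.
import math

n, k = 80_000, 50_000
p = k / n                                         # 0.625
sigma = math.sqrt(p * (1 - p) / n)                # ~0.0017
print(p, sigma, (p - 2 * sigma, p + 2 * sigma))   # rough 95% interval
```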

Kind regards

$\chi$ $\sigma$
 
  • #73
Posted on 07 05 2012 on www.artofproblemsolving.com by the member tian_275461 and not yet solved…

Indicating with $\{*\}$ the function 'fractional part of', prove that $\displaystyle \{ \frac {e^{n}}{2\ \pi} \}$ is uniformly distributed in [0,1)

Kind regards

$\chi$ $\sigma$
 
  • #74
chisigma said:
Posted on 07 05 2012 on www.artofproblemsolving.com by the member tian_275461 and not yet solved…

Indicating with $\{*\}$ the function 'fractional part of', prove that $\displaystyle \{ \frac {e^{n}}{2\ \pi} \}$ is uniformly distributed in [0,1)



This is probably the most 'difficult' problem posted in this section till now. According to Weyl's criterion, a sequence $a_{n}$ is uniformly distributed modulo 1 if and only if for any integer $k \ne 0$...

$\displaystyle \lim_{m \rightarrow \infty} \frac{1}{m}\ \sum_{n=1}^{m} e^{2\ \pi\ i\ k\ a_{n}} =0$ (1)

On the basis of (1) Weyl demonstrated that if the sequence is given by a polynomial p(n) with at least one irrational non-constant coefficient, then the sequence is uniformly distributed modulo 1. If that held also for the 'polynomial with infinitely many terms'...

$\displaystyle \frac{e^{n}}{2\ \pi} = \sum_{k=0}^{\infty} \frac{n^{k}}{2\ \pi\ k!}$ (2)

... then the sequence $\displaystyle \frac{e^{n}}{2\ \pi}$ would be uniformly distributed modulo 1. A demonstration of that, however, doesn't seem comfortable...
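No proof, but an empirical look is possible [a sketch in Python, assuming the mpmath library; the precision and N are arbitrary choices]. Double precision is useless here, since $e^{n}$ retains no correct fractional digits beyond $n \approx 35$...

```python
# Empirical histogram of the fractional parts {e^n / (2 pi)}: a sketch.
# High precision is essential, hence mpmath with ~2000 decimal digits.
from mpmath import mp, exp, pi, frac

mp.dps = 2000
N = 1000
parts = [float(frac(exp(n) / (2 * pi))) for n in range(1, N + 1)]

bins = [0] * 10          # crude histogram; uniformity means ~N/10 per bin
for t in parts:
    bins[min(int(10 * t), 9)] += 1
print(bins)
```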

Kind regards

$\chi$ $\sigma$
 
  • #75
Posted on 07 12 2012 on www.talkstat.com by the member kzeldon and not yet solved…

Working a problem from a text that asks if you have n balls and place them in n bins, what is the probability that exactly one bin remains empty. The text gives the answer as C(n 2)*n!/n^n. I wish to understand where this expression came from...

Kind regards

$\chi$ $\sigma$
 
  • #76
chisigma said:
Posted on 07 12 2012 on www.talkstat.com by the member kzeldon and not yet solved…

Working a problem from a text that asks if you have n balls and place them in n bins, what is the probability that exactly one bin remains empty. The text gives the answer as C(n 2)*n!/n^n. I wish to understand where this expression came from...

The solution requires the concept of multinomial distribution, illustrated for example in...

http://mathworld.wolfram.com/MultinomialDistribution.html

In general, if we have a set of n r.v.'s $\displaystyle X_{1},X_{2},...,X_{n}$ with $\displaystyle \sum_{k=1}^{n} X_{k}=N$, the r.v.'s are multinomially distributed if their probability function is...

$\displaystyle P\{X_{1}=x_{1},X_{2}=x_{2},...,X_{n}=x_{n}\}= N!\ \prod_{k=1}^{n} \frac{\theta_{k}^{x_{k}}}{x_{k}!}$ (1)

... where all the $\theta_{k}$ are positive and $\displaystyle \sum_{k=1}^{n} \theta_{k}=1$. In our case $\displaystyle N=n$, $\displaystyle \theta_{k}=\frac{1}{n}$, and let's start with the assignment $\displaystyle x_{1}=0,\ x_{2}=2,\ x_{3}=x_{4}=...=x_{n}=1$. From (1) we obtain...

$\displaystyle P\{X_{1}=0, X_{2}=2, X_{3}=X_{4}=...=X_{n}=1\} = \frac{n!}{2\ n^{n}}$ (2)

Since there are $\displaystyle n\ (n-1)$ assignments satisfying the same condition [n choices for the empty bin and n-1 for the bin receiving two balls], the probability that exactly one bin remains empty is...

$\displaystyle P= \frac{n\ (n-1)\ n!}{2\ n^n} = \binom{n}{2}\ \frac{n!}{n^{n}}$ (3)
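A Monte Carlo check for a small n [a sketch in Python, assuming numpy; n = 6 is an arbitrary test value]...

```python
# Monte Carlo check of P = C(n,2) * n! / n^n: a sketch with numpy.
import math
import numpy as np

rng = np.random.default_rng(7)
n, trials = 6, 10**5
bins_of_balls = rng.integers(0, n, size=(trials, n))   # bin of each ball
occupied = np.array([len(set(row)) for row in bins_of_balls])
p_hat = (occupied == n - 1).mean()                     # exactly one bin empty
p_exact = math.comb(n, 2) * math.factorial(n) / n**n
print(p_hat, p_exact)                                  # both ~0.2315
```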

Kind regards

$\chi$ $\sigma$
 
  • #77
Posted on 07 11 2012 on www.talkstat.com by the member wuid and not yet properly solved…

In a gambling game, you can win 1 dollar in each round with probability 0.6 or lose 2 dollars with probability 0.4, and suppose you start with 100 dollars. Find the probability that after 10 rounds you have between 93 and 107 dollars...

Kind regards

$\chi$ $\sigma$
 
  • #78
chisigma said:
Posted on 07 11 2012 on www.talkstat.com by the member wuid and not yet properly solved…

In a gambling game, you can win 1 dollar in each round with probability 0.6 or lose 2 dollars with probability 0.4, and suppose you start with 100 dollars. Find the probability that after 10 rounds you have between 93 and 107 dollars...

Setting u the number of 'up jumps' and d the number of 'down jumps', with $u+d=10$ the final amount is $100+\Delta$ dollars, where $\displaystyle \Delta=u-2\ d$; the condition $93 \le 100+\Delta \le 107$ means $-7 \le \Delta \le 7$, and we have the following possible cases...

$\displaystyle \Delta=-5 \implies u=5,\ d=5$

$\displaystyle \Delta=-2 \implies u=6,\ d=4$

$\displaystyle \Delta=1 \implies u=7,\ d=3$

$\displaystyle \Delta=4 \implies u=8,\ d=2$

$\displaystyle \Delta=7 \implies u=9,\ d=1$ (1)

From (1) we derive that the requested probability is...

$\displaystyle P= \sum_{u=5}^{9} \binom {10}{u}\ p^{u}\ (1-p)^{10-u}$ (2)

... where $p=.6$ is the probability of an 'up jump'. The effective computation of (2) can be made with a simple calculator and is left to the reader...
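... or with a two-line Python sketch [using math.comb, available since Python 3.8]...

```python
# Explicit value of (2): a sketch.
import math

p = 0.6
P = sum(math.comb(10, u) * p**u * (1 - p)**(10 - u) for u in range(5, 10))
print(P)   # ~0.8278
```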

Kind regards

$\chi$ $\sigma$
 
  • #79

Attachments

  • r. v. problem.png [statement of the problem solved in the following posts; image not reproduced here]
  • #80
chisigma said:
Posted on 07 20 2012 on www.talkstats.com by the user goodmorningda and not yet solved...

https://www.physicsforums.com/attachments/274

a) Mean and variance are computed in standard fashion...

$\displaystyle E\{X\}= \mu = \int_{0}^{1} 2\ x\ (1-x)\ dx = \frac{1}{3}$ (1)

$\displaystyle E\{(X-\mu)^{2}\}= \sigma^{2}= \int_{0}^{1} 2\ (x-\frac{1}{3})^{2}\ (1-x)\ dx= \frac{1}{18} \implies \sigma=\frac{1}{3\sqrt{2}}$ (2)

... so that $\sigma = \frac{\mu}{\sqrt{2}}$...

b) the Laplace Transform of the p.d.f. of X is...

$\displaystyle \mathcal{L} \{f(x)\} = F(s)= \int_{0}^{1} 2\ (1-x)\ e^{- s\ x}\ dx = 2\ \frac{s-1+ e^{-s}}{s^{2}}$ (3)

... so that the Laplace Transform of the p.d.f. g(x) of the r.v. Y is...

$\displaystyle \mathcal{L} \{g(x)\} = F^{2}(s)= 4\ \{(\frac{1}{s^{2}} - \frac{2}{s^{3}} + \frac{1}{s^{4}}) + (\frac{2}{s^{3}} -\frac{2}{s^{4}})\ e^{-s} + \frac{1}{s^{4}}\ e^{-2\ s}\}$ (4)

... and g(x) is found by computing the inverse Laplace Transform of (4)...

$\displaystyle g(x)= \mathcal{L}^{-1} \{F^{2}(s)\}= \begin{cases}4\ (x - x^{2} + \frac{1}{6}\ x^{3}) &\ \text{if }\ 0<x<1\\ 4\ (\frac{4}{3} -2\ x + x^{2} -\frac{1}{6}\ x^{3}) &\text {if } 1<x<2\\ 0 &\text{otherwise}\end{cases}$ (5)

d) from (5) we derive quickly...

$\displaystyle P\{1<Y<2\}= 1-P\{0<Y<1\} = 1- 4\ \int_{0}^{1} (x-x^{2}+ \frac{x^{3}}{6})\ dx = 1-\frac{5}{6} = \frac{1}{6}$ (6)
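A Monte Carlo check of the results so far [a sketch in Python, assuming numpy; note that $f(x)=2\ (1-x)$ on (0,1) is the Beta(1,2) density]...

```python
# Monte Carlo check of E{Y} and of (6): a sketch with numpy.
# f(x) = 2(1-x) on (0,1) is Beta(1,2), so Y is a sum of two Beta(1,2) r.v.'s.
import numpy as np

rng = np.random.default_rng(8)
y = rng.beta(1, 2, 10**6) + rng.beta(1, 2, 10**6)
print(y.mean(), (y > 1).mean())   # ~2/3 and ~1/6
```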

The points c) and e) will be attacked in a successive post...

Kind regards

$\chi$ $\sigma$
 
  • #81
chisigma said:
Posted on 07 20 2012 on www.talkstats.com by the user goodmorningda and not yet solved...

https://www.physicsforums.com/attachments/274

The solution of point c) requires the formulas that the French mathematician Irénée-Jules Bienaymé found about a hundred and fifty years ago for the mean and variance of the r.v. $\displaystyle X= \sum_{k=1}^{n} X_{k}$, where the $X_{k}$ are a set of n independent r.v.'s with mean value $\mu_{k}=E\{X_{k}\}$ and variance $\sigma^{2}_{k}= E\{(X_{k}-\mu_{k})^{2}\}$...

$\displaystyle E\{X\}=\mu = \sum_{k=1}^{n} \mu_{k}$

$\displaystyle E\{(X-\mu)^{2}\}=\sigma^{2} = \sum_{k=1}^{n} \sigma^{2}_{k}$ (1)

In our case $\displaystyle n=2$, $\displaystyle \mu_{1}=\mu_{2}= \frac{1}{3}$ and $\displaystyle \sigma^{2}_{1}=\sigma^{2}_{2}= \frac{1}{18}$, so that it should be $\displaystyle \mu=\frac{2}{3}$ and $\displaystyle \sigma^{2}= \frac{1}{9} \implies \sigma=\frac{1}{3}$. As a check, we now verify these results directly, using the p.d.f. of X we found in the last post…

$\displaystyle \mu= 4\ \{\int_{0}^{1} x\ (x-x^{2}+ \frac{1}{6}\ x^{3})\ dx + \int_{1}^{2} x\ (\frac{4}{3} -2\ x + x^{2} -\frac{1}{6}\ x^{3})\ dx \} = \frac{28}{60} + \frac{4}{20}= \frac{2}{3}$ (2)

$\displaystyle \sigma^{2}= 4\ \{\int_{0}^{1} (x-\frac{2}{3})^{2}\ (x-x^{2}+ \frac{1}{6}\ x^{3})\ dx + \int_{1}^{2} (x-\frac{2}{3})^{2}\ (\frac{4}{3} -2\ x + x^{2} -\frac{1}{6}\ x^{3})\ dx \} = \frac{8}{135} + \frac{28}{540}= \frac{1}{9}$ (3)
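The integrals (2) and (3) can also be verified symbolically [a sketch in Python, assuming the sympy library]...

```python
# Symbolic verification of (2) and (3): a sketch with sympy.
import sympy as sp

x = sp.symbols('x')
g1 = 4 * (x - x**2 + x**3 / 6)                           # g(x), 0 < x < 1
g2 = 4 * (sp.Rational(4, 3) - 2 * x + x**2 - x**3 / 6)   # g(x), 1 < x < 2

mu = sp.integrate(x * g1, (x, 0, 1)) + sp.integrate(x * g2, (x, 1, 2))
var = (sp.integrate((x - mu)**2 * g1, (x, 0, 1))
       + sp.integrate((x - mu)**2 * g2, (x, 1, 2)))
print(mu, var)   # 2/3, 1/9
```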

The solution of e) is also very interesting and it will be made in a successive post...

Kind regards

$\chi$ $\sigma$

 
  • #82
chisigma said:
Posted on 07 20 2012 on www.talkstats.com by the user goodmorningda and not yet solved...

https://www.physicsforums.com/attachments/274

The solution of the last question implies the use of the so-called 'Central Limit Theorem'. Formulated in the years 1920-1930, the CLT has an enormous number of 'daddies', but in my [very modest] opinion it is the natural development of Bienaymé's result of the previous century. In simple words the CLT establishes that, given n independent r.v.'s $\displaystyle X_{1},\ X_{2},..., X_{n}$, each with an arbitrary p.d.f. with mean $\displaystyle \mu_{k}$ and variance $\displaystyle \sigma^{2}_{k}$, the r.v. $\displaystyle X= \sum_{k=1}^{n} X_{k}$ for n 'large enough' has a p.d.f. that approaches ...

$\displaystyle f(x)= \frac{1}{\sigma \sqrt{2\ \pi}}\ e^{-\frac{(x-\mu)^{2}}{2\ \sigma^{2}}}$ (1)

... where...

$\displaystyle \mu=\sum_{k=1}^{n} \mu_{k}$

$\displaystyle \sigma^{2}= \sum_{k=1}^{n} \sigma^{2}_{k}$ (2)

In our case n=100 [we suppose it 'large enough'...], $\displaystyle \mu_{k}= \frac{1}{3}$, $\displaystyle \sigma^{2}_{k}= \frac{1}{18}$, so that $\displaystyle \mu=\frac{100}{3}$ and $\displaystyle \sigma^{2}= \frac{100}{18}$. Now the probability $\displaystyle P\ \{X>x\}$ is given by...

$\displaystyle P\ \{X>x\}= \frac{1}{2}\ \text{erfc}\ (\frac{z}{\sqrt{2}})$ (3)

... where $\displaystyle z=\frac{x-\mu}{\sigma}$. For x=34 this gives...

$\displaystyle P\ \{X>34\}= \frac{1}{2}\ \text{erfc}\ (\frac{1}{5})= .388649... $ (4)
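A numeric check of (4) [a sketch in Python, assuming scipy]...

```python
# Numeric check of (4): a sketch with scipy.
import math
from scipy.special import erfc

mu, sigma = 100 / 3, math.sqrt(100 / 18)
z = (34 - mu) / sigma                    # = sqrt(2)/5
print(0.5 * erfc(z / math.sqrt(2)))      # ~0.388649
```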

Kind regards

$\chi$ $\sigma$
 
  • #83
chisigma said:
The solution of the last question implies the use of the so-called 'Central Limit Theorem'. Formulated in the years 1920-1930, the CLT has an enormous number of 'daddies'

1820-30's by Laplace and Poisson.

CB
 
  • #84
CaptainBlack said:
1820-30's by Laplace and Poisson.

CB

A good description of the historical course of the CLT is the following...

http://www.sal.tkk.fi/vanhat_sivut/Opinnot/Mat-2.108/pdf-files/emet03.pdf

As a curious tale I can add what I read in an Alan Turing biography written by Andrew Hodges: a 'proof' of the CLT was the subject of Alan Turing's 1934 Fellowship Dissertation for King's College at the University of Cambridge. Only after submitting the work did Turing learn it had already been proved in 1922 by Jarl Waldemar Lindeberg, and consequently Turing's dissertation was never published...

Kind regards

$\chi$ $\sigma$
 
  • #85
Posted on 07 26 2012 on www.mathhelpforum.com by the member salohcin and not yet solved…

Suppose you have some random variable X that is distributed according to a Pareto distribution...

$\displaystyle f(x)=\begin{cases} k\ \frac{m^{k}}{x^{k-1}} &\text{if}\ x>m\\ 0 &\text{if}\ x<m \end{cases}$

... and then you have a transformation $Y = c\ X$ where c is a constant. I want to find out the standard deviation of the natural log of Y. I have been trying for several hours but have had no success...

Kind regards

$\chi$ $\sigma$
 
  • #86
chisigma said:
Posted on 07 26 2012 on www.mathhelpforum.com by the member salohcin and not yet solved…

Suppose you have some random variable X that is distributed according to a Pareto distribution...

$\displaystyle f(x)=\begin{cases} k\ \frac{m^{k}}{x^{k-1}} &\text{if}\ x>m\\ 0 &\text{if}\ x<m \end{cases}$

... and then you have a transformation $Y = c\ X$ where c is a constant. I want to find out the standard deviation of the natural log of Y. I have been trying for several hours but have had no success...



I apologize to the community of MHB: the 'Pareto distribution' I posted is wrong. The correct expression is...

$\displaystyle f(x)=\begin{cases} k\ \frac{m^{k}}{x^{k+1}} &\text{if}\ x>m\\ 0 &\text{if}\ x<m \end{cases}$ (1)

Very sorry! ...

Kind regards

$\chi$ $\sigma$
 
  • #87
chisigma said:
I apologize to the community of MHB: the 'Pareto distribution' I posted is wrong. The correct expression is...

$\displaystyle f(x)=\begin{cases} k\ \frac{m^{k}}{x^{k+1}} &\text{if}\ x>m\\ 0 &\text{if}\ x<m \end{cases}$ (1)

Very sorry! ...

Now that we have the effective 'Pareto distribution' we are able to solve the question. Because $\displaystyle \ln (c\ X)= \ln X + \ln c$ and only the variance is required, we can set c=1 and operate on the r.v. $\displaystyle Z= \ln X$. First we compute the probability...

$\displaystyle P \{\ln m <Z< z \} = P \{m< X< e^{z} \} = \int_{m}^{e^{z}} k\ \frac{m^{k}}{x^{k+1}}\ dx=$
$\displaystyle = \begin{cases} 1-(m\ e^{-z})^{k} &\text{if}\ z> \ln m \\ 0 &\text{if}\ z< \ln m \end{cases}$ (1)

The p.d.f. of the r.v. $\ln X$ is found by differentiating (1)...

$\displaystyle g(z) = \begin{cases} k\ m^{k}\ e^{-k\ z} &\text{if}\ z> \ln m \\ 0 &\text{if}\ z< \ln m \end{cases}$ (2)

The mean value of $\ln X$ is...

$\displaystyle E \{ \ln X\} = \mu = \int_{\ln m}^{\infty} z\ g(z)\ dz= \frac{1}{k} + \ln m $ (3)

... and the variance ...

$\displaystyle \text{Var}\ \{\ln X \}= \sigma^{2}= \int_{\ln m}^{\infty} (z-\mu)^{2}\ g(z)\ dz= \frac{1}{k^{2}}$ (4)

... so that the final result is [and that is not a surprise...] quite simple: the standard deviation of $\ln Y$ is $\frac{1}{k}$, independent of both m and c...
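A Monte Carlo check [a sketch in Python, assuming numpy; the values of k, m, c are arbitrary test choices; note that numpy's rng.pareto samples the Lomax form, so the classical Pareto with minimum m is obtained as m*(1 + pareto(k))]...

```python
# Monte Carlo check of Var{ln(cX)} = 1/k^2: a sketch with numpy.
import numpy as np

rng = np.random.default_rng(9)
k, m, c = 2.5, 3.0, 7.0                # arbitrary test values
x = m * (1 + rng.pareto(k, 10**6))     # classical Pareto, minimum m
print(np.log(c * x).var(), 1 / k**2)   # both ~0.16; c plays no role
```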

Kind regards

$\chi$ $\sigma$
 
  • #88
Posted on 07 29 2012 on www.matematicamente.it by Martessa [original in Italian…] and not yet solved…

Alice writes on a paper an integer between 1 and n and asks Bob to guess it. Bob can try repeatedly until he guesses the number. What is the expected number of attempts Bob needs to succeed?...

Kind regards

$\chi$ $\sigma$
 
  • #89
chisigma said:
Posted on 07 29 2012 on www.matematicamente.it by Martessa [original in Italian…] and not yet solved…

Alice writes on a paper an integer between 1 and n and asks Bob to guess it. Bob can try repeatedly until he guesses the number. What is the expected number of attempts Bob needs to succeed?...

The probability that Bob guesses the number at the k-th trial [assuming, of course, that he never repeats a guess] is...

$\displaystyle P_{k,n}= \frac{n-1}{n}\ \frac{n-2}{n-1}\ ...\ \frac{1}{n-k+1}= \frac{1}{n}$ (1)

... so that the expected number of trials is...

$\displaystyle E\{k\}= \frac{1}{n}\ \sum_{k=1}^{n} k = \frac{n+1}{2}$ (2)

Very easy!...
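... and easy to simulate as well [a sketch in Python, assuming numpy; n is an arbitrary test value]...

```python
# Simulation of the guessing game: a sketch with numpy.
import numpy as np

rng = np.random.default_rng(10)
n, trials = 20, 10**4
total = 0
for _ in range(trials):
    secret = rng.integers(1, n + 1)
    guesses = rng.permutation(n) + 1                 # Bob never repeats
    total += int(np.argmax(guesses == secret)) + 1   # attempts needed
print(total / trials, (n + 1) / 2)                   # both ~10.5
```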

Kind regards

$\chi$ $\sigma$
 
  • #90
chisigma said:
Posted on 07 26 2012 on www.mathhelpforum.com by the member salohcin and not yet solved…

Suppose you have some random variable X that is distributed according to a Pareto distribution...

$\displaystyle f(x)=\begin{cases} k\ \frac{m^{k}}{x^{k-1}} &\text{if}\ x>m\\ 0 &\text{if}\ x<m \end{cases}$

... and then you have a transformation $Y = c\ X$ where c is a constant. I want to find out the standard deviation of the natural log of Y. I have been trying for several hours but have had no success...



This thread, which I really hope has been appreciated by you, has now grown too large, so it will be closed and a new thread with the same scope will be opened. Before doing that, however, I would like to spend a few words on the figure of the Italian engineer, sociologist, economist, political scientist and philosopher Vilfredo Pareto...

[Attachment: Vilfredo_Pareto.jpg — portrait of Vilfredo Pareto]

Pareto was born in Paris in 1848 to a noble Genoese family, and in 1870 he earned a degree in civil engineering from the Polytechnic of Turin. For some years he worked for the Italian railway company, and afterwards he decided to dedicate his interest to economics. In my opinion his greatest merit is to have been one of the first to use mathematical instruments in economic studies. His name is tied to the Pareto principle [known as the '80-20 principle'...], according to which the most stable economic state of a country, independently of historical and geographical contexts, is the one where 80% of the wealth is owned by 20% of the population. How relevant Pareto's thought still is can be understood by considering that the present global state of the economy is near 90-10, something that, according to Pareto, will sooner or later produce explosive consequences...

Kind regards

$\chi$ $\sigma$
 

  • #91
Time for part II of this thread. :)

A big thanks to chisigma for all the time he has put into this useful collection of problems and solutions. I don't want this thread to disappear because it's closed so hopefully we can figure out a way to showcase it and other outstanding threads permanently.
 
