Interesting Feynman statement about probability.

AI Thread Summary
The discussion centers on Richard Feynman's treatment of probability in his lectures, particularly the binomial distribution and the random walk problem. The key point is Feynman's assertion that the net distance traveled (D) in a random walk inherits its distribution from the number of heads (N_H) in coin tosses, both of which arise from dichotomous trials. There is a debate about whether D and the number of heads (k) share the same distribution solely because of their linear relationship, with some arguing that this connection is not immediately obvious. The conversation also touches on how transformations of random variables affect their distributions, and it ultimately emphasizes understanding the relationship between these variables in the context of probability theory.
Portuga
1. Hello, gentlemen! I've been studying the Feynman Lectures, and I came across an interesting question while reading chapter 6, about probability.
In this chapter, Feynman derives the binomial distribution, and I am fine with that.
But in the third section, when he addresses the random walk problem, he states something that was very hard for me to understand.
Let me try to explain.
He deals with two variables when solving the binomial distribution problem: n, the number of tosses of a "fair" coin, and k, the number of heads thrown. In the end, he gets:
P(k,n)=\frac{\binom{n}{k}}{2^n}.
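(For readers who want to check this formula numerically, here is a minimal Python sketch; the function name `binom_pmf` and the choice n = 10 are mine, not from the lectures.)

```python
from math import comb

def binom_pmf(k: int, n: int) -> float:
    """P(k, n) = C(n, k) / 2^n: probability of k heads in n fair tosses."""
    return comb(n, k) / 2**n

n = 10
probs = [binom_pmf(k, n) for k in range(n + 1)]
print(probs[5])    # P(5 heads in 10 tosses) = 252/1024 = 0.24609375
print(sum(probs))  # sanity check: the probabilities sum to 1.0
```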

Ok, I got the point.
But later, he starts to solve the random walk problem, and introduces another variable, D, which is the net distance traveled in N steps.
Ok.
Establishing the relation to the binomial distribution problem: D is just the difference between the number of heads and the number of tails, since a head stands for a forward step and a tail for a backward step.
So,
D = N_H - N_T,
and
N = N_H + N_T.
So,
D = 2N_H - N.
All right. Nothing difficult up to now.
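(As a quick numerical illustration of this relation, a minimal simulation sketch of my own, with arbitrary N = 10 and 200,000 trials: every realization satisfies D = 2N_H − N, and the observed frequencies of D match the binomial probabilities for N_H.)

```python
import random
from collections import Counter
from math import comb

N, TRIALS = 10, 200_000
freq = Counter()
for _ in range(TRIALS):
    n_heads = sum(random.random() < 0.5 for _ in range(N))  # N_H
    d = n_heads - (N - n_heads)                             # D = N_H - N_T
    assert d == 2 * n_heads - N                             # the linear relation above
    freq[d] += 1

for d in sorted(freq):
    expected = comb(N, (d + N) // 2) / 2**N                 # P(N_H = (d+N)/2)
    print(f"D={d:+d}  simulated={freq[d]/TRIALS:.4f}  binomial={expected:.4f}")
```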
But then, comes the magic.
He states that "We have derived earlier an expression for the expected distribution for N_H. Since N is just a constant, we have the corresponding distribution for D."
I got the impression that he is trying to convey the idea that, since D is in a linear relation with N_H, they must have the same distribution.
2. In my opinion, Feynman did not express himself clearly, as he gives the impression that D and k have the same distribution simply because there is a linear relation between them. But this is far from obvious, at least to me. I cannot see why two variables related by a linear relation should have the same probability distribution.
I think D and k have the same distribution because they are both built from dichotomous trials. I would like to hear your advice. Thank you in advance.


P.S.: Sorry for my poor English.
 
If ## y = ax + b ## and ## x ## is a random variable with distribution ## \Phi (x) ##, what does the distribution function of y look like?

A coin toss (outcomes heads or tails) and a lottery ticket (outcomes win or lose) are both dichotomous variables. Do they have the same distribution?
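(For readers following along, the step the first question is pointing at can be written out; this is the standard change-of-variables argument for a discrete random variable, assuming ## a \neq 0 ##:)

$$P(Y = y) = P(aX + b = y) = P\!\left(X = \frac{y - b}{a}\right) = \Phi\!\left(\frac{y - b}{a}\right).$$

The probabilities themselves are unchanged; only the points they sit on are relabeled. (For a continuous variable the density would pick up a factor ## 1/|a| ##.) And the second question answers itself: a fair coin has probability 1/2 per outcome while a lottery ticket does not, so being dichotomous alone does not fix the distribution.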
 
I think I got the point. The distribution doesn't change because y only carries each value of x to another value, so \Phi still has a value for it, am I right? Is this what I should think?
 
The key point (these being discrete distributions) is that there is a one-to-one correspondence between values of D and values of N_H. Consequently we can map the probabilities where they are nonzero: P[D = 2h − N] = P[N_H = h]. But note we also have P[D = 2h − N + 1] = 0 for all integer h, since D always has the same parity as N.
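(A short Python check of this correspondence by exact enumeration; the names and the choice N = 6 are mine. It confirms both P[D = 2h − N] = P[N_H = h] and that values of D with the wrong parity get probability zero.)

```python
from math import comb

N = 6
pmf_heads = {h: comb(N, h) / 2**N for h in range(N + 1)}  # P[N_H = h]

# The map h -> D = 2h - N is one-to-one, so probabilities carry over directly.
pmf_D = {2 * h - N: p for h, p in pmf_heads.items()}

for d in range(-N, N + 1):
    # Values of D whose parity differs from N's (odd d here) never occur: P = 0.
    print(d, pmf_D.get(d, 0.0))
```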
 