A specific combination problem

  • #26
StoneTemplePython
Thanks a lot, I think I get what you mean with this. You're basically transforming the distribution so that you can show that the standard deviation would always stay within the boundaries of ##a## and ##b## if I understand correctly.

I've got one question. I've tried to test this summation using summation calculators from several websites, but they all give answers that deviate from 1. This is what I'm trying to calculate:
$$\lim_{n \rightarrow \infty} \sum_{k=a \cdot n}^{b\cdot n} \frac{n!}{k!\cdot (n-k)!} \cdot \frac{1}{2^n}$$
Here, ##a## and ##b## are any fractions of ##n##, equally far from the mean (##0.5n##), with ##b > a##. Shouldn't this limit give the answer ##1## again, regardless of how wide or narrow the range from ##a## to ##b## is?
Or the flip side -- I centered and normalized so that you always have zero mean and 1 std deviation on the normal random variable. Note: the process of converting it to zero mean changes the interpretation of a and b a little bit so I went back and edited my post -- a belated thought on my end.

So long as ##a## is less than the mean and ##b## is greater than the mean, and you scale them both by ##n##, then in principle it should tend to one. The series you are evaluating doesn't have a direct closed form, though, so I have no idea how it's being evaluated by whatever computer algebra systems those websites use. Also, the form you are using isn't the zero-mean one, which could create some challenges.

There are basically two ways to evaluate it: 1.) think more abstractly and learn your limit laws in probability; 2.) if you code, run a bunch of simulation trials, each one tossing a coin a very large number ##n## of times -- the fraction of trials whose heads count lands in that range should be awfully close to one.
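If it helps, here's a rough sketch of option 2 in Python (the values of ##a##, ##b## and the trial counts below are just example choices on my part, nothing canonical):

```python
import random

def prob_in_range(n, trials, a=0.4, b=0.6):
    """Estimate P(a*n <= #heads <= b*n) for n tosses of a fair coin."""
    hits = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(n))
        if a * n <= heads <= b * n:
            hits += 1
    return hits / trials

for n in (10, 100, 1_000, 10_000):
    print(n, prob_in_range(n, trials=2_000))
# For any fixed a < 0.5 < b, the printed estimate creeps toward 1 as n grows.
```

Each run is just the experiment described above; the only point is that the estimated probability stabilizes near 1 once ##n## is large.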
 
  • #27
JohnnyGui
Or the flip side -- I centered and normalized so that you always have zero mean and 1 std deviation on the normal random variable. Note: the process of converting it to zero mean changes the interpretation of a and b a little bit so I went back and edited my post -- a belated thought on my end.

So long as ##a## is less than the mean and ##b## is greater than the mean, and you scale them both by ##n##, then in principle it should tend to one. The series you are evaluating doesn't have a direct closed form, though, so I have no idea how it's being evaluated by whatever computer algebra systems those websites use. Also, the form you are using isn't the zero-mean one, which could create some challenges.

There are basically two ways to evaluate it: 1.) think more abstractly and learn your limit laws in probability; 2.) if you code, run a bunch of simulation trials, each one tossing a coin a very large number ##n## of times -- the fraction of trials whose heads count lands in that range should be awfully close to one.
This got me wondering how one actually deduces the individual chance of, for example, heads or tails being 50% each. Apparently, it is not based on the chance of throwing an exactly equal number of heads and tails, since that chance actually declines as ##n## increases.
So are the individual chances of heads or tails determined by the mean of a range whose probability approaches 1 as ##n## increases?
 
  • #28
JohnnyGui
Scratch what I asked above; I was confused for a bit. I was thinking about the ratio between the number of outcomes in a very narrow percentage range (49%-51%, for example) and the number of outcomes in a wider percentage range (40%-60%) for a particular value of ##n##. Is it correct that the following fraction:
$$\frac{\sum_{k=0.49n}^{0.51n} \frac{n!}{k!\cdot (n-k)!}}{\sum_{k=0.4n}^{0.6n} \frac{n!}{k!\cdot (n-k)!}}$$
also tends to 1 as ##n## increases? Which would mean that, with increasing ##n##, the outcomes within the same narrow range contribute an ever larger share of the outcomes within the wider percentage range?
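As a quick sanity check of what I mean, here is a small sketch using Python's math.comb (rounding the range endpoints to integers is my own choice):

```python
from math import comb

def range_count(n, lo, hi):
    """Sum of C(n, k) over lo*n <= k <= hi*n (endpoints rounded)."""
    return sum(comb(n, k) for k in range(round(lo * n), round(hi * n) + 1))

for n in (100, 1_000, 10_000):
    ratio = range_count(n, 0.49, 0.51) / range_count(n, 0.40, 0.60)
    print(n, ratio)
# The ratio climbs toward 1 with n: the narrow 49%-51% band eventually
# contains almost all of the outcomes that the wider 40%-60% band does.
```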
 
  • #29
JohnnyGui
@StoneTemplePython :

There is something else I was quite surprised by.

When you do a lot of trials, say ##T## of them, each trial consisting of flipping a coin ##n## times where ##n## is a relatively low number (like 10), the most frequent trial outcome would be an equal number of heads and tails among its ##n## throws, which is what I expect. From this, you would deduce that the chance for heads and tails is 50% each. This whole method can be considered as one large trial consisting of flipping the coin ##T \cdot n## times.

However, something peculiar arises when the chance for heads differs from that for tails, for example 0.3 for heads and 0.7 for tails. When you do a lot of trials, each with the same relatively low number ##n## of throws, then no matter how many trials you do, the most frequent trial would have 27.93% heads among its ##n## throws. You would therefore erroneously deduce that the chance for heads is 27.93% instead of 30%. The part that I don't understand is that when you consider this whole method as one trial consisting of ##T \cdot n## throws, the number of heads among all those throws together would indeed be nearly 30%.

I do understand this mathematically, but I can't seem to put it in terms of logical thinking. Why can you accurately deduce the individual chance of two outcomes by doing many trials, each consisting of a low number ##n## of throws, only if the chance of each outcome is 50%? Whereas when the chances differ, doing many trials, each consisting of the same low number ##n## of throws, shows a less accurate individual chance?
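To make the set-up concrete, here is a rough simulation sketch of what I described (the number of trials ##T## is just an example value of mine):

```python
import random
from collections import Counter

p, n, T = 0.3, 10, 100_000   # chance of heads, throws per trial, number of trials

per_trial_counts = Counter()
total_heads = 0
for _ in range(T):
    heads = sum(random.random() < p for _ in range(n))
    per_trial_counts[heads] += 1
    total_heads += heads

mode = per_trial_counts.most_common(1)[0][0]
print("most frequent heads count in a trial:", mode, "->", mode / n)
print("pooled fraction over all T*n throws:", total_heads / (T * n))
# Compare the most frequent per-trial heads fraction with the pooled
# fraction of heads over all T*n throws together.
```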
 
  • #30
StoneTemplePython
I do understand this mathematically, but I can't seem to put it in terms of logical thinking. Why can you accurately deduce the individual chance of two outcomes by doing many trials, each consisting of a low number ##n## of throws, only if the chance of each outcome is 50%? Whereas when the chances differ, doing many trials, each consisting of the same low number ##n## of throws, shows a less accurate individual chance?
This is pretty far astray from the original post that this thread is under. I.e., your original question was asked and answered. Some follow-ups, also asked and answered. Now you have a question about inference -- this requires a new thread, at a minimum. Your line of thinking here doesn't make sense to me. With a large enough number of tosses we should be able to estimate the probability of heads to any (in the real world, reasonable) degree of precision. There are a lot of different approaches and ways to frame the inference problem.

Personally, I think you need to study probability theory first, then revisit these questions in a few months.
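If you do want to see that last point concretely, here is a minimal sketch of the plain frequency estimate (the true ##p = 0.3## below is just an example value):

```python
import random

def estimate_p(n, p_true=0.3):
    """Frequency estimate of P(heads) from a single run of n tosses."""
    heads = sum(random.random() < p_true for _ in range(n))
    return heads / n

for n in (100, 10_000, 1_000_000):
    print(n, estimate_p(n))
# The estimate tightens around the true value as n grows; its standard
# error is roughly sqrt(p*(1 - p)/n).
```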
 
  • #31
WWGD
Hello,

I have been trying to solve this problem but I can't seem to find a way.

Given are ##n## cards and each card can show one of two values: M or K.

How many possible arrangements are there in which there are as many cards showing M as there are showing K, given that ##n## is an even number of cards?

Is it possible to derive a formula for this as a function of ##n##? How does one deduce this?
This way makes sense to me (a rephrasing of someone else's answer, I think Dr Claude's): assume you need to go from point A to point B along a grid, where you must go, say, north (M) ##j## times and east (K) ##j## times in order to arrive at B, i.e. ##n = 2j##. How many ways can you do this trip? Once you choose the ##j## places where you go north (or east), the rest of the trip is fully determined.
This gives you a way of counting the paths where the number of M's equals the number of K's. The total number of paths is straightforward.
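Spelled out: of the ##n = 2j## steps you only need to choose which ##j## of them are the north (M) steps, so
$$\#\{\text{paths with } M = K\} = \binom{2j}{j} = \binom{n}{n/2} = \frac{n!}{\left(\frac{n}{2}\right)!\,\left(\frac{n}{2}\right)!},$$
while the total number of unrestricted arrangements of the ##n## cards is ##2^n##.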
 
  • #32
JohnnyGui
This is pretty far astray from the original post that this thread is under. I.e., your original question was asked and answered. Some follow-ups, also asked and answered. Now you have a question about inference -- this requires a new thread, at a minimum. Your line of thinking here doesn't make sense to me. With a large enough number of tosses we should be able to estimate the probability of heads to any (in the real world, reasonable) degree of precision. There are a lot of different approaches and ways to frame the inference problem.

Personally, I think you need to study probability theory first, then revisit these questions in a few months.
I was talking about the case where that large number of tosses is divided into small groups of tosses, each group being a trial, and how one can interpret those small trials to deduce the individual chance.
It was my intention to ask the question in my OP as a base leading up to this question. Making a new thread for every question I have about this would seem inefficient to me. I eventually figured it out though, so never mind.

This way makes sense to me (a rephrasing of someone else's answer, I think Dr Claude's): assume you need to go from point A to point B along a grid, where you must go, say, north (M) ##j## times and east (K) ##j## times in order to arrive at B, i.e. ##n = 2j##. How many ways can you do this trip? Once you choose the ##j## places where you go north (or east), the rest of the trip is fully determined.
This gives you a way of counting the paths where the number of M's equals the number of K's. The total number of paths is straightforward.
This is a creative way to think about it. It helped me deduce the same formula again. Thanks!
 
