My discrete mathematics book gives the following definition for the pigeonhole principle: If m objects are distributed into k containers where m > k, then one container must have more than [itex]\left\lfloor\frac{m-1}{k}\right\rfloor[/itex] objects. It then states as a corollary that the arithmetic mean of a set of numbers must be between the smallest and largest numbers of the set. No proof is given; it pretty much just says "well it's just obvious that this is the case." I agree it is obvious that the arithmetic mean of a set of numbers lies between its smallest and largest values. What isn't obvious to me is how their statement of the pigeonhole principle leads to the corollary. Can anyone help me out?
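To make the book's bound concrete: more than [itex]\left\lfloor\frac{m-1}{k}\right\rfloor[/itex] objects is the same as at least [itex]\left\lceil\frac{m}{k}\right\rceil[/itex] objects. Here is a quick numerical sanity check (illustrative only, not a proof; the helper name is my own):

```python
import math
import random

def pigeonhole_bound(m, k):
    # One container must hold more than floor((m-1)/k) objects,
    # i.e. at least floor((m-1)/k) + 1, which equals ceil(m/k).
    return (m - 1) // k + 1

# Check the bound against random distributions of m objects into k containers.
random.seed(0)
for _ in range(1000):
    k = random.randint(1, 10)
    m = random.randint(k + 1, 100)   # the principle assumes m > k
    counts = [0] * k
    for _ in range(m):
        counts[random.randrange(k)] += 1
    # The fullest container always meets the pigeonhole bound.
    assert max(counts) >= pigeonhole_bound(m, k)
    # The two forms of the bound agree.
    assert pigeonhole_bound(m, k) == math.ceil(m / k)
```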
Hmmm.... well I can at least show that the arithmetic mean is no smaller than the smallest number. Taking the mean of numbers n_{1},...,n_{k} is the same thing as having k containers with n_{i} objects in the ith container, and then redistributing the objects so that each container holds the same number (that common number is the mean). WLOG suppose that n_{1} is the smallest number (otherwise just re-arrange). The total number of objects is m = n_{1}+...+n_{k} ≥ n_{1}k, so by the pigeonhole principle one container (and hence, after the even redistribution, all of them) must have more than floor((n_{1}k-1)/k) = n_{1}-1 objects. Hence each container holds at least n_{1} objects, so the mean is at least n_{1}. If the mean is not an integer, an exactly even split isn't possible; but then give each container floor(m/k) objects (which is below the mean) and set the remainder aside. Since m ≥ n_{1}k and n_{1} is an integer, floor(m/k) ≥ n_{1}, so the mean is still at least n_{1}. A symmetric argument with the largest number bounds the mean from above.
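The redistribution argument above can be sketched numerically (the function name is my own, and this is a check of the inequalities, not a proof):

```python
def mean_at_least_min(counts):
    # counts = [n_1, ..., n_k]: positive integer container counts.
    k = len(counts)
    m = sum(counts)      # total objects pooled from all containers
    lo = min(counts)     # the smallest count, n_1 after re-arranging

    # m >= lo * k, so floor((m-1)/k) >= lo - 1: the pigeonhole bound
    # used in the argument above.
    assert (m - 1) // k >= lo - 1

    # Even when m/k is not an integer, floor(m/k) >= lo still holds,
    # so the mean m/k is at least the smallest number.
    assert m // k >= lo
    return m / k >= lo

# Example: mean of [3, 5, 7] is 5, which is at least min = 3.
assert mean_at_least_min([3, 5, 7])
```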