# Spivak's "Calculus": AM-GM inequality problem.

## Homework Statement

The problem is stated as follows:

"The result in Problem 1-7 has an important generalization: If ##a_1,...,a_n≥0##, then the "arithmetic mean" ##A_n=\frac {a_1+...+a_n} {n}## and "geometric mean" ##G_n=\sqrt[n] {a_1...a_n}## satisfy ##G_n≤A_n##.

Suppose that ##a_1\lt A_n##. Then some ##a_i## satisfies ##a_i\gt A_n##; for convenience, say ##a_2\gt A_n##. Let ##\bar a_1=A_n## and let ##\bar a_2=a_1+a_2-\bar a_1##. Show that ##\bar a_1 \bar a_2≥a_1a_2##.
Why does repeating this process enough times eventually prove that ##G_n≤A_n##? (This is another place where it is a good exercise to provide a formal proof by induction, as well as an informal reason.) When does equality hold in the formula ##G_n≤A_n##?"

## Homework Equations

None that I can think of, except perhaps ##\sqrt {ab} ≤ \frac {a+b} 2##, since the AM-GM inequality is a generalization of it, but I doubt I will actually use it here.

## The Attempt at a Solution

I've proved ##\bar a_1 \bar a_2≥a_1a_2## fairly easily; my problem is with what comes after that. What does "repeating the process" mean? What do I do with the other ##a_i##? I don't know how the "bar" is defined here...

If I replace ##a_1,a_2## with ##\bar a_1,\bar a_2##, I can see that the arithmetic mean doesn't change and the geometric mean grows or stays the same, but that's all I can really deduce since I don't understand the question, so I would love some assistance.

Thanks in advance to all the helpers.

StoneTemplePython:
I know of 10 or so different proofs of ##\text{GM} \leq \text{AM}## and the outline here in Spivak seems to be one of the worst. Do you need to follow this mould? (The underlying idea seems ok but as you've pointed out, he's very light on details...)

If you want an inductive proof, the classical proof involves forward-backward induction. It was done in a close but different setting by @julian yesterday in the Intermediate math challenge problem.

There is a decent walk through here:

https://brilliant.org/wiki/forward-backwards-induction/
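In case that link goes stale, the backward half of the argument is short: assuming the ##n##-term inequality, feed it the ##n## numbers ##a_1, \dots, a_{n-1}, A##, where ##A## is the arithmetic mean of the first ##n-1##:

```latex
% Let A = \frac{a_1 + \cdots + a_{n-1}}{n-1}. Applying the n-term AM-GM to
% the n numbers a_1, \dots, a_{n-1}, A gives
\sqrt[n]{a_1 \cdots a_{n-1}\, A} \;\le\; \frac{a_1 + \cdots + a_{n-1} + A}{n}
  \;=\; \frac{(n-1)A + A}{n} \;=\; A .
% Raising to the n-th power and cancelling one factor of A (the case A = 0
% is trivial) leaves a_1 \cdots a_{n-1} \le A^{n-1}, i.e. G_{n-1} \le A_{n-1}.
```

Combined with the forward (doubling) step, that pushes the result from powers of 2 down to every ##n##.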

> None that I can think of, perhaps except ##\sqrt {ab} ≤ \frac {a+b} 2##...

The underlying idea, actually, is that you repeatedly use this two-variable fact and pad things just right along the way to pull the general ##n##-term form of ##\text{GM} \leq \text{AM}## out of a hat.

Let's narrow the scope a bit -- what exactly do you want to focus on?

My main question for now: can you prove the inequality holds for the ##n = 2## and ##n = 4## cases?
As a hint, for the ##n = 4## case, where you have ##x_1, x_2, x_3, x_4##, consider first proving a smaller problem where ##a := (x_1x_2)^\frac{1}{2}## and ##b := (x_3x_4)^\frac{1}{2}## and apply your 'relevant inequality', then find a way to iterate and apply your 'relevant inequality' once more.

This question is the first part out of three; the second part proves the GM-AM inequality with induction (proving by induction on ##k## that the inequality holds for ##n=2^k##) and the third part is another proof, so I guess Spivak wanted to hit this from multiple angles.
So I want to prove it his way as a matter of principle if nothing else, to see if I can milk out some new mathematical knowledge from the process.

As for your main question, the case for n=2 is the 'relevant inequality' which I have proven in a previous question in the book. As for the n=4 case your hint is basically the answer, since setting ##a=\sqrt {x_1x_2}## and ##b=\sqrt {x_3x_4}## in the 'relevant inequality' gives:
##\sqrt[4] {x_1x_2x_3x_4}≤\frac {\sqrt {x_1x_2}+\sqrt {x_3x_4}} 2##
and applying the inequalities ##\sqrt {x_1x_2}≤ \frac {x_1+x_2} 2## and ##\sqrt {x_3x_4}≤ \frac {x_3+x_4} 2## gives:
##\sqrt[4] {x_1x_2x_3x_4}≤\frac {\sqrt {x_1x_2}+\sqrt {x_3x_4}} 2≤\frac {x_1+x_2+x_3+x_4} 4##, which is the desired result.
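A quick numeric spot check of that chain (not a proof, just reassurance; the helper function here is my own, not from the book):

```python
import math
import random

def chain_holds(x1, x2, x3, x4, tol=1e-9):
    # Check (x1 x2 x3 x4)^(1/4) <= (sqrt(x1 x2) + sqrt(x3 x4)) / 2
    #                            <= (x1 + x2 + x3 + x4) / 4
    g4 = (x1 * x2 * x3 * x4) ** 0.25
    mid = (math.sqrt(x1 * x2) + math.sqrt(x3 * x4)) / 2
    a4 = (x1 + x2 + x3 + x4) / 4
    return g4 <= mid + tol and mid <= a4 + tol

# Spot-check on many random nonnegative inputs
random.seed(0)
assert all(chain_holds(*(random.uniform(0, 100) for _ in range(4)))
           for _ in range(10_000))
```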

StoneTemplePython

Nicely done. Dare I ask: can you prove it for the ##n = 8## case? And then seamlessly prove it for all powers of 2 via induction (this is part (b) in your book, I suppose)?

> Nicely done. Dare I ask: can you prove it for the ##n = 8## case?
Well, setting ##x_1=\sqrt {y_1y_2}, x_2=\sqrt {y_3y_4},x_3=\sqrt {y_5y_6},x_4=\sqrt {y_7y_8}## and doing the same steps as in the ##n=4## case gives the desired inequality. Using this method in the induction step shows it holds for all ##n=2^k##, which means I have the second part licked. I would assume your continuation of the proof is the same as Spivak's in the third part:
"For a general ##n##, let ##2^m\gt n##. Apply part (b) to the ##2^m## numbers ##a_1,...,a_n,A_n,...,A_n##, where ##A_n## repeats ##2^m-n## times, to prove that ##G_n≤A_n##."
I still need to understand what exactly I need to do in the first part before I go to tackle the third, though; I just don't like the fact that I'm completely unable to use this method, regardless of how overly complicated it is.

StoneTemplePython:

Yes. Part (b) as you've stated it has a very natural iterative feel -- for ##n = 2^m## you just need to iteratively apply the process ##m## times and get the result.

And yes, the procedure in the third part is just 'filling in the gaps', as you stated, with copies of the arithmetic mean. The credit for the technique goes to Cauchy: he pioneered this and basically did (b) and (c). Cauchy did not do (a) and I don't think you need to either.
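Spelled out, the padding computation in (c) is just:

```latex
% The 2^m numbers a_1, \dots, a_n, A_n, \dots, A_n (with 2^m - n copies of
% A_n) still have arithmetic mean A_n, since
% \frac{n A_n + (2^m - n) A_n}{2^m} = A_n.  Part (b) then gives
\sqrt[2^m]{a_1 \cdots a_n \, A_n^{\,2^m - n}} \;\le\; A_n ,
% so raising to the power 2^m and dividing by A_n^{2^m - n} (trivial if
% A_n = 0) leaves
a_1 \cdots a_n \;\le\; A_n^{\,n},
\qquad\text{i.e.}\qquad G_n \le A_n .
```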

- - - -
When I look at (a) it seems like the author omitted some facts / verbiage / ideas here. I can see a majorization relation being built, and the Schur concavity of the ##n##th elementary symmetric function will give the result, but that is way outside the scope.
- - - -

I'd suggest taking a page from Cauchy and finishing (b) and (c) first, which gives the proof you need, and then working backwards to fill any gaps.
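In case it helps with reading (a) as a process, here's a small numeric sketch of the smoothing iteration as I understand it (picking the smallest and largest entries is my own convenience -- Spivak only requires some entry below ##A_n## and some above):

```python
import math

def smooth_to_mean(xs, tol=1e-9):
    """Repeatedly apply the replacement step until every entry equals A_n.

    Each pass takes the smallest entry (some a_i < A_n) and the largest
    (some a_j > A_n), replaces the first by A_n and the second by
    a_i + a_j - A_n.  The sum -- hence A_n -- is unchanged, the product
    never decreases by part (a), and each pass pins at least one more
    entry to A_n, so at most n - 1 passes are needed.
    """
    xs = list(xs)
    A = sum(xs) / len(xs)
    for _ in range(len(xs) - 1):
        if all(abs(x - A) <= tol for x in xs):
            break
        i = min(range(len(xs)), key=xs.__getitem__)  # entry below A_n
        j = max(range(len(xs)), key=xs.__getitem__)  # entry above A_n
        xs[i], xs[j] = A, xs[i] + xs[j] - A
    return xs

# Example: A_n = 4, and the product climbs 60 -> 168 -> 240 -> 256 = 4^4,
# which is exactly the informal reason why G_n <= A_n.
print(smooth_to_mean([1.0, 2.0, 3.0, 10.0]))  # [4.0, 4.0, 4.0, 4.0]
```

Since the product ends at ##A_n^n## and never went down, it started at ##G_n^n \le A_n^n##.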

> I'd suggest taking a page from Cauchy and finish (b) and (c) first which gives the proof you need, and then working backwards to fill any gaps.
Did what you said, and managed to do part (a) too by ignoring what the problem says and thinking logically for myself. Thanks for the assistance :)
