How can extrema points be used to prove mathematical inequalities?

hamsterman
I'm reading a math book and found a couple of proofs I can't do.

1. Given ##x \in \mathbb{R}^n## and ##a \in \mathbb{R}## with ##\sum_{i=1}^n x_i = na##, prove that
$$\sum_{i \in A}\prod_{j=1}^{k} x_{i_j} \;\leq\; \binom{n}{k}a^k, \qquad A = \{\, i \in \{1, 2, \dots, n\}^k : i_1 < i_2 < \dots < i_k \,\},$$
which essentially says that if the average of the ##x_i## is ##a##, then the products of ##k## of the ##x_i##, averaged over all ##\binom{n}{k}## choices, do not exceed ##a^k##.
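For instance, in the smallest nontrivial case ##n = k = 2## the claim reduces to a familiar fact: if ##x_1 + x_2 = 2a##, then writing ##x_1 = a + t## and ##x_2 = a - t## gives
$$x_1 x_2 = a^2 - t^2 \;\leq\; a^2 = \binom{2}{2}a^2.$$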

2. Given ##A = (a_{ij}) \in L(\mathbb{R}^n)##, prove that
$$(\det A)^2 \;\leq\; \prod_{i=1}^{n}\sum_{j=1}^{n} a_{ij}^2.$$
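Here too the ##n = 2## case is already instructive: the inequality becomes
$$(a_{11}a_{22} - a_{12}a_{21})^2 \;\leq\; (a_{11}^2 + a_{12}^2)(a_{21}^2 + a_{22}^2),$$
which is just Cauchy–Schwarz applied to the vectors ##(a_{11}, a_{12})## and ##(a_{22}, -a_{21})##.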

The problems are given in a section about extrema points, and I do see that they could be proved by finding the minimum of (right side - left side). I know how to use Lagrange multipliers in the first case and plain differentiation in the second to look for that point; the problem is that the derivatives turn out very ugly, and I don't think I can solve them.
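For concreteness, here is roughly how the Lagrange setup for (1) looks (just the candidate point, not a full argument): write ##e_k(x)## for the left-hand side and ##g(x) = \sum_i x_i - na## for the constraint. Since ##\partial e_k/\partial x_m## is the ##(k-1)##-st elementary symmetric polynomial of the remaining ##n-1## variables, the condition ##\nabla e_k = \lambda \nabla g## reads
$$e_{k-1}(x_1, \dots, \widehat{x_m}, \dots, x_n) = \lambda \quad \text{for every } m,$$
which the symmetric point ##x_1 = \dots = x_n = a## satisfies, with candidate value ##e_k(a, \dots, a) = \binom{n}{k}a^k##. Showing that this really is the maximum is the hard part.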

One idea I had was that there might exist matrices whose determinants (or some other function of a matrix) equal the expressions on the left side of (1) and the right side of (2), so that the whole problem could be lifted to linear algebra. But my linear algebra is really poor.

I'd love to hear some suggestions about this.
 

(1) follows from Maclaurin's inequality, which is stated in many places online but proved in few. One proof can be found at http://www.nerdburrow.com/Newtonmaclaurininequality/ ; it is short but has some suspect aspects. A complete, though longer, proof can be found at http://www2.math.su.se/gemensamt/grund/exjobb/matte/2004/rep21/report.pdf (which, despite its title, is in English).
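For reference, if ##e_k(x)## denotes the left-hand side of (1) (the ##k##-th elementary symmetric polynomial of the ##x_i##), Maclaurin's inequality for positive reals ##x_1, \dots, x_n## states
$$\frac{e_1}{\binom{n}{1}} \;\geq\; \left(\frac{e_2}{\binom{n}{2}}\right)^{1/2} \;\geq\; \dots \;\geq\; \left(\frac{e_n}{\binom{n}{n}}\right)^{1/n}.$$
Since ##e_1/n = a##, the ##k##-th term gives ##e_k \leq \binom{n}{k}a^k##, which is exactly (1). Note that the inequality in this form assumes ##x_i > 0##.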

RGV
 
Thanks a lot.
Any ideas about (2)?
 
hamsterman said:
Thanks a lot.
Any ideas about (2)?

No, but I recall seeing it proved somewhere; I just don't remember where. I suggest you Google "determinant inequalities" to see what comes up.
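In the meantime, a quick numerical sanity check of (2) is easy to run; a minimal NumPy sketch along these lines (only an illustration, not a proof) compares the two sides on random matrices:

[CODE]
import numpy as np

rng = np.random.default_rng(0)

for _ in range(1000):
    n = int(rng.integers(1, 6))             # random dimension between 1 and 5
    A = rng.normal(size=(n, n))             # random real n x n matrix
    lhs = np.linalg.det(A) ** 2             # (det A)^2
    rhs = np.prod(np.sum(A ** 2, axis=1))   # product over rows of the sum of squared entries
    assert lhs <= rhs * (1 + 1e-9)          # small relative slack for floating-point error

print("inequality (2) held on all random samples")
[/CODE]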

RGV
 
If you're interested, I found it.

Thanks again. I wouldn't have thought that Google could help here.
 