Trying to prove inequality with Lagrange multipliers

In summary: given N positive weights p_i with \sum_{i} p_{i}=1, the thread works through a Lagrange-multiplier argument that the inequality \prod_{i=1}^{N} x_{i}^{2 p_{i}} \leq \sum_{i=1}^{N} p_{i}x_{i}^{2} holds for any N numbers x_i, by minimizing the right-hand side subject to the left-hand side being held at a constant value S.
  • #1
xman
Show that if we have N positive numbers
[tex] \left[ p_{i}\right]_{i=1}^{N} [/tex]
such that
[tex] \sum_{i} p_{i} =1 [/tex]
then for any N numbers
[tex] \left\{x_{i}\right\}_{i=1}^{N} [/tex]
we have the inequality
[tex] \prod_{i=1}^{N} x_{i}^{2 p_{i}} \leq \sum_{i=1}^{N} p_{i}x_{i}^{2} [/tex]

So I am thinking of showing the inequality is true using Lagrange multipliers. First take the function
[tex] W = \sum_{i} p_{i}x_{i}^{2} [/tex]
and we want to minimize it subject to the constraint
[tex] S = \prod_{i} x_{i}^{2p_{i}} [/tex]
so we form the function
[tex] f^{\star} = f + \lambda g \Rightarrow f^{\star} =\sum_{i} p_{i}x_{i}^{2}+\lambda \left(S-\prod_{i} x_{i}^{2p_{i}}\right) [/tex]
So I think everything so far is ok... My question is: how do you differentiate an infinite series and an infinite product? Also, in this case is the Lagrange multiplier a single value [tex]\lambda[/tex], or is there one multiplier for each value of i; that is, do I need a [tex] \lambda_{i}[/tex]? Any direction or input is greatly appreciated.
 
  • #2
I'm a little confused by some of your wording, but I think what you're trying to do is show that, for a fixed set of pi, and a fixed value of S, W is always greater than or equal to S, whatever the xi that give this S may be. If you show this is true for any S, then for a given set of xi, these give rise to some S, and then you know the corresponding W for these xi must be greater than or equal to S. That's an interesting approach. I'm not sure if it'll work, but it's worth a try.

You'll only need one [itex]\lambda[/itex], because there's only one constraint equation. But you need to differentiate with respect to each xi. Namely, you have:

[tex]\frac{\partial}{\partial x_i} \left( f + \lambda g \right) = 0[/tex]

for all i from 1 to N.
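For concreteness (a sketch only, using the [tex]f^{\star}[/tex] defined above), this system of conditions reads

[tex] \frac{\partial f^{\star}}{\partial x_{j}} = 2 p_{j} x_{j} - \lambda \frac{\partial}{\partial x_{j}} \prod_{i=1}^{N} x_{i}^{2p_{i}} = 0, \qquad j = 1, \ldots, N [/tex]

with the product derivative still to be worked out.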
 
  • #3
Thanks for replying StatusX,
Sorry I wasn't clearer, but that's exactly what I'm trying to show. I thought this would be a fun problem, and I am not really familiar with Lagrange multipliers, so I thought I would try proving it this way. So, only one common multiplier, great. Now, is this correct?
[tex] \frac{\partial}{\partial x_{j}} \left(p_{i} x_{i}^{2}\right)= 2p_{i}x_{i} \frac{\partial x_{i}}{\partial x_{j}} \delta_{ij} [/tex]
where [tex] \delta_{ij} [/tex] is the Kronecker delta, of course. Now for the infinite product I'm not sure.
 
  • #4
First off, you want:

[tex]\frac{\partial x_i}{\partial x_j}= \delta_{ij}[/tex]

since the xi are independent. And, just for the record, neither the product nor the sum is infinite; they both range from 1 to N.

To differentiate the product, just write it out. All the terms that don't involve xi will be constants when you differentiate with respect to xi. You will end up with the original product times some prefactor.
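For reference (a sketch of that computation, assuming [tex]x_{j} \neq 0[/tex]), writing the product out and differentiating with respect to a single [tex]x_{j}[/tex] gives

[tex] \frac{\partial}{\partial x_{j}} \prod_{i=1}^{N} x_{i}^{2p_{i}} = 2 p_{j} x_{j}^{2p_{j}-1} \prod_{i \neq j} x_{i}^{2p_{i}} = \frac{2 p_{j}}{x_{j}} \prod_{i=1}^{N} x_{i}^{2p_{i}} = \frac{2 p_{j} S}{x_{j}} [/tex]

where the last step uses the constraint value [tex]S[/tex].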
 
  • #5
StatusX said:
First off, you want:

[tex]\frac{\partial x_i}{\partial x_j}= \delta_{ij}[/tex]

since the xi are independent. And, just for the record, neither the product nor the sum is infinite; they both range from 1 to N.

To differentiate the product, just write it out. All the terms that don't involve xi will be constants when you differentiate with respect to xi. You will end up with the original product times some prefactor.

Right, I meant to write
[tex] \frac{\partial x_{i}}{\partial x_{j}}=\delta_{ij} [/tex]
Sorry I keep saying "infinite" for the product and sum. Ok, so is this correct
[tex] 2 \sum_{i=1}^{N}p_{i} x_{i}+ \lambda \left(S-2 \prod_{i=1}^{N} p_{i}x_{i}^{2p_{i}-1} \right) =0 [/tex]
Does this seem reasonable?
 
  • #6
No, you only want to differentiate with respect to one xi at a time. You'll get N different equations.
 
  • #7
StatusX said:
No, you only want to differentiate with respect to one xi at a time. You'll get N different equations.

Sorry, I'm a little uncomfortable with these differentiation rules. Here it goes, so I should get
[tex] 2 p_{j} x_{j}+ \lambda \left( \frac{\partial S}{\partial x_{j}}-2 p_{j} x_{j}^{2p_{j}-1} \left( \prod_{i <j} x_{i}^{2p_{i}}\right)\right)=0 [/tex]
for each [tex] 1 \leq j \leq N [/tex]
 
  • #8
Closer. If you replaced i<j by i≠j, you'd just about have it, although there's a more convenient way to write it (in terms of S). And remember, S is a constant.
 
  • #9
Great, so it would be something like [tex] \ldots [/tex]

[tex] 2 p_{j} x_{j}+ \lambda \left( -2 p_{j} x_{j}^{2p_{j}-1} \left( \prod_{i \neq j} x_{i}^{2p_{i}}\right)\right)=0 [/tex]

From here we solve for [tex] \lambda [/tex] with the requirement that we want to minimize, right?
 
  • #10
Well, if you rewrite the last term in terms of S, like I suggested, you'll see that the equation is the same for every xi. What does this tell you? (You don't need to know lambda)
 
  • #11
Oh snap are you saying something along the lines of
[tex]
\frac{\partial f^{\star}}{\partial x_{1}} = 2 p_{1} x_{1} - 2\lambda p_{1}x_{1}^{2p_{1}-1}\prod_{i=2}^{N} x_{i}^{2p_{i}}=0, \quad \ldots, \quad
\frac{\partial f^{\star}}{\partial x_{N}} = 2p_{N} x_{N} - 2\lambda p_{N}x_{N}^{2p_{N}-1} \prod_{i=1}^{N-1} x_{i}^{2p_{i}}=0
[/tex]
So
[tex]
p_{1} x_{1} - \lambda p_{1}x_{1}^{-1}S=0, \quad \ldots, \quad p_{N} x_{N} - \lambda p_{N}x_{N}^{-1}S=0
\quad \Rightarrow \quad
\sum_{i} p_{i} x_{i}^{2} = \lambda \left(\sum_{i}p_{i}\right)S
[/tex]
Thus
[tex]
\lambda = S^{-1} \sum_{i} p_{i} x_{i}^{2}
[/tex]
 
  • #12
I was with you up till the last line. Try writing an expression for each xi in terms of only lambda and S. The exact equation isn't important, what is important is that it is the same for all xi, which means... (the key point).
 
  • #13
So we have
[tex] p_{1} x_{1}=\lambda \frac{p_{1}S}{x_{1}}, \quad \ldots, \quad p_{N}x_{N}= \lambda \frac{p_{N} S}{x_{N}} [/tex]
which simplifies to
[tex] x_{1} = \lambda \frac{S}{x_{1}}, \quad \ldots, \quad x_{N} = \lambda \frac{S}{x_{N}} \quad \Rightarrow \quad x_{1}^{2}=\lambda S, \quad \ldots, \quad x_{N}^{2}=\lambda S [/tex]
Right? I guess I'm obviously missing the key point here; the only thing that comes to mind now is
[tex] \lambda = \frac{x_{1}^{2}}{S} = \ldots =\frac{x_{N}^{2}}{S} [/tex]
So
[tex] x_{1}^{2}=\ldots = x_{N}^{2} [/tex]
So
[tex] \prod_{i=1}^{N} \left(x_{i}^{2}\right)^{p_{i}} = \left(x_{N}^{2}\right)^{\sum_{i}p_{i}} = x_{N}^{2} [/tex]
since we are given that
[tex] \sum_{i} p_{i} =1 [/tex]
Am I heading down the wrong path again?
 
  • #14
Right, you needed to show they were all equal. What this means is that W is at an extreme value (max or min) when all of the xi are equal. Now just calculate what S and W are in this case, and verify the inequality. You then need to verify this is actually the minimum of W.
 
  • #15
StatusX said:
Right, you needed to show they were all equal. What this means is that W is at an extreme value (max or min) when all of the xi are equal. Now just calculate what S and W are in this case, and verify the inequality. You then need to verify this is actually the minimum of W.
Awesome, so I want to show that W is a minimum here. Hence,
[tex] W(x^{c}) \leq W(x^{c}+\epsilon) \Rightarrow x^{2} \left(\sum_{i=1}^{N}p_{i}\right) \leq \left(x+\epsilon\right)^{2} \left(\sum_{i=1}^{N} p_{i}\right) [/tex]
Yielding
[tex] 0\leq \epsilon \left(2x+\epsilon\right) [/tex]
which shows us that for points to the left we have a negative slope and for points to the right we have a positive slope; since [tex] \epsilon >0 [/tex], the sign of the expression is dominated by the [tex]x[/tex] term, which indeed indicates a minimum. Now for the inequality it suffices to show
[tex] \prod_{i=1}^{N} x_{i}^{2p_{i}} \leq \sum_{i=1}^{N} p_{i}x_{i}^{2} \mid_{x=x^{c}} [/tex]
which immediately reduces to equality as
[tex] x^{2}=x^{2} [/tex]
per my previous post. Finally, we conclude that since W is a minimum and we have equality at the critical point [tex] x^{c} = x_{1}=\cdots=x_{N}[/tex], W is indeed an upper bound for our S, and therefore the inequality is true.

Are there any points I missed in the wrap-up here?
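As an informal sanity check of the final result, here is a quick numerical sketch in Python (illustrative only, not part of the proof): it tests the inequality for randomly generated positive weights summing to 1 and random positive values of the [tex]x_{i}[/tex].

[code]
import math
import random

# Informal check of the inequality
#   prod_i x_i^(2*p_i) <= sum_i p_i * x_i^2
# for random positive weights p_i with sum(p) == 1 and random positive x_i.
random.seed(0)
for _ in range(1000):
    N = random.randint(2, 6)
    raw = [random.random() + 1e-9 for _ in range(N)]
    total = sum(raw)
    p = [r / total for r in raw]                    # weights summing to 1
    x = [random.uniform(0.01, 10.0) for _ in range(N)]
    lhs = math.prod(xi ** (2 * pi) for xi, pi in zip(x, p))
    rhs = sum(pi * xi ** 2 for xi, pi in zip(x, p))
    assert lhs <= rhs + 1e-9, (lhs, rhs)
print("inequality held in all random trials")
[/code]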
 

1. What is the purpose of using Lagrange multipliers to prove an inequality?

Lagrange multipliers are used in optimization problems to find the maximum or minimum value of a function subject to certain constraints. In the case of proving an inequality, we can use Lagrange multipliers to find the maximum or minimum value of a function under certain constraints and then compare it to the given inequality to determine whether it holds.

2. How do Lagrange multipliers work in proving an inequality?

Lagrange multipliers work by introducing a new variable, called the Lagrange multiplier, to the original function and setting up a system of equations to find the critical points. These critical points correspond to the maximum or minimum value of the function under the given constraints, which can then be compared to the given inequality to determine its validity.
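Schematically, for an objective [tex]f[/tex] and a single constraint [tex]g(\mathbf{x}) = c[/tex], the candidate extrema are the solutions of

[tex] \nabla f(\mathbf{x}) = \lambda \, \nabla g(\mathbf{x}), \qquad g(\mathbf{x}) = c [/tex]

which is exactly the system solved component by component in the thread above.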

3. Can Lagrange multipliers be used for any type of inequality?

Yes, Lagrange multipliers can be used for any type of inequality as long as it is a differentiable function and the constraints are continuous. The method is applicable for both one-variable and multi-variable inequalities.

4. Are there any limitations to using Lagrange multipliers in proving an inequality?

One limitation of using Lagrange multipliers is that the constraints must be equality constraints, so to prove an inequality one typically has to recast it as an optimization problem with one side held equal to a constant, as was done with S above. Additionally, Lagrange multipliers may not always give the most efficient route to a proof.

5. Can Lagrange multipliers be used to prove strict inequalities?

Yes. After locating the constrained extremum, one checks whether equality can actually be attained; if equality holds only at the critical configuration, then the inequality is strict everywhere else, which is how strict inequalities such as f(x,y) > c or f(x,y) < c are established.
