Question in Proof of second order condition with linear constraints

In summary, the thread discusses a proof for determining the sign definiteness of a quadratic form subject to linear constraints, specifically via the bordered Hessian matrix. The proof proceeds by changing the basis of the bordered Hessian and examining the resulting quadratic forms. The author raises two questions: one about why the quadratic forms in the proof take their specific shape, and one about the origin of certain terms in the final equations.
  • #1
holemole
http://www.math.northwestern.edu/~clark/285/2006-07/handouts/lin-constraint.pdf


It's actually a proof for finding the sign definiteness of a quadratic form with linear constraints from the signs of the submatrices of the bordered Hessian.

The proof runs from page 2 to page 3. I have 2 questions:

1. Around the 6th line of the proof, it mentions "E" being a quadratic form of A, the Hessian of our objective function. Its specific form is given in the paper, but why is it formed that way? Is it just to build a quadratic form that will fit inside another quadratic form presented later in the proof? I have a similar question about the quadratic form of H, the bordered Hessian.

2. About the last 6 lines of the proof.

The two conditions, each representing the positive and negative definite case, follow (as far as I understand) from (-1)^k det(B_1)^2 det(E). So in the negative definite case, where does (-1)^(j-k) det(H_j) = (-1)^(j-2k) det(B_1)^2 det(E_(j-2k)) > 0 come from?
 

Attachments

  • linearly constrained quadratic forms.pdf
  • #2
Kind of figured it out by myself now. The reason the proof introduces E is that while E is a quadratic form of A, Q is in turn a quadratic form of E. Thus the sign definiteness of Q can be read off det(E), which is attainable if we follow the proof's manipulation of changing the basis of the bordered Hessian.

The (-1)^(j-2k) comes from the fact that, for Q to be negative definite, its discriminant (in this case det(E)) needs alternating negative, positive, negative, ... signs for its leading submatrices. Multiplying both sides of the equation by (-1)^(j-2k) turns that alternating sequence into a single positivity condition.
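As a sanity check on the sign conditions discussed above, here is a small numerical sketch of the bordered Hessian minor test in the handout's notation (k constraints, n variables). This is not code from the paper; `det` and `bordered_hessian_test` are hypothetical helper names, and the minor ranges follow my reading of the statement: positive definiteness on {Bx = 0} needs (-1)^k det(H_j) > 0 and negative definiteness needs (-1)^(j-k) det(H_j) > 0, for j = 2k+1, ..., n+k.

```python
def det(M):
    # Laplace expansion along the first row (fine for small matrices)
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for c in range(n):
        minor = [row[:c] + row[c + 1:] for row in M[1:]]
        total += (-1) ** c * M[0][c] * det(minor)
    return total

def bordered_hessian_test(A, B):
    """Classify x^T A x on {x : Bx = 0} via leading principal minors of
    the bordered Hessian H = [[0, B], [B^T, A]] (k constraints, n vars)."""
    k, n = len(B), len(A)
    # Assemble H: a k x k zero block, B, B^T, and A
    H = [[0] * k + list(B[i]) for i in range(k)]
    H += [[B[r][i] for r in range(k)] + list(A[i]) for i in range(n)]
    # Leading principal minors det(H_j) for j = 2k+1, ..., n+k
    js = range(2 * k + 1, n + k + 1)
    minors = [det([row[:j] for row in H[:j]]) for j in js]
    pos = all((-1) ** k * m > 0 for m in minors)                # (-1)^k det(H_j) > 0
    neg = all((-1) ** (j - k) * m > 0 for j, m in zip(js, minors))  # (-1)^(j-k) det(H_j) > 0
    if pos:
        return "positive definite"
    if neg:
        return "negative definite"
    return "indefinite/degenerate"

# x1^2 + x2^2 restricted to x1 + x2 = 0 is positive definite
print(bordered_hessian_test([[2, 0], [0, 2]], [[1, 1]]))   # -> positive definite
# -(x1^2 + x2^2) restricted to x1 + x2 = 0 is negative definite
print(bordered_hessian_test([[-2, 0], [0, -2]], [[1, 1]]))  # -> negative definite
```

With n = 2 and k = 1 there is only one minor to check (j = 3, the full 3x3 determinant), which makes the two sign rules easy to verify by hand.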
 

FAQ: Question in Proof of second order condition with linear constraints

What is the second order condition in proof of linear constraints?

The second order condition is a mathematical criterion used to determine the nature of a critical point of a function. In the context of linear constraints, it is used to check whether the critical point is a local minimum, a local maximum, or a saddle point.

How is the second order condition applied in the proof of linear constraints?

The second order condition is applied by evaluating the second derivatives of the objective function at the critical point. In one variable, a positive second derivative means a local minimum and a negative one a local maximum. In several variables with linear constraints, the analogous check is the definiteness of the Hessian on the constraint set, typically read off from the signs of the leading principal minors of the bordered Hessian. If the relevant quadratic form is degenerate, further analysis is needed to determine the nature of the critical point.
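For the unconstrained two-variable case, the rule above can be sketched with the standard determinant test for a 2x2 Hessian (a toy illustration; `classify` is a hypothetical helper name, not from the handout):

```python
def classify(hessian):
    # 2x2 Hessian [[f_xx, f_xy], [f_yx, f_yy]]: determinant test
    (a, b), (c, d) = hessian
    D = a * d - b * c
    if D < 0:
        return "saddle point"
    if D > 0:
        # D > 0 means definite; the sign of f_xx picks min vs max
        return "local minimum" if a > 0 else "local maximum"
    return "inconclusive"

print(classify([[2, 0], [0, 2]]))   # f = x^2 + y^2 at the origin -> local minimum
print(classify([[2, 0], [0, -2]]))  # f = x^2 - y^2 at the origin -> saddle point
```

The "inconclusive" branch (determinant zero) corresponds to the degenerate case in the answer above, where the second order test alone cannot classify the point.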

What are the necessary and sufficient conditions for the second order condition to hold?

The objective function must be twice continuously differentiable, and the Hessian matrix (the matrix of second partial derivatives) must be definite at the critical point: positive definite (on the constraint set) for a strict local minimum, negative definite for a strict local maximum. Semi-definiteness is only a necessary condition and leaves the classification open.

Can the second order condition be used to prove global optimality in linear constraints?

No, the second order condition only establishes local optimality. To prove global optimality under linear constraints, other techniques such as convexity or duality must be used.

What are some common mistakes to avoid when using the second order condition in proof of linear constraints?

Some common mistakes to avoid include: not checking the necessary and sufficient conditions, assuming the critical point is a local minimum without further analysis, and confusing the second order condition with other optimality conditions such as the first order condition.
