earti193 said:
I've gone over the original matrix a few times and can't see how it can be any different. I think that these are constrained optima because they are optimal demand functions. The matrix is a Slutsky matrix, which by definition is identical to the Hessian of the expenditure function. If my approach was only testing for semidefiniteness in the 'whole space' (not sure what this means), what do I need to do differently to test it in the tangent space?
You would need to show me your entire problem for me to "get" what you are saying. However, I can show you by example what I was saying.
Suppose we want to maximize ##f = 5x_1 x_2 x_3 - x_1^2## subject to the constraint ##g = x_1^2+x_2^2+x_3^2-1= 0.## If we form the Lagrangian
##L = f + u g## (##u=## a Lagrange multiplier---often called ##\lambda##, but ##u## is easier to write) the first-order necessary conditions for a max are that
$$\frac{\partial L}{\partial x_i} = 0, \; i = 1,2,3, \quad \text{and} \quad g = 0.$$
Without going into the details, the solution is ##(x_1,x_2,x_3,u) = (\bar{x_1},\bar{x_2},\bar{x_3},\bar{u}),## where
$$\bar{x_1} = -\frac{2}{15} +\frac{\sqrt{79}}{15}, \quad
\bar{x_2} = \bar{x_3} = \frac{1}{15} \sqrt{71 + 2\sqrt{79}}, \quad
\bar{u} = \frac{1}{3} - \frac{1}{6} \sqrt{79}.$$
Numerically, we have
$$\bar{x_1} = 0.4592129612, \quad \bar{x_2} = \bar{x_3} = 0.6281414874, \quad \bar{u} = -1.148032403 .$$
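If it helps, here is a quick numerical sanity check (my own script, not part of the original argument) that these values really do satisfy the first-order conditions and the constraint:

```python
# Verify the claimed stationary point of L = f + u*g numerically,
# where f = 5*x1*x2*x3 - x1^2 and g = x1^2 + x2^2 + x3^2 - 1.
import numpy as np

x1 = -2/15 + np.sqrt(79)/15
x2 = x3 = np.sqrt(71 + 2*np.sqrt(79))/15
u = 1/3 - np.sqrt(79)/6

# Gradient of the Lagrangian with respect to (x1, x2, x3)
dL = np.array([5*x2*x3 - 2*x1 + 2*u*x1,
               5*x1*x3 + 2*u*x2,
               5*x1*x2 + 2*u*x3])
g = x1**2 + x2**2 + x3**2 - 1

print(dL, g)  # all entries ~0: first-order conditions and constraint hold
```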
In order to test whether ##\bar{x}## is a (local) constrained max, we must examine the Hessian of the Lagrangian ##L(x_1,x_2,x_3,\bar{u})## at the point ##\bar{x}##.
Note that the objective function ##f## (the thing we want to maximize) has only saddle points as stationary points, so in 3-space it has no max or min. However, we need to test the Hessian of ##L##, not of ##f##. We have ##HL = \text{Hessian of } L(x_1,x_2,x_3,\bar{u}) |_{\bar{x}}## given as
$$HL = \pmatrix{-4.296064806 & 3.140707436 & 3.140707436\\
3.140707436 & -2.296064806 & 2.296064806 \\
3.140707436 & 2.296064806 & -2.296064806 }$$
(the diagonal entries are ##-2+2\bar{u},\, 2\bar{u},\, 2\bar{u}## and the off-diagonal entries are ##5\bar{x_3},\, 5\bar{x_2},\, 5\bar{x_1}##).
This is an indefinite matrix.
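You can confirm the indefiniteness numerically (again, my own check, computed straight from the second partials of ##L##):

```python
# Build the Hessian of L(x, u-bar) at x-bar and show it is indefinite:
# its eigenvalues have mixed signs.
import numpy as np

x1 = -2/15 + np.sqrt(79)/15
x2 = x3 = np.sqrt(71 + 2*np.sqrt(79))/15
u = 1/3 - np.sqrt(79)/6

# Second partials of L = 5*x1*x2*x3 - x1^2 + u*(x1^2 + x2^2 + x3^2 - 1)
HL = np.array([[-2 + 2*u, 5*x3,    5*x2],
               [5*x3,     2*u,     5*x1],
               [5*x2,     5*x1,    2*u ]])

eig = np.linalg.eigvalsh(HL)
print(eig)  # mixed signs, so HL is indefinite on the whole space
```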
However, we are supposed to test HL projected down into the tangent space of the constraint at ##\bar{x}##. What this means is the following: we look at points near ##\bar{x}##, of the form ##(x_1,x_2,x_3) = (\bar{x_1}+p_1,\bar{x_2}+p_2,\bar{x_3}+p_3)## for small ##|p|##. The Hessian HL gives the quadratic expansion of ##L(x_1,x_2,x_3,\bar{u})## in terms of the ##p_i##; that is, it gives the second-order terms in a Taylor expansion, so that
$$L(\bar{x}+p,\bar{u}) \doteq L(\bar{x},\bar{u}) + \tfrac{1}{2} Q(p), \qquad Q(p) \equiv \sum_i \sum_j HL_{ij}\, p_i p_j,$$
if we drop terms of higher than second order in the ##p_j## (the first-order terms vanish because the gradient of ##L## is zero at ##\bar{x}##). We have just seen that HL is indefinite, so the above quadratic form in the ##p_i## is indefinite; that is, we have ##Q(p) > 0## for some ##p## and ##Q(p) < 0## for other ##p##. However, a vector ##p## lying in the tangent space of the constraint is perpendicular to ##\nabla g(\bar{x})##, so we must have ##\bar{x_1} p_1 + \bar{x_2} p_2 + \bar{x_3} p_3 =0##. That is, we can express ##p_3## as a linear combination of ##p_1## and ##p_2##:
$$p_3 = -0.7310661219\, p_1 - p_2 .$$
When we plug this into ##Q(p)## we end up with a quadratic form in ##p_1## and ##p_2## alone. This 2-variable quadratic form is given as
$$-10.11534387\, p_1^2 - 6.714300770\, p_1 p_2 - 9.184259223\, p_2^2 .$$
The Hessian of this is
$$H_0 = \pmatrix{-20.23068775 & -6.714300770 \\
-6.714300770 & -18.36851845} .$$
We can see that ##H_0## is a negative-definite matrix, so the point ##\bar{x}## is a strict local constrained max of ##f##.
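The whole projection step can be done in one shot with a tangent-space basis (my own sketch; the matrix it produces is the matrix of the restricted quadratic form, i.e. half of the ##H_0## above, with the same sign pattern):

```python
# Project HL onto the tangent space {p : x1*p1 + x2*p2 + x3*p3 = 0}
# using the basis vectors (1, 0, -x1/x3) and (0, 1, -x2/x3),
# then test definiteness of the projected matrix.
import numpy as np

x1 = -2/15 + np.sqrt(79)/15
x2 = x3 = np.sqrt(71 + 2*np.sqrt(79))/15
u = 1/3 - np.sqrt(79)/6

HL = np.array([[-2 + 2*u, 5*x3, 5*x2],
               [5*x3,     2*u,  5*x1],
               [5*x2,     5*x1, 2*u ]])

# Columns of B span the tangent space of g at x-bar
B = np.array([[1.0,     0.0],
              [0.0,     1.0],
              [-x1/x3, -x2/x3]])

H0 = B.T @ HL @ B  # matrix of Q restricted to the tangent space
print(H0)
print(np.all(np.linalg.eigvalsh(H0) < 0))  # True: negative definite
```

This is exactly the "test in the tangent space" asked about above: the unconstrained Hessian is indefinite, but its restriction to the tangent space is negative definite.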
Note that none of the matrices involved were definite or semidefinite over the whole space of three variables; however, the one matrix that we really care about IS negative definite on the tangent subspace, and that is enough (by some theorems in optimization theory).