Determining if a 3x3 matrix is negative semidefinite

  1. Apr 1, 2013 #1
    1. The problem statement, all variables and given/known data

    I have the matrix A = [-10 3.5 3; 3.5 -4 0.75; 3 0.75 -0.75]

    I need to determine whether this is negative semidefinite.

    2. Relevant equations



    3. The attempt at a solution

    1st order principal minors:

    -10
    -4
    -0.75

    2nd order principal minors:

    27.75
    -1.5
    2.4375

    3rd order principal minor:

    det(A) = 36.5625

    For a matrix to be negative semidefinite, the principal minors of odd order need to be ≤ 0 and those of even order ≥ 0. This suggests that the matrix is not negative semidefinite.
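
    As a cross-check, the same principal minors can be computed numerically (a quick sketch, assuming numpy is available):

    [code]
    import numpy as np
    from itertools import combinations

    # Matrix from the problem statement
    A = np.array([[-10.0, 3.5,  3.0 ],
                  [  3.5, -4.0, 0.75],
                  [  3.0, 0.75, -0.75]])

    # A principal minor of order k is the determinant of the submatrix formed
    # by keeping the same set of k rows and k columns.
    for k in (1, 2, 3):
        for idx in combinations(range(3), k):
            print(k, idx, round(np.linalg.det(A[np.ix_(idx, idx)]), 6))

    # order 1: -10, -4, -0.75
    # order 2: 27.75, -1.5, 2.4375
    # order 3: 36.5625
    [/code]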

    I don't believe my answer though for two reasons:
    - I thought that if the diagonal entries were all negative, the matrix was negative semidefinite?
    - I am looking at the Hessian of an expenditure function, and the function satisfies all the other conditions of being an expenditure function, so I think it should be negative semidefinite.

    Where have I gone wrong?
     
  2. Apr 1, 2013 #2

    Ray Vickson

    Science Advisor
    Homework Helper

    I find it easier to just test whether B = -A is positive semidefinite. It isn't in this case. One can see this explicitly by trying to find the Cholesky factorization of B (which exists for both a positive-definite and a positive-semidefinite matrix). If the Cholesky factorization does not exist (that is, if some diagonal entries of the Cholesky factor would have to be complex), the matrix is not positive semidefinite; that is what happens in this case, and in fact B turns out to be indefinite.
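
    For reference, here is a minimal numerical sketch of the same test (assuming numpy, which is not part of the original post):

    [code]
    import numpy as np

    A = np.array([[-10.0, 3.5,  3.0 ],
                  [  3.5, -4.0, 0.75],
                  [  3.0, 0.75, -0.75]])
    B = -A

    # numpy's Cholesky routine only succeeds for positive-definite matrices;
    # here it raises LinAlgError, so B is not positive definite.
    try:
        np.linalg.cholesky(B)
        print("B is positive definite")
    except np.linalg.LinAlgError:
        print("Cholesky failed: B is not positive definite")

    # The eigenvalues settle the semidefinite question as well: they come out
    # with mixed signs, so B is indefinite and A is not negative semidefinite.
    print(np.linalg.eigvalsh(B))
    [/code]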

    Note: in practice one never uses determinants to make these tests; instead, one does a matrix factorization. In this case, the factorization amounts to finding a way to write the quadratic form ##Q(x) = x^T B x## as a sum of squares. Writing out
    [tex] Q(x) = x^T B x = \sum_{i=1}^3 b_{ii} x_i^2 + 2 \sum_{i < j} b_{ij} x_i x_j[/tex]
    and completing squares puts it in the form
    [tex] Q(x) = U_1^2 + U_2^2 - \frac{195}{148} x_3^2,\\
    \text{where}\\
    U_1 = \sqrt{10} x_1 - \frac{7}{ 2 \sqrt{10}} x_2 - \frac{3}{\sqrt{10}} x_3 \\
    U_2 = \frac{\sqrt{1110}}{20} x_2 - \frac{6\sqrt{1110}}{185} x_3.
    [/tex]
    Note that it is possible to have some ##x = (x_1,x_2,x_3)## giving ##Q(x) > 0## and other x giving ##Q(x) < 0##, because of the negative sign in the third term. That is why the matrix is indefinite.
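
    As a quick check (not part of the original argument): ##Q(1,0,0) = 10 > 0##, while a vector chosen so that ##U_1 = U_2 = 0##, for example ##x = (39/74,\ 24/37,\ 1)##, leaves only the negative term and gives ##Q = -195/148 < 0##. A small numpy sketch:

    [code]
    import numpy as np

    B = -np.array([[-10.0, 3.5,  3.0 ],
                   [  3.5, -4.0, 0.75],
                   [  3.0, 0.75, -0.75]])

    def Q(x):
        x = np.asarray(x, dtype=float)
        return x @ B @ x

    print(Q([1.0, 0.0, 0.0]))      # 10.0 > 0
    print(Q([39/74, 24/37, 1.0]))  # -195/148 (about -1.318) < 0; here U1 = U2 = 0
    [/code]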

    If, as you believe, your matrix should be semidefinite, then you must have copied it down incorrectly; or, perhaps, you are using it to test a constrained optimum for a maximum, in which case you need to test the Hessian of the Lagrangian in the tangent space of the constraints (not the Hessian of the objective function over the whole space).
     
  3. Apr 1, 2013 #3
    I've gone over the original matrix a few times and can't see how it could be any different. I think these are constrained optima because they are optimal demand functions. The matrix is a Slutsky matrix, which by definition is identical to the Hessian of the expenditure function. If my approach only tests for semidefiniteness over the 'whole space' (not sure what this means), what do I need to do differently to test it in the tangent space?
     
  4. Apr 1, 2013 #4

    Ray Vickson

    Science Advisor
    Homework Helper

    You would need to show me your entire problem for me to "get" what you are saying. However, I can show you by example what I was saying.

    Suppose we want to maximize ##f = 5x_1 x_2 x_3 - x_1^2## subject to the constraint ##g = x_1^2+x_2^2+x_3^2-1= 0.## If we form the Lagrangian
    ##L = f + u g## (##u## is a Lagrange multiplier, often called ##\lambda##, but ##u## is easier to write), the first-order necessary conditions for a max are that
    [tex] \partial L / \partial x_i = 0, \; i = 1,2,3, \text{ and } g = 0.[/tex] Without going into the details, the solution is ##(x_1,x_2,x_3,u) = (\bar{x_1},\bar{x_2},\bar{x_3},\bar{u}),## where
    [tex] \bar{x_1} = -\frac{2}{15} +\frac{\sqrt{79}}{15}, \:
    \bar{x_2} = \bar{x_3} = \frac{1}{15} \sqrt{71 + 2\sqrt{79}}, \:
    \bar{u} = \frac{1}{3} - \frac{1}{6} \sqrt{79} [/tex]
    Numerically, we have
    [tex] \bar{x_1} = 0.4592129612, \: \bar{x_2} = \bar{x_3} = 0.6281414874,\:
    \bar{u} = -1.148032403 .[/tex]
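
    These numbers can be sanity-checked by substituting them back into the first-order conditions (a minimal sketch assuming numpy; not part of the original post):

    [code]
    import numpy as np

    x1 = -2/15 + np.sqrt(79)/15
    x2 = x3 = np.sqrt(71 + 2*np.sqrt(79))/15
    u  = 1/3 - np.sqrt(79)/6

    # Stationarity of L = 5*x1*x2*x3 - x1**2 + u*(x1**2 + x2**2 + x3**2 - 1)
    print(5*x2*x3 - 2*x1 + 2*u*x1)    # dL/dx1, should be ~0
    print(5*x1*x3 + 2*u*x2)           # dL/dx2, should be ~0
    print(5*x1*x2 + 2*u*x3)           # dL/dx3, should be ~0
    print(x1**2 + x2**2 + x3**2 - 1)  # constraint g, should be ~0
    print(x1, x2, u)                  # ~0.45921, ~0.62814, ~-1.14803
    [/code]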

    In order to test if ##\bar{x}## is a (local) constrained max, we must examine the Hessian of the Lagrangian ##L(x_1,x_2,x_3,\bar{u})## at the point ##\bar{x}##.

    Note that the objective function f (the thing we want to maximize) has only saddle points as stationary points, so in 3-space it has no max or min. However, we need to test the Hessian of L, not of f. We have ##HL = \text{Hessian of } L(x_1,x_2,x_3,\bar{u}) |_{\bar{x}}## given as
    [tex] HL = \pmatrix{-4.296064806 & 3.140707436 & 3.140707436\\
    3.140707436 & -2.296064806 & 2.296064805 \\
    3.140707436 & 2.296064805 & -2.296064806 }[/tex]
    This is an indefinite matrix.
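
    For reference, the same matrix and its eigenvalues can be produced numerically (a sketch with numpy, not part of the original post); the eigenvalues of mixed sign confirm the indefiniteness:

    [code]
    import numpy as np

    x1 = -2/15 + np.sqrt(79)/15
    x2 = x3 = np.sqrt(71 + 2*np.sqrt(79))/15
    u  = 1/3 - np.sqrt(79)/6

    # Second derivatives of L = 5*x1*x2*x3 - x1**2 + u*(x1**2 + x2**2 + x3**2 - 1)
    HL = np.array([[-2 + 2*u, 5*x3, 5*x2],
                   [5*x3,     2*u,  5*x1],
                   [5*x2,     5*x1, 2*u ]])

    print(HL)
    print(np.linalg.eigvalsh(HL))   # eigenvalues of both signs -> indefinite
    [/code]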

    However, we are supposed to test HL projected down into the subspace of the tangent to the constraint at ##\bar{x}##. What this means is the following: we look at points near ##\bar{x}##, of the form ##(x_1,x_2,x_3) = (\bar{x_1}+p_1,\bar{x_2}+p_2,\bar{x_3}+p_3)## for small ##|p|##. The Hessian HL gives the quadratic expansion of ##L(x_1,x_2,x_3,\bar{u})## in terms of the ##p_i##; that is, it gives the second-order terms in a Taylor expansion, so that
    [tex] L(\bar{x}+p,\bar{u}) \doteq L(\bar{x},\bar{u}) + \tfrac{1}{2} Q(p), \quad Q(p) \equiv \sum_i \sum_j HL_{ij} p_i p_j, [/tex] if we drop terms of higher than second order in the ##p_j## (the first-order terms vanish because ##\bar{x}## is a stationary point of L). We have just seen that HL is indefinite, so this quadratic form in the ##p_i## is indefinite; that is, we have Q(p) > 0 for some p and Q(p) < 0 for other p. However, for vectors p lying in the tangent space of the constraint, ##p## is perpendicular to ##\nabla g(\bar{x})##, so we must have ##\bar{x_1} p_1 + \bar{x_2} p_2 + \bar{x_3} p_3 =0##. That is, we can express ##p_3## as a linear combination of ##p_1## and ##p_2##:
    [tex]p_3 = -0.7310661219 p_1-p_2 . [/tex] When we plug this into Q(p) we end up with a quadratic form in ##p_1## and ##p_2## alone. This 2-variable quadratic form is given as
    [tex] -10.11534387 p_1^2 - 6.714300770 p_1 p_2 - 9.184259223 p_2^2 [/tex]
    The Hessian of this is
    [tex] H_0 = \pmatrix{-20.23068775 & -6.714300766 \\
    -6.714300766 & -18.36851845}[/tex]
    We can see that ##H_0## is a negative-definite matrix, so the point ##\bar{x}## is a strict local constrained max of f.
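
    Equivalently (a numerical sketch assuming numpy, not part of the original post), the substitution for ##p_3## can be written as ##p = S\,(p_1,p_2)^T##; the reduced 2x2 Hessian obtained this way reproduces ##H_0## and has two negative eigenvalues:

    [code]
    import numpy as np

    x1 = -2/15 + np.sqrt(79)/15
    x2 = x3 = np.sqrt(71 + 2*np.sqrt(79))/15
    u  = 1/3 - np.sqrt(79)/6

    HL = np.array([[-2 + 2*u, 5*x3, 5*x2],
                   [5*x3,     2*u,  5*x1],
                   [5*x2,     5*x1, 2*u ]])

    # Tangent-space vectors: p3 = -(x1/x3)*p1 - (x2/x3)*p2, i.e. p = S @ (p1, p2)
    S = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [-x1/x3, -x2/x3]])

    H0 = 2 * S.T @ HL @ S            # reduced Hessian; matches the 2x2 matrix above
    print(H0)
    print(np.linalg.eigvalsh(H0))    # both negative -> negative definite on the tangent space
    [/code]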

    Note that none of the matrices involved were definite or semidefinite over the whole space of three variables; however, the one matrix that we really care about IS negative definite in the tangent subspace, and that is enough (by some theorems in optimization theory).
     