Find the minimum and maximum value of a quadratic form

Summary
Find the minimum and maximum value of the quadratic form ##r(\textbf{x})=x_1^2+x_2^2+x_3^2## subject to the constraint ##q(\textbf{x})=x_1^2+3x_2^2+x_3^2+2x_1x_2-2x_1x_3-2x_2x_3=1##.
By working with the following definition of minimum of a quadratic form ##r(\textbf{x})##,

##\lambda_1=\underset{||\textbf{x}||=1}{\text{min}} \ r(\textbf{x})##
where ##\lambda_1## denotes the smallest eigenvalue of ##r##, how would one tackle the above problem?

It is clear that the diagonal representation of ##q## is ##y_2^2+4y_3^2##, but how can one apply the above to find the minimum? (Bonus question: does the diagonal representation preserve scale?)
 

StoneTemplePython

Summary: Find the minimum and maximum value of the quadratic form ##r(\textbf{x})=x_1^2+x_2^2+x_3^2## subject to the constraint ##q(\textbf{x})=x_1^2+3x_2^2+x_3^2+2x_1x_2-2x_1x_3-2x_2x_3=1##...

It is clear that the diagonal representation of ##q## is ##y_2^2+4y_3^2##, but...
here's one approach:
write
##\mathbf x = \mathbf Q \mathbf y ##

where ##\mathbf q_i##, the ith column of ##\mathbf Q##, is the ith eigenvector and the eigenvalues are in ascending order
##\lambda_1 \leq \lambda_2 \leq \lambda_3##
##\mathbf A = \left[\begin{matrix}1 & 1 & -1\\1 & 3 & -1\\-1 & -1 & 1\end{matrix}\right] = \mathbf {Q\Sigma Q}^T##
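if you want to sanity check the decomposition numerically, here's a minimal sketch (assuming numpy is available):

```python
import numpy as np

A = np.array([[ 1.,  1., -1.],
              [ 1.,  3., -1.],
              [-1., -1.,  1.]])

# eigh handles symmetric matrices: eigenvalues come back in ascending
# order and the eigenvectors (the columns of Q) are orthonormal
evals, Q = np.linalg.eigh(A)
print(np.round(evals, 8))                 # ~ [0. 1. 4.]

Sigma = np.diag(evals)
print(np.allclose(A, Q @ Sigma @ Q.T))    # True: A = Q Sigma Q^T
```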

your problem statement indicates that you want a minimum and a maximum...

for the minimum
set ##\mathbf y = \frac{1}{2}\mathbf e_3## (3rd standard basis vector). You can easily check that this is feasible. And if your constraint holds you know

##0y_1^2 + y_2^2 +4 y_3^2 = 1##
so plugging this into your objective function you have

##y_1^2 + y_2^2 + y_3^2 = y_1^2 + y_2^2 + y_3^2 + 1 - \big(0y_1^2 + y_2^2 +4 y_3^2\big) ##
so
##\text{min: }\big \Vert \mathbf x\big \Vert_2^2 ##
##= \text{min: }\big \Vert \mathbf y\big \Vert_2^2 ##
##=\text{min: } y_1^2 + y_2^2 + y_3^2 ##
##= \text{min: } y_1^2 + y_2^2 + y_3^2 + 1 - \big(0y_1^2 + y_2^2 +4 y_3^2\big) ##
##=\text{min: } 1 + y_1^2 - 3 y_3^2##

allocating to ##y_1## monotonically increases the payoff while not altering your constraint -- this should tell you there is no max.
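a quick numerical illustration of that unbounded direction (a sketch, in the diagonalized ##\mathbf y##-coordinates above):

```python
import numpy as np

# along the feasible ray y = (t, 0, 1/2) the constraint
# 0*y1^2 + y2^2 + 4*y3^2 = 1 holds for every t, while the objective
# y1^2 + y2^2 + y3^2 = t^2 + 1/4 grows without bound
for t in [0.0, 1.0, 10.0, 100.0]:
    y = np.array([t, 0.0, 0.5])
    constraint = y[1]**2 + 4*y[2]**2    # the y1 coefficient is 0
    objective = y @ y
    print(t, constraint, objective)      # constraint stays 1.0
```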

as for the minimum... you just need to convince yourself that (i) the minimum implies no allocation to ##y_1##, and (ii) for any feasible ##y_2' \neq 0##, you have

##1 = (y_2')^2 \cdot 1 + (y_3')^2\cdot 4 = y_3^2 \cdot 4##
implies
##(y_3')^2 \lt (y_3)^2##
and so you have an exchange argument showing any ##y_2' \neq 0## is inferior
i.e. revisiting the payoff function you have
##-3(y_3')^2 \gt -3(y_3)^2##
- - - -
a technical nit remains -- it's good form to show a kind of compactness here -- i.e. after ruling out any allocation to ##y_1## for the minimization, the problem lives in a 2-d space where
##y_2^2 + 4y_3^2 = z_2^2 + z_3^2 = 1## with ##y_2 = z_2## and ##z_3 = 2 \cdot y_3##

i.e. we have an optimization problem on the unit circle in a 2-d space ##Z## that is closed and bounded, hence compact, which guarantees the existence of the minimum since the objective function is continuous...
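and that compact 2-d problem is easy to scan numerically -- a sketch, parametrizing the feasible ellipse by an angle:

```python
import numpy as np

# y2 = cos(theta), y3 = sin(theta)/2 satisfies y2^2 + 4*y3^2 = 1 exactly,
# so the objective on the feasible set is 1 - (3/4)*sin(theta)^2
theta = np.linspace(0.0, 2*np.pi, 100001)
y2, y3 = np.cos(theta), np.sin(theta) / 2
objective = y2**2 + y3**2
print(objective.min())   # ~ 0.25, at theta = pi/2, i.e. y = (0, 0, 1/2)
```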
- - - -
for a more typical linear algebraic take, consider ##\mathbf D^\frac{1}{2}\mathbf y = \mathbf z##, where ##\mathbf D = \mathbf \Sigma## is the diagonal matrix of eigenvalues ##\{0, 1, 4\}##. Your original problem in some sense is to optimize

##\mathbf y^T \mathbf I \mathbf y## with a peculiar constraint on ##\mathbf y##, or instead to optimize
##\mathbf z^T \mathbf D^{-\frac{1}{2}}\mathbf I \mathbf D^{-\frac{1}{2}}\mathbf z = \mathbf z^T \mathbf D^{-1}\mathbf z## with the constraint that ##\big \Vert \mathbf z \big \Vert_2 = 1##, which would be in standard form... but the issue is that ##\mathbf D^{-\frac{1}{2}}## doesn't exist since ##\mathbf D## is singular, and that makes the problem a bit of a mess.
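for intuition about where that substitution is headed, here's a sketch with a nonsingular stand-in for ##\mathbf D## -- the ##\delta## below is a hypothetical regularization of the zero eigenvalue:

```python
import numpy as np

# with D = diag(delta, 1, 4) and delta > 0, z = D^(1/2) y is invertible
# and z^T D^{-1} z recovers y^T y exactly
delta = 0.01
D = np.diag([delta, 1.0, 4.0])
D_half = np.diag(np.sqrt([delta, 1.0, 4.0]))

rng = np.random.default_rng(1)
y = rng.standard_normal(3)
z = D_half @ y
print(np.isclose(z @ np.linalg.inv(D) @ z, y @ y))   # True
```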
 
Thanks for the reply. What’s confusing is that the “vector” constraint, i.e. ##y_2^2+4y_3^2##, doesn’t appear within ##r(\textbf{x})=x_1^2+x_2^2+x_3^2##. It makes it less pleasant versus the constraint in the definition. Anyway, how does the constraint ##y_2^2+4y_3^2=1## change the definition of minimum given in #1 for this particular case?
 
By the way, in the basis of eigenvectors of ##q## it becomes ##y_2^2+4y_3^2##, whereas ##r## in the basis of its eigenvectors stays the same, that is, ##x_1^2+x_2^2+x_3^2##. How come one can insert the one expression into the other when they express different coordinate systems? In the basis of eigenvectors of ##q##, ##r## would take on a different form, wouldn’t it?
 

StoneTemplePython

By the way, in the basis of eigenvectors of ##q## it becomes ##y_2^2+4y_3^2##, whereas ##r## in the basis of its eigenvectors stays the same, that is, ##x_1^2+x_2^2+x_3^2##. How come one can insert the one expression into the other when they express different coordinate systems? In the basis of eigenvectors of ##q##, ##r## would take on a different form, wouldn’t it?
I presume you're referring to how I wrote
##\text{min: }\big \Vert \mathbf x\big \Vert_2^2 = \text{min: }\big \Vert \mathbf y\big \Vert_2^2##
that's because
##\mathbf x = \mathbf Q \mathbf y ##
and the 2 norm is unitarily invariant in ##\mathbb C## -- or orthogonally invariant in ##\mathbb R## if you prefer. Just expand and confirm this for yourself.
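a one-liner sketch of that invariance check, using the ##\mathbf Q## from post 2 and assuming numpy:

```python
import numpy as np

A = np.array([[1., 1., -1.], [1., 3., -1.], [-1., -1., 1.]])
_, Q = np.linalg.eigh(A)              # columns of Q are orthonormal

y = np.random.default_rng(0).standard_normal(3)
x = Q @ y
# orthogonal invariance of the 2 norm: ||Q y||_2 = ||y||_2
print(np.isclose(np.linalg.norm(x), np.linalg.norm(y)))   # True
```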

What’s confusing is that the “vector” constraint, i.e. ##y_2^2+4y_3^2##, doesn’t appear within ##r(\textbf{x})=x_1^2+x_2^2+x_3^2##. It makes it less pleasant versus the constraint in the definition. Anyway, how does the constraint ##y_2^2+4y_3^2=1## change the definition of minimum given in #1 for this particular case?
A couple of problems here. One, I take issue with asserting that ##\lambda_1## is defined as the minimum of a quadratic form. Typically you already know what eigenvalues are, and then you get that ##\lambda_1## gives the minimum as the result of a theorem. If you insist on trying to view ##\lambda_1## as being defined as a minimum, you're going to have all kinds of difficulties generalizing -- e.g. in cases like this where the quadratic form is not in standard form, in cases where you have square but non-normal matrices, and with real skew-symmetric matrices (that aren't zero), which will cause you bizarre problems since they always have a quadratic form of zero but definitely aren't nilpotent (their nonzero eigenvalues are purely imaginary). So the constraint doesn't change any definitions; it changes the problem.

Two, I get the impression you didn't read, or perhaps didn't understand, my response. Revisit it, in particular the end. Suppose your problem's constraint, instead of eigenvalues ##\{0, 1, 4\}##, had ##\{\delta, 1, 4\}## for some small ##\delta \gt 0##

then the diagonal matrix of interest
##\mathbf D^\frac{1}{2}## would have diagonal components of ##\delta^\frac{1}{2}, 1, 2## and
for a more typical linear algebraic take, consider ##\mathbf D^\frac{1}{2}\mathbf y = \mathbf z##. Your original problem in some sense is to optimize

##\mathbf y^T \mathbf I \mathbf y## with a peculiar constraint on ##\mathbf y##, or instead to optimize
##\mathbf z^T \mathbf D^{-\frac{1}{2}}\mathbf I \mathbf D^{-\frac{1}{2}}\mathbf z = \mathbf z^T \mathbf D^{-1}\mathbf z## with the constraint that ##\big \Vert \mathbf z \big \Vert_2 = 1##, which would be in standard form...
so you'd have a standard constraint of
##\big \Vert \mathbf z \big \Vert_2^2 = 1##
i.e. ##\mathbf z## on the unit sphere, and an objective function given by
##\mathbf z^T \mathbf D^{-1}\mathbf z##
so the smallest eigenvalue in here is ##\frac{1}{4}## and as your intuition suggests you allocate everything to that eigenpair for the minimization problem.

##\mathbf D^{-1}## of course also has an eigenvalue of 1 and one of ##\frac{1}{\delta}## -- this last one may be made arbitrarily large by shrinking our positive ##\delta \to 0##, which should give you a feeling for why the original problem does not have a max.
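a sketch of that blow-up, assuming numpy -- ##\mathbf D^{-1}## here is the regularized diagonal with entries ##1/\delta, 1, 1/4##:

```python
import numpy as np

# as delta -> 0 the smallest eigenvalue of D^{-1} stays at 1/4 (the min)
# while the largest, 1/delta, is unbounded (no max)
for delta in [1e-1, 1e-3, 1e-6]:
    D_inv = np.diag([1.0/delta, 1.0, 0.25])
    evals = np.linalg.eigvalsh(D_inv)
    print(delta, evals.min(), evals.max())
```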
 
Aren't ##y_2^2+4y_3^2## (diagonal representation of ##q##) and ##x_1^2+x_2^2+x_3^2## (diagonal representation of ##r##) representing two different coordinate systems? How come one can write

##y_1^2 + y_2^2 + y_3^2 = y_1^2 + y_2^2 + y_3^2 + 1 - \big(0y_1^2 + y_2^2 +4 y_3^2\big) \quad## (as in #2)
How would one "translate" the constraint ##y_2^2+4y_3^2=1## into the original one, ##||\textbf{x}||=1##?
 

StoneTemplePython

Aren't ##y_2^2+4y_3^2## (diagonal representation of ##q##) and ##x_1^2+x_2^2+x_3^2## (diagonal representation of ##r##) representing two different coordinate systems?
I just answered this

I presume you're referring to how I wrote
##\text{min: }\big \Vert \mathbf x\big \Vert_2^2 = \text{min: }\big \Vert \mathbf y\big \Vert_2^2##
that's because
##\mathbf x = \mathbf Q \mathbf y ##
and the 2 norm is unitarily invariant in ##\mathbb C## -- or orthogonally invariant in ##\mathbb R## if you prefer. Just expand and confirm this for yourself.
I now need to see some work from you where you explicitly expand this squared two norm and show they are equivalent. Start with the definition of the two norm.


How come one can write

##y_1^2 + y_2^2 + y_3^2 = y_1^2 + y_2^2 + y_3^2 + 1 - \big(0y_1^2 + y_2^2 +4 y_3^2\big) \quad## (as in #2)
When we peel this back, you are left with really basic algebraic manipulations

##y_1^2 + y_2^2 + y_3^2 ##
##=y_1^2 + y_2^2 + y_3^2 +0 ##
##= y_1^2 + y_2^2 + y_3^2 +1 -\big(1\big)##
##= y_1^2 + y_2^2 + y_3^2 + 1 - \big(0y_1^2 + y_2^2 +4 y_3^2\big) \quad##

so long as your constraint is satisfied
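if symbolic reassurance helps, here's a sketch with sympy:

```python
import sympy as sp

y1, y2, y3 = sp.symbols('y1 y2 y3', real=True)
objective = y1**2 + y2**2 + y3**2
# adding 1 and subtracting the constraint value (which equals 1 whenever
# the constraint is satisfied) is just adding zero
rewritten = objective + 1 - (0*y1**2 + y2**2 + 4*y3**2)
print(sp.expand(rewritten))   # y1**2 - 3*y3**2 + 1
```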


How would one "translate" the constraint ##y_2^2+4y_3^2=1## into the original one, ##||\textbf{x}||=1##?
that's what I've shown, twice now, with ##\mathbf D^\frac{1}{2}## and the converted problem ideally using ##\mathbf z^T \mathbf D^{-1}\mathbf z##

but as I said in post 2:
the technical issue is that '##\mathbf D## is singular and that makes the problem a bit of a mess.'

you'd be wise to follow my prior post and start with a simpler problem that considers
##\{\delta, 1, 4\}## instead of ##\{0, 1, 4\}## as the diagonal components of ##\mathbf D##

this is a very simple problem when ##\mathbf D## is positive definite. The issue is you don't have that; you only have a positive semi-definite ##\mathbf D##, and that makes the problem more complicated. Master the simpler problem first.
 
What does "unitary invariant" mean? What is meant by ##_2## in ##\big \Vert \mathbf x\big \Vert_2^2##? How would

##\big \Vert \mathbf x\big \Vert ^2 = x_1^2+x_2^2+x_3^2+...+x_n^2##

and

##\big \Vert \mathbf y\big \Vert ^2=y_1^2+y_2^2+y_3^2+...+y_n^2##

show that ##y_2^2+4y_3^2## can be inserted into ##r(\mathbf x)=x_1^2+x_2^2+x_3^2## (or its diagonal representation, ##y_1^2+y_2^2+y_3^2##, but where these ##y_i##-coordinates refer to a different basis)?

What is ##\mathbf Q##? And ##\mathbf D^\frac{1}{2}##?
 

StoneTemplePython

What does "unitary invariant" mean?
you put it in quotes but misquoted it. What I wrote was

the 2 norm is unitarily invariant in ##\mathbb C## -- or orthogonally invariant in ##\mathbb R## if you prefer. Just expand and confirm this for yourself.
these are standard terms. I specifically offered the simpler term of 'orthogonally invariant' as an option, which I now suggest you look up and use.

What is meant by ##_2## in ##\big \Vert \mathbf x\big \Vert_2^2##? How would

##\big \Vert \mathbf x\big \Vert ^2 = x_1^2+x_2^2+x_3^2+...+x_n^2##

and

##\big \Vert \mathbf y\big \Vert ^2=y_1^2+y_2^2+y_3^2+...+y_n^2##
There are different kinds of Lp norms.

standard notation for an Lp norm is
##\big \Vert \mathbf x\big \Vert_p##
and in your problem p = 2.

##\big \Vert \mathbf x\big \Vert_2##
denotes the 2 norm
and
##\big \Vert \mathbf x\big \Vert_2^2##
denotes the squared two norm.

Even without knowing what Lp norms are, I'll point out that in post 2, I told you that your optimization function (in the minimization case) is equivalent to ##\text{min: }\big \Vert \mathbf x\big \Vert_2^2##
which implies
##\big \Vert \mathbf x\big \Vert_2^2 = x_1^2 + x_2^2 + x_3^2##
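a small sketch of a few Lp norms side by side, assuming numpy:

```python
import numpy as np

x = np.array([3.0, -4.0, 0.0])
print(np.linalg.norm(x, 1))       # L1 norm: |3| + |-4| + |0| = 7
print(np.linalg.norm(x, 2))       # L2 norm: sqrt(9 + 16) = 5
print(np.linalg.norm(x, 2)**2)    # squared two norm: 25
print(np.linalg.norm(x, np.inf))  # Linf norm: max |x_i| = 4
```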


What is ##\mathbf Q##? And ##\mathbf D^\frac{1}{2}##?
I defined ##\mathbf Q## in post 2. Do you know what it means to diagonalize a matrix?

##\mathbf x = \mathbf Q \mathbf y ##

where ##\mathbf q_i##, the ith column of ##\mathbf Q##, is the ith eigenvector and the eigenvalues are in ascending order
##\lambda_1 \leq \lambda_2 \leq \lambda_3##
##\mathbf A = \left[\begin{matrix}1 & 1 & -1\\1 & 3 & -1\\-1 & -1 & 1\end{matrix}\right] = \mathbf {Q\Sigma Q}^T##
part of the problem is you didn't show any work in your opening post and instead wrote things like
It is clear that the diagonal representation of ##q## is ##y_2^2+4y_3^2##
by the same token I believe it is clear that ##\mathbf A## may be used in a quadratic form for your constraint, i.e. I can eyeball ##\mathbf A## and see
##\mathbf x^T \mathbf A \mathbf x = x_1^2+3x_2^2+x_3^2+2x_1x_2-2x_1x_3-2x_2x_3 = 1##
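and that eyeballing is easy to verify symbolically -- a sketch with sympy:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
x = sp.Matrix([x1, x2, x3])
A = sp.Matrix([[1, 1, -1], [1, 3, -1], [-1, -1, 1]])
# expanding the 1x1 matrix x^T A x recovers the constraint polynomial
print(sp.expand((x.T * A * x)[0]))
# x1**2 + 2*x1*x2 - 2*x1*x3 + 3*x2**2 - 2*x2*x3 + x3**2
```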

I defined ##\mathbf D## a bit more subtly at the end of post two. I then defined it again, this time for the simplified problem, in post 5 as

Suppose your problem's constraint, instead of eigenvalues ##\{0, 1, 4\}## had ##\{\delta, 1, 4\}## for some small ##\delta \gt 0##

then the diagonal matrix of interest
##\mathbf D^\frac{1}{2}## would have diagonal components of ##\delta^\frac{1}{2}, 1, 2## and
do you know what it means to raise a matrix to the second power, and in particular a diagonal matrix to the second power? What about taking the square root of a diagonal matrix?
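for reference, both operations act entrywise on the diagonal of a diagonal matrix -- a quick sketch, using the hypothetical ##\delta = 0.01##:

```python
import numpy as np

delta = 0.01
D = np.diag([delta, 1.0, 4.0])
D_half = np.diag(np.sqrt(np.diag(D)))          # D^(1/2) = diag(0.1, 1, 2)
print(np.allclose(D_half @ D_half, D))         # True: (D^(1/2))^2 = D
print(np.diag(np.linalg.matrix_power(D, 2)))   # D^2 diagonal: [1e-4, 1, 16]
```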

- - - - -
This problem evidently is homework; you haven't stated much at all in terms of background knowledge or relevant equations, and I'm not seeing enough effort. The homework forums exist explicitly to flesh these things out.
 
