Maximizing quantity which is a product of matrices/vectors

  • Thread starter: AcidRainLiTE
  • Tags: Product
SUMMARY

The discussion focuses on maximizing the expression w'Aw, where A is a known matrix and w is a vector, under the constraint w'w = 1. Introducing a Lagrange multiplier and differentiating leads to the equation Aw = Lw, so the maximizer is the eigenvector of A corresponding to its largest eigenvalue. The differentiation step is clarified by writing the objective function m explicitly in terms of components, which reduces the stationarity conditions to the eigenvalue equation. Understanding this derivation is useful for applying matrix calculus in optimization problems.

PREREQUISITES
  • Matrix calculus fundamentals
  • Understanding of eigenvalues and eigenvectors
  • Knowledge of Lagrange multipliers
  • Familiarity with symmetric matrices
NEXT STEPS
  • Study the application of Lagrange multipliers in optimization problems
  • Learn about eigenvalue decomposition in linear algebra
  • Explore matrix calculus techniques in depth
  • Review the properties of symmetric matrices and their implications
USEFUL FOR

Mathematicians, data scientists, and engineers involved in optimization problems, particularly those utilizing matrix algebra and eigenvalue analysis.

AcidRainLiTE
I am trying to follow the following reasoning:

Given a known matrix A, we want to find w that maximizes the quantity

w'Aw

(where w' denotes the transpose of w) subject to the constraint w'w = 1.

To do so, use a Lagrange multiplier, L:

w'Aw + L(w'w - 1)
and differentiate to obtain

Aw = Lw.

Thus, we seek the eigenvector of A with the largest eigenvalue.


I do not understand how they differentiated w'Aw + L(w'w-1) to get Aw = Lw. Can someone explain to me what is going on at that step?
 
It would probably help to write things down explicitly in terms of components,

$$ m = w'Aw + L(w'w - 1) = \sum_{ij} A_{ij} w_i w_j + L \left( \sum_i w_i^2 -1 \right).$$

This is a function of the ##n## variables ##w_i##. At an extremum, ##\partial m /\partial w_k =0##. If you actually work out this set of equations, you'll see they are the components of the eigenvalue equation that you quoted. You'll need to use the fact that ##A## can be taken to be a symmetric matrix.
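Working out the step in question (a sketch, not from the original posts): differentiating ##m## with respect to ##w_k## gives
$$ \frac{\partial m}{\partial w_k} = \sum_j (A_{kj} + A_{jk}) w_j + 2 L w_k = 2\sum_j A_{kj} w_j + 2 L w_k = 0,$$
using the symmetry ##A_{jk} = A_{kj}##. In vector form this is ##Aw = -Lw##; absorbing the sign into the multiplier recovers the quoted ##Aw = Lw##. The conclusion can also be checked numerically. The following NumPy sketch (the random seed and matrix size are arbitrary choices) builds a random symmetric A, takes the eigenvector with the largest eigenvalue, and confirms it both satisfies Aw = Lw and beats random unit vectors on w'Aw:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random symmetric matrix A (the derivation assumes A = A').
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2

# For a symmetric matrix, eigh returns real eigenvalues in ascending
# order with orthonormal eigenvectors as columns.
eigvals, eigvecs = np.linalg.eigh(A)
L = eigvals[-1]        # largest eigenvalue
w = eigvecs[:, -1]     # corresponding unit eigenvector, so w'w = 1

# w satisfies the stationarity condition Aw = Lw ...
assert np.allclose(A @ w, L * w)

# ... and w'Aw attains the largest eigenvalue ...
assert np.isclose(w @ A @ w, L)

# ... which no other unit vector exceeds (Rayleigh quotient bound).
for _ in range(1000):
    v = rng.standard_normal(4)
    v /= np.linalg.norm(v)       # random point on the unit sphere
    assert v @ A @ v <= L + 1e-12

print("max of w'Aw on the unit sphere:", L)
```

The check over random unit vectors only probes the bound, of course; the eigendecomposition itself is what guarantees that the top eigenvector is the global maximizer on the unit sphere.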
 
Both posts were helpful. Thanks.
 
