1. Mar 23, 2014

### zmalone

From the attached image problem:
When deriving the third term in the Lagrangian:
When taking the gradient of the third term in the Lagrangian, $\lambda_2 (w^T \Sigma w - \sigma_{\rho}^2)$, with respect to $w$: are $w^T$ and $w$ treated like a $w^2$ to arrive at the gradient, or am I oversimplifying and it just happens to work out on certain problems like this?

(Here $\Sigma$ is an $n \times n$ symmetric covariance matrix and $w$ is an $n \times 1$ vector.)

#### Attached Files:

• LinAlgQuestion1.jpg
2. Mar 24, 2014

### Ray Vickson

The best way of avoiding errors, at least when you are starting, is to write it out in full:
$$w^T \Sigma w = \sum_{i=1}^n \sum_{j=1}^n \sigma_{ij} w_i w_j = \sum_{i=1}^n \sigma_{ii} w_i^2 + 2 \sum_{i < j} \sigma_{ij} w_i w_j$$
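Differentiating that expansion with respect to $w_k$ gives $2\sum_j \sigma_{kj} w_j$, i.e. the gradient is $2\Sigma w$ when $\Sigma$ is symmetric. A quick numerical sketch (not from the thread; the matrix values and function names are made up for illustration) checks this against central finite differences:

```python
# Sketch: verify that grad(w^T Sigma w) = 2 * Sigma * w for symmetric Sigma.
# So the lambda_2 term contributes 2*lambda_2*Sigma*w to the Lagrangian's gradient.

def quad_form(Sigma, w):
    """f(w) = w^T Sigma w, written out as the double sum."""
    n = len(w)
    return sum(Sigma[i][j] * w[i] * w[j] for i in range(n) for j in range(n))

def analytic_grad(Sigma, w):
    """2 * Sigma w (valid because Sigma is symmetric)."""
    n = len(w)
    return [2.0 * sum(Sigma[i][j] * w[j] for j in range(n)) for i in range(n)]

def numeric_grad(Sigma, w, h=1e-6):
    """Central finite differences on f, one coordinate at a time."""
    grad = []
    for k in range(len(w)):
        wp = list(w); wp[k] += h
        wm = list(w); wm[k] -= h
        grad.append((quad_form(Sigma, wp) - quad_form(Sigma, wm)) / (2.0 * h))
    return grad

# Example: a small symmetric "covariance" matrix (made-up numbers).
Sigma = [[2.0, 0.5], [0.5, 1.0]]
w = [0.3, -0.7]
print(analytic_grad(Sigma, w))
print(numeric_grad(Sigma, w))
```

The two printed vectors agree to roughly the step size squared, which is the point of the OP's question: the quadratic form really does differentiate like a $w^2$, but only because the diagonal terms contribute $2\sigma_{ii} w_i$ and each symmetric off-diagonal pair contributes twice.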

BTW: you don't need all those '[ itex]' tags: just enclose the whole expression between '[ itex]' and ' [/itex]' (although I prefer an opening '# #' (no space) followed by a closing '# #' (again, no space). Compare this: $\lambda_2 (w^T \Sigma w - \sigma_{\rho}^2)$ with what you wrote. (Press the quote button, as though you were composing a reply; that will show you the code. You can then just abandon the reply and not post it.)

Last edited: Mar 24, 2014