Squared norms: difference or notational convenience

onako
Given a matrix A\in\mathbb{R}^{n\times m}, the rank-d approximation L with the same number of rows/columns as A that minimizes the Frobenius norm of the difference ||A-L|| is the matrix obtained from the singular value decomposition of A by keeping only the d dominant singular values (the rest are simply set to zero).
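For concreteness, here is a minimal numpy sketch of that truncated-SVD construction (the function name, shapes, and test matrix are just illustrative, not something from the thread):

```python
# Sketch of the best rank-d Frobenius-norm approximation via truncated SVD.
import numpy as np

def low_rank_approx(A, d):
    """Return the rank-d approximation of A obtained by zeroing all but the d largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s[d:] = 0.0                      # keep only the d dominant singular values
    return (U * s) @ Vt              # reassemble U diag(s) V^T

A = np.random.randn(6, 4)
L = low_rank_approx(A, d=2)
print(np.linalg.matrix_rank(L))      # 2
print(np.linalg.norm(A - L))         # Frobenius norm of the residual
```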

However, I often encounter minimization of an adapted norm, with various kinds of normalization applied to it, e.g.
i) ||A-K||^2
ii) \left(\frac{||A-K||}{||A||}\right)^{1/2}
and I'm not sure whether the solution L of the non-squared Frobenius-norm problem above coincides with the solutions of the modified problems i) and ii).
Isn't it the case that K should be L, but appropriately scaled for i) and/or ii)?
 
Essentially you're given a function f(K) and asked to minimize it. You're then asked to minimize f(K)^2 and f(K)/constant. All of these objectives attain their minimum at the same K, because the operations you are applying to f are strictly increasing (monotone) on the nonnegative values f takes.
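A quick numerical sketch of that argument (assuming numpy; the candidate set and helper names are made up for illustration): squaring or rescaling the objective changes the value at the minimum, but not which K attains it.

```python
# The argmin is unchanged under monotone transformations of the objective.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
candidates = [rng.standard_normal((5, 5)) for _ in range(100)]

f = lambda K: np.linalg.norm(A - K)        # plain Frobenius norm
g = lambda K: f(K) ** 2                    # squared objective
h = lambda K: f(K) / np.linalg.norm(A)     # normalized objective

best = lambda obj: min(range(len(candidates)), key=lambda i: obj(candidates[i]))
print(best(f) == best(g) == best(h))       # True: same minimizer for all three
```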
 
Thanks; I had similar reasoning. Still, I'm surprised that the literature sometimes wraps the objective in monotone transformations that make it this easy to get confused.
 