How have they calculated these pseudoinverses?

kostoglotov
Moved from a technical forum, so homework template missing.
imgur link: http://i.imgur.com/fieHb7f.png

imgur link: http://i.imgur.com/UpDPZ6K.jpg

Solution:

imgur link: http://i.imgur.com/gUHCb1u.png

I can see that ##A^+A## and ##AA^+## are matrices that project onto the row and column space respectively.

But

(i) Where does ##A^+## figure into the calculation? Those answers don't appear to be ##A## matrix-multiplied by ##A^+## or ##A^+## matrix-multiplied by ##A##.

(ii) If ##A^+## isn't used to find the last two answers, why calculate it in the first place? What purpose does it serve? I could find basis vectors for the column and row spaces by other methods and construct those last two answers accordingly.

(iii) Why are those answers given in the form that they are? Wouldn't ##A^+A## be just as good given as
$$A^+A = \begin{bmatrix}1 & 2\\2 & 4\end{bmatrix}?$$
 
Matrix ##A^+## was computed using the singular value decomposition.
You are asked how the fundamental subspaces of ##A## and of ##A^+## are related.
Concerning your questions:
(i) Check your calculations; the given matrices are exactly ##A^+A## and ##AA^+##.
(ii) This is a way to check your results. According to the theory, ##A^+A## and ##AA^+## are the orthogonal projections onto the row and column spaces of ##A## respectively (in fact, together with ##A^+AA^+ = A^+##, these conditions determine ##A^+## uniquely), so by computing these projections using other methods you can check your answer.
(iii) Your formula for ##A^+A## is wrong; how did you get it?
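As a concrete illustration of the SVD route described above: the actual ##A## is only shown in the linked images, but the follow-up below says ##A^+ = \tfrac{1}{50}A^T##, which fits the rank-one matrix ##A## with rows ##(1, 2)## and ##(3, 6)##, so the NumPy sketch below assumes that ##A##.

```python
import numpy as np

# Assumed matrix: not shown in the text (it is in the linked images), but it is
# consistent with the follow-up post's A^+ = (1/50) A^T.
A = np.array([[1.0, 2.0],
              [3.0, 6.0]])

# Pseudoinverse via the SVD: if A = U S V^T, then A^+ = V S^+ U^T,
# where S^+ inverts the nonzero singular values (A is square here, so shapes line up).
U, s, Vt = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s.max()
s_plus = np.array([1.0 / x if x > tol else 0.0 for x in s])
A_plus = Vt.T @ np.diag(s_plus) @ U.T

print(np.allclose(A_plus, A.T / 50))   # True: A^+ = (1/50) A^T for this A

# A^+ A and A A^+ are the orthogonal projections onto the row space and the
# column space of A: each is symmetric and idempotent (P^2 = P = P^T).
P_row = A_plus @ A
P_col = A @ A_plus
print(np.allclose(P_row @ P_row, P_row), np.allclose(P_row, P_row.T))   # True True
print(np.allclose(P_col @ P_col, P_col), np.allclose(P_col, P_col.T))   # True True

# The matrix from question (iii) is not idempotent, so it cannot be A^+ A;
# the orthogonal projection onto span{(1, 2)} is (1/5) * [[1, 2], [2, 4]].
B = np.array([[1.0, 2.0], [2.0, 4.0]])
print(np.allclose(B @ B, B))        # False
print(np.allclose(P_row, B / 5))    # True
```

The last two checks also bear on (iii): an orthogonal projection must satisfy ##P^2 = P = P^T##, and the proposed ##A^+A## is not idempotent, while one fifth of it is.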
 
Hawkeye18 said:
(i) Check your calculations; the given matrices are exactly ##A^+A## and ##AA^+##.
(iii) Your formula for ##A^+A## is wrong; how did you get it?

I thought ##A^+## was just 1/50 of ##A##, but it's not; it's 1/50 of ##A^T##. Even after doing the SVD calculations, somehow this misapprehension snuck in.

Just a bit of blindness on my part; MATLAB confirms it.
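For those without MATLAB at hand, the same sanity check works with NumPy's built-in pseudoinverse (again assuming the ##A## used in the sketch above):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 6.0]])            # assumed A, same as in the sketch above

A_plus = np.linalg.pinv(A)            # built-in Moore-Penrose pseudoinverse

print(np.allclose(A_plus, A.T / 50))  # True  -- it's (1/50) A^T ...
print(np.allclose(A_plus, A / 50))    # False -- ... not (1/50) A
```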
 