
Row orthonormal matrix

  1. Jun 13, 2018 #1
    Hello,

    I'm looking for a way to construct an approximately row-orthonormal matrix whose number of rows ##(m)## exceeds its number of columns ##(n)##; i.e., I want to find ##A_{m\times n}## such that ##A_{m\times n} A^T_{n\times m} = I_{m\times m}##. I tried singular value decomposition (e.g., DGESVD in the MKL math library), but what I actually got was a square orthonormal matrix of eigenvectors.

    Thank you!
     
    Last edited: Jun 13, 2018
  3. Jun 13, 2018 #2

    Mark44

    Staff: Mentor

    How can an eigenvector be square (i.e., a square matrix)?

    An eigenvector is a nonzero vector x such that Ax = λx for some eigenvalue λ. Here x is a vector, an n x 1 matrix.
     
  4. Jun 13, 2018 #3
    We have a set of eigenvalues, each corresponding to an eigenvector. What I meant was the collection of eigenvectors. If you use LAPACK or a similar math library, the output is usually a matrix of eigenvectors and a set of eigenvalues, not a single one, unless you request the output of only one.
     
    Last edited: Jun 13, 2018
  5. Jun 14, 2018 #4

    StoneTemplePython

    Science Advisor
    Gold Member

    There seems to be some linguistic confusion in this thread.

    Obviously this is impossible: the ##\mathbf I_m## you are hoping to get from your product cannot exist.

    Note that ##\big(\mathbf A \mathbf A^T\big)## and ##\big(\mathbf A^T \mathbf A\big)## have the same non-zero eigenvalues with the same algebraic multiplicities. If what you're asking for existed, then ##\big(\mathbf A^T \mathbf A\big)## would have eigenvalue ##1## with algebraic multiplicity ##m##, which is a contradiction because its characteristic polynomial is a monic polynomial of degree ##n \lt m##.

    - - - -

    Other approaches to proving this fact can be found in this thread:
    https://www.physicsforums.com/threads/invertibility-of-a-matrix.917120/

    or by using the Cauchy–Binet formula.
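    If it helps, here is a quick numerical illustration of the rank obstruction (a minimal numpy sketch of my own, not part of the proof):

    ```python
    import numpy as np

    m, n = 5, 3
    A = np.random.randn(m, n)            # any real m x n matrix with m > n

    G = A @ A.T                          # the m x m product in question
    print(np.linalg.matrix_rank(G))      # at most n = 3, so G can never be I_5

    # A A^T and A^T A share their nonzero eigenvalues...
    print(np.round(np.sort(np.linalg.eigvalsh(A.T @ A)), 4))
    # ...and A A^T just picks up m - n extra zeros
    print(np.round(np.sort(np.linalg.eigvalsh(G)), 4))
    ```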
     
    Last edited: Jun 14, 2018
  6. Jun 14, 2018 #5
    Yes, you are right: ##A_{m\times n} A^T_{n\times m} = I_{m\times m}## cannot hold when the number of rows exceeds the number of columns ##(m > n)##, because the rows must be linearly dependent. See the section on rectangular matrices on the wiki page: https://en.wikipedia.org/wiki/Orthogonal_matrix

    Yeah, my question was a bit confusing. I should have asked whether there is any way to find ##A_{m\times n}## with ##m > n## such that, approximately, $$A_{m\times n} A^T_{n\times m} \approx I_{m\times m}$$
    Note: there might be some differences in notation between my field of study and mathematics: ##A_{m\times n}## is an m-by-n matrix (##m## rows and ##n## columns), and ##I_{m\times m}## is the m-by-m identity matrix.
     
    Last edited: Jun 14, 2018
  7. Jun 14, 2018 #6

    StoneTemplePython

    Science Advisor
    Gold Member

    I can easily find

    ##
    A_{m\times n} A^T_{n\times m} =
    \begin{bmatrix}
    \mathbf I_n & \mathbf 0\\
    \mathbf 0 & \mathbf 0
    \end{bmatrix}##

    Is that a 'good' approximation? As stated, you cannot approximate the rank of ##\mathbf I_m##: the left-hand side has rank at most ##n \lt m##. What, then, are you trying to approximate? And what criterion are you using to score a given result as a 'good' or 'bad' approximation? (Typically a metric...)

    In general, coming up with good approximations takes considerable sophistication. It feels like you may need to go back to the drawing board on this.
     
  8. Jun 14, 2018 #7
    The approximation you gave is exactly the output I got from the singular value decomposition. What I am trying to achieve is a product ##A_{m\times n} A^T_{n\times m}## as close as possible to the identity matrix ##I_m##, but with no rows or columns that are entirely zero. The criterion can be the Frobenius norm of the difference (equivalently, the trace of ##(AA^T - I_m)^T(AA^T - I_m)##), as in the procedure for finding the nearest orthonormal matrix: http://people.csail.mit.edu/bkph/articles/Nearest_Orthonormal_Matrix.pdf.
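    For concreteness, the score I have in mind could be computed like this (a minimal numpy sketch; the function name is my own):

    ```python
    import numpy as np

    def orthonormality_error(A):
        """Squared Frobenius distance between A A^T and I_m (smaller is better)."""
        m = A.shape[0]
        R = A @ A.T - np.eye(m)
        return np.trace(R.T @ R)     # equals np.linalg.norm(R, 'fro') ** 2
    ```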
     
    Last edited: Jun 14, 2018
  9. Jun 14, 2018 #8

    StoneTemplePython

    Science Advisor
    Gold Member

    This is a huge amount of mission creep. It's a bit like telling someone you have a Linear Program to solve and then much later mentioning that you actually need everything to be integer valued -- a very different problem.

    - - - -
    I have a hunch that you want something like this

    ##A_{m\times n} A^T_{n\times m} =
    \begin{bmatrix}
    \mathbf I_{n-1} & \mathbf 0\\
    \mathbf 0 & \mathbf 0
    \end{bmatrix} + \epsilon \mathbf {11}^T
    ##

    for sufficiently small ##\epsilon##. Of course, you can always choose a smaller ##\epsilon## mathematically, though if doing this on a computer you don't want it to be too small (for underflow reasons). Supposing you have a tolerance for a 'good enough' solution, this should fit.

    The idea is that you'd have ##n-1## mutually orthonormal vectors, each of which is orthogonal to the ones vector; the remaining ##m-n+1## rows start out as zero vectors. Then add ##\sqrt{\frac{\epsilon}{n}}\,\mathbf 1## to each of the ##m## rows.
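    A tiny numerical check of this idea (my own numpy sketch, with ##n = 2##, ##m = 3##):

    ```python
    import numpy as np

    eps = 1e-3
    v = np.array([1.0, -1.0]) / np.sqrt(2)   # unit vector with v . (1,1) = 0

    A = np.zeros((3, 2))
    A[0] = v                                 # the single (n-1 = 1) orthonormal row
    A += np.sqrt(eps / 2)                    # add sqrt(eps/n) * ones to every row

    print(np.round(A @ A.T, 4))   # ~ blockdiag(I_1, 0) + eps * ones(3, 3)
    ```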
     
    Last edited: Jun 14, 2018
  10. Jun 14, 2018 #9

    StoneTemplePython

    Science Advisor
    Gold Member

    And for the avoidance of doubt:

    ## \mathbf 1 = \begin{bmatrix}
    1\\
    1\\
    \vdots\\
    1
    \end{bmatrix}##
     
  11. Jun 14, 2018 #10
    Thanks very much StoneTemplePython! I'll try this.
     
  12. Jun 14, 2018 #11

    StoneTemplePython

    Science Advisor
    Gold Member

    Now the question remains -- how would you get this?

    I'd suggest generating a random ##n \times n## matrix (technically there are workarounds, but it's fast, and there is a nice interpretation of linear independence, determinants, and geometry underlying this). Replace the first column of this matrix with the ones vector and call the result ##\mathbf B##. Now run a QR factorization, ##\mathbf {QR} = \mathbf B##, and collect the ##n-1## columns of ##\mathbf Q## other than the left-most one (which will be a rescaled version of the ones vector); these columns are mutually orthonormal and, of course, orthogonal to the ones vector. Place that collection, together with ##m-n+1## zero vectors, as the ##m## columns of a matrix; call it ##\mathbf A^T##. A sketch of this recipe follows below.
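    In numpy terms, the recipe might look like this (my own sketch; as noted below, indexing and scaling may need fine tuning):

    ```python
    import numpy as np

    def build_A(m, n, eps):
        """Sketch of the QR-based construction above (assumes m > n)."""
        B = np.random.randn(n, n)
        B[:, 0] = 1.0                    # replace the first column with ones
        Q, _ = np.linalg.qr(B)           # first column of Q is ones/sqrt(n)

        AT = np.zeros((n, m))
        AT[:, : n - 1] = Q[:, 1:]        # n-1 orthonormal columns, each _|_ ones
        # the remaining m-n+1 columns of A^T stay zero

        return AT.T + np.sqrt(eps / n)   # add the rescaled rank-one ones piece

    A = build_A(6, 4, 1e-3)
    print(np.round(A @ A.T, 4))          # ~ blockdiag(I_3, 0) + eps * ones(6, 6)
    ```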

    There's probably some fine tuning needed with indexing or scaling but the big idea is you have something like

    ##
    A_{m\times n}\big(A_{m\times n}\big)^T =
    \begin{bmatrix}
    \mathbf I_{n-1} & \mathbf 0\\
    \mathbf 0 & \mathbf 0
    \end{bmatrix} ##

    and

    ##A_{m\times n} \mathbf 1_n\mathbf 1_m^T = \big(A_{m\times n} \mathbf 1_n\big)\mathbf 1_m^T = \mathbf 0_m\mathbf 1_m^T= \mathbf 0_m \mathbf 0_m^T##

    ditto for the transpose

    so

    $$\big(A_{m\times n} + \mathbf 1_m\mathbf 1_n^T\big)\big(A_{m\times n} + \mathbf 1_m\mathbf 1_n^T\big)^T = A_{m\times n}\big(A_{m\times n}\big)^T + \big(A_{m\times n}\mathbf 1_n\mathbf 1_m^T\big)^T + A_{m\times n}\mathbf 1_n\mathbf 1_m^T + \big(\mathbf 1_m\mathbf 1_n^T\big)\big(\mathbf 1_m\mathbf 1_n^T\big)^T$$
    $$= A_{m\times n}\big(A_{m\times n}\big)^T + \mathbf 0_m \mathbf 0_m^T + \mathbf 0_m \mathbf 0_m^T + n\,\mathbf 1_m\mathbf 1_m^T = A_{m\times n}\big(A_{m\times n}\big)^T + n\,\mathbf 1_m\mathbf 1_m^T$$

    Now rescale the ones vector appropriately (by ##\sqrt{\frac{\epsilon}{n}}##, as above) to fine-tune the resulting matrix.


    - - -
    Note: to the extent you are OK with using complex numbers (they use twice as many bits, I know, and there could be numeric-type issues; plus you'd need to be sure to use the conjugate transpose rather than the plain transpose)...

    then this work has already been done for you by the ##n \times n## discrete Fourier transform matrix. You'd select all of its columns that aren't the (scaled) ones vector.
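    A sketch of that complex variant (my own numpy illustration; note the conjugate transpose):

    ```python
    import numpy as np

    n, m, eps = 4, 6, 1e-3

    # unitary n x n DFT matrix; its first column is the (scaled) ones vector
    F = np.fft.fft(np.eye(n)) / np.sqrt(n)

    A = np.zeros((m, n), dtype=complex)
    A[: n - 1] = F[:, 1:].T              # the n-1 columns orthogonal to ones
    A += np.sqrt(eps / n)

    # conjugate transpose, not the plain transpose, in the complex case
    P = A @ A.conj().T
    print(np.round(P.real, 4))           # ~ blockdiag(I_3, 0) + eps * ones
    ```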
     