Create Row-Orthonormal Matrix | m > n

In summary: for ##m > n##, an exact solution ##A_{m\times n} A^T_{n\times m} = I_{m\times m}## is impossible, because ##A A^T## has rank at most ##n < m##; the thread therefore turns to approximations, ending with a QR-based construction whose product is a truncated identity plus a small ##\epsilon## perturbation.
  • #1
buupq
Hello,

I'm looking for a way to create an approximate row-orthonormal matrix with more rows (m) than columns (n); i.e., finding ##A_{m\times n}## so that ##A_{m\times n} A^T_{n\times m} = I_{m\times m}##. I used singular value decomposition (e.g., DGESVD in the MKL math library), but what I actually got was an orthonormal square eigenvector matrix.
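For reference, a minimal NumPy sketch (assuming a random test matrix; MKL's DGESVD behaves analogously) of why a full SVD returns square orthonormal factors rather than the rectangular ##A## sought here:

```python
import numpy as np

m, n = 5, 3
A = np.random.randn(m, n)

# Full SVD: U is m x m and Vt is n x n -- both square and orthonormal.
U, s, Vt = np.linalg.svd(A, full_matrices=True)
print(U.shape, s.shape, Vt.shape)        # (5, 5) (3,) (3, 3)
print(np.allclose(U @ U.T, np.eye(m)))   # True
```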

Thank you!
 
Last edited:
  • #2
buupq said:
but what I actually got was an orthonormal square eigenvector
How can an eigenvector be square (i.e., a square matrix)?

An eigenvector is a nonzero vector x such that Ax = λx for some eigenvalue λ. Here x is a vector, an n x 1 matrix.
 
  • Like
Likes buupq
  • #3
Mark44 said:
How can an eigenvector be square (i.e., a square matrix)?

An eigenvector is a nonzero vector x such that Ax = λx for some eigenvalue λ. Here x is a vector, an n x 1 matrix.

We have a set of eigenvalues, each corresponding to an eigenvector. What I mentioned was the collection of eigenvectors. If you use LAPACK or a similar math library, the output is usually a matrix of eigenvectors and a vector of eigenvalues, not just a single one, unless you request only one.
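For example, NumPy's `eig` (which wraps LAPACK) returns exactly that: a vector of eigenvalues and a square matrix whose columns are the eigenvectors. A minimal sketch:

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

w, V = np.linalg.eig(M)   # w: eigenvalue vector, V: eigenvector matrix
print(w)                  # [3. 1.]
print(V.shape)            # (2, 2) -- the "square eigenvector matrix"
```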
 
Last edited:
  • #4
There seems to be some linguistic confusion in this thread.

buupq said:
with more rows (m) than columns (n); i.e., finding ##A_{m\times n}## so that ##A_{m\times n} A^T_{n\times m} = I_{m\times m}##.

Obviously this is impossible. The ##\mathbf I_m## you are hoping will result from your product cannot exist.

Note that ##\big(\mathbf A \mathbf A^T\big)## and ##\big(\mathbf A^T \mathbf A\big)## have the same non-zero eigenvalues with the same algebraic multiplicities. If what you're asking for existed, then ##\big(\mathbf A^T \mathbf A\big)## would have eigenvalue 1 with algebraic multiplicity ##m##, which is a contradiction because its characteristic polynomial is a monic polynomial of degree ##n \lt m##.
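A quick numerical illustration of that argument, as a NumPy sketch with a random test matrix:

```python
import numpy as np

m, n = 6, 4
A = np.random.randn(m, n)

big   = np.linalg.eigvalsh(A @ A.T)   # m eigenvalues of A A^T
small = np.linalg.eigvalsh(A.T @ A)   # n eigenvalues of A^T A

# The nonzero spectra agree, and A A^T carries at least m - n zero
# eigenvalues -- so it can never be the identity when m > n.
print(np.round(big, 6))
print(np.round(small, 6))
print(np.sum(np.isclose(big, 0.0, atol=1e-9)))   # m - n here
```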

- - - -

Other approaches to proving this fact can be found in this thread
https://www.physicsforums.com/threads/invertibility-of-a-matrix.917120/

or by using the Cauchy-Binet formula.
 
Last edited:
  • Like
Likes buupq
  • #5
Yes, you are right. ##A_{m\times n} A^T_{n\times m} = I_{m\times m}## cannot hold when the number of rows exceeds the number of columns ##(m > n)##, because the rows must be linearly dependent. Please see the section on rectangular matrices in the wiki page: https://en.wikipedia.org/wiki/Orthogonal_matrix.

Yeah, my question is a bit confusing. I should have asked whether there is any way to find ##A_{m\times n}## with ##m > n## so that (approximately) $$A_{m\times n} A^T_{n\times m} \approx I_{m\times m}$$
Note: There might be some differences in notation between my field of study and mathematics: ##A_{m\times n}## is an m-by-n matrix (##m## rows and ##n## columns); ##I_{m\times m}## is the m-by-m identity matrix.
 
Last edited:
  • #6
buupq said:
Yeah, my question is a bit confusing. I should have asked whether there is any way to find ##A_{m\times n}## with ##m > n## so that (approximately) $$A_{m\times n} A^T_{n\times m} \approx I_{m\times m}$$

I can easily find

##
A_{m\times n} A^T_{n\times m} =
\begin{bmatrix}
\mathbf I_n & \mathbf 0\\
\mathbf 0 & \mathbf 0
\end{bmatrix}##

Is that a 'good' approximation? As stated, you cannot approximate the rank of ##\mathbf I_m##. What, then, are you trying to approximate? And what criterion are you using to score a given result as a 'good' or 'bad' approximation? (Typically a metric...)

In general, coming up with good approximations takes considerable sophistication. It feels like you may need to go back to the drawing board on this.
 
  • Like
Likes buupq
  • #7
The approximation you gave is the output that I got from singular value decomposition. What I want is for the product ##A_{m\times n} A^T_{n\times m}## to be as close as possible to the identity matrix ##I_m##, but with no rows or columns of all zeros. The criterion can be the Frobenius norm (the trace of ##E^T E## for the error matrix ##E##), as in this procedure for finding the nearest orthonormal matrix: http://people.csail.mit.edu/bkph/articles/Nearest_Orthonormal_Matrix.pdf.
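In code, a minimal sketch of that criterion (the function name is mine; ##\|E\|_F^2 = \operatorname{tr}(E^T E)## for ##E = A A^T - I_m##):

```python
import numpy as np

def frobenius_score(A):
    """Squared Frobenius distance between A @ A.T and the identity."""
    m = A.shape[0]
    E = A @ A.T - np.eye(m)
    return np.trace(E.T @ E)    # equals np.linalg.norm(E, 'fro') ** 2
```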
 
Last edited:
  • #8
buupq said:
The approximation you gave is the output that I got from singular value decomposition. What I want is for the product ##A_{m\times n} A^T_{n\times m}## to be as close as possible to the identity matrix ##I_m##, but with no rows or columns of all zeros. The criterion can be the Frobenius norm (the trace of ##E^T E## for the error matrix ##E##), as in this procedure for finding the nearest orthonormal matrix: http://people.csail.mit.edu/bkph/articles/Nearest_Orthonormal_Matrix.pdf.

This is a huge amount of mission creep. It's a bit like telling someone you have a Linear Program to solve and then much later mentioning that you actually need everything to be integer valued -- a very different problem.

- - - -
I have a hunch that you want something like this

##A_{m\times n} A^T_{n\times m} =
\begin{bmatrix}
\mathbf I_{n-1} & \mathbf 0\\
\mathbf 0 & \mathbf 0
\end{bmatrix} + \epsilon \mathbf {11}^T
##

for sufficiently small ##\epsilon##. Of course, you can always choose a smaller ##\epsilon## mathematically, though if doing this on a computer you don't want it to be too small (for underflow reasons). Supposing you have a tolerance for a 'good enough' solution, this should fit.

The idea is that you'd have ##n-1## mutually orthonormal vectors -- each of which is orthogonal to the ones vector. Then add ##\sqrt{\frac{\epsilon}{n}}\,\mathbf 1## to each of those vectors (and take ##\sqrt{\frac{\epsilon}{n}}\,\mathbf 1## by itself for the remaining ##m-n+1## rows).
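Spelling out the dot products behind that claim (with ##c = \sqrt{\epsilon/n}##, unit vectors ##v_i##, and ##v_i \cdot \mathbf 1 = 0##):

$$\left(v_i + c\,\mathbf 1\right)\cdot\left(v_j + c\,\mathbf 1\right) = v_i \cdot v_j + c\,(v_i + v_j)\cdot \mathbf 1 + c^2\,\mathbf 1 \cdot \mathbf 1 = \delta_{ij} + \epsilon$$

which reproduces the claimed Gram matrix entry by entry; the rows built from ##c\,\mathbf 1## alone contribute the ##\epsilon## entries in the bottom block.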
 
Last edited:
  • Like
Likes buupq
  • #9
And for the avoidance of doubt:

## \mathbf 1 = \begin{bmatrix}
1\\
1\\
\vdots\\
1
\end{bmatrix}##
 
  • Like
Likes buupq
  • #10
Thanks very much StoneTemplePython! I'll try this.
 
  • #11
Now the question remains -- how would you get this?

I'd suggest generating a random ##n \times n## matrix (technically there are workarounds, but it's fast, and there is a nice interpretation of linear independence, determinants, and geometry underlying this). Replace the first column of this matrix with the ones vector. Call this matrix ##\mathbf B##. Now run a QR factorization: ##\mathbf {QR} = \mathbf B##. Collect the ##n-1## columns of ##\mathbf Q## -- i.e., collect all columns except the leftmost column, which will be a rescaled version of the ones vector -- all other columns are mutually orthonormal and of course orthogonal to this first vector, the ones vector. Place the collection, along with ##m-n+1## more ##\mathbf 0## vectors, in a matrix -- call it ##\mathbf A^T##.
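Here is that recipe as a NumPy sketch (the helper name and the default ##\epsilon## are mine; note the ##m-n+1## zero rows needed so ##\mathbf A## ends up with ##m## rows):

```python
import numpy as np

def approx_row_orthonormal(m, n, eps=1e-3):
    """Build A (m x n) with A @ A.T = diag(I_{n-1}, 0) + eps * 11^T."""
    B = np.random.randn(n, n)
    B[:, 0] = 1.0                    # first column: the ones vector
    Q, _ = np.linalg.qr(B)           # B = QR; Q[:, 0] is the scaled ones vector

    A = np.zeros((m, n))
    A[: n - 1, :] = Q[:, 1:].T       # n-1 orthonormal rows, each _|_ ones
    A += np.sqrt(eps / n)            # add sqrt(eps/n) * 1 to every row
    return A

A = approx_row_orthonormal(6, 4)
print(np.round(A @ A.T, 4))          # ~ diag(I_3, 0, 0, 0) + 0.001 * 11^T
```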

There's probably some fine-tuning needed with indexing or scaling, but the big idea is that you have something like

##
A_{m\times n}\big(A_{m\times n}\big)^T =
\begin{bmatrix}
\mathbf I_{n-1} & \mathbf 0\\
\mathbf 0 & \mathbf 0
\end{bmatrix} ##

and

##A_{m\times n} \mathbf 1_n\mathbf 1_m^T = \big(A_{m\times n} \mathbf 1_n\big)\mathbf 1_m^T = \mathbf 0_m\mathbf 1_m^T= \mathbf 0_m \mathbf 0_m^T##

ditto for the transpose

so

$$\big(A_{m\times n} + \mathbf 1_m\mathbf 1_n^T\big)\big(A_{m\times n} + \mathbf 1_m\mathbf 1_n^T\big)^T = A_{m\times n}\big(A_{m\times n}\big)^T + \big(A_{m\times n}\mathbf 1_n\mathbf 1_m^T\big)^T + A_{m\times n}\mathbf 1_n\mathbf 1_m^T + \big(\mathbf 1_m\mathbf 1_n^T\big)\big(\mathbf 1_m\mathbf 1_n^T\big)^T$$
$$= A_{m\times n}\big(A_{m\times n}\big)^T + \mathbf 0_m \mathbf 0_m^T + \mathbf 0_m \mathbf 0_m^T + \big(\mathbf 1_m\mathbf 1_n^T\big)\big(\mathbf 1_m\mathbf 1_n^T\big)^T = A_{m\times n}\big(A_{m\times n}\big)^T + n\,\mathbf 1_m\mathbf 1_m^T$$

Now rescale the ones vector appropriately to fine-tune the resulting matrix.
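A quick numerical check of that expansion (a sketch: any ##A## whose rows each sum to zero satisfies ##A\,\mathbf 1_n = \mathbf 0_m##):

```python
import numpy as np

m, n = 6, 4
A = np.random.randn(m, n)
A -= A.mean(axis=1, keepdims=True)     # each row now sums to zero

ones_m = np.ones((m, 1))
ones_n = np.ones((n, 1))

lhs = (A + ones_m @ ones_n.T) @ (A + ones_m @ ones_n.T).T
rhs = A @ A.T + n * (ones_m @ ones_m.T)
print(np.allclose(lhs, rhs))           # True
```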
- - - -

Note: to the extent you are OK with using complex numbers (they use twice as many bits, I know, and there could be numeric-type issues, plus you'd need to be sure to use the conjugate transpose rather than just the transpose)...

then this work has actually already been done for you by the ##n \times n## discrete Fourier transform (DFT) matrix. You'd select all the columns in it that aren't the (scaled) ones vector.
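If SciPy is available, a sketch of this complex variant: `scipy.linalg.dft(n, scale='sqrtn')` returns the unitary ##n \times n## DFT matrix, whose first column is the scaled ones vector, so the remaining columns are exactly the vectors needed.

```python
import numpy as np
from scipy.linalg import dft

m, n, eps = 6, 4, 1e-3

F = dft(n, scale='sqrtn')            # unitary DFT matrix; F[:, 0] = 1/sqrt(n)
V = F[:, 1:]                         # n-1 orthonormal columns, each _|_ ones

A = np.zeros((m, n), dtype=complex)
A[: n - 1, :] = V.conj().T           # n-1 orthonormal rows
A += np.sqrt(eps / n)                # shift by the scaled ones vector

# Conjugate transpose, not plain transpose, in the complex case.
print(np.round(np.abs(A @ A.conj().T), 4))
```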
 
  • Like
Likes buupq

1. What is a row-orthonormal matrix?

A row-orthonormal matrix is a type of matrix where each row is a unit vector and the dot product of any two rows is equal to zero. This means that the rows are all perpendicular to each other and have a magnitude of 1.

2. Why is creating a row-orthonormal matrix useful?

Row-orthonormal matrices are useful in many applications, such as signal processing and data compression. They also have properties that make them easier to work with in mathematical calculations and can simplify solutions to linear equations.

3. How do you create a row-orthonormal matrix when m is greater than n?

When m is greater than n, there are more rows than columns, and an exactly row-orthonormal matrix cannot exist: the m rows live in an n-dimensional space, so ##A A^T## has rank at most ##n < m## and can never equal ##I_m##. The best you can do is approximate, for example with the QR-based construction discussed in the thread above, which makes ##A A^T## a truncated identity plus a small ##\epsilon## perturbation.

4. Can a row-orthonormal matrix be created when m is less than n?

Yes. When m is less than n, the m rows fit inside an n-dimensional space with room to spare, so m mutually orthonormal rows exist. A standard way to construct them is Gram-Schmidt orthogonalization (or, numerically, a QR factorization) applied to m linearly independent vectors of length n. The mirror-image statement holds for column-orthonormal matrices, which require the number of columns to be at most the number of rows.
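For instance, a minimal NumPy sketch of this exact wide case (random test data assumed):

```python
import numpy as np

m, n = 3, 5                       # fewer rows than columns
B = np.random.randn(n, m)         # m linearly independent columns (a.s.)
Q, _ = np.linalg.qr(B)            # Q: n x m with orthonormal columns
A = Q.T                           # m x n with orthonormal rows

print(np.allclose(A @ A.T, np.eye(m)))   # True
```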

5. Are there any restrictions on the values in a row-orthonormal matrix?

Yes, there are some restrictions. In the real case, every entry must lie between -1 and 1, since each row is a unit vector; the rows are automatically linearly independent, so the matrix has full row rank; and the dot product of any two distinct rows must be zero, so the values must be chosen carefully to satisfy this. (Complex entries are also possible if the transpose is replaced by the conjugate transpose, as noted in the thread.)
