Low-Dimensional Matrix Approximation

In summary, you are trying to preserve the information in a matrix by truncating its SVD: keeping only the components that correspond to the largest singular values.
  • #1
jfy4
Hi,

Let's say that I have a 4x4 matrix, and am interested in projecting the most important information in that matrix into a 2x2 matrix. Is there an optimal projection to a lower-dimensional matrix that keeps as much of the original matrix intact as possible? Thanks.
 
  • #2
What information are you talking about, exactly? 4x4 and 2x2 matrices are rather different (they're functions on completely different spaces). If there's a particular (say, 2-dimensional) subspace of interest, then the restriction of a 4x4 matrix to the subspace may give you a 2x2 matrix, but we really can't say anything specific without more information.
 
  • #3
I think you should be more specific. What do you want it for? Do you have an example? Etc.

I can easily think of a (nonlinear) way to transform a ##4\times 4## into a ##2\times 2## such that all information is preserved, but I doubt you're looking for this.
 
  • #4
Good questions. Truth be told, I'm not entirely sure... The matrices house local configurations on a lattice, and I can't keep them all due to computer memory costs, so I want to truncate the matrices and keep as much as I can. The only way I have been doing it is with the SVD: once I perform the SVD, I rotate the matrix using V (from ##U \lambda V^\dagger##) and then keep only the rows corresponding to the largest singular values.

I'm interested in a nonlinear way to keep all the information too, though :)
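In NumPy terms, the truncation described in post #4 might look like the following minimal sketch (the 4x4 test matrix and the target rank k = 2 are illustrative stand-ins for the lattice data). By the Eckart-Young theorem, the truncated SVD is the best rank-k approximation in both the Frobenius and spectral norms, so this is in fact the optimal linear answer to the question in post #1.

[code]
import numpy as np

# Illustrative 4x4 matrix; in practice this would be the lattice data.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# Full SVD: A = U @ diag(s) @ Vh, with s sorted in descending order.
U, s, Vh = np.linalg.svd(A)

# Keep only the k largest singular values and their vectors.
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vh[:k, :]

# The k x k "core" that survives the compression is U_k^H A V_k,
# which is just diag(s[:k]).
core = U[:, :k].conj().T @ A @ Vh[:k, :].conj().T

# Eckart-Young: the Frobenius error equals the norm of the
# discarded singular values.
print(np.linalg.norm(A - A_k))        # ~ sqrt(s[2]**2 + s[3]**2)
print(np.sqrt(np.sum(s[k:] ** 2)))    # same value
[/code]

The k x k core recovered at the end is exactly the kind of 2x2 matrix the original question asks for: the restriction of A to its two dominant singular directions.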
 
  • #5
jfy4 said:
The only way I have been doing it is with the SVD: once I perform the SVD, I rotate the matrix using V (from ##U \lambda V^\dagger##) and then keep only the rows corresponding to the largest singular values.

You might like to think about that idea using the eigenvalues and eigenvectors of the matrix rather than the singular values, to see what it means "physically" for your application.

For Hermitian (or real symmetric) matrices, you can interpret it in terms of partitioning the "energy" of the system and then throwing away the "least important" components.

Of course the SVs and EVs are closely related, and for arbitrary non-Hermitian matrices the SVs might be easier to work with.
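To make the eigenvalue picture in post #5 concrete, here is a minimal sketch for a real symmetric matrix (the test matrix is again illustrative). For Hermitian matrices the singular values are the absolute values of the eigenvalues, so ranking eigenpairs by |eigenvalue| and discarding the smallest reproduces the SVD truncation while keeping the "energy" interpretation visible.

[code]
import numpy as np

# Illustrative real symmetric (Hermitian) test matrix.
rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
H = (B + B.T) / 2

# Eigendecomposition; eigh returns eigenvalues in ascending order.
evals, evecs = np.linalg.eigh(H)

# Rank the eigenpairs by |eigenvalue|; for a Hermitian matrix the
# singular values are exactly the absolute eigenvalues.
order = np.argsort(-np.abs(evals))
k = 2
idx = order[:k]
H_k = evecs[:, idx] @ np.diag(evals[idx]) @ evecs[:, idx].T

# The discarded "energy" is the norm of the dropped eigenvalues.
print(np.linalg.norm(H - H_k))                   # Frobenius error
print(np.sqrt(np.sum(evals[order[k:]] ** 2)))    # same value
[/code]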
 

What is low-dimensional matrix approximation?

Low-dimensional matrix approximation is a mathematical technique that reduces the size and complexity of a matrix by approximating it with a smaller or lower-rank matrix. This is useful for data compression and for simplifying complex calculations.

How is low-dimensional matrix approximation different from other matrix operations?

Unlike other matrix operations, which aim to manipulate or transform the original matrix, low-dimensional matrix approximation aims to create a new matrix that is a close approximation of the original. This allows for a reduction in size and complexity while still retaining important information from the original matrix.

What are the benefits of using low-dimensional matrix approximation?

Some potential benefits of low-dimensional matrix approximation include: faster computation times, reduced storage requirements, and simplification of complex data. It can also help identify patterns and relationships within the data that may not be apparent in the original matrix.

What are some common techniques used for low-dimensional matrix approximation?

Some common techniques for low-dimensional matrix approximation include principal component analysis (PCA), singular value decomposition (SVD), and truncated singular value decomposition (TSVD). These methods are closely related: TSVD simply discards the smallest singular values of a full SVD, and PCA amounts to an SVD of the mean-centered data matrix.
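As a rough illustration of how these techniques relate, here is a minimal sketch of PCA carried out via an SVD of the mean-centered data matrix (the toy data and the choice k = 2 are illustrative).

[code]
import numpy as np

# Toy data: 100 samples in 4 dimensions.
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 4)) @ rng.standard_normal((4, 4))

# PCA: center the data, then take the SVD of the centered matrix.
Xc = X - X.mean(axis=0)
U, s, Vh = np.linalg.svd(Xc, full_matrices=False)

# Project onto the top-k principal directions (rows of Vh).
k = 2
scores = Xc @ Vh[:k].T    # 100 x 2 reduced representation

# Fraction of the total variance captured by the first k components.
explained = np.sum(s[:k] ** 2) / np.sum(s ** 2)
print(explained)
[/code]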

Are there any limitations or drawbacks to using low-dimensional matrix approximation?

While low-dimensional matrix approximation can be beneficial in many cases, there are some limitations to consider. The accuracy of the approximation may vary depending on the data and the chosen technique. Additionally, some data may not be well-suited for low-dimensional approximation, and in some cases, the original matrix may need to be retained for certain calculations or analyses.
