Finding Linearly Dependent Rows in a Large Matrix

  • Thread starter: Big Data Alan
  • Tags: Linearly, Matrix
Big Data Alan
I have a large real symmetric square matrix (with millions of rows/columns). How can I identify the sets of rows that are linearly dependent?

More generally, can I determine the linear independence of rows with a continuous function where, say, the function is 1.0 for a linearly independent row and 0.0 for a linearly dependent one? I am interested in identifying rows that are almost linearly dependent. For example, suppose that due to round-off error rows A and B are identical in all columns except one, where their entries differ by 1 part per million.
 
If your matrix with 1,000,000 rows is full, the answer is "with great difficulty", unless you have access to a world-class supercomputing facility.

If the matrix is sparse, a good way would be to find the smallest eigenvalues or singular values. The corresponding vectors give you linear combinations that are "almost" linearly dependent.
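As an illustration of the suggestion above, here is a small sketch using NumPy (a toy 3x3 matrix rather than the real data; at the actual scale you would use an iterative sparse solver to compute only the smallest singular triplets). The left singular vector belonging to the smallest singular value gives the coefficients of a near-vanishing combination of rows, so its large entries mark the rows that are almost dependent:

```python
import numpy as np

# Hypothetical example: rows 0 and 2 are nearly identical, differing
# by 1 part per million in one entry, mimicking the near-dependence
# described in the question.
A = np.array([
    [1.0, 2.0, 3.0],
    [0.0, 1.0, 4.0],
    [1.0, 2.0, 3.0 * (1 + 1e-6)],
])

# Full SVD is fine at this size; for millions of rows you would compute
# only a few smallest singular values/vectors iteratively.
U, s, Vt = np.linalg.svd(A)

# The smallest singular value is nearly zero, signalling an
# almost-dependent set of rows.
print(s[-1])  # on the order of 1e-6

# The corresponding LEFT singular vector weights the ROWS of A;
# its largest-magnitude entries identify the rows involved.
null_combo = U[:, -1]
involved = np.argsort(-np.abs(null_combo))[:2]
print(sorted(involved))  # rows 0 and 2
```

The singular value itself also behaves like the continuous measure asked for in the question: it shrinks smoothly toward 0 as a set of rows approaches exact dependence.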
 
Thanks. I should have mentioned that it is a very sparse matrix. Most rows have just a handful of non-zero entries, although a few rows are exceptionally dense. I have been using Mahout's singular value decomposition program (http://mahout.apache.org) to find the eigenvectors. This program rejects near-zero eigenvalues in its "cleaning" stage, but I will check whether it can tell me anything about the linearly dependent rows.
 
In response to the questions posted by Windowmaker, which I received notice of but cannot find within this thread ("What are you doing with a matrix that big? Is it the first LA course?"):

I am afraid that I have strayed far from my physics education. I have a collection of nearly 2 billion packets that transited the gateways connecting our corporate network to the Internet over a few days. These involve 2.4 million unique IP addresses. So I created a 2.4M by 2.4M matrix where the indices map to the IP addresses and the elements are the number of packets exchanged between them. Perhaps surprisingly, you can find individual eigenvectors fairly quickly, but finding all of them is not practical. The first few eigenvectors primarily involve sums and differences of the key nodes (in terms of the number of IPs they are in contact with and the number of packets exchanged).
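For concreteness, a sketch of how such a matrix might be assembled with SciPy (the packet tuples, the index mapping, and the size n = 4 are made-up toy values; the real matrix would be built the same way from the 2.4M-IP packet log):

```python
from scipy.sparse import coo_matrix

# Hypothetical packet log: (source index, destination index, packet count).
# In the real data the indices would come from mapping each unique IP
# address to 0..N-1.
packets = [(0, 1, 120), (1, 0, 98), (0, 2, 5), (2, 3, 40)]

rows, cols, counts = zip(*packets)
n = 4  # number of unique IPs in this toy example

# Symmetrize so that entry (i, j) holds the total packets exchanged
# between IPs i and j, matching the symmetric matrix in the post.
M = coo_matrix((counts, (rows, cols)), shape=(n, n)).tocsr()
M = M + M.T

print(M[0, 1])  # 218 packets exchanged between IPs 0 and 1
```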

The sorts of questions I hope the eigenvectors will answer are:
1) Which are the principal nodes in the network?
2) Given a node, which nodes are sending similar numbers of packets to the same other nodes? (For example, I have noticed similar network attacks emanating from multiple IPs and would like an algorithmic approach to identifying the clones/copy-cats.)
3) Which nodes share a structural similarity? (For example, given the IP address of the DNS server can I figure out which one is the backup DNS server?)
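Questions 2 and 3 can also be approached without eigenvectors, by comparing rows directly: the cosine similarity of two rows is near 1 when two nodes exchange similar packet counts with the same peers. A minimal sketch on a made-up 4-node symmetric matrix (rows 1 and 2 play the hypothetical clones):

```python
import numpy as np

# Toy symmetric packet-count matrix; nodes 1 and 2 both talk mainly
# to node 0 and a little to node 3, with similar counts.
M = np.array([
    [0, 10,  9, 0],
    [10, 0,  0, 5],
    [9,  0,  0, 5],
    [0,  5,  5, 0],
], dtype=float)

def row_cosine(M, i, j):
    # Cosine similarity between rows i and j: 1.0 means the two nodes
    # contact the same peers in the same proportions.
    a, b = M[i], M[j]
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(round(row_cosine(M, 1, 2), 3))  # close to 1: structural twins
print(round(row_cosine(M, 0, 1), 3))  # 0.0: no peers in common
```

For the sparse 2.4M-row case, the same quantity can be computed against one query row at a time, so a full pairwise comparison is never materialized.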

Note that the matrix elements are all integers; my comment about round-off error was really about minor differences in packet counts due to a) the time-limited data sample, b) lost packets being resent, ...
 