Finding Linearly Dependent Rows in a Large Matrix

  • Thread starter: Big Data Alan
  • Tags: Linearly, Matrix
Summary
To identify linearly dependent rows in a large, sparse symmetric matrix, using singular value decomposition (SVD) is effective, as it helps find the smallest eigenvalues and corresponding vectors that indicate near-linear dependencies. The discussion highlights the challenge of handling a matrix with millions of rows, particularly in terms of computational resources, as well as the nuances of round-off errors affecting row similarity. The matrix in question represents packet exchanges between 2.4 million unique IP addresses, aiming to analyze network behavior and identify principal nodes and structural similarities. The user is utilizing Mahout's SVD program to extract eigenvectors, which can provide insights into network patterns and potential anomalies. Overall, the focus is on leveraging mathematical techniques to manage and analyze vast datasets efficiently.
Big Data Alan
I have a large real symmetric square matrix (with millions of rows/columns). How can I identify the sets of rows that are linearly dependent?

More generally, can I measure the linear independence of rows with a continuous function where, say, the function is 1.0 for a row that is linearly independent and 0.0 when it is linearly dependent? I am interested in identifying rows that are almost linearly dependent. For example, suppose that due to round-off error rows A and B are identical in all columns except one, where their entries differ by 1 part per million.
 
If your matrix with 1,000,000 rows is full, the answer is "with great difficulty", unless you have access to a world-class supercomputing facility.

If the matrix is sparse, a good way would be to find the smallest eigenvalues or singular values. The corresponding vectors give you linear combinations that are "almost" linearly dependent.
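A minimal sketch of that idea in Python/SciPy (only an illustration of the approach, not the tool actually used in this thread; the function name and the 10% weight cutoff are my own choices): find the eigenvalues of the sparse symmetric matrix closest to zero, then look at which rows carry most of the weight in the corresponding eigenvectors.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def near_dependencies(A, k=5):
    """For a sparse symmetric matrix A, find the k eigenvalues closest
    to zero and report which rows carry most of the weight in each
    corresponding eigenvector.

    If A @ v is nearly zero, the rows of A picked out by the large
    entries of v are nearly linearly dependent.
    """
    # sigma=0 switches eigsh into shift-invert mode, so ARPACK converges
    # to the eigenvalues nearest zero.  This requires a sparse
    # factorization of A, which is only practical for a very sparse
    # matrix; if A is exactly singular, use a tiny nonzero sigma instead.
    vals, vecs = eigsh(A, k=k, sigma=0, which='LM')

    results = []
    for lam, v in zip(vals, vecs.T):
        # Rows whose coefficient is at least 10% of the largest entry.
        heavy = np.flatnonzero(np.abs(v) > 0.1 * np.abs(v).max())
        results.append((lam, heavy))
    return results
```

The magnitude of such an eigenvalue, relative to the largest-magnitude eigenvalue, also gives roughly the continuous "how dependent is this combination" score asked about above: near zero means the rows involved are almost exactly dependent, larger values mean they are further from dependence.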
 
Thanks. I should have mentioned that it is a very sparse matrix. Most rows have just a handful of non-zero entries, although a few rows are exceptionally dense. I have been using Mahout's Singular Value Decomposition program (http://mahout.apache.org) to find the eigenvectors. This program rejects near-zero eigenvalues (in its "cleaning" stage), but I will check it out to see what it can tell me about the linearly dependent rows.
 
In response to the questions posted by Windowmaker (which I received notice of but cannot find within this thread): What are you doing with a matrix that big? Is it for a first linear algebra course?

I am afraid that I have strayed far from my physics education. I have a collection of nearly 2 billion packets that transited the gateways connecting our corporate network to the Internet over a few days. These involve 2.4 million unique IP addresses. So I created a 2.4M-by-2.4M matrix where the indices map to the IP addresses and the elements are the number of packets exchanged between them. Perhaps surprisingly, you can find individual eigenvectors fairly quickly, but finding all of them is not practical. The first few eigenvectors primarily involve sums and differences of the key nodes (in terms of the number of IPs they are in contact with and the number of packets exchanged).
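As an aside, here is a rough sketch of how such a matrix could be assembled with SciPy's COO format; the record layout (src_ip, dst_ip, packet_count) is hypothetical and purely for illustration.

```python
import scipy.sparse as sp

def build_packet_matrix(records):
    """Build a symmetric sparse matrix of packet counts from an iterable
    of (src_ip, dst_ip, n_packets) records (a made-up layout, only for
    illustration)."""
    index = {}                        # IP address -> row/column index
    rows, cols, counts = [], [], []
    for src, dst, n in records:
        i = index.setdefault(src, len(index))
        j = index.setdefault(dst, len(index))
        rows.append(i)
        cols.append(j)
        counts.append(n)

    m = len(index)                    # ~2.4 million in the case described here
    A = sp.coo_matrix((counts, (rows, cols)), shape=(m, m))
    # Duplicate (i, j) entries are summed on conversion; adding the
    # transpose symmetrizes the matrix so A[i, j] counts traffic in
    # either direction (assuming each record appears only once).
    A = (A + A.T).tocsr()
    return A, index
```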

The sorts of questions I hope the eigenvectors will answer are:
1) Which are the principal nodes in the network?
2) Given a node, which nodes are sending similar numbers of packets to the same other nodes? (For example, I have noticed similar network attacks emanating from multiple IPs and would like an algorithmic approach to identifying the clones/copy-cats.)
3) Which nodes share a structural similarity? (For example, given the IP address of the DNS server, can I figure out which one is the backup DNS server? See the sketch at the end of this post.)

Note that the matrix elements are all integers; my comment about round-off error was really about minor differences in packet counts due to (a) the time-limited data sample, (b) lost packets being resent, and so on.
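One way to attack questions 2 and 3, sketched here only as an assumption-laden illustration (the k, node, and top parameters are arbitrary, and this is not what Mahout itself does): treat each node's components in the leading eigenvectors as a low-dimensional embedding and rank other nodes by cosine similarity in that space, so that nodes exchanging similar volumes of traffic with similar sets of peers come out close together.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def similar_nodes(A, node, k=20, top=10):
    """Rank other nodes by similarity of their spectral embedding to `node`.

    Uses the k largest-magnitude eigenvectors of the symmetric packet
    matrix A as a k-dimensional embedding of every node, then scores
    nodes by cosine similarity to the query node.  Parameter values are
    illustrative, not tuned.
    """
    _, vecs = eigsh(A, k=k, which='LM')   # column j is the j-th eigenvector
    # Each row of `vecs` holds one node's coordinates in the k leading
    # eigenvectors; normalise so a dot product is a cosine similarity.
    emb = vecs / (np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-12)
    scores = emb @ emb[node]
    order = np.argsort(-scores)
    return [i for i in order if i != node][:top]
```

Whether this actually separates, say, a primary DNS server from its backup depends on how similar their traffic really is; it is a starting point rather than a definitive answer.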
 