What are the costs of searching vs computation in sparse matrix problems?


Discussion Overview

The discussion revolves around the costs associated with searching versus performing computations in the context of sparse matrix problems. Participants explore the implications of algorithmic efficiency, memory access, and data structures relevant to sparse matrices, particularly in relation to Gaussian Elimination and other algorithms.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant questions the costs of searching for elements in a sparse matrix compared to performing integer operations, noting the vagueness of the question and the dependency on specific situations.
  • Another participant asserts that searching is generally expensive due to memory access times and the impact on CPU cache, emphasizing the importance of locality in computational efficiency.
  • A different participant highlights that sparse matrices can be represented using hash tables, which can conserve both time and resources by only allocating memory for existing non-zero elements.
  • Concerns are raised about Gaussian Elimination potentially introducing more non-zero elements into a sparse matrix, complicating the efficiency of the process.
  • Participants note that the choice of algorithm for solving sparse matrix equations depends on the specific pattern of sparsity, with various methods available ranging from straightforward to more complex approaches.
  • One participant suggests that clarifying the original question could lead to more targeted and efficient solutions for matrix problems.
  • Another participant mentions that sparse matrices can be stored as linked lists, allowing for quick access to non-zero values, though preprocessing is required.

Areas of Agreement / Disagreement

Participants express uncertainty regarding the original question's clarity and agree that the discussion lacks a definitive answer. Multiple competing views on the efficiency of searching versus computation exist, and the discussion remains unresolved.

Contextual Notes

Limitations include the vagueness of the original question, the dependency on specific algorithmic contexts, and the potential for different representations of sparse matrices affecting performance.

muzak
This is likely going to be a stupid question, given that I'm not in computer science and have very little CS knowledge with regard to information storage or algorithms.

I was wondering, assuming this question even has a meaningful answer: what are the costs of scanning for something versus carrying out some integer operations such as adding, multiplying, etc. (time, algorithmic efficiency, machine cost, or whatever related measures apply)?

I know this is very contingent on exactly what I'm trying to do, so let me lay out the situation. I'm doing some sparse matrix problems: there are going to be lots of zeroes, and the sparsity doesn't really have any general pattern to it. I know Gaussian Elimination is generally considered to cost about (2/3)n^3 operations, though I don't know how that translates to computation time, which I'd expect to depend on the specifications of the computer. So what would the cost be, in terms of taxing the computer, of searching for rows with two non-zero elements as opposed to going through the matrix calculations?

I'm guessing that's very vague, or possibly not answerable because there might be no equivalence between the two operations. Let me try a simpler case: what are the machine costs of doing some integer operations as opposed to doing comparisons or searching for something?
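To make the question concrete, here's roughly the kind of search I mean, sketched in Python (purely illustrative, the function and matrix are made up):

```python
# Hypothetical sketch: scanning a matrix for rows with exactly two
# non-zero entries. A full scan costs O(n*m) comparisons, which is
# small next to the O(n^3) arithmetic of Gaussian elimination.
def rows_with_two_nonzeros(matrix):
    hits = []
    for i, row in enumerate(matrix):
        count = sum(1 for x in row if x != 0)
        if count == 2:
            hits.append(i)
    return hits

A = [
    [0, 3, 0, 5],   # two non-zeros -> row 0
    [1, 0, 0, 0],   # one non-zero
    [2, 0, 4, 0],   # two non-zeros -> row 2
]
print(rows_with_two_nonzeros(A))  # -> [0, 2]
```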
 
Searching is expensive. Jumping to a given memory location often invalidates the CPU cache, slowing the whole process down, and at each memory location at least one comparison is required. A binary search of an ordered array has this problem, for example. The CPU is orders of magnitude faster than memory access, so if it can work on data without having to do time-consuming memory operations, efficiency goes way up. This is called locality. So: it depends.

Also, I'm not sure this is what you mean by sparse in your context, but in the CS world a sparse array or matrix is often represented by a small data structure which is accessed via a hash table. Not all possible slots for every possible value actually exist; each "slot" is created on demand. In a problem with a set of 1000 items, the potential domain of those items could span a very large number of possible slots, and computers often do not have enough memory to allocate empty slots for all the non-existent items. The hash directs the memory search to just the relevant slots. This can be time-conserving as well as resource-conserving.
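A minimal sketch of that idea (the class is hypothetical, just to show the shape of it): Python's dict is a hash table, so only the non-zero slots ever exist, keyed by (row, col).

```python
# Hash-table sparse matrix: memory is allocated only for non-zero
# entries; a missing slot reads as zero.
class SparseMatrix:
    def __init__(self):
        self.slots = {}  # (row, col) -> value; slots created on demand

    def set(self, i, j, value):
        if value != 0:
            self.slots[(i, j)] = value
        else:
            self.slots.pop((i, j), None)  # storing 0 frees the slot

    def get(self, i, j):
        return self.slots.get((i, j), 0)  # absent slot reads as zero

m = SparseMatrix()
m.set(3, 7, 2.5)
print(m.get(3, 7), m.get(0, 0), len(m.slots))  # -> 2.5 0 1
```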

Is this what you are asking, or are you after time complexity for an algorithm? I'm sure this is not answering you in a way you need.
 
jim mcnamara said:
I'm not sure this is what you mean by sparse in your context, but in the CS world a sparse array or matrix is often represented by a small data structure which is accessed via a hash table.

In the context of Gaussian Elimination, there is an important issue here: the elimination process may introduce more non-zero terms into the matrix.
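Here's a tiny demonstration of that "fill-in" effect (the matrix is made up for illustration): one elimination step on an arrow-shaped sparse matrix turns previously-zero entries non-zero.

```python
# Arrow-shaped sparse matrix: non-zeros only in the first row,
# first column, and on the diagonal.
A = [
    [2.0, 1.0, 1.0, 1.0],
    [1.0, 1.0, 0.0, 0.0],
    [1.0, 0.0, 1.0, 0.0],
    [1.0, 0.0, 0.0, 1.0],
]

def nnz(M):
    return sum(1 for row in M for x in row if x != 0)

before = nnz(A)
# One step of Gaussian elimination: clear column 0 below the pivot.
for i in range(1, 4):
    f = A[i][0] / A[0][0]
    A[i] = [A[i][j] - f * A[0][j] for j in range(4)]

print(before, nnz(A))  # -> 10 13: six zeros filled in, three eliminated
```

So a matrix that starts out sparse can become much denser as elimination proceeds, which is exactly why the ordering/decomposition strategies mentioned below matter.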

There are many different "standard" algorithms for solving sparse matrix equations. Which is "best" depends on the "pattern" of sparseness in the equations. They vary from straightforward (e.g. for a banded matrix where all the non-zero elements are close to the diagonal) to very general methods which first do a symbolic (non-numerical) decomposition of the matrix to find a good data structure with a small amount of "fill-in" of the original zero terms.
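At the straightforward end of that spectrum, a tridiagonal (banded) system can be solved in O(n) time and O(n) storage with the Thomas algorithm, versus O(n^3) time and O(n^2) storage for dense elimination. A sketch (assuming a well-conditioned system with no pivoting needed):

```python
def solve_tridiagonal(lower, diag, upper, b):
    """Solve Ax = b where A has sub-diagonal `lower`, diagonal `diag`,
    and super-diagonal `upper` (lengths n-1, n, n-1)."""
    n = len(diag)
    d = list(diag)
    rhs = list(b)
    # Forward sweep: eliminate the sub-diagonal.
    for i in range(1, n):
        f = lower[i - 1] / d[i - 1]
        d[i] -= f * upper[i - 1]
        rhs[i] -= f * rhs[i - 1]
    # Back substitution.
    x = [0.0] * n
    x[-1] = rhs[-1] / d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (rhs[i] - upper[i] * x[i + 1]) / d[i]
    return x

# 2 on the diagonal, -1 off-diagonal (a classic discrete Laplacian):
x = solve_tridiagonal([-1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0],
                      [1.0, 0.0, 1.0])
print(x)  # approximately [1.0, 1.0, 1.0]
```

Note the banded structure means elimination creates no fill-in at all here, which is why it's so cheap.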

There are also iterative methods ...

I agree, the OP's question is too vague to give a good answer.
 
Better idea: instead of having us flounder around, please tell us what you actually want to do. Maybe the answer is a way to solve a lot of matrix problems very efficiently.

The right algorithm usually solves 90% of computation-speed problems.
 
I don't know if this will help, but sparse matrices can be stored as linked lists (basically, coordinates + value) so that any non-zero values in the row or column can be found pretty quickly. However, the matrix has to be preprocessed to convert it into the easily-usable form.
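Something like this, as a rough sketch (the names are made up): each non-zero is stored as a (row, col, value) triple, and a preprocessing pass builds a per-row index so the non-zeros in any row can be found quickly.

```python
from collections import defaultdict

dense = [
    [0, 0, 3],
    [4, 0, 0],
    [0, 5, 6],
]

# Preprocessing: flatten to coordinate triples, then index by row.
entries = [(i, j, v) for i, row in enumerate(dense)
           for j, v in enumerate(row) if v != 0]
by_row = defaultdict(list)
for i, j, v in entries:
    by_row[i].append((j, v))

# Non-zeros of row 2 are now one lookup away:
print(by_row[2])  # -> [(1, 5), (2, 6)]
```

The preprocessing is a one-time O(n*m) scan; after that, iterating over a row's non-zeros costs only as much as the number of non-zeros it contains.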
 
