[Help] Consulting for new code

  • Thread starter: Raghnar
  • Tags: Code
SUMMARY

The discussion centers on optimizing the diagonalization of a large sparse matrix using C++ and the GNU Scientific Library (GSL). The user seeks to leverage multi-processor capabilities for improved performance but faces limitations with GSL's single-threaded routines. Suggestions include using parallel processing techniques, exploring libraries like POOMA and Blitz++, and considering MATLAB for matrix operations. The user emphasizes the need for an efficient library to handle the expected computational load, particularly for diagonalizing large matrices.

PREREQUISITES
  • Understanding of matrix diagonalization techniques
  • Familiarity with C++ programming and memory management
  • Knowledge of parallel processing concepts
  • Experience with scientific computing libraries such as GSL
NEXT STEPS
  • Research parallel processing techniques in C++ for matrix operations
  • Explore the use of POOMA and Blitz++ for efficient matrix diagonalization
  • Investigate MATLAB's capabilities for handling large sparse matrices
  • Learn about the Lanczos algorithm and its application to sparse matrices
USEFUL FOR

Researchers, computational scientists, and software developers focused on numerical methods and matrix computations, particularly those working with large-scale scientific problems.

Raghnar
I have to rewrite an old code in order to generalize it.
The code basically builds a huge 4-block matrix with a lot of zeros that needs to be diagonalized (bigger is better, since a larger matrix means fewer approximations, so when running in full glory it will be on the order of the 8 GB of memory that I have).
Later I need to play a little with the eigenvalues and eigenvectors.

I've thought about using C++ as the language, so as to have a well-optimized language that allocates memory dynamically, has classes, can use GSL, etc.

My sysadmin agrees with me and says it's a good idea, but:
1- there's no way to use multi-processor calculations, and
2- there's no way to use dynamically allocated memory in a smart way so as to save memory,
unless I want to write my own diagonalization routine.

Do you agree with that? Is there some language or platform in which I can develop a smarter program that saves memory and uses the full potential of these modern multi-core processors?

If you have any suggestions of any kind, please post! ;)
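
For reference, a minimal sketch of what the dense GSL route looks like, assuming the big matrix is real and symmetric; the size and the fill-in below are placeholders, not the real matrix. Note that gsl_eigen_symmv is a dense O(N^3) solver and needs both the matrix and the eigenvector matrix in RAM (about 16*N^2 bytes in double precision), so 8 GB caps N at very roughly 20,000.

Code:
// Minimal sketch, assuming the matrix is real and symmetric; the size n and
// the diagonal fill-in below are placeholders, not the real Hamiltonian.
// Build: g++ diag.cpp -lgsl -lgslcblas -lm
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_eigen.h>
#include <cstdio>

int main() {
    const size_t n = 2000;                        // placeholder size
    gsl_matrix *A    = gsl_matrix_calloc(n, n);   // the symmetric matrix
    gsl_vector *eval = gsl_vector_alloc(n);       // eigenvalues
    gsl_matrix *evec = gsl_matrix_alloc(n, n);    // eigenvectors (columns)

    // Fill A here; a toy diagonal just so the example runs.
    for (size_t i = 0; i < n; ++i) gsl_matrix_set(A, i, i, (double)i);

    gsl_eigen_symmv_workspace *w = gsl_eigen_symmv_alloc(n);
    gsl_eigen_symmv(A, eval, evec, w);            // note: overwrites parts of A
    gsl_eigen_symmv_sort(eval, evec, GSL_EIGEN_SORT_VAL_ASC);

    printf("lowest eigenvalue: %g\n", gsl_vector_get(eval, 0));

    gsl_eigen_symmv_free(w);
    gsl_matrix_free(evec);
    gsl_vector_free(eval);
    gsl_matrix_free(A);
    return 0;
}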
 
I'm not sure I follow-- I'm sure you could re-write your program to use shared memory and take advantage of the multiple processors by forking your code to as many segments as needed (one per processor, plus a parent, perhaps?). But it basically means doing your calculations in parallel.
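
A rough sketch of that fork-plus-shared-memory pattern, assuming a POSIX system; the sizes and the per-element work are placeholders, and in a real program each worker would handle an independent chunk of the actual calculation.

Code:
// Rough sketch of fork + shared memory on a POSIX system. The per-element
// work is a placeholder; each worker writes its results into the shared buffer.
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cmath>
#include <cstdio>

int main() {
    const long N = 1 << 20;       // number of result slots (placeholder)
    const int  nworkers = 4;      // e.g. one per core on a quad-core

    // Anonymous shared mapping, visible to the parent and every child.
    double *result = static_cast<double *>(
        mmap(nullptr, N * sizeof(double), PROT_READ | PROT_WRITE,
             MAP_SHARED | MAP_ANONYMOUS, -1, 0));
    if (result == MAP_FAILED) { perror("mmap"); return 1; }

    for (int w = 0; w < nworkers; ++w) {
        pid_t pid = fork();
        if (pid == 0) {           // child: fill its own slice [lo, hi)
            long lo = w * N / nworkers, hi = (w + 1) * N / nworkers;
            for (long i = lo; i < hi; ++i)
                result[i] = std::sin(i * 1e-6);   // placeholder work
            _exit(0);
        }
    }
    for (int w = 0; w < nworkers; ++w) wait(nullptr);   // join all children

    printf("result[0] = %g, result[N-1] = %g\n", result[0], result[N - 1]);
    munmap(result, N * sizeof(double));
    return 0;
}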

If you're depending on NOT re-writing the part of the code that performs the calculation, then yes, you're stuck. Having more processors only helps when you're running multiple processes at once. In theory, I suppose if you built complex functions into your processor, then yes, it's possible that the processor could recognize them several steps before they needed to be executed (assuming that the input data didn't change) and farm them out to other processors. But I'm not enough of a hardware guru to know if that's just plain ludicrous or if it's actually been implemented already. It strikes me as the sort of thing that might exist for graphics processing, but I have no idea. Anyway, it still doesn't help you without either switching platforms (and recompiling) or re-coding to make use of the more advanced features.

Anyway, you need to supply more information. What are the specs of the machine you're running on? What aspects of the program are you going to re-write, and which aspects are you NOT going to re-write? What percentage of the work is done in the code that you will NOT re-write? (I.e., run it through a profiler and see where you're spending your processing time.)

DaveE
 
I'm sorry if I didn't make it clear; I'll explain again, following your questions:

1- My personal machine is a quad-core i5 with 8 GB of RAM. If I can take advantage of multi-processing, maybe I will run it on a dozen nodes of a supercomputer if needed (depending on the time: obviously, if it can run overnight on my PC, I don't necessarily need a cluster).

2- I want to rewrite the entire code, from top to bottom. The coding skills of the guy who lent me that program were hellish: he used a very general physical approach, which is marvellous, but the code is written specifically for the valence levels of Sn120 and so is about as useful as an airplane on railways.
I will practically rewrite all of the code, but in the third millennium, with scientific libraries in plentiful supply, I don't want to waste precious time rewriting diagonalization or zero-finding routines. The common math parts are the only thing I don't want to rewrite; the rest is up to me.

3- I don't know for sure, but I expect at least 90% of the computing time will go into diagonalizing this giant matrix (does anyone know how long it takes to diagonalize a matrix this large on a modern processor? Seconds? Minutes? Hours?), so it will be very important to choose the right library, one that is carefully optimized and able to treat the problem. The library I know best, and that seems best suited to me, is GSL, but my sysadmin says its routines only use one processor.
If that is the case, could I make it multi-processor capable by forking the program so that it diagonalizes N matrices simultaneously? This would use some more memory, but maybe it's the best option for saving computing time...
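
If GSL's eigensolvers really are single-threaded, one low-effort way to use all four cores is roughly what you describe: run several independent diagonalizations at once, one per core. A sketch using OpenMP, assuming each block fits in memory alongside the others and that each thread owns its own matrices and GSL workspace (GSL keeps solver state in the workspace object, but do verify thread-safety for your GSL version).

Code:
// Sketch only: the block count, block size and the filling of each block are
// placeholders. Every thread allocates its own matrices and GSL workspace.
// Build: g++ -fopenmp blocks.cpp -lgsl -lgslcblas -lm
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_eigen.h>
#include <cstdio>

int main() {
    const int nblocks = 4;        // e.g. the 4 blocks, one per core
    const size_t n = 1000;        // placeholder block size

    #pragma omp parallel for
    for (int b = 0; b < nblocks; ++b) {
        gsl_matrix *A    = gsl_matrix_calloc(n, n);
        gsl_vector *eval = gsl_vector_alloc(n);
        gsl_matrix *evec = gsl_matrix_alloc(n, n);
        gsl_eigen_symmv_workspace *w = gsl_eigen_symmv_alloc(n);

        // Fill block b of the matrix here; a toy diagonal so the example runs.
        for (size_t i = 0; i < n; ++i) gsl_matrix_set(A, i, i, (double)(i + b));

        gsl_eigen_symmv(A, eval, evec, w);
        gsl_eigen_symmv_sort(eval, evec, GSL_EIGEN_SORT_VAL_ASC);
        printf("block %d: lowest eigenvalue %g\n", b, gsl_vector_get(eval, 0));

        gsl_eigen_symmv_free(w);
        gsl_matrix_free(evec);
        gsl_vector_free(eval);
        gsl_matrix_free(A);
    }
    return 0;
}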
 
If you need to diagonalize very large matrices with lots of zeros, implement sparse matrices and use the Lanczos algorithm to diagonalize them. Check out POOMA or Blitz++.
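
To make that concrete: Lanczos only ever touches the matrix through matrix-vector products, so a sparse storage format is all it needs. Below is a minimal sketch, assuming a real symmetric matrix in compressed sparse row (CSR) form and plain Lanczos without reorthogonalization; that is fine as an illustration, but for production work use a mature package (e.g. ARPACK) or add reorthogonalization. The toy matrix and sizes are placeholders.

Code:
// Sketch only, assuming a real symmetric matrix in CSR form.
// Build: g++ lanczos.cpp -lgsl -lgslcblas -lm
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_eigen.h>
#include <vector>
#include <cmath>
#include <cstdio>

struct Csr {                        // sparse symmetric matrix, CSR storage
    size_t n;
    std::vector<size_t> rowptr;     // n + 1 entries
    std::vector<size_t> col;        // one entry per nonzero
    std::vector<double> val;        // one entry per nonzero
};

// y = A * x: the only operation Lanczos ever needs from the matrix.
static void matvec(const Csr &A, const std::vector<double> &x, std::vector<double> &y) {
    for (size_t i = 0; i < A.n; ++i) {
        double s = 0.0;
        for (size_t k = A.rowptr[i]; k < A.rowptr[i + 1]; ++k)
            s += A.val[k] * x[A.col[k]];
        y[i] = s;
    }
}

int main() {
    // Toy matrix: the n x n tridiagonal (2, -1) matrix, whose spectrum lies in (0, 4).
    const size_t n = 10000;
    Csr A; A.n = n; A.rowptr.push_back(0);
    for (size_t i = 0; i < n; ++i) {
        if (i > 0)     { A.col.push_back(i - 1); A.val.push_back(-1.0); }
        A.col.push_back(i); A.val.push_back(2.0);
        if (i + 1 < n) { A.col.push_back(i + 1); A.val.push_back(-1.0); }
        A.rowptr.push_back(A.col.size());
    }

    // m Lanczos steps build a small m x m tridiagonal T whose extreme
    // eigenvalues approximate those of A.
    const int m = 100;
    std::vector<double> alpha, beta;
    std::vector<double> v(n), v_prev(n, 0.0), w(n);
    double nrm = 0.0;
    for (size_t i = 0; i < n; ++i) { v[i] = 1.0 + (double)(i % 7); nrm += v[i] * v[i]; }
    nrm = std::sqrt(nrm);
    for (size_t i = 0; i < n; ++i) v[i] /= nrm;   // normalized start vector
    double beta_prev = 0.0;
    for (int j = 0; j < m; ++j) {
        matvec(A, v, w);
        double a = 0.0;
        for (size_t i = 0; i < n; ++i) a += v[i] * w[i];
        alpha.push_back(a);
        for (size_t i = 0; i < n; ++i) w[i] -= a * v[i] + beta_prev * v_prev[i];
        double b = 0.0;
        for (size_t i = 0; i < n; ++i) b += w[i] * w[i];
        b = std::sqrt(b);
        if (b < 1e-12) break;                     // invariant subspace reached
        if (j + 1 < m) beta.push_back(b);
        for (size_t i = 0; i < n; ++i) { v_prev[i] = v[i]; v[i] = w[i] / b; }
        beta_prev = b;
    }

    // Diagonalize the small T with GSL's dense symmetric solver.
    const size_t k = alpha.size();
    gsl_matrix *T = gsl_matrix_calloc(k, k);
    for (size_t j = 0; j < k; ++j) gsl_matrix_set(T, j, j, alpha[j]);
    for (size_t j = 0; j + 1 < k; ++j) {
        gsl_matrix_set(T, j, j + 1, beta[j]);
        gsl_matrix_set(T, j + 1, j, beta[j]);
    }
    gsl_vector *ev = gsl_vector_alloc(k);
    gsl_eigen_symm_workspace *ws = gsl_eigen_symm_alloc(k);
    gsl_eigen_symm(T, ev, ws);
    double lo = gsl_vector_get(ev, 0), hi = lo;
    for (size_t j = 1; j < k; ++j) {
        double e = gsl_vector_get(ev, j);
        if (e < lo) lo = e;
        if (e > hi) hi = e;
    }
    printf("Lanczos estimates of extreme eigenvalues: %g and %g\n", lo, hi);

    gsl_eigen_symm_free(ws);
    gsl_vector_free(ev);
    gsl_matrix_free(T);
    return 0;
}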
 
I've spent the morning looking for sparse-matrix diagonalization codes, and came up with almost nothing :(
Maybe MATLAB would work, but I don't know it...

Anyway, maybe I can use Lanczos on a dense matrix representation and it will work just fine.
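
A quick back-of-the-envelope check on that idea, assuming double precision: a dense N x N matrix costs 8*N^2 bytes, while CSR sparse storage costs roughly 12 bytes per stored nonzero, so whether a dense representation "works just fine" is mostly a question of whether the full matrix fits in the 8 GB of RAM. A tiny hypothetical calculation:

Code:
// Hypothetical sizes; adjust N to the actual matrix dimension.
#include <cstdio>

int main() {
    const double budget = 8.0 * (1ULL << 30);   // 8 GiB of RAM, in bytes
    for (long N : {10000L, 20000L, 30000L, 50000L}) {
        double dense = 8.0 * N * N;             // one dense double-precision matrix
        printf("N = %6ld: dense matrix needs %6.1f GiB -> %s\n",
               N, dense / (1ULL << 30),
               dense < budget ? "fits in 8 GiB" : "does not fit in 8 GiB");
    }
    return 0;
}

So dense Lanczos is fine up to N of a few tens of thousands (time-wise it only needs matrix-vector products either way); beyond that, sparse storage is the only way to keep the matrix in memory.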
 
