
[Help] Consulting for new code

  1. Mar 12, 2010 #1
    I have to rewrite an old code in order to generalize it.
    The code builds a huge 4-block matrix with a lot of zeros that needs to be diagonalized (bigger means fewer approximations, so when run in full glory it will be on the order of the 8 GB of memory that I have).
    Afterwards I need to work a little with the eigenvalues and eigenvectors.

    I've thought about using C++ as the language, so that I have an optimized language that can dynamically allocate memory, has classes, can use GSL, etc.
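
    For concreteness, the call I have in mind is GSL's dense symmetric eigensolver; a minimal sketch, with the matrix dimension n as a placeholder (note that gsl_eigen_symmv overwrites the input matrix):

        #include <gsl/gsl_matrix.h>
        #include <gsl/gsl_eigen.h>

        /* Minimal sketch: dense symmetric eigen-decomposition with GSL.
           A is destroyed in the process; eval/evec receive the results. */
        void diagonalize(gsl_matrix *A, gsl_vector *eval, gsl_matrix *evec, size_t n)
        {
            gsl_eigen_symmv_workspace *w = gsl_eigen_symmv_alloc(n);
            gsl_eigen_symmv(A, eval, evec, w);      /* eigenvalues and eigenvectors */
            gsl_eigen_symmv_free(w);
            gsl_eigen_symmv_sort(eval, evec, GSL_EIGEN_SORT_VAL_ASC);
        }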

    My sysadmin agrees with me and says it is a good idea, but:
    1- there's no way to use multi-processor calculations
    2- there's no way to use dynamically allocated memory in a smart way in order to save memory
    unless I want to write my own diagonalization routine.

    Do you agree with that? Is there some language or platform where I can develop a smarter program that saves memory and uses the full potential of these modern multi-core processors?

    If you have any suggestion of any kind please post! ;)
     
  3. Mar 12, 2010 #2
    I'm not sure I follow. I'm sure you could re-write your program to use shared memory and take advantage of the multiple processors by forking your code into as many segments as needed (one per processor, plus a parent, perhaps?). But that basically means doing your calculations in parallel.
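
    Just to illustrate the idea, a bare-bones POSIX/Linux sketch (placeholder problem size, no error handling):

        #include <sys/mman.h>
        #include <sys/wait.h>
        #include <unistd.h>

        int main()
        {
            const long   ncpu = sysconf(_SC_NPROCESSORS_ONLN);   /* number of cores */
            const size_t n    = 1000;                            /* problem size (placeholder) */

            /* anonymous shared memory, visible to the parent and every child */
            double *result = (double *) mmap(NULL, n * sizeof(double),
                                             PROT_READ | PROT_WRITE,
                                             MAP_SHARED | MAP_ANONYMOUS, -1, 0);

            for (long p = 0; p < ncpu; ++p) {
                if (fork() == 0) {                    /* child number p */
                    size_t lo = p * n / ncpu;         /* this child's slice ... */
                    size_t hi = (p + 1) * n / ncpu;
                    for (size_t i = lo; i < hi; ++i)
                        result[i] = 0.0;              /* ... of the actual work */
                    _exit(0);
                }
            }
            for (long p = 0; p < ncpu; ++p)
                wait(NULL);                           /* parent waits for all children */
            munmap(result, n * sizeof(double));
            return 0;
        }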

    If you're depending on NOT re-writing the part of the code that performs the calculation, then yes, you're stuck. Having more processors only helps when you're running multiple processes at once. In theory, I suppose if you built in complex functions to your processor, then yes, it's possible that the processor could recognize them several steps before they needed to be executed (assuming that the input data didn't change), and farm them out to other processors. But I'm not enough of a hardware guru to know if that's just plain ludicrous or if it's actually been implemented already. Strikes me as the sort of thing that might exist for graphics processing, but I have no idea. Anyway, it still doesn't help you without either switching platforms (and recompiling) or re-coding to make use of the more advanced features.

    Anyway, you need to supply more information. What are the specs of the machine you're running on? What aspects of the program are you going to re-write, and which aspects are you NOT going to re-write? What percentage of the work is done in the code that you will NOT re-write (i.e., run it through a profiler and see where you're spending your processing time)?

    DaveE
     
  4. Mar 12, 2010 #3
    I'm sorry if I didn't make it clear; I'll explain again, following your questions:

    1- My personal machine is a quad-core i5 with 8 GB of RAM. If I can take advantage of multi-processing, maybe I will run it on a dozen nodes of a supercomputer if needed (depending on the time: obviously, if it can run overnight on my PC, I don't necessarily need a cluster).

    2- I want to rewrite the entire code, from top to bottom. The coding skills of the guy who lent me that program were hellish: he used a very general physical approach that is marvellous, but the code is written specifically for the valence levels of Sn120 and so is about as useful as an airplane on railways.
    I'll practically have to rewrite all the code, but in the third millennium, with plenty of scientific libraries around, I don't want to waste precious time re-writing diagonalization or zero-finding routines. The only parts I don't want to rewrite are the common math routines; the rest is up to me.

    3- I don't know for sure, but I expect at least 90% of the computing time will go into diagonalizing this giant matrix (does anyone know how long it takes to diagonalize a matrix this large on a modern processor? seconds? minutes? hours?), so it will be very important to choose the right library, one that is carefully optimized and able to handle the problem. The library I know best and that seems best suited is GSL, but my sysadmin says its routines use only one processor.
    If that's the case, could I make it multi-processor capable by forking the program so that it diagonalizes N matrices simultaneously, roughly as sketched below? That would use some more memory, but maybe it's the best option to save computing time...
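
    To be concrete, this is roughly what I mean, assuming the blocks are independent of each other (a hypothetical sketch using OpenMP threads instead of fork, with one GSL workspace per block):

        #include <gsl/gsl_eigen.h>

        /* Hypothetical sketch: diagonalize nblocks independent blocks in parallel,
           one private GSL workspace per block so the threads share no state. */
        void diagonalize_blocks(gsl_matrix *block[], gsl_vector *eval[],
                                gsl_matrix *evec[], int nblocks, size_t n)
        {
            #pragma omp parallel for
            for (int b = 0; b < nblocks; ++b) {
                gsl_eigen_symmv_workspace *w = gsl_eigen_symmv_alloc(n);
                gsl_eigen_symmv(block[b], eval[b], evec[b], w);
                gsl_eigen_symmv_free(w);
            }
        }

    (Compiled with -fopenmp in gcc, the loop iterations get spread over the cores; each block still takes as long as a single-threaded diagonalization of that block.)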
     
  5. Mar 13, 2010 #4
    If you need to diagonalize a very large matrix with lots of zeros, implement sparse matrices and use the Lanczos algorithm to diagonalize them. Check out POOMA or Blitz++.
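
    The core of Lanczos only ever touches the matrix through a matrix-vector product, which is exactly why sparse storage pays off. A bare-bones sketch of the iteration (no reorthogonalization; matvec is a placeholder for whatever storage scheme you end up using):

        #include <vector>
        #include <cmath>

        typedef std::vector<double> Vec;

        /* Placeholder: y = A*x using your own (ideally sparse) matrix storage. */
        void matvec(const Vec &x, Vec &y);

        /* m Lanczos steps: fills alpha (diagonal) and beta (off-diagonal) of the
           small tridiagonal matrix whose eigenvalues approximate the extreme
           eigenvalues of A.  n is the dimension of A. */
        void lanczos(int n, int m, Vec &alpha, Vec &beta)
        {
            Vec v(n, 0.0), v_prev(n, 0.0), w(n);
            v[0] = 1.0;                               /* starting vector */
            double beta_prev = 0.0;
            alpha.assign(m, 0.0);
            beta.assign(m, 0.0);

            for (int j = 0; j < m; ++j) {
                matvec(v, w);                         /* w = A v_j */
                double a = 0.0;
                for (int i = 0; i < n; ++i) a += v[i] * w[i];
                alpha[j] = a;                         /* alpha_j = v_j . A v_j */
                for (int i = 0; i < n; ++i)
                    w[i] -= a * v[i] + beta_prev * v_prev[i];
                double b = 0.0;
                for (int i = 0; i < n; ++i) b += w[i] * w[i];
                b = std::sqrt(b);
                beta[j] = b;
                if (b == 0.0) break;                  /* invariant subspace found */
                for (int i = 0; i < n; ++i) {         /* v_{j+1} = w / beta_j */
                    v_prev[i] = v[i];
                    v[i] = w[i] / b;
                }
                beta_prev = b;
            }
        }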
     
  6. Mar 16, 2010 #5
    I've spent the morning looking for sparse-matrix diagonalization codes, and came up with almost nothing :(
    Maybe MATLAB would work, but I don't know it...

    Anyway, maybe I can just use Lanczos on a dense matrix representation and it will work just fine.
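
    If the dense representation does turn out to be too big for 8 GB, a minimal compressed-sparse-row (CSR) format for the Lanczos matvec is not much code either; a sketch, with the struct and names just made up for illustration:

        #include <vector>

        struct CsrMatrix {
            int n;                          /* dimension */
            std::vector<double> val;        /* nonzero values, row by row */
            std::vector<int>    col;        /* column index of each value */
            std::vector<int>    row_ptr;    /* start of each row in val (size n+1) */
        };

        /* y = A*x, touching only the stored nonzeros */
        void csr_matvec(const CsrMatrix &A, const std::vector<double> &x,
                        std::vector<double> &y)
        {
            for (int i = 0; i < A.n; ++i) {
                double s = 0.0;
                for (int k = A.row_ptr[i]; k < A.row_ptr[i + 1]; ++k)
                    s += A.val[k] * x[A.col[k]];
                y[i] = s;
            }
        }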
     