
Allocating specific computer resources to a program

  1. Mar 9, 2010 #1
    Hi everyone,

    I'm not a computer scientist by any stretch, so I really apologise if this is a stupid question.

    This is my situation:

I'm trying to help out one of my colleagues who has written a highly numerically intensive program which consumes a lot of RAM. He now has a large number of simulations to run. One of the things he's measuring is how long the program takes to run.

We obviously know what his computer spec is, but he's running the software alongside a whole load of other programs and services on Windows XP (e.g. virus scanner). We therefore don't know exactly what resources are available to the program when it runs. Also, since there are so many simulations to run, he wants to run them on several different computers (each of which has a slightly different spec) to save some time.

Is there any way (or software he can buy) to run the program in a "controlled virtual environment" where it can be set exactly how much RAM is available to the software and how many clock cycles it is allowed, so he can say accurately how the program performs and also make comparisons with simulations run on a different computer? For instance, to run the program with 2 GHz and 2 GB RAM, which is easily available to all the computers in the lab.

Thanks for any guidance,

    Best regards,

  3. Mar 9, 2010 #2
Buy a professional profiler like Intel VTune.

  4. Mar 9, 2010 #3



VTune is a profiler: it runs your program and tells you how much time it spends in each function, allowing you to prioritize which functions to try to optimize. It's not used for job control.

Windows has priority classes which allow you to run a program at lower priority, so other apps will get more 'turns' and your heavy app will only run when something more important doesn't need the machine.
    It's not as fine-grained or capable as a traditional multi-user OS like Unix, but it is usable.
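As a rough sketch of launching a job at reduced priority from a script (the executable name `simulation` is a stand-in for the real program; on Windows you would instead pass `creationflags=subprocess.BELOW_NORMAL_PRIORITY_CLASS`, while this example uses the Unix `nice` semantics via `preexec_fn`):

```python
import os
import subprocess
import sys

def run_low_priority(cmd):
    """Run cmd at reduced scheduling priority and return its exit code."""
    if sys.platform == "win32":
        # Windows: use a lower priority class for the child process
        return subprocess.call(cmd,
                               creationflags=subprocess.BELOW_NORMAL_PRIORITY_CLASS)
    # Unix: raise the nice value (lower priority) in the child before exec
    return subprocess.call(cmd, preexec_fn=lambda: os.nice(10))

# 'simulation' is hypothetical; any command works, e.g. a trivial echo
rc = run_low_priority(["echo", "simulation finished"])
```

The child still runs at full speed when the machine is otherwise idle; priority only matters when something else wants the CPU.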

You can limit the memory a task will use, but you don't necessarily want to do this. If the app needs more than you have given it, will it simply fail?
    If the app uses RAM that a higher-priority task needs, the lower-priority process's RAM will simply be swapped out to the pagefile anyway.

The best way to limit the machine the app is running on is to use a virtual machine.
    VirtualPC is a free download from Microsoft, there is also the free VirtualBox from Sun, and IIRC there is a free version of VMware's product.
    They are all pretty similar and offer the same tools to create a limited machine where you can control the RAM and CPU available.
    There is a small (a few per cent) performance hit over running the machine natively, but you get other advantages: you can freeze the virtual machine mid-run and restart it later, or even move it to another physical PC.
  5. Mar 9, 2010 #4
They do not need job control, they need a profiler and a debugger (which should come integrated with the IDE with most commercial compilers) to optimize the program, measure its performance and make reasonable predictions.
  6. Mar 9, 2010 #5



I originally read it to mean that they needed to run this app while limiting its resources so other tasks could run.
    But it sounds like they want to predict its performance on a machine of a certain spec.
    I don't think a profiler would do that.

There are testing tools that can simulate different amounts of CPU, RAM etc. to load-test an app; I think the best approach to that would be a VM.
  7. Mar 9, 2010 #6
VM software is reasonable if you don't intend to optimize the code.
  8. Mar 10, 2010 #7
    Thanks for all your replies.

The intention is not to optimise the code. The code is now fixed. The object of this exercise is to run the software with different inputs, interpret the results and record how long each simulation took to run. However, the simulation length is dependent on the processor speed and the amount of memory available, and this depends on what other programs are running at the same time, e.g. virus checker, compulsory non-scheduled software update, etc. It would be helpful to run the software in an environment all of its own (i.e. not shared with Windows services and other software) with known processor speed and RAM, so he can state clearly in his thesis what resources were available to the software.

There is another problem that could be solved by a solution to this: each simulation takes several hours to run and there are literally hundreds of them to do. He wants to run the simulations on multiple computers. Since most computers in the lab have a different spec, he can't really compare times for a simulation run on a computer with 3 GHz and 3 GB of RAM with a simulation run on a computer with 2 GHz and 2 GB of RAM.

    Reading your replies, it seems that a virtual machine is the most plausible method. I don't think a profiler is what he needs.

    Sorry if this is another stupid question, but in a VM, isn't it usual to install a new operating system, so that this environment for the software would still be shared with other Windows services? I'm not saying that would be a problem. As long as it is consistent between all the machines running the code, it would be fine.

    I've directed him towards VirtualPC and hopefully that will supply the needed functionality.

    Thanks again,

  9. Mar 10, 2010 #8



    > OS installed inside a VM

You would be running another instance of an OS inside a virtual machine. I'm not aware of any special debugging features in an OS and VM combination that would be able to determine some CPU-independent benchmark of an application. Unless there is such a combination, a VM will just run more slowly.

    In the case of Windows, there are some timers that are supposed to exclude the time spent in other tasks, but I think interrupt overheads are not excluded.
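To illustrate the distinction between per-process CPU time and wall-clock time (this sketch uses Python's portable timers, not the Windows-specific ones mentioned above; the workload is a made-up busy loop):

```python
import time

def busy(n):
    """A made-up CPU-bound workload standing in for the simulation."""
    total = 0
    for i in range(n):
        total += i * i
    return total

wall_start = time.perf_counter()   # wall-clock time: includes other tasks
cpu_start = time.process_time()    # CPU time charged to this process only
busy(1_000_000)
wall = time.perf_counter() - wall_start
cpu = time.process_time() - cpu_start
print(f"wall: {wall:.3f}s  cpu: {cpu:.3f}s")
```

On a loaded machine the wall time grows while the CPU time stays roughly constant, which is why the CPU-time figure is the more comparable one to report, interrupt overhead aside.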

Performance is also affected by the cache implementation in each CPU, and by cache hits on paged virtual addresses. I assume that current CPUs don't have the one-million-entry content-addressable memory (associative cache) required to do one-cycle conversion of virtual addresses into physical addresses for 4 GB worth of paged memory composed of 4 KB chunks, and I don't know how the virtual-address lookup tables are managed by the CPUs.
  10. Mar 10, 2010 #9
    A VM is not what you need then.
  11. Mar 10, 2010 #10



    You could benchmark the lab systems by running the same input on multiple machines, to get an idea of the relative speeds. This assumes that varied inputs won't end up favoring some systems over others.
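A minimal sketch of that normalization, with hypothetical machine names and benchmark times (each machine runs the same reference input once; measured simulation times are then scaled back to the fastest machine's speed):

```python
# Hypothetical per-machine times (seconds) for one shared reference input
benchmark = {"pc_a": 120.0, "pc_b": 180.0, "pc_c": 150.0}

reference = min(benchmark.values())                    # fastest machine as baseline
speed_factor = {m: t / reference for m, t in benchmark.items()}

def normalize(machine, measured_seconds):
    """Scale a measured run time to the reference machine's speed."""
    return measured_seconds / speed_factor[machine]

# A run that took 180 s on pc_b corresponds to ~120 s on the fastest machine
print(normalize("pc_b", 180.0))
```

As the post notes, this only holds if the relative speeds don't shift with the input (e.g. one machine's extra RAM avoiding swapping on the larger inputs).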