Allocating specific computer resources to a program


Discussion Overview

The discussion revolves around the challenges of allocating specific computer resources to a numerically intensive program running simulations. Participants explore methods to control the environment in which the program operates, particularly focusing on managing RAM and CPU usage to ensure accurate performance measurements across different computers with varying specifications.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Experimental/applied

Main Points Raised

  • Some participants suggest using a professional profiler like Intel VTune to analyze the program's performance, although it is noted that this tool is not intended for job control.
  • Others mention that Windows has priority classes that can be adjusted to allow a program to run with lower priority, but this may not provide the desired control over resources.
  • A few participants propose using virtual machines (VMs) as a solution to limit resources such as RAM and CPU for the program, allowing for consistent testing across different systems.
  • Concerns are raised about the performance implications of running a program inside a VM, including potential slowdowns and the impact of cache implementation on performance measurements.
  • Some participants clarify that the goal is not to optimize the code but to run simulations under controlled conditions to gather performance data for comparison.
  • There is uncertainty regarding whether running an OS inside a VM would still allow for isolation from other Windows services, with some suggesting that it may not provide the necessary separation.

Areas of Agreement / Disagreement

Participants generally agree that a virtual machine may be the most plausible method for achieving the desired control over resources, but there is no consensus on whether this approach will fully meet the requirements for accurate performance measurement. Disagreement exists regarding the necessity and effectiveness of profilers versus VMs.

Contextual Notes

Limitations include the potential performance overhead of using VMs, the impact of cache behavior on performance measurements, and the uncertainty about how well a VM can isolate the program from other system processes.

Who May Find This Useful

This discussion may be useful for researchers or practitioners involved in computational simulations who need to manage and measure resource allocation in a controlled environment, particularly in the context of varying hardware specifications.

ChrisHarvey
Hi everyone,

I'm not a computer scientist by any stretch, so I really apologise if this is a stupid question.

This is my situation:

I'm trying to help out one of my colleagues who has written a highly numerically intensive program which consumes a lot of RAM. He now has a large number of simulations to run. One of the things he's measuring is how long the program takes to run.

We obviously know what his computer spec is, but he's running the software alongside a whole load of other programs and services on Windows XP (e.g. a virus scanner). We therefore don't know exactly what resources are available to the program when it runs. Also, since there are so many simulations to run, he wants to run them on several different computers (each of which has a slightly different spec) to save some time.

Is there any way (or software he can buy) to run the program in a "controlled virtual environment" where he can set exactly how much RAM is available to the software and how many clock cycles it is allowed, so he can say accurately how the program performs and also make comparisons with simulations run on a different computer? For instance, to run the program with 2 GHz and 2 GB RAM, which is easily available to all the computers in the lab.

Thanks for any guidance,



Chris
 
Buy a professional profiler like Intel VTune.

ChrisHarvey said:
Is there any way (or software he can buy) to run the program in a "controlled virtual environment" where he can set exactly how much RAM is available to the software and how many clock cycles it is allowed...
 
VTune is a profiler: it runs your program and tells you how much time it spends in each function, allowing you to prioritise which functions to try to optimise. It's not used for job control.

Windows has priority classes which allow you to run a program with lower priority, so other apps will get more 'turns' and your heavy app will only run when something more important doesn't need the machine.
It's not as fine-grained or capable as a traditional multi-user OS like Unix, but it is usable - see
http://www.raymond.cc/blog/archives...s-priority-in-windows-task-manager-with-prio/

You can limit the memory a task will use, but you don't necessarily want to do this. If the app needs more than you have given it, will it simply fail?
If the app uses RAM that a higher-priority task needs, the slow process's RAM will simply be swapped out to the pagefile anyway.
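As a minimal sketch (not from the thread), launching a job at reduced priority can be scripted. `BELOW_NORMAL_PRIORITY_CLASS` is one of the Windows priority classes described above; `nice` is the Unix analogue. The stand-in command is illustrative only:

```python
import subprocess
import sys

def launch_low_priority(cmd):
    """Start cmd so it only gets CPU time other apps don't want.

    Windows: uses the priority classes mentioned above
    (BELOW_NORMAL_PRIORITY_CLASS is Windows-only in Python's subprocess).
    Unix: the rough equivalent is starting the process under nice.
    """
    if sys.platform == "win32":
        return subprocess.Popen(
            cmd, creationflags=subprocess.BELOW_NORMAL_PRIORITY_CLASS)
    return subprocess.Popen(["nice", "-n", "10", *cmd])

# The real simulation binary would go here; a tiny stand-in is used instead.
proc = launch_low_priority([sys.executable, "-c", "print('sim step done')"])
proc.wait()
```

Note this only changes scheduling priority; it does not cap RAM or clock speed, which is why the discussion moves on to VMs.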

The best way to limit the machine the app is running on is to use a virtual machine.
VirtualPC is a free download from Microsoft, there is also the free VirtualBox from Sun, and IIRC there is a free version of VMware's product.
They are all pretty similar and offer the same tools to create a limited machine where you can control the RAM and CPU available.
There is a small (a few %) performance hit over running the machine natively, but you get other advantages - you can freeze the virtual machine mid-run and restart later, or even move it to another physical PC.
 
mgb_phys said:
VTune is a profiler: it runs your program and tells you how much time it spends in each function, allowing you to prioritise which functions to try to optimise. It's not used for job control.

They do not need job control; they need a profiler and a debugger (which should come integrated with the IDE of most commercial compilers) to optimise the program, measure its performance and make reasonable predictions.
 
I originally read it to mean that they needed to run this app while limiting its resources so other tasks could run.
But it sounds like they want to predict its performance on a machine of a certain spec.
I don't think a profiler would do that.

There are testing tools that can simulate different amounts of CPU, RAM, etc. to load-test an app - I think the best approach to that would be a VM.
 
A VM is reasonable if you don't intend to optimize the code.
 
Thanks for all your replies.

The intention is not to optimise the code. The code is now fixed. The object of this exercise is to run the software with different inputs, interpret the results and record how long each simulation took to run. However, the simulation length is dependent on the processor speed and the amount of memory available, and this depends on what other programs are running at the same time, e.g. virus checker, compulsory non-scheduled software update, etc. It would be helpful to run the software in an environment all of its own (i.e. not shared with Windows services and other software) with known processor speed and RAM, so he can state clearly in his thesis what resources were available to the software.

There is another problem that could be solved by a solution to this: each simulation takes several hours to run and there are literally hundreds of them to do. He wants to run the simulations on multiple computers. Since most computers in the lab have a different spec, he can't really compare times for a simulation run on a computer with 3 GHz and 3 GB of RAM with a simulation run on a computer with 2 GHz and 2 GB of RAM.

Reading your replies, it seems that a virtual machine is the most plausible method. I don't think a profiler is what he needs.

Sorry if this is another stupid question, but with a VM, isn't it usual to install a new operating system inside it, so that the software's environment would still be shared with that OS's services? I'm not saying that would be a problem. As long as it is consistent between all the machines running the code, it would be fine.

I've directed him towards VirtualPC and hopefully that will supply the needed functionality.

Thanks again,

Chris
 
> OS installed inside a VM

You would be running another instance of an OS inside a virtual machine. I'm not aware of any special debugging features in an OS-and-VM combination that would be able to determine some CPU-independent benchmark of an application. Unless there is such a combination, a VM will just run slower.

In the case of Windows, there are some timers that are supposed to exclude the time spent in other tasks, but I think interrupt overheads are not excluded.
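Those Windows timers have a rough cross-platform analogue: wall-clock time versus per-process CPU time. A small sketch (the helper name and workload are illustrative, not from the thread):

```python
import time

def timed(fn):
    """Return (wall_seconds, cpu_seconds) for one call of fn.

    process_time() counts only CPU time charged to this process, so time
    the OS spends running other tasks inflates wall but not cpu - though,
    as noted above, interrupt overhead can still leak into the figures.
    """
    w0, c0 = time.perf_counter(), time.process_time()
    fn()
    return time.perf_counter() - w0, time.process_time() - c0

# A stand-in for one simulation step.
wall, cpu = timed(lambda: sum(i * i for i in range(10**6)))
```

Reporting both numbers per simulation would at least make it visible when the virus scanner or an update stole the machine mid-run.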

Performance is also affected by the cache implementation in each CPU, and also by cache hits on paged-memory virtual addresses. I assume that current CPUs don't have the 1-million-entry content-addressable memory (associative cache) required to do one-cycle conversion of virtual addresses into physical addresses for 4 GB worth of paged memory composed of 4 KB chunks, and I don't know how the virtual-address lookup tables are managed by the CPUs.
 
ChrisHarvey said:
Thanks for all your replies...

A VM is not what you need then.
 
ChrisHarvey said:
The object of this exercise is to run the software with different inputs, interpret the results and record how long each simulation took to run. However, the simulation length is dependent on the processor speed and the amount of memory available ... He wants to run the simulations on multiple computers.
You could benchmark the lab systems by running the same input on multiple machines, to get an idea of the relative speeds. This assumes that varied inputs won't end up favoring some systems over others.
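That benchmarking idea could be as simple as a normalisation table. The machine names and times below are made-up placeholders:

```python
# Time (seconds) for one fixed reference input, measured once per lab
# machine. Names and numbers are purely illustrative.
reference_times = {"lab-pc-1": 120.0, "lab-pc-2": 180.0, "lab-pc-3": 150.0}

BASELINE = "lab-pc-1"
speed_factor = {pc: reference_times[BASELINE] / t
                for pc, t in reference_times.items()}

def normalise(machine, measured_seconds):
    """Scale a simulation time to its lab-pc-1 equivalent, assuming the
    relative machine speeds hold across inputs (the caveat above)."""
    return measured_seconds * speed_factor[machine]
```

For example, a run that took 180 s on lab-pc-2 would be reported as a 120 s lab-pc-1-equivalent time.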
 
