Allocating specific computer resources to a program

SUMMARY

The discussion centers on the need for a controlled environment to run a numerically intensive program that consumes significant RAM, allowing for accurate performance measurement across different computer specifications. Chris suggests using a virtual machine (VM) to allocate specific resources such as RAM and CPU cycles, with recommendations for tools like VirtualPC and VirtualBox. Participants clarify that while profiling tools like Intel VTune can help optimize code, they are not suitable for resource allocation. The consensus is that a VM provides the necessary isolation and control for consistent simulation runs across various hardware setups.

PREREQUISITES
  • Understanding of virtual machines (e.g., VirtualPC, VirtualBox)
  • Familiarity with performance profiling tools (e.g., Intel VTune)
  • Knowledge of Windows operating system resource management
  • Basic concepts of CPU and RAM allocation
NEXT STEPS
  • Research how to configure VirtualBox for resource allocation
  • Explore performance profiling techniques using Intel VTune
  • Learn about Windows priority classes and their impact on application performance
  • Investigate load testing tools for simulating different CPU and RAM scenarios
USEFUL FOR

Researchers, software developers, and system administrators who need to run resource-intensive applications in a controlled environment for performance analysis and benchmarking across different hardware configurations.

ChrisHarvey
Hi everyone,

I'm not a computer scientist by any stretch, so I really apologise if this is a stupid question.

This is my situation:

I'm trying to help out one of my colleagues who has written a highly numerically intensive program which consumes a large amount of RAM. He now has a large number of simulations to run. One of the things he's measuring is how long the program takes to run.

We obviously know what his computer spec is, but he's running the software alongside a whole load of other programs and services on Windows XP (e.g. a virus scanner). We therefore don't know exactly what resources are available to the program when it runs. Also, since there are so many simulations to run, he wants to run them on several different computers (each of which has a slightly different spec) to save some time.

Is there any way (or software he can buy) to run the program in a "controlled virtual environment" where he can set exactly how much RAM is available to the software and how many clock cycles it is allowed, so he can say accurately how the program performs and also make comparisons with simulations run on a different computer? For instance, to run the program with 2 GHz and 2 GB of RAM, which is easily available to all the computers in the lab.

Thanks for any guidance,



Chris
 
Buy a professional profiler like Intel VTune.

ChrisHarvey said:
Is there any way (or software he can buy) to run the program in a "controlled virtual environment" where he can set exactly how much RAM is available to the software and how many clock cycles it is allowed ... For instance, to run the program with 2 GHz and 2 GB of RAM, which is easily available to all the computers in the lab.
 
VTune is a profiler: it runs your program and tells you how much time it spends in each function, allowing you to prioritize which functions to try and optimize. It's not used for job control.

Windows has priority classes which allow you to run a program at lower priority - so other apps will get more 'turns' and your heavy app will only run when something more important doesn't need the machine.
It's not as fine-grained or capable as a traditional multi-user OS like Unix, but it is usable - see
http://www.raymond.cc/blog/archives...s-priority-in-windows-task-manager-with-prio/
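
If you'd rather script it than click through Task Manager, something along these lines should work (a rough sketch only - Windows, Python 3.7+, and "simulation.exe" is just a stand-in for the real program):

import subprocess

# Launch the simulation below normal priority so the rest of the
# machine stays responsive (Windows-only flags, Python 3.7+).
proc = subprocess.Popen(
    ["simulation.exe", "input_001.dat"],  # placeholders for the real program
    creationflags=subprocess.BELOW_NORMAL_PRIORITY_CLASS,  # or IDLE_PRIORITY_CLASS
)
proc.wait()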

You can limit the memory a task will use, but you don't necessarily want to do this. If the app needs more than you have given it, will it simply fail?
If the app uses RAM that a higher-priority task needs, the low-priority process's RAM will simply be swapped out to the pagefile anyway.
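
If you do want a crude cap anyway, one option is a watchdog along these lines (a sketch only - it polls with the third-party psutil package rather than enforcing a limit at the OS level, and the 2 GB cap is just an example):

import time
import psutil  # third-party: pip install psutil

CAP_BYTES = 2 * 1024**3  # illustrative 2 GB cap

def watch(pid, cap=CAP_BYTES, interval=1.0):
    # Poll the process's resident memory; kill it past the cap.
    # This is monitoring, not OS-level enforcement.
    p = psutil.Process(pid)
    try:
        while True:
            if p.memory_info().rss > cap:
                p.kill()  # the app "simply fails", as noted above
                return False
            time.sleep(interval)
    except psutil.NoSuchProcess:
        return True  # process finished on its own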

The best way to limit the machine the app is running on is to use a virtual machine.
VirtualPC is a free download from Microsoft, there is also the free VirtualBox from Sun, and IIRC there is a free version of VMware's product.
They are all pretty similar and offer the same tools to create a limited machine where you can control the RAM and CPU available.
There is a small (few %) performance hit over running the machine natively, but you get other advantages - you can freeze the virtual machine in mid-run and restart later, or even move it to another physical PC.
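
With VirtualBox, for example, something like this should pin the guest to a fixed budget (a sketch - flag names as in recent VBoxManage releases, and "SimBox" is a placeholder VM name):

import subprocess

VM = "SimBox"  # placeholder VM name

# Fix the guest at 2 GB of RAM and a single virtual CPU.
subprocess.run(["VBoxManage", "modifyvm", VM, "--memory", "2048"], check=True)
subprocess.run(["VBoxManage", "modifyvm", VM, "--cpus", "1"], check=True)

# Cap the virtual CPU at a percentage of one host core, so a faster
# host can roughly stand in for a slower machine (an approximation,
# not exact clocking).
subprocess.run(["VBoxManage", "modifyvm", VM, "--cpuexecutioncap", "66"], check=True)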
 
mgb_phys said:
VTune is a profiler: it runs your program and tells you how much time it spends in each function, allowing you to prioritize which functions to try and optimize. It's not used for job control.

They do not need job control; they need a profiler and a debugger (which should come integrated with the IDE of most commercial compilers) to optimize the program, measure its performance and make reasonable predictions.
 
I originally read it to mean that they needed to run this app while limiting its resources so other tasks could run.
But it sounds like they want to predict its performance on a machine of a certain spec.
I don't think a profiler would do that.

There are testing tools that can simulate different amounts of CPU, RAM, etc. to load-test an app - I think the best approach to that would be a VM.
 
VM software is reasonable if you don't intend to optimize the code.
 
Thanks for all your replies.

The intention is not to optimise the code. The code is now fixed. The object of this exercise is to run the software with different inputs, interpret the results and record how long each simulation took to run. However, the simulation length is dependent on the processor speed and the amount of memory available, and this depends on what other programs are running at the same time, e.g. virus checker, compulsory non-scheduled software update, etc. It would be helpful to run the software in an environment all of its own (i.e. not shared with Windows services and other software) with a known processor speed and RAM, so he can state clearly in his thesis what resources were available to the software.

There is another problem that could be solved by a solution to this: each simulation takes several hours to run and there are literally hundreds of them to do. He wants to run the simulations on multiple computers. Since most computers in the lab have a different spec, he can't really compare times for a simulation run on a computer with 3 GHz and 3 GB of RAM with a simulation run on a computer with 2 GHz and 2 GB of RAM.

Reading your replies, it seems that a virtual machine is the most plausible method. I don't think a profiler is what he needs.

Sorry if this is another stupid question, but in a VM, isn't it usual to install a new operating system, so that the environment for the software would still be shared with other Windows services? I'm not saying that would be a problem. As long as it is consistent between all the machines running the code, it would be fine.

I've directed him towards VirtualPC and hopefully that will supply the needed functionality.

Thanks again,

Chris
 
> OS installed inside a VM

You would be running another instance of an OS inside a virtual machine. I'm not aware of any special debugging features in an OS and VM combination that would be able to determine some CPU-independent benchmark of an application. Unless there is such a combination, a VM will just run slower.

In the case of Windows, there are some timers that are supposed to exclude the time spent in other tasks, but I think interrupt overheads are not excluded.
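
For example, Python's standard library exposes both kinds of timer, which makes the distinction easy to see (run_simulation() is a placeholder for the real workload):

import time

def run_simulation():
    # Placeholder for the actual numeric workload.
    sum(i * i for i in range(10**7))

t_wall = time.perf_counter()   # wall-clock time
t_cpu = time.process_time()    # CPU time charged to this process only
run_simulation()
print("wall-clock:", time.perf_counter() - t_wall, "s")
print("CPU time:", time.process_time() - t_cpu, "s (excludes time given to other processes)")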

Performance is also affected by the cache implementation in each CPU, and also by cache hits on paged-memory virtual addresses. I assume that current CPUs don't have the million-entry content-addressable memory (associative cache) required to do one-cycle conversion of virtual addresses into physical addresses for 4 GB worth of paged memory composed of 4 KB chunks, and I don't know how the virtual-address lookup tables are managed by the CPUs.
 
ChrisHarvey said:
Thanks for all your replies...

A VM is not what you need then.
 
ChrisHarvey said:
The object of this exercise is to run the software with different inputs, interpret the results and record how long each simulation took to run. However, the simulation length is dependent on the processor speed and the amount of memory available ... He wants to run the simulations on multiple computers.
You could benchmark the lab systems by running the same input on multiple machines, to get an idea of the relative speeds. This assumes that varied inputs won't end up favoring some systems over others.
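
Something like this could do the calibration (a sketch only; "simulation.exe" and "reference.dat" are placeholders for the real program and a fixed reference input):

import subprocess
import time

def time_reference_run():
    # Time one fixed reference input on this machine.
    t0 = time.perf_counter()
    subprocess.run(["simulation.exe", "reference.dat"], check=True)
    return time.perf_counter() - t0

# On each lab machine:
#   speed_factor = time_reference_run() / baseline_elapsed
#   corrected_time = measured_time / speed_factor
# This assumes varied inputs don't favor one machine's cache or RAM
# configuration over another's, as noted above.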
 
