Anyone know of parallel operating systems designed to work with multiple PC processors?
Both Windows and the Linux variants work with multiple processors.
I do not believe either is a true parallel operating system. For example, neither can spread a computation among all the available processors. There may be special variants of Linux capable of doing this, but I am not yet aware of any.
Both Windows and Linux can spread and balance the computation, provided the programmer has properly parallelized the code.
I am not sure what else you think would be a "true" parallel operating system.
It has been at least a decade since I last looked at this area. At the time, I do recall there being dedicated operating systems that enabled true parallel processing on a network of PC processors. The term network here covered several different architectures and topologies. MPI (Message-Passing Interface) was one attempt to generalize the special programming required, whatever network was in use: with MPI, code could be parallelized in a way that remained portable among supercomputers and other distributed-computing alternatives.
If both Windows XP and Vista, and any Linux distro, can distribute instructions to be executed among the available processors for a given compute-intensive program, then what tools similar to MPI, if not MPI itself, exist today to enable such parallelization?
I'm still not convinced standard Linux can truly distribute the workload of a program, regardless of how well the program is parallelized.
However, I am open-minded, and perhaps you or others can convince me otherwise.
There is probably no general-purpose, highly optimized parallel operating system for PCs. Compilers and operating systems are probably just not smart enough to take a generic program and parallelize it optimally. Of course, certain computational problems are better suited to certain configurations of distributed processors [which, of course, requires knowledge of the CPU capabilities, memory speeds, networking speeds, and topology]. ...but I'm no expert; I took a class in parallel computation a while back.
This might be of interest to you:
as part of
which is found in
There are compilers that automatically parallelize code, but their scope is fairly limited; it is better to have a good programmer, who understands the issues related to parallelization, take a look at it.
Note, though, that the current PC bus architecture was never designed for massively parallel systems. If that is what you are looking for, the PC architecture is not going to work, and clearly 8 or even 16 processors is nowhere near the definition of massively parallel.
The main issue with architectures that enable massively parallel configurations is how to perform effective data transfer between the processors and their caches.
Years ago, the Inmos transputers, programmed in the Occam language, were in vogue. You could construct your own topology, for instance a hypercube configuration.
Currently there is no general purpose massively parallel computer system available that I know of, unless you are willing to pay millions.
There are alternatives: if you can parallelize your code enough, you could use grid-style computing; look up Beowulf systems if you are interested. Using a good network topology with gigabit or faster interconnects, you could generate an immense level of computing power.
But overall the ability to perform parallel computations primarily depends on the hardware, the topology, and the way a program is coded, the OS plays a relatively minor role in this.
Are you talking about dual or quad cores? To my knowledge, the supercomputer clusters in Ontario are dual-core and run Linux as the OS, using either MPI or OpenMP. And I believe Blue Gene has the same structure. As for personal dual-core PCs: MPI/OpenMP.
Does it make a difference? In terms of an operating system making full use of what is available, what impact does dual versus quad core have?
The consensus seems to be standard Linux can handle the multiprocessing. As for the multi-threading of programs, that would appear to be a function of using such tools as MPI or OpenMP, would you agree?
I was not aware of OpenMP until your post. Thank you for pointing it out.
Dual or quad cores... it doesn't matter; I was just trying to understand whether you were considering multicores or networked clusters of single-CPU machines.
For multicore PCs, you can utilize OpenMP. And if you are considering clusters, then you can have a hybrid OpenMP/MPI software scheme, where OpenMP is used on the multicores and MPI is used for communicating between PCs.
You can still use the simple functions that come with C for passing data (send/recv/accept/connect/bind/listen); OpenMP/MPI are just higher-level layers on top of these.
Not quite general purpose, but NVIDIA is coming out with Tesla. These are GPU-based parallel computing units with 128 processors. Starting next month, you can buy the entry-level unit, which can reach 500 GFLOPS peak performance, for about $1500. The C-based toolkit is already available. Unfortunately, the first generation is 32-bit only.
GPU processors are not general purpose but they can be used for simulations of physical processes, neural networks, image recognition etc.
I don't actually think it is important for an OS itself to take advantage of a multi-core processor. The important thing is for applications to use the multiple cores for their various processes. Suppose you have just an OS with no other programs installed: what good is it to use 2 or 4 cores just for the OS? The important thing is for the OS to "see" the multiple processors and for the applications to use them. Except for the case where you want to run many computation-heavy apps at the same time... but how often does anyone do that?
Yes, you are correct. My concern was for various scheduled tasks to run on different processors, as allocated by the operating system, to effect greater overall throughput. The same holds true for general applications not necessarily written for a multi-processor environment.