Why is Fortran so Fast?

by dimensionless
Tags: fortran
dimensionless
#1
May12-07, 07:50 AM
P: 464
I have some code in Fortran 77 that executes at pretty good speed. I also have a highly optimized version of the same algorithm in C. When using g77 and gcc, the unoptimized Fortran is just slightly faster than the highly optimized C. When using MSVC++ and g77, the C is a bit faster. This leads to my question:

If I recode my highly optimized algorithm in Fortran 77 and use a commercial Fortran compiler, will the execution speed be far above that of my highly optimized algorithm compiled with MSVC++? If speed is an issue, should I try to compile Fortran 95 into desktop applications that are written mostly in C/C++?
nmtim
#2
May12-07, 10:59 AM
P: 79
The question is, does one language have an advantage in translating to machine instructions?

Fortran 77 (don't know about 90/95) does have one advantage--no pointers per se. This eliminates the possibility of what compiler writers call aliasing, where the same memory location can be modified through two separate variables. If aliasing is a possibility, as in C, then the compiler needs to be more conservative to ensure correct machine code.

Put differently, Fortran 77 restricts the programmer to a subset of C that can be more easily optimized. It would seem feasible for a C programmer to restrict themselves to that subset and get performance that's just as good.

There are methods to tell a C compiler that you are not aliasing (the restrict keyword, compiler flags).
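As a rough sketch of the restrict idea (hypothetical function, C99):

Code:
/* With restrict, the compiler may assume dst and src never overlap,
   so it is free to keep values in registers and vectorize the loop.
   Without restrict it must allow for dst aliasing src. */
void scale(double *restrict dst, const double *restrict src, double k, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = k * src[i];
}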

Would you be interested/able to post your codes? I would find that enlightening, as I have never worked much in Fortran, and I've always been a bit curious about the performance differences that people claim to get. I'm also curious about your optimizations.

As for mixed language programming, F77 & C/C++ is no big deal. I've never tried F95 & C/C++, but people I respect strongly dislike it.
Crosson
#3
May12-07, 12:24 PM
P: 1,295
Fortran was the first high-level language. Its designers had the burden of proving that a high-level language could be competitive in performance with assembly code, and so Fortran was the target of the largest optimization effort ever.

Comparing Fortran 77 against ANSI C, I doubt that's the issue. The issue is that Fortran is totally static: the size of all data structures is known at compile time. Once the program has loaded into memory, the stack size will not change. This allows for absolute addressing (with an offset, of course), and that is very fast.

On the other hand, C is stack dynamic. This allows for more flexible programming, but at the cost of wasted memory and slower execution. Try declaring your C variables as static, and see if you can close the performance gap.
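A rough illustration of that suggestion (the array size is a placeholder):

Code:
#define N 1024

/* Statically allocated, like Fortran 77 arrays: fixed addresses known
   at load time, no stack or heap management per call. */
static double a[N], b[N], c[N];

void add_arrays(void)
{
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];
}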

Note: Fortran 95 is no longer static, and so it allows for recursion and other non-Fortran nonsense.

Quote Quote by nmtim View Post
Fortran 77 (don't know about 90/95) does have one advantage--no pointers per se. This eliminates the possibility of what compiler writers call aliasing, where the same memory location can be modified through two separate variables. If aliasing is a possibility, as in C, then the compiler needs to be more conservative to ensure correct machine code.
No pointers is a big advantage, but aliasing is possible in Fortran through the use of EQUIVALENCE. This was done in the old days to load different data structures, used at different points in the program, into the same static memory location to save space. In Fortran 95 the use of EQUIVALENCE is deprecated.

AlephZero
#4
May13-07, 12:18 PM
Engineering
Sci Advisor
HW Helper
Thanks
P: 6,959
Why is Fortran so Fast?

Static vs dynamic storage allocation has nothing to do with the speed difference (and for modern compilers, there IS no speed difference worth bothering about).

Professional Fortran compilers have NEVER implemented just the ANSI language standard, so long as I've been using them (about 40 years). There is a de facto set of extensions which every serious compiler supports. If it didn't support them, it would be unmarketable, because most serious existing Fortran programs already use the extensions.

Those extensions have included stack-based and heap-based memory allocation, recursive functions and subroutines, pointers, etc, ever since those things were "popularised" by C and Unix (and Unix is 30 years old now).

If you knew anything about compiler writing, you would know that there is no overhead for stack-based storage allocation compared with static, and no wasted memory either. In fact on modern hardware, stack-based allocation is usually more efficient.

The main historical reason why C got a reputation of being slow compared with Fortran was simply that the early C compilers did no optimisation. In some senses, C was originally designed as a human-readable machine-independent assembler language, that was trivial to compile onto the sort of hardware that was being built in 1970 (when a typical computer had 0.0001 Gbytes of memory, and 0.0001 GHz CPU speed). For example, on the early C compilers it was quite common for a loop written using pointers to run much faster than the same loop written using array subscripts, because the compiler didn't even try to move the repeated address-calculation code out of the loop. With modern compilers (built after compiler-writing had been transformed from a task requiring genius to something teachable to any average system programmer) that is no longer relevant.
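For example, a sketch of the two loop styles (they are semantically identical; a modern optimizer emits essentially the same code for both):

Code:
/* Subscript form: an early C compiler might naively recompute a + i
   on every pass through the loop. */
double sum_subscript(const double *a, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Pointer form: the address arithmetic is hoisted by hand. */
double sum_pointer(const double *a, int n)
{
    double s = 0.0;
    for (const double *p = a; p < a + n; p++)
        s += *p;
    return s;
}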

Another stimulus to Fortran optimisation was the vector-architecture machines of the 1980s (Cray, CDC, etc). They taught Fortran programmers to build software that could use that architecture efficiently, and the same code can be optimised easily on modern fast scalar machines.
dimensionless
#5
May13-07, 01:49 PM
P: 464
I haven't quite figured out aliasing and the restrict keyword, but I've seen improvement in both the inefficient Fortran and the efficient C by using the -O3 flag with gcc and g77. The C code is faster, but still not nearly as fast as I would expect. I'll have to do more coding to really do a good comparison.
rcgldr
#6
May26-07, 06:12 PM
HW Helper
P: 7,048
Quote Quote by nmtim View Post
If aliasing is a possibility, as in C, then the compiler needs to be more conservative to ensure correct machine code.
"Assume no aliasing" is a compiler optimization switch with Microsoft compilers, allowing "maximum" optimization.

C is closer to machine code than Fortran, so with enough effort, a programmer should be able to generate the same or faster code, but it could end up being an exercise in trial and error to get the compiler to generate the code you want.

I'm not aware of any C compilers that have the extensions/optimizations that some Fortran compilers have to take advantage of hardware features like vector, parallel, pipeline, out-of-order processing, and register-scoreboarding oriented operations on supercomputers, where IBM and Cray still remain relatively popular at the very high end: http://www.top500.org/lists/2006/11

Regarding a PC, I'm not sure how many "hardware specific" optimizations there are in existing Fortran or C/C++ compilers. A good compiler might auto-generate multi-threaded code or take advantage of out-of-order instruction handling on CPUs with register scoreboarding.
nmtim
#7
May26-07, 10:53 PM
P: 79
Quote Quote by Jeff Reid View Post
I'm not aware of any C compilers that have the extensions/optimizations that some Fortran compilers have to take advantage of hardware features like vector, parallel, pipeline, out-of-order processing, and register-scoreboarding oriented operations on supercomputers, where IBM and Cray still remain relatively popular at the very high end: http://www.top500.org/lists/2006/11
Both GCC and Intel compilers have intrinsic support, including vector instructions like SSE and Altivec. I would be surprised if there were any serious C compiler today that didn't. In fact, failing to have such support would be part of the definition of a non-serious compiler.
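For instance, a minimal sketch using SSE2 intrinsics (assumes n is even and the arrays are 16-byte aligned; a real routine would handle the remainder):

Code:
#include <emmintrin.h>   /* SSE2 intrinsics */

void vadd(double *c, const double *a, const double *b, int n)
{
    for (int i = 0; i < n; i += 2) {
        __m128d va = _mm_load_pd(&a[i]);          /* load two doubles */
        __m128d vb = _mm_load_pd(&b[i]);
        _mm_store_pd(&c[i], _mm_add_pd(va, vb));  /* add and store two at a time */
    }
}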

Not sure what you mean by "parallel" (in "vector / parallel / pipeline"). Do you mean superscalar issue? That is a hardware feature, not a software feature, beyond not emitting an instruction stream that inhibits multiple issue.

Out of order processing is similar: the compiler/assembly programmer does not do out of order processing. The order that's "out of" is the order in which instructions were laid down by the compiler. It's hardware that decides to execute out of order in order to compensate for unpredictable run time latencies (cache misses). The compiler can't do that, for the obvious reason that cache state is unknown at compile time.

Not sure what you mean by "on super computers", as most supercomputers nowadays are massively parallel assemblies of Opterons, Itaniums, and similar commodity (more or less) chips. All decent C/C++/Fortran compilers pay attention to register allocation, etc. In fact, I suspect that a lot of compilers first generate an abstract representation of the code, and then use that to generate instructions, schedule them, and allocate registers. This has nothing to do with the initial language.
rcgldr
#8
May27-07, 02:05 AM
HW Helper
P: 7,048
Quote Quote by nmtim View Post
Not sure what you mean by "parallel" (in "vector / parallel / pipeline").
As in massively parallel.
Not sure what you mean by "on super computers"
Ones that include more high-end vector processing similar to that implemented on the Cray 1 and later machines, including the current ones. As noted in the link below, "vectorizing and parallelizing compilers for Fortran existed", but each machine type had a different version of the compiler.

Massively parallel systems, based on microprocessor chips, have an issue with microsecond or longer latency on communications between CPUs, simply because of the length of the wires. Some problems are solved more quickly with the high-end, vector-oriented type of computer.

http://en.wikipedia.org/wiki/Supercomputer

From the link below, Fortran "is the primary language for some of the most intensive supercomputing tasks, such as weather and climate modeling, computational fluid dynamics, computational chemistry, quantum chromodynamics, simulations of long-term solar system dynamics, high-fidelity evolution artificial satellite orbits, and simulation of automobile crash dynamics."

http://en.wikipedia.org/wiki/Fortran

From the same link on Fortran, regarding extensions: "Vendors of high-performance scientific computers (e.g., Burroughs, CDC, Cray, Honeywell, IBM, Texas Instruments, and UNIVAC) added extensions to Fortran to take advantage of special hardware features such as instruction cache, CPU pipelines, and vector arrays." My understanding is that similar extensions are still used on the current high end supercomputers.

Again from the same link on Fortran: a reference to the "out of ordering" done by a compiler from the 1970's (date not mentioned in article): "one of IBM's FORTRAN compilers (H Extended IUP) had a level of optimization which reordered the machine language instructions to keep multiple internal arithmetic units busy simultaneously".

So the main reason "Fortran is so fast", is that speed is one of the goals of Fortran compilers. To this end, the scientific community willingly accepts machine specific extensions to the language.
nmtim
#9
May27-07, 08:35 PM
P: 79
Quote Quote by Jeff Reid View Post
As in massively parallel.
Ones that include more high-end vector processing similar to that implemented on the Cray 1 and later machines, including the current ones. As noted in the link below, "vectorizing and parallelizing compilers for Fortran existed", but each machine type had a different version of the compiler.

Massively parallel systems, based on microprocessor chips, have an issue with microsecond or longer latency on communications between CPUs, simply because of the length of the wires. Some problems are solved more quickly with the high-end, vector-oriented type of computer.
I was unaware that "high-end" vector machines with vector units as wide as the old Crays were still in widespread use. Can you point to any examples?

Some codes do not scale well to MPP. Various solutions have been pursued, including replacing them with algorithms that do scale well, and doing nothing and accepting poor performance.

From the link below, Fortran "is the primary language for some of the most intensive supercomputing tasks, such as weather and climate modeling, computational fluid dynamics, computational chemistry, quantum chromodynamics, simulations of long-term solar system dynamics, high-fidelity evolution artificial satellite orbits, and simulation of automobile crash dynamics."
Any data to support that in 2007? I'm sure it was true 20 years ago, not sure it's true today.

From the same link on Fortran, regarding extensions: "Vendors of high-performance scientific computers (e.g., Burroughs, CDC, Cray, Honeywell, IBM, Texas Instruments, and UNIVAC) added extensions to Fortran to take advantage of special hardware features such as instruction cache, CPU pipelines, and vector arrays." My understanding is that similar extensions are still used on the current high end supercomputers.
As I pointed out, all serious C/C++ compilers have such extensions for their target platforms. Do you really think Fortran compiler writers are the only people who think about these things? I think you may safely assume that any tricks known to Fortran compiler writers are just as well known to C compiler writers, whatever the case may have been 30 years ago.


Again from the same link on Fortran: a reference to the "out of ordering" done by a compiler from the 1970's (date not mentioned in article): "one of IBM's FORTRAN compilers (H Extended IUP) had a level of optimization which reordered the machine language instructions to keep multiple internal arithmetic units busy simultaneously".
That's different from what is usually understood by "out of order execution" today. In many of today's CPUs, the front end can dynamically reorder the instruction stream to hide memory latencies. There's nothing remotely related to Fortran in that. As to your understanding of the term, there is nothing unique to Fortran in reordering the instruction stream for optimal performance. All decent C compilers in 2007 do that.


So the main reason "Fortran is so fast", is that speed is one of the goals of Fortran compilers. To this end, the scientific community willingly accepts machine specific extensions to the language.
I don't believe that. Speed is a goal of compilers for many languages, not just Fortran. The main reason Fortran 77 had a speed advantage for an average scientific programmer was that the language restricted users to an easily optimized subset of C. To write C code that was as good, you had to know what you were doing (that may still be true today).
dimensionless
#10
May28-07, 10:56 AM
P: 464
Can all C/C++ programs be coded in such a way that they are just as fast as Fortran?

Are Fortran 2003, 95, 90 just as fast as 77?
nmtim
#11
May28-07, 02:48 PM
P: 79
Yes, at least in the trivial sense that you can take the assembly output from the Fortran compiler and inline that in C/C++, and it's a valid C/C++ program.

I haven't seen an example of a Fortran code that could not be done as well in C/C++, given mature compilers for both. I would welcome such an example, if anyone has one.

I think the problem is that there are a lot more ways to hobble performance in C/C++ than Fortran 77: the space of possible programs is significantly larger given OO, templates, the equivalence of arrays and pointers, the lack of built-in multidimensional arrays, etc. You have to know more about what you're doing to avoid those traps.

Really, it's not that hard to have good performance in C++ for numerical codes. Whatever language you're using, it's far more important to understand the target architecture, and how your compiler interacts with your code to produce an instruction stream for that architecture. You have to think about data locality (what fits in cache), getting as many ops per load, branch elimination/prediction, etc.
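For example, a sketch of the data-locality point (N is a placeholder; the two functions do the same arithmetic but have very different memory behavior for large N):

Code:
#define N 2048

/* Row-major traversal: unit-stride accesses, cache-friendly. */
double sum_rows(const double a[N][N])
{
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

/* Column-major traversal: strides of N doubles, far more cache misses. */
double sum_cols(const double a[N][N])
{
    double s = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += a[i][j];
    return s;
}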

I don't know about later flavors of Fortran. I've heard Fortran programmers complain about performance of F77 codes degrading as they go to newer F95 compilers. But I don't know anything systematic about it.
rcgldr
#12
May28-07, 03:59 PM
HW Helper
P: 7,048
Quote Quote by nmtim View Post
I was unaware that "high-end" vector machines with vector units as wide as the old Crays were still in widespread use. Can you point to any examples?
Cray X1, X1E, X2. http://en.wikipedia.org/wiki/Cray_X1.
Fortran still in use for ...
Any data to support that in 2007?
It's a current Wiki article. Do a web search for Fortran to find other similar references. Part of this is the time it would take to convert existing code over to C/C++.
rcgldr
#13
Jun14-07, 01:18 AM
HW Helper
P: 7,048
Quote Quote by nmtim View Post
Fortran faster than C ... To this end, the scientific community willingly accepts machine specific extensions to the language.
I don't believe that. Speed is a goal of compilers for many languages, not just Fortran. The main reason Fortran 77 had a speed advantage for an average scientific programmer was that the language restricted users to an easily optimized subset of C. To write C code that was as good, you had to know what you were doing (that may still be true today).
My point is that it is the extensions that allow Fortran to run faster, not so much something inherent in the language. I've since noted that the C compilers for supercomputers also have similar extensions (as with the Cray X1 / X2 examples I posted about previously).
graphic7
#14
Jun14-07, 07:49 AM
PF Gold
P: 560
Quote Quote by Jeff Reid View Post
Cray X1, X1E, X2. http://en.wikipedia.org/wiki/Cray_X1.
It's a current Wiki article. Do a web search for Fortran to find other similar references. Part of this is the time it would take to convert existing code over to C/C++.
The IBM pSeries POWER5 and POWER6 gear are essentially vector machines. Rather than throwing a vector unit on each processor, multiple processors in the system act as the vector unit. One can set up calculations in such a way on the POWER5 and POWER6 that when a calculation is completed in the foo set of processors, it is checked against what the bar set calculated for accuracy.
rcgldr
#15
Jun17-07, 07:17 PM
HW Helper
P: 7,048
Quote Quote by graphic7 View Post
The IBM pSeries POWER5 and POWER6 gear are essentially vector machines. Rather than throwing a vector unit on each processor, multiple processors in the system act as the vector unit.
The advantage of true vector machines is that the operands are available in quickly accessible registers, as opposed to other CPUs, which involve longer transfer times. The CDC Star could do vector math at the rate of the memory bandwidth, but the later Cray machines did this with registers. As previously mentioned, the Intel SSE has parallel floating-point register-based operations, but they're 32-bit floating-point operands, as opposed to the higher-precision operands on the high-end vector machines.

Is Cray the last "holdout" still making true vector machines?
nmtim
#16
Jun18-07, 08:35 AM
P: 79
Quote Quote by Jeff Reid View Post
...the Intel SSE has parallel floating-point register-based operations, but they're 32-bit floating-point operands...
You are poorly informed. SSE has had support for IEEE 754 double precision vector operations since SSE2; these and similar instructions, like AltiVec on Power, are implemented on the major Intel, AMD, and IBM platforms used by the bulk of HPC. And before you drag out your next false explanation, yes, C/C++ compilers give full access to these instructions via intrinsics.

Cray is not the last holdout making true vector machines. No one is, because memory bandwidth is too poor and managing the cache too complex. You're better off spending your transistor budget running more ops on narrower vectors at higher frequency. The newer Crays use Opterons, with 128 bit vector units. Again, your information is out of date.

While I'm up, on the subject of intrinsics (compiler extensions that expose hardware functionality not addressable by the language), you make the claim that Fortran is faster because it has these. But, as I pointed out previously and even you have acknowledged, C compilers have these extensions, too. So if C has the same extensions, how can Fortran be faster than C by virtue of these extensions?

And as I said before, but you seem to not comprehend, ALL decent C++ compilers offer these extensions on their target platforms. And even if they did not (note that they do!), C++ programmers could always inline some assembler while taking advantage of compiler extensions that defer register allocation and instruction scheduling to the optimizer. Compiler extensions are not a differentiating factor in performance.
rcgldr
#17
Jun18-07, 10:56 PM
HW Helper
P: 7,048
Quote Quote by nmtim View Post
SSE 32 bit precision
You are poorly informed. SSE has had support for IEEE 754 double precision vector operations since SSE2
My mistake, it's 64 bits as you stated. I meant to edit this, but I didn't get back to this thread in time (the edit option was no longer available).

Cray is not the last holdout making true vector machines. No one is
"The Cray X1E supercomputer combines the processor performance of traditional vector systems with the scalability of microprocessor-based architectures."

http://www.cray.com/products/x1e/index.html

I've read that there will be a follow-on to this, a Cray X2. The US government partially funds this line of supercomputers, so there still must be some need for "traditional vector systems".

So if C has the same extensions, how can Fortran be faster than C by virtue of these extensions?
It shouldn't be, but I wonder whether mathematical algorithms written with Fortran's native exponentiation operator would be optimized as well as the same algorithms written in C or C++, which have no native exponentiation operator and use pow...() function calls instead.
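For example, just a sketch of what I mean, in C (the Fortran version would simply be written X**3):

Code:
#include <math.h>

/* Cubing a value two ways.  Fortran's x**3 exposes the integer power
   directly; in C the compiler must either see the multiplications or
   recognize the pow() call before it can reduce it to two multiplies. */
double cube_by_hand(double x)
{
    return x * x * x;
}

double cube_by_pow(double x)
{
    return pow(x, 3.0);   /* may become a libm call if not folded */
}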

I'm still under the impression that a significant part of scientific programming is still done in Fortran, probably because a large amount of code already exists and the cost of conversion would be high.
Hurkyl
#18
Jun18-07, 11:17 PM
Emeritus
Sci Advisor
PF Gold
Hurkyl's Avatar
P: 16,092
For the Crays, one thing is that their Fortran compiler is simply better than their C compiler (and much better than their C++ compiler); the C compiler often needs a lot of help to recognize how to take advantage of the vector registers.

