I've written a program that repeats a calculation a certain number of times in single precision, then in double precision, timing each pass with the cpu_time(t1) subroutine. (Specifically, the calculation is aReal/(aReal + 0.01).) The double-precision run consistently takes about 1.20 times as long as the single-precision run: there was little difference for runs of around a second, but over many minutes the 1.2 factor is clear.

Why isn't the factor larger? (Not that I'm complaining.) I'd have figured that addition and subtraction on double variables would take twice as long, multiplication about four times as long, and division somewhere in that range.