Comparison of high-level computer programming languages

In summary, the author argues that computational speed is a matter of little concern to users, and that comparisons between languages are complicated.
  • #1
hilbert2
A little question about the appropriateness of a certain research subject...

Would it be useful to make a study of the computational effectiveness of equivalent codes written in Matlab, Mathematica, R, Julia, Python, etc. for a set of typical computational engineering problems, like the Navier-Stokes equations, numerical heat conduction, thermal radiation intensity field calculation, and so on, and test how the computation time scales with increased resolution of the discretization? Or would this be redundant, as the languages have already been tested on benchmark problems like matrix multiplication and linear system solution?

Just got this idea after reading about how the relatively new Julia language is very efficient in terms of computational speed despite being as simple to write code in as Matlab or R.
 
  • #2
With the exception of (1) real time applications, such as robotics, and (2) very massive computations like FEA, I think computational speed is a matter of little concern. Does anyone really care whether the result appears on your screen in a tenth of a second or requires a whole half second? I certainly don't; I'm just not that quick to react and not in that big a hurry.
 
  • #3
There are plenty of computational tasks that take several days even on supercomputers running on a large number of cores.

Supercomputing facilities always need a queue system to ensure no researcher uses more than their share of the computing resources.
 
  • Like
Likes Chestermiller
  • #4
hilbert2 said:
There are plenty of computational tasks that take several days even on supercomputers running on a large number of cores.

I'm sure that this is true. I would argue, however, that this is a specialist concern, not a general user concern. How many folks do you suppose are doing those problems that run for days on a supercomputer?
 
  • #5
Comparisons are complicated. If the execution speed of MATLAB programs is an issue, it is possible to auto-generate C code and get much faster execution. In that case, what matters is how easily C code can be auto-generated, efficient libraries can be used, or parallel processing can be applied. There are many other complications like these. I think the existing benchmarks do a reasonably good job for initial comparisons between languages; beyond that, there are other issues to consider.
 
  • Like
Likes Klystron and hilbert2
  • #6
FactChecker said:
Comparisons are complicated. If the execution speed of MATLAB programs is an issue, it is possible to auto-generate C code and get much faster execution. In that case, what matters is how easily C code can be auto-generated, efficient libraries can be used, or parallel processing can be applied. There are many other complications like these. I think the existing benchmarks do a reasonably good job for initial comparisons between languages; beyond that, there are other issues to consider.

Thanks for the answer. I have gotten the impression that the best thing about the Julia language is that it can also be easily parallelized despite being simple to write code with.

Maybe I'll compare some simple diffusion/Schrödinger equation solvers written in R and Julia on the same computer and make some plots of the computation time versus number of grid points, and see if there's anything interesting in there.

It seems to be quite difficult to find any peer-reviewed publications about that kind of comparison; here's one exception: http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf (not from the natural sciences, though).
 
  • #7
hilbert2 said:
Thanks for the answer. I have gotten the impression that the best thing about the Julia language is that it can also be easily parallelized despite being simple to write code with.
I got a good impression of Julia, but I don't have any experience with it. I know that MATLAB has some options for parallel processing and have seen large programs that are run on a network of computers, but I was not involved in those efforts and don't know how hard it was to do.
 
  • #8
Dr.D said:
With the exception of (1) real time applications, such as robotics, and (2) very massive computations like FEA, I think computational speed is a matter of little concern. Does anyone really care whether the result appears on your screen in a tenth of a second or requires a whole half second?
Computational speed is an important matter to software engineers in general. By that, I'm not referring to the speed at which computations are performed, but the overall responsiveness of an application. A general rule of thumb is that if an application takes longer than about a tenth of a second to respond, users will perceive it as being "slow." I can't cite any references for this -- it's just something I learned while working in the software industry for fifteen years.

As an example, one of the design goals for Windows XP was to decrease the startup time when the computer was turned on, relative to previous Windows versions. Microsoft invested a lot of resources to achieve this goal.
Dr.D said:
I certainly don't; I'm just not that quick to react and not in that big a hurry.
 
  • #9
I am not sure a comparison of processing speeds from high level languages would be all that useful. I would expect different languages would perform differently on all the benchmarks and there would be no strong conclusions that could be drawn. That is, for example, I do not think Julia would outperform C on every benchmark, or C would outperform python on every benchmark, etc. I think it would be a mixed bag.

I think the importance of speed is overstated. It would be more important for code to be well documented, so that it can be understood when it is inevitably handed off to colleagues and users, than for it to gain incremental increases in speed. Many times, old code written by experts is used by novices on problems where it is misapplied. This is often not the fault of the "novices", as the "experts" have retired or moved on before documenting the purpose, methods, and limitations of their code. It seems the experts thought they would be around forever.

It has also been my experience that experts were not replaced, not because the organization/business wanted to reduce the payroll through attrition, but because the organization/business was not thinking in terms of legacy.

Back to the main point: documentation and understandability should be more of a priority than speed. There are engineers who can make the most high-level, user-friendly language inscrutable, and there are engineers who can make (even) structured Fortran or assembly language understandable.
 
  • Like
Likes Merlin3189 and Klystron
  • #10
hilbert2 said:
It seems to be quite difficult to find any peer-reviewed publications about that kind of comparison; here's one exception: http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf (not from the natural sciences, though).
It's an interesting set of results. When several popular languages take hundreds of times longer to run than C++, there is something to consider. But many interpreted languages can be made to run much faster by using compiled libraries for critical calculations. That makes the comparison more complicated since people are likely to apply techniques that speed the program up when speed becomes a serious issue. In my opinion, no modern languages will run significantly faster than C or FORTRAN, and some will be hundreds of times slower.
 
  • Like
Likes mpresic
  • #11
mpresic said:
I am not sure a comparison of processing speeds from high level languages would be all that useful. I would expect different languages would perform differently on all the benchmarks and there would be no strong conclusions that could be drawn. That is, for example, I do not think Julia would outperform C on every benchmark, or C would outperform python on every benchmark, etc. I think it would be a mixed bag.

I think the importance of speed is overstated.
As a person who has spent many weekends (and holidays) nursing batch programs through runs that take several days, I tend to disagree. I have also spent a lot of time massaging real-time programs to run in hard real time of a few milliseconds. There is nothing uglier and harder to document than code that has been squeezed to run in a small time frame.
 
  • Like
Likes .Scott
  • #12
FactChecker said:
It's an interesting set of results. When several popular languages take hundreds of times longer to run than C++, there is something to consider. But many interpreted languages can be made to run much faster by using compiled libraries for critical calculations. That makes the comparison more complicated since people are likely to apply techniques that speed the program up when speed becomes a serious issue. In my opinion, no modern languages will run significantly faster than C or FORTRAN, and some will be hundreds of times slower.

I see the importance of speeding up code that may take days to run. Commonly, system administrators take the computers or servers down for maintenance, requiring an interruption in service within that several-day timeframe. I also agree with your comments regarding FORTRAN and C.

Real-time code can be difficult to document. Nevertheless, it is important to see that the code is maintained. In this respect, generations of workers familiar with the methods and techniques should be kept on. For example, I am sure the real-time code for the Apollo computers that got us to the Moon was hard to understand. I for one would be reassured if that expertise were maintained for when (or if) we try to get back to the Moon. I understand a good book was written about the Apollo guidance computer. Sounds intriguing.
 
  • Like
Likes FactChecker
  • #13
FactChecker said:
It's an interesting set of results. When several popular languages take hundreds of times longer to run than C++, there is something to consider. But many interpreted languages can be made to run much faster by using compiled libraries for critical calculations. That makes the comparison more complicated since people are likely to apply techniques that speed the program up when speed becomes a serious issue. In my opinion, no modern languages will run significantly faster than C or FORTRAN, and some will be hundreds of times slower.

I did an experiment with calculating a numerical heat transfer (or diffusion) problem in 2D with R, Julia and C++ codes. The problem is like the one in this blog post I have written https://physicscomputingblog.wordpress.com/2017/02/20/numerical-solution-of-pdes-part-3-2d-diffusion-problem/ .

I made a square computational domain containing ##N\times N## discrete cells, where N was given the values 30, 37, 45, 52 and 60 on different runs. The method used was implicit finite differencing. The number of time steps was only 10 in all runs.

The C++ code used simple Gauss-Jordan elimination taken from the book "Numerical Recipes in C", and the Ubuntu C++ compiler was run with parameters "g++ -ffast-math -O3". There was no attempt made to use parallel processing, or to account for the sparseness of the linear system. The matrix inverse was computed only on the first time step, and simple matrix-vector multiplication was used in consecutive time steps.

The R code used the in-built "solve(A,b)" function for solving the system of equations.

The Julia code uses the backslash operator "A\b" for solving the system.

The computation times used by the three codes (not including compilation time) are plotted below for the runs done on my own (slow) laptop (AMD E2-3800 APU with Radeon(TM) HD Graphics × 4).

[Plot: computation time vs. grid size N for the R, Julia and C++ codes, laptop runs]


Next the runs were also made with my work computer (Intel Xeon(R) CPU E31230 @ 3.20GHz × 8), and the computation times are shown on the next plot.

[Plot: computation time vs. grid size N for the R, Julia and C++ codes, work-computer runs]


At first I thought that the Julia code was the fastest because it can somehow notice the sparseness of the matrix and use that to speed up the computation without being explicitly told to do so, but when I tried to invert a matrix filled with random double-precision numbers from the interval [-1,1], it was just as fast as the inversion of the 2D diffusion equation matrix. So the Julia compiler can probably somehow automatically parallelize the code.

The C++ code would most likely be the fastest if I used some LAPACK functions for solving the linear system, but I haven't done that yet.

Note that if the computational domain has ##N^2## cells, the matrix to be inverted has ##N^4## elements.
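
For anyone who wants to reproduce something like this, here is a minimal sketch of what the Julia side of the test could look like (the diffusion number r, the random initial field, and the dense matrix assembly are illustrative assumptions on my part, not the exact code behind the plots above):

Code:
using LinearAlgebra

# Build the dense implicit-Euler matrix (1 + 4r on the diagonal, -r on the
# neighbour couplings) for a 2D Laplacian on an N x N grid with Dirichlet
# boundaries; r = D*dt/dx^2 is the dimensionless diffusion number.
function diffusion_matrix(N, r)
    n = N * N
    A = Matrix{Float64}(I, n, n)
    idx(i, j) = (j - 1) * N + i          # map grid point (i, j) to a vector index
    for j in 1:N, i in 1:N
        k = idx(i, j)
        A[k, k] += 4r
        i > 1 && (A[k, idx(i - 1, j)] -= r)
        i < N && (A[k, idx(i + 1, j)] -= r)
        j > 1 && (A[k, idx(i, j - 1)] -= r)
        j < N && (A[k, idx(i, j + 1)] -= r)
    end
    return A
end

# Time 10 implicit steps with the backslash solve, as in the runs above.
function run_test(N; steps = 10, r = 0.25)
    A = diffusion_matrix(N, r)
    u = rand(N * N)                      # arbitrary initial field, for illustration
    return @elapsed for _ in 1:steps
        u = A \ u
    end
end

for N in (30, 37, 45, 52, 60)
    println("N = ", N, "  time (s): ", run_test(N))
end

Factoring the matrix once outside the loop with lu(A) and reusing the factorization would be a fairer match for the C++ version, which only inverts the matrix on the first time step.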
 
  • Like
Likes FactChecker and fluidistic
  • #14
The slow speed of C++ is surprising (although I have more confidence in the speed of C than of C++). There must be some catch -- some difference in the algorithm. If you want to see whether Julia is parallelizing the calculations, you should be able to see something in the performance monitor. If you really want to study it, you can use scripts and DOS commands to collect data. I cannot believe that Julia is fundamentally faster than C or even C++ (I can believe a tie, and that would support what others have said about Julia). C++ slower than R is completely unbelievable to me.

PS. I think you are seeing why people do not use complicated benchmarks for comparisons of greatly dissimilar languages. They involve so much more than the basic calculations and the algorithms are not comparable without a lot of work on specific computers.
 
  • Like
Likes Telemachus and hilbert2
  • #15
I would also assume that this is due to the algorithm. Numerical Recipes is notorious for not having efficient implementations (although it is a great book to learn the basics from, including the supplied code). You should try GSL.
 
  • Like
Likes Telemachus and FactChecker
  • #16
I compiled and ran the C++ code

Code:
#include <iostream>
#include <ctime>

int main()
{
    std::clock_t start;
    double duration;
    double x = 1000.0;

    start = std::clock();

    for (int n = 0; n < 100000000; n++)
    {
        x *= 0.9999999;
    }

    duration = (std::clock() - start) / (double) CLOCKS_PER_SEC;

    std::cout << "x = " << x << "\n";
    std::cout << "time (s): " << duration << "\n";
    return 0;
}

result: x=0.0453999
time (s): 0.341655

Then an equivalent Julia code:

Code:
t1 = time_ns()

x = 1000.0

for k in 1:100000000
    x*=0.9999999
end

t2 = time_ns()

print("x= ",x,"\n")
print("time (s): ",(t2 - t1)/1.0e9,"\n")

result: x=0.04539990730150107
time (s): 8.537868758

So quite a large difference in favor of C++ with this kind of calculation, at least. I'm not sure if telling Julia to use fewer significant figures would make it faster.
 
  • #17
Slightly modified from @hilbert2's benchmark.
C:
#include <stdio.h>
#include <time.h>

int main()
{
   clock_t start;
   double duration;
   double x = 1000.0;

   start = clock();
   for (int n = 0; n < 1e8; n++)
   {
      x *= 0.9999999;
   }
   duration = (clock() - start) / (double)CLOCKS_PER_SEC;
   printf("x: %f\n", x);
   printf("time (s): %f\n", duration);
   return 0;
}
Compiled as straight C code, release version, VS 2015, on Intel i7-3770 running at 3.40 GHz
Output:
x: 0.045400
time (s): 0.297000
This time is about 10% less than the time hilbert2 posted.
 
  • #18
They have to be compared on the same computer and run at a high, uninterrupted priority on a dedicated core.
 
  • #19
I believe one problem with such a comparison is that in many (most?) well-written programs in e.g. Matlab or even Python, you will find that most of the time is spent calling routines that are already coded in, say, C; and in some cases they even use the same routines (say LAPACK or FFTW).
Hence, you wouldn't necessarily be comparing the languages as such but the libraries they use to do the "heavy lifting".
Actually solving problems like the Navier-Stokes equations directly in ANY high-level language sounds extremely inefficient, and I don't imagine it is needed very often.
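
To make that concrete, here is a small sketch (my own toy example, in Julia only because it is short): the naive triple loop below exercises the language itself, while the A*B and A\b calls spend essentially all of their time inside the BLAS/LAPACK routines they dispatch to, so those two lines should time roughly the same from any language that wraps the same libraries.

Code:
using LinearAlgebra

# Hand-written matrix multiply: this is the "pure language" part a benchmark measures.
function naive_mul(A, B)
    n = size(A, 1)
    C = zeros(n, n)
    for i in 1:n, j in 1:n, k in 1:n
        C[i, j] += A[i, k] * B[k, j]
    end
    return C
end

n = 500
A, B, b = rand(n, n), rand(n, n), rand(n)

t_naive  = @elapsed naive_mul(A, B)   # pure-Julia loops (first call includes compilation time)
t_blas   = @elapsed A * B             # dispatches to BLAS (dgemm)
t_lapack = @elapsed A \ b             # dispatches to LAPACK (LU solve)

println("naive: ", t_naive, " s,  BLAS A*B: ", t_blas, " s,  LAPACK A\\b: ", t_lapack, " s")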
 
  • Like
Likes aaroman, Klystron, Stephen Tashi and 1 other person
  • #20
Mark44 said:
Compiled as straight C code, release version, VS 2015, on Intel i7-3770 running at 3.40 GHz
Output:
x: 0.045400
time (s): 0.297000
This time is about 10% less than the time hilbert2 posted.

This kind of calculation is an example of something that can't be parallelized, because the value of x after the nth iteration of the for loop depends on what it was after the (n-1)th iteration. On the other hand, when doing something like a matrix-vector product ##Ax=y## between an ##N\times N## matrix ##A## and an N-vector ##x##, you can compute several sums of the form ##y_k = \sum_{l=1}^{N} A_{kl}x_l## at the same time on different processors, as they are independent.
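
A rough sketch of that in Julia (assuming it is started with several threads, e.g. "julia -t 4"; the function names are just my own):

Code:
using Base.Threads

# Each y[k] = sum_l A[k,l]*x[l] is independent of the others,
# so the rows can be handed out to different threads.
function matvec_threaded(A, x)
    N = size(A, 1)
    y = zeros(N)
    @threads for k in 1:N
        s = 0.0
        for l in 1:N
            s += A[k, l] * x[l]
        end
        y[k] = s
    end
    return y
end

# The running product cannot be split up the same way:
# iteration n needs the value produced by iteration n-1.
function running_product(n)
    x = 1000.0
    for _ in 1:n
        x *= 0.9999999
    end
    return x
end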
 
  • #21
The Julia example is missing something important: Julia only compiles things inside functions! So the Julia timing was for interpreted rather than compiled code.
Just wrap a function around it,
Code:
function comp()
    x = 1000.0
    for k in 1:100000000
        x *= 0.9999999
    end
    return x
end

I get these times
Code:
julia> t1 = time_ns(); x = comp(); t2 = time_ns();
julia> println(x);print("time (s): ",(t2 - t1)/1.0e9,"\n")
0.04539990730150107
time (s): 0.148448348

versus C++:
Code:
~/tmp> g++ -O foo.cc -o foo
~/tmp> foo
x=0.0453999
time (s): 0.149457

Another important factor is "type stability" -- the type of a variable should not change within a function.
Have a look here: http://www.stochasticlifestyle.com/7-julia-gotchas-handle/
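
Here is a toy illustration of the type-stability point (my own example, not from the linked page): in the first function the accumulator starts as an Int and becomes a Float64 inside the loop, so the compiler cannot give it a single concrete type; starting it from zero(eltype(v)) fixes that, and @code_warntype shows the difference.

Code:
using InteractiveUtils   # for @code_warntype when running as a script

# Type-unstable: s starts as the Int 0 and becomes a Float64 after the first addition.
function sum_unstable(v)
    s = 0
    for x in v
        s += x
    end
    return s
end

# Type-stable: s has the element type of v from the start.
function sum_stable(v)
    s = zero(eltype(v))
    for x in v
        s += x
    end
    return s
end

v = rand(10^6)
@code_warntype sum_unstable(v)   # reports s::Union{Float64, Int64}
@code_warntype sum_stable(v)     # s is inferred as Float64 throughout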

hilbert2 said:
I compiled and ran the C++ code ... So quite a large difference in favor of C++ with this kind of calculation, at least.
 
  • Like
Likes hilbert2 and FactChecker
  • #22
Thanks for the helpful info, bw. :)

I checked the activity monitor when solving the diffusion equation numerically, and it seemed that the Julia program really did use multiple processors while the C++ version used only one.
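
My guess (and it is only a guess) is that the multiple cores come from the multithreaded BLAS/LAPACK library behind the backslash operator rather than from Julia parallelizing my code itself. Assuming a reasonably recent Julia, something like this should show which is which:

Code:
using LinearAlgebra

println("Julia threads: ", Threads.nthreads())        # threads available to Julia code itself
println("BLAS threads:  ", BLAS.get_num_threads())    # threads used by the linear-algebra library

# Pinning the library to one thread should make the A\b solve
# behave like single-core code again:
BLAS.set_num_threads(1)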
 
  • Like
Likes FactChecker
  • #23
Interest in benchmarks does not fade through the years. But I think that magazine articles and blog posts are the usual venue, not peer reviewed papers. The reason is that to make scientifically significant conclusions, the benchmark suite must include the entire spectrum of applications the language is used for. In other words, mind numbingly huge. A blog post can show benchmarks for a niche.

Every benchmark comparison attracts both praise and criticism for the benchmarks chosen and the details of the tests. That too I expect to remain unchanged for the foreseeable future.
 
  • #24
On the Julia website there's a set of benchmarks comparing various numerical languages:

www.julialang.org

One strength of Julia is that it can interoperate with other languages, allowing you to mash up components in different languages to get things done.

Be aware that benchmarking can favor one language over another through the choice of algorithms, their implementations, and the environment they are run in.

Database vendors routinely compete with each other, running tests that are curated to favor their own product.
 
  • #25
BYTE magazine ran a series of such coding tests in the 1980s. The algorithm chosen was the "Sieve of Eratosthenes" (finding all primes in a range of integers).

The program is written in several programming languages and documented at http://rosettacode.org/wiki/Sieve_of_Eratosthenes.
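
For concreteness, a minimal version in Julia (just an illustrative sketch, not the code from BYTE or Rosetta Code):

Code:
# Sieve of Eratosthenes: return all primes up to n.
function sieve(n)
    isprime = trues(n)
    isprime[1] = false
    for p in 2:isqrt(n)
        if isprime[p]
            for m in p*p:p:n
                isprime[m] = false
            end
        end
    end
    return findall(isprime)
end

println(sieve(50))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]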
 
  • Like
Likes FactChecker
  • #26
hilbert2 said:
A little question about the appropriateness of a certain research subject...

Would it be useful to make a study of the computational effectiveness of equivalent codes written in Matlab, Mathematica, R, Julia, Python, etc.

The simple answer is no.

The long answer is that all languages eventually get reduced down to machine code. The difference in speed between languages more or less depends upon how many layers these languages have to go through in order to accomplish a task and how often they have to do it. There is also a factor in how good a compiler is at creating optimal machine code, but layering would probably dwarf that for the most part.

A much more useful study is what algorithm gives you the lowest complexity and what dependencies it has. A good example is sorting algorithms: when would you use a hash sort vs a merge sort vs a selection sort? This will provide you much more useful information than a comparison between languages.
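
To see how much that matters in practice, here is a toy comparison in Julia (names and sizes are just illustrative): the gap between an O(n^2) selection sort and the built-in O(n log n) sort grows with n and quickly swamps constant-factor differences between languages.

Code:
# O(n^2) selection sort, written naively on purpose.
function selection_sort!(v)
    n = length(v)
    for i in 1:n-1
        m = i
        for j in i+1:n
            if v[j] < v[m]
                m = j
            end
        end
        v[i], v[m] = v[m], v[i]
    end
    return v
end

v = rand(20_000)
t_selection = @elapsed selection_sort!(copy(v))   # O(n^2)
t_builtin   = @elapsed sort(v)                    # O(n log n)
println("selection sort: ", t_selection, " s,  built-in sort: ", t_builtin, " s")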
 
  • #27
SixNein said:
The simple answer is no.

The long answer is that all languages eventually get reduced down to machine code. The difference in speed between languages more or less depends upon how many layers these languages have to go through in order to accomplish a task and how often they have to do it. There is also a factor in how good a compiler is at creating optimal machine code, but layering would probably dwarf that for the most part.

A much more useful study is what algorithm gives you the lowest complexity and what dependencies it has. A good example is sorting algorithms: when would you use a hash sort vs a merge sort vs a selection sort? This will provide you much more useful information than a comparison between languages.

I had a bit of fun with this, and it illustrates SixNein's points completely. First, I chose--as always when I have a choice--to write in Ada. Why use Ada? Aren't those compilers for embedded systems and very expensive? Some are, but there is an Ada compiler, GNAT, built into gcc. The advantage over using C with gcc is that the GNAT toolset manages the linking so you don't have to create a makefile. The code you get for the same algorithm in Ada and C should run at exactly the same speed. Of course, if you make significant use of Ada generics or tasking, lots of luck generating the same code in C. But that doesn't happen here. I'll put the complete code in the next post, so you can compile and run it on your system if you like.

I wanted to try writing the key loop two ways in Ada. That became three, and then four, and I needed details on the real-time clock Ada was providing, and the resolution of the time types provided with it, to understand the results. I broke up the second example so compilations wouldn't last for hours while I was debugging my code. Why did I have to break up the second case? Ada rules say that 0.9999999999**100000000 is a numeric literal, where ** is exponentiation, and it has to be computed exactly. ;-) The problem isn't evaluating the expression; it is keeping all those decimals around while doing so. The compiler (gcc) runs for around four hours, then hits Storage_Error when the number is too big to fit in 2 Gigabytes. (Yes, I have much more memory than that on my system, but the limit is on the bignum type.) Anyway, I broke the numeric expression up differently than in the third case, and the whole program now compiles in under a minute.

The next issue is why I have a lot of large multipliers in the timing code, and why I added the output at the head. I found that the compiler was using 80-bit arithmetic for Long_Long_Float and storing it in three 32-bit words. Interesting. But let me show you the full output:

Sanity Checks.
Long_Long_Float'Size is 96 bits.
Duration'Small is 0.0010 Microseconds.
Real_Time.Tick is 0.2910 Microseconds.
Real_Time.Time_Unit is 0.0010 Microseconds.

Multiplication result was 0.045399907063 and took 147318.304 Microseconds.
Exponentiation result was 0.045399907063 and took 0.291 Microseconds.
Exponentiation 2 result was 0.045399907062 and took 0.583 Microseconds.
Fast exponentiation result was 0.045399907062 and took 0.875 Microseconds.

I gave up on trying to get that monospaced. \mathtt gets a typewriter font in LaTeX, but then I have to redo the spacing and line breaks explicitly. Not worth the trouble. Now to discuss the results. That first time looks huge, but it is in microseconds. It is actually 0.1473... seconds. That may be the fastest time so far reported here, but if I ran the C code I should get close to the same thing. But the thing you should do if your program is too slow is not to look for tweaks here and there, but to use a better algorithm. I understand that this program was intended as a benchmark, but these results, one, two, and three clock ticks respectively, and for a fast real-time clock, indicate that there is some magic going on under the hood. When I wrote the second case (exponentiation) I realized that the compiler was going to try to do everything at compile time, and it did. But Ada rules say that numeric literals evaluated at compile time must be evaluated exactly. Breaking the expression up this way (X := 1000.0; Y := 0.999_999_9**10_000; Y := Y**10_000; X := X*Y; ) took maybe thirty seconds of grinding at compile time, then stuffed the first 64 bits plus exponent into Y, and raised that to the 10,000th power at run-time. But how did it do that last step so quickly? Probably by calling the built-in power function in the chip.

We can guess that exponentiation 2 used the same trick, but calling the power function with an exponent of 100,000,000 instead of 10,000 apparently used another clock tick. (Incidentally, if the clock function is called twice during a tick it adds the smallest increment here, one nanosecond, to the value returned. Twice that for the third call, and so on. This means that you always get a unique value for the clock call. With a six core processor, and this code running on just one core, this can add a few nanoseconds to the value which should be ignored. It can also subtract nanoseconds if the starting call to clock is not the first call in that interval.)

Finally, the third approach can't be short-circuited by trig or exponential functions. It computes 0.9999999 times itself 100,000,000 times. That code would work even if both values were entered from the keyboard when the program was already running, and it did the calculation 168 thousand times faster.

So:
1. Use a language which makes the structure of the problem visible.
2. Use that to find a better algorithm, if needed.
 
  • #28
@eachus, I wish no offense, but in summary, was there a run-time difference between C and Ada? I like a "you were there" type of description, but only after a summary that tells me whether it is worth reading the details.
These types of timing comparisons, of a single calculation done many times, may not reflect the true difference between language execution speeds.
 
  • #29
-- Save this file as power.adb, open a command window.
-- Invoke gcc with "gnatmake -O3" if you have the gnat tools and libraries installed.
-- That command will issue "gcc -c -O3 power.adb" then call gnatlink and gnatbind.
-- Type power in a command window to execute the program.
with Ada.Text_IO;   use Ada; use Ada.Text_IO;
with Ada.Real_Time; use Ada.Real_Time;

procedure Power is
   Start   : Time;
   Elapsed : Time_Span;
   X, Y    : Long_Long_Float := 1000.0;
   package Duration_IO   is new Fixed_IO (Duration);
   package Long_Float_IO is new Text_IO.Float_IO (Long_Float);
begin
   Text_IO.Put_Line (" Sanity Checks.");
   Text_IO.Put_Line (" Long_Long_Float'Size is" &
                     Integer'Image (Long_Long_Float'Size) & " bits.");
   Text_IO.Put (" Duration'Small is ");
   Duration_IO.Put (Duration'Small * 1000_000, 2, 4, 0);
   Text_IO.Put_Line (" Microseconds.");
   Text_IO.Put (" Real_Time.Tick is ");
   Duration_IO.Put (To_Duration (Tick * 1000_000), 2, 4, 0);
   Text_IO.Put_Line (" Microseconds.");
   Text_IO.Put (" Real_Time.Time_Unit is ");
   Duration_IO.Put (Duration (Time_Unit * 1000_000), 2, 4, 0);
   Text_IO.Put_Line (" Microseconds.");
   -- Print Duration'Small, Real_Time.Tick, and Real_Time.Time_Unit
   -- to understand the issues that can come up if they are
   -- inappropriate or useless.

   New_Line;
   X := 1000.0;
   Start := Clock;
   for I in 1 .. 100_000_000 loop
      -- Some differences are just doing things the Ada way,
      -- like using underscores to make reading numbers easier.
      -- Here I could have written 0..1E8-1. If I were actually
      -- used for something other than a loop count, I might have.
      X := X * 0.999_999_9;
   end loop;
   Elapsed := Clock - Start;
   Ada.Text_IO.Put ("Multiplication result was ");
   Long_Float_IO.Put (Long_Float (X), 4, 12, 0);
   Ada.Text_IO.Put (" and took ");
   Duration_IO.Put (To_Duration (Elapsed * 1000_000), 2, 3, 0);
   Text_IO.Put_Line (" Microseconds.");

   Start := Clock;
   X := 1000.0;
   Y := 0.999_999_9**10_000; -- Lots of CPU time at compile.
   Y := Y**10_000;           -- Broken up to avoid a compiler crash.
   X := X * Y;               -- Ada requires evaluating literal expressions exactly.
   Elapsed := Clock - Start;
   Ada.Text_IO.Put ("Exponentiation result was ");
   Long_Float_IO.Put (Long_Float (X), 4, 12, 0);
   Ada.Text_IO.Put (" and took ");
   Duration_IO.Put (To_Duration (Elapsed * 1000_000), 2, 3, 0);
   Text_IO.Put_Line (" Microseconds.");

   Start := Clock;
   X := 1000.0;
   X := X * 0.999_999_9**Integer (100.0 * X * X); -- Not a numeric literal.
   Elapsed := Clock - Start;
   Ada.Text_IO.Put ("Exponentiation 2 result was ");
   Long_Float_IO.Put (Long_Float (X), 4, 12, 0);
   Ada.Text_IO.Put (" and took ");
   Duration_IO.Put (To_Duration (Elapsed * 1000_000), 2, 3, 0);
   Text_IO.Put_Line (" Microseconds.");
   -- That may do the same. I may have to pull over the Ident_Int function from the ACVC tests.

   -- As a compiler writer this is the sort of optimization I would want to happen,
   -- if the value raised to a power, and the power, were variables:

   declare
      I        : Integer := 1;
      Value    : Long_Long_Float := 0.999_999_9;
      Exponent : Integer := 100_000_000;
      Result   : Long_Long_Float := 1.0;
      Powers   : array (Integer range 1 .. 32) of Long_Long_Float;
      Control  : array (Integer range 1 .. 32) of Integer;
   begin
      Start := Clock;
      X := 1000.0;
      Powers (1) := Value;
      Control (1) := 1;
      while Control (I) <= Exponent loop
         Powers (I + 1) := Powers (I) * Powers (I);
         Control (I + 1) := Control (I) + Control (I);
         I := I + 1;
      end loop;
      for J in reverse 1 .. I loop
         if Control (J) <= Exponent then
            Result := Powers (J) * Result;
            Exponent := Exponent - Control (J);
         end if;
      end loop;
      X := X * Result;
      Elapsed := Clock - Start;
      Ada.Text_IO.Put ("Fast exponentiation result was ");
      Long_Float_IO.Put (Long_Float (X), 4, 12, 0);
      Ada.Text_IO.Put (" and took ");
      Duration_IO.Put (To_Duration (Elapsed * 1000_000), 2, 3, 0);
      Text_IO.Put_Line (" Microseconds.");
   end;
end Power;
 
  • #30
If you have ever been on a project that fell into the Ada "strict typing hell", then you know that the advertised Ada development advantages are not guaranteed. And upper management often prefers strict rules over "best programmer judgement". That preference can lead straight to the Ada "strict typing hell" (among other bad things).
 
  • #31
COBOL had this issue of strictness too. It wasn't so bad, though, because we'd use an older program as a template for the newer one. I remember the classic error of forgetting a single period in the IDENTIFICATION section and winding up with literally hundreds of errors as the compiler failed to recover from it.
 
  • #32
FactChecker said:
If you have ever been on a project that fell into the Ada "strict typing hell", then you know that the advertised Ada development advantages are not guaranteed. And upper management often prefers strict rules over "best programmer judgement". That preference can lead straight to the Ada "strict typing hell" (among other bad things).

One part of my job at MITRE, and there were half a dozen of us who did this, was to get all of the misunderstandings about Ada out of the software design rules well before coding started on Air Force electronics projects. Sometimes, though, we ran into managers who had added their own rules gotten out of a magazine somewhere. Like: can't use Unchecked_Conversion. All UC means is that it is the programmer's job to wrap it in any necessary checks. Free almost always is UC, because it is your job to be sure there are no other accesses out there. Another is the one you are complaining about. I didn't declare any non-standard types in that fragment. Where you should use your own types is 1: in generics and 2: when physical units are involved. There were a slew of nice papers on how to have one type for SI units that covered most units, with the type checking done at compile time. I preferred to stick to things like measuring time in Duration, with constants like milliseconds being defined for use when declaring values. Anyway, define one type per package, and convert the package to a generic if necessary. (This doesn't apply to enumeration types used for convenience: type Color is (Red, ... or type Switch is (Off, On); although you might want to do that one as Off: constant Boolean := False; and so on.)

The most important rule in Ada programming though, is that if the language seems to be getting in your way, it is trying to tell you something. If you are getting wrapped around the axle, ask what is the simplest way to do what you are trying to do, then figure out why you can't do that. Adding a parameter to a subprogram, or a subprogram to a package may require changing the design documents. Just realize that people make mistakes and hope no one will blow up. (Because their work, of course, was perfect.)
 
  • Like
Likes FactChecker
  • #33
eachus said:
One part of my job at MITRE, and there were half a dozen of us who did this, was to get all of the misunderstandings about Ada out of the software design rules well before coding started ...
Just realize that people make mistakes and hope no one will blow up. (Because their work, of course, was perfect.)

That was a most interesting post. It reminds us that, until the day when we turn over coding to AIs, rules and discipline must give way to humanity. Humans writing code will always remain partially an art.

I recently watched a very interesting documentary (see below) about MIT's Draper Labs and the navigation computers for the Apollo moon missions. According to this, the software project was nearly a disaster under the loosey-goosey academic culture until NASA sent in a disciplinarian. After a tough time, the software got finished and performed admirably for Apollo 8 and 11.

My point is that you can err in either direction, too much discipline or too much humanity. Finding the right balance has little to do with programming languages.

 
  • #34
Dr.D said:
I'm sure that this is true. I would argue, however, that this is a specialist concern, not a general user concern. How many folks do you suppose are doing those problems that run for days on a supercomputer?
Personally, I did in several different scientific areas.
 
  • #35
anorlunda said:
That was a most interesting post. It reminds us that, until the day when we turn over coding to AIs, rules and discipline must give way to humanity. Humans writing code will always remain partially an art.

I recently watched a very interesting documentary (see below) about MIT's Draper Labs and the navigation computers for the Apollo moon missions. According to this, the software project was nearly a disaster under the loosey-goosey academic culture until NASA sent in a disciplinarian. After a tough time, the software got finished and performed admirably for Apollo 8 and 11.

My point is that you can err in either direction, too much discipline or too much humanity. Finding the right balance has little to do with programming languages.



I remember that. As a freshman I got into a class under Doc Draper* at the I-Lab (much later Draper Labs). I got assigned to a project to determine whether early ICs (I think they had three transistors and six diodes) were any good or not. In the lab, chips which had worked for months would suddenly fail. I had what I thought was a very simple idea: if failure equalled too slow, I should test not static switching voltages but whether the 10% to 90% (or vice versa) output voltage swing was taking too long. It turned out you only had to test one transistor; all of them on a chip had pretty much identical characteristics. Why did this time-domain measurement matter? When the transistor was switching, it was the highest-resistance component in the circuit. So the chips that switched too slowly eventually overheated and died. Of course, what killed one chip might not touch the one next to it, because it hadn't had to switch as often.

I remember Apollo 8 being given the go for TLI (trans-lunar injection); that was the vote that said I had done my job.

As for the 1202 alarms, I was at my parents' home, where we were celebrating one of my sisters' 18th birthday. All the family was there, including my father, who had designed power supplies for radars at Cape Canaveral (before it became KSC), and my grandfather, who had literally learned to fly from the Wright Brothers well before WWI. Of course, every time the TV talking heads said computer problem, my first reaction was, oh no! I goofed. Then I realized it was a software issue. Whew! Not my problem.

Finally, Apollo 11 landed and while they were depressurizing the LM, I started to explain that the pictures we were going to see, live from the moon were going to be black&white, not color like Apollo 8, and why.

My mother interrupted, "Live from the moon? Live from the moon! When I was your age we would say about as likely as flying to the moon, as a way to indicate a thing was impossible."
"Helen," her father said, "Do you remember when I came to your room and said I had to go to New York to see a friend off on a trip? I never thought Lindbergh would make it!" (My grandfather flew in the same unit as Lindbergh in WWI. Swore he would never fly again and didn't. My grandmother, his wife, flew at least a million miles on commercial airlines. She worked for a drug company, and she was one of their go to people for getting convictions against druggists diluting the products, or even selling colored water. (She would pick up on facial features hard to disguise, so that if they shaved a beard, died their hair, etc., she could still make a positive ID, and explain it to the judge.)

They both lived long enough to see pictures from Voyager II at Uranus, and my mother to see pictures from Neptune.

* There were very few people who called Doc Draper anything other than Doc Draper. His close friends call him Doc. I have no idea what his mother called him. (Probably Sonny like my father's mother called him.)
 
  • Like
Likes NotGauss and anorlunda
