
Comparison of high-level computer programming languages

  1. Jul 27, 2018 #41

    Klystron

    User Avatar
    Gold Member

    Consider the importance of near-time calculations to experimenters operating a wind tunnel to generate and collect fluid dynamics data for subsequent analysis.

    Suppose we are testing a scale model of a Boeing 777 wing mounted in a subsonic wind tunnel to determine the effects a winglet has on laminar flow around the primary wing as alpha -- the angle of attack -- varies. The wind tunnel software computes and displays the Reynolds number -- a measure of the transition from laminar to turbulent flow -- alongside alpha to guide operations in near-time and maximize use of resources; perhaps by restricting angle of attack past a selected turbulence threshold, or by inhibiting full-scale data collection when turbulence exceeds the flight envelope (operational limits) of an actual 777.

    https://en.wikipedia.org/wiki/Reynolds_number
    See also "The Wind Tunnels of NASA" and NASA ARC Standardized Wind Tunnel System (SWTS).

    The system programmer provides not only real-time data collection but also near-time data sampling and computation of vital measurements, such as the Reynolds number, while the experiment runs. The wind tunnel software computes selected values as quickly and as error-free as possible in order to provide the best data during run time for later (super)computation. The software engineer recognizes that some computations are time-critical for operational reasons; later fluid-dynamics computations may be time-sensitive due to cost and supercomputer time sharing.
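    The near-time Reynolds-number display described above can be sketched in a few lines. This is an illustrative sketch only -- the function name, fluid properties, and example values are my assumptions, not the actual SWTS software:

```python
# Illustrative sketch, not the NASA SWTS code.
# Reynolds number: Re = rho * v * L / mu  (dimensionless)

def reynolds_number(rho, v, length, mu):
    """rho: fluid density [kg/m^3], v: flow speed [m/s],
    length: characteristic length [m], mu: dynamic viscosity [Pa*s]."""
    return rho * v * length / mu

# Example: sea-level air over a hypothetical 0.5 m chord model at 50 m/s
re = reynolds_number(rho=1.225, v=50.0, length=0.5, mu=1.81e-5)
print(f"Re = {re:.3e}")  # on the order of 1.7e6
```

    In a real wind-tunnel system this calculation would run in the near-time loop and be compared against the selected turbulence threshold before each data-collection pass.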
     
  2. Aug 6, 2018 #42
    Ultimately all compilers, or the compilers used to compile those compilers, were written in C/C++. C can do anything, with no restrictions. It's easy to write very efficient, very fast code; it's also just as easy to shoot yourself in the foot. But remember the old adage: "There will never be a programming language in which it is the least bit difficult to write terrible code." That said, C# does have one thing going for it: the application developer can allow third parties and end users to extend the application through code while restricting which system functions that code has access to. So users can share code on the internet without worrying about getting a virus, as long as the developer has locked out I/O system calls or put them behind appropriate custom versions of those functions.
     
  3. Aug 6, 2018 #43

    anorlunda

    Staff: Mentor

    How does an end user do that?

    How can an end user audit the safety practices of the developer?

    As long as there is an "as long as" proviso, the prudent end user must presume that the proviso is not met.
     
  4. Aug 8, 2018 #44
    Results:
    Multiplication result was 0.045399907063 and took 147318.304 Microseconds.
    Exponentiation result was 0.045399907063 and took 0.291 Microseconds.
    Exponentiation 2 result was 0.045399907062 and took 0.583 Microseconds.
    Fast exponentiation result was 0.045399907062 and took 0.875 Microseconds.

    Sorry if it wasn't clear. The first result, taking 0.147318 seconds, was comparable to, but faster than, all the previously published results. The next three results took advantage of much better optimization and took less than one microsecond each; all were over 100,000 times faster than the first result. The fact that these three approaches took one, two, and three clock ticks should not be taken to mean any one of them was better than the others. (All were better than the first result.) If I really needed something that fast, I'd run the program 20 times or so to make sure that the results were consistent. But once you get the optimizer to knock out over 99.999 percent of the execution time, I wouldn't worry.
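    The original benchmark was presumably in Ada or C (see the later posts); as a language-neutral illustration of the "run it 20 times" advice, here is a hedged Python sketch using `timeit.repeat`. The `slow_exp` stand-in is invented for illustration and is not the computation under discussion:

```python
import math
import timeit

def slow_exp(x, n):
    """Stand-in 'multiplication' version: computes exp(x) by
    n repeated multiplications of exp(x/n)."""
    result = 1.0
    base = math.e ** (x / n)
    for _ in range(n):
        result *= base
    return result

# repeat=5 runs the whole timing loop five times; consistent minima
# across repeats suggest the work being timed was not quietly elided.
times = timeit.repeat(lambda: slow_exp(-3.0, 100), repeat=5, number=1000)
print(min(times), max(times))

# Sanity check: the timed code really computes the right answer.
print(abs(slow_exp(-3.0, 100) - math.exp(-3.0)) < 1e-9)
```

    Checking both the spread of the timings and the correctness of the result guards against the optimizer-removed-the-work pitfall raised in the next post.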
     
  5. Aug 8, 2018 #45

    FactChecker

    User Avatar
    Science Advisor
    Gold Member
    2017 Award

    This sounds too good to be true. I don't know exactly what you ran or are comparing, but that is too much improvement from an optimizer. One thing to look for is that the "optimized" version may not really be running the same number of loop iterations or the same calculation. That can happen when some identical calculation is done time after time and the optimizer moves that code out of the loop.

    PS. Although pulling repeated identical calculations out of a loop is a good optimization step, it is not representative of the average speedup you can expect from an optimizer.

    PPS. The last time I saw a speedup like that, the "fast" version had completely removed a loop of 1000 iterations and was only executing the calculations once.
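    The loop-invariant hoisting mentioned in the PS can be demonstrated directly. The sketch below (illustrative Python, not the benchmark under discussion) applies the same transformation an optimizer may apply silently, and checks that the answer is unchanged:

```python
import math

def naive(xs, theta):
    # recomputes math.cos(theta) on every iteration
    return [x * math.cos(theta) for x in xs]

def hoisted(xs, theta):
    # the invariant cos(theta) is computed once, outside the loop --
    # the transformation an optimizing compiler may perform for you
    c = math.cos(theta)
    return [x * c for x in xs]

xs = list(range(1000))
assert naive(xs, 0.5) == hoisted(xs, 0.5)  # identical results
```

    Note that this kind of hoisting yields a real but modest speedup, proportional to the cost of the hoisted expression -- nowhere near a factor of 100,000 unless the entire loop body becomes dead code.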
     
    Last edited: Aug 9, 2018
  6. Aug 15, 2018 #46

    bhobba

    Staff: Mentor

    Speed comparisons often reveal strange things:
    http://alexeyvishnevsky.com/2015/05/lua-wraped-python/

    It turns out that for many tasks a highly optimized just-in-time compiled language like Lua is as fast as C -- the relevant version of Lua is LuaJIT:
    http://luajit.org/

    But as the above shows, even plain interpreted Lua is pretty fast -- the same as C in that application -- though personally I use LuaJIT.

    It's easy to call Lua from Python using a C program as glue. Personally, on the very rare occasions I program these days, I just write it in Python. Usually it's fast enough; if it isn't, I add some write statements to see which bits it's spending most of its time in, write those in Lua, and call them from Python. For simple programs I just write in MoonScript, which compiles to Lua, from the start. I only did it on a couple of programs while I was professionally programming, but for really critical parts I write in assembler. I use C only for glue -- it's good for that; most languages can call, or be called from, other languages using C. Although the link I gave used some functionality integrated into Python to execute Lua -- called Lupa, an extension to CPython. So for me it goes like this: Python, Lua, and rarely assembler.
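    The "see which bits it's spending most time in" step need not be done with write statements; Python's standard-library profiler does it directly. A minimal sketch -- the `hot_spot` and `workload` functions are made-up stand-ins for the code you would profile before rewriting anything in Lua:

```python
import cProfile
import io
import pstats

def hot_spot(n):
    # deliberately heavy inner work -- the candidate for a Lua/C rewrite
    return sum(i * i for i in range(n))

def workload():
    return [hot_spot(10_000) for _ in range(50)]

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Print the five most expensive calls by cumulative time.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

    Only the functions that dominate the cumulative-time column are worth moving to a faster language; the rest can stay in Python.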

    Thanks
    Bill
     
    Last edited: Aug 15, 2018
  7. Aug 15, 2018 #47

    FactChecker

    User Avatar
    Science Advisor
    Gold Member
    2017 Award

    The languages discussed were Ada and C. I don't know exactly what was being compared or run when the claim was made that the optimizer option sped execution up by a factor of 100 thousand. No optimizer can do that. It would imply that a version of the Ada or C program was inconceivably slow.
     
  8. Aug 15, 2018 #48

    bhobba

    Staff: Mentor

    Nor do I. I was simply pointing out that speed comparisons are a strange beast. I highly doubt any optimizer can do that -- the big speed-ups usually come from two things:

    1. Static typing, as you can do in Cython.
    2. Compiling rather than interpreting. Just-in-time compiling is nowadays as fast as ahead-of-time compiling (GCC binaries now run about as fast as LLVM's), hence the rise of LLVM as an intermediate form that languages are compiled to; you then simply implement LLVM on your machine. I suspect JITs will eventually exceed the performance of optimized direct compiles -- just my view.

    But to be clear, you do NOT achieve that type of speed-up with optimizing compilers. JIT compilers, and optimizing them, seem to be the way of the future -- but that will not do it either.

    Thanks
    Bill
     
  9. Aug 16, 2018 #49

    FactChecker

    User Avatar
    Science Advisor
    Gold Member
    2017 Award

    Interpreted languages are a different beast from C or Ada, and large speed differences should not be surprising. But those types of speed issues are caused by drastically different and identifiable approaches. Often the solution is to invoke libraries written in C from the higher-level interpreted language. Python is known to be relatively slow and to benefit from the use of faster libraries.

    That being said, I have never seen a speed difference as large as 100 thousand times unless a looping process with thousands of iterations was completely eliminated. In my experience, even context switches and excessive function calls do not cause those types of slowdowns. It is possible that the computer's operating system is messing up one of the options and not the other, but I am assuming that those issues have been eliminated.
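    The "invoke C libraries from the interpreted language" point is easy to see even with Python's built-ins, since `sum()` is implemented in C. A small illustrative comparison (the magnitudes will vary by machine):

```python
import timeit

def python_loop(xs):
    # pure-Python summation, one bytecode dispatch per element
    total = 0.0
    for x in xs:
        total += x
    return total

xs = [float(i) for i in range(100_000)]

t_loop = timeit.timeit(lambda: python_loop(xs), number=50)
t_builtin = timeit.timeit(lambda: sum(xs), number=50)
print(f"pure-Python loop: {t_loop:.4f}s, C-implemented sum(): {t_builtin:.4f}s")
```

    The C-implemented `sum()` is typically several times faster -- a real but modest factor, consistent with the point above that nothing short of eliminating the work entirely produces a factor of 100 thousand.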
     
    Last edited: Aug 16, 2018
  10. Sep 10, 2018 #50
    OK, a lot of these responses are exceptionally valuable in their own right, so I won't go into details, but I would suggest you question why you're asking this question (no pun intended).

    On the one hand, everything in a high-level language ultimately has to be done at a low level, and low-level code is typically faster. So should you always use C over Matlab?

    No. In fact, since Matlab is a high-level language, it can do many things under the hood that you might not necessarily want to get involved with. For example, should you be using integers, longs, or 128-bit integers? What if you need to decide that dynamically? What about multithreading? Do you really want to get involved with mutexes, race conditions, and shared memory?
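    One concrete instance of that under-the-hood work, using Python here as an illustrative stand-in for a high-level language: it promotes to arbitrary-precision integers automatically, where C would silently wrap a fixed-width type:

```python
# In C, an unsigned 64-bit integer wraps past 2**64 - 1.
# Python switches to arbitrary precision transparently -- the
# "should I use 128-bit integers?" decision is made for you.
big = 2 ** 128 + 1
print(big % 10)            # exact arithmetic, no overflow
print(big - 1 == 2 ** 128) # True
```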

    If you know for a fact, at the machine level, what you want to be doing, and that is the absolute best approach you know of, C/C++/D have no substitute. They do the least amount of work for you and are compiled languages, so the trade-offs are in your favour. But it will take longer to write.

    If, on the other hand, you know what your result looks like and you'd be Googling for the algorithm to compute it efficiently, then you're better off using a pre-built library. In fact, even an otherwise inefficient platform, since it does a lot of the optimisations for you, can outperform your hand-written C++ code, simply because it knows better.

    So the real question to ask is what is more important to you: spending a day writing low-level code that displays the results near-instantly, so you get them tomorrow, or jotting down high-level code in an hour that takes a few minutes to run. If it's the results you want, then obviously use the high-level stuff. If you want to re-use your code as a fast library, then use the former.

    There's no simple, one-size-fits-all answer.
     
  11. Sep 10, 2018 #51

    anorlunda

    Staff: Mentor

    One thing I think is undeniably true is that programming languages are the most fun of all topics among programmers.

    I'm reminded of when the Ada language was first introduced. They published a document called the Rationale, explaining why they wanted this new language. The Rationale (to the best of my memory) said that in the history of DOD software projects, every single project had created its own language. The exception was JOVIAL, which had been used in two projects. Ada was intended to be the one and only language for all future projects.

    So, did Ada become the language to end all languages? Heck no.

    I'm confident that as long as humans write software, they will continue creating new programming languages, and there will be a credible rationale for each and every one of them.
     
  12. Sep 10, 2018 #52

    FactChecker

    User Avatar
    Science Advisor
    Gold Member
    2017 Award

    Good advice, but I think that you are being very conservative in your estimates. Using a low-level language to mimic what one can get in one hour of MATLAB programming could easily take weeks of programming.
     
  13. Sep 10, 2018 #53
    That’s assuming you could get equivalent behaviour. Most Matlab functions are exceptionally smart and catch things like poor conditioning early. Besides, when was the last time Python segfaulted because you used a negative array index?
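    For readers unfamiliar with the reference: Python defines negative indices to count from the end of the sequence, so the access that segfaults (or silently reads garbage) in C is well-defined behaviour:

```python
data = [10, 20, 30]
print(data[-1])  # last element: 30
print(data[-3])  # first element: 10
# data[-4] raises a clean IndexError -- not a segfault
```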
     
  14. Sep 19, 2018 #54

    cronxeh

    User Avatar
    Gold Member

    I think for general benchmarks (e.g. Python vs Java) there are already good ballpark figures out there (e.g. https://benchmarksgame-team.pages.debian.net/benchmarksgame/faster/python.html and https://www.techempower.com/benchmarks ), but for real-world applications it's not really worth talking about single-core execution or even single-machine execution.

    So really it comes down to speed of iteration, concurrency, parallelism, and community. I personally would not reach for C/C++, as it does not pass muster on speed of iteration, or on community for that matter. So, in my humble opinion:

    For small-scale applications, data science, and proof-of-concept work, Python 3 is the lingua franca.

    For really large-scale applications with multiple distributed teams working together, deployed across 10K+ servers, there are really only two choices: Java, and, if you like to skate uphill and write most of your own libraries for everything, Golang. There is also Scala as a contender, but it has its own issues (as in: all software problems are people problems, and with Scala you'll get "implicit hell").
     
  15. Sep 20, 2018 #55

    Svein

    User Avatar
    Science Advisor

    Python, Java, Julia, whatever: you are all assuming that there exists a software "machine" that handles all the difficult parts for you. Some of us do not have that luxury -- we write device drivers, interrupt handlers, process schedulers, and so on. In that case your environment and requirements are radically different:
    • You are writing on "bare metal". No libraries are available to help with the difficult parts.
    • Usually your routines have to be short, fast, and error-free. An Ethernet hardware driver is called millions of times each day -- bugs are not tolerated.
    • Debugging the routines calls for very special equipment (you cannot insert debugging printouts, since the high-level printing routines are not available).
    Here is an example of a small part of an interrupt driver for an Ethernet hardware chip:
    Code (C):

    /*m************************************************************************
    ***  FUNCTION: _ecInitInter
    ***************************************************************************
    ***  PURPOSE:  Sets up the interrupt structure for EC
    ***************************************************************************
    ***
    ***  WRITTEN BY     : Svein Johannessen 890711
    ***  LAST CHANGED BY: Svein Johannessen 900216
    **************************************************************************/


    #include "ec.h"
    #include "sys/types.h"
    #include "sys/mbuf.h"
    #include "ecdldef.h"
    #include "ecextrn.h"
    #include "net/eh.h"

    void (* _ecRx)() = NULL;
    void (* _ecTx)() = NULL;
    void (* _ecFatal)() = NULL;

    short _ecRxRdy();
    short _ecTxRdy();

    void interrupt EC_INT();

    u_char int_babl;                    /* babble */
    u_char int_miss;                    /* missed packet */
    u_char int_merr;                    /* memory error */
    u_char int_rint;                    /* rx packet */
    u_char int_tint;                    /* tx packet */
    u_char int_idon;                    /* init done */

    u_short _ecMERR;
    u_short _ecLastCSR0;

    EXPORT short _ecInitInter(eh_idone,eh_odone)
    void (* eh_idone)();
    void (* eh_odone)();
    {

        _ecRx = eh_idone;
        _ecTx = eh_odone;
        _ecFatal= NULL;
        _ecMERR = 0;
        _ecLastCSR0 = 0;

        /* Here someone must set up the PC interrupt vector ... */
        if ( ( _ecRx == NULL ) || ( _ecTx == NULL ) )
             return ERROR;
        return NOERROR;
    }

    /*f************************************************************************
    **  FUNCTION: _ecRxInt
    ***************************************************************************
    ***  PURPOSE:  Handles a receive interrupt
    ***************************************************************************
    ***
    ***  WRITTEN BY     : Svein Johannessen 890711
    ***  LAST CHANGED BY: Svein Johannessen 900216
    **************************************************************************/


    static void _ecRxInt()
    {
        struct  mbuf *cur_buff;
        register short rxerr, good;

        /* see if the LANCE has received a packet  */
        rxerr = _ecRecPacket(&cur_buff);        /* get address of data buffer */

        if ( cur_buff != NULL ) {
          good = (rxerr==NOERROR) && !(int_miss || int_merr);
          (*_ecRx)(cur_buff,good);
          }
        else
             int_rint = 0;
        (void)_ecAllocBufs();         /* Allocate more buffers */
    }
    /*f************************************************************************
    ***  FUNCTION: _ecTxInt
    ***************************************************************************
    ***  PURPOSE:  Handles a transmit interrupt
    ***************************************************************************
    ***
    ***  WRITTEN BY     : Svein Johannessen 890712
    ***  LAST CHANGED BY: Svein Johannessen 900418
    **************************************************************************/


    void _ecTxInt()
    {
        struct  mbuf *cur_buff;
        u_char  TxBad;
        short good, Coll;

        TxBad = _ecCheckTx(&cur_buff, &Coll);
        good = !(int_babl || int_merr || TxBad);
        if (cur_buff!=NULL)
          (*_ecTx)(cur_buff,good,Coll);
    }

    /*f************************************************************************
    ***  FUNCTION: _ecIntHandler
    ***************************************************************************
    ***  PURPOSE:  Handles an interrupt
    ***************************************************************************
    ***
    ***  WRITTEN BY     : Svein Johannessen 890712
    ***  LAST CHANGED BY: Svein Johannessen 900418
    **************************************************************************/

    /**
    ***  OTHER RELEVANT  :
    ***  INFORMATION     :
    ***
    **************************************************************************/


    extern short num_rx_buf;             /* wanted number of rx msg desc */
    extern short cnt_rx_buf;             /* actual number of rx msg desc */

    void _ecIntHandler()
    {
        register u_short IntStat;
        register u_short ErrStat;

        IntStat = RD_CSR0;

        while (IntStat & INTF) {
          _ecLastCSR0 = IntStat;
          int_babl = ((IntStat & BABL)!=0);
          if ( int_babl )
               WR_CSR0( BABL);
          int_miss = ((IntStat & MISS)!=0);
          if ( int_miss )
               WR_CSR0( MISS);
          int_merr = ((IntStat & MERR)!=0);
          if ( int_merr )
          {
                _ecMERR++;
              WR_CSR0( MERR);
          }
          int_rint = ((IntStat & RINT)!=0);
          if ( int_rint )
            WR_CSR0( RINT);
          while ( int_rint ) {
            _ecRxInt();
            int_rint = _ecRxRdy();
            }
          int_tint = ((IntStat & TINT)!=0);
          if ( int_tint ) {
            WR_CSR0( TINT);
            _ecTxInt();
            }
          int_idon = ((IntStat & IDON)!=0);
          if ( int_idon )
               WR_CSR0( IDON);
          if ( int_miss && (cnt_rx_buf==0)) {
               _ecDoStatistic(FALSE,FALSE,int_miss,FALSE);
               (void)_ecAllocBufs();         /* Allocate more buffers */
          }
          if (_ecFatal!=NULL) {
            ErrStat = 0;
            if ((IntStat & TXON)==0)
              ErrStat |= EC_TXSTOPPED;
            if ((IntStat & RXON)==0)
              ErrStat |= EC_RXSTOPPED;
            if ( int_miss && (cnt_rx_buf!=0))
              ErrStat |= EC_SYNCERROR;
            if (ErrStat!=0)
              (*_ecFatal)(ErrStat);
            }
          IntStat = RD_CSR0;
          }
        WR_CSR0( (INEA | CERR));
    }

    /*f************************************************************************
    ***  FUNCTION: _ecInterrupt
    ***************************************************************************
    ***  PURPOSE:  Receives an interrupt
    ***************************************************************************
    ***
    ***  WRITTEN BY     : Svein Johannessen 890830
    ***  LAST CHANGED BY: Svein Johannessen 890830
    **************************************************************************/


    void interrupt _ecInterrupt()
    {
        _ecIntHandler();
    }

    /* End Of File */

     
     
  16. Sep 20, 2018 #56

    Vanadium 50

    User Avatar
    Staff Emeritus
    Science Advisor
    Education Advisor
    2017 Award

    Well, the thread title is "Comparison of high-level computer programming languages" (emphasis mine).
     
  17. Sep 20, 2018 #57

    FactChecker

    User Avatar
    Science Advisor
    Gold Member
    2017 Award

    Your "real-world" is far different from my "real-world".
     
  18. Sep 21, 2018 #58

    cronxeh

    User Avatar
    Gold Member

    Yes, but are they both equally imaginary?
     
  19. Sep 21, 2018 #59

    Svein

    User Avatar
    Science Advisor

    Yes, but what exactly does it mean?
    • High-level as in "more abstract than assembly language"?
    • High-level as in "will only run on a high-level computer (containing a mass storage device and a sophisticated operating system)"?
     
  20. Sep 21, 2018 #60
    Normally I'd say the first one, but the OP seems to want to compare the efficiency of 3.5-4GL math suites, presumably ignoring 3GL offerings or 2GL possibilities.
     