Comparison of high-level computer programming languages

Summary:
The discussion centers on the potential usefulness of studying the computational effectiveness of various programming languages, including Matlab, Julia, and Python, in solving typical engineering problems. While some argue that existing benchmarks already provide sufficient comparisons, others suggest that a focused study could yield valuable insights, particularly regarding Julia's speed and ease of parallelization. The importance of computational speed is debated, with some participants emphasizing that for many applications, speed is less critical than code documentation and understandability. Additionally, real-time applications and massive computations are highlighted as areas where speed becomes more relevant. Overall, the conversation reflects a nuanced view on the trade-offs between speed, usability, and maintainability in high-level programming languages.
  • #31
COBOL had this issue of strictness too. It wasn't so bad, though, because we'd use an older program as a template for the newer one. I remember the classic error of forgetting a single period in the IDENTIFICATION DIVISION and winding up with literally hundreds of errors as the compiler failed to recover from it.
 
  • #32
FactChecker said:
If you have ever been on a project that fell into the Ada "strict typing hell", then you know that the advertised Ada development advantages are not guaranteed. And upper management often prefers strict rules over "best programmer judgement". That preference can lead straight to the Ada "strict typing hell" (among other bad things).

One part of my job at MITRE, and there were a half a dozen of us who did this, was to get all of the misunderstandings about Ada out of the software design rules well before coding started on Air Force electronics projects. Sometimes, though, we ran into managers who had added their own rules, gotten out of a magazine somewhere. Like: you can't use Unchecked_Conversion. All UC means is that it is the programmer's job to wrap it in any necessary checks. Free almost always is UC, because it is your job to be sure there are no other accesses out there. Another is the one you are complaining about. I didn't declare any non-standard types in that fragment. The places where you should use your own types are (1) in generics and (2) when physical units are involved. There were a slew of nice papers on how to have one type for SI units that covered most units, with the type checking done at compile time. I preferred to stick to things like measuring time in Duration, with constants like milliseconds defined for use when declaring values. Anyway, define one type per package, and convert the package to a generic if necessary. (This doesn't apply to enumeration types used for convenience: type Color is (Red, ... or type Switch is (Off, On); although you might want to do that one as Off: constant Boolean := False; and so on.)

The most important rule in Ada programming, though, is that if the language seems to be getting in your way, it is trying to tell you something. If you are getting wrapped around the axle, ask what is the simplest way to do what you are trying to do, then figure out why you can't do that. Adding a parameter to a subprogram, or a subprogram to a package, may require changing the design documents. Just realize that people make mistakes, and hope no one will blow up. (Because their work, of course, was perfect.)
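For readers who haven't seen unit-carrying types, here is a rough sketch of the idea. Ada, and the SI-unit papers mentioned above, do the checking at compile time; Python can only approximate it at run time, so this is just a flavor-of-the-idea illustration, not Ada code:
Code:
# Run-time analogue (in Python) of unit-carrying types; Ada catches these
# errors at compile time instead.

class Quantity:
    """A value tagged with physical dimensions (exponents of m, s, kg)."""

    def __init__(self, value, m=0, s=0, kg=0):
        self.value = value
        self.dims = (m, s, kg)

    def __add__(self, other):
        if self.dims != other.dims:
            raise TypeError(f"cannot add dimensions {self.dims} and {other.dims}")
        return Quantity(self.value + other.value, *self.dims)

    def __mul__(self, other):
        return Quantity(self.value * other.value,
                        *(a + b for a, b in zip(self.dims, other.dims)))

    def __truediv__(self, other):
        return Quantity(self.value / other.value,
                        *(a - b for a, b in zip(self.dims, other.dims)))


distance = Quantity(100.0, m=1)     # 100 m
elapsed = Quantity(9.58, s=1)       # 9.58 s
speed = distance / elapsed          # dimensions (1, -1, 0): metres per second

try:
    nonsense = distance + elapsed   # mismatched dimensions
except TypeError as err:
    print("caught:", err)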
 
  • Like
Likes FactChecker
  • #33
eachus said:
One part of my job at MITRE, and there were a half a dozen of us who did this, was to get all of the misunderstandings about Ada out of the software design rules well before coding started ...
Just realize that people make mistakes and hope no one will blow up. (Because their work, of course, was perfect.)

That was a most interesting post. It reminds us that, until the day when we turn over coding to AIs, rules and discipline must give way to humanity. Writing code will always remain partly an art.

I recently watched a very interesting documentary (see below) about MIT's Draper Labs and the navigation computers for the Apollo moon missions. According to this, the software project was nearly a disaster under the loosey-goosey academic culture until NASA sent in a disciplinarian. After a tough time, the software got finished and performed admirably for Apollo 8 and 11.

My point is that you can err in either direction, too much discipline or too much humanity. Finding the right balance has little to do with programming languages.

 
  • #34
Dr.D said:
I'm sure that this is true. I would argue, however, that this is a specialist concern, not a general user concern. How many folks do you suppose are doing those problems that run for days on a supercomputer?
Personally, I did, in several different scientific areas.
 
  • #35
anorlunda said:
That was a most interesting post. It reminds us that, until the day when we turn over coding to AIs, rules and discipline must give way to humanity. Writing code will always remain partly an art.

I recently watched a very interesting documentary (see below) about MIT's Draper Labs and the navigation computers for the Apollo moon missions. According to this, the software project was nearly a disaster under the loosey-goosey academic culture until NASA sent in a disciplinarian. After a tough time, the software got finished and performed admirably for Apollo 8 and 11.

My point is that you can err in either direction, too much discipline or too much humanity. Finding the right balance has little to do with programming languages.



I remember that. As a freshman I got into a class under Doc Draper* at the I-Lab (much later Draper Labs). I got assigned to a project to determine whether early ICs (I think they had three transistors and six diodes) were any good or not. In the lab, chips which had worked for months would suddenly fail. I had what I thought was a very simple idea: if failure equalled too slow, I should test not static switching voltages but whether the 10% to 90% (or vice versa) output voltage swing was taking too long. It turned out you only had to test one transistor; all of them on a chip had pretty much identical characteristics. Why did this time-domain measurement matter? When the transistor was switching, it was the highest-resistance component in the circuit. So the chips that switched too slowly eventually overheated and died. Of course, what killed one chip might not touch the one next to it, because it hadn't had to switch as often.

I remember Apollo 8 being given the go for TLI (trans-lunar injection); that was the vote of confidence that I had done my job.

As for the 1202 alarms, I was at my parents' home, where we were celebrating one of my sisters' 18th birthday. All the family was there, including my father, who had designed power supplies for radars at Cape Canaveral (before it became KSC), and my grandfather, who had literally learned to fly from the Wright Brothers well before WWI. Of course, every time the TV talking heads said "computer problem," my first reaction was: oh no! I goofed. Then I realized it was a software issue. Whew! Not my problem.

Finally, Apollo 11 landed, and while they were depressurizing the LM, I started to explain that the pictures we were going to see live from the Moon were going to be black and white, not color like Apollo 8's, and why.

My mother interrupted, "Live from the moon? Live from the moon! When I was your age we would say about as likely as flying to the moon, as a way to indicate a thing was impossible."
"Helen," her father said, "Do you remember when I came to your room and said I had to go to New York to see a friend off on a trip? I never thought Lindbergh would make it!" (My grandfather flew in the same unit as Lindbergh in WWI. Swore he would never fly again and didn't. My grandmother, his wife, flew at least a million miles on commercial airlines. She worked for a drug company, and she was one of their go to people for getting convictions against druggists diluting the products, or even selling colored water. (She would pick up on facial features hard to disguise, so that if they shaved a beard, died their hair, etc., she could still make a positive ID, and explain it to the judge.)

They both lived long enough to see pictures from Voyager II at Uranus, and my mother to see pictures from Neptune.

* There were very few people who called Doc Draper anything other than Doc Draper. His close friends called him Doc. I have no idea what his mother called him. (Probably Sonny, like my father's mother called him.)
 
  • Like
Likes NotGauss and anorlunda
  • #36
eachus said:
One part of my job at MITRE, and there were a half a dozen of us who did this, was to get all of the misunderstandings about Ada out of the software design rules well before coding started on Air Force electronics projects. Sometimes though we ran into managers who had added their own rules gotten out of a magazine somewhere.
In theory, our rules were guided by some Carnegie Mellon advice. I thought that their advice was very wise, flexible, and appropriate. The part that management disliked and eliminated from our rules was the flexible part.
The most important rule in Ada programming though, is that if the language seems to be getting in your way, it is trying to tell you something.
On a large program, it doesn't matter what the code is telling me. We have to follow the programming standards that management presents to the government.
 
  • #37
FactChecker said:
In theory, our rules were guided by some Carnegie Mellon advice. I thought that their advice was very wise, flexible, and appropriate. The part that management disliked and eliminated from our rules was the flexible part. On a large program, it doesn't matter what the code is telling me. We have to follow the programming standards that management presents to the government.

We granted far more waiver requests than we turned down. The only one I can remember turning down was for 25 KSLOC of C. The project had no idea what the code did, since the author had left over two years earlier. I looked at the code; it was a simulation of a chip that had been developed for the project, to let them test the rest of the code without the chip. Since the chip was now there, I insisted that they replace the emulation with code (about 100 lines) that actually used the chip. The software ran a lot faster then. Another waiver request I remember was to allow 17 lines of assembler. I showed them how to write a code insert in Ada. Issue closed.

In general, we found that the most decisive factor in whether a software project succeeded or not was the MIPS of development machines available per software engineer for developing and testing code. A ratio significantly under one meant trouble; two or three, no problem. Of course, today everybody has a faster PC than that, so problems only arose when the software was being developed in a classified lab.
 
Last edited:
  • Like
Likes FactChecker
  • #38
mpresic said:
Back to the main point. Documentation and understandability should be more of a priority than speed. You have engineers who can make the most high-level, user-friendly language inscrutable, and you have engineers who can make (even) structured Fortran or assembly language understandable.
In general, there will be project requirements, and those requirements must be met. It sounds a bit religious to emphasize how one must address one potential requirement over another.

If I need to present results at a meeting that's two hours away, I will be concentrating on rapid short-term development and execution. If I need to control a mission-critical military platform that will be in service in 8 years, I will be concentrating on traceability, maintainability, ease of testing, version control, auditability, etc.

To address the OP's question:
If benchmarks using Navier-Stokes equations will document ground not covered in existing benchmarks, then there is potential use in it. I don't know much about Navier-Stokes equations, but if they are used in simulations that tend to run past several minutes, then there may be consumers of this data.

As far as using Matlab-generated C code, by all means include that in the survey. You will be documenting hardware, software, everything: version numbers, configuration data, and the specific method(s) used to implement the solution on each platform.

Since the code you produce will be part of your report, it should be exemplary in style and function.
 
  • Like
Likes FactChecker
  • #39
This thread reminds me of a PF Insights Article. The article and the ensuing discussion parallel this thread in many ways.

The article: https://www.physicsforums.com/insights/software-never-perfect/

The discussion: https://www.physicsforums.com/threa...r-perfect-comments.873741/page-2#post-5565499

I'll quote myself complaining that modern software engineering methods and discipline are not sufficiently down-scalable, and that is a serious problem because of the IoT.

anorlunda said:
Consider a controller for a motor operated valve (MOV). The valve can be asked to open, close, or to maintain an intermediate position. The controller may monitor and protect the MOV from malfunctions. In the old days, the logic for this controller would be expressed in perhaps 100-150 bytes of instructions, plus 50 bytes of data. That is so little that not even an assembler would be needed. Just program it in machine language and type the 400 hex digits by hand into the ROM burner. A 6502, or 8008, or 6809 CPU variant with on-chip ROM would do the work. The software would have been the work product of a single person working less than one work-day, perhaps checked by a second person. Instantiations would cost about $1 each. (In the really old days, it would have been done with discrete logic.)

In the modern approach, we begin with standards, requirements, and design phases. Then the logic would be programmed in a high-level language. That needs libraries, and those need an OS (probably a Linux variant), and that brings in more libraries. With all those libraries come bewildering dependencies and risks (for example, https://www.physicsforums.com/threads/science-vulnerability-to-bugs.878975/#post-5521131). All that software needs periodic patches, so we need to add an Internet connection (HORRORS!) and a user interface. With that comes all the cybersecurity and auditing overhead. All in all, the "modern" implementation includes ##10^4## to ##10^6## times more software than the "old" 200 byte version, to perform the same invariant MOV controller function.

Now you can fairly call me old fashioned, but I find it hard to imagine how the world's best quality control procedures, and software standards could ever make the "modern" implementation as risk free or reliable as the "old" 200 byte version. Worse, the modern standards probably prohibit the "old" version because it can't be verifiabull, auditabull, updatabull, securabull, or lots of other bulls. I argue that we are abandoning the KISS principle.

Now, the reason that this is more than a pedantic point, is the IOT (Internet of Things). We are about to become surrounded by billions of ubiquitous micro devices implemented the "modern" way rather than the "old" way. It is highly germane to stop and consider if that is wise.
 
  • #40
.Scott said:
If benchmarks using Navier-Stokes equations will document ground not covered in existing benchmarks, then there is potential use in it. I don't know much about Navier-Stokes equations, but if they are used in simulations that tend to run past several minutes, then there may be consumers of this data.
Navier-Stokes equations are at the core of Computational Fluid Dynamics and are, indeed, used in very long series of runs. For instance, aerodynamics calculations that account for every combination of angle of attack, angle of sideslip, Mach number, altitude, and surface positions would take a very long time to run. Supercomputers are sometimes necessary.
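To get a feel for why such sweeps take so long, it helps just to count the cases. A rough sketch (the grid sizes below are invented for illustration, not taken from any real aerodynamic database):
Code:
from itertools import product

# Invented grid sizes, purely to illustrate how the case count explodes.
alphas    = range(-4, 21)                              # 25 angles of attack (deg)
betas     = range(-10, 11)                             # 21 sideslip angles (deg)
machs     = [0.2, 0.4, 0.6, 0.8, 0.85, 0.9, 0.95]      # 7 Mach numbers
altitudes = [0, 5000, 10000, 20000, 30000, 40000]      # 6 altitudes (ft)
surfaces  = range(10)                                  # 10 control-surface settings

cases = list(product(alphas, betas, machs, altitudes, surfaces))
print(len(cases), "flow solutions required")           # 25*21*7*6*10 = 220,500

# At even one CPU-hour per Navier-Stokes solution, that is over 200,000
# CPU-hours for a single configuration -- hence the supercomputers.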
 
Last edited:
  • #41
Consider the importance of near-time calculations to experimenters operating a wind tunnel to generate and collect fluid dynamics data for subsequent analysis.

Suppose we are testing a scale model of a Boeing 777 wing mounted in a subsonic wind tunnel to determine the effects a winglet has on laminar flow around the primary wing as alpha -- angle of attack -- varies. The wind tunnel software computes and displays the Reynolds number -- an indicator of when laminar flow becomes turbulent -- alongside alpha to guide operations in near-time and maximize the use of resources; perhaps by restricting angle of attack past a selected turbulence measure, or by inhibiting full-scale data collection when turbulence exceeds the flight envelope (operational limits) of an actual 777.

https://en.wikipedia.org/wiki/Reynolds_number .
See also "The Wind Tunnels of NASA" and NASA ARC Standardized Wind Tunnel System (SWTS).

The system programmer not only provides real-time data collection but also near-time data sampling and computation of vital measurements such as the Reynolds number while the experiment runs. The wind tunnel software system computes selected values as quickly and error-free as possible in order to provide the best data during run time for later (super)computation. The software engineer recognizes that some computations are time-critical for operational reasons. Later fluid dynamics computations could be time sensitive due to cost and supercomputer time sharing.
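For concreteness, the Reynolds number computation itself is tiny; the hard part is doing it reliably in near-time alongside data collection. A minimal sketch with illustrative values (not SWTS code):
Code:
def reynolds_number(rho, velocity, length, mu):
    """Re = rho * V * L / mu: ratio of inertial to viscous forces."""
    return rho * velocity * length / mu

# Illustrative numbers: sea-level air, 1 m chord model, 50 m/s tunnel speed.
rho = 1.225       # air density, kg/m^3
mu = 1.81e-5      # dynamic viscosity, Pa*s
Re = reynolds_number(rho, velocity=50.0, length=1.0, mu=mu)
print(f"Re = {Re:.3e}")   # about 3.4e6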
 
  • #42
Ultimately all compilers, or the compilers used to compile the compilers, were written in C/C++. It can do anything with no restrictions. It's easy to write very efficient, very fast code. It's also just as easy to shoot yourself in the foot with it. But remember the old adage: "There will never be a programming language in which it is the least bit difficult to write terrible code." That said, C# does have one thing going for it in that the application developer can allow third parties and end users to extend the application through code while also restricting what system functions that code has access to. So users can share code on the internet without worrying about getting a virus, as long as the developer locked out I/O system calls, or put them behind appropriate custom versions of those functions.
 
  • #43
FarmerTony said:
end users to extend the application through code while also restricting what system functions that code has access to. So users can share code on the internet without worrying about getting a virus, as long as the developer locked out I/O system calls, or put them behind appropriate custom versions of those functions.

How does an end user do that?

How can an end user audit the safety practices of the developer?

As long as there is a "as long as" proviso:wink:, the prudent end user must presume that the proviso is not met.
 
  • Like
Likes FactChecker
  • #44
FactChecker said:
@eachus, I wish no offense, but in summary, was there a run-time difference between C and Ada? I like a "you were there" type description, but only after a summary that tells me if it is worth reading the details.
These types of timing comparisons of a single calculation done many times may not reflect the true difference between language execution speeds.

Results:
Multiplication result was 0.045399907063 and took 147318.304 Microseconds.
Exponentiation result was 0.045399907063 and took 0.291 Microseconds.
Exponentiation 2 result was 0.045399907062 and took 0.583 Microseconds.
Fast exponentiation result was 0.045399907062 and took 0.875 Microseconds.

Sorry if it wasn't clear. The first result, taking 0.147318 seconds, was comparable to, but faster than, all the previously published results. The next three results took advantage of much better optimization and took less than one microsecond; all were over 100,000 times faster than the first result. The fact that these three approaches took one, two, and three clock ticks should not be taken to mean one was better than the other two. (All were better than the first result.) If I really needed something that fast, I'd run the program 20 times or so to make sure that the results were consistent. But once you get the optimizer to knock out over 99.999 percent of the execution time, I wouldn't worry.
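The Ada source isn't reproduced in this thread, but the kind of comparison being described - a long loop of multiplications versus a single exponentiation call - can be sketched in any language. A rough Python illustration with made-up numbers (not the values behind the results above):
Code:
import math
import time

x, n = 0.9999969, 1_000_000   # invented values, not the ones used above

# Repeated multiplication: n trips through the loop.
t0 = time.perf_counter()
product = 1.0
for _ in range(n):
    product *= x
t1 = time.perf_counter()
print(f"Multiplication  {product:.12f}  {1e6 * (t1 - t0):12.3f} us")

# Exponentiation: a single library call, no loop at all.
t0 = time.perf_counter()
power = math.pow(x, n)
t1 = time.perf_counter()
print(f"Exponentiation  {power:.12f}  {1e6 * (t1 - t0):12.3f} us")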
 
  • #45
eachus said:
But once you get the optimizer to knock out over 99.999 percent of the execution time, I wouldn't worry.
This sounds too good to be true. I don't know exactly what you ran or are comparing, but that is too much improvement from an optimizer. One thing to look for is that the "optimized" version may not really be executing the same number of loop iterations or the same calculation. That can be because some identical calculation is being done time after time and the optimizer moved that code out of the loop.

PS. Although pulling repeated identical calculations out of a loop is a good optimization step, it is not representative of the average speedup you can expect from an optimizer.

PPS. The last time I saw a speedup like that, the "fast" version had completely removed a loop of 1000 iterations and was only executing the calculations once.
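A hand-made Python sketch of the pattern described above: the "fast" version hoists the loop-invariant calculation out of the loop (equivalently, the loop all but disappears), so the two timings no longer measure the same work. An optimizing C or Ada compiler can do this silently:
Code:
import math
import time

N = 1_000_000
x = 1.2345

def naive():
    # Recomputes the identical product N times inside the loop.
    total = 0.0
    for _ in range(N):
        total += math.sin(x) * math.cos(x)
    return total

def hoisted():
    # The loop-invariant product is computed once; the loop is gone.
    return math.sin(x) * math.cos(x) * N

for f in (naive, hoisted):
    t0 = time.perf_counter()
    result = f()
    print(f"{f.__name__:8s} {result:15.6f}  {time.perf_counter() - t0:.6f} s")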
 
Last edited:
  • Like
Likes bhobba and anorlunda
  • #46
FactChecker said:
This sounds too good to be true.

Speed comparisons often reveal strange things:
http://alexeyvishnevsky.com/2015/05/lua-wraped-python/

It turns out that for many tasks a highly optimized just-in-time compiled language like Lua is as fast as C - the version of Lua in question is LuaJIT:
http://luajit.org/

But as the above shows, even plain interpreted Lua is pretty fast - the same as C in that application - though personally I use LuaJIT.

It's easy to call Lua from Python using a C program as glue. Personally, on the very rare occasions I program these days, I just write it in Python. Usually it's fast enough, but if it isn't, I add some write statements to see which bits it's spending most of its time in, then write those bits in Lua and call them from Python. For simple programs I just write it in MoonScript, which compiles to Lua from the start. I have only ever done it on a couple of programs while I was professionally programming, but for really critical parts I write in assembler. I only use C programs for glue - it's good for that - most languages can call, or be called from, other languages using C. Although the link I gave used some functionality integrated into Python to execute Lua - called lupa, an extension to CPython. So for me it goes like this: Python, Lua, and rarely assembler.
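For anyone curious, a minimal sketch of that Python-to-Lua glue using lupa (assuming the lupa package, which embeds LuaJIT, is installed; the little summation function is an invented example, not code from the linked article):
Code:
# pip install lupa  (binds LuaJIT into CPython)
from lupa import LuaRuntime

lua = LuaRuntime(unpack_returned_tuples=True)

# Write the hot inner loop in Lua and get it back as a Python callable.
lua_sum = lua.eval("""
function(n)
    local total = 0.0
    for i = 1, n do
        total = total + 1.0 / (i * i)
    end
    return total
end
""")

print(lua_sum(10_000_000))   # approaches pi^2/6, computed inside LuaJIT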

Thanks
Bill
 
Last edited:
  • #47
bhobba said:
Speed comparisons often reveal strange things:
http://alexeyvishnevsky.com/2015/05/lua-wraped-python/

It turns out that for many tasks a highly optimized just-in-time compiled language like Lua is as fast as C - the version of Lua in question is LuaJIT:
http://luajit.org/
The languages discussed were Ada and C. I don't know what exactly was being compared or run when the claim was that the optimizer option sped execution up by a factor of 100 thousand times. No optimizer can do that. It implies that a version of Ada or C was inconceivably slow.
 
  • #48
FactChecker said:
The languages discussed were Ada and C. I don't know what exactly was being compared or run when the claim was that the optimizer option sped execution up by a factor of 100 thousand times. No optimizer can do that. It implies that a version of Ada or C was inconceivably slow.

Nor do I. I was simply pointing out that speed comparisons are a strange beast. I highly doubt any optimizer can do that - the big speed-ups usually come from two things:

1. Static typing, like you can do in Cython
2. Compiling rather than interpreting. Just-in-time compiling is nowadays as fast as actual compiling (GCC binaries now run as fast as LLVM ones), hence the rise of LLVM as a target that programs are compiled to; you then simply implement LLVM on your machine. I suspect JIT compilers will eventually exceed the performance of optimized direct compiles - just my view.

But to be clear, you do NOT achieve that type of speed-up with optimizing compilers. JIT compilers, and optimizing them, seem to be the way of the future - but that will not do it either.
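As a concrete taste of the JIT point within Python itself, a rough sketch using the numba package (assumed installed; the function and array size are invented for illustration):
Code:
import time
import numpy as np
from numba import njit   # pip install numba

def sum_squares(a):
    total = 0.0
    for i in range(a.shape[0]):
        total += a[i] * a[i]
    return total

# The same function, JIT-compiled to machine code with types inferred on first call.
sum_squares_jit = njit(sum_squares)

a = np.random.rand(5_000_000)
sum_squares_jit(a)            # first call pays the one-off compilation cost

for name, f in [("interpreted", sum_squares), ("numba JIT", sum_squares_jit)]:
    t0 = time.perf_counter()
    f(a)
    print(f"{name:12s} {time.perf_counter() - t0:.4f} s")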

Thanks
Bill
 
  • #49
Interpreted languages are a different beast from C or Ada, and large speed differences should not be surprising. But those types of speed issues are caused by drastically different and identifiable approaches. Often the solution is to invoke libraries written in C from the higher-level interpreted language. Python is known to be relatively slow and to benefit from the use of faster libraries.

That being said, I have never seen a speed difference as large as 100 thousand times unless a looping process with thousands of iterations was completely eliminated. In my experience, even context switches and excessive function calls do not cause those types of slowdowns. It is possible that the computer operating system is messing up one of the options and not the other, but I am assuming that those issues have been eliminated.
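A small sketch of that usual remedy - push the loop into a library written in C (NumPy here, assumed installed) rather than iterating in the interpreter:
Code:
import time
import numpy as np

xs = np.random.rand(5_000_000)

# Pure-Python loop: every iteration goes through the interpreter.
t0 = time.perf_counter()
total = 0.0
for x in xs:
    total += x * x
print(f"python loop  {total:.3f}  {time.perf_counter() - t0:.3f} s")

# The same calculation delegated to NumPy's compiled C loop.
t0 = time.perf_counter()
total = np.dot(xs, xs)
print(f"numpy dot    {total:.3f}  {time.perf_counter() - t0:.3f} s")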
 
Last edited:
  • Like
Likes bhobba
  • #50
OK, a lot of these responses are exceptionally valuable in their own right, so I won't go into details, but I would suggest you question why you're asking this question (no pun intended).

On the one hand, everything in a high-level language has to be done on a low level, and low level is typically faster. So should you always use C over Matlab?

No. In fact since Matlab is a high-level language, it can do many things under-the-hood, that you might not necessarily need to get involved with. For example, should you be using integers, longs, 128 bit integers? What if you need to do that dynamically? What about multithreading? Do you really want to get involved with Mutexes, race conditions and shared memory?

If you know for a fact, on the machine level, what you want to be doing, and that is the absolute best that you know of, C/C++/D have no substitute. They do the least amount of work for you and are compiled languages, so the tradeoffs are in your favour. But it will take a longer time to write.

If, on the other hand, you know what your result looks like and you'd be Googling the algorithm to do that efficiently, then you're better off using a pre-built library. In fact, even a relatively inefficient platform, since it does a lot of the optimisations for you, will often outperform your hand-written C++ code, simply because it knows better.

So the real question to ask is what is more important to you: getting the results tomorrow by writing low-level code for a day that then runs near-instantly, or writing code that takes a few minutes to run but that you could jot down in an hour. If it's the results you want, then obviously use the high-level stuff. If you want to re-use your code as a library, then use the former.

It's not a simple solution.
 
  • Like
Likes FactChecker
  • #51
One thing I think is undeniably true, is that programming languages are the most fun of all topics among programmers.

I'm reminded of when the Ada language was first introduced. They published a document called the Rationale, explaining why they wanted this new language. The Rationale (to the best of my memory) said that in the history of DoD software projects, every single project had created its own language. The exception was JOVIAL, which had been used in two projects. Ada was intended to be the one and only language for all future projects.

So, did Ada become the language to end all languages? Heck no.

I'm confident that as long as humans write software, they will continue creating new programming languages, and there will be a credible rationale for each and every one of them.
 
  • Like
Likes bhobba, Klystron and FactChecker
  • #52
Alex Petrosyan said:
So the real question to ask is what is more important to you: getting the results tomorrow by writing low-level code for a day that then runs near-instantly, or writing code that takes a few minutes to run but that you could jot down in an hour. If it's the results you want, then obviously use the high-level stuff. If you want to re-use your code as a library, then use the former.
Good advice, but I think that you are being very conservative in your estimates. Using a low-level language to mimic what one can get in one hour of MATLAB programming could easily take weeks of programming.
 
  • Like
Likes Alex Petrosyan and jedishrfu
  • #53
FactChecker said:
Good advice, but I think that you are being very conservative in your estimates. Using a low-level language to mimic what one can get in one hour of MATLAB programming could easily take weeks of programming.

That’s assuming you could get equivalent behaviour. Most Matlab functions are exceptionally smart and catch things like poor conditioning early. Besides, when was the last time Python segfaulted because you used a negative array index?
 
  • Like
Likes FactChecker
  • #54
I think for general benchmarks (e.g., Python vs Java) there are already good ballpark figures out there (e.g., https://benchmarksgame-team.pages.debian.net/benchmarksgame/faster/python.html and https://www.techempower.com/benchmarks ), but for real-world applications it's not really worth talking about single-core execution or even single-machine execution.

So really it comes down to speed of iteration, concurrency, parallelism, and community. I personally would not reach for C/C++ as it does not pass the 'speed of iteration' test, or the community one for that matter. So in my humble opinion:

For small-scale applications, data science, and proof-of-concept work, Python 3 is the lingua franca.

For really, really large-scale applications with multiple distributed teams working together, deployed across 10K+ servers, there are really only two choices: Java, and, if you like to skate uphill and write most of your own libraries for everything, Go (Golang). There is also Scala as a contender, but it has its own issues (as in: all software problems are people problems, and with Scala you'll get "implicit hell").
 
  • #55
Python, Java, Julia, whatever: You are all assuming that there exists a software "machine" that handles all the difficult parts for you. Some of us do not have that luxury - writing device drivers, interrupt handlers, process schedulers and so on. In that case your environment and requirements are radically different:
  • You are writing on "bare metal". No libraries are available to help with the difficult parts.
  • Usually your routines have to be short, fast, and error-free. An Ethernet hardware driver is called millions of times each day - bugs are not tolerated.
  • Debugging the routines calls for very special equipment (you cannot insert debugging printouts, since the high-level printing routines are not available).
Here is an example of a small part of an interrupt driver for an Ethernet hardware chip:
Code:
/*m************************************************************************
***  FUNCTION: _ecInitInter
***************************************************************************
***  PURPOSE:  Sets up the interrupt structure for EC
***************************************************************************
***
***  WRITTEN BY     : Svein Johannessen 890711
***  LAST CHANGED BY: Svein Johannessen 900216
**************************************************************************/

#include "ec.h"
#include "sys/types.h"
#include "sys/mbuf.h"
#include "ecdldef.h"
#include "ecextrn.h"
#include "net/eh.h"

void (* _ecRx)() = NULL;
void (* _ecTx)() = NULL;
void (* _ecFatal)() = NULL;

short _ecRxRdy();
short _ecTxRdy();

void interrupt EC_INT();

u_char int_babl;                    /* babble */
u_char int_miss;                    /* missed packet */
u_char int_merr;                    /* memory error */
u_char int_rint;                    /* rx packet */
u_char int_tint;                    /* tx packet */
u_char int_idon;                    /* init done */

u_short _ecMERR;
u_short _ecLastCSR0;

EXPORT short _ecInitInter(eh_idone,eh_odone)
void (* eh_idone)();
void (* eh_odone)();
{

    _ecRx = eh_idone;
    _ecTx = eh_odone;
    _ecFatal= NULL;
    _ecMERR = 0;
    _ecLastCSR0 = 0;

    /* Here someone must set up the PC interrupt vector ... */
    if ( ( _ecRx == NULL ) || ( _ecTx == NULL ) )
         return ERROR;
    return NOERROR;
}

/*f************************************************************************
**  FUNCTION: _ecRxInt
***************************************************************************
***  PURPOSE:  Handles a receive interrupt
***************************************************************************
***
***  WRITTEN BY     : Svein Johannessen 890711
***  LAST CHANGED BY: Svein Johannessen 900216
**************************************************************************/

static void _ecRxInt()
{
    struct  mbuf *cur_buff;
    register short rxerr, good;

    /* see if the LANCE has received a packet  */
    rxerr = _ecRecPacket(&cur_buff);        /* get address of data buffer */

    if ( cur_buff != NULL ) {
      good = (rxerr==NOERROR) && !(int_miss || int_merr);
      (*_ecRx)(cur_buff,good);
      }
    else
         int_rint = 0;
    (void)_ecAllocBufs();         /* Allocate more buffers */
}
/*f************************************************************************
***  FUNCTION: _ecTxInt
***************************************************************************
***  PURPOSE:  Handles a transmit interrupt
***************************************************************************
***
***  WRITTEN BY     : Svein Johannessen 890712
***  LAST CHANGED BY: Svein Johannessen 900418
**************************************************************************/

void _ecTxInt()
{
    struct  mbuf *cur_buff;
    u_char  TxBad;
    short good, Coll;

    TxBad = _ecCheckTx(&cur_buff, &Coll);
    good = !(int_babl || int_merr || TxBad);
    if (cur_buff!=NULL)
      (*_ecTx)(cur_buff,good,Coll);
}

/*f************************************************************************
***  FUNCTION: _ecIntHandler
***************************************************************************
***  PURPOSE:  Handles an interrupt
***************************************************************************
***
***  WRITTEN BY     : Svein Johannessen 890712
***  LAST CHANGED BY: Svein Johannessen 900418
**************************************************************************/
/**
***  OTHER RELEVANT  :
***  INFORMATION     :
***
**************************************************************************/

extern short num_rx_buf;             /* wanted number of rx msg desc */
extern short cnt_rx_buf;             /* actual number of rx msg desc */

void _ecIntHandler()
{
    register u_short IntStat;
    register u_short ErrStat;

    IntStat = RD_CSR0;

    while (IntStat & INTF) {
      _ecLastCSR0 = IntStat;
      int_babl = ((IntStat & BABL)!=0);
      if ( int_babl )
           WR_CSR0( BABL);
      int_miss = ((IntStat & MISS)!=0);
      if ( int_miss )
           WR_CSR0( MISS);
      int_merr = ((IntStat & MERR)!=0);
      if ( int_merr )
      {
            _ecMERR++;
          WR_CSR0( MERR);
      }
      int_rint = ((IntStat & RINT)!=0);
      if ( int_rint )
        WR_CSR0( RINT);
      while ( int_rint ) {
        _ecRxInt();
        int_rint = _ecRxRdy();
        }
      int_tint = ((IntStat & TINT)!=0);
      if ( int_tint ) {
        WR_CSR0( TINT);
        _ecTxInt();
        }
      int_idon = ((IntStat & IDON)!=0);
      if ( int_idon )
           WR_CSR0( IDON);
      if ( int_miss && (cnt_rx_buf==0)) {
           _ecDoStatistic(FALSE,FALSE,int_miss,FALSE);
           (void)_ecAllocBufs();         /* Allocate more buffers */
      }
      if (_ecFatal!=NULL) {
        ErrStat = 0;
        if ((IntStat & TXON)==0)
          ErrStat |= EC_TXSTOPPED;
        if ((IntStat & RXON)==0)
          ErrStat |= EC_RXSTOPPED;
        if ( int_miss && (cnt_rx_buf!=0))
          ErrStat |= EC_SYNCERROR;
        if (ErrStat!=0)
          (*_ecFatal)(ErrStat);
        }
      IntStat = RD_CSR0;
      }
    WR_CSR0( (INEA | CERR));
}

/*f************************************************************************
***  FUNCTION: _ecInterrupt
***************************************************************************
***  PURPOSE:  Receives an interrupt
***************************************************************************
***
***  WRITTEN BY     : Svein Johannessen 890830
***  LAST CHANGED BY: Svein Johannessen 890830
**************************************************************************/

void interrupt _ecInterrupt()
{
    _ecIntHandler();
}

/* End Of File */
 
  • #56
Svein said:
Python, Java, Julia, whatever: You are all assuming that there exists a software "machine" that handles all the difficult parts for you.

Well, the thread title is "Comparison of high-level computer programming languages" (emphasis mine).
 
  • #57
cronxeh said:
but for real-world applications it's not really worth talking about single-core execution or even single-machine execution.
Your "real-world" is far different from my "real-world".
 
  • #58
FactChecker said:
Your "real-world" is far different from my "real-world".

Yes, but are they both equally imaginary?
 
  • #59
Vanadium 50 said:
Well, the thread title is "Comparison of high-level computer programming languages" (emphasis mine).
Yes, but what exactly does it mean?
  • High-level as in "more abstract than assembly language"?
  • High-level as in "will only run on a high-level computer (containing a mass storage device and a sophisticated operating system)"?
 
  • #60
Normally I'd say the first one, but the OP seems to want to compare efficiency of 3.5-4GL math suites, presumably ignoring 3GL offerings, or 2GL possibilities.
 
