How many instructions are there ?

  • Thread starter: oldtobor
The discussion centers on the vast quantity of code and instructions in programming languages, estimating around 200 billion lines of code globally, potentially reaching trillions when considering assembly language. Participants debate the efficacy of various programming languages, with some arguing that BASIC was highly effective in solving software problems as early as the 1980s, while others criticize it as outdated and less capable compared to languages like C and Java. The conversation also touches on the maintenance of legacy code and the emergence of new software challenges, suggesting that much of the existing code will eventually be replaced. Additionally, there are claims regarding the historical development of Linux and the role of different programming languages in modern computing. Overall, the thread highlights differing opinions on programming language effectiveness and the implications of code volume in the tech industry.
  • #61
EDIT on Linux would be what, fewer than 1k lines of curses? It'd be almost trivial.

But EDIT doesn't have syntax highlighting, automatic tabbing, multiple-file searching, or any of the other features that people really want in a text editor; this is why no one has ever bothered to port it to Unix/curses. If all you want is to be able to move the cursor around with the arrow keys, then use pico and get on with your life. Pico is almost exactly the same as EDIT.

- Warren
 
  • #62
Why is it so hard to create an exact clone of EDIT at a Unix/Linux prompt? I am sure there must be some technical reason, maybe in the terminals or the way information gets to the screen, but an EDIT program should be relatively simple for an open source programmer to write, given that they write operating system software. Please explain, because the EDIT program is very simple and convenient.

But what really intrigues me is that there must be some real, technical, fundamental limit of the Unix architecture that prevents the creation of a simple program like DOS EDIT at the prompt. It is only 70k under Windows (DOS) and is a very simple, straightforward program. I find it really mysterious that after more than 10 years no one in the Unix or Linux OSS community of programmers has created an exact replica.

I am absolutely convinced that it cannot be done. It is a simple program that would be handy to many people, so there must be some real architectural reason. If you say that the entire UNIX and Linux community, millions of people over 20 years, all decided that EDIT is forbidden, I truly cannot believe that. People are pragmatic; they want to get things done fast and have problems to solve, and EDIT would be just another quick, simple tool you could use.

The funny thing is that the Unix philosophy is all about small, simple programs that get a simple job done fast, like grep or awk. So EDIT is perfectly within the spirit of Unix or Linux. There are more powerful choices now, but in many cases, such as small scripts, you just don't need the power. After all, grep finds patterns quickly and easily; there are more powerful database programs, obviously, but for something quick and simple grep is fine.

If, on the other hand, you are right that no one wants it and it will never be done, then that is a good example of what to expect from the open source community in the future: thousands of arcane programs and languages, but simple things that any person could appreciate and use, like EDIT for DOS or Access or similar, will simply never be written.

It is as if thousands of programmers had exactly the same mindset, the exact same opinions and tastes in everything, like a religion, or as if they were brainwashed against anything even remotely similar to anything PC or Windows. Then why are they trying so hard to make the Windows emulator WINE?
 
  • #63
You're an idiot, oldtobor. Why do you keep repeating things? Are you here for discussion, or just to rant about incoherent nonsense? You think there's something fundamental about the Unix operating system that precludes the development of a goddamn trivial text editor? There are many editors for Unix that are so similar to EDIT that there's no reason to write another one! Look at pico. LOOK AT PICO. LOOK AT PICO.

What exactly about pico do you not like?!?

- Warren
 
Last edited:
  • #64
D H said:
if( x & ( 3 == 0))
As written, the then branch of the if statement is unreachable.
That was my point: it wouldn't work unless C treated the precedence of & the same as *, which it doesn't. My point about precedence is that &, ^, and | are math operators and should have been given the same precedence as the other math operators, but that's not the way the C language was defined.

C provides no means to declare global values
Of course it does: "extern". Using "extern int Foo;" ...
Note, I posted global "values", not global "variables". For example, how would you declare "abc" in this case: "#define abc 0x12345"? This is easily done in assembler as previously posted (public abc ... abc EQU 012345h). One example usage is to implement the equivalent of "sizeof(relocatable function)".

Note that C defines require \ at the end of a line for continuation.
No, it does not.
Yes it does; note I'm referring to "defines". For large macros, for example, a \ is required to extend a #define across multiple lines.

C defines static variables to be initialized to zero. ... occupies space
That's not precisely true.

Microsoft reference:
When modifying a variable, the static keyword specifies that the variable has static duration (it is allocated when the program begins and deallocated when the program ends) and initializes it to 0 unless another value is specified.

Uninitialized globals typically reside in the zero initialized data segment of the image.
Most Wintel environments don't include a zero-initialized segment, thus statics end up in the .data segment and non-initialized variables in the .bss segment.

For ARM (a RISC processor environment), it's not an issue, as all zero-initialized variables are placed at the end of the .data segment, and the linker defines global values for "end of normal initialized .data / start of zero-initialized .data" and "end of zero-initialized data" for the startup routine to zero out. It also generates global values to allow code and data relocation for embedded environments.

vi
My main complaint about vi derivatives is that they have to be toggled between text-insert and command mode. I prefer using some key sequence to generate commands. I find generic editors like CodeWright much easier to use (I use it in CUA mode).

getting back on topic
As previously posted, C is a mid-level language, between assembler and true high-level languages. It doesn't include exponentiation and requires a function call instead. Conversion of mathematical algorithms to C is a pain compared to Fortran or MATLAB. COBOL still has its place in dealing with database (field-oriented) environments on mainframes. Oracle and other SQL dialects are also good for database environments. Not all software problems were solved in the 1980s on a PC; some problems were solved long before that, and some were solved more recently.
 
Last edited:
  • #65
Jeff Reid said:
Most wintel environments don't include a zero initialized segment, thus statics end up in the .data segment, and non-initialized variables in the .bss segment.

This is also incorrect. The .bss section IS the uninitialized data section. This section has the IMAGE_SCN_CNT_UNINITIALIZED_DATA bit set in the Characteristics field of the section header. I've implemented PE-COFF executable loaders and object file linkers on several platforms. Here's the official spec, if you're interested: http://www.microsoft.com/whdc/system/platform/firmware/PECOFF.mspx
Check out around page 18.

In any case, both globals that are explicitly initialized to zero and uninitialized globals end up there, not in the .data section, unless you specifically override it (__attribute__ ((__section__ (".data"))), in GCC for instance). Furthermore, the zeroing of the section is usually performed as the executable is mapped into the process, and is either part of the loader or an effect of creating a private mapping of the zero page, or similar VM operation.

I haven't done much work with ARM, but the platform doesn't really matter per se, as it is the executable format that provides such features. Most operating systems that run on ARM use ELF, which follows the same general scheme as the PE-COFF description above.

I'm curious about your microsoft reference; you didn't cite it, but it looks like a rather generic functional description of what happens, not actually what happens. Maybe if you provide a citation, I can show you why it's not as definitive as you believe. :rolleyes:

- OMD
 
Last edited by a moderator:
  • #66
eieio said:
None of it will be maintained forever, it will all be discarded and replaced by new, more capable code.
That's the second(*) biggest mistake in the history of the business.

The whole Y2K thing happened because no rational programmer thought their code could possibly live for 2 or 3 decades.

And yet banks are still running their COBOL applications from the 80s.



(*second only to Bill Gates' gaffe of legend)
 
Last edited:
  • #67
DaveC426913 said:
That's the second(*) biggest mistake in the history of the business.

The whole Y2K thing happened because no rational programmer thought their code could possibly live for 2 or 3 decades.

And yet banks are still running their COBOL applications from the 80s.

(*second only to Bill Gates' gaffe of legend)


:smile: To both.
Did you know there was a year 10 issue?
Initially they started with a 1-digit year :rolleyes:

There was a good reason for doing so though.
The first machine I wrote code for had 20k of memory.
That was huge.
It came out of the box with 4k of real core memory.
Little ferrite donuts strung in a wire matrix.

You spent a lot of time trying to scare up a free bit or two somewhere.
Allocating an entire 16 bits for '19' was simply out of the question.

When BG came out with the 640k comment, the multitasking mainframes had just become available with 1 meg memory just a few years prior.
'Only six computers will ever be sold in the commercial market.'
Howard Aiken (the designer of the first IBM computer)


PS: vi is a horrible editor.
It was, however, a lot better than card punch.
Considering modern editors, I find it hard to believe it's still around for use.
 
  • #68
eieio said:
Most wintel environments don't include a zero initialized segment, thus statics end up in the .data segment, and non-initialized variables in the .bss segment.
This is also incorrect. The .bss section is the uninitialized data section.
I specifically mentioned a zero-initialized section, not an uninitialized section. However, it appears that the Wintel environment does zero all of the .bss segment. I also discovered that uninitialized global variables ended up in the .data segment. In debug builds, the stack is initialized to 0xcccccccc, so local uninitialized variables will be set to "c...c"; in release builds I assume that no initialization is done.

In any case, both globals that are explicitly initialized to zero and uninitialized globals end up there, not in the .data section
I just tested this with Visual Studio 2005. A static variable ends up in the .bss section if there's no initializer, or in the .data segment if there is an initializer. As mentioned, I was surprised to find that an uninitialized global variable ended up in the .data segment and was set to zero.

The platform doesn't really matter per se, as it is the executable format that provides such features.
Well, the linker has to generate global values so a startup program knows which sections need to be zeroed out, or else the executable format has to include all of the .data/.bss sections, including the zeroed-out one. I read the document you linked to, and in section 5.1 it states that images for zero-only sections don't have to be included in the object file, which implies the startup routine clears those sections.

As mentioned, the Wintel environment appears to zero out all of the .bss section. In the ARM environment, only a portion of the .bss segment is zeroed out, with the remainder left truly uninitialized, which I assume is to reduce startup time. The linker generates global values that indicate the location and size of the zero-initialized logical segment, which is the first part of the .bss segment.

I'm curious about your microsoft reference
It's from Visual Studio 2005, click on help, search for static, then click on C/C++ link and you get this: http://jeffareid.net/misc/static.jpg .

When BG came out with the 640k comment, the multitasking mainframes had just become available with 1 meg memory just a few years prior.
Back in 1986, when Atari came out with the 68000-based 1040ST system with 1 meg of memory for under $1000, I and other programmers and engineers asked how long it would be before people started commenting, "you only have 1 meg of memory in your computer?" The Atari series eventually reached 4 MB of RAM before Atari sold off its computer division to a European company, the only place where sales were still reasonable. The point here is that a lot of engineers realized that home computer memory sizes were going to keep growing.

Regarding memory sizes on mainframes, high-end IBM 360s and 370s had 1 MB or more of memory during the 1960s and 1970s. By 1985, a Cray-2 supercomputer had 512 MB of memory.
 
Last edited by a moderator:
  • #69
chroot said:
You're an idiot, oldtobor. Why do you keep repeating things? Are you here for discussion, or just to rant about incoherent nonsense? You think there's something fundamental about Unix operating system that precludes the development of a goddamn trivial text editor? There are many editors for Unix that are so similar to EDIT that there's no reason to write another one! Look at pico. LOOK AT PICO. LOOK AT PICO.

What exactly about pico do you not like?!?

- Warren

I downloaded and tried a PICO-for-DOS version. It is not too bad, better than vi, at least simpler. The question was about an exact replica of DOS EDIT, but I think I found out why. During the late 80s and early 90s there were many full-screen DOS programs around, but the Unix environment was very separated from PC users (Unix being very high-end professional). You could more easily find Unix utilities ported to DOS (like awk, already in 1989 by Polytron) than vice versa. It was like someone asking to port DOS BASIC to IBM MVS; it didn't make sense. Then came the Windows GUI, and then Linux. With Linux the possibility of porting anything from DOS to Unix became virtually zero, because DOS was no longer even on the radar and because of hostility toward anything DOS among OSS programmers.

I just wonder how it might have evolved if all those full-screen DOS programs had been ported as exact replicas to the Unix prompt, like Turbo Pascal, QuickBASIC, etc.
 
  • #70
Jeff Reid said:
Regarding memory sizes on mainframes, high end IBM 360's and 370's had 1MB or more of memory during the 1960's and 1970's. By 1985, a Cray 2 super computer had 512MB of memory.
The IBM 360 was constrained to a max of 64k memory.
In the company I worked for, we had five 370s; the biggest was 512k until '78, when they upgraded to a meg.
I think they did it to support TCAM, the predecessor to VTAM.
The IBM 370 was constrained to 268 meg until around '84, when they came out with XA.
That was a PITA due to all the software that had used the upper 4 bits of the address to pass flags.
 
  • #71
oldtobor said:
I just wonder how it might have evolved if all those full screen DOS programs were ported as exact replicas to unix, unix - prompt. Like turbo pascal, or quick basic, etc.
DOS had access to the video buffer and the keystroke buffer.
IIRC the first CRT TTY terminals used for UNIX (and other applications) only transmitted the line the cursor was on when the enter key was hit.
This was a carry-over from paper terminals.
The arrow keys and whatnot were only available to the TTY terminal and not to the OS.
 
  • #72
NoTime said:
The IBM 360 was constrained to a max of 64k memory.

I find that very hard to believe. I was working for a company that used 360/65s around 1970 (before they had virtual memory operating systems), and the standard-sized job streams were set at 100, 180, and 240k of memory (several streams running at once). That makes no sense if the machine had a max of 64k.

Even our little IBM 1130s had 32k words (not bytes) of memory.

Quite possibly the smaller models of 360 (models 20 and 30) were more constrained, though.

By 1985, a Cray 2 super computer had 512MB of memory

Nope, it had 512 Mwords (64 bit) = 4 GB.
 
  • #73
AlephZero said:
Quite possibly the smaller models of 360 (models 20 and 30) were more constrained, though.
The ones I was thinking of were models 20 and 30.
 
  • #74
oldtobor said:
With linux the possibility of porting any DOS to unix became virtually zero because DOS was no longer even on the radar and because of hostility for anything DOS by OSS programmers.
I wonder, then, how the DOSBox project exists.
 
  • #75
NoTime said:
PS: vi is a horrible editor.
It was, however, a lot better than card punch.
Considering modern editors, I find it hard to believe it's still around for use.

You seem either to be forgetting or not to know that standards typically dominate the UNIX world (note, I said UNIX world, which excludes UNIX-like operating systems such as Linux), and what dominates standards is usually the set of tools that existed on multiple variants of the operating system for a long period before the standard was formed. In this case we are talking about ed/ex/vi, which were around for some time before POSIX 1003.2-1992 (the specific POSIX standard that mandates which commands/utilities must exist on a compliant system).

Simply put, the reason editors like ed, ex, and vi are still around is that the standard most UNIX variants follow mandates them, so users of those systems are guaranteed to always have such editors available. Even if a system isn't POSIX-compliant, such as a BSD release that predates POSIX, vi will still be around, because vi originated on a very, very old BSD release (2BSD or so, I think). If a user is going to choose an editor to learn, it is reasonable and advantageous to choose the editor that will be available everywhere.

Also, more 'modern' editors like emacs, nedit, etc. are not going to be available on every system, even modern ones like AIX or Solaris, without installing additional freeware, which may or may not be practical in a production environment with strict standards specifying what can be installed.
 
Last edited:
  • #76
Hurkyl said:
I wonder, then, how the DOSBox project exists.

Excellent find! There's oldtobor's EDIT right there, running on Linux. As if, somehow, pico isn't similar enough.

- Warren
 
  • #77
D H said:
Scary, but true: some flight software is now written in C++. Fortunately, many of the features of C++ that are touted as "attributes" of the language are forbidden in this use: operator overloading, multiple inheritance, templates, runtime binding: all verboten. Some of these features are also touted as attributes of Python.
Huh? You say it like those are bad things.

Yes, a programmer inexperienced with those tools can do a lot of damage, but so can someone inexperienced with a circular saw. That doesn't make circular saws a bad thing. :-p
 
  • #78
runtime binding
How is runtime binding significantly different from using pointers to functions? In the old days, on some systems where every instruction cycle counted, one way to speed up interrupt processing was to change the interrupt vector address (a pointer to a function) at each step, to eliminate the extra time it would take to do an indirect jump via software.
 
  • #79
Hurkyl said:
Huh? You say it like those are bad things.

They (operator overloading, multiple inheritance) have the potential to be very bad things. Why do you think operator overloading and multiple inheritance were intentionally left out of Java?

For some rather strong opinions regarding operator overloading, read this thread on operator overloading in Java:
http://forum.java.sun.com/thread.jspa?forumID=54&threadID=489919
 
Last edited by a moderator:
  • #80
D H said:
They (operator overloading, multiple inheritance) have the potential to be very bad things. Why do you think operator overloading and multiple inheritance were intentionally left out of Java?
Circular saws have the potential to do very bad things too. That doesn't make them a bad tool. :-p (Wait, didn't I already say that?)

Operator overloading and a certain portion of multiple-inheritance functionality were intentionally left out of Java because Java has different design goals.
 
  • #81
No comment on the advisability of building flight software using circular saws.

However, I did once spend a long night helping ferry the "walking wounded" to hospital, after a plane crash caused partly by the fact that somebody managed to wire up the flight deck on a commercial airliner so a problem with engine 1 lit up the warning indicators for engine 2, and somebody else inspected what they had done and said "yeah, that's OK". The consequence was the flight crew attempted a one-engine landing, except they shut down the engine that was working properly, not the other one.

By all means tell me that sort of thing will never happen with modern software design methodologies, but I won't necessarily believe you.
 
Last edited:
  • #82
graphic7 said:
You either seem to be forgetting or don't know that standards "typically" dominate the UNIX world.

:smile: I had admin responsibilities for a couple of years.
AIX and Sun.
I got to do a lot of work on things that should have evolved into no-brainers.
I would think that after 25 years or so they could have come up with better "standards".
 
  • #83
Hurkyl said:
Circular saws have the potential to do very bad things too. That doesn't make them a bad tool. :-p (Wait, didn't I already say that?)

Operator overloading and a certain portion of multiple inheritance functionality was intentionally left out of java because java has different design goals.

If you really want to get into multiple inheritance functionality then try some UNIX X-Windows coding.
IMO 3/4 of the development time and half the code is spent overriding inheritance.

IIRC, Java's original design goal was to be a small interpreted language to run on a consumer set-top box's embedded p-code engine.
Now it envisions itself as a competitor to C++.
Its original goal seems to have disappeared.

While circular saws may not be a bad tool, they certainly are dangerous.
 
