Xezlec
Crosson said: The mainstream position is that reliability, writability, etc. are trade-offs in the short term, and that they all improve in the long term. I reference you to Fortran - less writable, less readable, less reliable, less functional, etc. than languages created more recently.
I don't know Fortran, but what I've been told is that it was (and still is) uniquely suited to be compiled to very fast code (faster than compiled C code). I've discussed this with people who use it to this day, and they point out things that seem like flaws, and show how those things improve "compilability". I also think it's considered more reliable than the average modern language, though I'm not sure.
I acknowledge that old versions of Fortran (such as F77) have some limitations that are not really necessary. That is, they have some drawbacks that don't give you anything in return (for example, 6-character function names). But my understanding was that as it has matured, it has "kept up" with the latest generation of programming languages in these respects (the 6-character limit is gone, for example), and has remained mostly a language optimized for compilation to fast code.
I suppose it can be said that genuine improvements have been made since the 1970s, but that was a long time ago, when "computer" was still an odd word to most people. Computers today are important enough and widespread enough that I think most major programming languages have come close to the edge of what can be achieved without trading off something. An exception to this is that most languages today are text-based, and it is conceivable that more sophisticated representations could become popular.
Crosson said: It is evidence that, if we solved syntax, we would solve it all. You are acting as if NP has been shown to be not equal to P. The fact that syntax processing is NP-complete, and yet the human brain seems to do it in polynomial time, suggests to me that P = NP. I really hope P = NP.
That's a surprising assertion. The human brain processes syntax quickly, but the syntax we use is simple. I don't see how you get from that to the conclusion that we do it in polynomial time. We could even be doing it in exponential time; there's no way to know, since we don't know the "constant factor" involved and since the instances we solve are very small. And people make syntax errors all the time (you've made a few in this thread, and I probably have too). The more complex the syntax, the more errors people make; at least that's always been true for me. Maybe you would argue that I haven't had enough time to "adapt".
I'm going to have to create an example syntax that you can't figure out. Give me some time to work on it.
Also, I (along with the majority of CS experts) seriously doubt that P = NP (new thread, maybe?). The problem has been explored so much, the NP-complete problems "feel" so similar to each other, and the known P problems "feel" so different from them, that it just seems intuitively unlikely that they belong to the same class. Besides which, both have been explored so thoroughly and aggressively that I think the counterexample or "bridge" between the two would have been found by now if it existed.
Crosson said: Axioms are not eligible for proof or disproof (I will excuse this, and not question that you don't know what an axiom is).
I was specifically pointing out that it would be nonsense to make such a claim.
Crosson said: The theorem you linked me to has very little to do with the topic at hand, besides analogy (show me wrong, describe exactly how the theorem applies to what we are talking about, or reference someone who has).
I didn't mean it as only an analogy; I meant it quite literally. A programming language is literally a coding (think of the ASCII codes of the characters, if you need a bit stream). Programmer goofs are literally noise in this coding (the bit stream ends up representing something different from the sequence of instructions in the programmer's mind). A compiler catching errors is literally using redundant information in the coding to detect bit errors (usually multi-bit ones).
Therefore, the number of errors that make it through the "channel" (the human-computer interface) undetected depends directly on how redundant your coding is.
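To make that channel picture concrete, here is a toy sketch in C (my own illustration, not anything from this thread): one redundant parity bit appended to a 7-bit message lets the receiver detect any single flipped bit, whereas a coding with zero redundancy treats every received word as valid and can detect nothing.

/* Toy parity-check demo: the one "extra" bit is the redundancy that
   makes error detection possible at all. */
#include <stdio.h>

/* 1 if an odd number of the low 8 bits are set, 0 otherwise */
static int parity(unsigned char byte) {
    int p = 0;
    for (int i = 0; i < 8; i++)
        p ^= (byte >> i) & 1;
    return p;
}

int main(void) {
    unsigned char msg      = 0x2A;                      /* 7-bit message       */
    unsigned char sent     = (msg << 1) | parity(msg);  /* append parity bit   */
    unsigned char received = sent ^ 0x10;               /* channel flips a bit */

    /* Valid codewords have even overall parity, so any single-bit error
       lands outside the codebook and is detected (though not corrected). */
    if (parity(received) != 0)
        printf("error detected\n");
    else
        printf("no error detected\n");
    return 0;
}

The compiler in this argument plays the same role as the parity check: the "unnecessary" extra typing a language demands is exactly what gives it something to check the program against.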
Crosson said: I find this 'theorem intimidation' to be a form of sophistry. The fact is that mathematics is inexact in its application to reality, and impossibility theorems are especially slippery to apply: usually someone circumvents them by a loophole in the hypothesis.
I don't mean to intimidate. Maybe there is a loophole, but I've thought this way for some time, so if I am wrong I would honestly like to know how. When I look at languages that overspecify, which require you to type unnecessary extra information to do things, I find that the compiler catches more errors, because I'm unlikely to make the same mistake everywhere.
Let me use a concrete example. Imagine a language that does not require you to declare variable types. If the type is clear from the operations you perform on a variable, that information is redundant, and declarations are unnecessary.
But languages which require declarations remain popular precisely because they require this redundant information. With that information, the compiler can detect when you have mistakenly typed a nonexistent variable (like your MATLAB example), or when you are abusing a variable (trying to do something that doesn't apply to that data type). In a less overspecified language, these errors would compile just fine, but would be bugs.
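For concreteness, here is a minimal C sketch of those two failure modes (my own illustration, not code from the thread): because total has to be declared, both mistakes are rejected at compile time instead of becoming silent bugs.

#include <stdio.h>

int main(void) {
    double total = 0.0;    /* the declaration is the "redundant" information */

    /* A misspelling like the MATLAB example: "totel" was never declared,
       so the compiler rejects the program instead of quietly creating a
       new variable.
           totel = total + 1.0;     // error: 'totel' undeclared

       Abusing the declared type is caught at compile time too:
           total = "forty-two";     // error: assigning a char pointer to a double
     */

    total = total + 1.0;    /* the intended statement */
    printf("total = %f\n", total);
    return 0;
}

Uncommenting either line inside the comment makes the build fail, which is exactly the error detection the redundant declaration buys.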