How does ad hoc-language architecture impact performance?

In summary: yes, there are some clunky features in Python, Perl, and PHP, but on the whole they're much more elegant languages than Scheme and Smalltalk, and they definitely have greater expressive power.
  • #1
Kajahtava
I was just reading this, and it's so pervertedly ad hoc that it makes me a little sad. I've read in a lot of places that Ruby is 'elegant', but I can't see it. It's just adding language construct after language construct, feature after feature, with no internal cohesion, to the point that it's no longer feasible to write a formal specification for it. I doubt even Matz himself really knows exactly what's going on; all he knows is that it seems to work.

Python and Perl are the same: so ad hoc that it's depressing some universities have even switched from Scheme to Python as a teaching language. There's no internal cohesion in Python; it's almost impossible to explain to anyone exactly how Python works on a formal level. It's a language people learn by trial and error. Since PHP 5.3, PHP apparently supports 'closures', but the implementation is a profoundly ugly hack that just shows the language was never 'designed': features were added later and conflict with earlier choices.

It's like the difference between physics, where almost everything can be shown to follow from the standard model, and sociology, where each situation has completely unrelated foundations and changing one won't affect the others. It's not only ugly; it also hampers performance in practice and mutilates the brains of programmers, who no longer understand what they are doing. It's the difference between having studied English at the linguistic level, understanding the grammar and how the words and morphology came to be, and merely being a native speaker: sure, a native speaker doesn't make errors, but you don't really know why what you say is right or wrong, and you can't explain it.
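
To make the trial-and-error point concrete, here is a minimal Python sketch (my own illustration, not from any spec) of scoping behaviour that nobody predicts from first principles, only from having been bitten by it:

```python
# The classic late-binding pitfall: each lambda closes over the
# variable i itself, not over its value at the time it was defined.
def make_counters():
    counters = []
    for i in range(3):
        counters.append(lambda: i)
    return counters

print([f() for f in make_counters()])        # [2, 2, 2], not [0, 1, 2]

# The idiomatic workaround forces early binding with a default argument.
def make_counters_fixed():
    return [lambda i=i: i for i in range(3)]

print([f() for f in make_counters_fixed()])  # [0, 1, 2]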

In the end, these languages seem to want to be some hybrid of Lisp and Smalltalk, or at least to gain the sheer expressive power both Lisp and Smalltalk have, but they implement that power by creating dedicated language constructs which often contradict each other outright, and then an order of precedence has to be defined, and so on. Closures can be implemented in Lisp and Smalltalk easily; it just follows out of the language, and no absurd syntactic features are needed to have variable-arity functions. You can easily define functions or methods with parametrized names in a loop in Lisp without having to convert a string to a method name in such an ugly fashion; it isn't a thing that was built into the language just for that; it follows concisely from how the language is built, the same way you loop 99-bottles, because that language was designed with considered thought, not by adding feature after feature.
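
For contrast, here is a sketch of what parametrized method definition in a loop looks like in Python (the names Bottles and make_verse are made up for illustration): the method name has to be assembled from a string and bolted onto the class with setattr.

```python
# Defining 99 methods in a loop; each name is built from a string.
class Bottles:
    pass

def make_verse(n):
    # The function argument binds n early, one fresh closure per verse.
    def verse(self):
        return f"{n} bottles of beer on the wall"
    return verse

for n in range(99, 0, -1):
    setattr(Bottles, f"verse_{n}", make_verse(n))

print(Bottles().verse_99())  # 99 bottles of beer on the wall
```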

It'd be almost impossible to compile any of these languages because they're so damned laden with bloated bells and whistles that you'd have to ship 90% of the interpreter along with the compiled code. And no, having higher-order functions, dynamic typing, and eval constructs does not imply a language is hard to compile. In fact, McCarthy's proto-Lisp, from which all these things originated, got a working compiler within months, a record at the time, because that language was, for its time, sheer brilliance. It wasn't made attractive by piling on feature after feature; it was brilliant and original from the first stone. Hell, Haskell at some points compiles to more efficient code than C, even though higher-order functions are the cornerstone of that language.

It makes me sad...
Discuss... (preferably disagree, I feel like a good debate)
 
  • #2
Kajahtava said:
there's no internal cohesion in Python; it's almost impossible to explain to anyone exactly how Python works on a formal level. It's a language people learn by trial and error
Um, the core language specification for Python is actually pretty detailed (the big modules like NumPy/SciPy, not so much), but it would just confuse most people if you tried to explain to them how any language works on a formal level. All languages start out being taught at a trial-and-error level and then work up to the formal specifications. Scheme, Smalltalk, and Lisp may seem more formalized just because they're usually first introduced in something like a language-paradigms course (which is all formalization). Math works the same way: calculus theorems are taught long before the formal proofs behind them.

it isn't a thing that was built into the language just for that; it follows concisely from how the language is built, the same way you loop 99-bottles, because that language was designed with considered thought, not by adding feature after feature
So the whole concept of Turing completeness pretty much says that anything can be implemented in any true language, but sometimes things are built into languages (written in the language) simply because they're done often and are therefore convenient to have. And are you sure you're not confusing libraries with core languages? The cores of Python/Perl/C/etc. are actually pretty condensed, with a lot of the features being commonly used libraries.

It'd be almost impossible to compile any of these languages because they're so damned laden with bloated bells and whistles that you'd have to ship 90% of the interpreter along with the compiled code.
So? An interpreted language isn't meant to be compiled. Different things for different functions and all that jazz.

Hell, Haskell at some points compiles to more efficient code than C, even though higher-order functions are the cornerstone of that language.
Haskell is functional and C is procedural, which makes them each uniquely suited for different tasks. Even if Haskell compiles faster, the development time needed to do something in Haskell may not be worth it.
 
  • #3
story645 said:
Um, the core language specification for Python is actually pretty detailed (the big modules like NumPy/SciPy, not so much), but it would just confuse most people if you tried to explain to them how any language works on a formal level. All languages start out being taught at a trial-and-error level and then work up to the formal specifications. Scheme, Smalltalk, and Lisp may seem more formalized just because they're usually first introduced in something like a language-paradigms course (which is all formalization). Math works the same way: calculus theorems are taught long before the formal proofs behind them.
Which I think is a mistake. The point is that most conventional mathematics can be reduced to the axioms of ZFC; the rest just follows from symbolic definitions. I.e., we define the symbol 0 as shorthand for {}, we define 1 as shorthand for {0}, we define S(n) as shorthand for n ∪ {n}, and we define that for all x and for all y: x + 0 = x and x + S(y) = S(x + y). From that it can consistently be shown, and checked by a proof-checking algorithm, that 0 + 1 = 1.
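
A toy version of that construction, sketched in Python with frozensets standing in for ZFC sets (the helper pred is mine, not part of the definition):

```python
# 0 := {}, 1 := {0}, S(n) := n ∪ {n}, addition by the two Peano clauses.
ZERO = frozenset()                    # 0 := {}

def S(n):
    return n | frozenset([n])         # S(n) := n ∪ {n}

ONE = S(ZERO)                         # 1 := {0}

def pred(y):
    # For a von Neumann numeral, the largest element is its predecessor.
    return max(y, key=len)

def add(x, y):
    if y == ZERO:                     # x + 0 = x
        return x
    return S(add(x, pred(y)))         # x + S(y') = S(x + y')

assert add(ZERO, ONE) == ONE          # 0 + 1 = 1, mechanically checked
```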

The point is that Scheme is more formalized because a formal specification exists: given a program and an output, a computer can verify the correctness of an implementation against it (this does not violate Rice's theorem).

I think there's a very good reason that languages like PHP, Python, and Ruby A: have no formal specification, only a reference implementation, and B: cannot reasonably be compiled. PHP is really a bad puppy in this; it's often no longer clear whether something is a library function or a language construct.

So the whole concept of Turing completeness pretty much says that anything can be implemented in any true language, but sometimes things are built into languages (written in the language)
This is a misinterpretation of Turing completeness that people often make. Turing completeness simply says that it's possible in PHP and C and Ruby and whatnot to write the same functions from N to N, or, if the language does not support functions, more generally it means:

A: They support encoding natural numbers in some way that satisfies the Peano axioms
B: On an input of such an encoding they can all express exactly the same outputs
C: On the assumption they have infinite memory and time to do it

This does not at all mean they have the same language features, like dynamically renaming variables and all that; it just guarantees that they can at least express one of the infinite ways to Rome. There are a lot of very minimal formalisms that are Turing complete, Conway's Game of Life being an interesting one. Encoding natural numbers in C is easy: take an unsigned integer. In Conway's Game of Life you have to encode them in the starting positions of the cells in some way.
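
To make condition A concrete, here's a sketch of one such encoding in Python, Church numerals built from nothing but closures (zero, succ, add, and to_int are illustrative names): a number is just "how many times a function gets applied".

```python
# Church numerals: n is a function that applies f to x exactly n times.
zero = lambda f: lambda x: x                      # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))   # one more application

def to_int(n):
    return n(lambda k: k + 1)(0)                  # decode by counting

three = succ(succ(succ(zero)))
print(to_int(three))                              # 3

# Addition without touching Python's own arithmetic on the numerals:
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
print(to_int(add(three)(three)))                  # 6
```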

simply because they're done often and are therefore convenient to have. And are you sure you're not confusing libraries with core languages? The cores of Python/Perl/C/etc. are actually pretty condensed, with a lot of the features being commonly used libraries.
The core of C is, yes. C is maintained as an open specification; Python, Perl, PHP, and Ruby, however, are maintained by means of a reference implementation. C is internally logical and concise, and the other parts can be derived from the bottom up; though it's not as concise as Scheme, it was definitely designed from the ground up. Even though they share some syntax, Perl and PHP are completely different in this respect: they were built hack upon hack, which is quite possible in interpreted languages but harder if you want to compile them. Also note that 'high level' needn't mean 'hard to compile' here.

So? An interpreted language isn't meant to be compiled. Different things for different functions and all that jazz.
The reason it's not 'meant to be compiled' (it's impossible to do so) is all that ad hoc design. Writing a compiler for any language which is internally sound, where the top follows from the bottom, is a viable task, because you only need to compile the bottom and the top follows.
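
Here's a toy sketch of that idea in Python (a made-up tuple AST with a hypothetical `let` surface form; desugar is an illustrative name): the surface form reduces to the two core forms, lambda and application, so a compiler only ever needs to handle the core.

```python
# Desugar ("let", name, value, body) into core lambda + application.
def desugar(expr):
    if isinstance(expr, tuple) and expr and expr[0] == "let":
        _, name, value, body = expr
        # (let x = v in b)  ==>  ((lambda x. b) v)
        return (("lambda", name, desugar(body)), desugar(value))
    if isinstance(expr, tuple):
        return tuple(desugar(e) for e in expr)   # recurse into subterms
    return expr

print(desugar(("let", "x", 1, ("add", "x", "x"))))
# (('lambda', 'x', ('add', 'x', 'x')), 1)
```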

Those languages were just made by adding feature after feature for the sake of having more features, without any thought for the ramifications; it's HQ9+ all over again. I mean, it's not as if you have to sit down and 'design' that feature of Ruby from some mathematical principles and work it out; you just have to want it to be there, and you can add it to your implementation if you work in an 'interpreted language' (which seems to be just an excuse for adding feature after feature without having to think about the principles behind them).

In Lisps and Smalltalk these features also exist; it's perfectly possible to define a bunch of methods or functions via a parametrization, at runtime, while writing a compiler for most Lisps and Smalltalks is in fact easier than writing one for C. These languages arose from careful thought and planning, not from 'adding more and more features that people want in them'.

Is there any advantage to not being able to compile a language? Surely not; it's a drawback, so something has to be gained in exchange. In this case it's the rich functionality they afford, a billion different paradigms thrown at each other, although a lot of languages also show that the same expressive power can be achieved in a compiled language, which is comparatively simpler to implement, even. These languages were never 'designed' by a team of researchers the way those other languages were; it was just 'decided' what features they should have, and those were thrown in. A proper programming language is often accompanied by a whole list of mathematical proofs about what can and can't be done in it.

Haskell is functional and C is procedural, which makes them each uniquely suited for different tasks. Even if Haskell compiles faster, the development time needed to do something in Haskell may not be worth it.
Haskell does not compile faster; compiling Haskell can take quite some time. With GHC it sometimes compiles to faster code, which is a different claim.

Also, Haskell code is very brief, a lot briefer than C. But my point was just to displace the often-cited misconception that it's 'impossible' to compile a language with higher-order functions and that such languages have to be interpreted.
 

1. What is ad hoc-language architecture?

Ad hoc-language architecture is a type of software architecture that is designed for a specific use or purpose, rather than being a general-purpose language. It is typically created to solve a specific problem or address a particular set of requirements.

2. How does ad hoc-language architecture differ from other types of software architecture?

Unlike other types of software architecture, such as object-oriented or client-server, ad hoc-language architecture is not meant to be used for a wide range of applications. It is tailored to meet the needs of a specific project or task, and may not be suitable for other purposes.

3. What are the benefits of using ad hoc-language architecture?

One of the main benefits of ad hoc-language architecture is its flexibility and adaptability. It can be customized to fit the specific requirements of a project, which can lead to improved efficiency and performance. Additionally, it can be easier to understand and maintain since it is designed for a specific use.

4. Are there any drawbacks to using ad hoc-language architecture?

One potential drawback of ad hoc-language architecture is its limited scope. Since it is designed for a specific use, it may not be suitable for other projects or tasks. Additionally, it may require more time and resources to develop compared to using a more widely used software architecture.

5. How can ad hoc-language architecture be implemented in a project?

Ad hoc-language architecture can be implemented by first analyzing the specific requirements and needs of the project. The architecture can then be designed and developed to meet those requirements, using a combination of existing design patterns and new elements as needed. It is important to constantly review and adapt the architecture as the project progresses to ensure it continues to meet the project's needs.
