In summary: The article explains how recursion can make some seemingly hard problems trivial to solve, where recursion itself can become a source of difficulty, and how to avoid some common pitfalls.
  • #1
Recursion is quite simple: it's a subroutine calling itself. It's surprising, but some problems that look quite hard can be trivial using recursion – but be wary – as I hope to explain, there are traps you need to consider, especially if working in a team environment.
When I studied Computer Science, the first language I learned was FORTRAN. You can't easily do recursion in FORTRAN (my lecturer went so far as to say it cannot be done, but a Google search will show that's not quite true – however, for this article, I will assume you can't). At the end of the FORTRAN part of the course, they devoted a couple of weeks to Pascal to explain its advantages and disadvantages compared to FORTRAN. Pascal does allow recursion, so that's where I first cut my teeth on it. The other major language in those far-off days was COBOL. Some versions allow it, but the general advice is, like FORTRAN – forget it. However, FORTRAN...

Continue reading...
 
  • Like
Likes WWGD, Drakkith, Klystron and 6 others
  • #2
That was a good article. It is true that many of us, especially beginners, tend to use recursion without even understanding its complexity for larger inputs. While teaching Python, our Professor stressed the same point. Often, when we need to find the sum of a series correct up to a certain number of decimal places, we tend to do it in a straightforward fashion. But our Professor stressed that we should instead try to find how the nth term depends on the (n-1)th term, and then use that in the sum. In this way, factorials can often be avoided completely, as in the sketch below.
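For instance (a minimal sketch of my own, not from the course), summing the Taylor series for e^x using the term recurrence term_n = term_(n-1) * x / n means no factorial is ever computed:

Python:
import math

def exp_series(x, tol=1e-12):
    # nth term = (n-1)th term * x / n, so no factorial is ever formed
    total, term, n = 1.0, 1.0, 1
    while abs(term) > tol:
        term *= x / n
        total += term
        n += 1
    return total

print(exp_series(1.0), math.e)  # both ~2.718281828459045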

I do remember that when I wrote data-structure programs in Java using singly linked, doubly linked, or circular linked lists, I would often face a StackOverflowError, presumably because the recursion didn't terminate as the base case was ill-defined.
 
  • Like
Likes bhobba
  • #3
Wrichik Basu said:
I would often face a StackOverflowError, presumably because the recursion didn't terminate as the base case was ill-defined.
This is a good lesson about using recursion -- namely, that you need a clear-cut way for recursion to terminate. Not mentioned in the article was how recursion relies on the stack for parameter passing.

One problem with recursive functions is that even if you have a proper terminating case, it's still possible to overflow the stack if the routine calls itself too many times. This is less of a problem with modern computers due to the much larger RAM available for stack storage.
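For example (a CPython illustration of my own, not from the article), Python guards its stack with a fixed recursion limit, so even a properly terminating function can fail if it recurses too deeply:

Python:
import sys

def depth(n):
    # Each call adds a stack frame until n reaches the base case.
    return 0 if n == 0 else 1 + depth(n - 1)

print(sys.getrecursionlimit())  # typically 1000 in CPython
print(depth(900))               # fine: prints 900
# depth(100_000) raises RecursionError: maximum recursion depth exceeded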
 
  • Like
Likes WWGD, bhobba, Klystron and 1 other person
  • #4
Stack isn't a problem with many functional languages because of tail call elimination.
 
  • #5
DavidSnider said:
Stack isn't a problem with many functional languages because of tail call elimination.

This only works for a certain type of recursive function: one where the final answer is produced by the last call and simply bubbles back to the top. The naive factorial, in contrast, is not amenable to tail-call elimination, because it modifies the result as it bubbles back up:

Python:
def factorial(arg):
    if arg == 0:
        return 1
    else:
        return arg * factorial(arg - 1)

I think it could be modified to do that though:

Python:
def factorial(arg, result):
    if arg == 0:
        return result
    else:
        return factorial(arg - 1, result * arg)

to run:
Python:
factorial(arg,1)
 
  • Like
  • Informative
Likes WWGD, Klystron and pbuk
  • #6
There's also a modern variant to Lisp: Clojure
  • Like
Likes Klystron and anorlunda
  • #7
Said one mirror to her mother
as they gazed at one another:
I see an eternity unfolding
Endlessly onward like no other.
 
  • Like
Likes harborsparrow, Klystron and Wrichik Basu
  • #9
Originally I didn't understand how to use recursion (even for simple problems). The reason was [and this is probably true for many other people too] that while I understood on a (mechanical) level what it did [or at least had a good idea], I didn't get how its use was justified for solving common textbook-like problems.

Later on, four or five years later (after being out of undergraduate), when I picked it up again I managed to understand how to apply it to solve very simple (textbook-like) problems. The assumption that a call to a smaller sub-routine would solve the smaller problem now made sense to me. One of the reasons was that I felt that (if needed) I could get down to it and justify a (near) formal proof [in an idealised setting ... possibly after removing some extra burden/rules of a given language] using induction.

Two or three years later again (just around the time of joining this forum), I decided to formulate a (finite) computational model for myself [not for execution on anything real etc.]. There were three or four reasons I felt it was a good idea, which I won't go into in detail here. I am not educated in the formal study of PLs, so I [somewhat informally] wrote the rules to a certain reasonable level of detail (mostly just for myself).

One thing I realized was that [at least in the (very) simple procedural-style language that I set up] a language using recursion can simply be viewed as one where, when you write down in detail the "dependence chart" of how the functions call/use other functions, there is a cyclical path. In other words, the "call dependence hierarchy" (or whatever we want to call it) can contain a cycle.

Anyway, just a basic observation that I thought I might share. I don't have a particularly good idea of how this definition would generalise as we add more complex and extra rules to a PL.
 
  • Like
Likes bhobba
  • #10
Yes, a good example of this is when you make an expression parser. You can use recursion by having the parser call itself to evaluate parenthesized sub-expressions, nested to any depth limited only by your memory. As an example:

The expression ((2+3)*4 + 5*(6-7))/(5+7)
ParseExpr sees ((2+3)*4 + 5*(6-7)) and (5+7)

Called again for the left expression (2+3)*4 + 5*(6-7)

Called again for 2+3, returns 5
Then evaluates 5*4 to get 20

Called again for 6-7, returns -1
Then evaluates 5 times -1 to get -5
Then evaluates 20 + (-5) to return 15

Called again for 5+7, returns 12
...
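To make the idea concrete, here's a minimal recursive-descent evaluator of my own (a sketch, not the actual ParseExpr described above); each grammar rule is a function, and a parenthesized sub-expression is handled by factor() recursing back into expr():

Python:
import re

def evaluate(text):
    # Tokenize into numbers, operators and parentheses.
    tokens = re.findall(r'\d+\.?\d*|[()+\-*/]', text)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def take():
        nonlocal pos
        pos += 1
        return tokens[pos - 1]

    def expr():  # expr := term (('+' | '-') term)*
        value = term()
        while peek() in ('+', '-'):
            op, rhs = take(), term()
            value = value + rhs if op == '+' else value - rhs
        return value

    def term():  # term := factor (('*' | '/') factor)*
        value = factor()
        while peek() in ('*', '/'):
            op, rhs = take(), factor()
            value = value * rhs if op == '*' else value / rhs
        return value

    def factor():  # factor := NUMBER | '(' expr ')'
        if peek() == '(':
            take()          # consume '('
            value = expr()  # recursion handles any nesting depth
            take()          # consume ')'
            return value
        return float(take())

    return expr()

print(evaluate("((2+3)*4 + 5*(6-7))/(5+7)"))  # 15/12 = 1.25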
 
  • Like
Likes SSequence
  • #11
jedishrfu said:
Said one mirror to her mother
as they gazed at one another:
I see an eternity unfolding
Endlessly onward like no other.

If you want to understand recursion better, I suggest learning the programming language LISP. It was the second language developed, just after FORTRAN, but is influenced by different principles, namely the Lambda Calculus. For some info about LISP (including writing a Lisp interpreter) and the Lambda Calculus, plus some info on an interesting language called Haskell, see:
https://crypto.stanford.edu/~blynn/lambda/
For the seminal paper on LISP see (note only for advanced computer science students):
https://aiplaybook.a16z.com/reference-material/mccarthy-1960.pdf
For the average hacker - yes, I progressed from programmer to hacker :cool: - the following is better:
http://languagelog.ldc.upenn.edu/myl/ldc/llog/jmc.pdf
I read somewhere of an interesting experiment where they asked programmers to solve problems in various programming languages to see which language got a working program fastest. Surprise, surprise – the winner was the second-oldest language, Lisp. Amazing. BTW, I do not use LISP – I use Python, then tweak it for performance with LuaJIT if I have to:
http://alexeyvishnevsky.com/2015/05/lua-wraped-python/
Another good exercise is what my teacher made us do – write Quicksort in assembler. Actually, these days you would use C, which allows the embedding of assembler. You learn a lot, but I do not have fond memories of doing it – I avoid assembler whenever possible – I find LuaJIT plenty fast enough and much easier to understand and maintain.

Actually, these days I do very little programming – 30 years of it was enough for me. I remember when I started I loved programming, but slowly but surely it lost its 'charm'. That was another reason to try and break into 'management', but that has its own issues that would be way off topic for this thread.

Thanks
Bill
 
  • #12
jedishrfu said:
I think it could be modified to do that though

With a simple refinement using Python's ability to define a default argument you can eliminate the need to include the "result" variable in the initial call:

Python:
def factorial(n, result=1):
    if n == 0:
        return result
    return factorial(n - 1, result * n)
 
  • Like
Likes jedishrfu
  • #13
jedishrfu said:
Yes, a good example of this is when you make an expression parser. You can use recursion by having the parser call itself to evaluate parenthesized sub-expressions, nested to any depth limited only by your memory.
Just to clarify a bit, there are two different senses in which one can interpret "call-dependence hierarchy". One is the aspect of evaluation. The other is a dependence that can be tracked just by looking at the program.

============

The evaluation aspect is how a function, say with a signature like:
int f(int x)
is evaluated when we make a specific call like f(100) etc. That is, how many further function calls are required (and with what structure+rules) to evaluate the given value f(100).

As I understand it, if the language is very simple, then giving a precise delineation of such rules (for recursion) isn't that difficult (though it would definitely take some work to state precisely).

============

However, I was using the term in the following sense (just giving one specific example; a toy sketch follows it):
Suppose that the function "f" (in its body) calls "g"
The function "g" (in its body) calls "h"
The function "h" (in its body) calls "f"

So there is a cyclical path in the sense of:
f→g→h→f
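As a toy illustration in Python (my own), such a cycle terminates only because some base case eventually breaks it:

Python:
def f(n):
    # f calls g, g calls h, h calls f: a cycle in the "dependence chart"
    return "f-done" if n == 0 else g(n - 1)

def g(n):
    return "g-done" if n == 0 else h(n - 1)

def h(n):
    return "h-done" if n == 0 else f(n - 1)

print(f(7))  # the chain f,g,h,f,g,h,f,g stops at g's base case: g-done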

============

Anyway, I am not particularly knowledgeable about the topic (apart from just knowing the basics) so I will give it a rest here.
 
  • #14
bhobba said:
If you want to understand recursion better I suggest learning the programming language LISP.
Which is something I am attempting to do at the moment. My first experience with Lisp was in about 1982, with an implementation for the Apple II computer. I couldn't see the point of it at the time. Since then, I've realized that there are a lot of good reasons to become acquainted with this language.

Since this is a thread about recursion, I should mention that virtually every discussion of recursion in any programming language that supports this feature includes the factorial function as an example. It's almost as if there were no other possible uses. Because I teach classes in C++ and MIPS assembly, I try to come up with examples other than those that typically are in textbooks.

Here's an example in Lisp that I wrote that converts a nonnegative decimal integer to its hex form.

Code:
(defun to-hex(num hexDigitsList)    
    (defvar quot -1)
    (defvar remainder -1)

    (setq quot (floor (/ num 16)))
    (setq remainder (mod num 16))
    (push remainder hexDigitsList)    
    (if (> quot 0)
        (to-hex quot hexDigitsList)
        (progn    
            (format t "Hex:    0x")
            (loop for hexDigit in hexDigitsList
                do (format t "~X" hexDigit)))
            ))
The underlying algorithm is this:
Given a nonnegative decimal number,
Divide it by 16 and save the (integer) quotient and remainder.
If the quotient is positive, call the routine again with the quotient as the first argument,
Otherwise, display the list of remainders.

For example, if the function above is called with decimal 100 as the argument, the first call produces a quotient of 6 and a remainder of 4.
In the second call, the new quotient is 0 (6 / 16 == 0 in integer division), and the remainder is 6.
Since the new quotient is zero, we display the remainders in reverse order, resulting in the hex value of 0x64.

A routine to convert decimal numbers to their binary form is nearly identical, with the main difference being that you divide by 2 each time rather than by 16.

Any process that needs to produce results in the opposite order from which they were generated is really a natural for recursion. Other examples include displaying a string in reverse order and checking for palindromes.
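For instance, here's the palindrome check as a quick sketch of my own (in Python rather than C++ or MIPS):

Python:
def is_palindrome(s):
    # Base case: zero or one characters is trivially a palindrome.
    if len(s) < 2:
        return True
    # Compare the outer pair, then recurse on the inside of the string.
    return s[0] == s[-1] and is_palindrome(s[1:-1])

print(is_palindrome("racecar"), is_palindrome("racecars"))  # True False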
 
  • Informative
  • Like
Likes Klystron, DrClaude and bhobba
  • #15
If you want to get into LISP the "Structure and Interpretation of Computer Programs" book and lectures are amazing and free. I so wish I had seen these when I first started coding.

https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-001-structure-and-interpretation-of-computer-programs-spring-2005/video-lectures/
 
  • Like
Likes bhobba
  • #17
Fun article @bhobba. Thanks for sharing.

Count me among the programmers who favored KISS over recursion. I never did find a case where I thought it was actually needed – that is, until the day I first read about MapReduce.

https://en.wikipedia.org/wiki/MapReduce
When you introduce the complexity of parallel processing and distributed clusters of computers, a simple loop won't work, and the elegance of MapReduce greatly simplifies things.
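As a toy illustration of the shape of it (my own sketch, with no actual distribution), the map step runs on each chunk independently and the reduce step combines partial results pairwise, so both parallelize naturally:

Python:
from collections import Counter
from functools import reduce

# Word count, MapReduce style: each chunk is mapped to partial counts
# independently, then the partial counts are reduced (combined) pairwise.
chunks = ["to be or not to be", "that is the question"]
mapped = map(lambda chunk: Counter(chunk.split()), chunks)
total = reduce(lambda a, b: a + b, mapped, Counter())
print(total.most_common(2))  # [('to', 2), ('be', 2)]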
 
  • Like
Likes Klystron, Wrichik Basu and jedishrfu
  • #18
The article doesn't do a good job of explaining when to use and when not to use recursion, even though that is the title of the article. A few examples for each case would help.

With recursion, making a problem smaller doesn't make it simpler until it reaches some simpler base case. When developing recursive algorithms, it's common to start with a base case and work backwards to larger cases in order to write a program. The Tower of Hanoi algorithm would be difficult to figure out how to program without starting with the base cases first.

If the data for a process tends to be a nested structure, then the code to work with that data would also tend to be nested (recursive). Examples of "nested data" are tree structures, nested directories on a system, ... .

In cases where data is not nested, such as an array or linked list, algorithms such as sorting algorithms may be recursive or iterative, depending on the algorithm. In the case of quick sort, each level of recursion brings the data closer to a sorted state.

In the case of merge sort using an extra working array and indexing, a top-down recursive approach accomplishes nothing during recursion except to generate and push index pairs onto the stack until an index pair corresponding to a single element is produced, and only then does any actual merging take place. A bottom-up iterative merge sort skips the recursion, treating an array of n elements as n runs of size 1, and starts merging immediately (see the sketch below). Most library merge sorts are some variation of bottom-up merge sort (like timsort), but most learning exercises involve top-down merge sort, and I'm not sure why this happened. For linked lists, iterative bottom-up merge sort is faster than recursive merge sort (which has to scan lists to split them).
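Here's a minimal bottom-up merge sort of my own in Python (a sketch, not any particular library's implementation) to show the run-doubling idea:

Python:
def merge_sort_bottom_up(a):
    # Treat the array as n runs of size 1 and merge adjacent runs,
    # doubling the run width each pass; no recursion, no call stack.
    n = len(a)
    src, dst = a[:], a[:]
    width = 1
    while width < n:
        for lo in range(0, n, 2 * width):
            mid = min(lo + width, n)
            hi = min(lo + 2 * width, n)
            i, j = lo, mid
            for k in range(lo, hi):
                if i < mid and (j >= hi or src[i] <= src[j]):
                    dst[k] = src[i]
                    i += 1
                else:
                    dst[k] = src[j]
                    j += 1
        src, dst = dst, src  # the merged output feeds the next pass
        width *= 2
    return src

print(merge_sort_bottom_up([5, 2, 9, 1, 7, 3]))  # [1, 2, 3, 5, 7, 9]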

Fibonacci is another case where recursion is relatively expensive, as the naive recursive version involves on the order of Fib(n) instances of the function (exponential growth), while an iterative version can be implemented using 2 or 3 variables to hold the sums.
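A quick side-by-side sketch (mine, in Python):

Python:
def fib_recursive(n):
    # Exponential: the call tree has on the order of Fib(n) nodes.
    return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # Linear: two variables hold the running pair of sums.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_recursive(10), fib_iterative(10))  # 55 55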

APL was the first recursive programming language I used, but the main difference in APL is that its native operators (most of them Greek symbols) work on scalars, vectors, matrices, or "tensors" (arrays with more than 2 dimensions).

Cobol and Fortran may not be "modern languages" but they are still in use today.
 
  • #19
jedishrfu said:
This only works for a certain type of recursive function: one where the final answer is produced by the last call and simply bubbles back to the top.

It also only works in certain programming languages or certain compilers/interpreters that eliminate tail calls. CPython, the reference Python implementation, doesn't do tail call elimination and Python's inventor has said they have no intention of changing this, so there's little point in writing tail-recursive code in Python.
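(One workaround sometimes used is a trampoline – sketched below with my own names: the "recursive" step returns a thunk instead of calling itself, and a driver loop keeps invoking thunks until a plain value comes back, so the Python stack never grows.)

Python:
def trampoline(step, *args):
    # Keep calling returned thunks until a non-callable value appears.
    result = step(*args)
    while callable(result):
        result = result()
    return result

def factorial_step(n, acc=1):
    if n == 0:
        return acc
    return lambda: factorial_step(n - 1, acc * n)  # a thunk, not a call

print(trampoline(factorial_step, 10))  # 3628800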
jedishrfu said:
There's also a modern variant to Lisp: Clojure

Clojure is frequently marketed this way. That's misleading, since it suggests Lisp is somehow "unmodern" and obsolete compared with Clojure. The truth is that Clojure is a largely different language from Lisp that does some significant things differently (e.g., lazy vs. eager evaluation, functional vs. multiparadigm), and those differences have advantages as well as disadvantages compared with how Lisp does things.

Running on the JVM is also not unique to Clojure; there are implementations of both Common Lisp and Scheme that do that. There's a difference though: Lisp and Scheme merely can run on the JVM, and can let you access Java libraries if you want, but are otherwise self contained languages. You could write Lisp or Scheme and not care if your code happens to be running on the JVM or somewhere else. Clojure by contrast actively exposes the Java ecosystem. If you use Clojure you very soon find you need to access Java classes and methods to do basic things, and if there are errors in your code you'll be confronted with Java exceptions and Java stack traces.
 
  • #20
Mark44 said:
Since this is a thread about recursion, I should mention that virtually every discussion of recursion in any programming language that supports this feature includes the factorial function as an example. It's almost as if there were no other possible uses.

The thing with recursive functions is that they save a copy of all the function arguments and local variables on the stack for each recursive call. (Unless your code is tail recursive and your language or compiler recognises and eliminates tail calls, as mentioned earlier.)

The factorial is a good example to illustrate what recursion is but it is usually a bad example to illustrate where recursion is useful, since it has no real advantage over the iterative version and it has a serious disadvantage: it uses more memory and it needlessly risks overflowing the stack.

A better example of where recursion is useful is walking a tree. This is because while doing this you need to save, at all times, the path you followed down the tree to get to where you currently are so that you can backtrack. In an iterative version you would need to manually update a stack with this information. The recursive way is much easier and cleaner since this information is already saved on the call stack for you automatically and you can just use that.

A simple tree walker, which applies a given function to every item in a tree, can be done like this in Lisp:
Code:
(defun maptree (function tree)
  "Apply FUNCTION to every item in TREE."
  (typecase tree
    (cons (maptree function (car tree))
          (maptree function (cdr tree)))
    (t (funcall function tree))))
For example, running
Code:
(maptree #'princ '(a (b . c) ((d . e) . f) . g))
prints "ABCDEFG" to standard output.
Mark44 said:
Here's an example in Lisp that I wrote that converts a nonnegative decimal integer to its hex form.

Code:
(defun to-hex(num hexDigitsList)    
    (defvar quot -1)
    (defvar remainder -1)

    (setq quot (floor (/ num 16)))
    (setq remainder (mod num 16))
    (push remainder hexDigitsList)    
    (if (> quot 0)
        (to-hex quot hexDigitsList)
        (progn    
            (format t "Hex:    0x")
            (loop for hexDigit in hexDigitsList
                do (format t "~X" hexDigit)))
            ))
There are some problems with this which you'll probably want to know about if you're learning Lisp. The serious one is the use of defvar inside the function body:
Code:
    (defvar quot -1)
    (defvar remainder -1)
defvar defines a global variable*, so you would rarely if ever want to use it inside a function. If you want to declare local variables, the basic way to do that is with 'let', although in your case it would probably be better to use 'multiple-value-bind', since Lisp's floor function computes both the floor (the result of integer division) and the mod at once and returns them as two separate return values:
Code:
  (multiple-value-bind (quot remainder) (floor num 16)
    ...)

Other than this, some style issues are:
  • Your code is only recursive in the technical sense that your function calls itself. The algorithm you are using is really an iterative one and you don't gain anything from writing it in a recursive way.
  • Part of what your function does is to construct a list of digits (in base 16) of the number. This is interesting and potentially useful enough that it should really be a separate function.
  • Since the procedure is the same in bases other than 16, the base should really be a variable (possibly an optional one with a default value).

Putting it all together I'd probably do something more like this:
Code:
(defun digits (number &optional (base 10))
  "Return a list of digits in NUMBER."
  (loop with digits = ()
        with mod
        while (plusp number)
        do (setf (values number mod) (floor number base))
           (push mod digits)
        finally (return digits)))

(defparameter *base-alphabet* "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ")

(defun digit->symbol (digit)
  (aref *base-alphabet* digit))

(defun print-number (number &optional (base 10) (stream *standard-output*))
  (if (zerop number)
      (princ (digit->symbol 0) stream)
      (dolist (n (digits number base))
        (princ (digit->symbol n) stream))))
In "real" code you might also want to add some type checking, e.g. make sure 'number' is an integer and the base is between 2 and 36.

Lastly, in case it isn't obvious to someone, this only makes sense as an exercise since Lisp's built-in format function already supports printing integers in hexadecimal (actually, any base between 2 and 36) as well as Roman numerals and written English:
Code:
> (let ((x 42))
    (format t "~d = 0x~x = ~@r = ~r.~%" x x x x))
42 = 0x2A = XLII = forty-two.
*defvar also does something else: it globally declares that the variable will be dynamically rather than lexically bound. The first answer to this Stack Overflow question explains what this means. The important thing is that this globally affects how code accessing the variable (or any variable with the same name) works, to the point that Lisp programmers use a special naming convention for global variables to avoid this happening by accident.
Mark44 said:
Any process that needs to produce results in the opposite order from which they were generated is really a natural for recursion.

This doesn't apply in this case. By iterating you go through the digits in reverse order. But then push adds the digits to the head of the list where you're storing them. This is why they end up in the correct order.
 
  • Like
  • Informative
Likes Klystron, bhobba and Greg Bernhardt
  • #21
wle said:
CPython, the reference Python implementation, doesn't do tail call elimination and Python's inventor has said they have no intention of changing this, so there's little point in writing tail-recursive code in Python.

I think there have been some attempts to implement tail call elimination in CPython by bytecode manipulation. I don't know that I would recommend that in a production system, since bytecode manipulation is usually considered non-standard, but it's possible.
 
  • #22
wle said:
A better example of where recursion is useful is walking a tree. This is because while doing this you need to save, at all times, the path you followed down the tree to get to where you currently are so that you can backtrack. In an iterative version you would need to manually update a stack with this information.
In the case of a binary tree, an iterative version can temporarily modify the links between nodes for an in-order traversal, as in a Morris traversal, which requires neither recursion nor a stack. I don't know how well known this is.
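For anyone curious, here's a sketch of the idea in Python (my own code, with a hypothetical little Node class): the traversal temporarily threads each node's in-order predecessor back to it, then removes the thread on the second visit:

Python:
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def morris_inorder(root):
    out, cur = [], root
    while cur:
        if cur.left is None:
            out.append(cur.val)  # no left subtree: visit, then go right
            cur = cur.right
        else:
            pred = cur.left      # find the in-order predecessor
            while pred.right and pred.right is not cur:
                pred = pred.right
            if pred.right is None:
                pred.right = cur   # first visit: create the thread
                cur = cur.left
            else:
                pred.right = None  # second visit: remove the thread
                out.append(cur.val)
                cur = cur.right
    return out

tree = Node(4, Node(2, Node(1), Node(3)), Node(6, Node(5), Node(7)))
print(morris_inorder(tree))  # [1, 2, 3, 4, 5, 6, 7]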
 
  • #23
A couple of additional points:

There are a couple of situations where recursion is prohibited. The first is related to safety standards as in the transportation industry. For example, MISRA (automotive) prohibits recursion because of the difficulties in verifying that the code is operating as designed.
The other situation relates to how the stack is handled on many microprocessors. In those cases the build tools actually position local variables on a "stack" at link time - so the stack is not really dynamic.
So long as there is a known limit on the level of recursion, it is possible to write an algorithm without using recursion. Each level of recursion gives you another copy of your local variables; in the worst case, an array of structures can be used to hold those local variable sets (see the sketch below).
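A sketch of that idea in Python (my own names, nothing standard): a pre-allocated array stands in for the call stack during an in-order tree walk:

Python:
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

MAX_DEPTH = 64  # the known limit on the level of "recursion"

def inorder_explicit_stack(root):
    stack = [None] * MAX_DEPTH  # pre-allocated "local variable" slots
    top, out, node = 0, [], root
    while node or top > 0:
        while node:     # descend left, saving the path ourselves
            stack[top] = node
            top += 1
            node = node.left
        top -= 1        # backtrack to the most recently saved node
        node = stack[top]
        out.append(node.val)
        node = node.right
    return out

tree = Node(2, Node(1), Node(3))
print(inorder_explicit_stack(tree))  # [1, 2, 3]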

Recursion is often the result of more complicated situations than a function calling itself. For example, an executable may open a dialog box that provides GUI options – some of which invoke dialog boxes. In such cases, the second invocation of the dialog-box function is not just many function levels down from its parent dialog; it's actually invoked from a message loop that may be several levels down from the message loop supporting the parent box.
 
  • Informative
  • Like
Likes DrClaude, Wrichik Basu and anorlunda
  • #24
.Scott said:
Recursion is often the result of more complicated situations than a function calling itself. For example ...
That's an excellent point, although I always called that re-entrant rather than recursive. You can get the same thing with real time programs and with parallel processing.

It would be clearer if the title emphasized recursive algorithms rather than recursive subroutine calls.
.Scott said:
There are a couple of situations where recursion is prohibited. The first is related to safety standards ...
That is also an excellent point. There are some analogous prohibitions on use of virtual memory in time-critical safety-related programming.
 
  • Like
Likes Klystron
  • #25
.Scott said:
For example, MISRA (automotive) prohibits recursion because of the difficulties in verifying that the code is operating as designed.

That's interesting. Do they also limit what languages/compilers you're allowed to use?
 
  • #26
DavidSnider said:
That's interesting. Do they also limit what languages/compilers you're allowed to use?
MISRA is specific to embedded automotive systems. I believe it specifies C or C++ standards exclusively - or rather, subsets of several releases of those language standards. As a matter of practicality, it requires automated support for MISRA compliance checking. So if those tools are not available for a development environment, that environment would not be practical to use.
Beyond the compiler, there is a requirement for controlling the impact of any single point of failure. So not just any microprocessor or any operating system can be used.

Wiki for MISRA C
 
  • Informative
Likes anorlunda
  • #27
.Scott said:
MISRA is specific to embedded automotive systems. I believe it specifies C or C++ standards exclusively - or rather, subsets of several releases of those language standards. As a matter of practicality, it requires automated support for MISRA compliance checking. So if those tools are not available for a development environment, that environment would not be practical to use.
Beyond the compiler, there is a requirement for controlling the impact of any single point of failure. So not just any microprocessor or any operating system can be used.

Wiki for MISRA C

Wow. This has led me down a rabbit hole of legal cases involving auto code not being MISRA compliant. Fascinating.
 
  • #28
As described in other posts above I also programmed factorial and Fibonacci functions in Pascal, first applying iteration then recursion at a time when stack / memory was quite limited. Lessons learned from Pascal progressed to practical recursion in Common Lisp. Applied sparingly, recursion made sense implementing tree traversals in game theory IIRC*, some counting algorithms and certain string manipulations such as recognizing palindromes and improving a codec.

Surprisingly, I once implemented recursive code in C/C++ to push/pull bits within specialized flags on Sun Solaris servers. With space at a premium, I was able to code error signals from application function calls into a few 16- or 32-bit words. Recursion turned out to be the most efficient and fastest method to determine which functions had returned certain error conditions indicating incomplete intermediate computations, without causing frame delays for other applications.

I wish I had code samples as it may have been machine specific but it worked well enough that NASA certified that flight simulator for full-motion hydraulics, a major milestone. The original systems code would freeze or otherwise invalidate the entire frame instead of marking an error condition to be corrected during later frames. IIRC* the 32-bit words were sectioned into 8-bit conditional variables that operated across multiple threads to ensure that downstream apps always used reliable data.

Before programming, I had studied recursion in relation to self-referential functions in linguistics and set theory. Offered without proof, self reference is a requirement for a recursive function but not all self-referential functions can be written recursively. These ideas were useful implementing reflective memory and also for self-adaptive run-time code such as the flags mentioned above.

*"If I remember correctly". I may lack the brain power now to remember the code I designed. These Insight articles are an excellent mental stimulus. Thanks.
 
  • Like
Likes anorlunda
  • #29
One can easily experiment with recursive routines in Excel.

Example 1: the Fibonacci sequence:

Cell A1 = 1
Cell A2 = 1
Cell A3 = A1 + A2

Then copy/paste the formula in cell A3 into the (n-3) cells below to get n terms of the Fibonacci sequence.

Example 2: generate sine and cosine (actually we'll use versin(x) which is 1 - cos(x)). Let x = pi/6

Cell A1 = pi()/6*2^(-10), Cell B1 = (pi()/6*2^(-10))^2/2
- these are small-angle approximations for sine and versin; the smaller the angle, the better the approximation.
Cell A2 = 2*A1*(1-B1), Cell B2 = 2*(A1)^2,
- doubling algorithm.

Copy/paste the doubling formulae in cells A2 and B2 to the next 9 rows to determine sin(pi/6) and versin(pi/6). Obviously cos(pi/6) is easily obtained from 1 - versin(pi/6).
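To check the scheme outside Excel, here's the same doubling in Python (my own sketch):

Python:
import math

# Small-angle seeds sin(x) ~ x and versin(x) ~ x^2/2 at x = (pi/6)/2^10,
# then double the angle ten times with sin(2x) = 2*sin(x)*(1 - versin(x))
# and versin(2x) = 2*sin(x)^2.
x = math.pi / 6 * 2**-10
s, v = x, x**2 / 2
for _ in range(10):
    s, v = 2 * s * (1 - v), 2 * s**2  # tuple assignment uses the old s and v

print(s, math.sin(math.pi / 6))      # both ~0.5
print(1 - v, math.cos(math.pi / 6))  # both ~0.8660254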

Example 3. Home loan balance given fixed interest rate and monthly payment.

Cell A1 = 1e6
Cell A2 = A1*(1 + 0.01) - 11010.86

Copy/paste the formula in A2 into the next 239 rows, where n = 240 is the total number of months (the period of the loan). Graph your output to see the characteristic curve for the loan balance.
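The same recurrence as a Python sketch, using the post's numbers (1% monthly interest; the 11010.86 payment amortizes the 1e6 loan over 240 months):

Python:
def loan_balances(principal, monthly_rate, payment, months):
    # Balance recurrence from Example 3: B(n) = B(n-1)*(1 + r) - payment
    balances = [principal]
    for _ in range(months):
        balances.append(balances[-1] * (1 + monthly_rate) - payment)
    return balances

b = loan_balances(1e6, 0.01, 11010.86, 240)
print(round(b[-1], 2))  # ~0: the loan is paid off at month 240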
 
  • #30
Historical trivia - Tony Hoare's original implementation of quicksort was done on a machine without a stack:

https://en.wikipedia.org/wiki/Elliott_803

The program was iterative, using a program-created stack that he called a "nest". Although he implemented actual code on that machine, it seems the code was not archived; instead there's a paper describing the general concept and the "nest" (which was later named the "stack").

This "nest" aka "stack" was used to allow recursion on the 803, and used with a version of Algol that pre-dated Algol-60.
 
  • Like
Likes QuantumQuest, bhobba and Klystron
  • #31
I'm sure programmers before him used variations of the stack. He was likely the first to describe it clearly in the literature though.

I remember implementing a data stack in programs I wrote in BASIC for a CS class on data structures. My computer could handle recursive calls in BASIC, but not recursive data, so I implemented an index that I incremented and decremented as I traversed the recursion; my data values were stored in arrays referenced by the index.

FORTRAN on mainframes in the 1970s didn't have recursive capabilities; if you tried it, you'd get caught in a loop.
The reason was that the return address was stored in a single location, so sub A could call sub B, and sub B could call sub C, but if sub B or C called A, then A's return address would point back to them and not to the original caller.
 
  • Like
  • Informative
Likes DrClaude, QuantumQuest, bhobba and 1 other person
  • #32
rcgldr said:
[snip...]
"nest" (which later was named "stack").

This "nest" aka "stack" was used to allow recursion [...]
Apropos the terminology, one of my old data-structure textbooks used both terms but concluded that 'stack' was less ambiguous than 'nest', retaining the latter to describe 'nested FOR loops', 'nested functions', etc. Also, 'stack' was a popular term in electronics at that time. Interesting post.
 
  • Like
  • Informative
Likes DrClaude, QuantumQuest and jedishrfu
  • #33
The programming language I mostly used, called Natural, allowed recursion, but few used it, and those called upon to maintain the few programs that did tried to avoid it like the plague, passing the buck whenever possible. Interestingly, many programs that were put in the too-hard basket were solved using an array as a stack.

One that did the rounds I originally wrote, but it ran too long. What it did was a heap of database searches to update things. It was possible to store a direct pointer to a database record rather than search for it, and if you did that it retrieved records much faster. So I figured maybe it was retrieving many of the same records a lot, and on a punt I kept a copy of each record's location in a large array I used as a stack, searching that first before searching the database. Amazingly, it cut the run time from all night to about an hour.

Thanks
Bill
 
  • Like
Likes Klystron and QuantumQuest
  • #34
jedishrfu said:
FORTRAN on mainframes in the 1970s didn't have recursive capabilities.
The IBM 360 family to this day doesn't have a native stack. Instead there is a caller-allocated save area, similar to the x64 calling convention, except that the IBM save area is at least large enough to save all registers. At callee entry, R13 points to the save area, R14 is the return address, and R15 is the base, i.e., the entry address of the function being called.

A linkage stack of save areas can be implemented by allocating save areas from the heap, then using those allocated save areas to allow recursion. Assuming the programming convention saves all registers, including R13 and R14, into the caller-provided save area, then the current R13 points to the caller's save area, and that save area's stored copy of R13 points to the prior save area, providing a link chain (linkage stack) of save areas. When control is returned to the caller, the caller frees the allocated save area if it is not making another call. If a function does not make any calls, there's no need for it to allocate a new save area; it will just use the save area provided by the caller.
 
  • #35
Yes, they had an area to save the registers prior to the call. Some registers were then loaded with argument pointers, and upon return the registers were reloaded with their old values.
 
