Insights: Recursion in Programming and When to Use or Not to Use It

AI Thread Summary
Recursion is a programming technique where a function calls itself to solve problems, often simplifying complex tasks. However, it requires careful handling of base cases to avoid problems like stack overflow errors, especially in team environments. While older languages such as FORTRAN traditionally struggled with recursion, others like Pascal and modern functional languages handle it well. Tail call optimization in functional languages can mitigate stack overflow risks, but not every recursive function is written in tail form; the naive factorial, for example, still has a multiplication to perform after the recursive call returns. Understanding recursion can be deepened by studying languages like LISP, which emphasize recursive principles.
  • #52
Nice to see my little article on recursion, with a bit of humour, has created some interesting discussion.

Previously I suggested programming in Python and using LuaJIT with some simple C glue if some parts needed speeding up.



But things move on, and the one I now find interesting is TPython++



The latest version is even smarter. Instead of you making minor changes to the Python code so that those bits are compiled as C++, it goes through your Python code itself and automatically compiles the bits that can easily be converted. Of course, you can still manually convert the code you want as C++ to speed things up further. Actually, it is not C++ but a Python-like dialect of C++ called Pythonic++.



Thanks
Bill
 
  • #53
jedishrfu said:
Here's the rosettacode.org Erlang version: https://rosettacode.org/wiki/Pascal%27s_triangle#Erlang

[CODE lang="erlang" title="pascal triangle in erlang"]-module(pascal).
-export([pascal/1]).

pascal(1) -> [[1]];
pascal(N) ->
    L = pascal(N-1),
    [H|_] = L,
    [lists:zipwith(fun(X, Y) -> X + Y end, [0] ++ H, H ++ [0]) | L].[/CODE]

It looks like it has a tail recursion design.
I don't think so. After the recursive call to pascal(), the code still needs to do a couple of operations.
 
  • #54
@bhobba The Lua reference is pretty interesting in that it beat NumPy for both tasks. I've played with Lua in the context of writing a 360-degree rosette graphing app in Codea on iOS. Lua seems like a nicer version of Python, although I've only written one app so far.

The Lua article makes me want to play with it again and maybe even get it working on my desktop Linux box.

Thanks.
 
  • #55
You're right, for tail recursion to work the recursive call needs to be last. I forgot that part.

https://www.baeldung.com/cs/tail-vs-non-tail-recursion
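To make the distinction concrete, here is a small sketch in Python contrasting the two shapes the article describes. Note that Python itself does not perform tail call optimization, so this only illustrates the structure a TCO-capable compiler would exploit:

```python
# Non-tail-recursive: after the recursive call returns, there is
# still a multiplication to do, so each frame must stay on the stack.
def fact(n):
    if n <= 1:
        return 1
    return n * fact(n - 1)

# Tail-recursive: the recursive call is the very last thing the
# function does; the running result travels in the accumulator.
def fact_tail(n, acc=1):
    if n <= 1:
        return acc
    return fact_tail(n - 1, acc * n)

print(fact(10), fact_tail(10))  # both print 3628800
```

A compiler that optimizes tail calls can turn `fact_tail` into a loop that reuses one stack frame; `fact` cannot be transformed that way without first rewriting it into the accumulator form.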
 
  • #56
jedishrfu said:
The Lua reference is pretty interesting in that it beat Numpy for both tasks.

Yes, LuaJIT by itself is very nice.

There is an even higher-level language that compiles to Lua:
https://moonscript.org/

For simpler programs I just write in MoonScript, knowing that LuaJIT will be nearly as fast as C++.

Thanks
Bill
 
  • #57
Interesting to hear about Lua and Pythonic++. When I get tired of playing with Erlang, I think I'll look into these languages. I'm always interested in speeding things up.
 
  • #58
Julia too is getting a lot of press. It's functional, not OO, but with function signature overloading (multiple dispatch). It's fast, but there's a precompile cost that's frustrating. It's similar to MATLAB but faster for numerical work, and it can act as glue to integrate Python, Fortran, and R into a collective application.
 
  • #59
vinaykhatri said:
Although recursion can act like a loop, it can not be used as an alternative for a loop, the main reason is recursion can make a simple loop program complex and increase time and space complexity.
I realize that I am changing the context of that statement, but that context is easy to mistake.
So I am not quite willing to let that statement stand without comment.

If your build environment or coding rules prohibit recursion, there will always be an alternative to recursion that does not add artificial complexity.

When recursion is used, you are, in effect, using the stack as an array of structures with the array size not specified (or known) until runtime. In situations where recursion is prohibited by rules or build methods, the central issue is that the demand for stack space is not explicitly limited.

The alternative is to define a structure which has all the data elements needed to describe the state at each recursion reentry - the equivalent of all those local variables that are residing on the stack. Then pick a maximum recursion depth and declare an array of those structures of that size.

Then, when it's time to make that recursive call, increment your pointer into that array, check for array overrun, move the data that had appeared in the recursive call arguments into that new array structure, and continue on. On return, decrement the array index.

Sure, it's not generally that simple. But with a bit of thought and code shuffling it can become that simple.
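A minimal sketch of that scheme in Python. The "structure" here is just the node being visited, the pre-declared array plays the role of the stack, and `MAX_FRAMES` is an assumed limit chosen by the programmer:

```python
MAX_FRAMES = 64  # the chosen maximum, as .Scott describes

def tree_sum(tree):
    """Sum all the leaves of a nested list without recursion.

    Each slot in 'frames' holds the state a recursive call would
    have kept on the machine stack: the node still to be visited.
    """
    frames = [None] * MAX_FRAMES  # pre-declared array of "frames"
    top = 0                       # index into that array
    frames[top] = tree
    total = 0
    while top >= 0:
        node = frames[top]        # the "return": pop the frame
        top -= 1
        if isinstance(node, list):
            for child in node:
                top += 1          # the "recursive call": push a frame
                if top >= MAX_FRAMES:
                    raise OverflowError("frame limit exceeded")
                frames[top] = child
        else:
            total += node
    return total

print(tree_sum([1, [2, [3, 4]], 5]))  # prints 15
```

The key property is exactly the one described above: the demand for stack space is now explicitly limited, and exceeding the limit is detected and handled rather than crashing the process.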

Now a few words on "complexity". "Code complexity" can refer either to how psychologically tractable a software developer finds the code, or to "cyclomatic complexity", which is a very specific static code analysis measurement.

Tractability can be addressed by describing the basic strategy and tying the code to that strategy: a "common sense" arrangement of the structure, mnemonic names tied to the strategy, and design documentation that is readily accessible to anyone looking at the code.

Cyclomatic complexity generally affects both the tractability of the code and the difficulty of producing unit tests that thoroughly exercise it, but it is tied most closely to the testing issues.
A common method of reducing both types of complexity is to revisit working code that has evolved with an eye to more fully thought out strategies - and then reworking the code to implement those strategies.
A common method of reducing cyclomatic complexity is to break a function up into smaller functions to reduce if/then/switch/loop nesting levels.
 
  • #60
.Scott said:
The alternative is to define a structure which has all the data elements needed to describe the state at each recursion reentry - the equivalent of all those local variables that are residing on the stack. Then pick a maximum recursion depth and declare an array of those structures of that size.
In many cases you don't even need to do that, since all you need is a single data structure, not an array of them, to keep track of the state. This fact is why so many algorithms can be implemented as loops instead of recursively. For example, to compute the nth Fibonacci number, you don't need n data structures; all you need to keep track of is the current loop index and the two most recent values in the sequence.
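That point in code, as a quick Python sketch: the entire state is one pair of numbers, overwritten on every iteration:

```python
def fib(n):
    """nth Fibonacci number with constant-size state: just the two
    most recent values, updated in place each trip around the loop."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b  # discard the oldest value, keep the newest two
    return a

print([fib(i) for i in range(8)])  # prints [0, 1, 1, 2, 3, 5, 8, 13]
```

Contrast this with naive recursive Fibonacci, where every pending call holds its own frame, so the space used grows with n rather than staying constant.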
 
  • #61
PeterDonis said:
so many algorithms can be implemented as loops instead of recursively
TL;DR ANY algorithm can be implemented with either loops or recursive function calls, so (unless and until you identify that the implementation is too expensive in time or memory) use whichever you find easier to code and therefore more reliable and easier to maintain.

Details
  • ANY algorithm can be implemented with loops instead of recursive function calls, however some are easier to implement recursively (e.g. quicksort). Which runs faster and/or with less memory depends on a number of factors.
  • ANY algorithm can be implemented with recursive function calls instead of loops, however some are easier to implement with loops (e.g. Fibonacci numbers). Algorithms that are easier to implement with loops run faster and/or with less memory unless the recursion can be performed with exclusively tail calls and the compiler is designed (and set) to optimize very aggressively.
  • SOME algorithms cannot be implemented with exclusively tail call recursion. These are often easier to implement recursively because iterative implementation requires the programmer to maintain one or more stacks: again quicksort is a good example, and again which (if either) runs faster and/or with less memory depends on a number of factors.
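To illustrate that last bullet, here is a quicksort sketch in Python where the programmer maintains the stack explicitly: a worklist of (lo, hi) index pairs stands in for the call stack of the recursive version. (This uses the simple Lomuto last-element-pivot partition for brevity.)

```python
def quicksort(a):
    """In-place quicksort driven by an explicit stack of pending
    (lo, hi) subranges instead of recursive calls."""
    stack = [(0, len(a) - 1)]
    while stack:
        lo, hi = stack.pop()
        if lo >= hi:
            continue
        pivot = a[hi]                      # last-element pivot
        i = lo
        for j in range(lo, hi):            # Lomuto partition
            if a[j] <= pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]          # pivot into final place
        # Push both sides. In the recursive version at most one of
        # these two calls can sit in tail position, which is why some
        # stack (implicit or explicit) is unavoidable here.
        stack.append((lo, i - 1))
        stack.append((i + 1, hi))
    return a

print(quicksort([5, 2, 9, 1, 5, 6]))  # prints [1, 2, 5, 5, 6, 9]
```

Which version wins on speed and memory depends on the factors mentioned above; the recursive form is usually easier to read, which is often the deciding factor.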
 
  • #62
pbuk said:
ANY algorithm can be implemented with either loops or recursive function calls
Yes, I was being imprecise; by "loop" I meant a loop that does not have to add more information to some data structure on every iteration; it can update information on every iteration, but only if it throws away the old information so the total information kept track of is of (roughly) constant size.

An example in Python would be a for loop using the range built-in iterator; this does not compute all of the numbers to be iterated over in advance, it only computes them one at a time as they are needed, and throws away the old one each time it computes a new one.

You could of course implement something similar using recursion, as long as whatever language you were using could optimize away the parts that would require storing more data on each iteration (for example, tail call optimization to eliminate repeated pushing of parameters on the stack).
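The range behaviour described above is easy to see directly. A quick sketch, with a hand-rolled generator alongside for comparison (`sys.getsizeof` is used only to show that the lazy object stays small):

```python
import sys

# range is lazy: a million-element range is not a million-element list
r = range(1_000_000)
print(sys.getsizeof(r))        # small, roughly constant size
print(sys.getsizeof(list(r)))  # large: materializes every element

# an equivalent hand-rolled generator: one value live at a time
def count_up(n):
    i = 0
    while i < n:
        yield i   # hand back the current value...
        i += 1    # ...then overwrite it; old values are discarded

print(sum(count_up(1_000_000)))  # prints 499999500000
```

Either way, the loop's working set stays constant-sized, which is exactly the property that distinguishes it from a recursion that must keep a frame alive per iteration.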
 
  • #63
pbuk said:
ANY algorithm can be implemented with loops instead of recursive function calls, however some are easier to implement recursively (e.g. quicksort). Which runs faster and/or with less memory depends on a number of factors.

Yes. But it's the whole structured programming thing we had constantly drummed into us during my university days. The choice is made on which will be the most maintainable; deciding that, of course, is both art and science. The last time I posted about it on another forum, frequented by more modern programmers than me, the response I got was: give me that old-time religion :DD

Thanks
Bill
 