Does anyone know a good book that teaches techniques for proving that an algorithm is correct?
What part of correctness are you talking about?
If the algorithm is recursive and you want to prove termination and convergence, that is a different problem from dealing with something non-recursive.
If you want to prove convergence in a recursive or similar setting, then depending on the algorithm you could show that the distance between the expected answer and the current approximation strictly shrinks with every recursive call.
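As a minimal sketch of that shrinking-distance idea (names and the choice of Newton's method for square roots are mine, not from the thread): each call produces a guess strictly closer to the true answer, and we stop once successive guesses are close enough.

```python
def my_sqrt(x, guess=1.0, tol=1e-12):
    """Recursive Newton iteration for sqrt(x), x > 0.

    Convergence argument: near the root each step roughly squares the
    error, so |guess - sqrt(x)| shrinks with every recursive call,
    and the stopping test is eventually satisfied.
    """
    nxt = 0.5 * (guess + x / guess)  # Newton update for f(g) = g*g - x
    if abs(nxt - guess) < tol:       # successive guesses agree: stop
        return nxt
    return my_sqrt(x, nxt, tol)
```

The termination proof here is really a convergence proof: you bound the error after each call and show it goes to zero, which forces the stopping condition to fire after finitely many calls.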
In terms of terminating conditions, this will probably depend on the data structure being used as well, especially if you have a recursive algorithm.
If you are dealing with a statistical algorithm like RNG simulation for custom distributions, then this needs even more analysis.
What kind of algorithm do you have in mind?
I was talking about proving the termination and convergence of a recursive algorithm. Since I'm learning this topic from scratch, any simple recursive problem, like factorial, would be a good one to start with.
Like any mathematics, you usually want to look at a particular class of algorithms rather than 'the general case'.
The thing is that some recursive algorithms will not terminate either for any inputs for the algorithm, or for some subset of inputs.
One thing I think you should first take a look at is the halting problem. It shows that no general procedure can decide whether an arbitrary program terminates, so termination has to be proved case by case. There is also a probabilistic angle: look at the work of Gregory Chaitin on the halting probability Omega (he has done some fascinating work).
The other thing would be to study computability (recursion) theory, computational complexity, theories of computation, and probably automata theory as well. This subject sits at the theoretical heart of computer science.
For the factorial, you could analyze this under the bounded idea mentioned above.
The thing is that you know it terminates when n = 0 (0! = 1) at the top of the call stack. Assuming the input is always an integer >= 0, each recursive call reduces the argument from n to n - 1 (since factorial(n) returns n * factorial(n - 1)), so the argument is a strictly decreasing non-negative integer and the program always terminates.
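That argument written out as code might look like this (a sketch, with the non-negative-integer precondition made explicit as an assertion):

```python
def factorial(n: int) -> int:
    """Termination argument: n is a non-negative integer, and every
    recursive call passes n - 1. A strictly decreasing non-negative
    integer must reach the base case n == 0 in finitely many steps.
    """
    assert isinstance(n, int) and n >= 0, "defined only for integers >= 0"
    if n == 0:
        return 1          # base case: 0! = 1
    return n * factorial(n - 1)
```

The key is the precondition: without "integer >= 0" the proof fails, e.g. factorial(-1) or factorial(0.5) would never reach the base case.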
Now again when you have complicated data structures like say trees or graphs and want to do stuff on these, then it gets a little more complex.
So intuitively, it helps to define a class of functions that you know terminate by enforcing that every recursive call strictly decreases some measure toward the base case, and that every conditional branch either returns directly or recurses with bounded depth. The factorial is an instance of this: computing n! makes n + 1 calls, so the recursion depth is bounded by n + 1.
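That "strictly decreasing measure" is often called a variant (or ranking) function. As an illustration (the decorator and its name are my own construction, not a standard library facility), you can even check it at runtime: wrap a recursive function so that every nested call asserts its measure is strictly smaller than the caller's.

```python
def with_variant(measure):
    """Decorator sketch: assert that `measure(*args)` strictly
    decreases on every nested recursive call, which witnesses
    that the recursion depth is bounded."""
    def wrap(f):
        stack = []  # measures of the calls currently on the stack
        def inner(*args):
            m = measure(*args)
            # compare against the immediately enclosing call, if any
            assert all(m < prev for prev in stack[-1:]), \
                "variant did not decrease -- termination not guaranteed"
            stack.append(m)
            try:
                return f(*args)
            finally:
                stack.pop()
        return inner
    return wrap

@with_variant(lambda n: n)   # the variant for factorial is n itself
def fact(n):
    return 1 if n == 0 else n * fact(n - 1)
```

Proof assistants and verifiers (Coq, Lean, Dafny) make exactly this idea static: you supply the decreasing measure and the tool checks it at every recursive call.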
The general theory also deals with probabilistic arguments: for a randomized recursive algorithm you may only be able to show termination with probability 1, and if instead there is a non-zero chance that it does not terminate, you can then narrow down which inputs cause this. Ideas from mathematical analysis, like convergence and divergence of sequences and series, carry over directly in this context.
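A toy example of probability-1 termination (my own illustration, not from the thread): a recursion with no fixed depth bound, where the variant-function argument above fails, but which still halts almost surely.

```python
import random

def flip_until_heads(depth=0):
    """No bound on recursion depth, yet terminates with probability 1:
    each call halts with chance 1/2, so the probability of surviving
    past depth d is 2**-d, which tends to 0 as d grows."""
    if random.random() < 0.5:   # "heads": stop here
        return depth
    return flip_until_heads(depth + 1)
```

The expected depth is 1 (a geometric distribution), so in practice the recursion is shallow, but no variant function can certify it, which is why probabilistic termination needs its own theory.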
It's a little hard to give specific advice for your query because this question sits at the very heart of what computation is, and what a valid computation is, much as mathematicians had to work out which functions converge. The analogy runs deep: understanding termination is as essential to computation as understanding convergence is to mathematical analysis.
Thank you for your reply, it's very useful.