What is Algorithmic Problem Solving?

Algorithmic Problem Solving involves designing algorithms and implementing them in programming languages like Java, as part of a Computer Science course integrated into a math major. The course is expected to cover both theoretical aspects of algorithms and practical programming skills, potentially using various languages. There is a debate about the relevance of programming in a math curriculum, with some arguing that programming skills are essential for modern mathematicians. The discussion also touches on the distinction between computer science and natural sciences, emphasizing that both fields require logical reasoning but serve different purposes. Understanding algorithms is fundamental to effective programming and problem-solving in both disciplines.
  • #31


It is well to point out that "algorithm" is a loaded word. Still, one should be consistent with the definition they give, at least in the context it is given. Ergo my providing examples of simple programs that did not constitute algorithms according to the definition given.

I think that any definition of "algorithm" should be consistent, coherent, and useful. I'm not sure defining algorithms as an arbitrary series of operations is useful... but I can definitely see what you're getting at.
 
  • #32


AUMathTutor said:
Still, I think that basic programming is a tool that should be common to all scientists, engineers, and mathematicians.

I agree with that. Computers are ubiquitous in everything related to science and engineering; you can't do science or engineering research anymore without using one. Even some pure mathematicians at my university use mathematical software to test hypotheses and gain insight into problems. Needless to say, these mathematical software packages require a lot of scripting.

AUMathTutor said:
It is well to point out that "algorithm" is a loaded word. Still, one should be consistent with the definition they give, at least in the context it is given. Ergo my providing examples of simple programs that did not constitute algorithms according to the definition given.

I think that any definition of "algorithm" should be consistent, coherent, and useful. I'm not sure defining algorithms as an arbitrary series of operations is useful... but I can definitely see what you're getting at.

Yes, apparently there is no standard definition of "algorithm"; even on Wikipedia there is no formal one. I've always been taught that an algorithm is a finite sequence of operations which takes an input and produces an output, and that for any input it must produce that output after a finite number of steps. There is no notion of usefulness in that definition. I think it is difficult to include usefulness in the definition because it has a subjective meaning; you can't define usefulness formally.
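For concreteness, a minimal Python example that fits that definition: Euclid's algorithm takes an input, performs a finite sequence of operations, and produces its output after finitely many steps for every input.

```python
def gcd(a, b):
    # A finite sequence of operations: input (a, b), output their gcd.
    # Terminates for every nonnegative input because b strictly
    # decreases toward 0 on each iteration (a % b < b).
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```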
 
  • #33


Well, I think that the definition you provide is useful.

Useful how? It lends itself to proving things about algorithms. There may be no formal definition of usefulness, but it's like porn in that you know it when you see it.

I think any good definition of algorithm should require that the algorithm be as referentially transparent as regular mathematical functions. This excludes certain kinds of things as algorithms, but isn't that the point of definitions?

I am wary of using Wikipedia as a resource in these matters. Their discussion of termination is very misleading, at least in my opinion. They seem to be saying that you cannot know for sure whether a certain algorithm will terminate, which I don't believe has ever been shown. Alan Turing showed there's no algorithm to do it, but that doesn't mean we can't do it.
 
  • #34


AUMathTutor said:
I am wary of using Wikipedia as a resource in these matters. Their discussion of termination is very misleading, at least in my opinion. They seem to be saying that you cannot know for sure whether a certain algorithm will terminate, which I don't believe has ever been shown. Alan Turing showed there's no algorithm to do it, but that doesn't mean we can't do it.
Well, the part on Wikipedia about termination says this:
There is no algorithmic procedure for determining of arbitrary algorithms whether or not they terminate from given starting states.
This is exactly the Halting problem. Of course, as you say, it is possible to look at every algorithm individually (although there are infinitely many different algorithms) and mathematically prove its termination. However, there is no single method (read: algorithm) which works for all of them. I don't think the Wikipedia entry is misleading about that.
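For reference, the diagonal argument behind that undecidability result fits in a few lines of Python; `halts_checker` stands in for any claimed total decider (the assumption the argument refutes), and `always_no` below is just one concrete candidate to exercise the construction:

```python
def diagonalize(halts_checker):
    # Given any claimed total decider halts_checker(f, x) -> bool,
    # build the program that defeats it on the diagonal.
    def g(f):
        if halts_checker(f, f):
            while True:   # loop forever if the checker says "halts"
                pass
        return            # halt immediately if the checker says "loops"
    return g

# Whatever halts_checker answers about g(g), it is wrong by construction:
always_no = lambda f, x: False   # a (bad) candidate decider
g = diagonalize(always_no)
g(g)  # returns immediately, so always_no's claim that g(g) loops is wrong
```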
 
  • #35


It says that, but it says other things which seem to imply that a lack of an algorithmic procedure means we can't do any better than just testing every algorithm. The presentation is sloppy... I would say Wikipedia is useful for a quick and dirty intro to fundamentals, but when it gets down to details, you should look elsewhere.
 
  • #36


It would only be possible to solve the halting problem if one had access to a more powerful computer (in the formal sense) than a Turing machine. No man-made computer is more powerful than a Turing machine. You might argue that the brain is more powerful, though I would not agree.
 
  • #37


You're missing the point. The point is that there can be no correct algorithm for the problem, but the problem can still be solved on a case-by-case basis.
 
  • #38


SOME of the cases of the problem can be solved; many cannot.
 
  • #39


mXSCNT said:
SOME of the cases of the problem can be solved; many cannot.
That's an interesting question. Practically, it is impossible to prove (or disprove) termination for all algorithms because there are infinitely many. However, suppose you start working through them at random: you generate a random number x_1 and analyze the termination of algorithm_{x_1}, then generate the next random number x_2 and analyze algorithm_{x_2}, and so on. You also adapt your method for each specific algorithm, since otherwise you "encounter" the Halting problem.

The question is now: would you eventually encounter an algorithm whose termination you can neither prove nor disprove? Or could you go on this way forever, successfully proving or disproving termination for every algorithm along the way?

I don't know the answer to that question and I would be curious to find out whether someone has an argument in favor of either possibility.

PS: Now I realize that I used the term "algorithm" instead of "Turing machine", which is wrong because I am talking about proving or disproving termination, and by definition an algorithm always terminates. So my terminology is a bit off :-) and you should read "Turing machine" instead of "algorithm".
 
  • #40


I've never seen a proof of that. I would like to point out that the proof of the undecidability of the Halting Problem doesn't show this is true.

I was under the impression that undecidable problems could be solved... you just have to solve them on a case-by-case basis. As in, there's no general-purpose way of solving them.

Could you reference a proof that there exist instances of undecidable problems which literally cannot be decided upon by human beings, in principle?
 
  • #41


Well, I don't know the answer to the question myself and thus would be very interested in a proof (settling the proposition either way). I have a feeling that it is connected to Gödel's incompleteness theorem. If you view the sentence "this algorithm finishes after a finite number of steps" as a theorem to prove, then I suspect there exist algorithms for which this sentence is neither provable nor refutable. So I would say that there exist algorithms whose termination isn't provable, even on a case-by-case basis. However, this is just a thought and I can't prove it formally.
 
  • #42


By "case by case basis" I believe you mean it is possible for a sufficiently smart mathematician to find a proof of whether any given algorithm halts or not. But that is not possible.

Suppose (for contradiction) that your mathematician is working from some finite set of logical axioms, such that algorithm x halts if and only if the fact that x halts can be proved in those logical axioms, and such that algorithm x does not halt if and only if the fact x does not halt can be proved in those logical axioms. If these logical axioms existed, it would be possible to produce an algorithm y that solves the halting problem. y could start enumerating every possible derivation using those logical axioms, starting with proofs that are 1 line long, going on to proofs that are 2 lines long, and so forth. By our assumption, y will eventually reach a proof that x halts or a proof that x does not halt, whereupon it could output that fact. Therefore y is an algorithm that would solve the halting problem. But it is known that the halting problem can't be solved by a Turing machine, so the assumption that such a set of logical axioms existed must be false.
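As a sketch of that argument in Python: `derivations`, `proves_halts`, and `proves_loops` are hypothetical stand-ins for the axiom system's proof machinery, and the mock objects at the end exist only to make the sketch executable.

```python
from itertools import count

def y(x, derivations, proves_halts, proves_loops, max_len=None):
    # Enumerate derivations of length 1, 2, 3, ... until one settles
    # whether x halts. Under the (refuted) assumption that every case
    # is provable, this search would always terminate.
    lengths = count(1) if max_len is None else range(1, max_len + 1)
    for k in lengths:
        for d in derivations(k):
            if proves_halts(d, x):
                return True
            if proves_loops(d, x):
                return False
    return None  # only reachable when a max_len bound is given

# Mock "axiom system", purely illustrative:
derivations = lambda k: [f"halts(p{k})"]
proves_halts = lambda d, x: d == f"halts({x})"
proves_loops = lambda d, x: d == f"loops({x})"
print(y("p3", derivations, proves_halts, proves_loops))  # True
```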
 
  • #43


I'm not sure I like the part in your proof where the "algorithm" recognizes the proof of the termination or the lack thereof. That seems to be just displacing the problem inherent in the halting problem to interpreting proofs; in fact, all you've shown is that interpreting a proof is undecidable if the halting problem is.

How would the algorithm know the proof was a valid proof of termination or lack thereof? I don't see it.
 
  • #44


mXSCNT said:
By "case by case basis" I believe you mean it is possible for a sufficiently smart mathematician to find a proof of whether any given algorithm halts or not. But that is not possible.

Suppose (for contradiction) that your mathematician is working from some finite set of logical axioms, such that algorithm x halts if and only if the fact that x halts can be proved in those logical axioms, and such that algorithm x does not halt if and only if the fact x does not halt can be proved in those logical axioms. If these logical axioms existed, it would be possible to produce an algorithm y that solves the halting problem. y could start enumerating every possible derivation using those logical axioms, starting with proofs that are 1 line long, going on to proofs that are 2 lines long, and so forth. By our assumption, y will eventually reach a proof that x halts or a proof that x does not halt, whereupon it could output that fact. Therefore y is an algorithm that would solve the halting problem. But it is known that the halting problem can't be solved by a Turing machine, so the assumption that such a set of logical axioms existed must be false.

I think this proof is correct. The Turing machine which implements y can check whether a proof is correct in the following way. First, assume you have a formal statement (that is, a logic expression) for "algorithm x always finishes after a finite number of steps"; call that expression P. The machine then starts generating proofs from the axioms. If, at some point, it generates the sentence P, you have a proof of your statement; if it generates not(P), you know the statement is false. If you assume you can always find a proof of the termination (or non-termination) of an algorithm, this process always stops after a finite time. So you have solved the Halting problem.

However, the proof is still vague about certain points which need clarification. For example, how would the Turing machine which implements y build up the "tree" of proofs? And how would you formally state "algorithm x always finishes after a finite number of steps"?
 
  • #45


I still don't like it. I think you can probably construct an equivalency between interpreting / generating / etc. valid proofs and solving the halting problem for a given algorithm on a given input.

I think it's well not to lose sight of the original point. The claim is that there exist algorithms for which there is no way to determine whether or not they terminate (a much stronger condition than undecidability!). I remain unconvinced, even in light of the proposed proof.

I wonder whether there exists a simple example of an algorithm for which there is no way to evaluate termination on some input (the input would also be nice). It's pleasant to talk in terms of contradiction proofs and what not, but seeing is believing.*

* There's something about all this I just don't like.
 
  • #46


Unfortunately it is not possible to supply the example you requested (and prove it is such an example). For every algorithm that terminates, there is a proof that it does so (simply run it until it finishes). So if you had a proof that there is no proof of termination for an algorithm, your proof implies the algorithm does not terminate; your proof would be a proof of non-termination.

Technically, it IS possible to supply an example of an algorithm for which there is no proof of termination or non-termination, but it is impossible to prove that the example satisfies that condition.
 
  • #47


yUNeeC said:
Well, math is fundamental to pretty much everything. Computer science, at least in my opinion, is not.

Most people I work with wouldn't see the point in either, because they just play video games all day...on computers. They have no knowledge of, or interest in, the mechanics of computer hardware or software, and especially not the math behind it all.

Our world is being constantly redefined by the presence and plethora of computers. And while the vast majority can be consumers without any understanding, if you want to actually be a part of the intellectual class or produce any kind of technology, a working knowledge of CS is definitely essential.
 
  • #48


I think that's where I get nervous... when we start talking about truths couched in terms of things that we can't ever know.

It's like looking for the set of keys you can't ever find. Once you find them... they're not what you were looking for anymore.
 
  • #49


Well, whether it makes you nervous or not, the results don't depend on any weird conjectures. They are just simple logic.

One could provide examples of many programs that we don't KNOW terminate--this doesn't preclude a mathematician from at some point proving they do or don't, but they are at least candidates.

Here's one simple example:
Code:
def f(n):
  # Follow the Collatz sequence from n; report whether it
  # returns to n before reaching 1 (i.e. n lies on a nontrivial cycle).
  i = n
  while i > 1:
    if i % 2 == 0:
      i = i // 2  # integer division (avoids floats in Python 3)
    else:
      i = i * 3 + 1
    if i == n:
      return True
  return False

n = 1
while not f(n):
  n = n + 1
Does this terminate? Difficult to say...

You could produce many more examples of algorithms whose termination is unknown, by picking some unproved conjecture C. Then let the algorithm systematically generate every proof that follows from your axioms, and if it proves C it terminates. The algorithm terminates if and only if C is provable--so determining whether it terminates is not easy.
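To make that construction concrete (my own Python sketch, with "search for a proof of C" simplified to "search for a counterexample"): take Goldbach's conjecture as C. In its unbounded form, the loop below halts if and only if the conjecture is false; the bound is only there to keep the demo finite.

```python
def is_prime(m):
    # Trial division; fine for a small demo.
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def goldbach_ok(n):
    # Is the even number n a sum of two primes?
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

def find_counterexample(bound):
    # Unbounded, this search halts iff Goldbach's conjecture is false.
    n = 4
    while n <= bound:
        if not goldbach_ok(n):
            return n  # counterexample found: the search halts
        n += 2
    return None  # no counterexample below the bound

print(find_counterexample(2000))  # None
```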
 
  • #50


yUNeeC said:
Yeah, I'm at ECU. Interesting...I just don't understand why part of the core of the BS degree in math would be programming...but I guess that's just my luck.

Thanks for all of the help guys.

If you're going to do any form of mathematical modelling, simulation, or something related, then you will need to understand some basic programming constructs. Say you work in applied mathematics: you will probably need to code up a simulation. It might be scripted, but programming is programming nonetheless.
 
  • #51


mXSCNT said:
Neither mathematics nor computer science is a natural science, because they do not study the physical world.

While mathematics doesn't study the physical world, mathematics at its core is the generalization of abstractions of the world as we experience it. Nothing comes closer to providing a mechanism that describes our world than mathematics.
 
  • #52


Here's what I'm getting at: is it possible to look at an algorithm, and for some actual input to that algorithm, prove that it is impossible to prove that it terminates or fails to terminate?

In other words, for these (hypothetical) algorithms the termination of which cannot be determined, do there exist valid proofs demonstrating the impossibility of termination proofs?

Does this make any sense? I'd be very wary of accepting anything of this theoretical nature without proof, especially since it goes against intuition. You seem to be making the argument that "who knows?", but that's different in my mind than "you can't know".

The more I think about it, the more I imagine you're right anyway, in a sense, since Gödel's incompleteness theorem probably applies to termination proofs of algorithms just as it does to any other proofs.

My original point: Wikipedia makes it sound like the best we can do is run algorithms and time them, when in reality you usually can tell whether they terminate or not (in fact, if you were to write a deterministic algorithm whose termination was unknown and unknowable, your boss would probably make you write another algorithm...) I maintain that the Wiki page was misleading in this regard.
 
  • #53


AUMathTutor said:
Here's what I'm getting at: is it possible to look at an algorithm, and for some actual input to that algorithm, prove that it is impossible to prove that it terminates or fails to terminate?
No, as I've already mentioned: if you could prove that it is impossible to prove an algorithm terminates, it immediately follows that the algorithm does not terminate. This is because every algorithm that terminates has a proof of termination, namely running the algorithm and observing the termination.
My original point: Wikipedia makes it sound like the best we can do is run algorithms and time them, when in reality you usually can tell whether they terminate or not (in fact, if you were to write a deterministic algorithm whose termination was unknown and unknowable, your boss would probably make you write another algorithm...) I maintain that the Wiki page was misleading in this regard.
The proof of the undecidability of the halting problem doesn't restrict the algorithms that might attempt to solve it. They could do anything, including static analysis and whatever else. It may be intuitive in some cases to think about just running the algorithm and waiting for termination--but it's not the only way to do it.
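That asymmetry is worth spelling out: "run it and observe" is only a semi-decision procedure. A halting run certifies termination, but a long run that hasn't halted yet certifies nothing. A Python sketch, modeling programs as generators that yield once per simulated step (a modeling convenience, nothing more):

```python
def halts_within(program, max_steps):
    # "Run it and observe": if the run finishes, the trace itself is
    # the termination proof. If it doesn't, we learn nothing.
    it = program()
    for _ in range(max_steps):
        try:
            next(it)
        except StopIteration:
            return True   # halted: the observed run certifies termination
    return None           # inconclusive, NOT a proof of non-termination

def countdown():          # halts after 10 steps
    i = 10
    while i > 0:
        i -= 1
        yield

def forever():            # never halts
    while True:
        yield

print(halts_within(countdown, 100))  # True
print(halts_within(forever, 100))    # None
```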
 
