Can Randomness Improve Computational Power for Turing Machines?

  • Thread starter: infoman
  • Tags: Computational
AI Thread Summary
The discussion centers on computational challenges for Turing machines, in particular problems with no known efficient solution on a probabilistic Turing machine. The key question is whether randomness adds computational power: is there a problem solvable in polynomial time by a probabilistic Turing machine but not by a deterministic one? The conversation highlights the class NP (non-deterministic polynomial time) and its hardest members, the NP-complete problems, for which no efficient deterministic algorithms are known. Examples include the Boolean satisfiability problem (SAT), the N-puzzle, the knapsack problem, the Hamiltonian cycle problem, and the traveling salesman problem, among others. The thread also suggests that randomly modifying an algorithm may increase computing power, provided another algorithm can evaluate whether each random change is useful.
infoman
I would like to start a discussion that summarizes the main computational challenges regarding Turing machines, so my question is: what are the current computational problems that are widely recognized to have no efficient solution on a probabilistic Turing machine?

Maybe you can answer this just by giving a reference.
 
From Wikipedia:
One of the central questions of complexity theory is whether randomness adds power; that is, is there a problem which can be solved in polynomial time by a probabilistic Turing machine but not a deterministic Turing machine?
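A concrete illustration of randomness at work (not from the thread, just a standard textbook example): Freivalds' algorithm checks whether a matrix product A·B equals a claimed result C in O(n²) time per trial, whereas recomputing the product naively costs O(n³). A minimal Python sketch; the function name and the example matrices are chosen here for illustration:

```python
import random

def freivalds(A, B, C, trials=20):
    """Probabilistically check whether A @ B == C.

    Each trial multiplies by a random 0/1 vector, costing O(n^2)
    instead of the O(n^3) of naive matrix multiplication. If
    A @ B != C, a single trial detects it with probability >= 1/2,
    so the error probability after `trials` rounds is <= 2**-trials.
    """
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compute B r, then A (B r), then C r -- three O(n^2) products.
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # definitely A @ B != C
    return True  # probably A @ B == C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]  # the true product
print(freivalds(A, B, C))  # True
```

Freivalds' test never falsely rejects a correct product; repeating trials drives the false-accept probability down exponentially, which is typical of how randomized algorithms trade a small error probability for speed.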
 
infoman said:
...what are the current computational problems that are widely recognized to have no efficient solution on a probabilistic Turing machine?

Maybe you can answer this just by giving a reference.

Do you mean "deterministic" instead of "probabilistic"?
An "efficient algorithm" is taken to mean an algorithm that scales polynomially, or less, for any given input. There is a class of problems, "NP", which stands for non-deterministic polynomial time, and which contains problems that currently cannot be solved in a deterministic manner and in polynomial time on a turing machine (or equivalent).
In short, NP contains problems which we currently can't solve efficiently on a traditional computer. Examples of these problems are:
* Boolean satisfiability problem (SAT)
* N-puzzle
* Knapsack problem
* Hamiltonian cycle problem
* Traveling salesman problem
* Subgraph isomorphism problem
* Subset sum problem
* Clique problem
* Vertex cover problem
* Independent set problem
* Graph coloring problem
and... Minesweeper, plus a variety of puzzle games. (A small SAT sketch follows below.)
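To make the "easy to verify, hard to search" character of NP concrete, here is a minimal Python sketch; the formula encoding and names are illustrative, not from the thread. Verifying a candidate assignment for SAT takes time linear in the formula size, while the naive deterministic algorithm enumerates all 2^n assignments:

```python
from itertools import product

# A CNF formula as a list of clauses; each literal is a nonzero int,
# where k means variable k and -k means its negation.
# Example: (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
formula = [[1, -2], [2, 3], [-1, -3]]
n_vars = 3

def satisfies(assignment, formula):
    """Polynomial-time verification: check every clause under `assignment`.
    `assignment[k]` is the truth value of variable k+1."""
    return all(
        any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
        for clause in formula
    )

def brute_force_sat(formula, n_vars):
    """Naive deterministic search: try all 2**n_vars assignments."""
    for bits in product([False, True], repeat=n_vars):
        if satisfies(bits, formula):
            return bits
    return None

print(brute_force_sat(formula, n_vars))  # e.g. (False, False, True)
```

The gap between the cheap `satisfies` check and the exponential `brute_force_sat` loop is exactly the P vs. NP gap in miniature: nobody knows a deterministic (or probabilistic) algorithm that avoids the exponential search in the worst case.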
 
If an algorithm is written to solve a problem and the algorithm is changed randomly, then, provided there is another algorithm that can decide whether the change is producing useful results, yes, randomness can lead to an increase in computing power.
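One standard way to realize this idea is randomized local search: random changes are proposed, and a separate scoring function (the "other algorithm") decides whether each change is kept. A hedged Python sketch under that interpretation, reusing the clause encoding from the SAT example above and maximizing the number of satisfied clauses:

```python
import random

def score(assignment, formula):
    """The evaluator: count how many clauses the assignment satisfies."""
    return sum(
        any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
        for clause in formula
    )

def randomized_hill_climb(formula, n_vars, steps=1000, seed=0):
    """Randomly flip one variable at a time; keep the change only when
    the evaluator says it does not make things worse."""
    rng = random.Random(seed)
    assignment = [rng.random() < 0.5 for _ in range(n_vars)]
    best = score(assignment, formula)
    for _ in range(steps):
        i = rng.randrange(n_vars)
        assignment[i] = not assignment[i]      # random change
        new = score(assignment, formula)
        if new >= best:                        # evaluator accepts
            best = new
        else:                                  # evaluator rejects: undo
            assignment[i] = not assignment[i]
    return assignment, best

formula = [[1, -2], [2, 3], [-1, -3]]
print(randomized_hill_climb(formula, 3))
```

Such heuristics often work well in practice, though whether randomness increases power in the complexity-theoretic sense is exactly the open question quoted from Wikipedia above.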
 