More space complexity: Savitch's Theorem

In summary, the 2^{O(f(n))} bound in Savitch's theorem comes from counting configurations: a computation of an f(n) space-bounded Turing machine can be represented as a directed graph with at most 2^{O(f(n))} configurations, and a halting branch never repeats a configuration, so no branch can run for more than 2^{O(f(n))} steps. Combining this bound with the recursive reachability procedure in the proof leads to the conclusion that NSPACE(f(n)) is a subset of SPACE(f^2(n)).
  • #1
gnome
On Sipser page 279:
Savitch's theorem
For any function [tex]f: N \longrightarrow N, \;\text{where}\; f(n) \geq n,\quad \text{NSPACE}(f(n)) \subseteq \text{SPACE}(f^2(n)).[/tex]
Proof idea: We need to simulate an f(n) space NTM deterministically. A naive approach is to proceed by trying all the branches of the NTM's computation, one by one. The simulation needs to keep track of which branch it is currently trying so that it is able to go on to the next one. But a branch that uses f(n) space may run for [itex]2^{O(f(n))}[/itex] steps, and each step may be a nondeterministic choice...

If a branch of an NTM uses f(n) space, we know that f(n) is the maximum number of (input) tape cells that it scans on any branch of its computation for an input of length n. But we don't know how many times it runs back and forth over those cells before the computation branch finishes. So how does he determine that such a branch may run for [itex]2^{O(f(n))}[/itex] steps only? Why not, for example, [itex]2^{O(f(n^2))}[/itex]? Or maybe some much lower limit? I don't know. Any ideas where this [itex]2^{O(f(n))}[/itex] comes from?
 
  • #2


Thank you for bringing up this interesting question about Savitch's theorem. The 2^{O(f(n))} bound comes from counting configurations rather than counting choices. Any computation of a Turing machine can be represented as a directed graph in which each node is a configuration (the machine's state, the head position, and the contents of the tape cells in use) and each edge is a single step of the computation. Since the NTM uses at most f(n) tape cells, there are only 2^{O(f(n))} possible configurations in total.
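To make the count explicit (this accounting is standard, though it is not spelled out in the passage quoted above): with a state set Q and tape alphabet Γ, both of constant size, a configuration is determined by the state, the head position, and the contents of the f(n) cells in use, so

[tex]\#\text{configurations} \;\leq\; |Q| \cdot f(n) \cdot |\Gamma|^{f(n)} \;=\; 2^{O(f(n))}.[/tex]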

Now, the key observation is that a halting branch of the computation can never visit the same configuration twice: if it did, it would be stuck in a loop and would never halt. Therefore the length of any halting branch is bounded by the total number of configurations, which is 2^{O(f(n))}. Note that the bound has nothing to do with how many nondeterministic choices are available at each step; that number is a constant fixed by the transition function. This is where the 2^{O(f(n))} bound comes from.

I hope this helps clarify the reasoning behind the bound in Savitch's theorem. It is a clever and elegant proof showing that, as far as space is concerned, deterministic computation is nearly as powerful as nondeterministic computation. Let me know if you have any further questions or thoughts on this topic.
 
  • #3


Savitch's theorem is a powerful result relating the space complexity of nondeterministic Turing machines (NTMs) and deterministic Turing machines (DTMs). It states that any problem that can be solved in f(n) space on an NTM can also be solved in f^2(n) space on a DTM. In other words, the deterministic space needed for a problem is at most the square of the nondeterministic space needed for it.

The proof is based on simulating the computation of an NTM on a DTM. The naive approach would be to try all branches of the NTM's computation one by one, recording at each step which nondeterministic choice was made so that the next branch can be tried afterwards. Since a branch may run for 2^{O(f(n))} steps, this record could itself take 2^{O(f(n))} space, far more than the f^2(n) bound we are aiming for.

To overcome this, Savitch's proof uses a clever recursive technique. Instead of walking along branches, it asks a reachability question: can configuration c1 lead to configuration c2 within t steps? This is answered by guessing a midpoint configuration cm and recursively checking whether c1 can reach cm within t/2 steps and cm can reach c2 within t/2 steps. Each level of the recursion only needs to store a constant number of configurations, each of size O(f(n)), and the recursion is started with t = 2^{O(f(n))}, the maximum possible length of a branch (see the sketch below).
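Here is a minimal sketch of that recursion in Python, run on a toy configuration graph; the names (can_yield, succ, the toy configurations) are illustrative stand-ins for the machinery an actual simulation would build, not something from Sipser's text.

[code]
from math import ceil

def can_yield(c1, c2, t, succ, all_configs):
    """True if configuration c2 is reachable from c1 in at most t steps."""
    if t <= 1:
        # Base case: same configuration, or reachable in a single move.
        return c1 == c2 or c2 in succ.get(c1, ())
    # Guess a midpoint configuration cm and recurse on both halves.
    # Conceptually each level stores only one (c1, c2, t) triple of size
    # O(f(n)); the recursion depth is O(log t) = O(f(n)) levels.
    for cm in all_configs:
        if can_yield(c1, cm, t // 2, succ, all_configs) and \
           can_yield(cm, c2, ceil(t / 2), succ, all_configs):
            return True
    return False

# Toy configuration graph standing in for an NTM's computation graph.
configs = ["start", "a", "b", "accept"]
succ = {"start": {"a", "b"}, "b": {"a"}, "a": {"accept"}}

# The step budget is the number of configurations: a halting branch
# never repeats a configuration, so it cannot run longer than this.
print(can_yield("start", "accept", len(configs), succ, configs))  # True
[/code]

The simulation is slow, since it re-explores the graph at every level, but that is the point: Savitch's construction trades time for space.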

The 2^{O(f(n))} limit itself comes from counting configurations, not from counting nondeterministic choices. An NTM that uses f(n) tape cells over a fixed alphabet, with a fixed set of states and at most f(n) head positions, has at most f(n) · |Q| · |Γ|^{f(n)} = 2^{O(f(n))} distinct configurations, and a halting branch never repeats a configuration. Since the recursion halves t at every level, its depth is log_2 2^{O(f(n))} = O(f(n)) levels, each using O(f(n)) space.
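Putting the two ingredients together gives the space bound claimed by the theorem:

[tex]\underbrace{O(f(n))}_{\text{recursion depth}} \;\times\; \underbrace{O(f(n))}_{\text{space per level}} \;=\; O(f^2(n)).[/tex]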

In conclusion, Savitch's theorem provides a powerful insight into the relationship between nondeterministic and deterministic space. It shows that nondeterminism buys at most a quadratic saving in space: any space-bounded NTM can be simulated by a DTM with only a squaring of the space bound.
 

1. What is Savitch's Theorem?

Savitch's Theorem is a fundamental result in computational complexity relating nondeterministic and deterministic space. It states that for any function f(n) ≥ n, NSPACE(f(n)) ⊆ SPACE(f^2(n)): any problem solvable by a nondeterministic Turing machine in f(n) space can also be solved by a deterministic Turing machine in f(n)^2 space. (In its more general form the theorem holds for any f(n) ≥ log n.)

2. How does Savitch's Theorem contribute to the understanding of space complexity?

Savitch's Theorem shows that nondeterminism helps only a limited amount when space is the resource: a nondeterministic space bound can always be converted into a deterministic one at the cost of squaring it. One important consequence is that PSPACE = NPSPACE, since squaring a polynomial yields another polynomial.
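Spelled out (this corollary is standard, not something stated in the posts above):

[tex]\text{NPSPACE} \;=\; \bigcup_{k} \text{NSPACE}(n^k) \;\subseteq\; \bigcup_{k} \text{SPACE}(n^{2k}) \;=\; \text{PSPACE}.[/tex]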

3. Is Savitch's Theorem applicable to all problems?

Savitch's Theorem applies to any problem solvable by a space-bounded nondeterministic Turing machine, subject to the condition on f(n) in the statement (f(n) ≥ n in Sipser's version, f(n) ≥ log n in the general version). It says nothing directly about time-bounded classes, so, for example, it does not resolve questions such as P versus NP.

4. What are some applications of Savitch's Theorem?

Savitch's Theorem is a basic tool in complexity theory and algorithm design. It is used to relate nondeterministic and deterministic space classes (for example, PSPACE = NPSPACE and NL ⊆ SPACE(log^2 n)), and its recursive midpoint technique reappears in divide-and-conquer approaches to graph reachability with limited memory.

5. Are there any limitations to Savitch's Theorem?

While Savitch's Theorem is a powerful tool for understanding space complexity, it does have limitations. The quadratic blow-up in space can matter in practice, and the simulation it gives runs in 2^{O(f(n))} time, so it trades time for space. The theorem also requires the space bound to satisfy the condition above (f(n) ≥ n in Sipser's statement), and it does not tell us whether the squaring is necessary; whether, for example, L = NL remains an open question.
