Measuring time complexity

1. Jan 26, 2005

gnome

In Sipser, "Introduction to the Theory of Computation", a proof that "every $t(n)$ time nondeterministic single-tape Turing machine (where $t(n) \geq n$) has an equivalent $2^{O(t(n))}$ time deterministic single-tape Turing machine" shows that the running time of the equivalent deterministic TM is $O(t(n)b^{t(n)})$ and then concludes with the statement:

$$O(t(n)b^{t(n)}) = 2^{O(t(n))}$$

which is not entirely clear to me.

Here $b$ is just a constant representing the maximum branching factor of the NTM.

I guess that $b^{t(n)}$ is $2^{O(t(n))}$, since $b$ is a constant and therefore $\log_2 b$ is a constant, so we get

$$b^{t(n)} = 2^{(\log_2 b)\, t(n)} = 2^{O(t(n))}$$

Correct?
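The identity used above, $b^{t} = 2^{(\log_2 b)\, t}$, can be sanity-checked numerically in Python (the values $b = 3$, $t = 5$ are arbitrary illustrations, not from the thread):

```python
import math

# For any constant b > 0, b**t == 2**(log2(b) * t).
b, t = 3, 5
assert math.isclose(b ** t, 2 ** (math.log2(b) * t))

# Since log2(b) is a constant, the exponent (log2 b) * t(n) is O(t(n)),
# which is exactly the step b^{t(n)} = 2^{O(t(n))}.
print(b ** t)  # 243
```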

But then what happened to the other $O(t(n))$ (the coefficient of the original expression)? That was not a constant term. It might be $n$, or $n \log n$, or $n^2$, or whatever, but certainly not a constant. So why did it just disappear? Is it just that the exponential term so dominates that the $t(n)$ coefficient is dropped as insignificant?

2. Jan 26, 2005

Hurkyl

Staff Emeritus
It didn't just disappear -- you have to break the RHS into two parts, each part asymptotically bigger than one of the factors of the LHS.

3. Jan 27, 2005

gnome

I don't understand your meaning. Where are the two parts in $2^{O(t(n))}$?

Or to put it another way: suppose, for argument's sake, that every branch of the original NTM has length at most $n^2$, so $t(n) = n^2$, and that the maximum branching factor is just $b = 2$. Then the total number of nodes in the computation tree is $O(2^{n^2})$, and the time to travel from the root to a node is $O(n^2)$. Therefore, the running time of the equivalent deterministic machine is $O(n^2 2^{n^2})$. Can this be simplified to just $2^{O(n^2)}$? Why?
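For this concrete case, the polynomial factor is itself dominated by the exponential: $n^2 \le 2^{n^2}$ for every $n \ge 1$, so $n^2 \cdot 2^{n^2} \le 2^{2n^2}$, and $2n^2$ is still $O(n^2)$ in the exponent. A quick exact-integer check in Python (the range of $n$ is an arbitrary illustration):

```python
# Verify n^2 * 2^(n^2) <= 2^(2*n^2) for small n, using exact integers.
# This holds because n^2 <= 2^(n^2) for every n >= 1.
for n in range(1, 12):
    assert n ** 2 <= 2 ** (n ** 2)
    assert n ** 2 * 2 ** (n ** 2) <= 2 ** (2 * n ** 2)
print("n^2 * 2^(n^2) <= 2^(2 n^2) holds for n = 1..11")
```

Since $2n^2 = O(n^2)$, the right-hand side is $2^{O(n^2)}$, which is the simplification being asked about.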

4. Jan 27, 2005

Hurkyl

Staff Emeritus
You can almost always break things into two parts if you're creative! How about writing it as the square of $2^{\frac{1}{2}O(t(n))}$?

5. Jan 27, 2005

gnome

But what is the point of that? (And anyway, $\frac{1}{2}O(t(n))$ is the same as $O(t(n))$, isn't it?)

6. Jan 27, 2005

Hurkyl

Staff Emeritus
Because you can prove that $2^{O(t(n))}$ is asymptotically at least as large as $t(n)$, and that $b^{t(n)} = 2^{O(t(n))}$; therefore $2^{O(t(n))} \cdot 2^{O(t(n))} = 2^{O(t(n))}$ must be asymptotically at least as large as $O(t(n)\, b^{t(n)})$.

> But what is the point of that? (And anyway, $\frac{1}{2}O(t(n))$ is the same as $O(t(n))$, isn't it?)

Yep! But writing it that way makes it clear how I'm breaking it into two parts.
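Written out as a single chain of bounds (using $t(n) \le 2^{t(n)}$, which holds since $t(n) \ge n \ge 1$), the argument in this post is:

$$t(n)\, b^{t(n)} \le 2^{t(n)} \cdot 2^{(\log_2 b)\, t(n)} = 2^{(1 + \log_2 b)\, t(n)} = 2^{O(t(n))}$$

since $1 + \log_2 b$ is a constant, so the exponent is $O(t(n))$.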

7. Jan 27, 2005