Can a PTM simulate a one-tape Turing machine in O(T^2) time?

In summary, the thread discusses the concept of a paper tape Turing machine (PTM) and how it can simulate T steps of a one-tape Turing machine in O(T^2) steps. The posters explore various strategies and considerations for constructing a PTM that can accurately simulate a given TM, including the use of different alphabets and potential optimizations based on the value of T. They also touch on the idea of extending the tape alphabet to hold ordered n-tuples and using specific symbols to denote the virtual tape head location.
  • #1
0rthodontist
This was a bonus problem that I missed on the last homework.

A paper tape Turing machine (PTM) is a Turing machine whose tape alphabet is partially ordered, and if a is a symbol on a square of the tape, then b can only be written on that square if b is greater than a in the partial order. Show that a PTM can simulate T steps (tape head movements/writes) of a one-tape Turing machine in O(T^2) steps.

O(T^3) is easy: you can just copy the whole tape over to a fresh region at every step, which at step T of the TM takes O(T^2) PTM steps.
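To make the arithmetic explicit (the way I'm counting it): at step [itex]t[/itex] the used portion of the tape has length [itex]O(t)[/itex], and each symbol has to be carried [itex]O(t)[/itex] squares over to the fresh region, so one copy costs [itex]O(t^2)[/itex] PTM steps and [itex]\sum_{t=1}^{T} c\,t^2 = O(T^3)[/itex] overall.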

To get O(T^2) the PTM head can't move more than O(T) steps at a given step T of the TM, which means it can move back and forth over the PTM tape at most a constant number of times on average per step of the TM. When you need to change a symbol on a tape square to one that isn't above it in the partial order, the natural thing to do is go to the right or to the left until you reach an empty square, and write the new symbol there. If this could be made to work, it would be O(T^2) time. But I don't see how to maintain the association between the new symbol and the old square.
 
  • #2
Bleh, too many little details.

Are you trying to come up with a PTM that can simulate a specific TM, or are you looking for one that can simulate any TM that's given to it as input?

Is the number of steps to simulate a parameter? Or is it just supposed to simulate indefinitely, but with the guarantee that whenever it has simulated T steps, it has taken fewer than cT² of its own steps for some constant c? Or is T supposed to be the number of steps before the simulated TM halts?

Are you allowed to choose the alphabet for your PTM? I suppose WLOG you can assume the TM alphabet is {0, 1}, and I suspect that, for the same reason, you can choose the PTM's alphabet however you like.


0rthodontist said:
To get O(T^2) the PTM head can't move more than O(T) steps at a given step T of the TM
You're overworking your dummy variable, I suspect. :tongue: But in any case, you're forgetting about amortized analysis -- there is nothing stopping your PTM from using [itex]\Theta(T^2)[/itex] steps to simulate one particular step, as long as the overall time is still O(T²).
 
  • #3
What the PTM must do is simulate a given TM. That is, given the Turing machine M, supply the PTM M' that can simulate the action of M on any input. You can pick your alphabets (so long as they are finite of course).

The problem didn't specifically say whether the TM halts at T steps and the PTM must take no more than cT^2 steps to output the same thing, or whether there must be some interpretation of the PTM's state after cT^2 steps that corresponds to T steps of the TM. I don't see a practical difference in terms of which interpretation is easier to solve since I don't see any way to mimic the TM's output behavior without mimicking the TM's tape step by step.

Hurkyl said:
You're overworking your dummy variable, I suspect. :tongue: But in any case, you're forgetting about amortized analysis -- there is nothing stopping your PTM from using [itex]\Theta(T^2)[/itex] steps to simulate one particular step, as long as the overall time is still O(T²).

I thought about amortized analysis a little in regards to copying the whole tape periodically to smooth things out, but you'd have to do it less and less frequently as time went on. And I can't think of a way to handle the in-between steps in O(T) time each over any more-than-constant stretch of steps anyway.
 
  • #4
0rthodontist said:
The problem didn't specifically say whether the TM halts at T steps and the PTM must take no more than cT^2 steps to output the same thing, or whether there must be some interpretation of the PTM's state after cT^2 steps that corresponds to T steps of the TM. I don't see a practical difference in terms of which interpretation is easier to solve since I don't see any way to mimic the TM's output behavior without mimicking the TM's tape step by step.

The thing is that if you know T ahead of time, you might be able to make optimizations that utilize that knowledge. (e.g. put each copy exactly T locations later)

Or maybe you don't need to know it ahead of time; maybe you can simulate one step, then restart and simulate two steps, ... then restart and simulate 2^n steps, then ...
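A quick sanity check on that restart idea: assuming a run that simulates [itex]2^k[/itex] steps can be done in at most [itex]c \cdot 4^k[/itex] PTM steps (the quadratic bound we'd want for a known step count), the restarts only cost a constant factor overall, since [itex]\sum_{k=0}^{\lceil \log_2 T \rceil} c \, 4^k \le \tfrac{4c}{3} \, 4^{\lceil \log_2 T \rceil} \le \tfrac{16c}{3} \, T^2 = O(T^2)[/itex].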


0rthodontist said:
You can pick your alphabets (so long as they are finite of course).
Have you thought much about what interesting things you can do with your alphabet, then?
 
  • #5
I'm pretty sure you can't know T ahead of time, since if you knew T and you knew the machine, then your algorithm for constructing the PTM could just be "run the real TM for T steps and generate a PTM that writes exactly the TM's information on the tape" giving O(T) simulation time or even O(1) if you only care about output.

I don't think there's a lot you can do with the alphabet. The TM's alphabet has to have more than one symbol, and since the PTM has to simulate the TM, the input alphabet has to be the same. As you said, any alphabet can be simulated by the alphabet {0, 1}. You can add delimiters and blacked-out symbols as you need to. A trick I've seen in class is extending the tape alphabet from gamma to (gamma x gamma) or (gamma x gamma x gamma) etc. so that each tape square can hold an ordered n-tuple. You can denote the virtual tape head location with an h symbol. Another way to denote the virtual tape head location is to have each tape square hold an extra piece of data in {0, 1} that says whether the simulated tape head is on that square.
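Just to pin down that gamma x {0, 1} trick with a toy example -- this is only my own Python notation for the idea, not part of any actual construction:

Code:
from typing import List, Tuple

# Each square holds a pair (symbol, head_here), so the virtual head position
# lives on the tape itself instead of only in the simulator's finite control.
Cell = Tuple[str, int]

def mark_head(tape: List[str], head: int) -> List[Cell]:
    """Lift a plain tape over gamma to a tape over gamma x {0, 1}."""
    return [(sym, 1 if i == head else 0) for i, sym in enumerate(tape)]

print(mark_head(list("0110"), head=2))  # [('0', 0), ('1', 0), ('1', 1), ('0', 0)]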

One thing I was thinking about a little was that the squares could hold directional information, allowing you to maintain the necessary associations between a square and its rewrite history. The squares could indicate something like, "go left until you find a square with a green mark." Then at the square with the green mark, you get another symbol telling you to "go right until you find the third square with a red mark." Then at the square with the red mark, you get another symbol telling you to "go left five squares," and five squares left of that you find your data. I haven't figured out just how that might work so that each square still only needs to store a finite amount of directional information while maintaining associations between an unbounded number of squares.
 
  • #6
The professor talked about the problem in class today. It turns out there was a mistake on the assignment and he really only meant the implementation had to be O(T^3).
 
  • #7
That doesn't mean it isn't still fun to work on. :smile: If we can't prove it's O(T²), then we should try to prove it's [itex]\Omega(T^3)[/itex] or somesuch. :smile:
 
  • #8
It can be done in O(T^2)! I just figured it out.

Use two back-to-back stacks to represent the TM tape, like
abc|zyx
where | divides the two stacks, the left stack has a as its top element, and the right stack has x as its top element.
Pushing onto a stack is easy: just write the new symbol on the first empty square beyond the squares the stack has already used. Removing an element from a stack is equally easy: just black it out. So the right stack might contain zy9x99w, where the 9's are blacked out, and what that means is that it contains wxyz with w on top, ignoring the 9's. The location of the virtual tape head is represented by the space between the two stacks (as if you flipped the stacks around so they were facing one another). For example, to move the virtual tape head left, pop from the left stack and push onto the right stack.
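Here's a small cost-model sketch of that in Python -- just my own illustration of the idea, not the actual PTM transition table; the names (WriteOnceStack, TwoStackTape) and the cost accounting are made up for the example. A pop blacks out the top, a push writes one square past the last used square, and each simulated step is charged one sweep over the occupied squares, which is enough to see the quadratic growth:

Code:
BLACKED = "#"   # top symbol of the partial order: any symbol may be rewritten to it

class WriteOnceStack:
    """A stack on write-once squares: push = write a fresh square, pop = black out the top."""
    def __init__(self):
        self.squares = []              # physical squares, never erased

    def push(self, sym):
        self.squares.append(sym)       # write on the next fresh square

    def pop(self):
        # walk inward past blacked-out squares to find the current top
        for i in range(len(self.squares) - 1, -1, -1):
            if self.squares[i] != BLACKED:
                sym = self.squares[i]
                self.squares[i] = BLACKED   # "erase" by writing the larger symbol
                return sym
        return "_"                     # popping an empty stack yields a blank

    def footprint(self):
        return len(self.squares)       # squares a sweep may have to cross


class TwoStackTape:
    """Simulated TM tape: left/right halves as write-once stacks, head between them."""
    def __init__(self):
        self.left, self.right = WriteOnceStack(), WriteOnceStack()
        self.head = "_"                # symbol currently under the virtual head
        self.ptm_moves = 0             # crude cost model: one sweep per operation

    def _charge(self):
        self.ptm_moves += self.left.footprint() + self.right.footprint()

    def write(self, sym):
        self.head = sym
        self._charge()

    def move_left(self):
        self.right.push(self.head)     # old head symbol goes onto the right stack
        self.head = self.left.pop()
        self._charge()

    def move_right(self):
        self.left.push(self.head)
        self.head = self.right.pop()
        self._charge()


if __name__ == "__main__":
    tape = TwoStackTape()
    T = 1000
    for t in range(T):                 # an arbitrary walk just to exercise the tape
        tape.write("1")
        if t % 3:
            tape.move_right()
        else:
            tape.move_left()
    # The footprint grows by at most one square per stack per step, so each
    # sweep is O(T) and the total is O(T^2).
    print(tape.ptm_moves, "charged PTM moves for", T, "simulated steps")

The only thing the partial order has to provide is that blanks sit at the bottom and the blacked-out symbol at the top, which is exactly how the stacks above use it.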
 
  • #9
That's neat!

But as stated it requires a doubly-infinite tape. :frown:


Well, maybe not. If you know how many steps you want to simulate, you can start sufficiently far out on your singly-infinite tape so that you won't run off the left edge.


And then to simulate an arbitrary number of steps, maybe you can use my trick! You could start X places out, and then run the simulation until you hit the left edge of your tape, and then copy the tape so that the center is at the X²-nd cell and run until you hit the old stack, then copy so that the center is at the X^4 cell, and run...

At least I hope you can pull that off.
 
  • #10
To simulate it with a 1-sided tape you can just interleave the stacks, so every other element starting from position 0 belongs to the left stack, and every other element starting from position 1 belongs to the right stack. So the twin stacks a99b9c|zy99xwvu would be represented as
Code:
c 9 b 9 9 a
+
 z y 9 9 x w v u
----------------
cz9yb9999xaw v u
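A tiny Python sketch of that interleaving (my own illustration: left-stack squares at even positions, right-stack squares at odd ones), which reproduces the layout above:

Code:
def interleave(left, right, blank=" "):
    """Merge two stacks onto one one-sided tape: left[i] -> cell 2i, right[j] -> cell 2j+1."""
    tape = [blank] * (2 * max(len(left), len(right)))
    for i, sym in enumerate(left):
        tape[2 * i] = sym
    for j, sym in enumerate(right):
        tape[2 * j + 1] = sym
    return "".join(tape)

# The twin stacks a99b9c|zy99xwvu, with the left stack written outward from
# cell 0 (so its top 'a' lands at the highest even position used):
print(interleave("a99b9c"[::-1], "zy99xwvu"))   # -> cz9yb9999xaw v u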
 
