(This may be a bit outside the normal scope of this particular forum. After writing this I realized I don't really know where to go with questions about computational complexity. Does anyone have recommendations for places where this question might be more appropriate? Anyway:)

So I've got this question. It started out as "is randomness computable?", but I realized I didn't have a very precise idea of what I meant by "randomness" or "computable". So never mind that question. Here, I think, is the question I *really* want to ask.

Say I've got a standard probabilistic Turing machine: a Turing machine with an entropy source it can consult to get random bits. It's running a program I'll call D, where D is an algorithm in BPP. So when I run D on my probabilistic Turing machine, it runs in polynomial time and, for any given input, is wrong at most 1/3 of the time.

Now consider a slightly different situation. I've got two *deterministic* Turing machines, T1 and T2. These are totally normal Turing machines, except that the first one (T1) is rigged up to *think* it's a probabilistic Turing machine: it has an "entropy source" it can consult for bits. But the "entropy source" is actually just T2. T2 is loaded with a program (call it R) that, when run, produces a stream of bits according to some algorithm; R's input tells it how many bits to produce before shutting off. (I'm assuming this detail makes no difference, but let's say R works such that the output of R(2) is a prefix of the output of R(4).) When T1 consults its "entropy source" for the first time, we run R(1) or something on T2 and feed T1 the first bit on T2's output tape. When T1 consults the entropy source a second time, we run R(2) and feed T1 the second bit on T2's output tape... and so on.

So, here's my question, in three parts:

1. Is there *any* algorithm R such that when we run D on T1, D performs the way it did on the probabilistic Turing machine -- i.e., D runs in polynomial time and, for any given input, is wrong at most 1/3 of the time? (Let's say that when we time D, we only count steps taken on T1 -- we don't count any time spent running or rerunning R on T2 to get more entropy. Let's also say, just to make this work, that we measure that "1/3 of the time" by rerunning D on T1 multiple times without "resetting" T2 -- so that if the first run of D consumes k bits of entropy from T2, then the first bit requested by the second run of D will be the (k+1)th bit of R(k+1).)

2. If such an R exists, is there *any* known bound on its runtime complexity with respect to n?

3. Do the answers to any of the above questions change if, rather than being in BPP, D had been in some other probabilistic complexity class such as PP or BPL?
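In case it helps make the setup concrete, here's a small sketch of the protocol I have in mind. Everything here is hypothetical: the class names (`DeterministicEntropySource`, `T1`) are mine, R is stood in for by iterated SHA-256 bits (just *some* deterministic algorithm with the prefix property, not a claim that this R answers question 1), and D is a toy Fermat compositeness test playing the role of a BPP algorithm:

```python
import hashlib

class DeterministicEntropySource:
    """Plays the role of T2 running R: a deterministic bit stream.
    R(n) returns the first n bits; because the stream is regenerated
    from the same seed each time, R(2)'s output is automatically a
    prefix of R(4)'s, matching the prefix property in the question.
    (Iterated SHA-256 is just a stand-in for 'some algorithm'.)"""
    def __init__(self, seed=b"fixed"):
        self.seed = seed

    def R(self, n):
        bits, state = [], self.seed
        while len(bits) < n:
            state = hashlib.sha256(state).digest()
            for byte in state:
                for i in range(8):
                    bits.append((byte >> i) & 1)
        return bits[:n]

class T1:
    """A deterministic machine that *thinks* it's probabilistic: the
    k-th request for a 'random' bit reruns R(k) on T2 and takes the
    k-th bit. The cursor is never reset between runs of D, as the
    question stipulates."""
    def __init__(self, source):
        self.source = source
        self.k = 0  # bits of 'entropy' consumed so far

    def next_bit(self):
        self.k += 1
        return self.source.R(self.k)[self.k - 1]

def D(n, machine, trials=20):
    """A toy stand-in for a BPP algorithm: a Fermat compositeness
    test whose only entropy source is machine.next_bit()."""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        # assemble a 'random' base a in [2, n-2] from the bit stream
        a = 0
        for _ in range(n.bit_length()):
            a = (a << 1) | machine.next_bit()
        a = 2 + a % (n - 3)
        if pow(a, n - 1, n) != 1:
            return False  # witness found: definitely composite
    return True  # no witness found: probably prime
```

The question is then whether some R makes `D` behave, over repeated runs on `T1`, the way it would with genuine coin flips.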