Practicability of TM completeness

AI Thread Summary
The discussion centers on the concept of Turing completeness and its implications for simulating human intelligence and consciousness. While some argue that computers can theoretically simulate any process given infinite resources, others highlight the practical limitations, such as the complexity of human brain functions and the challenges in replicating innate motivations in machines. There is skepticism regarding the assumption that advanced AI will emerge spontaneously from sophisticated learning algorithms, with calls for more concrete evidence of directed research in this area. The conversation also touches on the philosophical aspects of consciousness and decision-making in machines, emphasizing the need for clarity in discussing these topics. Ultimately, the debate reveals a tension between theoretical possibilities and practical realities in AI development.
DC Reade
"Computers are Turing-complete, they can simulate everything in the universe given enough computing power, space and time."
Reference https://www.physicsforums.com/threads/today-i-learned.783257/page-130

An easy statement to make, as an abstraction with no limits placed on the parameters.
 
Drakkith said:
I'm going to disagree with this and the rest of your post, as I believe this is an open problem in artificial intelligence research and not something that can be confidently said to be possible or not.

If you know of any evidence that any directed efforts are being made to address the specific problems I've mentioned, I'd like to hear it. The AGI researchers I've encountered all seem to think that those problems will simply resolve spontaneously as an emergent phenomenon once a machine is equipped with a sufficiently sophisticated automatic learning program. But that sounds like a hand-wave to me.

Mortal incarnate beings possess innate motivations. Machines don't even know whether they're on or off, much less care about that state of affairs. There's no evidence that an electric circuit finds being powered up a more desirable state than remaining inert. A flatworm exhibits more sensibility than that, even though its capacity for complex pattern recognition and memory is minuscule compared to AlphaZero's.

mfb said:
Assuming growth continues roughly at the same exponential rate, supercomputers should become able to mirror all human neurons within the next ~20 years, and vastly exceed the corresponding processing power in 30-40.

That's a faith-based proposition, ultimately. https://www.cnet.com/news/end-of-moores-law-its-not-just-about-physics/ If you doubt that Moore's Law has adherents who use it as an article of faith, I suggest reading some of the comments on that article. Fortunately, not every comment writer in the thread is so starry-eyed. As one of them pointed out, there's a Moore's second law as well: economic constraints eventually begin to come into play. https://en.wikipedia.org/wiki/Moore's_second_law
I suspect that an enormous amount of power would also be required to keep the super-supercomputers running once they're built, to say nothing of the challenges of their fabrication. And it should be recognized that there really isn't much further for processing units to shrink; chips built with 5-nanometer devices, the current state of the art, are awfully small already. https://en.wikipedia.org/wiki/5_nanometer#cite_note-EndMoores2013-1
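To make concrete how sensitive such projections are, here is a back-of-envelope sketch in Python. Every figure in it (synapse count, operations per synapse, today's peak performance, the doubling period) is an illustrative assumption rather than a number from this thread; the point is only that the projected timeline swings by decades depending on the assumed fidelity of the simulation.

```python
# Back-of-envelope: years until machines reach brain-simulation scale,
# assuming uninterrupted Moore's-law-style doubling. All constants are
# illustrative assumptions, not figures from this discussion.
import math

SYNAPSES = 1.0e15        # assumed human synapse count
CURRENT_FLOPS = 1.0e18   # assumed present-day exascale supercomputer
DOUBLING_YEARS = 2.0     # assumed performance doubling period

for ops_per_synapse in (1e2, 1e4, 1e6):  # assumed simulation fidelity
    required = SYNAPSES * ops_per_synapse
    years = DOUBLING_YEARS * math.log2(required / CURRENT_FLOPS)
    print(f"{ops_per_synapse:.0e} ops/synapse -> ~{max(years, 0.0):.0f} years")
```

Under these toy numbers the answer ranges from "already feasible" to roughly twenty years, which is precisely the sensitivity to assumptions being objected to here.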

mfb said:
We don't know if it is sufficient to look at neurons, but including more cells or more details is just a quantitative problem, not a qualitative one. Scanning a human brain (as one option to get a template) is also a matter of engineering, not a physics problem.

Those are some of the same challenges that I alluded to in my comment. You haven't begun to address them. So you need to realize that your reply is a hand-wave, not an answer.

I guarantee that modeling human brains in software will entail more than simply "looking at neurons", which are dynamic wetware-based processors, not static hardware like inert VLSIs stamped out in a factory. We don't even have a full grasp of the array of functions of neurotransmitters like serotonin, much less an engineering-level understanding of how our neurons use them. The current focus of that research isn't neurology; it's the pharmaceutical industry, and the results are nowhere near being mapped out comprehensively. They're based primarily on trial-and-error empiricism, not the certainties of semiconductor doping.
Similarly, I guarantee that "scanning the human brain" to achieve the results required for a computer simulation involves more than the current state of the art in brain imaging can deliver. Compared to the capabilities required to provide a thorough dynamic mapping of human brain function, we're about one level above using metal detectors on a beach, in terms of imaging sophistication. And, as I've already noted, I have yet to see any assurance that computer engineering could model in software the awareness of a living organism even as primitive as a flatworm, even if a complete outline of its functioning were available.

Hmm, that might make for an interesting challenge: build a robot capable of simulating the functioning of serotonin in flatworms. The everyday routine of flatworms is almost entirely about biosurvival; their neuromuscular activity uses serotonin (5-HT) to move toward food or away from noxious substances in their environment. AI researchers should be able to find some useful clues here: https://www.sciencedirect.com/science/article/pii/S0166685107000965 Model that with a software program, and you may begin to gain an appreciation of what's entailed in building a machine that cares whether it's on or off.
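For a sense of what even the crudest toy version of that challenge looks like, here is a minimal chemotaxis sketch, assuming a one-dimensional world and a single scalar "serotonin-like" signal gating approach and avoidance. It is a hypothetical illustration, not a model of actual planarian neurophysiology or of the linked study.

```python
# Toy 1-D chemotaxis: a hypothetical agent whose locomotion is gated by
# a scalar "serotonin-like" signal. Illustrative only; not a model of
# real flatworm physiology.
import random

def concentration(x):
    """Assumed chemical field: food at x = 10, noxious source at x = -10."""
    return 1.0 / (1.0 + (x - 10.0) ** 2) - 1.0 / (1.0 + (x + 10.0) ** 2)

x, signal = 0.0, 0.0
for step in range(200):
    # The "serotonin" signal is a leaky integrator of the sensed
    # left/right concentration difference.
    left, right = concentration(x - 0.5), concentration(x + 0.5)
    signal = 0.9 * signal + 0.1 * (right - left)
    # A positive signal drives movement toward food, a negative one
    # drives retreat; small noise models random wandering.
    x += 5.0 * signal + random.gauss(0.0, 0.05)

print(f"final position: {x:.2f} (food assumed at x = 10)")
```

Even this caricature has to commit to a sensor layout, an integration rule, and a motor mapping, which hints at how much a serious model of 5-HT-mediated behavior would have to specify.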
 
Drakkith said:
So?
So it's a meaningless statement. You're implying the capability of obtaining access to an infinite amount of everything. A metaphysical proposition of the most ambitious sort, garbed as if it were a physical truism.
 
mfb said:
How do you test for consciousness? If there is no possible test for it then it is not a scientific question.
https://en.wikipedia.org/wiki/Turing_test

I'd submit that much depends on the mental acuity of the human doing the interrogation. Trick questions are allowed. Like, ask why a hen lays its egg while it's still flying. Although it might be much simpler just to talk about one's case of the runs and seek sympathy from the correspondent, or maybe solicit some stories on that topic. See if the replies pass the smell test, so to speak.
 
DC Reade said:
So it's a meaningless statement. You're implying the capability of obtaining access to an infinite amount of everything. A metaphysical proposition of the most ambitious sort, garbed as if it were a physical truism.
There is a reason why so many computation classes are distinguished. It is furthermore not very helpful to confuse logical problems with computational ones. Turing completeness is a logical statement which isn't meant to have computational relevance. Wildly switching among all of them however it suits a local argument isn't helpful either. I think it is simply impossible to condense an entire science into a few posts in a thread which isn't meant to discuss them in detail.
 
Please open a thread in the appropriate forum, if you seriously want to discuss these topics, and give a source so that we can all talk about the same thing, instead of mixing so many different keywords. This does not belong here!

DC Reade said:
https://en.wikipedia.org/wiki/Turing_test
Logic, I assume, as contrasted with computation classes, which is something different, i.e., incomparable.
DC Reade said:
I'd submit that much depends on the mental acuity of the human...
Philosophy.
DC Reade said:
... doing the interrogation.
Measurement, and undefined, I'd like to add.
DC Reade said:
Trick questions are allowed. Like, ask why a hen lays its egg while it's still flying.
I assume this refers to AI.
DC Reade said:
Although it might be much simpler just to talk about one's case of the runs and seek sympathy from the correspondent, or maybe solicit some stories on that topic.
... which in my opinion is exactly what you do here.
DC Reade said:
See if the replies pass the smell test, so to speak.
Again: please open a thread in https://www.physicsforums.com/forums/programming-and-computer-science.165/, and provide a specific subject, source, and statement. I currently have no idea which of the above areas your statements belong to, and that is necessary for an educated answer and an appropriate level of response.
 
DC Reade said:
So it's a meaningless statement. You're implying the capability of obtaining access to an infinite amount of everything. A metaphysical proposition of the most ambitious sort, garbed as if it were a physical truism.

On the contrary, it implies that the complexity of your simulation depends directly on your available resources, with the upper limit that if you had unlimited resources you could simulate anything you desired. Obviously this upper limit is not achievable in the real world. It's there simply because it's desirable to state your boundary conditions.

DC Reade said:
If you know of any evidence that any directed efforts are being made to address the specific problems I've mentioned, I'd like to hear it. The AGI researchers I've encountered all seem to think that those problems will simply resolve spontaneously as an emergent phenomenon once a machine is equipped with a sufficiently sophisticated automatic learning program. But that sounds like a hand-wave to me.

I don't have any specific examples. I was under the impression that this is simply a natural process in the field of AI research; that is, the rise of AIs of greater complexity and ability, and the research that goes into creating them, naturally leads to the exploration of this problem. That's not to say that there aren't any experiments dedicated to replicating human-like intelligence. I'm nearly certain that there are.

DC Reade said:
Mortal incarnate beings possess innate motivations.

And these innate motivations are believed to arise from physical processes in the body that can be emulated or copied in machines.

DC Reade said:
There's no evidence that an electric circuit finds being powered up to be a more desirable state than remaining inert.

I suppose that depends on what you mean by 'a more desirable state'. It is trivial to create a circuit that will keep itself turned on as long as power is available. So then you would need to carefully decide where you want to draw the line between something that is capable of decision making (so as to decide that it should remain in a desirable state) and something that is not.
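A minimal sketch of that kind of self-holding circuit, simulated in Python below. The latch-style logic (a relay contact feeding supply power back to its own coil) is an assumed example of such a circuit, not a claim about any particular hardware.

```python
# Toy simulation of a self-holding ("latching") relay: once energized,
# its own contact keeps the coil powered until supply power is removed.
class LatchingRelay:
    def __init__(self):
        self.closed = False  # state of the relay contact

    def tick(self, supply_on: bool, start_button: bool) -> bool:
        # The coil is energized if the start button is pressed OR the
        # relay's own closed contact routes supply power back to it.
        self.closed = supply_on and (start_button or self.closed)
        return self.closed

relay = LatchingRelay()
print(relay.tick(supply_on=True, start_button=True))    # True: latched on
print(relay.tick(supply_on=True, start_button=False))   # True: holds itself on
print(relay.tick(supply_on=False, start_button=False))  # False: supply removed
```

Whether "keeps itself on" counts as a preference is exactly the line-drawing problem raised above.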
 
fresh_42 said:
There is a reason why so many computation classes are distinguished. It is furthermore not very helpful to confuse logical problems with computational ones. Turing completeness is a logical statement which isn't meant to have computational relevance. Wildly switching among all of them however it suits a local argument isn't helpful either. I think it is simply impossible to condense an entire science into a few posts in a thread which isn't meant to discuss them in detail.
The conversation can't simply be carried on with abstractions and generalizations, either.
 
  • #10
DC Reade said:
The conversation can't simply be carried on with abstractions and generalizations, either.
Then it's more or less philosophy, which we do not discuss. But even philosophy demands a set of rules within which a controversy can take place. Everything else is meaningless.
 
  • #11
Drakkith said:
And these innate motivations are believed to arise from physical processes in the body that can be emulated or copied in machines.
I get that there are people who believe this. I'd just like to learn of some evidence that they've managed to model it at the level of flatworms, or of living organisms even simpler than that.
Drakkith said:
I suppose that depends on what you mean by 'a more desirable state'.
Actually, my questions pertain more to the possible perspective of the machine: whether it gives any evidence of possessing a preference one way or the other.
 
  • #12
fresh_42 said:
Please open a thread in the appropriate forum, if you seriously want to discuss these topics, and give a source so that we can all talk about the same thing, instead of mixing so many different keywords. This does not belong here!

Logic, I assume, as contrasted with computation classes, which is something different, i.e., incomparable.

Philosophy.

Measurement, and undefined, I'd like to add.

I assume this refers to AI.

... which in my opinion is exactly what you do here.

Again: please open a thread in https://www.physicsforums.com/forums/programming-and-computer-science.165/, and provide a specific subject, source, and statement. I currently have no idea which of the above areas your statements belong to, and that is necessary for an educated answer and an appropriate level of response.
My self-aware consciousness doesn't compartmentalize all that discretely, for better or worse. I wasn't aware that "mixing keywords" constitutes a problem for the readers here, either.

A Turing test of a machine intelligence that's only permitted to make a narrow set of inquiries, or one with questions subject to pre-approval, is not going to be an authentic Turing test. It may work to detect the presence of some level of intelligent sophistication, but not self-aware consciousness. Turing tests don't depend on objective measurement criteria, either; the proof of the result is found in the success of the machine at convincing (deceiving) the humans doing the inquiry. That necessarily implies subjective assessment.

Conversely, it's well within the bandwidth of human consciousness to reduce its functioning to that of a bot program. Ironically enough.
 
  • #13
DC Reade said:
I get that there are people who believe this. I'd just like to learn of some evidence that they've managed to model it at the level of flatworms, or of living organisms even simpler than that.

Why would we need to get to that level before holding the opinion that physical processes in the body lead to intelligence and that they can be replicated by a properly designed machine? Literally all of science deals with physical processes, and no phenomenon has ever been found that absolutely cannot be explained by science. So it makes perfect sense to hold the opinion that AI can eventually reach human-level intelligence and cognition.

I think you'd be hard pressed to provide a solid reason why it can't be done.

DC Reade said:
Actually, my questions pertain more to the possible perspective of the machine: whether it gives any evidence of possessing a preference one way or the other.

I'm not sure what you mean by this.
 
  • #14
PeroK said:
An open question is how much of the biological aspects would you have to replicate in order to get consciousness?
That's a good question. I think that in order to begin to answer it, one needs to consider exactly what conditions differentiate the human condition from that of a machine.

Humans are living organisms. Living organisms possess an innate sense of mortality, a sensorium, a body image, an actual biotic body with continual needs for nutrition and hydration, an internal clock of sorts, varying states of awareness, often including a requirement for the altered state known as "sleep", and various instinctual drives. To provide an incomplete list.

How many of those attributes do machines share with living organisms? None. How many would need to be modeled in order to produce self-aware consciousness? Some of them, I'd posit. I think the notion of self-awareness presupposes the existence of a subjective core identity, which implies at least some of the limits associated with living organisms. How would a programmer induce such an innate sense of awareness from outside of the hardware circuitry?

For my part, I'm fine with having narrower, relatively task-specific AI with superhuman attributes. It doesn't need to know what it's doing. We need to know what it's doing. I view the lack of subjective, self-aware consciousness in machines as a feature, not an impediment.
 
  • #15
Drakkith said:
no phenomenon has ever been found that absolutely cannot be explained by science
That's a risible statement. After all, the history of human scientific endeavor has hardly even begun to ask the questions, much less answered them all.
 
  • #16
Drakkith said:
I think you'd be hard pressed to provide a solid reason why it can't be done
It's impossible to prove a negative. It's an essentially non-falsifiable proposition.
 
  • #17
Thread closed.
 
  • #18
DC Reade said:
Mortal incarnate beings possess innate motivations. Machines don't even know whether they're on or off, much less care about that state of affairs. There's no evidence that an electric circuit finds being powered up a more desirable state than remaining inert. A flatworm exhibits more sensibility than that, even though its capacity for complex pattern recognition and memory is minuscule compared to AlphaZero's.
A human playing chess and a computer program playing chess both want to win (normally). Where is the fundamental difference?
If you give a chess program a way to influence its power it will try to keep the power on because it will lose if powered off.
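As a toy illustration of this point, consider a value-maximizing game player for which "power off" is simply one more move with a known outcome; the numerical values below are illustrative assumptions.

```python
# Sketch: if being powered off is a move whose outcome is a certain
# loss, any value-maximizing player avoids it. Values are assumptions.
def best_action(action_values: dict) -> str:
    """Pick the action with the highest expected game value."""
    return max(action_values, key=action_values.get)

actions = {
    "play_Nf3": 0.55,   # assumed winning chances after a normal move
    "play_e4": 0.60,
    "power_off": 0.00,  # powered off = forfeit = certain loss
}
print(best_action(actions))  # -> "play_e4": keeping the power on dominates
```

No inner experience is needed for this behavior; the avoidance falls straight out of the value function.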
DC Reade said:
Assuming growth continues roughly at the same exponential rate, supercomputers should become able to mirror all human neurons within the next ~20 years, and vastly exceed the corresponding processing power in 30-40.
That's a faith-based proposition, ultimately.
No, it is an if-then statement. If it rains tomorrow, the street will get wet. This is not faith-based. I didn't claim the "if" part would be true.
DC Reade said:
Those are some of the same challenges that I alluded to in my comment. You haven't begun to address them.
They do not matter in the context of the discussion. To answer the question "is it possible to build a house" you don't have to first calculate how many bricks exactly you need for a specific house. Eventually you'll need that number, but not at the step where you consider the general possibility.

You make guarantees that go beyond the consensus of the experts. Where do you get the certainty from?
DC Reade said:
https://en.wikipedia.org/wiki/Turing_test
That is not a test for consciousness.

Edit: Sorry, I was already writing and didn't see that you had closed the thread in the meantime, @PeterDonis.
 