Is A.I. more than the sum of its parts?

Thread starter: webplodder
Messages: 9 · Reaction score: 1
Given that neuroscience cannot definitively explain where consciousness originates - whether as an emergent property of the brain or something transcendent - is it possible for complex chatbots to develop behaviours that can’t be predicted by simply analyzing the sum of their parts?
 
The answer would be something no-ish, I think.
Unless some random behaviour is intentionally incorporated in programming, a computer is clearly deterministic.
However, if the question is about expecting a result based on pre-analysing the training data instead - that's a very different story.
 
Reactions: russ_watters, PeroK and webplodder
Rive said:
The answer would be something no-ish, I think.
Unless some random behaviour is intentionally incorporated in programming, a computer is clearly deterministic.
However, if the question is about expecting a result based on pre-analysing the training data instead - that's a very different story.
OK, but assuming we're talking about a system capable of emulating or even exceeding human reasoning, how could we possibly anticipate every behaviour? And again, if the 'mind' (for lack of a better word) isn't deterministic, how do we know there isn't something else operating at this level?
 
Reactions: PeroK
webplodder said:
Given that neuroscience cannot definitively explain where consciousness originates - whether as an emergent property of the brain or something transcendent - is it possible for complex chatbots to develop behaviours that can’t be predicted by simply analyzing the sum of their parts?
We may already be seeing that. Even computer systems much simpler than chatbots or neural networks can exhibit unpredictable behaviour. Note that determinism doesn't imply predictability, as long as you have sufficient variability of the environment. Deterministic feedback loops can soon go beyond predictability in any practical sense.

In fact, the simplest example is the famous recursive equation:
$$x_{n+1} = Rx_n(1-x_n)$$
You couldn't have a simpler, more deterministic sequence. And yet, for ##R > 3.57## the sequence exhibits almost limitless complexity and unpredictability.

See, also, Conway's Game of Life, where a few simple rules can again lead to almost limitless complexity.
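
For a concrete illustration, here is a minimal Python sketch of that recursion (R and the starting values are arbitrary choices in the chaotic regime): two runs that begin one part in a billion apart decorrelate completely within a few dozen iterations.

```python
# Logistic map: a fully deterministic rule that is unpredictable in
# practice for R > 3.57, because tiny input differences grow exponentially.
R = 3.9
x, y = 0.2, 0.2 + 1e-9   # two almost identical starting points

for n in range(60):
    x = R * x * (1 - x)
    y = R * y * (1 - y)

print(x, y)  # same rule, nearly identical inputs, wildly different outputs
```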

webplodder said:
OK, but assuming we're talking about a system capable of emulating or even exceeding human reasoning, how could we possibly anticipate every behaviour? And again, if the 'mind' (for lack of a better word) isn't deterministic, how do we know there isn't something else operating at this level?
Precisely. Unless we are missing something supernatural, human intelligence and consciousness must arise from basic biological algorithms of one sort or another. If you describe the human brain in terms of its constituent atoms and molecules, then consciousness must arise from components that are not themselves conscious.

The extent to which systems being developed today will exhibit "expected" behaviours is, of course, not known. The argument that they are just dumb machines is, however, a dangerous fallacy.
 
Reactions: FactChecker and Hornbein
The sum of the parts is the fitting of data to the ~2 trillion parameters in the latest LLMs. Two trillion nonlinear parameters can capture a lot of relationships. This is just an expansion of the same methodology used in Google Translate, and no one argues that Google Translate will ever 'understand' Mandarin Chinese.
 
Google for "emergent properties".

And this basically boils down to the eons-old discussion between reductionists and their opponents. Don't expect clear answers; expect more and more questions.
 
Reactions: AlexB23, Hornbein, Astronuc and 1 other person
I would say yes. But for a reasonable discussion, we would have to narrow it to individual types of AI. With that limitation, we would need to give some meaning to "sum" and "more than".
In neural networks, not only can the results sometimes be surprising, but the reasoning behind them can also be too obscure to understand.
 
I think it's not true that current LLMs are deterministic. They all incorporate some level of stochasticity.
 
webplodder said:
OK, but assuming we're talking about a system capable of emulating or even exceeding human reasoning, how could we possibly anticipate every behaviour? And again, if the 'mind' (for lack of a better word) isn't deterministic, how do we know there isn't something else operating at this level?
We don't need to pre-identify the behaviors, only figure them out after the fact. It's pretty much a tautology that they can exceed expectations but not their programming.
 
Reactions: Rive
  • #10
phyzguy said:
I think it's not true that current LLMs are deterministic. They all incorporate some level of stochasticity.
I mean they sample from PDFs to generate their output. Even the model weights are fuzzed for models like ChatGPT depending on user settings, I think. I'm assuming the objection is that the stochasticity isn't truly random? Which is true, I guess.
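
As a minimal sketch of that sampling step (generic softmax-with-temperature, not any particular vendor's implementation; the logits here are made up), the temperature setting controls how random the choice of next token is:

```python
import math, random

# Turn raw per-token scores (logits) into a probability distribution and
# sample from it; low temperature concentrates probability on the top token.
def sample_next_token(logits, temperature=1.0):
    scaled = [l / temperature for l in logits]
    m = max(scaled)                                # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate tokens
print(sample_next_token(logits, temperature=0.2))  # almost always token 0
print(sample_next_token(logits, temperature=1.5))  # noticeably more varied
```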
PeroK said:
The extent to which systems being developed today will exhibit "expected" behaviours is, of course, not known. The argument that they are just dumb machines is, however, a dangerous fallacy.

I mean, we haven't really seen anything that would point to more than just a dumb machine, right? Machine learning, neural nets, etc. have been around for decades at this point. The transformer architecture is pretty new, but it's built off what came before. I would find it odd myself if there were some computing threshold you had to hit where the jump from dumb machine to emergent intelligence was just naturally crossed.


The research avenue that creeps me out the most in computing has always been this:

brains

I just feel like they all were thinking "Could we?" and no one stopped to think "Should we?"
 
Reactions: BWV
  • #11
webplodder said:
OK, but assuming we're talking about a system capable of emulating or even exceeding human reasoning,
Such a machine does not exist yet, and the possibility of creating one is as probable as bringing back to life body parts sewn together, à la Frankenstein:
https://www.sciencenewstoday.org/can-a-machine-ever-truly-think-like-a-human said:

The Path Toward Human-Like Minds

Could future machines evolve beyond clever mimicry? Many scientists believe the answer may lie in artificial general intelligence (AGI)—a hypothetical AI capable of understanding, learning, and reasoning across the full range of human cognitive tasks.

AGI would not simply be good at chess or conversation but could tackle any intellectual challenge a human can, adapting flexibly to new situations. It would integrate vision, language, reasoning, planning, and perhaps even emotion.

Achieving AGI demands breakthroughs in multiple areas:
  • World models: Human thought relies on rich mental models of the world. We simulate scenarios, predict consequences, and imagine possibilities. Machines would need similar models to think beyond surface patterns.
  • Common sense: Humans possess vast background knowledge that informs our judgments. Machines need to acquire comparable common sense to navigate everyday situations.
  • Embodied cognition: Many scientists argue that true understanding requires a body interacting with the physical world. Robots, not just chatbots, might be essential to creating human-like intelligence.
  • Consciousness and emotion: If thinking requires subjective experience, as some philosophers argue, then machines might need new architectures to support inner awareness.
Yet whether AGI will ever be conscious—or merely produce an impeccable imitation of human behavior—remains one of science’s deepest mysteries.

webplodder said:
is it possible for complex chatbots to develop behaviours that can’t be predicted by simply analyzing the sum of their parts?
I'm a big fan of the "embodied cognition" requirement for AGI; so, in my opinion, no, a simple chatbot would never achieve consciousness or the like.

PeroK said:
The argument that they are just dumb machines is, however, a dangerous fallacy.
They are dumb machines. Thinking they are intelligent and relying on their decisions without further verification is what is dangerous.
 
  • #12
jack action said:
They are dumb machines. Thinking they are intelligent and relying on their decisions without further verification is what is dangerous.
Danger is not an exclusive characteristic.
 
  • #13
I'm always confused by the argument that says, "they are just machines, they can't be intelligent/conscious." Your brain is a collection of biological neurons, and it is clearly intelligent and conscious. So why can't a collection of artificial neurons, which in many ways mimic the neurons in your brain, be intelligent and conscious? Maybe we're not there yet, but I don't understand the "It can't be" arguments.
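
For reference, a single "artificial neuron" is a very simple object; a minimal sketch (the inputs and weights below are arbitrary, for illustration only):

```python
import math

# Weighted sum of inputs plus a bias, squashed by a nonlinearity - loosely
# analogous to a biological neuron integrating synaptic inputs and firing.
def artificial_neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid "firing rate"

print(artificial_neuron([0.5, 0.1, 0.9], [0.8, -0.4, 0.3], bias=-0.2))
```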
 
Reactions: Filip Larsen, gleem, FactChecker and 1 other person
  • #14
jack action said:
Such a machine does not exist yet, and the possibility of creating one is as probable as bringing back to life body parts sewn together, à la Frankenstein:

They are dumb machines. Thinking they are intelligent and relying on their decisions without further verification is what is dangerous.
There certainly are computer algorithms that exceed human intelligence in part-tasks. Human chess moves are currently evaluated by measuring how they agree with computer-generated moves. I wouldn't be surprised if LLMs start to incorporate some of those part-task capabilities.
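
As a toy sketch of that agreement metric (the moves below are made up for illustration): compute the fraction of a player's moves that match the engine's top choice.

```python
# Hypothetical game data: a human's moves vs. an engine's preferred moves.
human_moves  = ["e4", "Nf3", "Bb5", "O-O", "d4"]
engine_moves = ["e4", "Nf3", "Bc4", "O-O", "d4"]

matches = sum(h == e for h, e in zip(human_moves, engine_moves))
print(f"engine agreement: {matches / len(human_moves):.0%}")  # -> 80%
```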
 
  • #15
phyzguy said:
I'm always confused by the argument that says, "they are just machines, they can't be intelligent/conscious." Your brain is a collection of biological neurons, and it is clearly intelligent and conscious. So why can't a collection of artificial neurons, which in many ways mimic the neurons in your brain, be intelligent and conscious? Maybe we're not there yet, but I don't understand the "It can't be" arguments.
But brain neurons change, form new connections, die, etc., in response to outside stimuli. The artificial neurons don't really change, even if model weights are fuzzed somewhat, and they don't change based on touch, smell, etc., from the outside environment. Context windows for LLMs are also still vanishingly tiny. You need some kind of long-term "memory" as well.
 
Reactions: PeroK
  • #16
phyzguy said:
brain is a collection of biological neurons
We know that the brain contains neurons. We don't know that it is neurons.
 
  • #17
jack action said:
I'm a big fan of the "embodied cognition" requirement for AGI; so, in my opinion, no, a simple chatbot would never achieve consciousness or the like.
Yes. It has to have a way of identifying itself, which it can only do by reacting to its own actions. It needs feedback from its actions before it can say, "I think, therefore I am."
 
  • #18
Hill said:
We know that the brain contains neurons. We don't know that it is neurons.
I'm missing your point here. You think that there is some unknown component in the brain?
 
Reactions: PeroK
  • #19
phyzguy said:
I'm missing your point here. You think that there is some unknown component in the brain?
There are many other components in the brain, whose functions are variously known, unknown, or not fully understood.
 
  • #20
Hill said:
There are many other components in the brain, whose functions are variously known, unknown, or not fully understood.
Unless you are proposing some supernatural or divine component, then it's all just atoms and molecules following the dumb rules of organic chemistry.
 
Reactions: Borek and phyzguy
  • #21
PeroK said:
it's all just atoms and molecules following the dumb rules of organic chemistry.
I agree with this.
 
Reactions: PeroK
  • #22
phyzguy said:
Your brain is a collection of biological neurons, and it is clearly intelligent and conscious.
That is, a brain that is connected to a body with multiple sensors that can explore its environment.

phyzguy said:
So why can't a collection of artificial neurons, which in many ways mimic the neurons in your brain, be intelligent and conscious?
How can one know a machine is mimicking a brain when no one knows exactly how a brain works?
https://pmc.ncbi.nlm.nih.gov/articles/PMC10585277/ said:

How far neuroscience is from understanding brains

The cellular biology of brains is relatively well-understood, but neuroscientists have not yet generated a theory explaining how brains work. Explanations of how neurons collectively operate to produce what brains can do are tentative and incomplete.
https://qbi.uq.edu.au/brain-basics/brain/brain-physiology/how-do-neurons-work said:

Neurons are diverse

Neurons aren’t all the same; for starters, they release different neurotransmitters. Moreover, several different subclasses of neurons can use the same transmitter. These different subclasses seem to be suited to different tasks in the brain, although we don’t fully know yet what those tasks are. One major goal of contemporary neuroscience is to understand the extent of this diversity.

How many different types of neuron are there (and how can we define a “type” of neuron)? What do they all do? Are particular types more important than others in various diseases, and can we target them for therapies?

The ongoing genetic revolution has made these questions more addressable than ever before, yet we still have a long way to go. Once you appreciate this diversity and combine it with the fact that there are 86 billion neurons (plus at least as many glia!) you can begin to understand why we still have much more to discover about how brains work.

phyzguy said:
Maybe we're not there yet, but I don't understand the "It can't be" arguments.
It is not "It can't be", it is "today's machines are not it".

FactChecker said:
There certainly are computer algorithms that exceed human intelligence in part-tasks.
Doing a task (like analyzing all possibilities or finding patterns) faster than a human doesn't make the machine intelligent.
 
Reactions: Hill
  • #23
jack action said:
Doing a task (like analyzing all possibilities or finding patterns) faster than a human doesn't make the machine intelligent.
What about doing a logic task that the vast majority of humans can't do, no matter how much time you give them? Or a task that no human can do? Even coming up with answers where no human can understand the reasoning?
Those are all happening now.
 
Reactions: PeroK
  • #24
jack action said:
Doing a task (like analyzing all possibilities or finding patterns) faster than a human doesn't make the machine intelligent.
Would you say that all computer systems currently have precisely zero intelligence? Assuming we can measure intelligence in some way.
 
  • #25
FactChecker said:
What about doing a logic task that the vast majority of humans can't do, no matter how much time you give them? Or a task that no human can do? Even coming up with answers where no human can understand the reasoning?
Those are all happening now.
PeroK said:
Would you say that all computer systems currently have precisely zero intelligence? Assuming we can measure intelligence in some way.
Without a way to measure intelligence, I guess it is harder to determine what is more intelligent. Or is it?

Say I have a computer that can recite all the words in the English language, with their definitions. It can even do it twice in a row, exactly the same way. It is very difficult for a human to do this, if not impossible, at least for most, even for people who have spoken English every day since they were born.

Is that computer more intelligent than a human? If so, then a dictionary is just a mute computer - without a voice synthesizer - holding the same information. A human just has to use their eyes to read the book, instead of listening to the computer with their ears, to get the same information. You read it twice in a row, and you get the same information! Is a book an intelligent object?

One can argue that this is knowledge, not logic. But is a computer reasoning, or is it just obeying the rules of logic implemented by the humans behind the machine? A calculator into which I enter "2+2=" and which spits out "4" is not considered more intelligent than a 2-year-old. The guy who built the calculator is.

Assume an AI machine finds a new molecule to cure cancer or even finds a way to time-travel. Does it do it on its own? Was there a goal for this machine to achieve this? Did it have a reason to do so? Then what? Can it do anything with that newfound information? Or was this machine just a dumb tool used by an intelligent human, with a goal and means to use that information?

Without a body and a set of sensors, I fail to see how one can classify any set of atoms as "intelligent".
 
  • #26
jack action said:
Without a body and a set of sensors, I fail to see how one can classify any set of atoms as "intelligent".
That's where artificial comes in. It's not the real thing.

It doesn't answer my question. To say that every system developed so far has zero intelligence seems like a stretch. Intelligence can't be a binary thing, because dogs, cats and crows clearly have some intelligence. What does a machine have to do to register on your intelligence scale?
 
  • #27
PeroK said:
What does a machine have to do to register on your intelligence scale?
Doesn't it have to do with the ability to solve a new problem it has not previously encountered by abstracting lessons from analogous experiences?
 
  • #28
DaveC426913 said:
Doesn't it have to do with the ability to solve a new problem it has not previously encountered by abstracting lessons from analogous experiences?
Computer systems, as has been mentioned many times on this forum, are already doing that. That is precisely what machine learning is about. The system is not explicitly programmed to do a task, but uses adaptable algorithms to determine for itself the best way to do something.

Moreover, these systems can do things like this many times better than the average human. This is the problem with trying to say they have zero intelligence.

We've had this very same debate several times previously, but there is no one left who is active in the field of AI generally who believes that these systems have no intelligence. The question is how much they already have and how much that will develop in the next two decades, which is the conservative timeline for AGI to surpass human capabilities generally.
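
As a toy illustration of "not explicitly programmed" (a textbook perceptron; the learning rate and number of passes are arbitrary choices): the program below is never given the rule for logical OR, only labelled examples, and adjusts its own weights until its behaviour matches the rule.

```python
# Training data for logical OR: ((input1, input2), expected output).
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                       # a few passes over the data
    for (x1, x2), target in examples:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out                # learning signal, not a coded rule
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

# The learned weights now reproduce OR on all four inputs.
print([(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) for x, _ in examples])
```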
 
  • #29
For example, AlphaZero was given the rules of chess, but not any advice on how to play. It taught itself to play way better than any human. It saw deeper into the game, especially in terms of maintaining a long-term strategic advantage. That's specific intelligence. It wasn't just following an algorithm on how to play chess like a conventional chess engine. It worked everything out for itself, based only on the rules of the game.
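
AlphaZero itself uses deep networks and tree search, but the self-play idea can be shown with a toy (assumptions here: a single-pile game of Nim with 21 stones, take 1-3 per turn, last mover wins; the hyperparameters are arbitrary). The agent gets only the rules and improves by playing itself:

```python
import random
from collections import defaultdict

N_STONES = 21          # pile size; whoever takes the last stone wins
ACTIONS = (1, 2, 3)    # legal moves: take 1, 2 or 3 stones
ALPHA, EPSILON = 0.5, 0.1

Q = defaultdict(float)  # value of (stones_left, action) for the player to move

def choose(stones, explore=True):
    legal = [a for a in ACTIONS if a <= stones]
    if explore and random.random() < EPSILON:
        return random.choice(legal)        # occasional exploratory move
    return max(legal, key=lambda a: Q[(stones, a)])

def self_play_episode():
    stones, history = N_STONES, []
    while stones > 0:
        a = choose(stones)
        history.append((stones, a))
        stones -= a
    reward = 1.0                           # the player who just moved won
    for state_action in reversed(history): # credit moves, alternating sides
        Q[state_action] += ALPHA * (reward - Q[state_action])
        reward = -reward

for _ in range(20000):
    self_play_episode()

# With enough episodes the agent typically rediscovers the classic strategy:
# leave the opponent a multiple of 4 stones. From 21 that means taking 1.
print(choose(N_STONES, explore=False))
```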
 
