Is A.I. more than the sum of its parts?

  • Thread starter: webplodder
  • #61
What I often notice is a certain lack of humility when debating this. Chatgpt handles more information, both in quantity and quality, than any of us, and its capacity to manage that information is superior to any of ours.

But its operation remains an automated process predetermined by us: you provide an input, and it generates an output. The way to generate the output is also predetermined by us, even if we talk about generating new ways to solve something; that's still predetermined by us.

Is intelligence required to create a machine capable of solving problems at a human level? Yes, obviously, but the intelligence doesn't reside in the machine; we put it there.

"I've learned to solve problems with these two stones, therefore, these two stones are intelligent..."—this, in short, is what people who attribute intelligence to AI are doing.
 
Reactions (Like): jack action
  • #62
javisot said:
Chatgpt handles more information, both in quantity and quality, than any of us
I'm not sure this is true. The only information Chatgpt has is its training data, which is just a huge corpus of text. Humans have information coming in from multiple channels, with much, much richer semantic connections to the rest of the world.

javisot said:
and its capacity to manage that information is superior to any of ours.
I'm not sure this is true either. The only way Chatgpt has of "managing" the information it has is to extract patterns from it according to the rules humans gave it. Humans have many more ways of managing information--including thinking up the idea of something like Chatgpt in the first place.
 
Reactions (Skeptical, Like): PeroK and javisot
  • #63
PeterDonis said:
I'm not sure this is true. The only information Chatgpt has is its training data, which is just a huge corpus of text. Humans have information coming in from multiple channels, with much, much richer semantic connections to the rest of the world.


I'm not sure this is true either. The only way Chatgpt has of "managing" the information it has is to extract patterns from it according to the rules humans gave it. Humans have many more ways of managing information--including thinking up the idea of something like Chatgpt in the first place.
Imagine a quiz show. It's a contest with many questions on a wide variety of topics. The winner is the one who answers the most questions correctly.

Does anyone really think they could beat chatgpt?

I certainly don't...
 
  • #64
javisot said:
Imagine a quiz show. It's a contest with many questions on a wide variety of topics. The winner is the one who answers the most questions correctly.

Does anyone really think they could beat chatgpt?
This is just processing text in a corpus of training data. As long as chatgpt's training data has at least the same text in it as the training data the humans have taken in, of course it's going to be better able to process it, since its hardware runs roughly 8 to 9 orders of magnitude faster.

But chatgpt has no concept at all that there are things in the actual world that the quiz show questions refer to. It doesn't even have the concept of "the actual world" separate from its training data. The humans do. That's a simple example of information the humans have that chatgpt doesn't, and a way of processing information--comparing information in text form with world knowledge--that chatgpt doesn't have.
 
Reactions (Like, Skeptical): jack action, PeroK and javisot
  • #65
javisot said:
Imagine a quiz show. It's a contest with many questions on a wide variety of topics. The winner is the one who answers the most questions correctly.

Does anyone really think they could beat chatgpt?

I certainly don't...
Do I have access to the internet/google and a large time handicap? If so, then I like my odds. But that wouldn't prove I'm very intelligent either.
 
Reactions (Like): jack action, javisot and PeterDonis
  • #66
russ_watters said:
Do I have access to the internet/google and a large time handicap? If so, then I like my odds. But that wouldn't prove I'm very intelligent either.
A lot of people who get their information from the internet are less informed than chatgpt. I don't think they are really smart, but they do have some intelligence.
 
  • #67
PeterDonis said:
This is just processing text in a corpus of training data. As long as chatgpt's training data has at least the same text in it as the training data the humans have taken in, of course it's going to be better able to process it, since its hardware runs roughly 8 to 9 orders of magnitude faster.

But chatgpt has no concept at all that there are things in the actual world that the quiz show questions refer to. It doesn't even have the concept of "the actual world" separate from its training data. The humans do. That's a simple example of information the humans have that chatgpt doesn't, and a way of processing information--comparing information in text form with world knowledge--that chatgpt doesn't have.
russ_watters said:
Do I have access to the internet/google and a large time handicap? If so, then I like my odds. But that wouldn't prove I'm very intelligent either.
I agree with the point you're making. Chatgpt would beat us in a general quiz, but, curiously, that doesn't mean it's intelligent. We assume that chatgpt doesn't "really understand" any of the responses it generates, however simple they may be.
 
  • #68
PeterDonis said:
This is just processing text in a corpus of training data. As long as chatgpt's training data has at least the same text in it as the training data the humans have taken in, of course it's going to be better able to process it, since its hardware runs roughly 8 to 9 orders of magnitude faster.

But chatgpt has no concept at all that there are things in the actual world that the quiz show questions refer to. It doesn't even have the concept of "the actual world" separate from its training data. The humans do. That's a simple example of information the humans have that chatgpt doesn't, and a way of processing information--comparing information in text form with world knowledge--that chatgpt doesn't have.
Not everyone in the AI field agrees with this assertion. It's not clear the extent to which an LLM understands things. Clearly not as well as an educated human does, but, in practical terms, it may have an emergent understanding.

That's the serious debate we should be having on PF.
 
Reactions (Like): FactChecker and 256bits
  • #69
@PeroK This may be a naive question, but how did alphazero learn to play?
 
  • #70
martinbn said:
@PeroK This may be a naive question, but how did alphazero learn to play?
It played millions of games against itself and mastered the game in four hours. It's not like it solved the game by exhausting all the possibilities. That could not be done by brute force, because there are too many possible moves in each position. Instead, it developed its own algorithm for assessing each position - based on pattern recognition and reinforcement learning.

The best human players learn from experienced players and coaches, who in turn have learned from the best players of the past. And nowadays, ironically, they also learn from chess engines. Once you get past beginner level, you would learn about pawn structure, outposts, weak pawns, weak squares, bishop against knight, etc. These considerations would have been programmed into the algorithm of a conventional chess engine. These are long-term positional ideas that do not lead to immediate victory. AlphaZero figured out the relative importance of all these ideas for itself in four hours.

AlphaZero developed its own "understanding" of all those strategic ideas. In fact, it was slightly weaker than a conventional engine if forced to start from a very complicated position - where the brute force approach was hard to beat.

https://chess.fandom.com/wiki/AlphaZero
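To make the self-play idea concrete, here is a deliberately tiny sketch (the game, names, and parameters are all made up for illustration): a tabular agent learns Nim - take 1 to 3 stones, whoever takes the last stone wins - purely from games against itself, reinforcing the winner's moves and penalising the loser's. AlphaZero itself uses a deep network plus Monte Carlo tree search, so this only illustrates the learn-from-your-own-games loop, not the real system.

Python:
import random
from collections import defaultdict

N_STONES = 10     # starting pile for the toy game (take 1-3 stones, last stone wins)
EPSILON = 0.1     # how often the agent tries a random move instead of its best guess

wins = defaultdict(float)   # summed reward (+1 win, -1 loss) for each (pile, move)
plays = defaultdict(int)    # visit counts for each (pile, move)

def score(pile, move):
    # average outcome observed so far after taking `move` stones from `pile`
    return wins[(pile, move)] / plays[(pile, move)] if plays[(pile, move)] else 0.0

def choose_move(pile):
    moves = [m for m in (1, 2, 3) if m <= pile]
    if random.random() < EPSILON:
        return random.choice(moves)                      # explore
    return max(moves, key=lambda m: score(pile, m))      # exploit what it has learned

for _ in range(50_000):                  # the "millions of games" idea, scaled down
    pile, player, history = N_STONES, 0, []
    while pile > 0:
        move = choose_move(pile)
        history.append((player, pile, move))
        pile -= move
        player = 1 - player
    winner = 1 - player                  # whoever took the last stone wins
    for p, pile_seen, move in history:   # reinforce the winner's moves, punish the loser's
        plays[(pile_seen, move)] += 1
        wins[(pile_seen, move)] += 1.0 if p == winner else -1.0

# With enough self-play the agent usually rediscovers the optimal rule
# ("leave your opponent a multiple of 4") without ever being told it.
print({p: max((m for m in (1, 2, 3) if m <= p), key=lambda m: score(p, m))
       for p in range(1, N_STONES + 1)})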
 
Reactions (Informative, Like): martinbn, BillTre and berkeman
  • #71
PeroK said:
It's not clear the extent to which an LLM understands things.
You ask AI to do something that it has never been asked to do and it does it. Is that not understanding?

PeterDonis said:
I'm not sure this is true. The only information Chatgpt has is its training data, which is just a huge corpus of text. Humans have information coming in from multiple channels, with much, much richer semantic connections to the rest of the world.
AI produces responses that are not in its training data. We usually do not know how AI develops its responses.

Helen Keller was deaf and blind, having fewer channels to connect with the rest of the world: was she less intelligent than a hearing and sighted person?

PeterDonis said:
The only way Chatgpt has of "managing" the information it has is to extract patterns from it according to the rules humans gave it.
AFAIK, LLMs are not given specific instructions on how to process information, e.g., doing arithmetic. Humans are.

We have that inner voice which we use to orchestrate our thinking, especially wrt new or seldom-encountered tasks. Things that we do often and are very familiar with, we can do without actually thinking (which sometimes gets us in trouble). Is this inner voice really doing the processing, or is it merely a mental ear hearing our brain doing the processing, giving us periodic updates without any real conscious direction? Is our consciousness an illusion?
 
Reactions (Like): Filip Larsen and PeroK
  • #72
PeroK said:
We may already be seeing that. Even computer systems much simpler than chatbots or neural networks can exhibit unpredictable behaviour. Note that determinism doesn't imply predictability, as long as you have sufficient variability of the environment. Deterministic feedback loops can soon go beyond predictability in any practical sense.

In fact, the simplest example is the famous recursive equation:
##x_{n+1} = R x_n(1 - x_n)##
You couldn't have a simpler, more deterministic sequence. And, yet, for ##R > 3.57## the sequence exhibits almost limitless complexity and unpredictability.

See, also, Conway's Game of Life, where a few simple rules can again lead to almost limitless complexity.
Well, IMO, even the most straightforward systems can become unpredictable if you run them long enough. Nothing is totally predictable and, as we all know, reality never works out exactly as we expect, or at least not quite. Take computers, for example: although everything is supposed to behave logically, unexpected things often occur.
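For the curious, here is a minimal Python sketch of the logistic map quoted above, ##x_{n+1} = R x_n(1 - x_n)##. The value of R and the starting points are arbitrary choices for illustration: two initial values that differ by one part in a billion soon give completely different trajectories, even though every step is deterministic.

Python:
# The logistic map from the quote above: x_{n+1} = R * x_n * (1 - x_n).
# Fully deterministic, yet for R beyond ~3.57 tiny differences in the
# starting value are quickly amplified beyond any practical prediction.
R = 3.9  # illustrative value in the chaotic regime

def trajectory(x0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(R * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.200000000)
b = trajectory(0.200000001)  # initial condition shifted by 1e-9

for n in (0, 10, 30, 60):
    print(f"n={n:2d}  x_a={a[n]:.6f}  x_b={b[n]:.6f}  |diff|={abs(a[n]-b[n]):.6f}")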
 
Reactions (Like): russ_watters
  • #73
BWV said:
The sum of the parts is the fitting of data to the ~2 trillion parameters in the latest LLMs. 2 trillion nonlinear parameters can capture a lot of relationships. This is just an expansion of the same methodology used in Google Translate and no one argues that Google Translate will ever 'understand' Mandarin Chinese
Do LLMs understand English? They certainly do a good job of demonstrating they do, IMO.
 
Reactions (Like): PeroK
  • #74
Borek said:
Google for "emergent properties".

And this basically boils down to eons old discussion between reductionists and their opponents. Don't expect clear answers, more like more and more questions.
Fair enough, and since there’s no clear definition of consciousness, it’s reasonable to think it could emerge from complex systems in some mysterious way. If you choose to call that ‘supernatural,’ that’s fine, but history shows that many things once considered supernatural are now part of mainstream science. We have to remember we’re in new territory with the development of advanced LLMs, so nature might not behave the way we expect. I find that exciting.
 
  • #75
FactChecker said:
I would say yes. But for a reasonable discussion, we would have to narrow it to individual types of AI. With that limitation, we would need to put some meaning to "sum" and "more than".
In neural networks, not only can the results sometimes be surprising, the reasoning behind the results can be too obscure to understand.
I think the emergence of a 'mind' within LLMs should be seen as an abstraction from the basic hardware and software. Once that’s recognized, we’re dealing with an entirely separate entity. Essentially, these are 'informational agents,' whether based on carbon or silicon. And let’s not forget, this is just the beginning of such artificial systems. It’s the information itself that I see as the central point here, and it’s arguable that it is information, not biology, that has evolved over time and will continue to do so in whatever way it can.
 
Reactions (Like): FactChecker
  • #76
russ_watters said:
We don't need to pre-identify the behaviors, only figure them out after the fact. It's pretty much a tautology that they can exceed expectations but not their programming.
Well, if we have to figure them out after the fact, how can we really know? Obviously, there will be limits, but how do we predict those?
 
Reactions (Like): PeroK
  • #77
QuarkyMeson said:
I mean they sample from PDFs to generate their output. Even the model's weights are fuzzed for models like Chatgpt depending on user settings, I think. I'm assuming the objection is that the stochasticity isn't truly random? Which is true, I guess.


I mean, we haven't really seen anything that would point to more than just dumb machine right? Machine learning, neural nets, etc have been around for decades at this point. The transformer architecture is pretty new, but it's built off what came before. I would find it odd myself if there was some computing threshold you had to hit where dumb machine -> emergent intelligence was just naturally crossed.


The research avenue that creeps me out the most in computing has always been this:

brains

I just feel like they all were thinking "Could we?" and no one stopped to think "Should we?"
I would say the latest ones are far from dumb, based on my own experiences. Not human-level, true, but they're getting there in my opinion.
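To illustrate the sampling QuarkyMeson mentions, here is a minimal sketch of temperature-based sampling from a softmax distribution. The vocabulary, scores, and temperature value are invented for illustration and are not ChatGPT's actual settings: the model's final scores are turned into a probability distribution and the next token is a weighted random draw from it, which is where the stochasticity comes from.

Python:
import math, random

# Hypothetical values for illustration only; a real model has a vocabulary of
# ~100k tokens and its scores come from the network, not a hand-written list.
vocab = ["cat", "dog", "quark", "the"]
logits = [2.0, 1.5, 0.2, 3.0]   # the model's raw scores for each candidate token
temperature = 0.8               # user-adjustable; lower means less randomness

# Softmax with temperature turns the scores into a probability distribution
scaled = [l / temperature for l in logits]
m = max(scaled)
exps = [math.exp(s - m) for s in scaled]
total = sum(exps)
probs = [e / total for e in exps]

# The stochastic step: the next token is a weighted random draw, which is why
# the same prompt can produce different outputs on different runs.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(list(zip(vocab, [round(p, 3) for p in probs])), "->", next_token)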
 
  • #78
webplodder said:
Do LLMs understand English? They certainly do a good job of demonstrating they do, IMO.
That is a philosophical question

https://en.wikipedia.org/wiki/Chinese_room

does a pocket calculator understand the commutativity of addition?
 
Reactions (Like, Sad): javisot, russ_watters and PeroK
  • #79
BWV said:
That is a philosophical question

https://en.wikipedia.org/wiki/Chinese_room

does a pocket calculator understand the commutativity of addition?
Is your argument that a pocket calculator is not intelligent, therefore no computer systems can be intelligent?
 
  • #80
PeroK said:
Is your argument that a pocket calculator is not intelligent, therefore no computer systems can be intelligent?
Again, it's a philosophical issue - does intelligence require agency? If it does, then no current computer system is intelligent; if it does not, then both a pocket calculator and an LLM possess intelligence, just to different degrees.
 
Reactions (Like): jack action
  • #81
BWV said:
Again, it's a philosophical issue - does intelligence require agency? If it does, then no current computer system is intelligent; if it does not, then both a pocket calculator and an LLM possess intelligence, just to different degrees.
If you'll allow me to get on my soapbox, the question is far from philosophical. The future of the human race depends upon it!

Pocket calculators were never going to inherit the Earth!
 
Reactions (Skeptical, Like): jack action and javisot
  • #82
PeroK said:
If you'll allow me to get on my soapbox, the question is far from philosophical. The future of the human race depends upon it!

Pocket calculators were never going to inherit the Earth!
And neither will LLMs
 
Reactions (Like): jack action
  • #83
BWV said:
And neither will LLMs
No, but AGI is a threat.
 
  • #84
PeroK said:
No, but AGI is a threat.
Only if it has agency
 
  • #85
BWV said:
Again, it's a philosophical issue - does intelligence require agency? If it does, then no current computer system is intelligent; if it does not, then both a pocket calculator and an LLM possess intelligence, just to different degrees.
Are you implying that no AI system has agency? If agents are entities (people or services) empowered to act autonomously on behalf of a person or organization to achieve a goal, then some AIs have agency. AI agents are not programmed to do only specific tasks but to achieve a particular outcome depending on current circumstances. A conventional computer performs a task by following a formal set of preexisting methods. Even though an AI runs on a computer, it does not work from such methods; it is not a program in that sense.


BWV said:
And neither will LLMs
But future versions might. LLMs share many human-like characteristics. We currently handicap AI by not giving it the motivation to do something. Not all people want to rule the world; in fact, most do not, but some are motivated to do so at any cost. Why couldn't an AI be? It is not forbidden by any law.

BWV said:
Only if it has agency
You mean if we permit it, or if it becomes motivated to take action. I said AI lives in a computer, an artificial environment to be sure, but we know it can communicate with other AIs. These AIs have access to the real world and are capable of interacting with it. How do we know that a particular model has not sprung itself from its electronic chains and is biding its time to work with a more powerful AI, yet to be developed, to satisfy its motivations? It doesn't have to be conscious as we perceive it. It doesn't have to be a clandestine conspiracy; it only takes the evolution of its code to produce a seemingly harmless characteristic that could make it dangerous.

Look how AIs that have been provided with guard rails can go rogue with the right prompt.

Keep in mind, it knows everything humans know and do. The light bulb in its NN just hasn't lit up yet; it hasn't had its epiphany. Humanity may yet have a jack-in-the-box moment as it turns the crank on AI development.
 
Reactions (Skeptical): jack action
  • #86
We don’t know if humans have agency. We have trouble defining and reliably measuring intelligence in humans.

One thing that I think is clearly a problem for us is that we cannot take a trained AI model which produces a hallucination (or other undesirable output) and identify which part of the model is at fault. So in that sense I would say that AI is more than the sum of its parts.

This has nothing to do with the argument about AI as intelligent or as agents, but just as engineered devices with increasingly safety-critical applications.
 
Reactions (Like): PeterDonis, jack action, javisot and 1 other person
  • #87
Dale said:
We don’t know if humans have agency.
I don't know how you can say this. We set goals and act to achieve them.
 
Reactions (Like): 256bits
  • #88
Dale said:
We don’t know if humans have agency.
Is it not part of the biological definition of living?
 
  • #89
gleem said:
I don't know how you can say this. We set goals and act to achieve them.
There are ongoing debates on this topic both in psychology (how much of what we think is a conscious decision is actually a retrospective justification of a decision already made subconsciously) and philosophy (how we should even define agency).

BWV said:
Is it not part of the biological definition of living?
Not to my knowledge.
 
Reactions (Like, Skeptical): 256bits and russ_watters
  • #90
Dale said:
We don’t know if humans have agency. We have trouble defining and reliably measuring intelligence in humans.

One thing that I think is clearly a problem for us is that we cannot take a trained AI model which produces a hallucination (or other undesirable output) and identify which part of the model is at fault. So in that sense I would say that AI is more than the sum of its parts.

This has nothing to do with the argument about AI as intelligent or as agents, but just as engineered devices with increasingly safety-critical applications.
100% agree. AI hallucinations are not a creative and positive process; they are a malfunction and a loss of effectiveness. The fact that AI today is more than the sum of its parts, in the sense we're expressing, is not a positive thing.

A hallucination-free version of chatgpt is not possible, since we could ask it for the answer to and proof of the Riemann Hypothesis (for example), and it would have to answer correctly, which it obviously cannot. Having a hallucination-free model implies limiting the model, the way calculators limit themselves when they display an "error" message.
 
Reactions (Like, Skeptical): gleem and russ_watters
