Is A.I. more than the sum of its parts?

  • Thread starter: webplodder
  • #121
PeroK said:
It played millions of games against itself and mastered the game in four hours. It's not like it solved the game by exhausting all the possibilities. That could not be done by brute force, because there are too many possible moves in each position. Instead, it developed its own algorithm for assessing each position - based on pattern recognition and reinforcement learning.

The best human players learn from experienced players and coaches, who in turn have learned from the best players of the past. And nowadays, ironically, they also learn from chess engines. Once you get past beginner level, you would learn about pawn structure, outposts, weak pawns, weak squares, bishop against knight etc. These considerations would have been programmed into the algorithm of a conventional chess engine. These are long-term positional ideas that do not lead to immediate victory. AlphaZero figured out the relative importance of all these ideas for itself in four hours.

AlphaZero developed its own "understanding" of all those strategic ideas. In fact, it was slightly weaker than a conventional engine if forced to start from a very complicated position - where the brute force approach was hard to beat.

https://chess.fandom.com/wiki/AlphaZero
I don't know which computer program provides the move-by-move evaluation scores for black versus white shown in YouTube videos of chess games. It seems to evaluate positions strategically, far beyond my ability to comprehend. Also, players' moves, even at the highest levels, are judged by how closely they agree with the computer-recommended moves, which are regarded as the best moves.
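To make the self-play idea concrete, here is a minimal sketch of how an evaluator's feature weights could be tuned from game outcomes instead of being programmed by hand. This is not AlphaZero's actual method (which couples a deep network with Monte Carlo tree search); the feature names and the toy "game" generator are invented purely for illustration.

```python
# Minimal sketch, NOT AlphaZero's actual method: tune the feature weights of a
# position evaluator from self-play outcomes instead of hand-programming them.
# Feature names and the toy game generator are invented for illustration.
import random

FEATURES = ["material", "mobility", "king_safety"]
weights = {f: 0.0 for f in FEATURES}      # the "understanding" to be learned
LEARNING_RATE = 0.01

def evaluate(position):
    """Score a position as a weighted sum of its features."""
    return sum(weights[f] * position[f] for f in FEATURES)

def toy_self_play_game():
    """Stand-in for a self-play game: random position features plus a noisy
    outcome. A real system would generate positions by actually playing moves."""
    pos = {f: random.uniform(-1, 1) for f in FEATURES}
    # Pretend material matters most for winning; the learner must discover that.
    score = pos["material"] + 0.3 * pos["mobility"] + random.gauss(0, 0.2)
    return pos, (1.0 if score > 0 else -1.0)

for _ in range(100_000):                  # "millions of games" in the real thing
    pos, outcome = toy_self_play_game()
    error = outcome - evaluate(pos)       # reinforcement signal from the result
    for f in FEATURES:
        weights[f] += LEARNING_RATE * error * pos[f]

print(weights)  # "material" ends up with the largest weight, "king_safety" near zero
```

The point is only that the relative importance of the features is discovered from outcomes rather than specified in advance, which is what the quoted description of AlphaZero is getting at.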
 
  • #122
webplodder said:
Engineers have been safely improving complex systems (aircraft, nuclear reactors, software) for decades without perfect causal maps. Same principle applies here.
Not knowing the precise neuron isn't a show-stopper; it's just engineering under uncertainty.
webplodder said:
We hand over critical stuff to black-box systems every day - traffic lights, autopilot, even stock-trading algos - and we don't know every line of code. We just test, monitor, and build redundancy.
This isn't exactly true, nor do I think it's applicable to LLMs/AI. Typical engineering systems are supposed to be deterministic. We can't test every combination of inputs, but we test enough to be confident that we have programmed the system to behave the way we want it to (see the toy sketch below). When an unexpected behavior occurs in such a system, it isn't because the program has some emergent intelligence and made a different decision than we would have; it's because of an error in programming, testing, or sensors/inputs (or some combination thereof).

LLM-style AI is a risk for such systems because if the Believers are right it is inherently unpredictable due to its Agency, and if the nay-sayers are right it has loose and flawed programming.
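As a toy illustration of the determinism point (the traffic-light controller below is invented, not real control code): the same inputs always give the same output, so a finite test suite is meaningful and any surprise can be traced back to the spec, the tests, or the inputs.

```python
# Toy illustration of a deterministic engineered system: the output depends only
# on the inputs, so tests are reproducible. (Invented example, not real control code.)
def next_phase(phase, seconds_in_phase):
    """Simple traffic-light controller: advance when the current phase's time is up."""
    durations = {"green": 30, "yellow": 5, "red": 35}
    order = {"green": "yellow", "yellow": "red", "red": "green"}
    return order[phase] if seconds_in_phase >= durations[phase] else phase

# These checks either always pass or always fail - every run, on every machine.
assert next_phase("green", 31) == "yellow"
assert next_phase("green", 10) == "green"
assert next_phase("red", 35) == "green"
```

Put an LLM in that loop and you lose exactly that reproducibility, which is the risk being described.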
webplodder said:
We hand over critical stuff to black-box systems every day - traffic lights, autopilot, even stock-trading algos - and we don't know every line of code.
Unless by "we" you mean the public, we know every line of code. The engineers know it because they wrote it. Read up on the Boeing MCAS failure, and how much review and discussion that one little feature had internally at Boeing before being rolled out. The reasons for the failure are a little complicated but it isn't because nobody knew every line of code.

[BTW, it looks like some embedded quotes are messed up in some of your earlier posts.]
 
  • Likes: gleem and gmax137
  • #123
webplodder said:
if someone can grab the main bits of that report, spot the dodgy dog stuff or the DNA mess, and still call bullshit?
You're missing the point. What I was describing was not "calling bullshit" on a book report written from Cliff's Notes; it was teachers not "calling bullshit" because they didn't even realize the student who wrote the report hadn't actually read the book--the student was able to fake having read and understood it, based on the Cliff's Notes, well enough to fool the teacher.

In this analogy, the LLMs are the student and the humans who think it understands based on the text it outputs are the teacher.
 
  • Likes: gleem
  • #124
Dale said:
The debate is more about whether the word “agency” should be used to describe that.
Fair enough.

Dale said:
That is why I was focused on what I see as the major issue AI has as an engineered product: we cannot identify the source of its malfunctions (hallucinations).
This I agree is a major issue.
 
  • Likes: Dale
  • #125
Dale said:
There are ongoing debates on this topic both in psychology (how much of what we think is a conscious decision is actually a retrospective justification of a decision already made subconsciously) and in philosophy (how we should even define agency).
I brought up the point that consciousness may be an illusion. But ISTM that Freud might be right. Young humans often act on impulse without regard to consequences, which becomes less of a problem as they mature, so something is happening in the brain. Even if consciousness is an illusion, the subconscious may still "debate" the value of an action.

In the common vernacular, agency does not require the agent to initiate the task.
javisot said:
AI hallucinations are not a creative and positive process. It's a malfunction and a loss of effectiveness.
Would you consider a serendipitous discovery a hallucination?

jack action said:
No, the argument is that if you, @PeroK , attribute some form of intelligence to LLMs - no matter how small, no matter how you define it - then you must attribute some form of intelligence to a pocket calculator as well.
A calculator can only perform calculations, and only in a specified way. Not intelligent. LLMs are provided with information as humans are, to perform tasks as humans do and to augment their capabilities as humans can. NN-equipped AI is not programmed the same way as other computerized systems, which are designed to always do things in a specified way depending on the input. If you ask an AI a question at different times, it will probably give a different version of its previous response, just as a human would.

jack action said:
That is the point: analyzing patterns in the training data to discover something common that will point us in the right direction, something we haven't seen yet. The fact that humans can find it faster with an AI program than on their own doesn't give any sign of intelligence to the machine, especially not independent intelligence, an agency.
Again, AI does what humans do.
 
  • #126
gleem said:
LLMs are provided with a very, very restricted subset of information as humans are, to perform a very, very restricted subset of tasks as humans do, and to augment their capabilities in a very, very restricted subset of ways as humans can.
See the bolded additions I made above to your claims. They make a huge difference. The claim in the hype is AGI--artificial general intelligence. LLMs, at best, even if we accept for the sake of argument that they are "intelligent" in some way in their restricted domain (I don't, but there are many who do, apparently including you), are limited to that restricted domain--to a very specialized subset of information and tasks and capabilities. Humans are not.
 
  • #127
gleem said:
AI does what humans do.
In a very, very restricted domain, one can argue this, yes. I actually have no problem with the general idea that all "intelligence" boils down to some form of pattern recognition--an idea that has been around in the AI community for decades. But claims of "intelligence" for LLMs rest on the much, much stronger claim that all intelligence can be boiled down to pattern recognition in a corpus of text--a claim that seems to me obviously false, though certainly not unprecedented, since similar claims have also been around in the AI community for decades. There is much, much, much more to the universe than text.
 
  • #128
gleem said:
A calculator can only perform calculations, and only in a specified way. Not intelligent. LLMs are provided with information as humans are, to perform tasks as humans do and to augment their capabilities as humans can. NN-equipped AI is not programmed the same way as other computerized systems, which are designed to always do things in a specified way depending on the input. If you ask an AI a question at different times, it will probably give a different version of its previous response, just as a human would.
No one would argue that an ML model based on OLS is non-deterministic or somehow intelligent, nor would anyone make that claim for a simple NN that distinguishes cats from dogs. But scale up to hundreds of billions of non-linear parameters in a language model, and slightly different contexts will produce different answers to the same question - and that is before the controlled randomness that is built into the models to produce the effect you describe. It is still deterministic - chaotic, perhaps - but it's no more 'intelligent' than an OLS model in principle.
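As a hedged illustration of that contrast (the numbers are made up and this is not any particular model): an OLS fit is fully deterministic, while LLM decoding adds controlled randomness by sampling from a temperature-scaled softmax over the model's output scores.

```python
# Contrast a deterministic OLS fit with the "controlled randomness" of
# temperature sampling used in LLM decoding. All numbers are made up.
import numpy as np

# --- OLS: same data in, same coefficients out, every time ---
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=100)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("OLS coefficients:", beta)          # identical on every run with this data

# --- Temperature sampling: same scores in, possibly different token out ---
def sample_token(logits, temperature=1.0, rng=None):
    """Softmax over logits/temperature, then draw one token index."""
    rng = rng or np.random.default_rng()
    z = np.asarray(logits) / temperature
    p = np.exp(z - z.max())
    p /= p.sum()
    return rng.choice(len(p), p=p)

logits = [2.0, 1.5, 0.3]                  # model's scores for three candidate tokens
print([sample_token(logits, temperature=0.8) for _ in range(10)])  # varies run to run
```

Drive the temperature toward zero and the sampler becomes effectively deterministic - that is the "controlled" part of the randomness.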
 
  • #129
BWV said:
its no more 'intelligent' than an OLS model in principle
One could make a similar argument about humans--our brains are just physical systems running according to physical laws, after all. No one would argue that a single human neuron is intelligent, so why should it be any different if we scale up to 100 billion neurons?

Obviously this argument fails for humans (or at least I think so--but I'm not sure whether everyone in this discussion would agree that even we humans are "intelligent"!), so we can't accept it as it stands for LLMs either. We have to look at other differences between LLMs and humans, such as the ones I have pointed out in several previous posts, if we are going to claim that humans are intelligent while LLMs are not.
 
  • #130
PeterDonis said:
One could make a similar argument about humans--our brains are just physical systems running according to physical laws, after all. No one would argue that a single human neuron is intelligent, so why should it be any different if we scale up to 100 billion neurons?

Obviously this argument fails for humans (or at least I think so--but I'm not sure whether everyone in this discussion would agree that even we humans are "intelligent"!), so we can't accept it as it stands for LLMs either. We have to look at other differences between LLMs and humans, such as the ones I have pointed out in several previous posts, if we are going to claim that humans are intelligent while LLMs are not.
That is where the agency argument comes in - cells have agency, activation functions in a NN do not.
 
  • #131
BWV said:
cells have agency
In what sense?

BWV said:
activation functions in a NN do not
In what sense?

Both are just following physical laws. What's the difference between them that makes one an "agent" and the other not?
 
  • #132
PeterDonis said:
In what sense?

In what sense?

Both are just following physical laws. What's the difference between them that makes one an "agent" and the other not?

I'm not claiming there is universal consensus in biology (or that I have any expertise to judge), but it's not hard to find examples in the literature like:
Living cells not only construct themselves, but as open thermodynamic systems, they must ‘eat’ to survive. In general, cells can evolve to ‘eat’ because living cells are nonlinear, dynamical systems with complex dynamical behaviour that enables living cells, receiving inputs from their environment and acting on that environment, to sense and categorize their worlds, orient to relevant features of their worlds, evaluate these as ‘good or bad for me’ and act based on those evaluations. This is the basis of agency and meaning. Agency and meaning are immanent in evolving life. The semantic meaning is: ‘I get to exist for a while’. The capacity to ‘act’ is immanent in the fact that living cells achieve constraint closure and do thermodynamic work to construct themselves. The same capacity enables cells to do thermodynamic work on their environment. A cell’s action is embodied, enacted, embedded, extended and emotive [40,42,43,52,53].

Cells are molecular autonomous agents, able to reproduce, do one or more thermodynamic work cycles and make one or more decisions, good or bad for me [52,53]. The capacity to learn from the world, categorize it reliably and act reliably may be maximized if the cell, as a nonlinear dynamical system, is dynamically critical, poised at the edge of chaos [54,55]. Good evidence now demonstrates that the genetic networks of many eukaryotic cells are critical [56,57]. Such networks have many distinct dynamical attractors and basins of attraction. Transition among attractors is one means to ‘make a decision’. It will be of interest to test whether Kantian whole autocatalytic sets can evolve to criticality.
https://pmc.ncbi.nlm.nih.gov/articl...s the basis of,sets can evolve to criticality.
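For readers unfamiliar with the attractor language in that quote, here is a toy Kauffman-style random Boolean network (the update rules are randomly generated for illustration, not taken from any real genetic network). Every start state falls into one of a small number of attractors, and in this picture a "decision" is a transition between attractors.

```python
# Toy Kauffman-style random Boolean network: enumerate its attractors and show
# that every start state falls into one of them. Rules are random, for illustration.
from itertools import product
import random

random.seed(1)
N, K = 3, 2                               # 3 genes, each reading 2 input genes
inputs = [random.sample(range(N), K) for _ in range(N)]
tables = [{bits: random.randint(0, 1) for bits in product((0, 1), repeat=K)}
          for _ in range(N)]

def step(state):
    """Synchronously update every gene from its K input genes."""
    return tuple(tables[i][tuple(state[j] for j in inputs[i])] for i in range(N))

attractors = set()
for start in product((0, 1), repeat=N):   # every possible initial state
    seen, s = [], start
    while s not in seen:                  # iterate until a state repeats
        seen.append(s)
        s = step(s)
    cycle = tuple(seen[seen.index(s):])   # the repeating part is the attractor
    attractors.add(min(cycle[i:] + cycle[:i] for i in range(len(cycle))))  # canonical rotation

print(f"{len(attractors)} attractor(s):", attractors)
```

Whether dynamics like these amount to "agency" is of course exactly what is being debated here; the sketch only shows concretely what "attractors and basins of attraction" mean.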
 
  • #133
gleem said:
Would you consider a serendipitous discovery a hallucination?
No. Furthermore, you touch on an important point here: the difference between a mistake and a hallucination is still being defined. Both a hallucination and a mistake are errors, but a hallucination is characterized by the impossibility of identifying the source of the problem.
 
  • #134
I'll mention a problem I see with the term "intelligence." It's just a word, a concept we use to refer to a very diverse set of abilities. Measuring intelligence is really measuring a word; what we can actually measure is our ability to solve different problems, and from that we indirectly infer that we are "very intelligent" or "not very intelligent".

We humans describe ourselves as "intelligent" because, on average, we share the same set of abilities to relatively similar degrees.



(I'll also mention a curious phrase I came up with in a conversation with ChatGPT: "I'm analogous to a Turing machine without the halting problem. I follow deterministic processes that aren't necessarily exact.")
 
  • #135
And I do agree with Dale's point: the "intelligent, yes or no" debate is really pointless, perhaps a waste of time. The important thing is the trade-off: how general is my model versus how many hallucinations does it generate?

We want to create models that are as general as possible and generate as few hallucinations as possible.
 
  • Likes: Dale
