How intelligent are large language models (LLMs)?

AI Thread Summary
Large language models (LLMs) are compared to a Magic 8-Ball, suggesting they lack true intelligence despite their advanced outputs. François Chollet argues that LLMs will never achieve genuine intelligence, while some participants believe human reasoning is often flawed and biased, similar to LLM outputs. Concerns are raised about the potential for LLMs to surpass human decision-making capabilities, especially in critical areas like climate change and healthcare. The discussion highlights the limitations of LLMs in providing reliable diagnoses despite performing well on exams, emphasizing the need for caution regarding AI's role in society. Ultimately, the conversation underscores the complexity of defining intelligence and the risks associated with underestimating AI's potential impact.
A.T.
Science Advisor
TL;DR Summary
François Chollet argues that LLMs are not ever going to be truly "intelligent" in the usual sense -- although other approaches to AI might get there.
LLVM's are essentially a Magic 8-Ball, just with more outputs. How intelligent are they?
 
Vanadium 50 said:
LLVM's are essentially a Magic 8-Ball, just with more outputs. How intelligent are they?
That's an excellent analogy @Vanadium 50 !

Magic 8-Balls have a multi-sided die with human-written answers on each side, and the random shake determines which side will appear in the window.

PS: I once gave that analogy to Garrett Lisi to use when explaining his E8 theory, with the multi-sided die representing his hyperdimensional particle that would sometimes appear as one particle or another depending on the circumstances.
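To make the analogy concrete, here is a minimal sketch (all phrases and probabilities below are invented for illustration): the 8-Ball draws uniformly from a small, fixed, human-written answer set, while an LLM samples each next token from a probability distribution that depends on the context so far - "more outputs", and a die whose weighting changes as it goes.

```python
import random

# Magic 8-Ball: a fixed, human-written answer set, one answer drawn uniformly per shake.
EIGHT_BALL_ANSWERS = ["It is certain.", "Ask again later.", "Don't count on it.", "Very doubtful."]

def shake_eight_ball():
    return random.choice(EIGHT_BALL_ANSWERS)

# LLM, caricatured: at each step, sample the next token from a probability
# distribution conditioned on everything generated so far. The "die" is the whole
# vocabulary, and its weighting changes with context. (Toy numbers, not a real model.)
TOY_MODEL = {
    ("the",): {"cat": 0.6, "dog": 0.4},
    ("the", "cat"): {"sat": 0.9, "ran": 0.1},
}

def sample_next_token(context):
    dist = TOY_MODEL.get(tuple(context), {"<end>": 1.0})
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

print(shake_eight_ball())
print(sample_next_token(["the"]), sample_next_token(["the", "cat"]))
```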
 
A.T. said:
TL;DR Summary: François Chollet argues that LLMs are not ever going to be truly "intelligent" in the usual sense -- although other approaches to AI might get there.

Astrophysicist Sean Carroll interviews AI researcher François Chollet:
https://www.preposterousuniverse.co...eep-learning-and-the-meaning-of-intelligence/
The fundamental problem, IMO, with Chollet's argument is that he exaggerates human intelligence. The majority of humans cannot perform objective reasoning and data analysis. Instead, most people do something akin to what ChatGPT does, only with limited, biased data, biased a priori reasoning and a good measure of dishonesty thrown in.

I saw an interesting video recently where someone asked one of the LLM's to rate all post-war UK Prime Ministers. It was a stunningly more intelligent, unbiased and objective analysis than almost any human could produce. As most humans are driven by their largely unsubstantiated political biases.

On many issues, humans are in a state of denial and our own lack of intelligence is one of them. One of the biggest dangers is the arrogance of Chollet and his assumed superiority of human thought. And his pointless quibbling over what intelligence really is. On a practical level, there is a real danger that these systems could out-think us and outwit us - using our obvious human failings against us. Especially as the human race remains divided into factions who distrust or even hate each other. On a practical level, we cannot unite to prevent catastrophic climate change. And, on a practical level, we may be susceptible to being usurped by AI.

This last point is essentially what Geoffrey Hinton has been saying. For example:

[embedded video]

I would stress the parallel with climate change. If catastrophic climate change is a risk, then there is no point in trying to convince yourself that it can't happen. You have to assume the risk is real and work on that basis. The same is true with the existential threat from AI. Who wants to bet that Chollet is right and pretend there is nothing to worry about? Like climate change, by the time we realise it is actually happening, then it's too late!
 
PeroK said:
The fundamental problem, IMO, with Chollet's argument is that he exaggerates human intelligence. The majority of humans cannot perform objective reasoning and data analysis. Instead, most people do something akin to what ChatGPT does, only with limited, biased data, biased a priori reasoning and a good measure of dishonesty thrown in.
LLMs don't add anything to this, though - the superiority of quantitative, statistically based decision-making in many (but not all) areas was well established by Kahneman & Tversky and other cognitive psychologists in the 60s and 70s.
PeroK said:
I saw an interesting video recently where someone asked one of the LLM's to rate all post-war UK Prime Ministers. It was a stunningly more intelligent, unbiased and objective analysis than almost any human could produce. As most humans are driven by their largely unsubstantiated political biases.
Yes, but political views and biases represent subjective policy preferences, so how do you objectively rate politicians other than by how well they implemented the preferences of their constituents?
PeroK said:
On many issues, humans are in a state of denial and our own lack of intelligence is one of them. One of the biggest dangers is the arrogance of Chollet and his assumed superiority of human thought. And his pointless quibbling over what intelligence really is. On a practical level, there is a real danger that these systems could out-think us and outwit us - using our obvious human failings against us. Especially as the human race remains divided into factions who distrust or even hate each other. On a practical level, we cannot unite to prevent catastrophic climate change. And, on a practical level, we may be susceptible to being usurped by AI.

This last point is essentially what Geoffrey Hinton has been saying. For example:



I would stress the parallel with climate change. If catastrophic climate change is a risk, then there is no point in trying to convince yourself that it can't happen. You have to assume the risk is real and work on that basis. The same is true with the existential threat from AI. Who wants to bet that Chollet is right and pretend there is nothing to worry about? Like climate change, by the time we realise it is actually happening, then it's too late!

It's a long way from AI being able to provide better decisions than humans to AI possessing the agency to implement those decisions.

A good example of how far LLMs are from critical decision-making is that LLMs which can pass medical exams cannot provide reliable medical diagnoses:

https://www.nature.com/articles/s41591-024-03097-1

Abstract

Clinical decision-making is one of the most impactful parts of a physician’s responsibilities and stands to benefit greatly from artificial intelligence solutions and large language models (LLMs) in particular. However, while LLMs have achieved excellent performance on medical licensing exams, these tests fail to assess many skills necessary for deployment in a realistic clinical decision-making environment, including gathering information, adhering to guidelines, and integrating into clinical workflows. Here we have created a curated dataset based on the Medical Information Mart for Intensive Care database spanning 2,400 real patient cases and four common abdominal pathologies as well as a framework to simulate a realistic clinical setting. We show that current state-of-the-art LLMs do not accurately diagnose patients across all pathologies (performing significantly worse than physicians), follow neither diagnostic nor treatment guidelines, and cannot interpret laboratory results, thus posing a serious risk to the health of patients. Furthermore, we move beyond diagnostic accuracy and demonstrate that they cannot be easily integrated into existing workflows because they often fail to follow instructions and are sensitive to both the quantity and order of information. Overall, our analysis reveals that LLMs are currently not ready for autonomous clinical decision-making while providing a dataset and framework to guide future studies.
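To make the kind of stepwise evaluation described above more tangible, here is a rough, hypothetical sketch of what such a loop might look like; it is not the paper's code, and every field name, prompt and check below is a placeholder rather than the actual dataset schema or framework API.

```python
# Hypothetical sketch of a stepwise clinical evaluation, loosely in the spirit of the
# setup described in the abstract. All names, fields and checks are placeholders.
def evaluate_case(ask_model, case):
    """ask_model: callable taking a prompt string and returning the model's text reply."""
    history = case["presentation"]                      # start from the initial complaint only
    for step in ["physical_exam", "labs", "imaging"]:   # information is revealed gradually
        reply = ask_model(f"History so far:\n{history}\nWhat do you want to order next?")
        if step.replace("_", " ") in reply.lower():     # crude check: did the model ask for it?
            history += "\n" + case[step]
    diagnosis = ask_model(f"History:\n{history}\nGive the single most likely diagnosis.")
    return case["ground_truth"].lower() in diagnosis.lower()

# Usage with a trivial stand-in "model" that always gives the same reply:
toy_case = {
    "presentation": "Acute right lower quadrant abdominal pain.",
    "physical_exam": "Rebound tenderness at McBurney's point.",
    "labs": "Elevated white cell count.",
    "imaging": "CT shows an inflamed appendix.",
    "ground_truth": "appendicitis",
}
print(evaluate_case(lambda prompt: "Order labs and imaging; likely appendicitis.", toy_case))
```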
 
Define what intelligence is and we can answer your question. The classical measure of intelligence has been an IQ test. Based on that, there is no doubt that LLMs are intelligent.
 
I get that they can be bad at math and certain types of reasoning. I have met many people who have worse capabilities in both. Should we declare those people as unintelligent and completely useless? If there's going to be a definition, it should go both ways. Otherwise it's a biased definition.

One thing is for sure, ChatGPT can stay on topic better than my spouse. :oldwink:
 
Vanadium 50 said:
LLVM's are essentially a Magic 8-Ball, just with more outputs.
No, LLVM (originally the acronym for a low-level virtual machine) is a suite of middleware that facilitates cross-platform implementation of compiled languages :-p

I often make this slip myself.
 
  • #10
This thread seems to have become confused between large language models (LLMs), which is what the interview in the OP was about, and artificial intelligence (AI) in general.

LLMs aren't 'bad at math': they don't do any math. AlphaProof and AlphaGeometry 2, which seem to be really good at proving things, are not LLMs any more than is AlphaZero, which is really good at chess. They are all examples of AI, but none of them has any claim to being an artificial general intelligence (AGI)*.

Chollet is not "quibbling" over what intelligence is, he is simply pointing out that LLMs are designed to match patterns in the data they are fed, and this limits their capability. He is not saying that human intelligence is inherently superior to AI - in fact quite the opposite: about 24 minutes in he says "so for instance, genetic algorithms if implemented the right way, have the potential of demonstrating true creativity and of inventing new things in a way that LLMs cannot, LLMs cannot invent anything 'cause they're limited to interpolations. A genetic algorithm with the right search space and the right fitness function can actually invent entirely new systems that no human could anticipate", and provides an example.
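For readers who have not met the terms Chollet uses there, here is a minimal genetic-algorithm sketch; the search space (fixed-length bit strings) and the fitness function (how many bits match an arbitrary target pattern) are toy choices for illustration, not anything from the interview.

```python
import random

# Toy genetic algorithm: search space = fixed-length bit strings,
# fitness = how many bits match an arbitrary target pattern.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # selection pressure: keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

print(max(fitness(g) for g in population), "of", len(TARGET), "bits correct")
```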

We seem to have this rabbit-hole discussion about once a month; someone should write an Insight. Oh.



* some members of the Google DeepMind team do make this claim for ChatGPT in https://arxiv.org/abs/2311.02462
 
  • #11
pbuk said:
We seem to have this rabbit-hole discussion about once a month, someone should write an Insight. Oh.
Just because we have that Insight doesn't mean that everything it says is valid. Nor should it silence debate on the topic.
 
  • #12
pbuk said:
Chollet is not "quibbling" over what intelligence is, he is simply pointing out that LLMs are designed to match patterns in the data it is fed and this limits their capability.
Exactly. Knowing the limits of a technology is important when you use it.

He is directly quantifying the general-intelligence abilities of AIs using the ARC tests, where average humans do much better than LLMs. These tests consist of novel questions that have not been widely distributed on the Internet yet, so LLMs cannot have been trained on them.
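For context, ARC tasks are small grid-transformation puzzles: a handful of input/output example grids plus a test input, and the solver must infer the hidden rule. A toy task in roughly that style (the grids, the rule and the candidate solver below are made up for illustration):

```python
# A made-up ARC-style task: each example maps an input grid to an output grid,
# and the hidden rule here is simply "mirror the grid left-to-right".
task = {
    "train": [
        {"input": [[1, 0], [2, 3]], "output": [[0, 1], [3, 2]]},
        {"input": [[4, 5, 0], [0, 0, 6]], "output": [[0, 5, 4], [6, 0, 0]]},
    ],
    "test": {"input": [[7, 8], [9, 0]]},
}

def candidate_rule(grid):
    """One hypothesis a solver might try: reverse every row."""
    return [list(reversed(row)) for row in grid]

# A solver is scored on whether its rule reproduces all training pairs
# and then produces the correct output for the unseen test input.
assert all(candidate_rule(ex["input"]) == ex["output"] for ex in task["train"])
print(candidate_rule(task["test"]["input"]))   # -> [[8, 7], [0, 9]]
```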
 
  • #13
PeroK said:
The fundamental problem, IMO, with Chollet's argument is that he exaggerates human intelligence. The majority of humans cannot perform objective reasoning and data analysis. Instead, most people do something akin to what ChatGPT does, only with limited, biased data, biased a priori reasoning and a good measure of dishonesty thrown in.

I saw an interesting video recently where someone asked one of the LLM's to rate all post-war UK Prime Ministers. It was a stunningly more intelligent, unbiased and objective analysis than almost any human could produce. As most humans are driven by their largely unsubstantiated political biases.
But an LLM is just a statistical cross section of human-generated data. It's data analysis of those inputs - those human-biased opinions. It isn't doing any political analysis at all, objective or otherwise - and I agree with @BWV: there is inherently no such thing.

I wonder if they asked it what political party it supports.
 
  • #14
russ_watters said:
But an LLM is just a statistical cross section of human-generated data. It's data analysis of those inputs - those human-biased opinions. It isn't doing any political analysis at all, objective or otherwise
It doesn't matter how it does it. That's the point. You can quibble that it's not intelligent all you like. It does stuff that stands up to objective analysis.

That it has no independent political opinion is not in itself a lack of intelligence.

It's not human, but it is intelligent.
 
  • #15
PeroK said:
It does stuff that stands up to objective analysis.
Objective analysis is exactly what Chollet is applying to LLMs and other AI approaches. And it shows that LLMs are good at interpolating what has been fed into them, but not good (worse than humans) at extrapolating from it.

But even in the interpolation part LLMs are not very efficient, given that you have to feed them much more text than a human could ever read, just to make them sound like a human.
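A small numerical illustration of the interpolation/extrapolation distinction (the function, the fit and the degree are arbitrary choices for the sketch): a flexible curve fitted only to samples from [0, π] tracks the truth well inside that range and typically goes badly wrong outside it.

```python
import numpy as np

# Fit a curve to samples of sin(x) taken only on [0, pi] ("training data"),
# then evaluate inside that range (interpolation) and outside it (extrapolation).
rng = np.random.default_rng(0)
x_train = rng.uniform(0, np.pi, 50)
y_train = np.sin(x_train)

coeffs = np.polyfit(x_train, y_train, deg=7)     # flexible fit, a stand-in "curve matcher"
model = np.poly1d(coeffs)

for x in [1.0, 2.0, 5.0, 8.0]:                   # first two inside the training range, last two outside
    print(f"x={x}: model={model(x):+.3f}  true={np.sin(x):+.3f}")
```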
 
  • #16
Some of the latest LLMs have been shown to have emergent capabilities for analogical reasoning, which I assume are only going to get better, and which are my primary reason to consider current LLMs "intelligent". But since it is such a difficult term to agree on (even in a purely human context), perhaps it would in a sense be more productive for discussions to characterize LLMs (when they are producing correct output) as "wise" rather than "intelligent"?
 
  • #17
PeroK said:
It does stuff that stands up to objective analysis.
Yes, but that doesn't mean the stuff it does qualifies as "intelligent". You say:

PeroK said:
The majority of humans cannot perform objective reasoning and data analysis. Instead, most people do something akin to what ChatGPT does, only with limited, biased data, biased a priori reasoning and a good measure of dishonesty thrown in.
But you are not saying here that ChatGPT is intelligent; you are saying that humans (or at least most of them) are not intelligent, at least not in the domain of activity under discussion. And in any case "objective reasoning and data analysis" is only one of many possible definitions of "intelligent", and IMO it's a very narrow one. Humans do lots of things that can reasonably be called intelligent but which don't fall into that category.
 
  • #18
Filip Larsen said:
Some of the latest LLMs have been shown to have emergent capabilities for analogical reasoning, which I assume are only going to get better, and which are my primary reason to consider current LLMs "intelligent" ...
And this is what many people refuse to acknowledge. At PF we have declared that LLM's are no more intelligent than a washing machine and that you may as well ask a toaster to help with your homework. Meanwhile, from lay people to high school students to researchers, many are getting "intelligent" and insightful answers from them. Even if those answers are not perfect.

It doesn't matter that we have an Insight that "proves" that ChatGPT is unreliable - in practical terms it is more reliable than almost any individual human being.

And, the speed of development is such that in all the areas where we still claim human superiority (like medical diagnoses), it's only a matter of time before the LLMs can outdo the human experts. Or, at least, that is far more likely than that the development grinds to a halt because the vital spark of human intelligence is missing and cannot be simulated.
 
  • #19
PeroK said:
I saw an interesting video recently where someone asked one of the LLM's to rate all post-war UK Prime Ministers. It was a stunningly more intelligent, unbiased and objective analysis than almost any human could produce.
How do you know? On what basis is this claim made?
 
  • #20
There are cases where LLMs have decoded ancient text that no human had ever decoded before. If they are just parroting back what they were trained on, that couldn't happen. What they are doing when training is building a model of the world inside their neural networks, just like the model of the world that you build when you train your neural network by interacting with the environment. So I think the continued cries of "they can't be intelligent because they are not human!!" are missing the mark.
 
  • #21
PeterDonis said:
How do you know? On what basis is this claim made?
I used my intelligence to judge the analysis.
 
  • #22
PeroK said:
in practical terms it is more reliable than almost any individual human being.
But when it comes to getting answers to hard questions, we don't just ask a random individual human being. We draw on cumulative knowledge and understanding that has been built up over a long time. For example, the knowledge I draw on to answer questions in the relativity forum comes from textbooks and peer-reviewed papers that describe detailed theoretical models and key experiments, plus my own personal experience of living in a curved spacetime. LLMs don't have anything like that. Sure, the text of Misner, Thorne & Wheeler might be in its training data, but the word frequency mining it's doing on that data is very different from what I'm doing when I understand the equations and use them to work problems. It's not even the same as what, for example, Wolfram Alpha is doing when you feed it a question.
 
  • #23
PeroK said:
I used my intelligence to judge the analysis.
How? You have been saying humans aren't intelligent. But now you're saying you are?
 
  • #24
PeroK said:
It doesn't matter how it does it. That's the point. You can quibble that it's not intelligent all you like. It does stuff that stands up to objective analysis.
Who is doing the judging? This entire line of reasoning is circular. It's programmed by humans, gathers and analyzes the opinions of humans, and then is judged by humans on its "objectivity" in how it aggregates those opinions.

What's worse is this: There's literally no such thing as completely objective when it comes to politics. Much of what separates positions is pure opinion.

But at least we should all be able to agree, objectively, that the Dallas Cowboys suck and any LLM that disagrees was improperly coded.
PeroK said:
That it has no independent political opinion is not in itself a lack of intelligence.
I agree with that. What makes it not intelligent is that it doesn't think. My calculator is better at math than I am, but that's not the bar most people set for defining "AI".

PeroK said (in a prior post):
The fundamental problem, IMO, with Chollet's argument is that he exaggerates human intelligence.
There's more than one kind of intelligence. A spreadsheet is orders of magnitude more intelligent than I am if we judge each of us on our ability to do math. But computers still suck at coordinating movement, which humans barely have to think to do. The trick with AI, with the specific aim of replacing humans, is not in how smart they are.
 
  • #25
PeterDonis said:
But when it comes to getting answers to hard questions, we don't just ask a random individual human being. We draw on cumulative knowledge and understanding that has been built up over a long time. For example, the knowledge I draw on to answer questions in the relativity forum comes from textbooks and peer-reviewed papers that describe detailed theoretical models and key experiments, plus my own personal experience of living in a curved spacetime. LLMs don't have anything like that. Sure, the text of Misner, Thorne & Wheeler might be in its training data, but the word frequency mining it's doing on that data is very different from what I'm doing when I understand the equations and use them to work problems. It's not even the same as what, for example, Wolfram Alpha is doing when you feed it a question.
That was your training data and you use your algorithms (whatever they are) to process what you know and answer questions in a given context. That process has been simulated by LLMs. If I want to ask a question about GR, I don't care how the answer is generated. I'm only interested in the quality of the answer. It may be that your answers are still superior to an LLM. And, perhaps that will always be the case. Personally, however, I doubt that. Moreover, an LLM has all the advantages that IT has over humans - 24x7 availability, instant replies and unlimited patience (!) - which may balance the equation in its favour, even if your eventual answer is objectively somewhat superior in content.
 
  • #26
russ_watters said:
Who is doing the judging? This entire line of reasoning is circular.
I am. That line of reasoning comes from an intelligent, educated human (me). So, your argument is against human intelligence (mine). If I am incapable of intelligent reasoning, then where does that leave us?
 
  • #27
PeroK said:
That was your training data and you use your algorithms (whatever they are) to process what you know and answer questions in a given context. That process has been simulated by LLMs.
You have no basis for this claim since you don't know what human brains do with their input data. I strongly doubt that whatever human brains are doing with input data is anywhere near as simple as what LLMs do with their input data. For one thing, human input data is much more varied than text, which is all that LLMs can take as input.
 
  • #28
PeroK said:
If I want to ask a question about GR, I don't care how the answer is generated. I'm only interested in the quality of the answer.
But if you don't know how the answer is generated, you have only your own prior knowledge of the subject to use in judging the quality of the answer. Which makes the answer useless; you can only judge its correctness if you already know what's correct.

With a GR textbook, OTOH, you know a lot about how its content is generated, which means you don't have to just rely on your own prior knowledge. You can learn from a textbook by accepting the fact that it contains a lot of information that is accurate--because of the process that produced it--but which you don't currently have. You can't learn from an LLM that way.
 
  • #29
PeroK said:
If I am incapable of intelligent reasoning, then where does that leave us?
You tell us. You are the one who has been arguing that humans are incapable of intelligent reasoning.
 
  • #30
PeroK said:
And this is what many people refuse to acknowledge. At PF we have declared that LLM's are no more intelligent than a washing machine and that you may as well ask a toaster to help with your homework.

...in practical terms it is more reliable than almost any individual human being.
This hyperbole is not helpful. Nobody is claiming LLMs can't give useful answers. Or even that they can do it less often than an unfiltered cross section of the internet. The difference between ChatGPT and PF is that on ChatGPT you are asking a filtered cross section of the internet what it thinks of Relativity whereas on PF you are asking physics professors. Or for homework help -- well, ChatGPT doesn't do homework help, does it? It just gives answers.

PeroK said:
It may be that your answers are still superior to an LLM. And, perhaps that will always be the case. Personally, however, I doubt that.

We agree on that. And when that happens, we should definitely revisit our policy.
 
  • #31
phyzguy said:
There are cases where LLMs have decoded ancient text that no human had ever decoded before. If they are just parroting back what they were trained on, that couldn't happen. What they are doing when training is building a model of the world inside their neural networks, just like the model of the world that you build when you train your neural network by interacting with the environment. So I think the continued cries of "they can't be intelligent because they are not human!!" are missing the mark.
I mean... an LLM figuring out a language seems like a task pretty well in its wheelhouse.
 
  • #32
PeroK said:
I am. That line of reasoning comes from an intelligent, educated human (me). So, your argument is against human intelligence (mine). If I am incapable of intelligent reasoning, then where does that leave us?
Lol, I didn't say you aren't intelligent, I said you aren't objective.
 
  • #33
Is the question easier to agree on if flipped around: What kind of abilities or qualities should a (future) model exhibit to be considered of, say, average human intelligence?

I think perhaps the OP question (or the "flipped" question) is not really that interesting in regard to LLMs. Research (driven by whatever motive) will by all accounts drive LLMs to be more and more likely to produce what most would consider output generated by an intelligence (i.e. a sort of "intelligence is in the eye of the beholder" measure), and the interesting question in that context seems more to be whether this path of evolution will be "blocked" by some fundamental but so-far undiscovered mechanism or model structure.

Compare, if you will, with the study of animal intelligence. Clearly the mere presence of a brain in an animal does not imply it is capable of what we would classify as intelligence, but some animals (e.g. chimpanzees) are clearly able to exhibit intelligent, or at least very adaptive, behavior in their domain even if they cannot be trained to explain general relativity. In that context I guess my question becomes: what set of mechanisms or structures in the human brain, compared to such an animal brain, makes it qualitatively more capable of intelligent behavior? Considering homo sapiens have a common evolutionary ancestor with every species on this planet, I can only see the significant difference being the structure and scale of the brain. And if so, why shouldn't LLMs with the right size and structure also be able to achieve human-level intelligence via such an evolutionary path? I am not saying such a path is guaranteed to be found, more that such a path has already been shown to exist and be evolutionarily reachable in the example of homo sapiens, so why not also with LLMs as a starting point?

(I realize that discussion of the level of intelligence exhibited by possible future models is not the question the OP posed, but, just to repeat myself, since we have such trouble answering that question maybe it is easier or more relevant to discuss whether there is anything fundamentally blocking current models from evolving to a point where everyone would agree: yes, now the behavior is intelligent.)
 
  • #34
Filip Larsen said:
why shouldn't LLM's with the right size and structure not also be able to achieve human level intelligence via such an evolutionary path?
It took millions of years for humans to evolve whatever intelligence we have, and that was under selection pressures imposed by the necessity of surviving and reproducing in the real world. Whatever "evolution" is being used with LLMs has been happening for a much, much shorter time and under very different selection pressures. So I don't see any reason to expect LLMs to achieve human level intelligence any time soon on these grounds.
 
  • #35
PeterDonis said:
Whatever "evolution" is being used with LLMs has been happening for a much, much shorter time and under very different selection pressures.
Yes, but with an artificial selection pressure much more focused on optimizing towards behavior (output) that we will consider intelligent. In our research towards general AI we are able to establish a much more accelerated evolution, encompassing scales and structures that may vary wildly over a few "generations", limited only by hardware efficiency and energy consumption at each cycle, i.e. without also having the selective pressures for a body to survive and compete in a physical environment.

But, as mentioned, my question was not really about how long an evolutionary path towards general AI would take, but more whether there is any reason why such a path should not exist using more or less the known LLM mechanisms as a starting point. Or put differently: if LLMs can already now exhibit emergent reasoning by analogy (which I understand is both surprising and undisputed), it is hard for me to see why other similar traits that we consider to be part of "intelligent lines of thought" could not also be emergent at some scale or structure of the models.
 
  • #36
Filip Larsen said:
with an artificial selection pressure much more focused on optimizing towards behavior (output) we will consider intelligent.
But the output is just text, as is the input. Text is an extremely impoverished form of input and output. Much of what we humans do that makes us consider ourselves intelligent has nothing to do with processing text.

Filip Larsen said:
if LLMs can already now exhibit emergent reasoning by analogy (which I understand is both surprising and undisputed)
I don't think it's undisputed. It's a claim made by proponents but not accepted by skeptics. I think it's way too early to consider any kind of claim along these lines as established.
 
  • #37
Filip Larsen said:
without also having the selective pressures for a body to survive and compete in a physical environment.
There has long been a school of thought in AI that holds that no entity can really be intelligent if it is not embodied and does not have to deal with all the issues involved in directly perceiving and acting on an external, physical environment. I don't think LLMs as they currently exist do anything to refute such a position.
 
  • #38
pbuk said:
We seem to have this rabbit-hole discussion about once a month
Everything that needs to be said has been said, but not everyone has said it yet.
 
  • #39
PeterDonis said:
I think it's way too early to consider any kind of claim along these lines as established.
Coming from you, without at the same time insisting that I show you references, I take that as an indication that we have roughly the same understanding of the current state regarding emergent reasoning by analogy, which is good enough for me.

PeterDonis said:
There has long been a school of thought in AI that holds that no entity can really be intelligent if it is not embodied and does not have to deal with all the issues involved in directly perceiving and acting on an external, physical environment.
Yes, but to my knowledge this idea originates from the AI dry periods before LLMs, when people were looking for what could be missing. I first heard about it in the early 90s, when a local professor at my university held a presentation to share a realization he had about embodiment most likely being required for the brain to be able to build and maintain a model of the world (i.e. learn). It is not a bad idea and seems to be true for the evolution of the human brain, so why not AI as well, but it is also an idea that so far (to my knowledge) has had much less evidence than emergent behaviors in LLMs, so if you are sceptical about the latter why are you not also sceptical of the former?

Anyway, I mainly joined in the discussion to express that I am sceptical towards statements similar to "LLMs cannot achieve human-level intelligence because they are just crunching numbers", not to revisit every argument along the way if we are all just going to hold our positions anyway.
 
  • #40
I have decided that I'm not going to participate in such debates until there is agreement on the definition of "intelligence". That will never happen, thus freeing up time for all sorts of other things.
 
  • #41
Filip Larsen said:
it is also an idea that so far (to my knowledge) has had much less evidence than emergent behaviors in LLM's
Evidence of what? If you are saying we have evidence of intelligence from emergent behaviors in LLMs, I'm not sure I agree. Indeed, the argument @PeroK gave earlier in this thread was that the behavior of LLMs does not show intelligence; it shows better performance on some tasks involving text than the average human, but the average human, according to @PeroK, is not intelligent at those tasks.

As for embodied AI, I would suggest, for example, looking up the Cog project at the MIT AI lab.
 
  • #42
PeterDonis said:
Evidence of what? If you are saying we have evidence of intelligence from emergent behaviors in LLMs, I'm not sure I agree.
I should have been more clear. I am saying
  1. that we have evidence that LLMs can provide reasoning via analogies without having seen that particular analogy directly in the training set,
  2. that this form of reasoning is considered a trait of intelligent behavior (i.e. the ability to recognize that one set of concepts corresponds to, or is analogous to, another set of concepts, like the visual-pattern Raven matrices often used in IQ tests),
  3. that this behavior the LLM exhibits is emergent, in the sense that there was no explicit effort or special handling to ensure the network picked it up during training, and finally
  4. that if one such trait can emerge from LLM training, it seems likely to me that more traits usually linked to intelligent behavior can emerge as LLMs are scaled up, or can simply be added as "embodied" mechanisms (e.g. memory, fact checking, numerical and symbolic calculations, etc.).
Point 4 is my (speculative) claim that I posed earlier. If 4 is not true, then I assume there must be some trait essential for general intelligent behavior that we will never be able to get to emerge from an LLM, no matter how big we scale the model, no matter what simple "deterministic" built-in mechanisms we add, and no matter what material we train it with. And if this again is claimed by others to be the case, then I counter-ask: how can the human brain possibly have evolved to allow human-level intelligence if it is impossible to evolve in an artificial network?

I guess one easy way out for the sake of settling the discussion is to say, "oh well, it will be possible to evolve intelligence artificially, but then it is not an LLM anymore and we only talked about LLMs in their current design, and if you add this or that mechanism it is a totally different acronym and we are good characterizing that acronym with signs of intelligence". OK, I'm fine with that resolution. Or people can continue to claim that the ability for humans to behave intelligently is based on some elusive or strangely irreproducible neurophysical mechanism that forever will "ensure" machines cannot be as intelligent as humans. I'm not fine with that and will insist on hearing a very good argument.

Yes, I know I am possibly repeating discussion points mulled over elsewhere ad nauseam, and yes, I agree it is still somewhat pointless to keep going one more time. But you did ask, and perhaps someone is able to present a compelling argument for why humans will always be more intelligent than a machine, and I will have learned something new and can stop worrying so much about yet another technology experiment on steroids, with potential nuclear consequences, that we are all enrolled in to satisfy the gold rush of a few.

Yeah, I should definitely stop now.
 
  • #43
PeterDonis said:
As for embodied AI, I would suggest, for example, looking up the Cog project at the MIT AI lab.
I don't really see that that project produced any results that even indicate that physical embodiment is required for the emergence of intelligence? It is also (not surprisingly) pre-LLM, so there should have been plenty of opportunities for others to carry the idea over to modern networks, but I assume no one has?

One neat example of using embodiment to learn "intelligent" motion is the small walking soccer bots that learn to play soccer all by themselves. I seem to recall this was also done (with wheeled bots) with GAs and subsumption architectures back when that idea was news.
 
  • #44
Filip Larsen said:
we have evidence that LLM's can provide reasoning via analogies
No, we don't. The term "reasoning" is not accepted by skeptics as a valid description of what LLMs are doing.
 
  • #45
Filip Larsen said:
I don't really see that project produced any results that even indicate that physical embodiment is required for emergence of intelligence?
There could never be any such evidence, since evidence can never prove a negative.

But the Cog project is evidence that embodiment gives rise to behaviors that are seen by human observers as being intelligent. Many people made that observation on seeing Cog operate. And, a key point for this discussion, those behaviors had nothing to do with processing text. They were the sorts of behaviors involving perception and action in the world that, when we see animals do them, we take as at least indications of the animals possibly being intelligent.

Filip Larsen said:
It is also (not surprising) pre-LLM so there should have been plenty of oppertunities for other to carry the idea over to modern networks, but I assume noone has?
The fact that people have not, as far as we know, used LLMs to drive a robot could simply mean that people who have worked on making robots exhibit behaviors that we take as indications of at least potential intelligence do not see LLMs as a useful tool for that endeavor. Which is not surprising to me given that, as I said above, those behaviors have nothing to do with processing text and processing text is all that LLMs do.
 
  • #46
PeterDonis said:
No, we don't. The term "reasoning" is not accepted by skeptics as a valid description of what LLMs are doing.
I wrote "provide reasoning by analogy" in point 2, i.e. meaning the LLM is able to produce output that when read by a human corresponds to reasoning by analogy. I have exclusively been talking about the characteristics of the output (relative to the input of course) of LLM's without consideration on the exact internal mechanism even if I perhaps have missed writting that full qualification out every single time I have referred reasoning by analogy in a sentence. My point (to repeat yet again) is that the training of the examined LLM's picked up on a trait that allows the LLM to answer some types of questions often used in aptitude tests for humans. The interesting part for me, which I have been trying to draw attention to in pretty much every reply, is the emergence of a trait associated with intelligent.

And I still have no clue why you are dismissive of associating intelligence traits with "textual output". If a human individual (perhaps after studying long in some domain), when given novel problems, is consistently able to produce solutions that by consensus are considered both novel and intelligent solutions to the posed problems, I assume we would have no issue agreeing it would be right to characterize this individual as being intelligent. But if a machine was able to exhibit the same behavior as the human, you would now instead classify it as devoid of intelligence because it was only trained on the content of those books?

Or are you instead saying that it will prove impossible to evolve such a machine into existence because we eventually hit some physical limit? I suspect you are not saying that, but now I may as well ask. I still see power input and output (heat management) as the only physical constraints that potentially may have a significant impact on the evolutionary path towards a (potential) AGI, but considering the current amount of research into efficient ANN/LLM hardware, it is likely these power constraints will only limit the rate of evolution and much less limit the scale of the models and their equivalent processing power (e.g. FLOPS), a bit similar to how CPU processing power has been following Moore's law for decades.
 
  • #47
Filip Larsen said:
I wrote "provide reasoning by analogy" in point 2, i.e. meaning the LLM is able to produce output that when read by a human corresponds to reasoning by analogy.
In other words, you are making no claim about what the LLM is actually doing, only about how humans subjectively interpret its output. In that case I don't see the point. But this discussion has probably run its course.
 
  • #48
Filip Larsen said:
I still have no clue why you are dismissal of associating intelligence traits with "textual output".
Because, as I've already said, text is an extremely impoverished kind of input and output.

Filip Larsen said:
when given novel problems consistently is able to produce solutions that by consensus are considered both novel and intelligent solutions to the posed problems
You can't do this with just text input and output unless the "problems" are artificially limited to text and the "solutions" are artificially limited to producing text. In other words, by removing all connection with the real world. But of course the real world knows no such limitations. Plop your LLM down in the middle of a remote island and see how well it does at surviving using text, even if all the things it needs for survival are actually present on the island. Most real world problems are far more like the latter than they are like solving artificial textual "problems".
 
  • #49
PeterDonis said:
But the Cog project is evidence that embodiment gives rise to behaviors that are seen by human observers as being intelligent. Many people made that observation on seeing Cog operate. And, a key point for this discussion, those behaviors had nothing to do with processing text.
There are many, many 'projects' by now (since by now it's on the level of a common 'home lab') where simulated 'entities' give rise to behaviors seen as intelligent by human observers in a simulated environment.

Let's modify the interface of the simulated environment to be text-based ('Bumped into a wall at 38 degrees, at 15 km/h. Not feeling well.' kind of thing, for example).

... and then let's not delve any deeper into this metaphysical rabbit hole about the meaning and means of 'reality'.
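A throwaway sketch of what such a text interface to a simulated environment could look like; the "physics" and phrasing are invented, and the only point is that observations and actions can both be serialized as plain text.

```python
import random

# Toy simulated environment that reports its state to an agent as plain text and
# accepts plain-text commands back. Everything here is invented for illustration.
class TextWorld:
    def __init__(self):
        self.position, self.speed = 0.0, 15.0   # metres, km/h (units are only decorative here)

    def step(self, command: str) -> str:
        if "brake" in command.lower():
            self.speed = max(0.0, self.speed - 10.0)
        self.position += self.speed * 0.1
        if self.position > 5.0:                 # the wall is 5 "metres" ahead of the start
            angle = random.randint(10, 60)
            return f"Bumped into a wall at {angle} degrees, at {self.speed:.0f} km/h. Not feeling well."
        return f"Moving at {self.speed:.0f} km/h, wall is {5.0 - self.position:.1f} m ahead."

world = TextWorld()
for command in ["keep going", "keep going", "brake", "keep going", "keep going", "keep going", "keep going"]:
    print(world.step(command))
```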
 
  • #50
PeterDonis said:
you are making no claim about what the LLM is actually doing, only about how humans subjectively interpret its output. In that case I don't see the point.
To me that was the point. But my position is also clearly still in the "intelligence lies in the eyes of the beholder" camp.

PeterDonis said:
But this discussion has probably run its course.
Yeah, sadly discussions on non-trivial topics on PF often seem to go into that state after enough common ground has been established but before there is a real chance for any of us to learn something new. For me, discussions here often seem to spend a lot or even all of their energy on rhetorical maneuvering and rarely get to the juicy constructive flow I can have with my engineering colleagues during alternating brainstorm/critique discussions.
 