Is A.I. more than the sum of its parts?

  • Thread starter: webplodder
  • #301
DaveC426913 said:
For example, I am one who has been sort of assuming that we would first see something that behaved intelligently like a human behaves intelligently (for example, able to carry on a conversation) before it would rise to the level of dangerous. I am now seeing that it does not need to have anything like what I might have thought of as intelligence in order to get up to some very serious shenanigans.
LLMs are carrying on conversations with people who trustingly provide very personal information.


DaveC426913 said:
One AI recently wiped some user's entire library of files accidentally (*citation needed). It didn't mean to; it simply had sufficient resources available to it (e.g. full edit/delete permissions) and insufficient (i.e. no) oversight.
I read the same article, I think. The AI was an open-source agentic LLM now called Openclaw. The author of this model recommends loading the program on a virtual machine that does not contain your files. Openclaw requires administrative permissions.
 
  • #302
javisot said:
Many hallucinations, after in-depth analysis, end up being categorized as simple errors that could be corrected, or even predicted
Interesting. I was not aware of that. Do you have a reference for that?
 
  • Likes: javisot
  • #303
jack action said:
The results of LLMs are based on very simple math (addition, multiplication, etc.) repeated billions of times. If you use exactly the same inputs, you get exactly the same results. Nothing that is humanely impossible to do, except for the time it would require to do these calculations. The complexity and length of the process may make it harder to follow, but there is no magic.
Not true, that is not always the case. LLMs have a parameter called temperature, ranging from 0 to 2: at 0 the most probable token is always selected (deterministic), and the choices become more random as the temperature increases, which of course can lead to "hallucinations". In chat mode, the default temperature is typically between 0.7 and 1.0. The temperature can be set by the user in the LLM's tools or playground.
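To make that concrete, here is a minimal, generic sketch of temperature sampling (illustrative only; the names are made up and this is not any particular vendor's implementation):

```python
import numpy as np

def sample_token(logits, temperature=1.0, rng=None):
    """Toy temperature sampling over raw model scores (logits).

    temperature == 0: always pick the highest-scoring token (deterministic).
    temperature > 1: the probability distribution flattens, so less likely
    tokens get picked more often and the output becomes more random.
    """
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=float)
    if temperature == 0:
        return int(np.argmax(logits))            # greedy decoding
    scaled = logits / temperature
    scaled -= scaled.max()                       # for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))
```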


Grinkle said:
I am now questioning if it's productive (from the software developer's sense) to call a hallucination a bug.
I'm thinking not. It could be the behavior that is necessary for inspiration and true creativity. Developers are trying to make LLMs default to an "I do not know" response when the models reach a point where hallucinations are more probable, but they sometimes ignore this constraint.

Let's just call it machine intelligence, at least for now, and start using it. Like any tool, learn to use it correctly and know its strengths and limitations.

If GPT-6 makes a leap in capability this year like Gemini 3 did last year, then LLMs may still have some chance of achieving AGI.
 
  • Likes: javisot
  • #304
gleem said:
default to an "I do not know" response when the models reach a point where hallucinations are more probable, but they sometimes ignore this constraint.

As one does. :wink:
 
  • Likes: DaveC426913
  • #305
Dale said:
Interesting. I was not aware of that. Do you have a reference for that?
For example,

https://arxiv.org/abs/2509.04664 , Why Language Models Hallucinate

Or, https://www.arxiv.org/pdf/2512.21577

https://arxiv.org/abs/2311.05232 , A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions

https://arxiv.org/abs/2401.01313 , A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models

https://www.nature.com/articles/s41586-024-07421-0 , Detecting hallucinations in large language models using semantic entropy (Nature)

https://arxiv.org/abs/2401.07897 , The Pitfalls of Defining Hallucination
 
  • Informative: Dale
  • #306
gleem said:
If GPT-6 makes a leap in capability this year like Gemini 3 did last year, then LLMs may still have some chance of achieving AGI.
What do we mean by AGI?

The AI bubble is inflated by selling AGI instead of AI itself. Major companies are selling an exotic idea of what AI could become. If AGI isn't achieved, the bubble will burst.
 
  • #307
jack action said:
With a neural network, once the inputs are set,
But the inputs to an LLM are never fully "set", because each prompt you give it is additional input.
 
  • #308
javisot said:
If you replace the LLM with a rock, then the human will most likely beat the rock. The conclusion: the rock is less intelligent than the human, but is the concept of intelligence even applicable to a rock?

However, if we conduct the same test with two humans and one wins, we can agree that one of them is more intelligent than the other. In this case, the concept of intelligence is applicable.

(Okay, I see that with the definition and test you've proposed you're trying to ensure that response time isn't a parameter that determines intelligence)
I once asked myself, is a rock intelligent?

Suppose intelligence is the ability to reach goals. How do I know that the rock isn't 100% achieving its goals? I don't. So I gave up on this question.
 
  • Likes: javisot and Dale
  • #309
Hornbein said:
I once asked myself, is a rock intelligent?

Suppose intelligence is the ability to reach goals. How do I know that the rock isn't 100% achieving its goals? I don't. So I gave up on this question.
A rock doesn't even have neurons.

If by achieving all goals you mean achieving zero goals, I agree. But the most common definition of intelligence is "the ability to solve problems," and a rock doesn't solve problems. Furthermore, achieving goals implies having pre-established them beforehand; would you say that a rock does that?
 
  • #310
javisot said:
A rock doesn't even have neurons.
Are neurons a requirement for intelligence?

Does this apply to, say, alien life?

javisot said:
If by achieving all goals you mean achieving zero goals, I agree. But the most common definition of intelligence is "the ability to solve problems," and a rock doesn't solve problems.
It's largely a fanciful, philosophical exercise but there's a nugget of validity there.

How do you determine whether, say, an alien entity is meeting those criteria? Would you even recognize it?

javisot said:
Furthermore, achieving goals implies having pre-established them beforehand;
I thought it was about solving problems - problems that one has not encountered beforehand? Some problems are spontaneous - not pre-established - such as a jaguar pouncing from a tree.
 
  • #311
gleem said:
Not true, that is not always the case. LLMs have a parameter called temperature, ranging from 0 to 2: at 0 the most probable token is always selected (deterministic), and the choices become more random as the temperature increases, which of course can lead to "hallucinations". In chat mode, the default temperature is typically between 0.7 and 1.0. The temperature can be set by the user in the LLM's tools or playground.
It is still a fixed parameter and, once determined, repeating the calculations should yield the same results.

PeterDonis said:
But the inputs to an LLM are never fully "set", because each prompt you give it is additional input.
If the data and parameters are the same, repeating the same prompts, in the same order, should give the same results.
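As a toy illustration of that claim (the fake_generate function below is purely hypothetical, standing in for a real model; the point is only that fixed parameters plus a fixed seed make a run reproducible):

```python
import numpy as np

def fake_generate(prompt, seed):
    """Stand-in for an LLM: with the 'parameters' and the seed fixed,
    the same prompt always yields the same token sequence."""
    rng = np.random.default_rng(seed)            # fixed seed -> identical random draws
    return [int(rng.integers(0, 50_000)) for _ in prompt.split()]

run1 = fake_generate("same prompt, same order", seed=42)
run2 = fake_generate("same prompt, same order", seed=42)
assert run1 == run2                              # repeated run is bit-identical
```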
 
  • Skeptical: PeroK
  • #312
jack action said:
If the data and parameters are the same, repeating the same prompts, in the same order, should give the same results.
This is incorrect. You're overlooking hallucinations. Hallucinations occur throughout this process. You can't guarantee that doing the exact same thing will produce the exact same hallucinations.
 
  • #313
DaveC426913 said:
Are neurons a requirement for intelligence?
I hope we can agree that intelligence requires some detectable thermodynamic processes occurring, though - a detectable increase in entropy that is beyond what can be accounted for in the spontaneous deterioration of the rock, for instance.
 
  • Likes: javisot
  • #314
jack action said:
If the data and parameters are the same, repeating the same prompts, in the same order, should give the same results.
One can make (I read a very interesting argument years ago, if I can ever find it again I'll give due credit) a similar argument about organic brains. Whether you are correct or not, I don't know that we can be sure organic brains are not the same in that regard.

A philosophy thesis was based around the idea that organic brains are deterministic state machines and hence free will is an illusion. The argument conceded that this can only be a thought experiment, because one can never reset the inputs to test it, but it put me in mind of the movie Groundhog Day, which is a great (unintentional) depiction of this idea. Only Bill Murray's character in that movie had agency: he was the only character not being reset each day, and he pulled the strings of the other characters by changing their inputs.

Edit: Before I am accused of crediting someone who ripped off Newton, the author did credit Newton and spent much of the paper arguing that physics gives us two possibilities, randomness and determinism, and that we should accept that our human experience guides us away from serious consideration of randomness.
 
  • #315
Grinkle said:
One can make (I read a very interesting argument years ago, if I can ever find it again I'll give due credit) a similar argument about organic brains.
Brains are changing over time, so you will not repeat the same process twice. A neural network state can be determined (no matter how complex it is) and reproduced.

If you have machine learning - i.e., the neural network modifies its own parameters - the process becomes even more complex. In that case, I agree that the system will become chaotic if the inputs come from its environment, which are themselves chaotic (learning to drive a car on the road, for example).

After further checking, randomness is introduced in the final stages of LLMs to choose between tokens with similar probabilities. The objective is to make them sound more human, so NOT having two identical outputs is the desired outcome. But I fail to see why anyone would want to put a machine with randomness in control of anything?
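For what it's worth, that final-stage randomness is often implemented as top-k (or nucleus) sampling; here is a rough, generic sketch, not tied to any particular model:

```python
import numpy as np

def top_k_sample(probs, k=5, rng=None):
    """Keep only the k most probable tokens and draw randomly among them,
    so near-tied candidates can each be chosen on different runs."""
    rng = rng or np.random.default_rng()
    probs = np.asarray(probs, dtype=float)
    top = np.argsort(probs)[-k:]                 # indices of the k most likely tokens
    p = probs[top] / probs[top].sum()            # renormalize over the shortlist
    return int(rng.choice(top, p=p))
```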
 
  • Likes: javisot
  • #316
jack action said:
Brains are changing over time, so you will not repeat the same process twice.
Yeah, unless we find a way to overcome the 2nd law, it's not any part of our observable reality.
 
  • #317
jack action said:
It is still a fixed parameter and, once determined, repeating the calculations should yield the same results.
It doesn't produce the same response.

If you ask a human a question at different times, they will answer differently, and the answers may diverge more as the time between questionings increases and the responses lengthen. This excludes memorized statements.

jack action said:
But I fail to see why anyone would want to put a machine with randomness in control of anything?
Humans make random decisions all the time, mostly harmless, but some are careless and regrettable.
 
  • #318
jack action said:
But I fail to see why anyone would want to put a machine with randomness in control of anything?
Nobody wants that, which is why the goal is to eliminate anomalous behavior, to keep the entire process under control. No company would spend millions creating a machine that might randomly decide "I don't want to work today," or commit suicide (or enslave humanity).
 
  • #319
javisot said:
No company would spend millions creating a machine that might randomly decide "I don't want to work today," or commit suicide (or enslave humanity).
Not deliberately. But one possible motivation for head-over-heels development of AI is more concern for the consequences of being 2nd to get there than the consequences of anyone getting there.

Edit: I don't advocate for stopping AI development - that genie is not going back in the bottle. My point with this post is that one should not assume AI development programs will prioritize safety over time-to-market.
 
  • #320
jack action said:
The objective is to make them sound more human, so NOT having two identical outputs is the desired outcome.
Exactly. In the case of ChatGPT, for example, the idea isn't to create a machine that responds "hello" to all inputs but quite the opposite: a machine that generates a specific output for each input (the optimal output for each input), potentially handling a vast number of different inputs, far more than a calculator can handle, of increasingly higher complexity.

The inputs have certain limits: there's a limit on input length, computational-cost limits that determine the complexity of the language it can handle, and others.
 
  • #321
Grinkle said:
Not deliberately. But one possible motivation for head-over-heels development of AI is more concern for the consequences of being 2nd to get there than the consequences of anyone getting there.
What is deliberate is the introduction of randomness into a neural network to mimic human conversations or their artistic side. But human conversations and art are not the only uses for AI - which has no randomness to begin with, just statistical analysis.

So, if one were to design an AI to control nuclear weapons or to find patterns to determine the cause of cancer, why in the heck would they introduce randomness into the process, i.e., use a chatbot to do the job? In cases of control, the whole point is to replace the expected randomness of human beings with a solid protocol based on probabilities. And in cases of research, you want to expand the statistical analysis beyond human capabilities, so randomness is totally useless.
 
  • #322
@jack action Look into simulated annealing (SA) algorithms and their applications if you are not familiar with them; it may give you some insight into where randomness is helpful that hadn't occurred to you before.

I suspect the temperature setting that @gleem mentioned as a control parameter indicates there is an SA algorithm under the hood that is helping prevent the LLM from missing a better output that might just be a 'hilltop' or two away in its search space.

This is a different application of randomness (unless I misunderstood you) from what you said about randomly picking between two possible arrived-at answers.
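For anyone who hasn't seen it, here is a bare-bones, generic simulated-annealing sketch (nothing LLM-specific; the cost function and cooling schedule are made up for illustration):

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t_start=1.0, t_end=1e-3, steps=10_000):
    """Generic simulated annealing: occasionally accept a worse candidate,
    with probability exp(-delta / T), so the search can escape local minima.
    Worse moves become rarer as the temperature T cools toward t_end."""
    x, best = x0, x0
    for i in range(steps):
        t = t_start * (t_end / t_start) ** (i / steps)   # geometric cooling schedule
        candidate = neighbor(x)
        delta = cost(candidate) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate                                # accept the (possibly worse) move
            if cost(x) < cost(best):
                best = x
    return best

# Example: minimize a bumpy 1-D function with many local minima.
f = lambda x: x**2 + 10 * math.sin(3 * x)
result = simulated_annealing(f, lambda x: x + random.uniform(-0.5, 0.5), x0=5.0)
```

The key point is the exp(-delta/T) acceptance rule: at high temperature the search freely accepts worse moves and can hop out of local minima, and as T cools it settles down, so the result usually lands near the global minimum rather than the nearest local dip.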
 
  • Likes: PeroK
