Is A.I. more than the sum of its parts?

  • Thread starter: webplodder
  • #301
DaveC426913 said:
For example, I am one who has been sort of assuming that we would first see something that behaved intelligently the way a human behaves intelligently (for example, able to carry on a conversation) before it would rise to the level of dangerous. I am now seeing that it does not need to have anything like what I might have thought of as intelligence in order to get up to some very serious shenanigans.
LLMs are carrying on conversations with people who trustingly provide very personal information.


DaveC426913 said:
One AI recently wiped some user's entire library of files accidentally (*citation needed). It didn't mean to; it simply had sufficient resources available to it (e.g. full edit/delete permissions) and insufficient (i.e. no) oversight.
I read what I think was the same article. The AI was an open-source agentic LLM now called Openclaw. The author of that model recommends loading the program on a virtual machine that does not contain your files. Openclaw requires administrative permissions.
 
  • #302
javisot said:
Many hallucinations, after in-depth analysis, end up being categorized as simple errors that could be corrected, or even predicted
Interesting. I was not aware of that. Do you have a reference for that?
 
  • Likes: javisot
  • #303
jack action said:
The results of LLMs are based on very simple math (addition, multiplication, etc.) repeated billions of times. If you use exactly the same inputs, you get exactly the same results. Nothing that is humanly impossible to do, except for the time it would require to do these calculations. The complexity and length of the process may make it harder to follow, but there is no magic.
Not true; that is not always the case. LLMs have a parameter called temperature, typically ranging from 0 to 2: at 0 the most probable token is always selected (deterministic), and the output becomes more random as the temperature increases, which can of course lead to "hallucinations". In chat mode, the default temperature is usually between 0.7 and 1.0. The temperature can be set by the user in the LLM's tools or playground.
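
To make that concrete, here is a minimal sketch in Python (with NumPy; the function name is mine, not any vendor's API) of temperature-scaled sampling over next-token logits: at temperature 0 it degenerates to a deterministic argmax, while higher temperatures flatten the distribution so less likely tokens are chosen more often.

```python
import numpy as np

def sample_token(logits, temperature=1.0, rng=None):
    """Pick a next-token index from raw logits using temperature scaling."""
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=float)
    if temperature == 0:
        # Deterministic: always take the most probable token.
        return int(np.argmax(logits))
    # Divide logits by temperature, then apply softmax.
    # T < 1 sharpens the distribution; T > 1 flattens it.
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# With T=0 you always get index 2; with T=2 the other tokens appear fairly often.
print(sample_token([1.0, 2.0, 4.0], temperature=0))
print(sample_token([1.0, 2.0, 4.0], temperature=2.0))
```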


Grinkle said:
I am now questioning if its productive (from the software developers sense) to call a hallucination a bug.
I'm thinking not. It could be the behavior that is necessary for inspiration and true creativity. Developers are trying to make LLMs default to an "I do not know" response when the models reach a point where hallucinations are more probable, but the models sometimes ignore this constraint.
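
As a rough illustration of that "I do not know" fallback (a toy sketch only, in the spirit of the semantic-entropy detector cited later in this thread, not how any production model actually implements it), one could abstain whenever the model's output distribution is too uncertain:

```python
import math

def answer_or_abstain(token_probs, entropy_threshold=1.5):
    """Toy abstention rule: refuse to answer when the next-token
    distribution is too uncertain (high entropy), a crude proxy for
    'a hallucination is likely here'."""
    entropy = -sum(p * math.log(p) for p in token_probs if p > 0)
    if entropy > entropy_threshold:
        return "I do not know."
    best = max(range(len(token_probs)), key=lambda i: token_probs[i])
    return f"most likely continuation: token #{best}"

print(answer_or_abstain([0.9, 0.05, 0.05]))          # confident -> answers
print(answer_or_abstain([0.2, 0.2, 0.2, 0.2, 0.2]))  # uncertain -> abstains
```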

Let's just call it machine intelligence, at least for now, and start using it. Like any tool, learn to use it correctly and know its strengths and limitations.

If GPT6 makes a leap in capability this year like Gemini 3 did last year, then LLMs may still be capable of achieving AGI.
 
  • Likes: javisot
  • #304
gleem said:
default to a "I do not know" response if the models get to a point where hallucinations are more probable but they sometimes ignore this constraint.

As one does. :wink:
 
  • Likes: DaveC426913
  • #305
Dale said:
Interesting. I was not aware of that. Do you have a reference for that?
For example,

https://arxiv.org/abs/2509.04664, Why Language Models Hallucinate

https://www.arxiv.org/pdf/2512.21577

https://arxiv.org/abs/2311.05232, A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions

https://arxiv.org/abs/2401.01313, A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models

https://www.nature.com/articles/s41586-024-07421-0, Detecting hallucinations in large language models using semantic entropy (Nature)

https://arxiv.org/abs/2401.07897, The Pitfalls of Defining Hallucination
 
  • Informative: Dale
  • #306
gleem said:
If GPT6 makes a leap in capability this year like Gemini 3 did last year, then LLMs may still be capable of achieving AGI.
What do we mean by AGI?

The AI bubble is inflated by selling AGI instead of AI itself. Major companies are selling an exotic idea of what AI could become. If AGI isn't achieved, the bubble will burst.
 
  • #307
jack action said:
With a neural network, once the inputs are set,
But the inputs to an LLM are never fully "set", because each prompt you give it is additional input.
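
A sketch of what that means in practice (the generate_reply callable here is a hypothetical stand-in for whatever backend is in use): the model's input grows every turn, because the whole conversation history is fed back in.

```python
def chat_loop(generate_reply):
    """Each turn, the model's input is the entire history so far,
    so the inputs are never fixed once and for all."""
    history = []
    while True:
        user_msg = input("You: ")
        if not user_msg:
            break
        history.append({"role": "user", "content": user_msg})
        reply = generate_reply(history)   # the model sees everything said so far
        history.append({"role": "assistant", "content": reply})
        print("AI:", reply)
```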
 
  • #308
javisot said:
If you replace the LLM with a rock, then the human will most likely beat the rock. The conclusion: the rock is less intelligent than the human, but is the concept of intelligence even applicable to a rock?

However, if we conduct the same test with two humans and one wins, we can agree that one of them is more intelligent than the other. In this case, the concept of intelligence is applicable.

(Okay, I see that with the definition and test you've proposed you're trying to ensure that response time isn't a parameter that determines intelligence)
I once asked myself, is a rock intelligent?

Suppose intelligence is the ability to reach goals. How do I know that the rock isn't 100% achieving its goals? I don't. So I gave up on this question.
 
  • Likes: javisot and Dale
  • #309
Hornbein said:
I once asked myself, is a rock intelligent?

Suppose intelligence is the ability to reach goals. How do I know that the rock isn't 100% achieving its goals? I don't. So I gave up on this question.
Even a rock doesn't have neurons.

If by achieving all goals you mean achieving zero goals, I agree. But the most common definition of intelligence is "the ability to solve problems," and a rock doesn't solve problems. Furthermore, achieving goals implies having pre-established them; would you say that a rock does that?
 
  • #310
javisot said:
Even a rock doesn't have neurons.
Are neurons a requirement for intelligence?

Does this apply to, say, alien life?

javisot said:
If by achieving all goals you mean achieving zero goals, I agree. But the most common definition of intelligence is "the ability to solve problems," and a rock doesn't solve problems.
It's largely a fanciful, philosophical exercise, but there's a nugget of validity there.

How do you determine whether, say, an alien entity is meeting those criteria? Would you even recognize it?

javisot said:
Furthermore, achieving goals implies having pre-established them;
I thought it was about solving problems - problems that one has not encountered beforehand? Some problems are spontaneous - not pre-established - such as a jaguar pouncing from a tree.
 
