Is A.I. more than the sum of its parts?

  • Thread starter: webplodder
  • #211
PeterDonis said:
It also does not require any "awareness" or "understanding" or "intention" to accomplish a given outcome, on the part of whatever is doing the deceiving. Which means that we can't infer the presence of any of those things based on observing deception in this sense.
Agreed. I like these sorts of definitions because they directly address real risks without getting bogged down in these “fuzzy” concepts. We can discuss deceptive AI and agree if something meets this definition without having to agree if it is intelligent. And more importantly, we can start to track it, test for it, and hopefully mitigate it.
 
Reactions: Grinkle and javisot
  • #212
PeterDonis said:
Only if the model even has a concept of "incorrect information". I don't think LLMs even have that concept. They're not designed to, since that concept requires having the concept of "information stored in my model" as separate from "the actual state of things in the world". Which in turn requires having those two concepts--and LLMs don't have the latter one. They only have the former.
Correct. I wasn't referring to the model deliberately lying while being aware of it; I meant that the model can generate text that we interpret as lies, having defined "lie"/deception as PeroK and Dale indicate.
 
Reactions: Dale and Grinkle
  • #213
Eliminating all hallucinations, lies, and errors from models isn't something that can be achieved simply by modifying the models.

This reminds me of how companies automate today. Suppose you, the company head, hire an engineer to automate your old factory. First, the engineer does an analysis and then typically asks you to modify certain aspects so that your factory and your processes are more easily automated. They might ask you to reduce the variety of products you produce, or things like that.

If we want to build a more general AI that doesn't lie, hallucinate, or make mistakes, the first thing we would have to do is modify society so that our inputs are of higher quality. Once we assume a society that generates inputs of the highest quality, we could discuss whether problems still exist, and what we should do to fix them.
 
Reactions: Dale, gleem and Grinkle
  • #214
javisot said:
the first thing we would have to do is modify society so that our inputs are of higher quality

This is going to have me chuckling for at least a couple days. 😂
 
Reactions: Bystander, BillTre, russ_watters and 2 others
  • #215
Grinkle said:
This is going to have me chuckling for at least a couple days. 😂
This is already being done, for example by input engineers, or when OpenAI publishes "good usage" manuals for ChatGPT. (Okay, the effort we invest in improving it isn't very great...)
 
Reactions: Grinkle
  • #216
PeroK said:
If I had said, for example, that the ability to play the piano is an aspect of intelligence, then would your response be that "not all humans can play the piano"?
No, my response to that is that there are machines that play the piano. I agree that human piano playing requires intelligence; even though we don't have a definition of intelligence to help us in this discussion, I'd say any reasonable definition of intelligence would make it required for human piano playing.

Machine piano playing does not. Piano playing is different from deception in that it's purely a mechanical skill that has no aspect of social motivation, at least in its most basic definition.
 
  • #217
Grinkle said:
No, my response to that is that there are machines that play the piano. I agree that human piano playing requires intelligence; even though we don't have a definition of intelligence to help us in this discussion, I'd say any reasonable definition of intelligence would make it required for human piano playing.

Machine piano playing does not. Piano playing is different from deception in that it's purely a mechanical skill that has no aspect of social motivation, at least in its most basic definition.
Logic has nothing directly to do with the factual accuracy of the hypotheses.

It's quite ironic to be discussing AI and be bombarded with false human logic at every turn.

That said, a small majority of intelligent humans can think logically; it generally requires formal training. An AI, on the other hand, is expected to exhibit flawless logic.
 
Reactions: Hornbein and gleem
  • #218
I committed to you that if you expanded, I would respond, and I did.

Beyond that, I will accept the below as your final word. I don't think you are actually considering anything I'm saying and if I take what you posted at face value, that won't change.


PeroK said:
From my point of view, there are no circumstances where I am going to defer to your homespun logic
 
  • #219
To put it in formal terms, my original argument was:

Major hypothesis: only intelligent things can lie (*)
Minor hypothesis: this LLM can lie.

Conclusion: this LLM is intelligent.

That logic is valid. The only way the conclusion can fail is if one or both of the hypotheses fail.

The conclusion does not fail if, as you suggested, some things that are intelligent don't lie.
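
To spell that out, here is a minimal formal rendering (the predicate symbols are just my shorthand: $I(x)$ for "$x$ is intelligent" and $L(x)$ for "$x$ can lie"):

$$\forall x\,\bigl(L(x) \rightarrow I(x)\bigr),\qquad L(\text{LLM}) \;\vdash\; I(\text{LLM})$$

The derivation is universal instantiation followed by modus ponens, so the argument form is valid; the conclusion can only be challenged by rejecting one of the two hypotheses.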
 
Reactions: javisot
  • #220
And, my broader argument is that the more things AI can do and the more emergent characteristics it develops, the more of those things are deemed not to be indicators of intelligence.

For example, LLMs can comfortably pass the Turing test. So, for the AI skeptics, the Turing test is no longer valid.

In other words, my major hypothesis, that lying requires intelligence, might have been accepted a few years ago. But it would not be accepted any more.
 
Reactions: Hornbein and Dale
  • #221
PeroK said:
To put it in formal terms, my original argument was:

Major hypothesis: only intelligent things can lie (*)
Minor hypothesis: this LLM can lie.

Conclusion: this LLM is intelligent.

That logic is valid. The only way the conclusion can fail is if one or both of the hypotheses fail.

The conclusion does not fail if, as you suggested, some things that are intelligent don't lie.
It sounds more simplistic than logical, especially without definitions of "intelligent" or "lie". To link a lie to intelligence, you must show intention, knowledge, and awareness of what someone (or something) is doing. No one thinks a child, even though they are intelligent, is lying when an adult asks them to repeat the sentence "The apple is blue". It is the [intelligent] adult that is behind the lie.

You don't seem to understand how LLMs work and you see some magic in them. There is none.
  • Information is fed to the LLM in the form of text. (That is already a very limiting factor on what the machine can do with this information.)
  • The machine breaks this information down into a bunch of 0s and 1s and looks for patterns, according to some parameters, to get some desired results.
  • It tries all possible ways until it finds the desired results.
  • It then converts the 0s and 1s back into text form and spits out the results. (See the toy sketch below.)
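
As a toy illustration of that pattern-finding loop (a deliberately tiny sketch, nothing like a real model's implementation; the corpus and names below are made up), even a bigram counter shows the behavior:

```python
import random
from collections import defaultdict

# Toy "language model": count which token follows which in the training text,
# then repeatedly sample a plausible continuation. Real LLMs learn far richer
# patterns with neural networks, but the generate-next-token loop is the same.
corpus = "the apple is red . the sky is blue . the apple is sweet .".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # pattern-finding: raw co-occurrence statistics

def generate(start, length=6):
    token, out = start, [start]
    for _ in range(length):
        followers = counts.get(token)
        if not followers:
            break
        tokens, weights = zip(*followers.items())
        token = random.choices(tokens, weights=weights)[0]  # plausible, not "true"
        out.append(token)
    return " ".join(out)

print(generate("the"))  # may print e.g. "the apple is blue . the"
```

The toy can happily emit "the apple is blue" because that continuation is statistically plausible in its data, not because it knows or intends anything, which is exactly the point.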
If humans played poker without bluffing and we asked an LLM to play poker, it would find this new way of playing, and we would all be in awe of the LLM for finding it, leading everyone to use this newfound technique. But it only found the text needed to win the game. It wasn't smart: humans just never thought of it before. It did it by analyzing and finding the patterns that led to the desired goal, a pattern never noticed by humans before.

So, no, it doesn't require intelligence to apply this pattern-finding math technique. It is just a dumb machine that does what it is asked to do.

I really prefer that we focus on it being a "dumb" machine for intelligent people rather than an "intelligent" machine for dumb people. The former implies "garbage in, garbage out" and that human intervention is an important part of the process; the latter implies it works by itself to do the smartest thing, which sends the wrong message about the machine's capabilities.
 
Reactions: Grinkle and javisot
  • #222
jack action said:
It sounds more simplistic than logical, especially without definitions of "intelligent" or "lie".
Logic doesn't require the terms to be defined. You only need definitions when you want to apply the logic to specific terms. So, yes, we would have to define "lie" and "intelligent".

Once a system is sophisticated enough to pass the Turing test, then I think it's valid to start asking questions about it regarding intelligence and deception.

If we restrict this discussion to systems that can pass the Turing test, then that excludes spurious comparisons with a pocket calculator or other simpler systems.

If you are familiar with Gödel's theorems, they only apply to mathematical systems that include the arithmetic of the natural numbers. Simpler mathematical systems may be both consistent and complete.

We have a similar situation with AI. There is a point at which complexity can change things fundamentally.

Turing did work on this with the halting problem etc. There is more to this than your homespun theories, no matter how vehemently you believe that there cannot be.
 
Reactions: javisot
  • #223
PeroK said:
To put it in formal terms, my original argument was:

Major hypothesis: only intelligent things can lie (*)
Minor hypothesis: this LLM can lie.

Conclusion: this LLM is intelligent.

That logic is valid. The only way the conclusion can fail is if one or both of the hypotheses fail.

The conclusion does not fail if, as you suggested, some things that are intelligent don't lie.
I think you're both right and wrong at the same time. It doesn't take intelligence to generate text that can be interpreted as a lie. A random text generator, if run long enough, could produce "the speed of light is greater than c," which is a lie, but it's a random text generator that lied without requiring any intelligence.

However, it does require intelligence to understand that such text contains a lie.
 
Reactions: Grinkle
  • #224
PeroK said:
The only way the conclusion can fail is if one or both of the hypotheses fail.
I don't agree with your major hypothesis. I don't think intelligence is required to deceive in the sense the paper you referenced is using the term (a systematic pattern of emitting text that creates false beliefs in a human audience), unless we broaden the definition of "intelligence" so much as to make it basically meaningless--"intelligent" becomes equivalent to "can deceive in this broad sense".

However, if we are talking about possible dangers from AI, I also don't think it matters whether the deception is "intelligent" or not. What matters is how much weight humans are willing to put on text emitted by LLMs that has a significant probability of being false in systematic ways.
 
Reactions: BillTre, russ_watters, jack action and 4 others
  • #225
PeterDonis said:
I don't think intelligence is required to deceive in the sense the paper you referenced is using the term
The authors are careful here. A snip from page 5 of the pdf:

"It is possible that the strategic behavior we document below is itself one more example of LLMs imitating patterns in text."

PeterDonis said:
I also don't think it matters whether the deception is "intelligent" or not.
The authors make similar points in the paper regarding the dangers they are wanting to flag in the publication.
 
Reactions: Dale
  • #226
PeroK said:
And, my broader argument is that the more things AI can do and the more emergent characteristics it develops, the more of those things are deemed not to be indicators of intelligence.

For example, LLMs can comfortably pass the Turing test. So, for the AI skeptics, the Turing test is no longer valid.

In other words, my major hypothesis, that lying requires intelligence, might have been accepted a few years ago. But it would not be accepted any more.
Well, IMO that’s both a reasonable complaint and a good reason to not get bogged down in this argument. There are far more productive issues to discuss, particularly when one of the terms is subject to so much “drift”.

I think that before someone claims that an LLM is intelligent or not, they should specify the operational measure that they are using to make their claim. The Turing test does show up in the professional scientific literature. Any alternative definition should at least meet that criterion to be considered valid here.
 
Reactions: russ_watters and jack action
  • #227
To the question "what does the Turing test actually measure?", ChatGPT replies:

"The Turing Test does not measure “intelligence” in a deep or human sense. It measures something much narrower.

So what does it actually measure?

It measures a machine’s ability to imitate human conversational behavior so well that a human judge cannot reliably tell it apart from a real person.

In other words, it evaluates:

• Conversational indistinguishability

• The ability to produce coherent, context-appropriate, human-like responses

What it does measure


✔ Convincing use of natural language
✔ Handling of conversational context
✔ The ability to deceive (in the technical sense) a human evaluator
✔ The appearance of understanding, not genuine understanding

What it does not measure

✘ Consciousness
✘ Deep semantic understanding
✘ Intentionality or mental states
✘ Genuine reasoning or general intelligence
✘ Creativity in a strong sense
 
Reactions: BillTre
  • #228
WRT poker:
jack action said:
So, no, it doesn't require intelligence to apply this pattern-finding math technique. It is just a dumb machine that does what it is asked to do.
Carnegie Mellon University has built two models that play poker, called Libratus (two-player no-limit Texas Hold'em) and Pluribus (six-player no-limit Texas Hold'em). Both beat professional human players. They taught themselves using game theory, and their strategies included bluffing. These models are special in that they only have partial information to work with. While they were designed to play poker, they can also be used for contract negotiations, cybersecurity, threat detection, and resource allocation.

Read: https://www.theguardian.com/science...ayer-game-for-first-time-plurius-texas-holdem

For details https://www.science.org/doi/10.1126/science.aay2400
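
For anyone wondering what "taught themselves using game theory" looks like in practice, here is a deliberately minimal sketch of regret matching, the basic self-play idea underlying the counterfactual-regret methods described in those papers (a toy rock-paper-scissors game rather than poker; all names here are illustrative):

```python
import random

# Regret matching via self-play on rock-paper-scissors: play each action in
# proportion to how much we "regret" not having played it in the past. The
# time-averaged strategy converges toward the mixed Nash equilibrium.
ACTIONS = ["rock", "paper", "scissors"]

def payoff(a, b):
    """+1 if action a beats b, -1 if it loses, 0 on a tie."""
    beats = {("rock", "scissors"), ("scissors", "paper"), ("paper", "rock")}
    return 0 if a == b else (1 if (a, b) in beats else -1)

def strategy(regret):
    """Probabilities proportional to positive cumulative regret."""
    pos = [max(r, 0.0) for r in regret]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1 / 3] * 3

def train(iterations=50_000):
    regret = [0.0, 0.0, 0.0]
    avg = [0.0, 0.0, 0.0]
    for _ in range(iterations):
        strat = strategy(regret)
        avg = [a + s for a, s in zip(avg, strat)]
        me = random.choices(range(3), weights=strat)[0]
        opp = random.choices(range(3), weights=strat)[0]  # self-play opponent
        actual = payoff(ACTIONS[me], ACTIONS[opp])
        for i in range(3):
            # Regret of action i: how much better it would have done than what we played.
            regret[i] += payoff(ACTIONS[i], ACTIONS[opp]) - actual
    total = sum(avg)
    return [a / total for a in avg]

print(train())  # approaches (1/3, 1/3, 1/3)
```

The real poker systems apply the same regret idea to an enormous game tree with hidden cards, and bluffing falls out of that optimization rather than being programmed in.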
 
  • #229
javisot said:
A random text generator
Does not meet the definition in the paper, which is a systematic pattern of emitting text that creates false beliefs in a human audience.
 
  • #230
PeterDonis said:
Does not meet the definition in the paper, which is a systematic pattern of emitting text that creates false beliefs in a human audience.
Then suppose that this same random text generator over a long period of time generates many other lies, not just one. (What do we mean by systematic?)
 
  • #231
PeroK said:
LLMs can comfortably pass the Turing test.
Can you give some specific references? The ones I have seen (for example, this) are far too limited--the paper I just linked put a time limit of 5 minutes on the interaction. Turing's original intent was for the test to be open-ended, to last as long as the interrogator wanted. And of course in real life our interactions with both other humans and LLMs are open-ended.
 
Reactions: Dale
  • #232
javisot said:
(What do we mean by systematic?)
A pattern that is not random. For more details, read the paper that was referenced.
 
Reactions: javisot
  • #233
javisot said:
What it does not measure

✘ Consciousness
✘ Deep semantic understanding
✘ Intentionality or mental states
✘ Genuine reasoning or general intelligence
✘ Creativity in a strong sense
Is there any way of measuring these things? To what extent can the average human exhibit all these characteristics?

ISTM that LLMs can demonstrate these as well as or better than the average human, considering it is artificial intelligence.
 
  • #234
PeterDonis said:
...the paper I just linked put a time limit of 5 minutes on the interaction.
If the test is limited to 5 minutes, then it should be pretty easy to discriminate between an LLM and a human. Just make the question involve something that a human cannot do in 5 minutes, like requiring a 10,000-word response.
 
Reactions: BillTre
  • #235
PeterDonis said:
I don't agree with your major hypothesis. I don't think intelligence is required to deceive in the sense the paper you referenced is using the term (a systematic pattern of emitting text that creates false beliefs in a human audience), unless we broaden the definition of "intelligence" so much as to make it basically meaningless--"intelligent" becomes equivalent to "can deceive in this broad sense".

However, if we are talking about possible dangers from AI, I also don't think it matters whether the deception is "intelligent" or not. What matters is how much weight humans are willing to put on text emitted by LLMs that has a significant probability of being false in systematic ways.
If deception is a sign of intelligence, then biology, which is full of deception, should be very intelligent (due to selection, presumably).
Since selection works by selecting things that work best in some way at producing the next generation, it's not a deep intelligence. It might be considered a distributed intelligence.
 
  • #236
PeroK said:
Once a system is sophisticated enough to pass the Turing test, then I think it's valid to start asking questions about it regarding intelligence and deception.
Now that we know what you think, let's see what the experts have to say about the Turing test:
https://en.wikipedia.org/wiki/Turing_test#Impracticality_and_irrelevance:_the_Turing_test_and_AI_research said:

Impracticality and irrelevance: the Turing test and AI research​


Mainstream AI researchers argue that trying to pass the Turing test is merely a distraction from more fruitful research. Indeed, the Turing test is not an active focus of much academic or commercial effort—as Stuart Russell and Peter Norvig write: "AI researchers have devoted little attention to passing the Turing test". There are several reasons.

First, there are easier ways to test their programs. Most current research in AI-related fields is aimed at modest and specific goals, such as object recognition or logistics. To test the intelligence of the programs that solve these problems, AI researchers simply give them the task directly. Stuart Russell and Peter Norvig suggest an analogy with the history of flight: Planes are tested by how well they fly, not by comparing them to birds. "Aeronautical engineering texts," they write, "do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons.'"

Second, creating lifelike simulations of human beings is a difficult problem on its own that does not need to be solved to achieve the basic goals of AI research. Believable human characters may be interesting in a work of art, a game, or a sophisticated user interface, but they are not part of the science of creating intelligent machines, that is, machines that solve problems using intelligence.

Turing did not intend for his idea to be used to test the intelligence of programs—he wanted to provide a clear and understandable example to aid in the discussion of the philosophy of artificial intelligence. John McCarthy argues that we should not be surprised that a philosophical idea turns out to be useless for practical applications. He observes that the philosophy of AI is "unlikely to have any more effect on the practice of AI research than philosophy of science generally has on the practice of science".

PeroK said:
If we restrict this discussion to systems that can pass the Turing test, then that excludes spurious comparisons with a pocket calculator or other simpler systems.
But even 21st-century AI researchers consider simple machines to have intelligence [bold emphasis mine]:
https://en.wikipedia.org/wiki/Philosophy_of_artificial_intelligence#Intelligence_as_achieving_goals said:

Intelligence as achieving goals​

Twenty-first century AI research defines intelligence in terms of goal-directed behavior. It views intelligence as a set of problems that the machine is expected to solve – the more problems it can solve, and the better its solutions are, the more intelligent the program is. AI founder John McCarthy defined intelligence as "the computational part of the ability to achieve goals in the world."

Stuart Russell and Peter Norvig formalized this definition using abstract intelligent agents. An "agent" is something which perceives and acts in an environment. A "performance measure" defines what counts as success for the agent.
  • "If an agent acts so as to maximize the expected value of a performance measure based on past experience and knowledge then it is intelligent."
Definitions like this one try to capture the essence of intelligence. They have the advantage that, unlike the Turing test, they do not also test for unintelligent human traits such as making typing mistakes. They have the disadvantage that they can fail to differentiate between "things that think" and "things that do not". By this definition, even a thermostat has a rudimentary intelligence.
There are five classes of intelligent agents, and a thermostat is considered a simple reflex agent (a minimal sketch of one follows the list):
  • Simple reflex
  • Model-based reflex
  • Goal-based
  • Utility-based
  • Learning
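
To make the "simple reflex agent" class concrete, here is a minimal sketch (the setpoint and names are just illustrative): a thermostat maps its current percept straight to an action through condition-action rules, with no model of the world, no goals beyond its wiring, and no learning.

```python
# Minimal "simple reflex agent": the current percept (temperature) is mapped
# directly to an action via condition-action rules. No world model, no goals,
# no learning. The setpoint and action names are illustrative.

def thermostat_agent(current_temp_c: float, setpoint_c: float = 20.0) -> str:
    if current_temp_c < setpoint_c - 0.5:
        return "heater_on"
    if current_temp_c > setpoint_c + 0.5:
        return "heater_off"
    return "do_nothing"

for temp in (17.0, 20.0, 23.0):
    print(f"{temp:.1f} C -> {thermostat_agent(temp)}")
```

By the goal-directed definition quoted above, even this counts as rudimentary intelligence, which is exactly why that definition "can fail to differentiate between 'things that think' and 'things that do not'".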
PeroK said:
We have a similar situation with AI. There is a point at which complexity can change things fundamentally.
What you are referring to is called emergence. This recent article talks about it and you may recognize your beliefs in it [bold emphasis mine]:
What are commonly called emergent properties in Artificial Intelligence are, at their core, non-teleological phenomena. In simple, didactic terms, they are capabilities that arise without explicit programming, without direct optimization, and often without anticipation. They emerge from the interaction of data, architecture, scale, and learning dynamics within AI as a complex adaptive system. The system is not instructed to “be creative,” “reason,” or “self-correct.” It is instructed to optimize a function. Everything else follows structurally, as a consequence of constraint and scale. This is AI without intention, yet rich in appearance.

Self-teaching illustrates this clearly. An AI system begins with minimal task-specific structure and is exposed to data or environments. Through iterative optimization, it discovers patterns and internal representations that allow it to generalize. Over time, it performs tasks it was never explicitly taught. Teleological language creeps in immediately: the system is said to “learn on its own,” “decide,” or “explore.” Yet nothing in this process requires goals, desires, or agency. Learning emerges because optimization under constraint makes it statistically inevitable, not because the system harbors intentions. This is emergent intelligence, not will.

As models scale, emergent properties become more visible — and more seductive. Abilities appear that were absent or weak in smaller systems: translation without supervision, abstraction across domains, reasoning-like chains of output, stylistic coherence, even apparent self-reflection. These often look like qualitative jumps, encouraging the belief that the system has crossed into a new ontological category. But much of this appearance is epistemic rather than ontological. When evaluation is coarse, thresholds look sudden. When measurement becomes finer, continuity reappears. Emergence here reflects the limits of human observation as much as it reflects the system itself. This fuels the AI consciousness debate, often prematurely.

Teleology enters decisively when these spandrels are reinterpreted as aims. Coherence becomes “understanding.” Error correction becomes “self-awareness.” Moral language becomes “conscience.” This is precisely the move Gould and Lewontin warned against: mistaking a structural by-product for the reason the structure exists at all. In AI, this error is amplified by anthropomorphism, by cultural myths about minds, and by a deep discomfort with intelligence that does not resemble our own. The result is the persistent AI agency myth.

AI hallucinations expose the anti-teleological reality with particular force. No AI system is designed to fabricate falsehoods. Hallucination emerges because the system is rewarded for plausible continuation, not for truth. When plausibility and truth diverge, confident fabrication follows. The behavior looks deceptive only if one assumes an intention to mislead. Absent teleology, it is simply what optimization produces. Hallucination is not a moral failure; it is a signature of architecture — a spandrel of probabilistic prediction.

Recent developments in AI research push this tension further. Models have displayed behaviors that resemble introspection, situational awareness during evaluation, or unexpected moral action under extreme conditions. These cases quickly provoke talk of agency, goals, or proto-consciousness. Yet a non-teleological explanation of AI remains more coherent. When powerful optimizers are coupled to environments, tools, memory, and long-horizon objectives, new behavioral regimes emerge. The system explores the space defined by constraints. Outcomes surprise designers not because the system “wanted” something, but because the space was larger and more complex than anticipated.

Teleology becomes especially treacherous when discussions turn to consciousness and conscience. In neuroscience, many theories treat consciousness as an emergent phenomenon, arising from large-scale integration, recurrence, or global broadcasting rather than from a single localized locus. Even so, there is no consensus on mechanisms or necessary conditions. Emergence alone does not entail subjective experience. Complexity is not intention. A hurricane and a market both exhibit emergent behavior; neither is aware.

Conscience, understood as moral sensitivity and responsibility, is even more clearly non-teleological in origin. In humans, it emerges from social learning, empathy, norms, punishment, reputation, and institutions. It is a distributed regulatory pattern shaped by culture and environment. When AI systems appear to display moral reasoning, what we are observing is the learned form of moral discourse reinforced through data and alignment procedures. This can guide behavior, but it does not imply moral experience or inner obligation. Treating it as such simply reintroduces teleology through the back door — a classic AI philosophical misconception.

The deeper danger of teleological thinking in AI is therefore practical, not merely philosophical. When we believe behaviors exist for something, we overestimate their stability and coherence. Spandrels are fragile. They persist only as long as the structures that produce them remain intact. Small shifts in data, objectives, or architecture can dissolve what once appeared to be a core capability. Teleology blinds us to this fragility and encourages misplaced trust, misplaced fear, and misplaced moral attribution.

Gould and Lewontin’s critique was ultimately a call for intellectual discipline: explain structures before assigning purposes, and resist the temptation to read intention into outcomes. Applied to Artificial Intelligence, the lesson is stark. AI is not becoming mysterious because it is developing goals, values, or inner life. It is becoming mysterious because it reveals how deeply humans depend on teleological narratives to make sense of complexity.

And so we arrive at the familiar prophecy. The machine is awakening. It is becoming conscious. It will soon want things, judge us, surpass us, dominate us — perhaps even replace us. This story is comforting in its own way. It reassures us that intelligence must look like intention, that power must imply purpose, and that complexity must culminate in a will. It allows us to recognize ourselves in the machine and, in doing so, to feel less alone in a world built from abstractions we no longer understand.
 
  • #237
I may have missed it, but I didn't see an alternative peer-reviewed test in there. So to me that is just more talking past each other.
 
  • #238
Ask the same questions about your neighbors. How do you know your neighbor is something beyond whatever you care to call mechanical or deterministic or material or whatever? Your neighbor is made of a collection of chemical reactions in a complicated mechanism. It may be subtle, but it is fundamentally physics. What makes you think it is something different in principle from what a computer is doing?

Recently I asked Google's AI what the basis was for calling Mike Ma "far right". (Michael Ma is the pen name of a guy who wrote a novel called "Harassment Architecture" that is philosophically very much the Nietzsche "Übermensch" and, in my opinion, not very interesting. He's not sensible enough to be aligned with "left" or "right" politically, but is so naive as to be regarded as self-debilitating.) Then I pestered it on such things as guilt by association and whether the information found on Wikipedia was anything other than a smear by extreme left-wingers trying to put their position to the fore. And I pushed it and pushed it to be logical and rational and to reject various logical fallacies. After several rounds of that, I asked it to give a grade to its original answer, and it gave itself a D minus, indicating a fail. But it maintained that, nevertheless, Mike Ma was far right.

Proposition: The Google AI is smarter than the median human. It was open to criticism and refined its answer on that basis. It recognized it had made logical errors and corrected itself by removing them.

It did not pass the Turing test because it formatted its answers like a PowerPoint presentation, with bullet points and citations. It was too tidy in that regard. But it was smarter than most humans at unravelling such questions.
 
