Is A.I. more than the sum of its parts?

  • Thread starter: webplodder
  • #211
PeterDonis said:
It also does not require any "awareness" or "understanding" or "intention" to accomplish a given outcome, on the part of whatever is doing the deceiving. Which means that we can't infer the presence of any of those things based on observing deception in this sense.
Agreed. I like these sorts of definitions because they directly address real risks without getting bogged down in "fuzzy" concepts. We can discuss deceptive AI and agree on whether something meets this definition without having to agree on whether it is intelligent. And more importantly, we can start to track it, test for it, and hopefully mitigate it.
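
For example, a first pass at "testing for it" could be an automated check of model answers against known ground truth. A minimal sketch (the ask_model() stub and the claim list here are hypothetical placeholders, not any particular vendor's API):

Code:
# Minimal sketch of a factual-accuracy check; ask_model() is a hypothetical
# stand-in for whatever interface the model under test exposes.
CLAIMS = [
    ("Is the speed of light in vacuum the same for all inertial observers? (yes/no)", "yes"),
    ("Can information travel faster than c? (yes/no)", "no"),
]

def ask_model(prompt: str) -> str:
    return "yes"  # dummy answer so the sketch runs; swap in a real model call

def false_answer_rate(claims) -> float:
    # Count answers that disagree with the known ground truth.
    wrong = sum(ask_model(q).strip().lower() != truth for q, truth in claims)
    return wrong / len(claims)

print(false_answer_rate(CLAIMS))  # 0.5 for the dummy model above

Tracking that rate over time, per topic, is what lets us talk about deception without ever settling what "intelligent" means.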
 
  • #212
PeterDonis said:
Only if the model even has a concept of "incorrect information". I don't think LLMs even have that concept. They're not designed to, since that concept requires having the concept of "information stored in my model" as separate from "the actual state of things in the world". Which in turn requires having those two concepts--and LLMs don't have the latter one. They only have the former.
Correct. I wasn't referring to the model deliberately lying while being aware of it; I meant that the model can generate text that we interpret as lies, having defined "lie"/deception as PeroK and Dale indicate.
 
  • #213
Eliminating all hallucinations, lies, and errors from models isn't something that can be achieved simply by modifying the models.

This reminds me of how companies automate today. The head of a company hires an engineer to automate their old factory. First, the engineer does an analysis, and then typically asks the company to modify certain aspects so that the factory and its processes are easier to automate. They might ask the company to reduce the variety of products it produces, or things like that.

If we want to build a more general AI that doesn't lie, hallucinate, or make mistakes, the first thing we would have to do is modify society so that our inputs are of higher quality. Only once we assume a society that generates inputs of the highest quality could we discuss whether problems still exist, and what we should do to fix them.
 
  • #214
javisot said:
the first thing we would have to do is modify society so that our inputs are of higher quality

This is going to have me chuckling for at least a couple days. 😂
 
  • #215
Grinkle said:
This is going to have me chuckling for at least a couple days. 😂
This is already being done, for example by prompt engineers, or when OpenAI publishes guides on good usage of ChatGPT. (Okay, the effort we invest in improving it isn't very great...)
 
  • #216
PeroK said:
If I had said, for example, that the ability to play the piano is an aspect of intelligence, then would your response be that "not all humans can play the piano"?
No, my response to that is that there are machines that play the piano. I agree that human piano playing requires intelligence; even though we don't have a definition of intelligence to help us in this discussion, I'd say any reasonable definition of intelligence would make it a requirement for human piano playing.

Machine piano playing does not. Piano playing is different from deception in that it's purely a mechanical skill that has no aspect of social motivation, at least in its most basic definition.
 
  • #217
Grinkle said:
No, my response to that is that there are machines that play the piano. I agree that human piano playing requires intelligence; even though we don't have a definition of intelligence to help us in this discussion, I'd say any reasonable definition of intelligence would make it a requirement for human piano playing.

Machine piano playing does not. Piano playing is different from deception in that it's purely a mechanical skill that has no aspect of social motivation, at least in its most basic definition.
Logic has nothing directly to do with the factual accuracy of the hypotheses.

It's quite ironic to be discussing AI and be bombarded with false human logic at every turn.

That said, a small majority of intelligent humans can think logically; it generally requires formal training. An AI, on the other hand, is expected to exhibit flawless logic.
 
  • #218
I committed to you that if you expanded on your point, I would respond, and I did.

Beyond that, I will accept the below as your final word. I don't think you are actually considering anything I'm saying, and if I take what you posted at face value, that won't change.


PeroK said:
From my point of view, there are no circumstances where I am going to defer to your homespun logic
 
  • #219
To put it in formal terms, my original argument was:

Major hypothesis: only intelligent things can lie (*)
Minor hypothesis: this LLM can lie.

Conclusion: this LLM is intelligent.

That logic is valid. The only way the conclusion can fail is if one or both of the hypotheses fail.

The conclusion does not fail if, as you suggested, some things that are intelligent don't lie.
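
In predicate-logic notation, just to make the structure explicit, this is universal instantiation followed by modus ponens:

$$\forall x\,\big(\text{CanLie}(x) \rightarrow \text{Intelligent}(x)\big), \quad \text{CanLie}(\text{LLM}) \;\therefore\; \text{Intelligent}(\text{LLM})$$

The form is valid whatever the terms mean; any disagreement has to be with the hypotheses themselves.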
 
  • #220
And my broader argument is that the more things AI can do, and the more emergent characteristics it develops, the more of those things are deemed not to be indicators of intelligence.

For example, LLMs can comfortably pass the Turing test. So, for the AI skeptics, the Turing test is no longer valid.

In other words, my major hypothesis, that lying requires intelligence, might have been accepted a few years ago. But it would not be accepted any more.
 
  • #221
PeroK said:
To put it in formal terms, my original argument was:

Major hypothesis: only intelligent things can lie (*)
Minor hypothesis: this LLM can lie.

Conclusion: this LLM is intelligent.

That logic is valid. The only way the conclusion can fail is if one or both of the hypotheses fail.

The conclusion does not fail if, as you suggested, some things that are intelligent don't lie.
It sounds more simplistic than logical, especially without definitions of "intelligent" or "lie". To link a lie to intelligence, you must show intention, knowledge, and awareness of what someone (or something) is doing. No one thinks a child, even though they are intelligent, is lying when an adult asks them to repeat the sentence "The apple is blue". It is the [intelligent] adult who is behind the lie.

You don't seem to understand how LLMs work, and you see some magic in them. There is none.
  • Information is fed to the LLM in the form of text. (That is already a very limiting factor on what the machine can do with this information.)
  • The machine breaks this information down into a bunch of 0s and 1s and looks for patterns, according to some parameters, to get some desired results.
  • It tries all possible ways until it finds the desired results.
  • It then converts the 0s and 1s back into text form and spits out the results.
If humans played poker without bluffing and we asked an LLM to play poker, it would find this new way of playing, and we would all be in awe of the LLM for finding it, leading everyone to use this newfound technique. But it only found the text needed to win the game. It wasn't smart: humans just never thought of it before. It did it by analyzing and finding the patterns that led to the desired goal, a pattern never noticed by humans before.

So, no, it doesn't require intelligence to apply this pattern-finding math technique. It is just a dumb machine that does what it is asked to do.
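
As a toy illustration of that pattern-finding (a bigram babbler on a made-up corpus, nowhere near a real LLM, but the same "patterns in, patterns out" with zero understanding):

Code:
import random
from collections import defaultdict

# Made-up toy corpus; a real model digests vastly more text.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Record which word follows which: pure pattern-counting, no meaning involved.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def babble(start: str, n: int = 8) -> str:
    words = [start]
    for _ in range(n):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(babble("the"))  # e.g. "the dog sat on the mat the cat sat"

It produces plausible-looking word sequences while "knowing" nothing about cats, dogs, or mats.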

I really prefer that we focus on it being a "dumb" machine for intelligent people rather than an "intelligent" machine for dumb people. The former implies "garbage in, garbage out", where human intervention is an important part of the process; the latter implies it works by itself to do the smartest thing, which sends the wrong message about the machine's capabilities.
 
  • #222
jack action said:
It sounds more simplistic than logical, especially without definitions of "intelligent" or "lie".
Logic doesn't require the terms to be defined. You only need definitions when you want to apply the logic to specific terms. So, yes, we would have to define "lie" and "intelligent".

Once a system is sophisticated enough to pass the Turing test, then I think it's valid to start asking questions about it regarding intelligence and deception.

If we restrict this discussion to systems that can pass the Turing test, then that excludes spurious comparisons with a pocket calculator or other simpler systems.

If you are familiar with Gödel's theorems, they only apply to mathematical systems powerful enough to include the arithmetic of the natural numbers. Simpler mathematical systems may be both consistent and complete.

We have a similar situation with AI. There is a point at which complexity can change things fundamentally.

Turing did work on this with the halting problem etc. There is more to this than your homespun theories, no matter how vehemently you believe that there cannot be.
 
  • #223
PeroK said:
To put it in formal terms, my original argument was:

Major hypothesis: only intelligent things can lie (*)
Minor hypothesis: this LLM can lie.

Conclusion: this LLM is intelligent.

That logic is valid. The only way the conclusion can fail is if one or both of the hypotheses fail.

The conclusion does not fail if, as you suggested, some things that are intelligent don't lie.
I think you're both right and wrong at the same time. It doesn't take intelligence to generate text that can be interpreted as a lie. A random text generator, if run long enough, could produce "the speed of light is greater than c," which is a lie, but it's a random text generator that lied without requiring any intelligence.

However, it does require intelligence to understand that such text contains a lie.
 
  • #224
PeroK said:
The only way the conclusion can fail is if one or both of the hypotheses fail.
I don't agree with your major hypothesis. I don't think intelligence is required to deceive in the sense the paper you referenced is using the term (a systematic pattern of emitting text that creates false beliefs in a human audience), unless we broaden the definition of "intelligence" so much as to make it basically meaningless--"intelligent" becomes equivalent to "can deceive in this broad sense".

However, if we are talking about possible dangers from AI, I also don't think it matters whether the deception is "intelligent" or not. What matters is how much weight humans are willing to put on text emitted by LLMs that has a significant probability of being false in systematic ways.
 
  • #225
PeterDonis said:
I don't think intelligence is required to deceive in the sense the paper you referenced is using the term
The authors are careful here. A snip from page 5 of the PDF:

"It is possible that the strategic behavior we document below is itself one more example of LLMs imitating patterns in text."

PeterDonis said:
I also don't think it matters whether the deception is "intelligent" or not.
The authors make similar points in the paper regarding the dangers they are wanting to flag in the publication.
 
  • #226
PeroK said:
And my broader argument is that the more things AI can do, and the more emergent characteristics it develops, the more of those things are deemed not to be indicators of intelligence.

For example, LLMs can comfortably pass the Turing test. So, for the AI skeptics, the Turing test is no longer valid.

In other words, my major hypothesis, that lying requires intelligence, might have been accepted a few years ago. But it would not be accepted any more.
Well, IMO that’s both a reasonable complaint and a good reason to not get bogged down in this argument. There are far more productive issues to discuss, particularly when one of the terms is subject to so much “drift”.

I think that before someone claims that an LLM is intelligent or not, they should specify the operational measure that they are using to make their claim. The Turing test does show up in the professional scientific literature. Any alternative definition should at least meet that criterion to be considered valid here.
 
  • #227
To the question "what does the Turing test actually measure?", ChatGPT replies:

"The Turing Test does not measure 'intelligence' in a deep or human sense. It measures something much narrower.

So what does it actually measure?

It measures a machine's ability to imitate human conversational behavior so well that a human judge cannot reliably tell it apart from a real person.

In other words, it evaluates:

• Conversational indistinguishability
• The ability to produce coherent, context-appropriate, human-like responses

What it does measure

✔ Convincing use of natural language
✔ Handling of conversational context
✔ The ability to deceive (in the technical sense) a human evaluator
✔ The appearance of understanding, not genuine understanding

What it does not measure

✘ Consciousness
✘ Deep semantic understanding
✘ Intentionality or mental states
✘ Genuine reasoning or general intelligence
✘ Creativity in a strong sense"
 
  • #228
WRT poker:
jack action said:
So, no, it doesn't require intelligence to apply this pattern-finding math technique. It is just a dumb machine that does what it is asked to do.
Carnegie Mellon University has built two models that play poker: Libratus (two-player no-limit Texas Hold'em) and Pluribus (six-player no-limit Texas Hold'em). Both beat professional human players. They taught themselves using game theory, and their strategies included bluffing. These models are special in that they have only partial information to work with. While they were designed to play poker, the same techniques can be applied to contract negotiations, cybersecurity, threat detection, and resource allocation.

Read: https://www.theguardian.com/science...ayer-game-for-first-time-plurius-texas-holdem

For details https://www.science.org/doi/10.1126/science.aay2400
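
For a flavor of how they taught themselves: the core primitive behind both bots is counterfactual regret minimization (CFR). Below is a minimal sketch of plain regret matching, the building block of CFR, in self-play on rock-paper-scissors; it is a drastically simplified stand-in for what Libratus and Pluribus actually run, but the average strategies do converge to the mixed Nash equilibrium:

Code:
import random

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
# PAYOFF[me][opponent], from my point of view
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def strategy_from_regret(regret):
    # Regret matching: play each action in proportion to its positive regret.
    pos = [max(r, 0.0) for r in regret]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / ACTIONS] * ACTIONS

def train(iters=200_000):
    regret = [[0.0] * ACTIONS for _ in range(2)]
    strat_sum = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iters):
        strats = [strategy_from_regret(regret[p]) for p in (0, 1)]
        acts = [random.choices(range(ACTIONS), weights=s)[0] for s in strats]
        for p in (0, 1):
            me, opp = acts[p], acts[1 - p]
            got = PAYOFF[me][opp]
            for a in range(ACTIONS):
                # Regret: how much better action a would have done than my play.
                regret[p][a] += PAYOFF[a][opp] - got
                strat_sum[p][a] += strats[p][a]
    # Average strategies approach the Nash equilibrium (1/3, 1/3, 1/3).
    return [[s / iters for s in strat_sum[p]] for p in (0, 1)]

print(train())

The real bots apply the same regret idea at every decision point in the game tree, with abstraction and sampling to cope with the enormous number of states.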
 
  • #229
javisot said:
A random text generator
Does not meet the definition in the paper, which is a systematic pattern of emitting text that creates false beliefs in a human audience.
 
  • #230
PeterDonis said:
Does not meet the definition in the paper, which is a systematic pattern of emitting text that creates false beliefs in a human audience.
Then suppose that this same random text generator over a long period of time generates many other lies, not just one. (What do we mean by systematic?)
 
  • #231
PeroK said:
LLMs can comfortably pass the Turing test.
Can you give some specific references? The ones I have seen (for example, this) are far too limited--the paper I just linked put a time limit of 5 minutes on the interaction. Turing's original intent was for the test to be open-ended, to last as long as the interrogator wanted. And of course in real life our interactions with both other humans and LLMs are open-ended.
 
  • #232
javisot said:
(What do we mean by systematic?)
A pattern that is not random. For more details, read the paper that was referenced.
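
To make "not random" a bit more concrete (a toy sketch, not the paper's actual methodology; the topics and error rates here are made up): a random babbler is wrong at roughly the same rate everywhere, while deception in the paper's sense concentrates falsehoods where they create the false beliefs:

Code:
import random

TOPICS = ["physics", "history", "finance"]

def random_liar(topic: str) -> bool:
    return random.random() < 0.05  # false ~5% of the time, on every topic

def systematic_liar(topic: str) -> bool:
    # Falsehoods concentrated on one topic: a pattern, not noise.
    return random.random() < (0.60 if topic == "finance" else 0.05)

def false_rates(liar, n: int = 10_000) -> dict:
    # Estimate the false-statement rate per topic.
    return {t: sum(liar(t) for _ in range(n)) / n for t in TOPICS}

print("random:    ", false_rates(random_liar))
print("systematic:", false_rates(systematic_liar))

The first profile is flat; the second spikes on one topic. That spike is the kind of systematic pattern the definition is pointing at.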
 
  • #233
javisot said:
What it does not measure

✘ Consciousness
✘ Deep semantic understanding
✘ Intentionality or mental states
✘ Genuine reasoning or general intelligence
✘ Creativity in a strong sense
Is there any way of measuring these things? To what extent can the average human exhibit all these characteristics?

ISTM that LLMs can demonstrate these as well as or better than the average human, considering that it is artificial intelligence.
 
  • #234
PeterDonis said:
the paper I just linked put a time limit of 5 minutes on the interaction.
If the test is limited to 5 minutes, then it should be pretty easy to discriminate between an LLM and a human. Just make the question involve something that a human cannot do in 5 minutes, like requiring a 10000-word response.
 
  • #235
PeterDonis said:
I don't agree with your major hypothesis. I don't think intelligence is required to deceive in the sense the paper you referenced is using the term (a systematic pattern of emitting text that creates false beliefs in a human audience), unless we broaden the definition of "intelligence" so much as to make it basically meaningless--"intelligent" becomes equivalent to "can deceive in this broad sense".

However, if we are talking about possible dangers from AI, I also don't think it matters whether the deception is "intelligent" or not. What matters is how much weight humans are willing to put on text emitted by LLMs that has a significant probability of being false in systematic ways.
If deception is a sign of intelligence, then biology, which is full of deception, should be very intelligent (due to selection, presumably).
Since selection works by selecting things that work best, in some way, at producing the next generation, it's not a deep intelligence. It might be considered a distributed intelligence.
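
A toy replicator-dynamics sketch makes the point: no individual intends anything, yet a deceptive trait that pays off spreads through the population (the fitness numbers are made up for illustration):

Code:
def evolve(generations: int = 100, p: float = 0.01):
    # Replicator dynamics: a trait's frequency grows with its relative fitness.
    w_deceptive, w_honest = 1.10, 1.00  # made-up fitness values
    for g in range(1, generations + 1):
        mean_w = p * w_deceptive + (1 - p) * w_honest
        p = p * w_deceptive / mean_w
        if g % 10 == 0:
            print(f"gen {g:3d}: deceptive fraction = {p:.3f}")

evolve()  # the deceptive fraction climbs toward fixation, with no intent anywhere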
 
