Is A.I. more than the sum of its parts?

  • Thread starter: webplodder
  • #271
Grinkle said:
The more I consider @Dale's approach, the more I like it. Stripping away all of the romance and drama around AI by simply talking about software products avoids all that.
Yeah, but even I got sucked into the "intelligence" debate.
 
  • #272
There is a big flaw in the Turing test: it assumes that the judge is an unaided human.

But what if the judge uses AI for assistance, or lets it do the job entirely? If AI is so capable, it should be able to detect AI, i.e. to detect the patterns coming from a neural network. There must be patterns in a neural network's output, otherwise the whole concept of a neural network wouldn't work. And if one is tempted to say that a neural network could be intelligent enough to fool another, why wouldn't that neural network be the one used for AI detection?

Then the human using AI to detect AI is still the most intelligent in the room.
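To make that idea concrete, here is a minimal sketch of what pattern-based detection could look like. Everything in it is an invented stand-in (real detectors typically score text with a language model's perplexity, not hand-made statistics like these):

```python
# A toy sketch of "using AI to detect AI" - illustration only. The
# features and thresholds below are invented; real detectors usually
# score text with a language model's perplexity instead.
import re
import statistics

def style_features(text: str) -> tuple[float, float]:
    """Return (type-token ratio, variance of sentence lengths)."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    ttr = len(set(words)) / max(len(words), 1)  # vocabulary diversity
    lengths = [len(s.split()) for s in sentences]
    burstiness = statistics.pvariance(lengths) if len(lengths) > 1 else 0.0
    return ttr, burstiness

def looks_machine_generated(text: str) -> bool:
    """Hypothetical rule: very uniform sentence lengths plus low
    vocabulary diversity => flag as machine text. Thresholds made up."""
    ttr, burstiness = style_features(text)
    return burstiness < 4.0 and ttr < 0.6

print(looks_machine_generated("The cat sat. The dog sat. The bird sat."))
```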

That is what is wrong with all the doomsday scenarios: they never consider that both "good" and "evil" sides can use the same tools.
 
  • #273
jack action said:
Then the human using AI to detect AI is still the most intelligent in the room.
You are conflating detection with intelligence here, no?

The Turing test can't be flawed in the manner you suggest because it is what it is by definition. It doesn't assume the judge is human, it's defined that way. As I read it, it's not defined to have anything to do with intelligence per se; that is our retro-fit in this thread.

There may be many approaches to sussing out the AI in the Turing test, and they don't all have to do with intelligence. Style, for example: I speculate that an AI will be more prone than a human to including examples to justify its response, even if both responses agree with each other conceptually.
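A fingerprint like that would be trivial to check mechanically, by the way. A toy sketch; the marker list and the idea that this separates human from AI text are purely my own assumptions, not a validated detector:

```python
# Count example-introducing phrases per 100 words - a hypothetical
# stylistic fingerprint, not a validated detector.
MARKERS = ("for example", "for instance", "such as", "e.g.")

def example_density(text: str) -> float:
    """Occurrences of example-markers per 100 words of text."""
    words = len(text.split())
    hits = sum(text.lower().count(m) for m in MARKERS)
    return 100.0 * hits / max(words, 1)

print(example_density("I just think it is wrong."))                       # 0.0
print(example_density("It fails, for example on edge cases such as 0."))  # 20.0
```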

jack action said:
That is what is wrong with all the doomsday scenarios: they never consider that both "good" and "evil" sides can use the same tools.
In my own scenario of society self-dumbing down, that's actually the key part of the scenario.
 
  • #274
Grinkle said:
The Turing test can't be flawed in the manner you suggest because it is what it is by definition. It doesn't assume the judge is human, it's defined that way.
From Wikipedia [bold emphasis mine]:
The Turing test, [...], is a test of a machine's ability to exhibit intelligent behaviour equivalent to that of a human. In the test, a human evaluator judges a text transcript of a natural-language conversation between a human and a machine. The evaluator tries to identify the machine, and the machine passes if the evaluator cannot reliably tell them apart.
Can the human evaluator use a tool to help him do the job? If he is nearsighted and both text transcripts look equally blurry to the point that he cannot distinguish them, has the machine passed the test? Or can the human evaluator wear glasses to help him see better? Another example: if two metal samples (one heat-treated, the other not) are indistinguishable to the naked eye but distinguishable under a microscope, are they considered the same, or is using a microscope considered cheating?

The problem with the Turing test is that everyone considers themselves an acceptable human evaluator. But if a machine can reliably tell them apart, can we still say the machine can exhibit intelligent behaviour equivalent to that of a human?
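As an aside, "cannot reliably tell them apart" can at least be given an operational reading. A minimal sketch, assuming the pass criterion is "the judge's accuracy over n trials is not significantly above chance" (the 0.05 cutoff is my choice, not anything from Turing):

```python
# One operational reading of "cannot reliably tell them apart": over n
# trials the judge's hit rate must not beat chance. The one-sided
# binomial p-value is computed from scratch; the 0.05 cutoff is assumed.
from math import comb

def binom_p_at_least(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def machine_passes(correct_guesses: int, trials: int) -> bool:
    """Pass if the judge's accuracy is not significantly above chance."""
    return binom_p_at_least(correct_guesses, trials) > 0.05

print(machine_passes(55, 100))  # True: 55/100 is within chance
print(machine_passes(70, 100))  # False: the judge reliably spots the machine
```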
 
  • #275
I think I am not understanding you properly - the below is to help me understand better.

jack action said:
Can the human evaluator use a tool to help him do the job?
Imo, yes. I speculate that if one could have asked Turing whether the evaluator could use the Encyclopedia Britannica (the Google of his day) during the evaluation, he would have shrugged and said something like, "Whatever, sure, if you want, but that's not how I talk to humans, myself."

jack action said:
The problem with the Turing test is that everyone considers themselves an acceptable human evaluator.
I am missing your point. Why is this a problem? I agree that everyone considers themselves an acceptable human evaluator.

jack action said:
But if a machine can reliably tell them apart, can we still say the machine can exhibit intelligent behaviour equivalent to that of a human?
Maybe, and if a machine could do this without relying on some deterministic, trivial, style-based fingerprint in the discourse, I'd be very impressed with the machine. How are you connecting that with the Turing test? It's an entirely different test with an entirely different motivation, imo. It's an interesting test, also, imo.
 
  • #276
jack action said:
Can the human evaluator use a tool to help him do the job? If he is nearsighted and both text transcripts look equally blurry to the point that he cannot distinguish them, has the machine passed the test?
I think this is missing the forest for the trees.

The Turing Test does not place conditions on the tester (except that they are human). It can be assumed that the test's setup involves ideal conditions (ones that best facilitate an accurate test of the challenge itself - sans confounding factors).

It's not about whether the tester had a bad day, or dropped out of grade school, or whatever. It's just: given its best shot, can a human tell the difference?
 
  • Likes: Grinkle
  • #277
jack action said:
From Wikipedia [bold emphasis mine]:

Can the human evaluator use a tool to help him do the job? If he is nearsighted and both text transcripts look equally blurry to the point that he cannot distinguish them, has the machine passed the test? Or can the human evaluator wear glasses to help him see better? Another example: if two metal samples (one heat-treated, the other not) are indistinguishable to the naked eye but distinguishable under a microscope, are they considered the same, or is using a microscope considered cheating?

The problem with the Turing test is that everyone considers themselves an acceptable human evaluator. But if a machine can reliably tell them apart, can we still say the machine can exhibit intelligent behaviour equivalent to that of a human?
The human judge represents the ability to know the reality of things; that's why the judge is not an AI.
 
  • #278
Grinkle said:
I am missing your point. Why is this a problem? I agree that everyone considers themselves an acceptable human evaluator.
Well, anyone who wants to see intelligence in a machine can just say: "It is intelligent because it fooled me." Not a very scientific method in my opinion.
DaveC426913 said:
It's just: given its best shot, can a human tell the difference?
But can "its best shot" be done with a tool he made? What is the difference between those two statements:
  1. Can a human fly? No. What if he builds and uses an airplane to do so? Yes.
  2. Can a human tell a machine and a human apart in a conversation? No. What if he builds and uses an AI machine to do so? Yes.
The thing about the second statement is that, according to the Turing test, the answer determines the state of the machine evaluated: it either can or cannot exhibit intelligent behaviour equivalent to that of a human.

javisot said:
The human judge represents the ability to know the reality of things, that's why the judge is not an AI.
But if an AI that can exhibit intelligent behaviour equivalent to that of a human exists, doesn't it also mean that the same AI should be able to know the reality of things and be the evaluator in a Turing test? Therefore, I arrive at the paradox that if I use AI against itself and it sees the difference, then the machine does not pass the Turing test.

Or is the Turing test not really measuring intelligence, but just the ability to recognize patterns, one of the characteristics of intelligence?

That being said, why should a test proposed in 1950 as a thought experiment by a man who died in 1954, i.e. someone who had no idea what a modern computer can do, be taken seriously by anyone? Again, no serious AI researchers take it seriously:
jack action said:
Mainstream AI researchers argue that trying to pass the Turing test is merely a distraction from more fruitful research. Indeed, the Turing test is not an active focus of much academic or commercial effort—as Stuart Russell and Peter Norvig write: "AI researchers have devoted little attention to passing the Turing test".
Only doomsdayers use it to sell their point.
 
  • Likes: Dale
  • #279
jack action said:
Or is the Turing test not really measuring intelligence,
No, it is not measuring intelligence. It was not intended to measure intelligence; this is the crux of why you are getting so much pushback. Can you find any reference to Turing saying he intended his test to measure intelligence?
 
  • Likes: russ_watters
  • #280
jack action said:
That being said, why should a test proposed in 1950 as a thought experiment by a man who died in 1954, i.e. someone who had no idea what a modern computer can do, be taken seriously by anyone? Again, no serious AI researchers take it seriously:
Because the whole point is that it's not about what goes on inside; it's about results.

If it is so sophisticated that you can't tell from the outside whether it's a human or an AI, then what - really - is the difference?
 
  • #281
jack action said:
But if an AI that can exhibit intelligent behaviour equivalent to that of a human exists, doesn't it also mean that the same AI should be able to know the reality of things and be the evaluator in a Turing test? Therefore, I arrive at the paradox that if I use AI against itself and it sees the difference, then the machine does not pass the Turing test.
Yes, but the original test presentation omits that and directly assumes a human judge who knows the reality. I suppose you can replace the human judge with AGI.
 
  • #282
Intelligence tests measure our ability to solve intelligence tests, plain and simple.

Let's say we take an intelligence test and get a certain score. Let's say we're given feedback on what we did right and wrong on the test. Then we repeat the exact same test, and now our memory (having learned the answers) allows us to get a better score. Are we more intelligent?

Here I'd like to emphasize: if someone can provide a lot of correct information in conversation and is capable of winning a general-knowledge quiz, that is specifically an exercise in memory, not exclusively in intelligence.
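The machine-learning version of that point is memorization versus generalization. A minimal sketch with scikit-learn (the dataset is synthetic and the exact scores will vary; this is an analogy, not a claim about intelligence tests themselves):

```python
# Memorizing answers vs generalizing, illustrated with scikit-learn.
# A tree that memorizes its training data scores perfectly when given
# the "same test" again, but worse on unseen questions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("same test again:", model.score(X_tr, y_tr))   # ~1.0: pure memory
print("new, unseen test:", model.score(X_te, y_te))  # lower: generalization
```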

Personally, I understand human intelligence as the ability to optimally manage a certain quality and quantity of information. I mean, I don't believe intelligence is solely about having a lot of increasingly complex information stored, but rather the ability to manage that information.

The Turing test measures a machine's ability to generate information as complex and interconnected as that which an average human being can generate. (ChatGPT says: "A system can pass the Turing test without being intelligent, and can be intelligent without passing the Turing test.")
 
  • #283
javisot said:
Intelligence tests measure our ability to solve intelligence tests, plain and simple.

Let's say we take an intelligence test and get a certain score. Let's say we're given feedback on what we did right and wrong on the test. Then we repeat the exact same test, and now our memory (having learned the answers) allows us to get a better score. Are we more intelligent?
That's not really the way they work.

A well-designed intelligence test will not be one that you get to take a second time and have it be identical.

They are usually questions of a kind that you won't have encountered previously. They test your ability to think laterally and to generalize - to apply general problem-solving skills to problems you have not seen before.

Example: If you see a 3D mapping puzzle that asks you which flat object can be folded into a given 3D shape, finding the solution once doesn't mean you know the answer to the next one.

Solving this, on the first test:
[attached image: a net-folding puzzle]

doesn't mean you have already solved this, on the second:
[attached image: a different net-folding puzzle]
You will need to internalize the general principles. And that's the hallmark of intelligence.

javisot said:
I don't believe intelligence is solely about having a lot of increasingly complex information stored, but rather the ability to manage that information.
Specifically, to derive principles that can be applied to new problems, or better yet, a whole class of problems.
 
