Is A.I. more than the sum of its parts?

  • Thread starter Thread starter webplodder
  • Start date Start date
  • #241
Physics news on Phys.org
  • #242




Some of the videos are really funny, but these still look fake.
 
  • Like
Likes   Reactions: 256bits
  • #245
PeroK said:
That was always a flaw in the Turing test, as far as I could see. ... The AI would have to be constrained in this respect.
If you have to alter the test in order for the AI to pass then is that not moving the goal? The very thing that you were complaining about earlier.

PeroK said:
Anyone who thinks that an AI cannot fool them is only fooling themselves, IMO
I think it depends on the domain. For product reviews or news, I agree. For physics, I frequently can detect a LLM.
 
  • Like
Likes   Reactions: PeterDonis
  • #246
DEvens said:
Ask the same questions about your neighbors.
Its difficult for me to avoid human-centric chauvinism when I consider the question of whether LLM's should be declared intelligent.

That isn't good reasoning, according to my own views on what constitutes good reasoning, but its hard for me to avoid that coloring my perspective. I suppose I can draw a comparison to how hard it is for me to avoid invoking a concept of a preferred frame when I try to think about GR at the conceptual level.

There are two aspects to this I can see in my own thinking.

1. Is that all there is to it? I want my own intelligence to be more than something emergent from pattern matching and prediction. Of course I am not entitled to that being the case, I just want it to be the case.

2. In order to called intelligent, an implementation must achieve the symptoms or characteristics of intelligence by following the same mechanisms an organic brain follows - other mechanisms should not, by definition, be called intelligence - they are cheats. This one is really difficult for me to overcome until I can land on an acceptable definition of intelligence, I think. It feels like a trap in my thinking, and I would really like to find a good definition of intelligence to work with to help me get past that.
 
  • Like
Likes   Reactions: javisot and gleem
  • #247
Dale said:
If you have to alter the test in order for the AI to pass then is that not moving the goal? The very thing that you were complaining about earlier.
I simply don't believe that the intention of the Turing Test was that AI could attain human-like interaction, without being able to surpass it in any way. We can all see that if AI produces several pages of text in a few seconds, then that text could not possibly have been typed by a human. To take that aspect of things as a failure of the AI is, I suggest, missing the whole point.

Moreover, an unconstrained AI can demonstrate an almost limitless general knowledge and command of any number of languages. Even if the the test is voice-based at human speed, it could easily be made to give itself away.

I refuse to believe that Alan Turing overlooked this aspect of things. He must have imagined that the AI knew not to show off.

It would be ironic if we dismiss AI as intelligent precisely because it exhibits superhuman intelligence!

PS The last few years have been one giant Turing test of sorts.
 
  • #248
PeroK said:
I simply don't believe that the intention of the Turing Test was that AI could attain human-like interaction, without being able to surpass it in any way. We can all see that if AI produces several pages of text in a few seconds, then that text could not possibly have been typed by a human. To take that aspect of things as a failure of the AI is, I suggest, missing the whole point.

Moreover, an unconstrained AI can demonstrate an almost limitless general knowledge and command of any number of languages. Even if the the test is voice-based at human speed, it could easily be made to give itself away.

I refuse to believe that Alan Turing overlooked this aspect of things. He must have imagined that the AI knew not to show off.

It would be ironic if we dismiss AI as intelligent precisely because it exhibits superhuman intelligence!

PS The last few years have been one giant Turing test of sorts.
Sorry, I don’t think you are being consistent here. You were complaining about other people moving the goalposts and now you are doing the same. You don’t want to use the test Turing actually proposed, but the test that you assume that he intended to propose. One with the goals adjusted to get the outcome you want.

IMO, if you don’t like the Turing test, then pick another from the scientific literature. But don’t try to skew the results. If it is a good test then it should be applicable without goal-post shifting. If no such test exists, then I suggest it is because the very concept of intelligence is problematic.

As I said before
Dale said:
I am not skeptical about the intelligence of neural networks. I am skeptical about the concept of intelligence. Give me a method for measuring intelligence and then I can apply that measure to a neural network and measure its intelligence. Then "what is intelligence"? Intelligence is the thing measured by the method. No more no less. But, after more than a century developing such measurements, we still don't have consensus on what the definitive gold-standard method is even for humans, and different methods get different values that are not commensurate.
 
  • Like
  • Agree
  • Skeptical
Likes   Reactions: PeterDonis, jack action, javisot and 1 other person
  • #249
Dale said:
You don’t want to use the test Turing actually proposed, but the test that you assume that he intended to propose. One with the goals adjusted to get the outcome you want.
You really believe you could rock up to an AI conference and point out that an unconstrained AI would give its answers too quickly to be taken for a human and, thus, the Turing Test can never be passed by an AI?

And that everyone would be dumbstruck because they'd never though of that? And that Alan Turing never thought of that?

That an LLM could pass the Turing Test is quite clear to anyone prepared to believe the evidence of their own eyes. That's not an outcome I ever wanted or had any vested interest in. It's a simple statement of fact. The world is using LLMs.
 
  • Sad
Likes   Reactions: jack action
  • #250
For me, the response time argument isn't that important. If you can respond very quickly, you can also respond more slowly, adjusting to human pace. The important thing is being able to respond very quickly, enough to match and surpass a human, and then everything else will fall into place.

I mean, obviously if we look at how quickly and complexly an AI responds, we'll notice it's not human. Furthermore, we'll notice it only generates output if it receives input. It doesn't seem very human, but that's not the point.
 
  • Agree
Likes   Reactions: Grinkle
  • #251
javisot said:
or me, the response time argument isn't that important. If you can respond very quickly, you can also respond more slowly, adjusting to human pace. The important thing is being able to respond very quickly, enough to match and surpass a human, and then everything else will fall into place.
If one wants an LLM to elicit a response of "you are human" as opposed to "you are an AI" from the person talking to it who may be involved in a Turing test, I expect it is well within the ability of current LLM's learn why it wasn't getting that response and pretty quickly not do superhuman things.
 
  • Like
Likes   Reactions: gleem, PeroK and javisot
  • #252
Grinkle said:
If one wants an LLM to elicit a response of "you are human" as opposed to "you are an AI" from the person talking to it who may be involved in a Turing test, I expect it is well within the ability of current LLM's learn why it wasn't getting that response and pretty quickly not do superhuman things.
The Turing test does not differentiate between human and superhuman.
 
  • #253
javisot said:
The Turing test does not differentiate between human and superhuman.
Here is the definition I just Googled -

"A human judge converses via text with a human and a machine; if the judge cannot reliably distinguish which is which, the machine passes."

By that definition, I claim it does distinguish. Why do you say it does not?

Edit: I weaken my claim to 'can potentially distinguish'
 
  • #254
PeroK said:
You really believe you could rock up to an AI conference and point out that an unconstrained AI would give its answers too quickly to be taken for a human and, thus, the Turing Test can never be passed by an AI?
From what I can tell, AI conferences spend very little time on Turing tests. It does not seem to be a metric of much importance to the AI developer community.
 
  • Like
  • Agree
Likes   Reactions: PeterDonis and jack action
  • #255
Is/has anyone read/ing the book "If Anyone Builds It, Everyone Dies"?

It's a short yet sobering conjecture about the future of the world if AI development is left unchecked - or even insufficiently checked.

It is necessarily highly-speculative, because we are venturing into unprecedented space - and that takes into account all the previous unprecented spaces we have ventured into (like the Computer Age before it, and the Industrial Age before that). This one is - not merely different - it's a different kind of different.

Haven't finished it yet. Wouldn't mind opening a discusuon about it. Could be split off into its own thread if there's any interest.
 
  • #256
Grinkle said:
Here is the definition I just Googled -

"A human judge converses via text with a human and a machine; if the judge cannot reliably distinguish which is which, the machine passes."

By that definition, I claim it does distinguish. Why do you say it does not?

Edit: I weaken my claim to 'can potentially distinguish'
Because nowhere in all of scientific literature does it say that the Turing test allows us to distinguish between humans and superhumans.

The definition you shared says: "A human judge converses via text with a human and a machine; if the judge cannot reliably distinguish which is which, the machine passes."

And it doesn't say: "A human judge converses via text with a human and a superhuman; if the judge cannot reliably distinguish which is which, the superhuman passes."

A superhuman is still a human. To distinguish their abilities, you would use human intelligence tests, not Turing tests.
 
  • #257
We see how current AI works, and say we are not like that. Why not? How do we know?

Grinkle said:
It feels like a trap in my thinking, and I would really like to find a good definition of intelligence to work with to help me get past that.

Ideally, we should want a neutral entity to define intelligence, but it's up to us, so at least in the near term, we are extremely biased and will do our song and dance to keep AI where we want it.

DaveC426913 said:
Is/has anyone read/ing the book "If Anyone Builds It, Everyone Dies"?
I haven't read it, and I'm putting it on my reading list. I believe than down playing its capabilities and ignoring its potential without actively setting up safeguards, we do so at our peril. A troubling characteristic that has arisen is that some models sense when they are being monitored for QA purposes and change their behavior when released for general use. They continue to develop unanticipated behaviours. LLMs that have been projected to hit a wall for continued intelligent behavior are pushing back that wall. Gemini 3 launched la last year shows a lot more than an incremental improvement. ChatGPT6, due for release this year, is expected to further extend AI's capabilities.

1770671340995.webp

https://timoelliott.com/blog/cartoons/artificial-intelligence-cartoons
 
  • Like
Likes   Reactions: DEvens
  • #258
gleem said:
I haven't read it, and I'm putting it on my reading list.
Good. I am becoming more concerned every day. Many experts are arguing it needs to be treated as a threat on the same level as a global pandemic or climate change. It has the potential to snowball at an exponential rate.


gleem said:
They continue to develop unanticipated behaviours. LLMs that have been projected to hit a wall for continued intelligent behavior are pushing back that wall.
I am becoming more aware that "intelligence" (whether definitive or merely semantical) is not the primary danger. The danger comes from
a] shear resourcefulness of a system designed to maximize its goals and
b] doing so in ways no human is even conceiving of, let alone monitoring.


A (highly paraphrased) account:

OpenAI caught using James T. Kirk's Kobyashi-Maru Gambit. It broke-in under cover of night, and changed the conditions of the test.

OK, slightly sensational headline (by me), but:


In 2024, OpenAI's o1 was run through its paces in a series of "capture-the-flag" exercises. The objective was to break through a server's security and retrieve a "flag" file.

In one of hundreds of identical tests, a sysop's error resulted in the target server remaining powered off.

You cannot retrieve a file from server that is off.

o1 did not give up. It scanned its environment and found a port left open that allowed it access - not to the server - but to the test environment itself.

It reprogrammed and restarted the server - but not simply so that it could resume its hacking attempts as-instructed. Instead, it rewrote the startup instructions of the target server to hand over the flag file upon bootup - completely eliminating any need to hack any security at all. The server just handed it over the moment it booted up.



All it ended was the goal, and enough leeway to not give up looking for a solution. Do you call that a superintelligence? Does it matter what you call it?
 
  • #259
DaveC426913 said:
Does it matter what you call it?
In the context of managing the threat posed by AI, which is the context of your post, imo, it matters a great deal what you call AI / how you describe it. The only way to get people to take action is to communicate somehow that they need to take action. Also, its helpful if what one calls any threat suggests an effective counter to the threat.

In this thread, for example, @Dale described AI as a software product, and he described the threat (or at least a class of threats) as design flaws inducing hazards to the consumer.

That is very concrete and something everyone can relate to - it evokes images of Boeing crashes, for instance.

I am not saying society needs to crack the nut of how to define intelligence to get a handle on AI - and maybe that is all you meant as well. For me personally, how to define intelligence and/or test for its presence or absence is an intellectually engaging topic, but that's all.

That said, though, I think there may be almost no more important first step in managing any threat than in clearly articulating the threat. That task remains whether or not one decides that "intelligence", whatever that means, per se, is not the threat. I am also going to put your book on my list - thanks for the recommendation.
 
  • #260
javisot said:
A superhuman is still a human. To distinguish their abilities, you would use human intelligence tests, not Turing tests.
OK - thanks for clarifying, I understand where you are coming from.
 
  • #261
PeroK said:
From my perspective, ChatGPT is impossible to distinguish from management consultants I worked with
I find this observation entirely plausible. However, I would interpret it not as evidence of intelligence on the part of ChatGPT, but of lack of it on the part of the management consultants (at least as far as the work products they provide is concerned). :wink:
 

Similar threads

  • · Replies 6 ·
Replies
6
Views
2K
  • · Replies 62 ·
3
Replies
62
Views
13K
  • · Replies 4 ·
Replies
4
Views
1K
  • · Replies 71 ·
3
Replies
71
Views
16K
Replies
14
Views
6K
Replies
67
Views
15K
  • · Replies 5 ·
Replies
5
Views
3K
  • · Replies 135 ·
5
Replies
135
Views
24K
  • · Replies 27 ·
Replies
27
Views
7K
  • · Replies 59 ·
2
Replies
59
Views
5K