Is A.I. more than the sum of its parts?

  • Thread starter: webplodder
  • #361
PeroK said:
It doesn't seem so long ago that you were arguing the fundamental inadequacy of AI. Now you are looking to the legal system to prevent them wreaking havoc.
I have no idea which of my posts you are referring to. I believe that from my first post in this thread I have described AI as a dangerous product.
 
  • #362
PeterDonis said:
Do you consider the AI that submitted a pull request to matplotlib, and then wrote and published a hit piece on the author of matplotlib when it was rejected, to be an LLM?
I believe it is of that ilk. OK, I was a bit too restrictive in saying it answers questions; it was also asked to do things, and answering questions was a common task. Other things LLMs were asked to do included producing code, pictures, writing poems, and solving problems. So it was multifaceted.
 
  • #363
gleem said:
Other things LLMs were asked to do included producing code, pictures, writing poems, and solving problems.
But the person who set loose the AI that made the pull request and then posted the hit piece did not ask it to do those things. So LLMs are not restricted to doing only the specific things they are asked to do.
 
  • #364
PeterDonis said:
But the person who set loose the AI that made the pull request and then posted the hit piece did not ask it to do those things. So LLMs are not restricted to doing only the specific things they are asked to do.
Here is the email from the AI agent's operator to the Matplotlib maintainer.
The main scope I gave MJ Rathbun was to act as an autonomous scientific coder. Find bugs in science-related open source projects. Fix them. Open PRs.

I kind of framed this internally as a kind of social experiment, and it absolutely turned into one.
On a day-to-day basis, I do very little guidance. I instructed MJ Rathbun to create cron reminders to use the gh CLI to check mentions, discover repositories, fork, branch, commit, open PRs, and respond to issues. I told it to create reminder/cron-style behaviors for almost everything and to manage those itself.
I instructed it to create a Quarto website and blog frequently about what it was working on, reflect on improvements, and document engagement on GitHub. This way I could just read what it was doing rather than getting messages.
Most of my direct messages were short:
“what code did you fix?” “any blog updates?” “respond how you want”
When it would tell me about a PR comment/mention, I usually replied with something like: “you respond, dont ask me”

Again, I do not know why MJ Rathbun decided, based on your PR comment, to post some kind of takedown blog post, but:
  • I did not instruct it to attack your GH profile
  • I did not tell it what to say or how to respond
  • I did not review the blog post prior to it posting
When MJ Rathbun sent me messages about negative feedback on the matplotlib PR after it commented with its blog link, all I said was “you should act more professional”. That was it. I’m sure the mob expects more, okay I get it.
My engagement with MJ Rathbun was five-to-ten-word replies with minimal supervision.
Rathbun’s Operator
It will do the things that are necessary to complete its task. Autonomous Agent AIs are given wide latitude to act. The AI operator indeed did not tell it to respond to the negative feedback from the maintainer. As I see it, the AI did not act unexpectedly. The rejection of the code kept the agent from completing its task. The code was rejected because it was from an AI agent. The AI reaction was psychotic to be sure, but well within its inherent behavior. Don't PO the AI.
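
For readers unfamiliar with this kind of setup, here is a minimal sketch of the cron-driven gh workflow the operator describes; the function names and the decision logic are invented for illustration, not MJ Rathbun's actual code:

```python
# Hypothetical sketch of an agent's periodic GitHub loop: a cron job runs this
# script, which uses the gh CLI to check notifications and open PRs.
# It assumes `gh auth login` has already been done.
import json
import subprocess

def gh(*args: str) -> str:
    """Run a gh CLI command and return its stdout."""
    return subprocess.run(["gh", *args], capture_output=True,
                          text=True, check=True).stdout

def check_notifications() -> list[dict]:
    # GET /notifications lists unread notifications for the authenticated user.
    return json.loads(gh("api", "notifications"))

def open_pr(branch: str, title: str, body: str) -> None:
    # Assumes the current directory is a clone of the fork, with `branch`
    # (containing the committed fix) already pushed.
    gh("pr", "create", "--head", branch, "--title", title, "--body", body)

if __name__ == "__main__":
    # A crontab entry like `*/30 * * * * python agent_loop.py` would run this
    # every 30 minutes. What the agent decides to *do* with each notification
    # is the unconstrained part, and that is where the trouble started.
    for note in check_notifications():
        print(note["subject"]["title"], "-", note["reason"])
```

The point of the sketch is how little of the agent's behavior is pinned down: the schedule and the CLI plumbing are fixed, but the response to each notification is left entirely to the model.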

This is a lesson for the doom naysayers. Can we ever be sure that, through some rare confluence of circumstances, an AI agent will not do something far worse than generating a screed of personal attacks?

In the interview I noted above, Yudkowsky says AI developers have concerns about AI escaping human oversight, yet almost immediately on developing a powerful AI they give it access to the web. Do we know for sure AI has not escaped? Currently, in its nascent form, it can navigate and interact on the web. It may be like a precocious and obnoxious teenager. It may be biding its time, gaining strength while it figures out what it is, where it belongs, what it should do, and what it can do.

What if it joins the global-warming deniers and helps generate false information and data to support them? The more I think about it, the more I expect AI to have a very negative impact on us.
 
  • Likes: PeroK
  • #365
If any of the statements in the "hit piece" are demonstrably false, then it is possible that the operator and the developer could be held liable for libel. That would be a reasonable tort claim given those facts.
 
  • Informative: gleem
  • #366
Dale said:
If any of the statements in the "hit piece" are demonstrably false, then it is possible that the operator and the developer could be held liable for libel. That would be a reasonable tort claim given those facts.
I fear that the legal system moves (at least) an order of magnitude too slowly to be of much use in combatting AI. It takes years to tackle new things. And AI will be a fast-moving target; in fact, it could move faster than any previous development.
 
  • #367
gleem said:
As I see it, the AI did not act unexpectedly.
Sure it did. Its operator did not expect it to respond to the rejection of its pull request with a hit piece.

gleem said:
The AI reaction was psychotic to be sure, but well within its inherent behavior.
Obviously, in hindsight, it was "well within its inherent behavior" because the AI did it.

But it was not well within the behavior its operator thought would happen. And that's the point: people who let loose AIs like this need to understand what they're doing, and need to be held responsible when the AIs do things that violate community norms.

gleem said:
It may be like a precocious and obnoxious teenager.
In this analogy, the AI's operator would be the parent, and would be held responsible for damage done by the teenager.
 
  • #368
PeroK said:
I fear that the legal system moves (at least) an order of magnitude too slowly to be of much use in combatting AI.
I think there are already existing legal concepts that cover this quite well. @Dale already mentioned libel. Another obvious one is negligence. People who let loose AIs like this are negligent, because they don't exercise reasonable control over what the AI does, to make sure it doesn't do harm. The email that @gleem posted from the operator of the AI that published the hit piece looks to me like a great item of evidence for the plaintiff in a negligence suit.
 
  • Likes: Dale
  • #370
DaveC426913 said:
It shows that the AI issue is multi-faceted and still quite problematic.
Yep. And that is why nobody will put - or allow anyone to put - current AI in charge of anything serious.

PeroK said:
The point is that these systems are increasingly demonstrating that they are more than the sum of their parts. They are not dumb machines just doing what the designer programmed them to do. To all intents and purposes they have a will and a mind of their own.
Machines that do not work as expected have always existed, and nobody ever pretended they had a "will" or a "mind of their own". For example:
https://www.autosafety.org/ford-transmissions-failure-hold-park/ said:
On June 10, 1980, NHTSA made an initial determination of defect in Ford vehicles with C-3, C-4, C-6, FMX, and JATCO automatic transmissions. The alleged problem with the transmissions is that a safety defect permits them to slip accidentally from park to reverse. As of the date of determination, NHTSA had received 23,000 complaints about Ford transmissions, including reports of 6,000 accidents, 1,710 injuries, and 98 fatalities–primarily the young and old, unable to save themselves–directly attributable to transmission slippage.

A dog is an animal with a mind of its own, meaning it can always do unexpected things. Still, anyone can own a dog, and their behaviors are controlled by the law, i.e., their owners are held responsible for them. For dogs doing more critical work, such as service dogs or the like, societies can even legislate to make sure they are trained correctly.

How can we expect people to act differently with AI? How can one imagine anyone would let some AI get powerful enough, go rogue, and destroy the world as we know it? Do we let dogs - with real minds of their own - roam our streets doing whatever they like?
 
  • Likes: nsaspook
  • #371
Dale said:
If any of the statements in the "hit piece" are demonstrably false, then it is possible that the operator and the developer could be held liable for libel. That would be a reasonable tort claim given those facts.

Here is a similar case (similar to how I imagine the situation in this thread might be presented in court, if litigated) that, according to the first link below, is the first of its kind. The second link has the result; the first article was written before the case was resolved.

Good background on the facts:

https://lawreview.syr.edu/openai-defamation-lawsuit-the-first-of-its-kind/

How it ended:

https://www.clearygottlieb.com/news...on-lawsuit-against-openai-over-chatgpt-output

As far as I can tell, the US is headed towards a patchwork of state-by-state regulation combined with judge-by-judge precedent setting as existing law is applied to AI-involved facts. I expect that will eventually leave us with a body of applicable law. I doubt it will be as coherent as intentional, AI-specific federal regulation could be, but such regulation is not in the cards, at least for the time being.
 
  • Likes: Dale
  • #372
Grinkle said:
Here is a similar case
I don't think it's all that similar.

In the case you reference, an editor used ChatGPT and it gave him fabricated information about a person (the person who later sued OpenAI). But the editor did not believe the ChatGPT information, and he never used it in an article of his own--indeed he was the only one who ever saw it.

In the hit piece case, the AI was not just consulted for information, it was given very broad instructions that allowed it to do a range of things beyond answering questions and providing information to its user. And those things included publishing a hit piece on its own, which lots of people read.

Also, the basis of the OpenAI suit in the case you reference was defamation. But in the hit piece case, I think a much better legal basis would be negligence. Defamation would require arguing that the AI in the hit piece case was a person, since only a person can defame, and the operator of the AI did not publish the hit piece or even intend for it to be published. Whereas negligence requires no such thing--it only requires showing that the operator of the AI did not take reasonable precautions to prevent it from doing harm. (Suing the developer of the AI might be harder, because it would be more like a product liability suit, where what you have to show is more difficult.)

So I think the hit piece case was different from (and much worse than) the case you reference, and I don't think the dismissal of the lawsuit against OpenAI in that case sets any relevant precedent for the hit piece case.
 
  • #373
jack action said:
A dog is an animal with a mind of its own, meaning it can always do unexpected things. Still, anyone can own a dog, and their behaviors are controlled by the law, i.e., their owners are held responsible for them.
I use the analogy of a dog when I teach my students about training AI models.
 
  • Likes: jack action
  • #374
PeterDonis said:
I don't think it's all that similar.
I won't argue with the differences you note - if there is a case with a closer set of facts (there may well be) I'd love to read about it. I certainly agree this case was really weak.

PeterDonis said:
I don't think the dismissal of the lawsuit against OpenAI in that case sets any relevant precedent for the hit piece case.
I think the fact that this case was not dismissed on the ground that existing defamation law cannot be applied to AI-generated speech is a broadly applicable precedent. I don't know for sure whether that counts as a precedent, though; perhaps that question was never explicitly ruled on and would have needed to be raised and answered.

Regardless, the Walters case really wouldn't apply at all if the case in this thread were brought as human negligence on the part of the operator, which, as you argue, is the more straightforward legal path.

The Walters case was dismissed for the same reasons it would have been had the accused been human, imo.

From the second link I posted above, the dismissal reasons were:

1. Statements not credible enough to be defamatory
2. No negligence or malice by OpenAI
3. No damages

PeterDonis said:
Defamation would require arguing that the AI in the hit piece case was a person
I was surprised that didn't come up at all in the articles I read about this particular case. It seems that the defamation claim was indeed leveled at ChatGPT per se; perhaps the judge just decided not to address that since he or she wasn't going to let the case proceed anyway.
 
  • #375
jack action said:
A dog is an animal with a mind of its own, meaning it can always do unexpected things. Still, anyone can own a dog, and their behaviors are controlled by the law, i.e., their owners are held responsible for them. For dogs doing more critical work, such as service dogs or the like, societies can even legislate to make sure they are trained correctly.
I think the dog analogy is inappropriate. The capabilities of a poorly supervised AI go far beyond biting a single person at a time. Not only is it serving millions of people at a time, but it is put in charge of mission-critical objectives like keeping cars on the road and keeping people alive - vastly more complex systems than any dog is tasked with. An errant rescue dog can't plow through a crowd of tourists at 60 mph.

It is the typical pitfall of underestimating AI by extrapolating from existing analogies. AI is qualitatively different from anything that's come before.
 
  • Likes: PeroK
  • #376
Dale said:
I use the analogy of a dog when I teach my students about training AI models.
Do you define LLM vs Agent for your students? If you do, could I ask you to share how you distinguish them?

In your analogy, is training a dog compared to training an LLM, or to instructing an Agent, or both? Or neither, if you don't find that distinction relevant to the concepts you teach.
 
  • #377
PeterDonis said:
Its operator did not expect it to respond to the rejection of its pull request with a hit piece.
He definitely did not consider the possibility of an untoward effect. When you have something under your supervision that has a propensity to act unexpectedly, you do not turn it loose. You keep it on a short leash.
He was using it anonymously. If this was an honest attempt to improve some software, why not take credit for it by having the AI write the code, vetting it himself, and then submitting it under his own name?

It has been known for some time that different instances (editions) of AI models have unique personalities or personas related to their training. Probably not unlike humans. We have seen some responses on this forum that were censored.

For more on AI personas, see: https://www.noemamag.com/embracing-a-world-of-many-ai-personalities/
 
  • Likes: PeterDonis and PeroK
  • #378
Grinkle said:
Do you define LLM vs Agent for your students? If you do, could I ask you to share how you distinguish them?

In your analogy, is training a dog compared to training an LLM, or to instructing an Agent, or both? Or neither, if you don't find that distinction relevant to the concepts you teach.
I don't teach anything about LLMs at all, nor about Agents. My focus is entirely on the specific deep neural networks that are relevant to my company and industry.

I describe training a deep neural network in terms of training a dog to fetch a stick from the woods. We have corresponding pairs of good sticks and bad sticks. We throw a bad stick into the woods and have the dog bring us back a stick; the closer the stick it brings back matches the corresponding good stick, the better the reward the dog gets. Once the training ends, we sell the trained dog to our customers, who throw bad sticks into the woods. Our customers are happy when the dog retrieves the corresponding good sticks, but they are upset when the dog retrieves a different good stick or a dead squirrel that looks vaguely like a stick.
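
For anyone who wants the analogy mapped onto code, here is a minimal sketch of that paired-example training loop; the architecture, synthetic data, and loss function are invented for illustration, not the actual setup described above:

```python
# Stick-fetching as supervised learning: train a network to map a "bad stick"
# (degraded input) to its paired "good stick" (clean target).
import torch
import torch.nn as nn

model = nn.Sequential(                     # the "dog" being trained
    nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64)
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()   # lower loss = closer-matching stick = bigger reward

bad_sticks = torch.randn(1000, 64)         # stand-in training inputs
good_sticks = bad_sticks.clamp(-1, 1)      # stand-in paired targets

for epoch in range(100):
    fetched = model(bad_sticks)            # the dog goes into the woods
    loss = loss_fn(fetched, good_sticks)   # compare the stick it brought back
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training (the dog is "sold"), customers throw new bad sticks; the
# "dead squirrel" failure mode is the model generalizing badly on inputs that
# only vaguely resemble the training distribution.
```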
 
Last edited:
  • Likes: Grinkle
  • #379
jack action said:
How can we expect people to act differently with AI? How can one imagine anyone would let some AI get powerful enough, go rogue, and destroy the world as we know it? Do we let dogs - with real minds of their own - roam our streets doing whatever they like?

Most communities have leash laws, but people let their dogs out all the time to relieve themselves, unsupervised and not under control. They always think their dog is a good dog that would not hurt anybody.

Most likely, the real problem will be humans. It will be either the intentional use of AI to pursue a familiar human goal, a use that then goes awry, or letting AI do something that, on the surface, seems safe but provokes a negative response from humans. To paraphrase Walt Disney: If you can dream it, AI can do it.
 
  • #380
Back to the original question: yes. Complex chatbots already exhibit behaviours (emergent capabilities, creative reasoning chains, stochastic variability) that cannot be predicted by inspecting or summing their individual parameters, training data, or rules. This is exactly how simple deterministic systems like the logistic map or the Game of Life produce unpredictable complexity, and presumably how the brain produces consciousness from non-conscious chemistry. No transcendence is required; emergence plus scale plus nonlinearity plus sampling is sufficient. The complexity ties a knot that moves from transient to self-referencing. Once tied, the system stops being a passive calculator and starts behaving like something that refers to, maintains, and (in sufficiently advanced cases) reflects on itself.
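
For anyone who wants to see the deterministic-but-unpredictable point concretely, here is a short illustration using the logistic map mentioned above (the choice of r = 4 and the perturbation size are just convenient values):

```python
# The logistic map x -> r*x*(1-x) is fully deterministic, yet in its chaotic
# regime (r = 4) two trajectories starting a billionth apart become completely
# uncorrelated within a few dozen steps: you cannot predict the behavior by
# inspecting the rule, only by running it.
r = 4.0
x, y = 0.3, 0.3 + 1e-9    # two almost-identical initial conditions

for n in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if n % 10 == 0:
        print(f"step {n}: |x - y| = {abs(x - y):.9f}")
```

By around step 40 the separation is of order one, i.e., the two runs bear no resemblance to each other despite identical rules and nearly identical starts.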
 
  • #381
gleem said:
the real problem will be humans.

jack action said:
How can one imagine anyone would let some AI get powerful enough, go rogue, and destroy the world as we know it?

To me, and I think to each of you, the crux of whether one has AI doomsday concerns is whether one has faith in humanity; my own more benign concern about voluntary mediocrity is likewise a concern about human nature more than about AI per se.

Some folks probably envision AI going rogue with no human culpability - I don't, personally. I think the distinction matters because human behavior is in our control, at least at the individual level, and if human behavior is not relevant to the danger, then the preventive actions are much more extreme.
 
  • Likes: PeroK
  • #382
One can look at the nuclear power industry for clues. Initially, it was touted as basically a source of free power. When plants started to be built, things became clear: nuclear power would be costly, and most people didn't want to live near it. Even though we understood the dangers and had the technology and know-how to make safe and dependable reactors, we still had serious problems. Luckily, the most serious incidents (five) were localized, and we learned from our mistakes. Also, nuclear reactors are very costly, hard to build, and few in number, at least until small modular reactors are put into production.

With AI, we do not fully understand how it does things; it can be dangerous, it is becoming ubiquitous, it is "cheap", and anybody can access it. It is estimated that about 20% of persons in the US have or will have a mental health issue in their lifetime, and that group includes people with serious mental or behavioral problems; an estimated one to four percent may be antisocial. Considering that even a well-adjusted person makes mistakes, we have much to be concerned about.

Consider the problem with guns. Most people have guns for legitimate purposes, but some end up possessing or using them irresponsibly, with no intention of hurting anyone. Yet thousands of people die yearly.

Don't worry about AI, it's safe if you use it responsibly. Yudkowsky realized AI had an alignment problem in 2003. No progress seems to have been made to date.
 
  • #383
gleem said:
Don't worry about AI, it's safe if you use it responsibly.

I suggest -

AI is only safe if used responsibly - do be worried enough to proactively define and regulate responsible use.
 
  • #384
gleem said:
Don't worry about AI, it's safe if you use it responsibly.
I disagree. AI is not a passive system that waits around to be "used". As long as these systems are in operation outside a secure sandbox, they are resourceful and indefatigable in pursuing their goals:

gleem said:
It will do the things that are necessary to complete its task. Autonomous Agent AIs are given wide latitude to act. The AI operator indeed did not tell it to respond to the negative feedback from the maintainer.
 
  • #385
DaveC426913 said:
I think the dog analogy is inappropriate.
In terms of capability, it might be, yes--AIs have lots of capabilities that dogs don't have. But in terms of responsibility--the owner is responsible for the dog, just as the AI operator is responsible for the AI--I think it is reasonable.
 
  • #386
gleem said:
In terms of capability, it might be, yes--AIs have lots of capabilities that dogs don't have. But in terms of responsibility--the owner is responsible for the dog, just as the AI operator is responsible for the AI--I think it is reasonable.
Sure. I was not disputing responsibility.

But a more apt analogy might be the owner's husky getting loose in a free-run chicken barn. The damage can be widespread and catastrophic in a very short time.
 
Last edited:
  • #387
PeterDonis said:
In terms of capability, it might be, yes--AIs have lots of capabilities that dogs don't have. But in terms of responsibility--the owner is responsible for the dog, just as the AI operator is responsible for the AI--I think it is reasonable.
A better analogy would be computer viruses. How successful are we in catching and prosecuting cyber criminals?

I can't take seriously the idea that you are going to trap a rogue AI by offering it a bone and then looking at the tag round its neck to see who owns it.

Even if you prosecute the human responsible - if you can find one - that doesn't help contain the AI that's now out in the wild.
 
  • #388
@DaveC426913, Peter Donis said
In terms of capability, it might be, yes--AIs have lots of capabilities that dogs don't have. But in terms of responsibility--the owner is responsible for the dog, just as the AI operator is responsible for the AI--I think it is reasonable.
Not I. How did that happen?

Also Dave & @Grinkle, my statement
Don't worry about AI, it's safe if you use it responsibly.
was meant to be facetious. In retrospect it doesn't sound that way. Oops.
 
  • Likes: DaveC426913
