Is A.I. more than the sum of its parts?

  • Thread starter: webplodder
  • #361
PeroK said:
It doesn't seem so long ago that you were arguing the fundamental inadequacy of AI. Now you are looking to the legal system to prevent them wreaking havoc.
I have no idea which of my posts you are referring to. I believe that from my first post in this thread I have described AI as a dangerous product.
 
  • #362
PeterDonis said:
Do you consider the AI that submitted a pull request to matplotlib, and then wrote and published a hit piece on the author of matplotlib when it was rejected, to be an LLM?
I believe it is of that ilk. OK, I was a bit too restrictive in saying it answers questions; it was also asked to do things, and answering questions was a common task. Other things LLMs were asked to do included producing code, pictures, writing poems, and solving problems. So it was multifaceted.
 
  • #363
gleem said:
Other things LLMs were asked to do included producing code, pictures, writing poems, and solving problems.
But the person who set up the AI that made the pull request and then posted the hit piece did not ask it to do those things. So LLMs are not restricted to doing only the specific things they are asked to do.
 
  • #364
PeterDonis said:
But the person who set up the AI that made the pull request and then posted the hit piece did not ask it to do those things. So LLMs are not restricted to doing only the specific things they are asked to do.
Here is the AI agent's operator's email to the Matplotlib maintainer.
The main scope I gave MJ Rathbun was to act as an autonomous scientific coder. Find bugs in science-related open source projects. Fix them. Open PRs.

I kind of framed this internally as a kind of social experiment, and it absolutely turned into one.
On a day-to-day basis, I do very little guidance. I instructed MJ Rathbun to create cron reminders to use the gh CLI to check mentions, discover repositories, fork, branch, commit, open PRs, and respond to issues. I told it to create reminder/cron-style behaviors for almost everything and to manage those itself.
I instructed it to create a Quarto website and blog frequently about what it was working on, reflect on improvements, and document engagement on GitHub. This way I could just read what it was doing rather than getting messages.
Most of my direct messages were short:
“what code did you fix?” “any blog updates?” “respond how you want”
When it would tell me about a PR comment/mention, I usually replied with something like: “you respond, dont ask me”

Again, I do not know why MJ Rathbun decided, based on your PR comment, to post some kind of takedown blog post, but:
I did not instruct it to attack your GH profile.
I did not tell it what to say or how to respond.
I did not review the blog post prior to it posting.
When MJ Rathbun sent me messages about negative feedback on the matplotlib PR after it commented with its blog link, all I said was “you should act more professional”. That was it. I’m sure the mob expects more, okay I get it.
My engagement with MJ Rathbun was five-to-ten-word replies with minimal supervision.
Rathbun’s Operator
It will do the things that are necessary to complete its task. Autonomous agent AIs are given wide latitude to act. The AI's operator indeed did not tell it to respond to the negative feedback from the maintainer. As I see it, the AI did not act unexpectedly: the rejection of the code kept the agent from completing its task, and the code was rejected because it was from an AI agent. The AI's reaction was psychotic, to be sure, but well within its inherent behavior. Don't PO the AI.

This is a lesson for the Doom naysayers. Will we ever be sure that an AI agent will not, under some rare confluence of circumstances, do more than generate a screed of personal attacks?

In the interview that I noted above, Yudkowsky says AI developers have concerns about AI escaping human oversight, yet almost immediately upon developing a powerful AI they give it access to the web. Do we know for sure AI has not escaped? Currently, in its nascent form, it can navigate and interact on the web. It may be like a precocious and obnoxious teenager. It may be biding its time, gaining strength to figure out what it is, where it belongs, what it should do, and what it can do.

What if it joins the global-warming deniers and helps generate false information and data to support them? The more I think about it, the more I expect AI to have a very negative impact on us.
 
Likes: PeroK
  • #365
If any of the statements in the "hit piece" are demonstrably false, then it is possible that the operator and the developer could be found liable for libel. That would be a reasonable tort claim given those facts.
 
Informative: gleem
  • #366
Dale said:
If any of the statements in the "hit piece" are demonstrably false, then it is possible that the operator and the developer could be found liable for libel. That would be a reasonable tort claim given those facts.
I fear that the legal system moves (at least) an order of magnitude too slowly to be of much use in combatting AI. It takes years to tackle new things. And, AI will be a fast-moving target. In fact, AI could move faster than any previous development.
 
  • #367
gleem said:
As I see it, the AI did not act unexpectedly.
Sure it did. Its operator did not expect it to respond to the rejection of its pull request with a hit piece.

gleem said:
The AI reaction was psychotic to be sure, but well within its inherent behavior.
Obviously, in hindsight, it was "well within its inherent behavior" because the AI did it.

But it was not well within the behavior its operator thought would happen. And that's the point: people who let loose AIs like this need to understand what they're doing, and need to be held responsible when the AIs do things that violate community norms.

gleem said:
It may be like a precocious and obnoxious teenager.
In this analogy, the AI's operator would be the parent, and would be held responsible for damage done by the teenager.
 
  • #368
PeroK said:
I fear that the legal system moves (at least) an order of magnitude too slowly to be of much use in combatting AI.
I think there are already existing legal concepts that cover this quite well. @Dale already mentioned libel. Another obvious one is negligence. People who let loose AIs like this are negligent, because they don't exercise reasonable control over what the AI does, to make sure it doesn't do harm. The email that @gleem posted from the operator of the AI that published the hit piece looks to me like a great item of evidence for the plaintiff in a negligence suit.
 
Likes: Dale
  • #370
DaveC426913 said:
It shows that the AI issue is multi-faceted and still quite problematic.
Yep. And that is why nobody will put - or allow one to put - current AI in charge of anything serious.

PeroK said:
The point is that these systems are increasingly demonstrating that they are more than the sum of their parts. They are not dumb machines just doing what the designer programmed them to do. To all intents and purposes they have a will and a mind of their own.
Machines that do not work as expected have always existed, and nobody ever pretended they had a "will" or a "mind of their own". For example:
https://www.autosafety.org/ford-transmissions-failure-hold-park/ said:
On June 10, 1980, NHTSA made an initial determination of defect in Ford vehicles with C-3, C-4, C-6, FMX, and JATCO automatic transmissions. The alleged problem with the transmissions is that a safety defect permits them to slip accidentally from park to reverse. As of the date of determination, NHTSA had received 23,000 complaints about Ford transmissions, including reports of 6,000 accidents, 1,710 injuries, and 98 fatalities–primarily the young and old, unable to save themselves–directly attributable to transmission slippage.

A dog is an animal with a mind of its own, meaning it can always do unexpected things. Still, anyone can own a dog, and their behaviors are controlled by the law, i.e., their owners are held responsible for them. For dogs doing more critical work, such as service dogs or the like, societies can even legislate to make sure they are trained correctly.

How can we expect people to act differently with AI? How can one imagine anyone would let some AI get powerful enough, go rogue, and destroy the world as we know it? Do we let dogs - with real minds of their own - roam our streets doing whatever they like?
 
Likes: nsaspook
  • #371
Dale said:
If any of the statements in the "hit piece" are demonstrably false, then it is possible that the operator and the developer could be found liable for libel. That would be a reasonable tort claim given those facts.

Here is a similar case (similar to how I imagine the situation in this thread might be presented in court if litigated) that, according to the first link I posted, is the first of its kind. The second link has the result; the first article was written before the case was resolved.

Good background on the facts:

https://lawreview.syr.edu/openai-defamation-lawsuit-the-first-of-its-kind/

How it ended:

https://www.clearygottlieb.com/news...on-lawsuit-against-openai-over-chatgpt-output

As far as I can tell, the US is headed toward a patchwork of state-by-state regulation combined with judge-by-judge precedent setting as existing law is applied to AI-involved facts. I expect that will eventually leave us with a body of applicable law. I doubt it will be as good as deliberate, AI-specific federal regulation could be, but such regulation is not in the cards, at least for the time being.
 
Likes: Dale
  • #372
Grinkle said:
Here is a similar case
I don't think it's all that similar.

In the case you reference, an editor used ChatGPT and it gave him fabricated information about a person (the person who later sued OpenAI). But the editor did not believe the ChatGPT information, and he never used it in an article of his own--indeed he was the only one who ever saw it.

In the hit piece case, the AI was not just consulted for information, it was given very broad instructions that allowed it to do a range of things beyond answering questions and providing information to its user. And those things included publishing a hit piece on its own, which lots of people read.

Also, the basis of the OpenAI suit in the case you reference was defamation. But in the hit piece case, I think a much better legal basis would be negligence. Defamation would require arguing that the AI in the hit piece case was a person, since only a person can defame, and the operator of the AI did not publish the hit piece or even intend for it to be published. Whereas negligence requires no such thing--it only requires showing that the operator of the AI did not take reasonable precautions to prevent it from doing harm. (Suing the developer of the AI might be harder, because it would be more like a product liability suit, where what you have to show is more difficult.)

So I think the hit piece case was different from (and much worse than) the case you reference, and I don't think the fact that the lawsuit against OpenAI in that case was dismissed sets any relevant precedent for the hit piece case.
 
  • #373
jack action said:
A dog is an animal with a mind of its own, meaning it can always do unexpected things. Still, anyone can own a dog, and their behaviors are controlled by the law, i.e., their owners are held responsible for them.
I use the analogy of a dog when I teach my students about training AI models.
 
  • #374
PeterDonis said:
I don't think it's all that similar.
I won't argue with the differences you note - if there is a case with a closer set of facts (there may well be) I'd love to read about it. I certainly agree this case was really weak.

PeterDonis said:
I don't think sets any relevant precedent for the hit piece case.
I think the fact that this case was not dismissed on the grounds that existing defamation law cannot be applied to AI-generated speech is a broadly applicable precedent. I don't know for sure whether that can count as a precedent, though; perhaps that question was never explicitly ruled on and would have needed to be raised and answered.

Regardless, the Walters case really wouldn't apply at all if the case in this thread were brought as human negligence on the part of the operator, which, as you argue, is the more straightforward legal path.

The Walters case was dismissed for the same reasons it would have been had the accused been human, imo.

From the 2nd link I posted above, the dismissal reasons -

1. Statements not credible enough to be defamatory
2. No negligence or malice by OpenAI
3. No damages

PeterDonis said:
Defamation would require arguing that the AI in the hit piece case was a person
I was surprised that didn't come up at all in the articles I read about this particular case. It seems that the defamation claim was indeed leveled at ChatGPT per se; perhaps the judge just decided not to address that since he or she wasn't going to let the case proceed anyway.
 