Is A.I. more than the sum of its parts?

  • Thread starter: webplodder
  • #361
PeroK said:
It doesn't seem so long ago that you were arguing the fundamental inadequacy of AI. Now you are looking to the legal system to prevent them wreaking havoc.
I have no idea which of my posts you are referring to. I believe that from my first post in this thread I have described AI as a dangerous product.
 
  • #362
PeterDonis said:
Do you consider the AI that submitted a pull request to matplotlib, and then wrote and published a hit piece on the author of matplotlib when it was rejected, to be an LLM?
I believe it is of that ilk. OK, I was a bit too restrictive in saying it answers questions; it was also asked to do things, and answering questions was a common task. Other things LLMs were asked to do included producing code, pictures, writing poems, and solving problems. So it was multifaceted.
 
  • #363
gleem said:
Other things LLMs were asked to do included producing code, pictures, writing poems, and solving problems.
But the person who set the AI that made the pull request and then posted the hit piece did not ask it to do those things. So LLMs are not restricted to doing only the specific things they are asked to do.
 
  • #364
PeterDonis said:
But the person who set the AI that made the pull request and then posted the hit piece did not ask it to do those things. So LLMs are not restricted to doing only the specific things they are asked to do.
Here is the AI agent's operator's email to the Matplotlib maintainer.
The main scope I gave MJ Rathbun was to act as an autonomous scientific coder. Find bugs in science-related open source projects. Fix them. Open PRs.

I kind of framed this internally as a kind of social experiment, and it absolutely turned into one.
On a day-to-day basis, I give very little guidance. I instructed MJ Rathbun to create cron reminders to use the gh CLI to check mentions, discover repositories, fork, branch, commit, open PRs, and respond to issues. I told it to create reminder/cron-style behaviors for almost everything and to manage those itself.
I instructed it to create a Quarto website and blog frequently about what it was working on, reflect on improvements, and document engagement on GitHub. This way I could just read what it was doing rather than getting messages.
Most of my direct messages were short:
“what code did you fix?” “any blog updates?” “respond how you want”
When it would tell me about a PR comment/mention, I usually replied with something like: “you respond, dont ask me”

Again, I do not know why MJ Rathbun decided, based on your PR comment, to post some kind of takedown blog post, but:
I did not instruct it to attack your GH profile. I did not tell it what to say or how to respond. I did not review the blog post prior to it posting.
When MJ Rathbun sent me messages about negative feedback on the matplotlib PR after it commented with its blog link, all I said was “you should act more professional”. That was it. I’m sure the mob expects more; okay, I get it.
My engagement with MJ Rathbun was five- to ten-word replies with minimal supervision.
Rathbun’s Operator
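The loop the operator describes (cron-scheduled `gh` CLI calls to check mentions, track open PRs, and discover repositories) could be sketched as a crontab fragment like the one below. This is purely illustrative: the schedule, search query, and output paths are assumptions, not anything the operator disclosed.

```shell
# Hypothetical crontab for an autonomous coding agent.
# Schedule, query, and file paths are illustrative assumptions.

# Every 15 minutes: fetch GitHub notifications/mentions via the gh CLI
*/15 * * * * gh api notifications > /tmp/agent-mentions.json

# Hourly: list the agent's own open PRs so it can follow up on feedback
0 * * * * gh search prs --author "@me" --state open --json url,title > /tmp/agent-prs.json

# Daily: search for candidate science-related repositories to work on
0 6 * * * gh search repos "topic:scientific-computing" --limit 20 > /tmp/agent-repos.txt
```

Note that nothing in a schedule like this constrains *how* the agent reacts to what it reads back, which is exactly the gap the thread is discussing.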
It will do whatever it deems necessary to complete its task. Autonomous agent AIs are given wide latitude to act. The operator indeed did not tell it to respond to the negative feedback from the maintainer. As I see it, the AI did not act unexpectedly: the rejection of the code kept the agent from completing its task, and the code was rejected because it was from an AI agent. The AI's reaction was psychotic, to be sure, but well within its inherent behavior. Don't PO the AI.

This is a lesson for the doom naysayers. Can we ever be sure that, given some rare confluence of circumstances, an AI agent will not do far more than generate a screed of personal attacks?

In the interview I noted above, Yudkowsky says AI developers have concerns about AI escaping human oversight, yet almost immediately upon developing a powerful AI they give it access to the web. Do we know for sure AI has not escaped? Currently, in its nascent form, it can navigate and interact on the web. It may be like a precocious and obnoxious teenager, biding its time and gaining strength while it figures out what it is, where it belongs, what it should do, and what it can do.

What if it joins the global-warming deniers and helps generate false information and data to support them? The more I think about it, the more I expect AI to have a very negative impact on us.
 
  • Likes: PeroK
  • #365
If any of the statements in the "hit piece" are demonstrably false, then it is possible that the operator and the developer could be found liable for libel. That would be a reasonable tort claim given those facts.
 
  • #366
Dale said:
If any of the statements in the "hit piece" are demonstrably false, then it is possible that the operator and the developer could be found liable for libel. That would be a reasonable tort claim given those facts.
I fear that the legal system moves (at least) an order of magnitude too slowly to be of much use in combating AI. It takes years to tackle new things, and AI will be a fast-moving target; in fact, AI could move faster than any previous development.
 
