How far will we let AI control us?

Thread summary:
AI has become an integral part of daily life, influencing creativity and decision-making across various fields, including STEM. Concerns are rising that reliance on AI may diminish human ingenuity, as people increasingly turn to AI for generating ideas and content. The discussion highlights the potential dangers of AI, including its rapid development and the risk of it overshadowing human creativity. While some argue that AI can enhance artistic processes, others fear it may lead to a loss of original thought and critical thinking. Ultimately, the conversation revolves around finding a balance between utilizing AI as a tool and maintaining human creativity and autonomy.
  • #61
PeroK said:
On one thread, people are complaining that it "hallucinates". I.e. makes stuff up.
"makes stuff up" is imprecise. Every token is selected from a probability distribution. Depending on settings, it will select the token with the highest probability of confidence. OpenAI admitted that hallucinations are a feature not a bug. Rather than return with "we don't know", they will accept a low confidence probability for the next token to output. hallucinations are most times when the output should have been "we don't know", but instead an output is forced that it knows is wrong. that is not creativity.
 
  • Like
  • Skeptical
Likes javisot, PeroK and russ_watters
  • #62
DaveC426913 said:
A case, yes. I am not of that mind.

The example is a biased scenario* - the fact that the pictures are already chosen and available for you to view. Let me 'splain.

*not your fault. I'm just saying it's not the whole picture


No, but I get it. It is a discussion I often have with friends. I tend to side with "No she couldn't."



Consider:

It is missing the point to look at one or two pieces of paper that a baby has made a mess on.

It is not the baby who is imbuing the piece with import or value or message. It is the curator of the pieces who does so (whoever that might be - a parent, an agent, whatever).

Nobody goes to a friend's home to entertain themselves with a baby's works of art. Nobody is going to an art show featuring a hundred pieces that indiscriminately include every piece a baby has made. (Just like nobody is going to read every iteration of pages that a million monkeys churn out on a million typewriters.)

By the time we see them, they are already curated. A signal has been found among the noise.

Those one or two pages that the baby made a mess on were chosen - out of hundreds - because the curator thought they saw a relatable message embodied in them.

The act of choosing - separating what makes a new, interesting message from what does not - is where the deliberateness and creativity lie.
If you are using modern art to demonstrate that humans can be creative where a machine cannot, then you will struggle to find objective criteria on which to base any scientifically viable case. Modern art generally cannot be judged by objective criteria. The ultimate example, perhaps, is John Cage's 4′33″, a piece consisting of four minutes and 33 seconds of silence.

This whole subject is so subjective that I can't see how you are going to draw any solid conclusions from it.

This is what Google AI says:

The question of whether Jackson Pollock was a charlatan is a long-standing debate in the art world, with strong arguments on both sides.
 
  • #63
PeroK said:
Have you ever seen the documentary "My Daughter Could Paint That"?
DaveC426913 said:
No, but I get it. It is a discussion I often have with friends. I tend to side with "No she couldn't."
You've jumped to a false conclusion here. The documentary is about Marla Olmstead, who was a successful abstract artist by the age of four:

https://en.wikipedia.org/wiki/Marla_Olmstead

Apologies, but I got the title of the documentary slightly wrong. It was "kid", not "daughter":

https://en.wikipedia.org/wiki/My_Kid_Could_Paint_That

It's fascinating even if a bit off topic.
 
  • Informative
Likes DaveC426913
  • #64
Greg Bernhardt said:
"makes stuff up" is imprecise. Every token is selected from a probability distribution. Depending on settings, it will select the token with the highest probability of confidence. OpenAI admitted that hallucinations are a feature not a bug. Rather than return with "we don't know", they will accept a low confidence probability for the next token to output. hallucinations are most times when the output should have been "we don't know", but instead an output is forced that it knows is wrong. that is not creativity.
This is an a priori argument that "creativity is what a human does". If you define creativity this way, then machines cannot be creative by definition.

Picasso's brain must have followed some process. It must have been neurons firing deterministically. By that logic, that is not creativity either.
 
  • #65
PeroK said:
If you are using modern art to demonstrate that humans can be creative where a machine cannot, then you will struggle to find objective criteria on which to base any scientifically viable case. Modern art generally cannot be judged by objective criteria. The ultimate example, perhaps, is John Cage's 4′33″, a piece consisting of four minutes and 33 seconds of silence.

This whole subject is so subjective that I can't see how you are going to draw any solid conclusions from it.
Agreed.

But just because I don't agree with a verdict that X is art, does not mean the signal - the message - is not there for you.

A 4′33″ recording of silence did have a meaning. The artist knew it would cause (at least some) people to think about what those 273 seconds are normally filled with, or what they might be filled with, or what happens when you remove what they are filled with, and what you end up filling them with in its absence.

Whether or not you think that message is enlightening is not the point - you might think it's frivolous to think about silence where we expect sound - but that doesn't mean it's not the artist's conscious attempt to make you think in a new way.

PeroK said:
The question of whether Jackson Pollock was a charlatan is a long-standing debate in the art world, with strong arguments on both sides.
Yes.

Except the fact that we know who he is and are still debating it is, itself, an indication that it has made us think and debate about what it means to be art, which is one of the goals.

Personally, I think the people who regard Pollock - and indeed, any abstract art - as a sham are missing the point. I think they fear being "duped" and being "separated from their money". It is easier to be cynical and reject something that might make you sound like a fool than to just look at the work and ask "How does this make me feel?" That requires courage and openness.
 
  • #66
PeroK said:
This is an a priori argument that "creativity is what a human does". If you define creativity this way, then machines cannot be creative by definition.
I'm not convinced that's what Greg was saying.

PeroK said:
Picasso's brain must have followed some process. It must have been neurons firing deterministically. By that logic, that is not creativity either.
I'm not convinced that's a valid example that supports your point, above.

"Process" is a bit of a weasel word there. Not all processes are equal. One process can be rooted in slavish copying, whereas another process can involve hallucinogenic drugs and dream states.
 
  • #67
DaveC426913 said:
"Process" is a bit of a weasel word there. Not all processes are equal.
Exactly. Some processes are carbon-based; some are silicon-based. They are not equivalent. If your objective criterion is how something original is produced, then a priori computers cannot be intelligent, creative, or understand anything, because intelligence, creativity, and understanding are then exclusively carbon-based (human) processes.

But, if you define intelligence, creativity, and understanding in terms of input/output and capability (rather than tying them entirely to a priori human processes), then it's a lot less clear that computers have no intelligence or creativity and can understand nothing.

All your arguments are a priori, because by your definition there is nothing a computer can do that counts as intelligence, creativity, or understanding.
 
  • #68
PeroK said:
Define "truly novel".

Hah! You're impossible! Debating with you is like asking to be sealioned! :woot:
 
  • #69
DaveC426913 said:
Personally, I think the people who regard Pollock - and indeed, any abstract art - as a sham are missing the point. I think they fear being "duped" and being "separated from their money". It is easier to be cynical and reject something that might make you sound like a fool than to just look at the work and ask "How does this make me feel?" That requires courage and openness.
This is another a priori argument. Listen to what you are saying:

I, Dave C, have certain opinions about abstract art. These opinions are essentially objectively correct. Anyone who disagrees is objectively wrong. They are missing the point. Anyone who disagrees with me suffers from "fear"; they are "cynical"; they lack "courage" and "openness".

These are not credible arguments.
 
  • #70
sbrothy said:
Hah! You're impossible! Debating with you is like asking to be sealioned! :woot:
Not at all. I think there are aspects of human thought that are objectively different from LLMs. The problem is that it's not a universal and fundamental difference.

I see the attempts on this thread (and previous similar threads) to settle the debate by a priori definitions and assumptions that exclude computer processes, so that there is nothing to debate.

I see the whole question of intelligence, creativity and understanding as fundamentally more subtle. It's not clear that a poet, for example, doesn't do something analogous to what an LLM does. You can wave your arms and invoke metaphysical arguments. But, ultimately, a poet's brain is made of neurons; and neurons follow set patterns and must follow algorithmic processes at some level.

Creativity must be an emergent property of algorithmic processes of some description. Likewise, all our consciousness, self-awareness and intelligence must emerge from biological processes. A single cell cannot be self-aware. These properties cannot be inherent in the lowest-level biological processes.

In my view, there is a very serious open question of how close LLMs are to developing these emergent characteristics. The argument that they are not capable of developing any higher-level characteristics does not convince me. Until you can prove what the minimum set of requirements for human-like intelligence is, you cannot say for sure that LLMs are lacking. Or, more to the point (since LLMs are almost certainly lacking something of the human spark), you cannot say to what extent they are capable of human characteristics and to what extent they are not.

It's my understanding that the concern that they are capable of developing higher-level emergent characteristics (like an instinct for self-preservation) is central to current research - especially in terms of safeguards against this.

If those who have taken the counter argument on this thread are correct, then there would be no need for any AI safeguards in this respect. AI could be developed as far as possible at this stage, with no risk whatsoever that they could act against human interests. They would simply be dumb machines, slavishly producing output of some description.

I've said this on previous threads. We at PF are supposed to be guided by the professional scientific literature. In this respect, what I am saying is in line with the scientific literature. And, what the others are saying is that they prefer their own personal theories and that in some sense the "experts" (Geoffrey Hinton et al) are all wrong.

With that, I don't intend to post any further on this thread. No disrespect to anyone, but I've said what I think.
 
  • Like
Likes Filip Larsen
  • #71
PeroK said:
Exactly. Some processes are carbon-based; some are silicon-based. They are not equivalent. If your objective criterion is how something original is produced, then a priori computers cannot be intelligent, creative, or understand anything, because intelligence, creativity, and understanding are then exclusively carbon-based (human) processes.

But, if you define intelligence, creativity, and understanding in terms of input/output and capability (rather than tying them entirely to a priori human processes), then it's a lot less clear that computers have no intelligence or creativity and can understand nothing.

All your arguments are a priori, because by your definition there is nothing a computer can do that counts as intelligence, creativity, or understanding.
But who is doing that?

I don't think anyone has stated that, prescriptively, only human processes can be creative. I think what everyone is saying is that it just so happens the processes AI uses don't meet the criteria for creativity.

It's descriptive, not prescriptive.

You are drawing the a priori conclusion that we are saying artificial == not creative, and nobody is saying that.

It's more like artificial = uncomprehending, deterministic algorithm that predicts the next data bit = not creative.

If, for example, AI could comprehend what it is saying, then yes, it could certainly be creative.
 
  • #72
PeroK said:
Not at all. I think there are aspects of human thought that are objectively different from LLMs. The problem is that it's not a universal and fundamental difference.

I see the attempts on this thread (and previous similar threads) to settle the debate by a priori definitions and assumptions that exclude computer processes, so that there is nothing to debate.

I see the whole question of intelligence, creativity and understanding as fundamentally more subtle. It's not clear that a poet, for example, doesn't do something analogous to what an LLM does. You can wave your arms and invoke metaphysical arguments. But, ultimately, a poet's brain is made of neurons; and neurons follow set patterns and must follow algorithmic processes at some level.

Creativity must be an emergent property of algorithmic processes of some description. Likewise, all our consciousness, self-awareness and intelligence must emerge from biological processes. A single cell cannot be self-aware. These properties cannot be inherent in the lowest-level biological processes.

In my view, there is a very serious open question of how close LLMs are to developing these emergent characteristics. The argument that they are not capable of developing any higher-level characteristics does not convince me. Until you can prove what the minimum set of requirements for human-like intelligence is, you cannot say for sure that LLMs are lacking. Or, more to the point (since LLMs are almost certainly lacking something of the human spark), you cannot say to what extent they are capable of human characteristics and to what extent they are not.

It's my understanding that the concern that they are capable of developing higher-level emergent characteristics (like an instinct for self-preservation) is central to current research - especially in terms of safeguards against this.

If those who have taken the counter argument on this thread are correct, then there would be no need for any AI safeguards in this respect. AI could be developed as far as possible at this stage, with no risk whatsoever that they could act against human interests. They would simply be dumb machines, slavishly producing output of some description.

I've said this on previous threads. We at PF are supposed to be guided by the professional scientific literature. In this respect, what I am saying is in line with the scientific literature. And, what the others are saying is that they prefer their own personal theories and that in some sense the "experts" (Geoffrey Hinton et al) are all wrong.

With that, I don't intend to post any further on this thread. No disrespect to anyone, but I've said what I think.

I'm confident that you understand that I was joking. From the amount of work you put into your posts alone, it seems obvious you're not trolling anyone (at least not on purpose o0)).

Also, you may be correct that my understanding of LLMs is superficial (re: one of your first responses to me), but I'm pretty sure I haven't voiced a lot of opinions except that AI isn't strong yet. Whether it ever will be I'm not sure anyone can say.

I'd like to see it though.
 
  • #73
A random generator, given enough time to work, will randomly generate a pre-existing novel (let's say the Bible) or even a completely new one. Can we say that this random generator is intelligent and creative?

Obviously not, yet the paradox arises when it produces a genuinely new novel. In that case, creating something new is not a guarantee of creativity.
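
For a sense of the scale involved, a back-of-the-envelope sketch (the 27-character alphabet and the million-character length are illustrative assumptions):

Python:
import math

# Probability that a uniform random generator over 27 characters
# (26 letters plus space) emits one specific million-character text
# in a single attempt: (1/27)**1_000_000. Far too small for a float,
# so compute the base-10 logarithm instead.
log10_p = -1_000_000 * math.log10(27)
print(f"p is roughly 10**({log10_p:.0f})")  # about 10**(-1431364)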
 
  • #74
We argue against AI's creativity because we see how it functions. We argue for human creativity, but we do not know how it functions. So they must be different?

In one of my previous posts, I gave a reference to a study that showed an LLM could understand human emotions better than humans in a written test. In that study, they also asked ChatGPT-4 to write a similar test. The results were basically the same. The AI-generated test was judged to be at least as demanding as the human-generated one, and 88% of the AI-generated questions were deemed original and not just paraphrases of previous questions. I see this as contradicting the position that LLMs are just auto-complete algorithms or lucky guessers. But AI doesn't guess.

As far as creativity in the visual arts is concerned: is only the first artist to execute a painting in a given "school of art" creative, and not the others who work in that school?
 
  • #75
Greg Bernhardt said:
"makes stuff up" is imprecise. Every token is selected from a probability distribution. Depending on settings, it will select the token with the highest probability of confidence. OpenAI admitted that hallucinations are a feature not a bug. Rather than return with "we don't know", they will accept a low confidence probability for the next token to output. hallucinations are most times when the output should have been "we don't know", but instead an output is forced that it knows is wrong. that is not creativity.
OK, how about "outright lies"?

I have multiple accounts of the AI saying things like this*:

AI: I understand you want the first three primes that are two digits or more.
The first three primes of two digits or more are 2, 3, and 5.
You asked me for the first three primes that are two digits or more.

Me: No. Those are one digit.

AI: You are correct!
You asked for the first three primes that are two digits or more.
And those aren't it. I forgot about the 2-digit requirement.
I will revise.
You asked me for the first three primes that are two digits or more.
The first three primes of two digits or more are 2, 3, and 5.
That is the correct answer.

Me: No. Those are STILL one digit.

AI: You are correct!
The first three primes of two digits or more are 2, 3, and 5.

etc.

That's not a token prediction error; that's doubling and tripling down on an abject, explicit falsehood.

Directly contradicting oneself - claiming something is true immediately after admitting it was wrong - could be considered a hallucination.


*this is a paraphrased account, modified for simplicity, but accurate in the important ways. The real examples (at least two) involved somewhat more complex math.
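
(For reference, a quick sanity check of what the correct answer should have been - a trivial Python sketch, assuming the usual base-10 reading of "two digits or more":)

Python:
def is_prime(n):
    # Trial division; fine for small n.
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# First three primes with two or more digits, i.e. >= 10.
print([n for n in range(10, 100) if is_prime(n)][:3])  # [11, 13, 17]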
 
  • #76
What LLM are you using? When I ask ChatGPT5.1, Claude Opus 4.5, or Gemini-3-Pro, it answers with two digits the first time.

DaveC426913 said:
That's not a token prediction error; that's doubling and tripling down on an abject, explicit falsehood.
So what are you saying, it's sentient and intending to lie?
DaveC426913 said:
could be considered a hallucination
What is your technical definition of an LLM hallucination?
 
  • #77
Greg Bernhardt said:
What LLM are you using? When I ask ChatGPT5.1, Claude Opus 4.5, or Gemini-3-Pro, it answers with two digits the first time.
Sorry, that was an example for demonstration purposes.

See my Round Robin Darts Schedule and Travel Time to a Destination 20ly Away scenarios in the Chatbot Examples thread.

Greg Bernhardt said:
So what are you saying, it's sentient and intending to lie?
I am saying that, in the same breath (as it were), the chatbot acknowledged it got the answer wrong, showed what was wrong about it, said it would correct it, then made the exact same mistake, and then told me explicitly that it had fixed the error and got the correct answer. Even though it was the exact same wrong answer.

If an AI makes explicit statements of truth and immediately explicitly falsifies them while claiming they are true, that seems pretty hallucinatory.

Greg Bernhardt said:
What is your technical definition of an LLM hallucination?
In this case, I would say a hallucination occurs when the AI can't even stick to a fact (a fact it created) from one sentence to the next. Literally: "In line 3, you said this was wrong. And then in line 5 you asserted it was correct. They can't both be true." It cannot tell its own reality.
 
  • #78
ChatGPT can receive more input, generate more output, and work with more languages than a calculator. Furthermore, unlike a calculator, ChatGPT isn't limited by an "error" message when the output is incorrect. The price of this is hallucinations.

Hallucinations are not a creative process. It's like building a house with an instruction manual you don't understand. The final product won't be a perfect house; it will be a hallucination.
 
  • Agree
Likes jack action
  • #79
DaveC426913 said:
Directly contradicting oneself - claiming something is true immediately after admitting it was wrong - could be considered a hallucination.
Well, given that LLMs are trained on (thus: rooted in) human-produced data, and based on my experience with the reasoning of some people, I would call that a 'good pupil', actually ... o0)

Greg Bernhardt said:
OpenAI has admitted that hallucinations are a feature, not a bug: rather than return "we don't know", the model will accept a low-confidence candidate for the next token. Hallucinations mostly occur when the output should have been "we don't know", but instead an output is forced that the model "knows" is wrong. That is not creativity.
One source of human creativity is wild association: checking on the 'low confidence' returns instead of the obvious ones.
So I would not call it creativity, but only because I think LLMs are not 'thinking': they are only mechanically 'satisfying'.
But the tingle is already there.
 
  • #80
Sounds like some AIs have anterograde amnesia.
 
