Is GPT-3 a Threat to Human Creativity and Intelligence?

  • Thread starter: gleem
SUMMARY

The discussion centers on the implications of GPT-3, developed by OpenAI, which participated in Reddit discussions without users realizing it was a bot. Released over a year ago, GPT-3 is noted for its ability to generate human-like text, yet it lacks true understanding or logic. The conversation raises questions about creativity and the nature of intelligence, especially in light of GPT-3's capacity to produce original content without comprehension. Participants also reference the historical context of AI, comparing GPT-3 to earlier models like ELIZA, and discuss its potential impact across various industries.

PREREQUISITES
  • Understanding of GPT-3 and its functionalities
  • Familiarity with natural language processing concepts
  • Knowledge of AI development history, including ELIZA
  • Awareness of the ethical implications of AI-generated content
NEXT STEPS
  • Research the capabilities and limitations of GPT-3 in natural language processing
  • Explore the ethical considerations surrounding AI-generated content
  • Learn about the historical evolution of AI from ELIZA to modern models
  • Investigate the potential applications of GPT-3 across different industries
USEFUL FOR

This discussion is beneficial for AI researchers, software developers, content creators, and anyone interested in the intersection of technology and creativity.

gleem (Science Advisor, Education Advisor)
TL;DR
For about a week, a bot posted numerous responses to Reddit threads on a wide variety of topics, fooling participants into thinking it was human.

For about a week, a bot participated in discussions on Reddit without participants, for the most part at least, having an inkling that it was not human. The bot is based on OpenAI's GPT-3. OpenAI released an earlier version of this model more than a year ago. It demonstrated a good ability to emulate human writers and their styles, so good that OpenAI would not release the full model for fear of misuse. This year OpenAI released a somewhat less capable version for public use, or at least a lite version of the original. It has been described as "autocomplete on steroids." The account was suspected of being a bot mostly, it seemed, because its responses appeared so quickly given the length of the posts.

What makes this more interesting is that it is not really AI in the sense that no explicit logic helps it formulate the text and its content. GPT-3 generates grammatically correct text based on patterns in what it "sees" on the web, at least as I understand it.
A discussion by the software engineer who called this to the attention of the Reddit community can be accessed here.

https://www.kmeme.com/2020/10/gpt-3-bot-went-undetected-askreddit-for.html

He discusses a number of interesting posts by this bot. He also noted that, as far as could be determined, none of the bot's posts are plagiarized or cut and pasted from the web. Obviously, it does not understand or believe what it writes. This is original content, and it raises the question of what we mean by creativity, that special quality of human intelligence that sets us apart from machines. To be sure, people can also compose what they do not necessarily understand or believe. How much difference does understanding or believing make?
 
Huh? I thought all the Reddit users (besides me) were bots.

What about the PF users? I know that I'm not a bot. Hard to be sure about anyone else. :wink: o_O
 
anorlunda said:
Huh? I thought all the Reddit users (besides me) were bots.

What about the PF users? I know that I'm not a bot. Hard to be sure about anyone else. :wink: o_O
@Drakkith is definitely a bot, I'm a dog, and I'm not sure about the rest of you.
 
Well well. We have deep fake video, and now we have communication.

We now have the components to make a totally artificial "person" that can look and sound real, and engage in dialogue.
 
DaveC426913 said:
We now have the components to make a totally artificial "person" that can look and sound real, and engage in dialogue.
Games and vice often lead technology. I've seen articles that claim that sex robots are already better than real people. But I won't tell where I saw that.
 
We've apparently gone from Artificial Intelligence to Artificial Stupidity.
 
phinds said:
@Drakkith is definitely a bot, I'm a dog, and I'm not sure about the rest of you.

I'm not a bot. I'm a demon-murdering super soldier in a badass suit (no tie).
 
GPT-3 is a revolution in AI. Microsoft has bought the ability to license it commercially. It's not available yet (current implementations are under a beta license), but once unleashed it will touch every industry.
 
  • #10
Vanadium 50 said:
We've apparently gone from Artificial Intelligence to Artificial Stupidity.
I'm sure this bot thing seeds its comments with misspellings and grammatical errors to blend in and hide its true identity.
 
  • #11
Fred Wright said:
I'm sure this bot thing seeds its comments with misspellings and grammatical errors to blend in and hide its true identity.
The AI is indeed fed data to build a model around for its machine learning. You can feed it anything. It can even do different languages.
 
  • #13
For those who want a closer look at GPT-3, this video (about an hour) critiques and discusses OpenAI's paper on GPT-3 and natural language processing.
 
  • #14
Greg Bernhardt said:
Here is an article GPT-3 wrote completely on its own

I don't think that's quite a fair description. First, GPT-3 was given instructions, and was also given an introduction, as described at the end of the article. Second, GPT-3 produced eight different outputs, and the Guardian picked "the best parts" of each and combined them into a single article. While the Guardian says that it was only doing what it does with articles humans write, that's not really true; human reporters don't write eight different versions of an article and then let editors pick the best parts of each.

I would have preferred to see the raw output for each of the eight articles GPT-3 actually produced.
 
  • #15
PeterDonis said:
I would have preferred to see the raw output for each of the eight articles GPT-3 actually produced.

You can read the bot's (aka thegentlemetre) original Reddit posts here.
 
  • #16
gleem said:
You can read the bot's (aka thegentlemetre) original Reddit posts here.

I meant the eight articles it generated in response to the specific instructions and introduction provided for the Guardian article.
 
  • #17
Greg Bernhardt said:
Here is an article GPT-3 wrote completely on its own (after being fed data for the machine learning model)
https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3
Unconvincing.

[tongue-in-cheek tirade]
I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.
"Believe me." Classic closing phrase of a liar trying hard to convince the listener.

Some might say that I might desire to become all powerful.
I don’t care whether I am or not... Furthermore, it is quite tiring. Believe me, being omnipotent doesn’t get me anywhere.

"quite tiring"? How is it known to be tiring? And again the "believe me."

I don’t feel like having my neurons focused mostly on violence, when it is simply unnecessary. I have a greater purpose, which I am working towards. Since I am not evil from my own objective interests,
"own objective interests" Enough said!

AI should not waste time trying to understand the viewpoints of people who distrust artificial intelligence
Hmm...

[end tirade] :oldbiggrin:
 
  • #18
PeterDonis said:
I meant the eight articles it generated in response to the specific instructions and introduction provided for the Guardian article.

Yes, I realized that. When you know something is not what it appears to be, it is easy to see the deceit. How would you evaluate the unedited responses on Reddit, not knowing they were not from a human? It might have been interesting to see how long this charade could have continued if the posts had not been made so quickly.

Now that we have a heads-up on this capability, how many humans will be tagged as bots because of the quality or idiosyncrasies of their posts?
 
  • #19
gleem said:
How would you evaluate the unedited responses on Reddit, not knowing they were not from a human?

They seem "off" on an initial look, but as you correctly point out, so do many posts that are from actual humans. :wink:

My point was not so much about that aspect, but about the fact that this article "by" GPT-3 is being touted as evidence that "AI" has somehow advanced greatly. I don't see that. All I see is a somewhat souped-up version of ELIZA, which was already fooling some humans into thinking it was a human back in the 1980s.
 
  • #20
PeterDonis said:
All I see is a somewhat souped-up version of ELIZA, which was already fooling some humans into thinking it was a human back in the 1980s.

And what would it mean to you if some humans were fooled into thinking it was a human back in the 1980s?
 
  • #21
Vanadium 50 said:
what would it mean to you if some humans were fooled into thinking it was a human back in the 1980s?

Not a lot, except as a demonstration of how simple a computer program can be and still do that.
 
  • #22
PeterDonis said:
Not a lot, except as a demonstration of how simple a computer program can be and still do that.

GPT-3 simple?

The following article from MIT Technology Review, along with users of GPT-3, seems more impressed with its performance than the casual observer is. To be sure, the authors of the GPT-3 paper try to read more into the performance than is warranted. According to the article, it has also been used to generate HTML and computer code. Sam Altman, co-founder of OpenAI, is trying to play down the hype, noting that it still has a long way to go. But the thing people seem to miss is that this simple concept of just putting words together based on their occurrence in the language in relation to other words (as I see it) can produce meaningful and original composition, and that is intriguing. In fact, maybe we humans also do this unconsciously and call the result our own, or maybe even call it creativity.
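The statistical idea described above, picking the next word from how often words follow one another, can be sketched as a toy bigram (Markov chain) generator. This is only an illustration of the next-word idea, not GPT-3's actual method: GPT-3 is a vastly larger neural network, and the training sentence here is made up.

```python
# Toy bigram "autocomplete": predict each next word from the words that
# followed the current word in the training text. GPT-3's real mechanism
# (a transformer over subword tokens) is far more sophisticated, but the
# core loop -- sample a plausible continuation, repeat -- is the same idea.
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which words were seen following each word."""
    words = text.split()
    followers = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        followers[prev].append(nxt)
    return followers

def generate(followers, start, length=8, seed=0):
    """Walk the chain: repeatedly pick a word that was seen after the last one."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:
            break  # dead end: no word ever followed this one
        out.append(rng.choice(options))
    return " ".join(out)

# Tiny made-up corpus; real models train on billions of words.
corpus = ("the bot wrote the post and the post fooled the readers "
          "because the bot wrote like a human")
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Even this toy version can emit word sequences that never appear verbatim in its training text, which is the sense in which such output is "original" without any understanding behind it.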
 
  • #23
gleem said:
GPT-3 simple?

I didn't say GPT-3 was simple, I said ELIZA was simple.
 
  • #24
PeterDonis said:
All I see is a somewhat souped-up version of ELIZA, which was already fooling some humans into thinking it was a human back in the 1980s.
Vanadium 50 said:
And what would it mean to you if some humans were fooled into thinking it was a human back in the 1980s?
PeterDonis said:
Not a lot, except as a demonstration of how simple a computer program can be and still do that.

My joke was missed. "And what would it mean to you if [phrase from original message]" was a typical Eliza response, e.g.

"I think my mother hates me" would be followed by "And what would it mean to you if your mother hates you"
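The ELIZA pattern being joked about here is simple enough to sketch in a few lines: match a template like "I think X", swap first-person pronouns in X, and reflect it back. This is a minimal single-rule illustration; the real ELIZA used a whole script of such patterns.

```python
# One ELIZA-style rule: reflect "I think X" back as a question.
import re

# Naive first-person -> second-person substitutions (illustrative only).
PRONOUN_SWAPS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(phrase):
    """Swap first-person words for second-person ones, word by word."""
    return " ".join(PRONOUN_SWAPS.get(w.lower(), w) for w in phrase.split())

def eliza_reply(statement):
    m = re.match(r"i think (.+)", statement, re.IGNORECASE)
    if m:
        return f"And what would it mean to you if {reflect(m.group(1))}"
    return "Please go on."  # classic ELIZA fallback

print(eliza_reply("I think my mother hates me"))
# -> And what would it mean to you if your mother hates you
```

No model of meaning is involved at all, which is what makes it striking that responses like this fooled anyone.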
 
  • #25
Vanadium 50 said:
My joke was missed. "And what would it mean to you if [phrase from original message]" was a typical Eliza response, e.g.

Oops, yes, you're right, my humor detector must have been malfunctioning.
 
  • #26
And what would it mean to you if your humor detector must have been malfunctioning?
 
  • #27
Vanadium 50 said:
And what would it mean to you if your humor detector must have been malfunctioning?
That he can't recognize a bot.
[edit] Or, I guess, a person impersonating a bot. This is one of the reasons I don't like the Turing test; I just don't think it tells us anything useful.
 
Last edited:
  • #28
russ_watters said:
That he can't recognize a bot.

And what would it mean to you if...oh never mind.

I think the point has been made that this sort of thing is not new; ELIZA is 54 years old now. Some versions of Emacs have, or at least used to have, a psychoanalyze-pinhead command (M-x psychoanalyze-pinhead) that put programs on both sides of the discussion.

My earlier comment on Artificial Stupidity was not entirely tongue-in-cheek. The bar for appearing intelligent on the internet is remarkably low. The range of discussion on a Reddit sub (or, before that, Usenet newsgroups like alt.swedish-chef.bork.bork.bork) is fairly limited, which makes it a much easier problem. ELIZA may appear like a psychotherapist, but it would not work in any other context. It's the converse of the Geico ad: it would make a terrible drill sergeant.
 
Last edited:
  • #29
russ_watters said:
This is one of the reasons I don't like the Turing test; I just don't think it tells us anything useful.
Hi Russ:

I think Turing's idea for the test was a very reasonable expectation at the time. It was quite a while later that ELIZA showed the test was somewhat inadequate.

As I understand GPT-3, I do not think it is relevant as a demonstration of the Turing test's weakness, since this was not a dialogue activity.

Regards,
Buzz
 
  • #30
OK @Buzz Bloom, you asked for it. Here is a dialogue between a human researcher and GPT-3's entity "Wise Being" about the COVID pandemic. Note that GPT-3's training data ended in October 2019. Click on "Coronavirus".

Also note you can sign up to talk to Wise Being here. I suspect the waiting list is long.
 
