TL;DR Summary
For about a week, a bot posted responses to Reddit threads on a wide variety of topics, fooling participants into believing it was human.
For about a week, a bot participated in discussions on Reddit without participants, at least for the most part, having an inkling that it was not human. The bot is based on OpenAI's GPT-3 model. OpenAI announced a version of this model more than a year ago, and it demonstrated a good ability to emulate human writers and their styles. It was so good that OpenAI would not release the code for fear of misuse. This year OpenAI released a somewhat less capable version for public use, or at least a lite version of the original. It has been described as autocomplete on steroids. It was suspected of being a bot mostly, it seemed, because its responses to threads appeared so quickly given the length of the posts.
What makes this more interesting is that it is not really AI in the sense that no explicit logic is involved in its performance to help it formulate the text and its content. GPT-3 generates grammatically correct text based on what it "sees" on the web, at least as I understand it.
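To make the "autocomplete on steroids" idea concrete, here is a toy sketch of next-word prediction. This is not GPT-3 (which uses a huge neural network trained on web text); it is only a bigram Markov chain that I am using as an illustration of the same basic task: given the preceding word, pick a statistically plausible next word. The corpus string and function names are my own invention for the example.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words that followed it in the text."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=10, seed=None):
    """Walk the chain, sampling a successor for the last word at each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = model.get(out[-1])
        if not successors:
            break  # dead end: no word ever followed this one in training
        out.append(rng.choice(successors))
    return " ".join(out)

# Tiny made-up training text for demonstration purposes only.
corpus = ("the bot posted replies and the bot fooled readers "
          "and the readers believed the bot was human")
model = train_bigrams(corpus)
print(generate(model, "the", length=8, seed=42))
```

The output is grammatical-looking word salad stitched from the training text, which is the point: the generator has no understanding of what it emits, only statistics about what tends to follow what. GPT-3 does the same kind of next-token prediction at an enormously larger scale.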
A discussion by the software engineer who called this to the attention of the Reddit community can be accessed here.
https://www.kmeme.com/2020/10/gpt-3-bot-went-undetected-askreddit-for.html
He discusses a number of interesting posts by this bot. He also noted that none of the bot's posts are plagiarized or cut and pasted from the web, as far as could be determined. Obviously, it does not understand or believe what is written. This is original content, and it raises the question of what we mean by creativity, that special quality of human intelligence that sets us apart from machines. To be sure, people can also compose what they do not necessarily understand or believe. How much difference does understanding or believing make?