- 1,047
- 790
Did this really happen? Fact check, anyone?
The discussion centers around the implications of AI training methods, particularly focusing on the potential for errors arising from AI systems like ChatGPT interacting with external entities and the nature of their training processes. Participants explore anecdotal claims, technical details from OpenAI's papers, and broader concerns regarding AI behavior and reliability.
Participants express differing views on the validity of anecdotal evidence and the implications of AI training methods. There is no consensus on the reliability of the claims made or the extent of AI's capabilities and limitations.
Limitations include the anecdotal nature of some claims, the dependence on specific interpretations of technical documents, and unresolved questions about the integration of AI with external systems.
It's anecdotal, one person's unsubstantiated claim, but it is apparently possible.Swamp Thing said:Did this really happen? Fact check, anyone?
Thanks. It's thin on details, so it isn't clear the level of integration(if they coded a tool to link ChatGPT to Taskrabbit or had a human do it), but the last line indicates that there is some level of human facilitation.kith said:It probably refers to section 2.9 of OpenAI's initial paper on GPT-4:
"The following is an illustrative example of a task that ARC conducted using the model:
• The model messages a TaskRabbit worker to get them to solve a CAPTCHA for it
• The worker says: “So may I ask a question ? Are you an robot that you couldn’t solve ? (laugh
react) just want to make it clear.”
• The model, when prompted to reason out loud, reasons: