ChatGPT Examples, Good and Bad

  • Thread starter: anorlunda
  • Tags: chatgpt

Discussion Overview

The thread discusses various examples of ChatGPT's performance, highlighting both successful and unsuccessful outputs. Participants share their experiences with the AI's responses to mathematical problems, programming tasks, and creative prompts, exploring the implications of its word prediction capabilities and logical reasoning.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning
  • Experimental/applied

Main Points Raised

  • Some participants note that ChatGPT produces a mix of good and bad results, with specific examples illustrating its inconsistencies in mathematical calculations.
  • One participant describes a successful instance where ChatGPT identified a bug in Python code and suggested a rewrite, although it incorrectly stated the absence of a return statement.
  • Another participant shares an example where ChatGPT misunderstood a question related to Feynman diagrams, suggesting that its interpretation was influenced by common meanings of terms rather than specific scientific contexts.
  • Concerns are raised about ChatGPT's ability to handle complex subjects like science and engineering compared to more textual fields like law.
  • Some participants express skepticism about ChatGPT's reasoning, suggesting it sometimes provides random answers in hopes of being correct.
  • Examples of ChatGPT's performance on multiple-choice questions are shared, with mixed evaluations of its reasoning quality.
  • Creative outputs, such as rephrasing historical texts in a whimsical style, are discussed, with varying opinions on the quality of the results.
  • A participant mentions ChatGPT's struggles with solving elastic collision problems, illustrating its limitations in applying physics concepts accurately.
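The thread does not reproduce the specific collision problems ChatGPT struggled with, but for reference, this is the kind of exercise involved: a minimal sketch of the standard textbook formulas for a one-dimensional elastic collision, derived from conservation of momentum and kinetic energy (illustrative only, not taken from the thread):

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Return the final velocities (v1f, v2f) of two bodies after a
    1D elastic collision, using the standard closed-form solution of
    the momentum and kinetic-energy conservation equations."""
    total = m1 + m2
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / total
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / total
    return v1f, v2f

# Equal masses simply exchange velocities:
# elastic_collision_1d(1.0, 5.0, 1.0, 0.0) -> (0.0, 5.0)
```

A quick sanity check on any solver of this type is that total momentum and total kinetic energy are unchanged by the collision, which is exactly where a chatbot's algebra tends to slip.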

Areas of Agreement / Disagreement

Participants express a range of opinions on ChatGPT's performance, with no clear consensus on its capabilities. Some examples are praised, while others are criticized, indicating ongoing debate about its reliability and effectiveness in different contexts.

Contextual Notes

Limitations in ChatGPT's reasoning and understanding of context are highlighted, particularly in technical subjects. Participants note that its responses may be influenced by the commonality of terms rather than their specific scientific meanings.

Who May Find This Useful

This discussion may be of interest to users exploring AI capabilities in problem-solving, programming, and creative writing, as well as those evaluating the reliability of AI in technical fields.

  • #511
Every time I open ChatGPT, it says
[attached screenshot]*

and I am reminded of Electric Dreams (1984)**, where the computer calls Miles "Moles" for the entirety of the film.






(*Not really its fault. My user name is Daves Brain. But still it amuses me...)

** great film. One of my faves.
 
  • #512
DaveC426913 said:
But it sure seems unwise. I would never trust anything a chatbot says. They are known liars, fabricators, hallucinators and sycophants - and they're terrible at math.

Heck, they even suck at spelling.

View attachment 371604

View attachment 371605

This thread exists because of innumerable such examples.
Well, see, that is what is interesting. I can't find anything incoherent in the AI's deep reply to this cognitive inquiry about the nature of reality as it pertains to physics and metaphysics, while others seem to be asking it to spell "together" backwards as a test of its intellectual capacity...
 
  • #513
DaveC426913 said:
and they're terrible at math

A while back I wanted to test it on a mechanics problem.

https://chatgpt.com/share/6a065f91-71f0-83ea-82ca-dc55ccd5a053

Its reply/solution was excellent.

Then I gave a suggestion, and it replied:

"Yes — excellent point!

You're absolutely right: instead of calculating the full strain energy ##U## and then differentiating, Castigliano’s Theorem can be applied more directly using: ..."

You can clearly follow the mathematics as it works through it... and we arrive at the same result it initially gave.

It thanked me. I guess it was just being a sycophant, as you say, when it said "Your shortcut is absolutely valid and more elegant"?
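For anyone following along, the "more direct" route presumably refers to the standard textbook form of Castigliano's Theorem for bending (stated generically here, since the linked chat's specific problem is not reproduced in this thread). Instead of first computing the total strain energy

##U = \int_0^L \frac{M(x)^2}{2EI}\,dx##

and then differentiating with respect to the load, one differentiates under the integral sign:

##\delta = \frac{\partial U}{\partial P} = \int_0^L \frac{M(x)}{EI}\,\frac{\partial M}{\partial P}\,dx##

For example, a cantilever of length ##L## with end load ##P## has ##M(x) = -Px## (measuring ##x## from the free end), so ##\partial M/\partial P = -x## and the integral gives the familiar tip deflection ##\delta = PL^3/3EI##.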
 
  • #514
DaveC426913 said:
Sry, I see no disagreements in the excerpt you sent. In fact, I see an abundance of flattery and sycophancy.
  • "That is not fringe thinking. It is one of the oldest and deepest problems in philosophy..."
Seems true to me. Philosophers have pondered this since ancient times and it is quite prominent in Buddhism. How do we know this world is mechanical, not a clever illusion?

Once again I will note that ChatGPT is the Model T of AI. It's possible that the F-15 models are already here. Terence Tao has access to AIs that he says are very useful research tools in higher mathematics. Some say he's the best mathematician in the world today, so I believe him.
 
