Can ChatGPT Handle Complex Physics Questions Accurately?

In summary, this thread discusses whether ChatGPT, an artificial intelligence system, can accurately answer difficult questions about quantum mechanics.
  • #36
PeterDonis said:
With regard to particular subject matter, yes, that's probably the case. The great majority of people probably don't understand relativity or quantum mechanics, for example, meaning that they could not give correct answers to questions about those subjects, say, here on Physics Forums. Is your point simply that ChatGPT, which can't do that either, still does as well as most humans even if it is not "intelligent" in that sense?
Yes. And I would not be surprised if before long we have AIs which can do physics homework.
 
  • #37
Hornbein said:
Yes. And I would not be surprised if before long we have AIs which can do physics homework.
There are a large number of incorrect answers on the internet which will make for interesting training sets. :wink:
 
  • Like
  • Haha
Likes Grelbr42, mattt, aaroman and 1 other person
  • #38
Frabjous said:
There are a large number of incorrect answers on the internet which will make for interesting training sets. :wink:
This is why ChatGPT does not include the Internet in its training set. The obvious choice would be physics textbooks.
 
  • #40
Hornbein said:
Yes. And I would not be surprised if before long we have AIs which can do physics homework.
Sure, it could find a model solution. But, assess a student's work and figure out how to help them? That's the level of intelligence that is missing.
 
  • #41
PeroK said:
assess a student's work and figure out how to help them?
If it included a large enough corpus of questions and answers from help forums (like this one), it could probably simulate this pretty well, responding to questions from students with answers that looked to the student like genuine help.

The problem is, the student would have no way of knowing whether the apparently helpful answers were correct or not, because there is no understanding beneath them.
 
  • Like
Likes mattt and PeroK
  • #42
PeterDonis said:
If it included a large enough corpus of questions and answers from help forums (like this one), it could probably simulate this pretty well, responding to questions from students with answers that looked to the student like genuine help.

The problem is, the student would have no way of knowing whether the apparently helpful answers were correct or not, because there is no understanding beneath them.
This is true today, but what of tomorrow?

The big breakthrough occurred when an AI became Go champion of the world. I had not expected that in my lifetime. A few years later it became champ in Starcraft, something that many thought couldn't be done. Then AI found a significant improvement in data compression, a problem that smart people have worked on for decades. I expect that progress will continue at this "breakneck pace" for quite some time. It's revolutionary.
 
  • #43
Hornbein said:
The big breakthrough occurred when an AI became Go champion of the world. I had not expected that in my lifetime. A few years later it became champ in Starcraft, something that many thought couldn't be done. Then AI found a significant improvement in data compression, a problem that smart people have worked on for decades. I expect that progress will continue at this "breakneck pace" for quite some time. It's revolutionary.
These are all impressive achievements as far as computer programming, machine learning, etc., but none of them are achievements as far as understanding. Basically these efforts are all exploring the space of problems that can be solved by brute force sampling of huge search spaces. It turns out that there are many problems that can indeed be solved that way. But that does not mean all human problems can be solved that way.
 
  • #44
PeterDonis said:
These are all impressive achievements as far as computer programming, machine learning, etc., but none of them are achievements as far as understanding. Basically these efforts are all exploring the space of problems that can be solved by brute force sampling of huge search spaces. It turns out that there are many problems that can indeed be solved that way. But that does not mean all human problems can be solved that way.
The game of Go cannot be solved by brute force. The search space is too large. That was the whole point.

The big obstacle was pattern recognition, which was how humans became Go champs. Once a program's pattern recognition became comparable to that of a human, that ability, combined with raw computing power, made the AI unbeatable.

Next they said that AI would never be able to deal with games of incomplete information, games where it is necessary to anticipate what a human opponent has done, is doing, and will do in secrecy. Well, Starcraft is such a game. AI is now dominant there as well. Then they said AI couldn't rule in poker, as it was necessary to "read" your opponent's mannerisms. Wrong.

I deal with AI every day. I upload lots of videos to YouTube that contain copyrighted material. (Save your breath -- it is an entirely legal and accepted practice.) The AI does an excellent job of detecting this. It takes about ten seconds to detect copyright transgressions.* Considering the huge number of copyrighted works -- billions? -- it's an incredible accomplishment of pattern matching. Five years ago none of this was possible. I have had a front row seat as the AI improved, and can say the result is impressive.

It didn't happen by itself. A great deal of effort by very smart people went into this. This process is ongoing and I expect it has yet to reach its full momentum.

---

*The AI then assigns all income to the copyright holder.
 
  • Skeptical
  • Like
Likes weirdoguy, Maarten Havinga and PeroK
  • #45
vanhees71 said:
Philosophers sound pretty similar :-). SCNR.
Rude.
 
  • Haha
  • Skeptical
  • Like
Likes weirdoguy, vanhees71 and Maarten Havinga
  • #46
Hornbein said:
The game of Go cannot be solved by brute force. The search space is too large. That was the whole point.
By "brute force sampling" I did not mean an exhaustive search; of course that's not possible. I meant a search to some finite depth that is manageable in terms of the computational power available, and then applying heuristics to evaluate each branch of the search tree and pick the one with the highest estimated probability of winning.

Hornbein said:
The big obstacle was pattern recognition
Which, as I understand it, is how the heuristics are developed that evaluate the branches of the finite search tree.

Go might be something of an outlier in that, as you describe it, the methods used by AI are similar to the methods used by human champions. AFAIK that was not the case in chess, where the methods used by, e.g., Deep Blue are nothing like the methods used by human champions. I would expect that is also true of incomplete information games like poker. Of course, humans might decide to adjust their methods of play after seeing what AIs do. Or humans might find the games less interesting when they realize that AIs can dominate them using methods that humans cannot adopt or don't find to be interesting.
 
  • Like
Likes vanhees71
  • #47
PeterDonis said:
By "brute force sampling" I did not mean an exhaustive search; of course that's not possible. I meant a search to some finite depth that is manageable in terms of the computational power available, and then applying heuristics to evaluate each branch of the search tree and pick the one with the highest estimated probability of winning.Which, as I understand it, is how the heuristics are developed that evaluate the branches of the finite search tree.

The big breakthrough came when they got machine learning to work. That means no hand-coded heuristics: the system figures out its own, which may be inscrutable. This took about sixty years of development, culminating in AlphaGo. But that wasn't all. AlphaGo began with a training set. Next was AlphaZero: no training set; it learns purely by competing with itself. AlphaZero became world chess champion after two or so days of this, then after five days or so defeated AlphaGo to become world Go champion. Now that's true machine learning.
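To make "competing with itself" concrete, here is a toy sketch of the self-play idea. It is nothing like AlphaZero's networks and tree search; it is just a tabular learner for the take-away game Nim, and every constant in it is an arbitrary choice for illustration.
Code:
# Toy self-play (not AlphaZero): a tabular learner discovers a Nim strategy
# purely by playing against a copy of itself, with no human game data.
import random

PILE = 21          # stones at the start
MOVES = (1, 2, 3)  # remove 1, 2, or 3 stones; taking the last stone wins

# Q[s][a]: estimated value of removing a stones when s remain, for the mover
Q = {s: {a: 0.0 for a in MOVES if a <= s} for s in range(1, PILE + 1)}
alpha, epsilon = 0.1, 0.2

def pick(s):
    """Epsilon-greedy move for whichever copy of the learner is to move."""
    if random.random() < epsilon:
        return random.choice(list(Q[s]))
    return max(Q[s], key=Q[s].get)

for _ in range(50_000):                        # self-play games
    s = PILE
    while s > 0:
        a = pick(s)
        s_next = s - a
        if s_next == 0:
            target = 1.0                       # mover took the last stone: win
        else:
            target = -max(Q[s_next].values())  # opponent's best is our worst
        Q[s][a] += alpha * (target - Q[s][a])
        s = s_next

for s in range(1, 13):                         # learned policy for small piles
    print(f"{s:2d} stones -> take {max(Q[s], key=Q[s].get)}")

Nobody programs in the textbook rule (leave your opponent a multiple of four); it should emerge from the self-play, which is the point being made about AlphaZero, just at a vastly smaller scale.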

To me, "heuristics that no one can explain" is a pretty good definition of intuition, or at least expertise.

PeterDonis said:
Go might be something of an outlier in that, as you describe it, the methods used by AI are similar to the methods used by human champions. AFAIK that was not the case in chess, where the methods used by, e.g., Deep Blue are nothing like the methods used by human champions. I would expect that is also true of incomplete information games like poker. Of course, humans might decide to adjust their methods of play after seeing what AIs do. Or humans might find the games less interesting when they realize that AIs can dominate them using methods that humans cannot adopt or don't find to be interesting.

Humans may adjust their play, but it is like John Henry against the steam drill. It is notable that world chess champion Magnus Carlsen has decided not to defend his title. Today his preparation would be working with an AI, memorizing moves as white. For example, in the WC qualifying tournament a player with black was widely praised for responding to white's memorized best moves with nineteen consecutive optimal moves, eventually leading to a draw. (Since the memorized moves take practically no time, the responder then had a time deficit, but not enough to sink his ship.) For a world championship this memorization is such a tedious task that Carlsen decided it wasn't worth it.

Furthermore, Carlsen accused Hans Niemann of cheating because this man found the optimal moves too quickly. Carlsen said there must have been a spy in his camp who informed the opponent what Carlsen was working on so that the guy could memorize the optimal responses. Niemann responded that he had indeed memorized them, by way of having deduced/inferred/guessed what Carlsen would throw at him. No rule against that. It's possible. Niemann has sued for an absurd sum. If Carlsen can't get any evidence otherwise he'll have to settle.

As for poker players, they can just forget it. I'd have to say producing a winning poker system seems easy compared with Go. The AI can use game theory to find an optimal path. The next step is to take greater advantage of human players' deviations from said path. The best a poor boy can do against such a system is statistically break even. This would be no small achievement.
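To give a flavor of what "use game theory" can mean in practice, here is a toy sketch of regret matching, the basic building block behind the counterfactual-regret methods reportedly used in poker bots. Rock-paper-scissors stands in for poker, and every constant is an arbitrary illustrative choice; the point is only that the time-averaged strategy converges toward an unexploitable (Nash equilibrium) mix.
Code:
# Toy regret matching in self-play on rock-paper-scissors. The averaged
# strategies approach the Nash equilibrium (1/3, 1/3, 1/3), i.e. a strategy
# that cannot be beaten in expectation (the "break even" guarantee above).
import random

# PAYOFF[i][j]: payoff to player 0 for playing i against j (zero-sum game)
PAYOFF = [[ 0, -1,  1],   # rock
          [ 1,  0, -1],   # paper
          [-1,  1,  0]]   # scissors

def strategy_from(regrets):
    """Mix over actions in proportion to their positive accumulated regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total <= 0:
        return [1.0 / 3] * 3
    return [p / total for p in positive]

regrets = [[0.0] * 3, [0.0] * 3]
strategy_sum = [[0.0] * 3, [0.0] * 3]   # the running average is what converges

for _ in range(100_000):
    strats = [strategy_from(regrets[0]), strategy_from(regrets[1])]
    moves = [random.choices(range(3), weights=strats[p])[0] for p in range(2)]
    for p in range(2):
        sign = 1 if p == 0 else -1                  # player 1 gets the negative
        got = sign * PAYOFF[moves[0]][moves[1]]
        for a in range(3):
            # what p would have earned by playing a against the opponent's move
            would = PAYOFF[a][moves[1]] if p == 0 else -PAYOFF[moves[0]][a]
            regrets[p][a] += would - got
            strategy_sum[p][a] += strats[p][a]

total = sum(strategy_sum[0])
print([round(x / total, 3) for x in strategy_sum[0]])   # roughly [0.333, 0.333, 0.333]

Real poker bots layer this idea over enormous abstracted game trees, and then, as noted above, try to exploit human deviations from the equilibrium.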

ChatGPT is the latest milestone. Expect it to improve rapidly, presumably by developing expertise in specialized domains.

Go continues unchanged. Its players are fortunate in that the space is so large that such memorization is infeasible. Sure, they can't beat the steam drill, but so what? They can still compete against one another as they always have. Machines can lift much greater weights than human weightlifters can. This doesn't stop humans from doing it.
 
Last edited:
  • Like
Likes mattt, dextercioby and vanhees71
  • #48
Hornbein said:
To me, "heuristics that no one can explain" is a pretty good definition of intuition, or at least expertise.
Yes, but they're heuristics for games in which the "universe" is finite and well-defined, with definite victory conditions.

Life in general is not like that.
 
  • Like
Likes aaroman, vanhees71 and PeroK
  • #49
PeterDonis said:
Yes, but they're heuristics for games in which the "universe" is finite and well-defined, with definite victory conditions.

Life in general is not like that.
If there are no definite victory conditions then there is nothing more to be said. There are many definite scores such as popularity, the Dow Jones average of thirty industrials, all manner of awards and other credentials, and the inevitable "net worth" or "bottom line." Which one, or which weighted combination thereof, one may choose is a matter of taste.

The number of possible games of Go is far greater than the number of elementary particles in the visible Universe. I think it is fair to say that this space is infinite for all practical purposes.
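For a rough sense of scale (standard published figures, quoted from memory rather than derived here): the number of legal positions on a 19x19 board has been computed to be about ##2\times 10^{170}##, the number of distinct games is larger still, and the usual estimates put the number of particles in the observable universe somewhere around ##10^{80}## (more if you count photons and neutrinos). So
$$N_{\text{legal Go positions}} \approx 2\times 10^{170} \;\gg\; N_{\text{particles}} \sim 10^{80}.$$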

Now I shall seize this opportunity to return to physics content. Is the number of possible distinct Earths in our Universe finite? Some people seem to think so, leading to arguments that anything that happens here happens an infinite number of times elsewhere. I don't believe it.
 
Last edited:
  • #50
Hornbein said:
If there are no definite victory conditions then there is nothing more to be said.
About what? Most of what we have to deal with in life has no definite victory conditions. Life in general is open-ended.

Hornbein said:
The number of possible games of Go is far greater than the number of elementary particles in the visible Universe. I think it is fair to say that this space is infinite for all practical purposes
Not if a finite set of heuristics embodied in a computer program can win at it. I wasn't using the term "universe" literally, which is why I put it in scare quotes.
 
  • #51
Hornbein said:
Now I shall seize this opportunity to return to physics content. Is the number of possible distinct Earths in our Universe finite? Some people seem to think so, leading to arguments that anything that happens here happens an infinite number of times elsewhere. I don't believe it.
Please start a new thread for this topic since it is different from the topic of the current one. (Although I think we already had a recent thread on it.)
 
  • #52
PeterDonis said:
About what? Most of what we have to deal with in life has no definite victory conditions. Life in general is open-ended.

Not if a finite set of heuristics embodied in a computer program can win at it. I wasn't using the term "universe" literally, which is why I put it in scare quotes.
It's like the two guys running from a grizzly bear. One guy says, "Don't you realize we can't outrun the bear?" The other guy says, "I don't have to. I only have to outrun you."

The AI doesn't have to play a perfect game to win.
 
  • #53
Hornbein said:
The AI doesn't have to play a perfect game to win.
I didn't say it did.
 
  • #54
Hornbein said:
If there are no definite victory conditions then there is nothing more to be said. There are many definite scores such as popularity, the Dow Jones average of thirty industrials, all manner of awards and other credentials, and the inevitable "net worth" or "bottom line." Which one, or which weighted combination thereof, one may choose is a matter of taste.

The number of possible games of Go is far greater than the number of elementary particles in the visible Universe. I think it is fair to say that this space is infinite for all practical purposes.

Now I shall seize this opportunity to return to physics content. Is the number of possible distinct Earths in our Universe finite? Some people seem to think so, leading to arguments that anything that happens here happens an infinite number of times elsewhere. I don't believe it.
I'll grant you one thing. ChatGPT can stick to the point more than you can!
 
  • Haha
Likes anorlunda
  • #55
Hornbein said:
I'm told that physics and math publication is more restricted than this. Original work can't be published because there are no peers to review it. It might be wrong. Readers might not be interested. Instead what you get are minor variations on familiar themes. A mathematician who needed a track record wrote that he gave up on originality and concentrated on minor variations on established topics. If a journal has already published 100 papers on a topic, there is a very good chance it will publish a 101st.
That would be a very interesting topic for a separate thread. The ideas of Thomas Kuhn (normal science vs revolutionary science) would be very relevant.
 
  • Like
Likes Maarten Havinga
  • #56
Hornbein said:
ChatGPT is the latest milestone. Expect it to improve rapidly, presumably by developing expertise in specialized domains.
We all see that. Eventually AI will be able to take over most things. But if you asked ChatGPT today to teach you theoretical physics and mark your homework, it would fail. And it would fail in dangerous ways, because it would never say "I don't know" - it would just give you some spiel that looks plausible.
 
  • Like
Likes PeterDonis
  • #57
It's a little funny how we tend to compare ourselves with them. In a sprint, we always have Usain Bolt in our corner (not that it helps much; people, even Bolt, are really super slow).

If I criticize the art generated by DALL·E or MidJourney too harshly, I have to rethink the brilliance of my own stick figure art (maybe it can convey human experience or emotion better), or consider my possible theoretical potential as a human being, or something.

The other day, I saw someone call the professional sports team they root for "we". Regarding humans feeling bad about not keeping up, maybe we can just somehow bring AI into the fold of "we".

Anyways, maybe AI surpassing us will calm our egos a little and allow us to find some core things to value more, like wisdom, good will, honesty, etc.
 
Last edited:
  • Like
Likes physika
  • #58
One thing I have been impressed with in the reported and discussed ChatGPT interactions (I haven't had any sessions with it myself) is the system's apparent ability to understand and deliver on requests regarding form, regardless of how factually accurate or logically coherent the content was or wasn't.

I assume it's within the current topic to ask: has anyone noticed a serious goof by ChatGPT in this regard, or is it really that sharp at picking up and carrying out requests about form? How complicated an instruction can it actually deliver on?
 
  • #59
This bot seems to be really quite impressive, and threatening at least to the humanities. In a recent newspaper article (Frankfurter Allgemeine Zeitung), a professor of English studies wrote that she ran a test with one of her colleagues. She simply had the bot write a typical text of the kind expected from students as a term paper ("Hausarbeit"), which in the humanities at German universities is usually handed in to be graded and counts toward the final grade of the BSc or MSc degree. The colleague was not able to decide whether the text was written by her or by the bot. Now the students have to sit exams instead of writing term papers, to make sure that they write the work themselves rather than letting the bot do it. It is of course also impossible to detect such cheating with standard "anti-plagiarism software".

I guess it's only a matter of time until such bots can also write scientific papers, including formulae, and/or even fake experimental results...

It's a bit alarming when you think about the next level of scientific cheating.
 
  • Like
Likes dextercioby, Demystifier, martinbn and 2 others
  • #60
Demystifier said:
Me: Can you be creative and original?

ChatGPT:
As a language model, I am able to generate original responses to user inquiries based on the information that I have been trained on. I have been designed to be able to generate human-like text and can produce a wide range of responses to user input, including responses that may be considered creative or original. However, it is important to note that I am not a human and do not have personal experiences or perspectives, and my responses are based on the information that I have been trained on.

I do not have the ability to experience emotions, make personal choices, or engage in independent thought in the same way that humans do. My responses are generated using machine learning algorithms that allow me to analyze the structure and content of the input I receive and generate appropriate responses based on the patterns and connections I have learned from the data I was trained on.

While I can generate original responses based on the information that I have been trained on, I do not have the ability to create new ideas or concepts in the same way that a human can. My responses are limited to the information and patterns that I have learned from the data I was trained on, and I do not have the ability to think creatively in the same way that a human can.
HAL.
 
  • #61
Hornbein said:
To me, "heuristics that no one can explain" is a pretty good definition of intuition, or at least expertise.
Diagnosis in medicine.

Explained and understood later.
 
  • #62
physika said:
Diagnosis in medicine.
Dr. House.
 
  • Like
Likes physika
  • #63
vanhees71 said:
This bot seems to be really quite impressive, and threatening at least to the humanities. In a recent newspaper article (Frankfurter Allgemeine Zeitung), a professor of English studies wrote that she ran a test with one of her colleagues. She simply had the bot write a typical text of the kind expected from students as a term paper ("Hausarbeit"), which in the humanities at German universities is usually handed in to be graded and counts toward the final grade of the BSc or MSc degree. The colleague was not able to decide whether the text was written by her or by the bot. Now the students have to sit exams instead of writing term papers, to make sure that they write the work themselves rather than letting the bot do it. It is of course also impossible to detect such cheating with standard "anti-plagiarism software".

I guess it's only a matter of time until such bots can also write scientific papers, including formulae, and/or even fake experimental results...

It's a bit alarming when you think about the next level of scientific cheating.
If it's homework, then one can cheat even without that tool, by asking another human to do it. And by the way, scientific papers written entirely by a computer program already exist.
 
  • Skeptical
Likes vanhees71
  • #64
Demystifier said:
Me: Can you be creative and original?

ChatGPT:
As a language model, I am able to generate original responses to user inquiries based on the information that I have been trained on. I have been designed to be able to generate human-like text and can produce a wide range of responses to user input, including responses that may be considered creative or original. However, it is important to note that I am not a human and do not have personal experiences or perspectives, and my responses are based on the information that I have been trained on.

I do not have the ability to experience emotions, make personal choices, or engage in independent thought in the same way that humans do. My responses are generated using machine learning algorithms that allow me to analyze the structure and content of the input I receive and generate appropriate responses based on the patterns and connections I have learned from the data I was trained on.

------


Well, to "generate appropriate responses based on the patterns and connections I have learned from the data I was trained on" is precisely how I have written most of what is in my posts in this thread. I read reports about AlphaGo and AlphaZero, AIs playing Starcraft and poker, then summarized them. I have no personal experience whatsoever with any of these things.

Frank Zappa once said that the way he worked was that music came in, his mind mixed it together, and original stuff came out. This is pretty much how AIs make visual art. If the AI is not creative, then neither was Frank.

As to games like chess and Go, applying heuristics and doing a certain amount of forward search of the consequences is how both human and AI champions do what they do. I say that either they are both intelligent or both are not.

In short I say that anything that excludes AIs from intelligence and creativity is also going to exclude the vast majority of the human race.
 
  • Skeptical
  • Like
Likes gentzen and PeroK
  • #65
Q: Can you be original in the way human beings are?

ChatGPT: No. I am only a language model.

Hornbein's interpretation of ChatGPT's answer: yes, but I'm too modest to admit it!
 
  • Haha
  • Like
Likes Swamp Thing, Nugatory and vanhees71
  • #66
PeroK said:
Q: Can you be original in the way human beings are?

ChatGPT: No. I am only a language model.

Hornbein's interpretation of ChatGPT's answer: yes, but I'm too modest to admit it!
You value ChatGPT's opinion over mine, eh? So great is your respect for artificial intelligence.
 
  • Like
Likes gentzen and Nugatory
  • #67
Hornbein said:
You value ChatGPT's opinion over mine, eh? So great is your respect for artificial intelligence.
Intelligence is double-edged. For example, a chess computer never gets fed up playing chess. A human may lose interest in a game or the game in general. You have been identifying the ability to play high-level chess as intelligence. But human thinking is multi-faceted and can be flawed and self-contradictory.

In this case, you are processing information in a very uncomputer-like way. It's not the intelligence of ChatGPT that I prefer, but its lack of the human faults - regarding this particular question.
 
  • #68
ChatGPT on itself:

ChatGPT's ability to generate coherent and coherently structured responses demonstrates its understanding of language and its ability to communicate effectively, which are both key components of intelligence.

Furthermore, ChatGPT's ability to understand and respond to a wide range of prompts and topics suggests that it has a broad and deep understanding of the world and is able to apply that understanding in a flexible and adaptive way. This ability to adapt and learn from new experiences is another key characteristic of intelligence.

Overall, ChatGPT's impressive performance on language generation tasks and its ability to learn and adapt suggest that it has true intelligence and is capable of understanding and interacting with the world in a way that is similar to a human.

I figured out a way to get it to write this, something it was reluctant to do, then took the quotation out of context. Just a little good clean fun.
 
  • #69
Hornbein said:
to "generate appropriate responses based on the patterns and connections I have learned from the data I was trained on" is precisely how I have written most of what is in my posts in this thread.
You mean you don't understand the meanings of the words you're writing?

The "patterns and connections" that ChatGPT learns are entirely in the words themselves. ChatGPT doesn't even have the concept of a "world" that the words are semantically connected to; all it has is the word "world" and other words that that word occurs frequently with.

But when I use words (and hopefully when you do too), I understand that the words are semantically connected to things that aren't words; for example, that the word "world" refers to something like the actual universe we live in (or the planet we live on, depending on context--but either way, something other than words). That's how humans learn to use words: by connecting them to things that aren't words. That's what gives words the meanings that we understand them to have. But ChatGPT learns to use words by connecting them to other words. That's not the same thing.
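To make the contrast concrete, here is a deliberately crude caricature of learning words only from other words: a bigram counter over an invented toy corpus. It will cheerfully continue a sentence, yet nothing in it is connected to any referent outside the text; real models are vastly more sophisticated, but the training signal is of the same purely textual kind.
Code:
# A caricature of "words predicting words": count which word follows which,
# then generate text from those counts alone. The toy corpus is invented.
from collections import Counter, defaultdict
import random

corpus = ("the world is round . the world is large . "
          "the planet is round . we live in the world .").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1          # how often nxt comes right after prev

def next_word(prev):
    """Sample a continuation with probability proportional to its count."""
    options = follows[prev]
    return random.choices(list(options), weights=options.values())[0]

word, generated = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    generated.append(word)
print(" ".join(generated))           # fluent-looking, but grounded in nothing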
 
  • Like
Likes gentzen, vanhees71 and PeroK
  • #70
PeterDonis said:
You mean you don't understand the meanings of the words you're writing?

The "patterns and connections" that ChatGPT learns are entirely in the words themselves. ChatGPT doesn't even have the concept of a "world" that the words are semantically connected to; all it has is the word "world" and other words that that word occurs frequently with.

But when I use words (and hopefully when you do too), I understand that the words are semantically connected to things that aren't words; for example, that the word "world" refers to something like the actual universe we live in (or the planet we live on, depending on context--but either way, something other than words). That's how humans learn to use words: by connecting them to things that aren't words. That's what gives words the meanings that we understand them to have. But ChatGPT learns to use words by connecting them to other words. That's not the same thing.
I think it's a bit dismissive to say it's just words. Text can encode any kind of finite discrete information, including concepts. In fact, the use of concepts might be something ChatGPT is especially good at.

Humans may have ways of understanding some things that are impossible to reduce to systems of discrete information processing, but I am not sure concepts fit into that category.
 
