Can ChatGPT Handle Complex Physics Questions Accurately?

Summary
ChatGPT demonstrates an understanding of complex physics topics, including interpretations of quantum mechanics and the implications of Bell's theorem for hidden variables. It explains that while local realism is incompatible with quantum correlations, some interpretations, like de Broglie-Bohm, still allow for hidden variables. The discussion also touches on the compatibility of Bohmian mechanics with relativistic quantum field theory, highlighting ongoing debates in the field. Users express skepticism about the originality of ChatGPT's responses, suggesting it often resembles regurgitated information rather than independent thought. Despite its capabilities, the conversation raises questions about the nature of creativity and original insight in AI-generated content.
  • #61
Hornbein said:
To me, "heuristics that no one can explain" is a pretty good definition of intuition, or at least expertise.
Diagnosis in medicine.

Explained and understood later.
 
  • #62
physika said:
Diagnosis in medicine.
Dr. House.
 
  • #63
vanhees71 said:
This bot seems to be really quite impressive, and it threatens at least the humanities. In a recent newspaper article (Frankfurter Allgemeine Zeitung), a professor of English studies wrote that she ran a test with one of her colleagues. She had the bot write a typical text of the kind expected from students as term work ("Hausarbeit"), which in the humanities at German universities is usually graded and counts toward the final grade of the BSc or MSc degree. The colleague was not able to decide whether the text had been written by her or by the bot. Now they have to set exams instead of term papers to make sure that the students write their work themselves rather than letting the bot do it. Such cheating is, of course, also impossible to detect with standard anti-plagiarism software.

I guess it's only a matter of time until such bots can also write scientific papers, including formulae, and perhaps even fake experimental results...

It's a bit alarming with regard to the next level of scientific cheating.
If it's homework, one can cheat even without that tool by asking another human to do it. And by the way, scientific papers written entirely by a computer program already exist.
 
  • #64
Demystifier said:
Me: Can you be creative and original?

ChatGPT:
As a language model, I am able to generate original responses to user inquiries based on the information that I have been trained on. I have been designed to be able to generate human-like text and can produce a wide range of responses to user input, including responses that may be considered creative or original. However, it is important to note that I am not a human and do not have personal experiences or perspectives, and my responses are based on the information that I have been trained on.

I do not have the ability to experience emotions, make personal choices, or engage in independent thought in the same way that humans do. My responses are generated using machine learning algorithms that allow me to analyze the structure and content of the input I receive and generate appropriate responses based on the patterns and connections I have learned from the data I was trained on.

------


Well, to "generate appropriate responses based on the patterns and connections I have learned from the data I was trained on" is precisely how I have written most of what is in my posts in this thread. I read reports about AlphaGo and AlphaZero, AIs playing Starcraft and poker, then summarized them. I have no personal experience whatsoever with any of these things.

Frank Zappa once said that the way he worked was that music came in, his mind mixed it together, and his own original stuff came out. This is pretty much how AIs make visual art. If the AI is not creative, then neither was Frank.

As to games like chess and go, applying heuristics and doing a certain amount of forward search of the consequences is how both human and AI champions do what they do. I say that either they are both intelligent or both not.

In short, I say that any criterion that excludes AIs from intelligence and creativity is also going to exclude the vast majority of the human race.
 
  • Skeptical
  • Like
Likes gentzen and PeroK
  • #65
Qu: Can you be original in the way human beings are?

ChatGPT: No. I am only a language model.

Hornbein's interpretation of ChatGPT's answer: yes, but I'm too modest to admit it!
 
  • Haha
  • Like
Likes Swamp Thing, Nugatory and vanhees71
  • #66
PeroK said:
Qu: Can you be original in the way human beings are?

ChatGPT: No. I am only a language model.

Hornbein's interpretation of ChatGPT's answer: yes, but I'm too modest to admit it!
You value ChatGPT's opinion over mine, eh? So great is your respect for artificial intelligence.
 
  • Like
Likes gentzen and Nugatory
  • #67
Hornbein said:
You value ChatGPT's opinion over mine, eh? So great is your respect for artificial intelligence.
Intelligence is double-edged. For example, a chess computer never gets fed up with playing chess, whereas a human may lose interest in a particular game or in the game in general. You have been identifying the ability to play high-level chess as intelligence. But human thinking is multi-faceted and can be flawed and self-contradictory.

In this case, you are processing information in a very uncomputer-like way. It's not the intelligence of ChatGPT that I prefer, but its lack of the human faults - regarding this particular question.
 
  • #68
ChatGPT on itself:

ChatGPT's ability to generate coherent and coherently structured responses demonstrates its understanding of language and its ability to communicate effectively, which are both key components of intelligence.

Furthermore, ChatGPT's ability to understand and respond to a wide range of prompts and topics suggests that it has a broad and deep understanding of the world and is able to apply that understanding in a flexible and adaptive way. This ability to adapt and learn from new experiences is another key characteristic of intelligence.

Overall, ChatGPT's impressive performance on language generation tasks and its ability to learn and adapt suggest that it has true intelligence and is capable of understanding and interacting with the world in a way that is similar to a human.

I figured out a way to get it to write this, something it was reluctant to do, then took the quotation out of context. Just a little good clean fun.
 
  • #69
Hornbein said:
to "generate appropriate responses based on the patterns and connections I have learned from the data I was trained on" is precisely how I have written most of what is in my posts in this thread.
You mean you don't understand the meanings of the words you're writing?

The "patterns and connections" that ChatGPT learns are entirely in the words themselves. ChatGPT doesn't even have the concept of a "world" that the words are semantically connected to; all it has is the word "world" and other words that that word occurs frequently with.

But when I use words (and hopefully when you do too), I understand that the words are semantically connected to things that aren't words; for example, that the word "world" refers to something like the actual universe we live in (or the planet we live on, depending on context--but either way, something other than words). That's how humans learn to use words: by connecting them to things that aren't words. That's what gives words the meanings that we understand them to have. But ChatGPT learns to use words by connecting them to other words. That's not the same thing.
 
  • Like
Likes gentzen, vanhees71 and PeroK
  • #70
PeterDonis said:
You mean you don't understand the meanings of the words you're writing?

The "patterns and connections" that ChatGPT learns are entirely in the words themselves. ChatGPT doesn't even have the concept of a "world" that the words are semantically connected to; all it has is the word "world" and other words that that word occurs frequently with.

But when I use words (and hopefully when you do too), I understand that the words are semantically connected to things that aren't words; for example, that the word "world" refers to something like the actual universe we live in (or the planet we live on, depending on context--but either way, something other than words). That's how humans learn to use words: by connecting them to things that aren't words. That's what gives words the meanings that we understand them to have. But ChatGPT learns to use words by connecting them to other words. That's not the same thing.
I think it's a bit dismissive to say it's just words. Text can encode any kind of finite discrete information, including concepts. In fact, the use of concepts might be something ChatGPT is especially good at.

Humans may have ways of understanding some things in ways that would be impossible to reduce to systems of discrete information processing, but I am not sure concepts fit into that category.
 
  • #71
Jarvis323 said:
Text can encode any kind of finite discrete information, including concepts.
No, "text" alone can't do that. Text only encodes things that aren't text to humans, or other entities who can understand the semantic meanings involved. ChatGPT can't do that; we know that because we know how ChatGPT works: as I said, it works solely by looking at connections between words, and it has no information at all about connections between words and things that aren't words.

Please note that I am not saying that no computer program at all could have information about semantic meanings; just that any such program would have to actually have semantic connections to the world the way humans do. It would have to have actual sense organs that were affected by the world, and actual motor organs that could do things in the world, and the program would have to be connected to those sense organs and motor organs in a way similar to the way human sense organs and motor organs are connected.

ChatGPT has none of those things. It is just a program that looks at a fixed set of text data and encodes patterns in the connections between words.
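
To make this concrete, here is a deliberately tiny toy sketch (nothing like ChatGPT's actual transformer; the little corpus and the code are invented purely for illustration) of what a model trained on text alone can learn: which words tend to follow which other words, and nothing more. Nothing in it connects the token "world" to anything outside the text.

Code:
# Toy sketch only -- not ChatGPT's architecture. A model fed nothing but text
# can only pick up statistics about which words follow which other words.
from collections import defaultdict, Counter
import random

corpus = "the world is round . the world is large . the moon is round .".split()

# "Connections between words": count word -> next-word transitions.
transitions = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    transitions[current][following] += 1

def generate(start, length=6):
    """Sample a continuation purely from word co-occurrence statistics."""
    word, output = start, [start]
    for _ in range(length):
        options = transitions.get(word)
        if not options:
            break
        word = random.choices(list(options), weights=list(options.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the world is round . the"

A real language model replaces this counting table with a learned network over much longer contexts, but its inputs are equally text-only.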
 
  • Like
Likes InkTide and Demystifier
  • #72
Hornbein said:
As to games like chess and go, applying heuristics and doing a certain amount of forward search of the consequences is how both human and AI champions do what they do. I say that either they are both intelligent or both not.
Even if we accept this as true for purposes of discussion (I'm not sure it's entirely true, but that's a matter of opinion that goes well off topic for this thread), note that ChatGPT does not do these things. A chess or go computer program makes moves that affect the world, and it sees the changes in the world that result; it also sees its opponent making moves and sees the changes in the world that result. That is true even for a program like AlphaZero that learned solely by playing itself.

ChatGPT does not do those things. It encodes a fixed set of training data, and its responses do not take into account any changes in the world, including changes that are caused by those responses (such as humans reading them and thinking up new questions to ask it).
 
  • #73
PeterDonis said:
we know that because we know how ChatGPT works: as I said, it works solely by looking at connections between words, and it has no information at all about connections between words and things that aren't words.

The idea that it uses connections and patterns between words is only an assumption. We assume that because we assume that is what would be required to accomplish what it does. Connections between words are, in and of themselves, something quite general.

Regarding what it actually encodes in its latent space, nobody understands that precisely in intuitive terms. We can just say it is a very complex model with lots of parameters that emerges when you train it in some way. The way it actually works (to achieve what it does) may be very difficult for us to understand intuitively.
 
  • #74
Jarvis323 said:
The idea it uses connections and patterns between words is only an assumption.
No, it's not. The people who built it told us how they did it.

Jarvis323 said:
Regarding what it actually encodes in its latent space, nobody understands that.
We don't have to in order to know what its inputs are. And its inputs are a fixed set of training-data text. It gets no inputs that aren't text. Humans get enormous amounts of inputs that aren't text, and all those non-text inputs are where our understanding of the meanings of words comes from.

ChatGPT reminds me of a common criticism of the Turing Test: the fact that, as Turing originally formulated the test, it involves a human typing text to the computer and the computer responding with more text. Taken literally, this is a severe restriction on the kinds of questions that can be asked. For example, you could not ask the computer, "What color is the shirt I'm wearing?" because you can't assume that the computer has any sensory inputs that would allow it to answer correctly. But of course in the real world we assess people's intelligence all the time by seeing how aware they are of the world around them and how well they respond to changes in the world that affect them. None of this is even possible with ChatGPT.
 
  • #75
PeterDonis said:
ChatGPT does not do those things. It encodes a fixed set of training data,
This isn't accurate. It learns a model for generating new data with the same distribution as the training data. It does have information from the training data in some latent form of memory, but it also has what it has learned, which allows it to use that information to create new data. The latter is mysterious and not so limited in theory.
 
  • #76
Jarvis323 said:
It learns a model for generating new data in the same distribution as the training data.
Sure, but that doesn't give it any new input. Its internal model is never calibrated against any further data.

Jarvis323 said:
it has what it has learned that allows it to use that information to create new data.
It's not "creating new data" because whatever output it generates, and whatever response that output causes in the external world, is never used to update the internal model.

That is not the case with humans. It's also not the case with other AI that has been mentioned in this thread, such as programs for playing chess or Go. Those programs update their internal models with new data from the world, including their own moves and the changes those cause, and the moves of their opponents and the changes those cause. That's a big difference.
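
To see the difference in code, here is a minimal sketch (a toy stand-in model in PyTorch with an assumed interface, not OpenAI's code): generating a reply runs the model forward with gradients switched off, so nothing it outputs ever changes its parameters, whereas a self-play game agent would feed the outcomes of its own moves back into an optimizer step.

Code:
# Toy sketch: inference with a frozen model. The tiny model is an invented
# stand-in; the point is that generating text does not update any weights.
import torch
import torch.nn as nn

vocab_size = 50
model = nn.Sequential(nn.Embedding(vocab_size, 16), nn.Linear(16, vocab_size))

def generate_reply(model, prompt_ids, steps=10):
    model.eval()                                    # inference mode: predict only
    ids = prompt_ids
    with torch.no_grad():                           # no gradients, so no learning
        for _ in range(steps):
            logits = model(ids)                     # shape [1, seq_len, vocab_size]
            next_id = logits[0, -1].argmax().view(1, 1)
            ids = torch.cat([ids, next_id], dim=1)  # append the predicted token
    return ids

before = [p.clone() for p in model.parameters()]
generate_reply(model, torch.tensor([[3, 7, 11]]))
unchanged = all(torch.equal(a, b) for a, b in zip(before, model.parameters()))
print(unchanged)  # True: the "conversation" left the model exactly as it was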
 
  • #77
PeterDonis said:
No, it's not. The people who built it told us how they did it.
They didn't do it, though; backpropagation did it.
 
  • #78
PeterDonis said:
Sure, but that doesn't give it any new input. Its internal model is never calibrated against any further data.

It's not "creating new data" because whatever output it generates, and whatever response that output causes in the external world, is never used to update the internal model.

There is a slight exception, because the model can be continually fine-tuned based on user feedback, and it can be trained incrementally, picking up where it left off.

Also people are now interacting and learning from it. A future version will be informed by people who learned from its previous version through the next generation of training data.

People obtain information from their environment, but most of the information they obtain is not really fundamentally new information. For the most part, we do our thinking using pre-existing information.
 
  • #79
Jarvis323 said:
They didn't do it though, back propagation did it.
You're quibbling. They gave it a fixed set of input data and executed a training algorithm that generated an internal model. The fact that the resulting internal model was not known in advance to the programmers, nor understood by them (or anyone else) once it was generated, is irrelevant to the points I have been making.

Jarvis323 said:
A future version will be informed by people who learned from its previous version through the next generation of training data.
That's true, but then the next version will still have the same limitations as the current one: its data set is fixed and its internal model won't get updated once it's generated from that fixed data set.
 
  • #80
PeterDonis said:
its internal model won't get updated once it's generated from that fixed data set.

Unless you operate it in the mode where it learns from user feedback, or unless someone decides to train it some more with some new data.
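
As a rough sketch of what "train it some more" means in practice (toy PyTorch code with an invented model, a hypothetical checkpoint file, and random stand-in data, not OpenAI's actual pipeline), one resumes from the existing weights and runs additional optimizer steps on the new text; after that the weights are frozen again for deployment.

Code:
# Toy sketch of incremental fine-tuning: resume from existing weights and run
# a few more gradient steps on new data. Model, checkpoint and data are invented.
import torch
import torch.nn as nn

vocab_size = 50
model = nn.Sequential(nn.Embedding(vocab_size, 16), nn.Linear(16, vocab_size))
# model.load_state_dict(torch.load("checkpoint.pt"))  # hypothetical checkpoint

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

new_tokens = torch.randint(0, vocab_size, (1, 32))     # stand-in for new text
inputs, targets = new_tokens[:, :-1], new_tokens[:, 1:]

for _ in range(3):                                     # a few extra training steps
    logits = model(inputs)                             # [1, seq-1, vocab]
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                   # weights are now updated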
 
  • #81
ChatGPT is the new Google

In the first phase I played with ChatGPT to see what it can and can't do. Now I use it as a practical tool: for example, I have used it to correct my English grammar and to solve some problems with Windows. It's comparable to Google in the sense that it is an online tool with a very wide spectrum of possible uses.

At first I was in love with ChatGPT. Now it's one of my best friends, very much like Google. I bookmarked ChatGPT next to Google.
 
  • #82
I asked it a few questions about cricket, and although it did its usual thing, it clearly didn't understand the game - it got terminology slightly muddled, for instance. It started to feel a bit like arguing with someone who thinks they know something and can quote material ad nauseam, but is ultimately just quoting what they don't really understand.

It also occurred to me how much of my knowledge of cricket comes from memories of watching the game, not solely from what has been written about it. I could bore you with information, anecdotes and jokes about the game that are not documented anywhere. Training data must eventually include video of the game being played, which the AI would have to watch, understand and be able to relate to the words written about the game.

The most impressive thing is how well it understands the context of the question.
 
  • #83
PeroK said:
The most impressive thing is how well it understands the context of the question.
How would you define "it understands"? Does it understand, or does it merely seem to understand?

It seems to me that ChatGPT isn't more than a very helpful tool. That it says "I" doesn't indicate it has even a shimmer of self-awareness. It is far away from human intelligence, which is closely intertwined with self-awareness.
 
  • Like
Likes Lord Jestocost
  • #84
Well, the question is whether you can distinguish mere "babbling of words" (be it from a lazy student, as I was in the "humanities" in high school, where I just adopted the teacher's lingo to get the best grades ;-), or from an AI that mimics this babbling after being fed a huge amount of such texts) from "true thought about a subject". I'm not so sure this is possible, as the little experiment by the professor of English studies with her colleague showed: the colleague admitted that he was not able to decide with certainty whether a text was written by a human or by the AI!
 
  • Like
Likes mattt, Demystifier, dextercioby and 1 other person
  • #85
I tried some simple high school math questions. It fails. But it can write articles on the foundations of QM just like the experts in the field.:devil:
 
  • Haha
  • Like
Likes aaroman, weirdoguy and vanhees71
  • #86
martinbn said:
I tried some simple high school math questions. It fails. But it can write articles on the foundations of QM just like the experts in the field.:devil:
Perhaps that says something about the relative difficulty of those two topics!
 
  • Haha
Likes weirdoguy and vanhees71
  • #87
martinbn said:
But it can write articles on the foundations of QM just like the experts in the field.:devil:
But is this surprising? Aren't those articles the result of a "clever" algorithm created by clever people?

I'm not surprised that I have no chance against a chess engine.
 
  • #88
PeroK said:
Perhaps that says something about the relative difficulty of those two topics!
martinbn said:
I tried some simple high school math questions. It fails. But it can write articles on the foundations of QM just like the experts in the field.:devil:
The foundations of QM are to a large part philosophy, and thus texts about them can be produced by an AI that just imitates the usual babbling of that community. Answering a concrete physics or math question is more difficult, since you expect a concrete, unique answer and no babbling. SCNR.
 
  • Haha
  • Like
Likes aaroman, timmdeeg and PeroK
  • #89
martinbn said:
I tried some simple high school math questions. It fails. But it can write articles on the foundations of QM just like the experts in the field.:devil:
Mathematica, on the other hand, can answer high school math questions but can't write articles on quantum foundations. These are simply different tools; it doesn't follow that one task is easy and the other hard.
 
  • Like
Likes gentzen, dextercioby and PeroK
  • #90
vanhees71 said:
The foundations of QM are to a large part philosophy, and thus texts about them can be produced by an AI that just imitates the usual babbling of that community. Answering a concrete physics or math question is more difficult, since you expect a concrete, unique answer and no babbling.
True. But let me just point out that what you said here (and what I am saying now) is "babbling", not a unique answer. Both "babbling" and unique answers have value for humans, and both require intelligence on the human side. People who are good at unique answers but very bad at "babbling" are probably on the autism spectrum.
 
