Asking ChatGPT hard physics questions

In summary, ChatGPT is an artificial intelligence system that can be asked difficult questions about quantum mechanics.
  • #71
Jarvis323 said:
Text can encode any kind of finite discrete information, including concepts.
No, "text" alone can't do that. Text only encodes things that aren't text to humans, or other entities who can understand the semantic meanings involved. ChatGPT can't do that; we know that because we know how ChatGPT works: as I said, it works solely by looking at connections between words, and it has no information at all about connections between words and things that aren't words.

Please note that I am not saying that no computer program at all could have information about semantic meanings; just that any such program would have to actually have semantic connections to the world the way humans do. It would have to have actual sense organs that were affected by the world, and actual motor organs that could do things in the world, and the program would have to be connected to those sense organs and motor organs in a way similar to the way human sense organs and motor organs are connected.

ChatGPT has none of those things. It is just a program that looks at a fixed set of text data and encodes patterns in the connections between words.
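To make the point concrete, here is a toy Python sketch of what "looking at connections between words" means. A bigram counter stands in for GPT's vastly more sophisticated transformer, and the corpus is invented for illustration; the point is that nothing in it ever touches anything but text:

import random
from collections import defaultdict, Counter

# Toy corpus: the only "world" this model ever sees is text.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count word-to-word transitions: the "connections between words".
transitions = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    transitions[a][b] += 1

def generate(start, length=8):
    """Sample a continuation using only the learned word transitions."""
    out = [start]
    for _ in range(length):
        followers = transitions[out[-1]]
        if not followers:
            break
        words, counts = zip(*followers.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"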
 
  • Like
Likes InkTide and Demystifier
  • #72
Hornbein said:
As to games like chess and go, applying heuristics and doing a certain amount of forward search of the consequences is how both human and AI champions do what they do. I say that either they are both intelligent or both not.
Even if we accept this as true for purposes of discussion (I'm not sure it's entirely true, but that's a matter of opinion that goes well off topic for this thread), note that ChatGPT does not do these things. A chess or go computer program makes moves that affect the world, and it sees the changes in the world that result; it also sees its opponent making moves and sees the changes in the world that result. That is true even for a program like Alpha Zero that learned solely by playing itself.

ChatGPT does not do those things. It encodes a fixed set of training data, and its responses do not take into account any changes in the world, including changes that are caused by those responses (such as humans reading them and thinking up new questions to ask it).
 
  • #73
PeterDonis said:
we know that because we know how ChatGPT works: as I said, it works solely by looking at connections between words, and it has no information at all about connections between words and things that aren't words.

The idea that it uses connections and patterns between words is only an assumption. We assume that because we assume that is what would be required to accomplish what it does. "Connections between words" is, in and of itself, something quite general.

Regarding what it actually encodes in its latent space, nobody understands that precisely in intuitive terms. We can just say it is a very complex model with lots of parameters that emerges when you train it in some way. The way it actually works (to achieve what it does) may be very difficult for us to understand intuitively.
 
  • #74
Jarvis323 said:
The idea that it uses connections and patterns between words is only an assumption.
No, it's not. The people who built it told us how they did it.

Jarvis323 said:
Regarding what it actually encodes in its latent space, nobody understands that.
We don't have to understand that to know what its inputs are. And its inputs are a fixed set of training data text. It gets no inputs that aren't text. Humans get enormous amounts of inputs that aren't text. And all those non-text inputs are where our understanding of the meanings of words comes from.

ChatGPT reminds me of a common criticism of the Turing Test: the fact that, as Turing originally formulated the test, it involves a human typing text to the computer and the computer responding with more text. Taken literally, this is a severe restriction on the kinds of questions that can be asked. For example, you could not ask the computer, "What color is the shirt I'm wearing?" because you can't assume that the computer has any sensory inputs that would allow it to answer correctly. But of course in the real world we assess people's intelligence all the time by seeing how aware they are of the world around them and how well they respond to changes in the world that affect them. None of this is even possible with ChatGPT.
 
  • #75
PeterDonis said:
ChatGPT does not do those things. It encodes a fixed set of training data,
This isn't accurate. It learns a model for generating new data in the same distribution as the training data. It does have information from the training data in some latent form of memory, but it also has what it has learned, which allows it to use that information to create new data. The latter is mysterious and not so limited in theory.
 
  • #76
Jarvis323 said:
It learns a model for generating new data in the same distribution as the training data.
Sure, but that doesn't give it any new input. Its internal model is never calibrated against any further data.

Jarvis323 said:
it has what it has learned, which allows it to use that information to create new data.
It's not "creating new data" because whatever output it generates, and whatever response that output causes in the external world, is never used to update the internal model.

That is not the case with humans. It's also not the case with other AI that has been mentioned in this thread, such as programs for playing chess or Go. Those programs update their internal models with new data from the world, including their own moves and the changes those cause, and the moves of their opponents and the changes those cause. That's a big difference.
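As a minimal sketch of that difference (a two-armed bandit standing in here for chess/Go self-play; all numbers are invented for illustration), here is an agent whose internal model is updated by the outcomes of its own actions:

import random

# The agent's internal model: estimated win rates for each move.
values = {"move_a": 0.0, "move_b": 0.0}
counts = {"move_a": 0, "move_b": 0}

def world(move):
    """The environment's response; its true win rates are hidden from the agent."""
    return random.random() < (0.7 if move == "move_a" else 0.3)

for _ in range(1000):
    # Mostly exploit the current model, occasionally explore.
    if random.random() > 0.1:
        move = max(values, key=values.get)
    else:
        move = random.choice(list(values))
    won = world(move)
    counts[move] += 1
    # The outcome of the agent's own move flows back into its model.
    values[move] += (won - values[move]) / counts[move]

print(values)  # estimates converge toward the true win rates (0.7, 0.3)

ChatGPT's weights, by contrast, receive no such feedback when it is used.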
 
  • #77
PeterDonis said:
No, it's not. The people who built it told us how they did it.
They didn't do it, though; backpropagation did it.
 
  • #78
PeterDonis said:
Sure, but that doesn't give it any new input. Its internal model is never calibrated against any further data.

It's not "creating new data" because whatever output it generates, and whatever response that output causes in the external world, is never used to update the internal model.

There is a slight exception, because the model can be continually fine-tuned based on user feedback, and it can be trained incrementally, picking up where it left off.

Also, people are now interacting with it and learning from it. A future version will be informed by people who learned from its previous version through the next generation of training data.

People obtain information from their environment, but most of the information they obtain is not really fundamentally new information. For the most part, we do our thinking using pre-existing information.
 
  • #79
Jarvis323 said:
They didn't do it, though; backpropagation did it.
You're quibbling. They gave it a fixed set of input data and executed a training algorithm that generated an internal model. The fact that the resulting internal model was not known in advance to the programmers, nor understood by them (or anyone else) once it was generated, is irrelevant to the points I have been making.
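To illustrate that division of labor with a minimal sketch (linear regression standing in for GPT's enormously larger training run; all numbers are invented): the programmers fix the data and write the training procedure, the procedure produces parameters nobody wrote by hand, and once training stops the parameters are frozen:

import numpy as np

rng = np.random.default_rng(0)

# A fixed training set, assembled once and never extended.
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=100)

# The training algorithm: gradient descent on squared error.
# The programmers wrote this loop, not the final weights.
w = np.zeros(3)
learning_rate = 0.05
for _ in range(500):
    gradient = 2 * X.T @ (X @ w - y) / len(y)
    w -= learning_rate * gradient

# After training, using the model never changes w.
print(w)  # close to true_w, though no one chose these values in advance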

Jarvis323 said:
A future version will be informed by people who learned from its previous version through the next generation of training data.
That's true, but then the next version will still have the same limitations as the current one: its data set is fixed and its internal model won't get updated once it's generated from that fixed data set.
 
  • #80
PeterDonis said:
its internal model won't get updated once it's generated from that fixed data set.

Unless you operate it in the mode where it learns from user feedback, or unless someone decides to train it further on new data.
 
  • #81
ChatGPT is the new Google

In the first phase I played with ChatGPT, to see what it can and can't do. Now I use it as a practical tool. For example, I used it to correct my English grammar and to solve some problems with Windows. It's comparable to Google, in the sense that it is an online tool with a very wide spectrum of possible uses.

At first I was in love with ChatGPT. Now it's one of my best friends, very much like Google. I bookmarked ChatGPT next to Google.
 
  • Like
Likes mattt
  • #82
I asked it a few questions about cricket and, although it did its usual thing, it clearly didn't understand the game: it got terminology slightly muddled. It started to feel a bit like arguing with someone who thought they knew something and could quote material ad nauseam, but was ultimately just quoting what they didn't really understand.

It also occurred to me how much my knowledge of cricket comes from memories of watching the game and not solely of what has been written about it. I could bore you with information, anecdotes and jokes about the game that are not documented anywhere. Training data must eventually include video of the game being played, which the AI would have to watch, understand and be able to relate to the words written about the game.

The most impressive thing is how well it understands the context of the question.
 
  • Like
Likes SammyS
  • #83
PeroK said:
The most impressive thing is how well it understands the context of the question.
How would you define "it understands"? Does it understand, or does it merely seem to understand?

It seems to me that ChatGPT isn't more than a very helpful tool. If it says "I", that doesn't indicate that it has a shimmer of self-awareness. It is far from human intelligence, which is closely intertwined with self-awareness.
 
  • Like
Likes Lord Jestocost
  • #84
Well, the question is whether you can distinguish mere "babbling of words" (be it from a lazy student, as I was concerning the "humanities" in high school, where I just adopted the teacher's lingo to get the best grades ;-), or from an AI that mimics such babbling after being fed a huge amount of such texts) from "true thought about a subject". I'm not sure that this is possible, as the little experiment by the professor of English with her colleague showed: the colleague admitted that he was not able to decide with certainty whether a text was written by a human or by the AI!
 
  • Like
Likes mattt, Demystifier, dextercioby and 1 other person
  • #85
I tried some simple high school math questions. It fails. But it can write articles on the foundations of QM just like the experts in the field.:devil:
 
  • Haha
  • Like
Likes aaroman, weirdoguy and vanhees71
  • #86
martinbn said:
I tried some simple high school math questions. It fails. But it can write articles on the foundations of QM just like the experts in the field.:devil:
Perhaps that says something about the relative difficulty of those two topics!
 
  • Haha
Likes weirdoguy and vanhees71
  • #87
martinbn said:
But it can write articles on the foundations of QM just like the experts in the field.:devil:
But is this surprising? Aren't those articles the result of a "clever" algorithm created by a clever person?

I'm not surprised that I have no chance against a chess engine.
 
  • Like
Likes vanhees71
  • #88
PeroK said:
Perhaps that says something about the relative difficulty of those two topics!
martinbn said:
I tried some simple high school math questions. It fails. But it can write articles on the foundations of QM just like the experts in the field.:devil:
The foundations of QM are in large part philosophy, and thus texts about them can be produced by an AI just imitating the usual babbling of this community. Answering a concrete physics or math question is more difficult, since you expect a concrete, unique answer and no babbling. SCNR.
 
  • Haha
  • Like
Likes aaroman, timmdeeg and PeroK
  • #89
martinbn said:
I tried some simple high school math questions. It fails. But it can write articles on the foundations of QM just like the experts in the field.:devil:
Mathematica, on the other hand, can answer high school math questions, but can't write articles on quantum foundations. Those are simply different tools; it doesn't follow that one thing is easy and the other hard.
 
  • Like
Likes gentzen, dextercioby and PeroK
  • #90
vanhees71 said:
The foundations of QM are in large part philosophy, and thus texts about them can be produced by an AI just imitating the usual babbling of this community. Answering a concrete physics or math question is more difficult, since you expect a concrete, unique answer and no babbling.
True. But let me just point out that what you just said here (and what I am saying now) is "babbling", not a unique answer. Both "babbling" and unique answers have value for humans, and both require intelligence on the human side. People who are good at unique answers but very bad at "babbling" are probably autists.
 
  • Like
Likes gentzen
  • #91
PeroK said:
Perhaps that says something about the relative difficulty of those two topics!
See my #89.
 
  • #92
Demystifier said:
Mathematica, on the other hand, can answer high school math questions, but can't write articles on quantum foundations. Those are simply different tools; it doesn't follow that one thing is easy and the other hard.
No, Mathematica cannot answer high school questions.
 
  • #93
martinbn said:
No, Mathematica cannot answer high school questions.
It can't if the questions are written in normal English. But it can if they are written in formal Mathematica language. For example, instead of asking "What is the solution of ...", in Mathematica you write something like "Solve[...]".
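For readers without Mathematica, the same point can be sketched in Python, with SymPy standing in for Mathematica (the equation is just an example): the tool answers only when the question is posed in its formal language, not in English:

from sympy import symbols, solve

x = symbols('x')

# "What is the solution of x^5 = 1?" must be phrased formally:
print(solve(x**5 - 1, x))  # the five fifth roots of unity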
 
  • Like
Likes mattt, aaroman, dextercioby and 1 other person
  • #94
Demystifier said:
It can't if the questions are written in normal English. But it can if they are written in formal Mathematica language. For example, instead of asking "What is the solution of ...", in Mathematica you write something like "Solve[...]".
What if the problem is "the room is 5 by 8, the price of carpet is 3 per m^2; what is the cost?" Or some geometry problem which requires proving something for the solution.
 
  • Sad
Likes PeroK
  • #95
martinbn said:
What if the problem is "the room is 5 by 8, the price of carpet is 3 per m^2; what is the cost?" Or some geometry problem which requires proving something for the solution.
I believe we are not far from a tool which will combine ideas from systems such as ChatGPT and Mathematica to solve such problems.
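A minimal sketch of what such a combination might look like (everything here is hypothetical: llm_translate is a placeholder for a language-model call, not a real API, and the carpet problem is the one quoted above):

from sympy import sympify

def llm_translate(problem: str) -> str:
    """Placeholder for a hypothetical language-model call that turns
    an English word problem into a formal expression."""
    # Imagine a ChatGPT-like model returning this for the carpet problem:
    return "5 * 8 * 3"

problem = "The room is 5 by 8, the price of carpet is 3 per m^2; what is the cost?"
print(sympify(llm_translate(problem)))  # 120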
 
  • Like
Likes PeroK, dextercioby and vanhees71
  • #96
There are AIs that perform approximately at the average (amongst a select group of high achievers!) on high school math olympiad level questions. They do particularly well where formal proofs are required (they are less good with geometry).

ChatGPT right now produces essays roughly at the first-year undergraduate level (worse in some respects, better in others) and can generalize concepts in essays at roughly the same level (although it is confidently incorrect a good percentage of the time, where perhaps a human might be more cautious).

The way this stuff is scaling, I don't think it's very long (<10 years) before they are solving properly formulated physics/math subproblems (the type we give students for research projects).

I suspect it will be somewhat field dependent for a while. One thing I have come to expect with this revolution is that it struggles in places where naively I would have thought it would be strong, and does far better where I would have expected it to be weak.
 
  • Like
Likes dextercioby and vanhees71
  • #97
Me:"Write a python script that characterizes a quantum particle in a well using the consistent histories formalism"

ChatGPT:"Sure! Here is a python script that characterizes a quantum particle in a well using the consistent histories formalism

import numpy as np

# Define the Hamiltonian for the quantum particle in a well
def Hamiltonian(x, p, mass, well_width):
    H = (p**2) / (2 * mass) + (well_width**2) * (x**2) / 2
    return H

# Define the decoherence functional for the quantum particle in a well
def decoherence_functional(x1, x2, t1, t2, mass, well_width):
    delta = x1 - x2
    prefactor = (mass / (2 * np.pi * (t1 - t2)))**0.5
    exponent = - mass * delta**2 / (2 * (t1 - t2))
    D = prefactor * np.exp(exponent)
    return D

# Define the function that computes the probabilities for the quantum particle in a well
def probabilities(x1_values, x2_values, t1, t2, mass, well_width):
    prob = []
    for x1 in x1_values:
        for x2 in x2_values:
            D = decoherence_functional(x1, x2, t1, t2, mass, well_width)
            prob.append(D)
    return prob

# Define the initial and final times for the quantum particle
t1 = 0
t2 = 1

# Define the mass and well width for the quantum particle
mass = 1
well_width = 1

# Define the values of x1 and x2 to compute the probabilities for
x1_values = np.linspace(-5, 5, 100)
x2_values = np.linspace(-5, 5, 100)

# Compute the probabilities for the quantum particle in a well
probs = probabilities(x1_values, x2_values, t1, t2, mass, well_width)

This script defines a function for the Hamiltonian of a quantum particle in a well, a function for the decoherence functional of the particle, and a function for computing the probabilities of the particle being in different positions at different times using the consistent histories formalism. It then sets the initial and final times for the particle, the mass and well width, and the values of x1 and x2 to compute the probabilities for, and calls the probability function to get the probabilities."

I haven't reviewed the script yet, but it's interesting that it at least gets the gist.
 
  • Like
  • Wow
Likes mattt, gentzen, PeroK and 2 others
  • #98
Me: Solve ##x^5=1##
ChatGPT:
If ##x^5 = 1##, then ##x## is one of the fifth roots of ##1##. The fifth roots of ##1## are ##1, -1, i, -i##, and ##-i^2##. Note that ##i^2 = -1##. Therefore, the solutions to ##x^5 = 1## are:
##1, -1, i, -i##, and ##-i^2##
So, ##x## can be any of these values.

Me: Solve ##x^5=x##
ChatGPT:
To solve ##x^5 = x##, we can use the fact that ##x^5 - x = 0##. We can then factor this equation as follows:

##(x^5 - x) = 0##

##(x^4 + x^3 + x^2 + x + 1)(x - 1) = 0##

Therefore, the solutions to ##x^5 = x## are ##x = 1## and ##x = -(x^4 + x^3 + x^2 + x + 1)##. The first solution, ##x = 1##, is a real solution. The second solution, ##x = -(x^4 + x^3 + x^2 + x + 1)##, is not a real solution.
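For comparison, the textbook answers (standard facts, not ChatGPT output) are:

##x^5 = 1 \;\Rightarrow\; x = e^{2\pi i k/5}, \quad k = 0, 1, 2, 3, 4##

##x^5 = x \;\Rightarrow\; x(x - 1)(x + 1)(x^2 + 1) = 0 \;\Rightarrow\; x \in \{0,\, 1,\, -1,\, i,\, -i\}##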
 
  • Haha
  • Skeptical
Likes mattt, aaroman and vanhees71
  • #99
So cheating with the bot on a first-semester math exam might not be the best idea ;-).
 
  • Like
  • Haha
Likes PeroK and Demystifier
  • #100
martinbn said:
Me: Solve ##x^5=1##
Me: Explain Galois theory such that an average physicist can understand it.

I will not copy paste the answer, but let me just say that now I understand Galois theory better than before.
 
  • Like
  • Wow
Likes mattt, PeroK, physika and 1 other person
  • #101
Are you sure that the bot explained it right to you? Given the answer to ##x^5=1##, I'd be very skeptical.
 
  • Like
Likes aaroman
  • #103
vanhees71 said:
Are you sure that the bot explained it right to you? Given the answer to ##x^5=1##, I'd be very skeptical.
I don't understand your point. Maybe I should ask ChatGPT to explain to me what you meant?
 
  • Haha
Likes PeroK and vanhees71
  • #104
Demystifier said:
Me: Explain Galois theory such that an average physicist can understand it.

I will not copy paste the answer, but let me just say that now I understand Galois theory better than before.
It obviously gave the wrong answer. The correct answer is that the average physicist cannot understand Galois theory. :devil:
 
  • Haha
  • Sad
  • Like
Likes aaroman, Demystifier, SammyS and 2 others
  • #105
Demystifier said:
I will not copy paste the answer...
Out of curiosity, is this the answer?

Galois theory is a branch of mathematics that deals with the study of field extensions. Specifically, it studies the conditions under which a field can be obtained as an extension of another field, and the properties of these extensions.
One of the main results of Galois theory is the Galois correspondence, which gives a one-to-one correspondence between subgroups of the Galois group of a field extension and intermediate fields of the extension. This result allows for a deep understanding of the symmetries inherent in field extensions, and has numerous applications in mathematics and physics.
Galois theory has found applications in a wide range of fields, including number theory, algebraic geometry, and the study of elliptic curves. It also has applications in physics, particularly in the study of quantum mechanics and the theory of solitons.
Overall, Galois theory is a fundamental tool for understanding the structure of field extensions and their symmetries, and has had a profound impact on a wide range of mathematical and physical disciplines.
 
