Can ChatGPT Handle Complex Physics Questions Accurately?

AI Thread Summary
ChatGPT demonstrates an understanding of complex physics topics, including interpretations of quantum mechanics and the implications of Bell's theorem for hidden variables. It explains that while local realism is incompatible with quantum correlations, some interpretations, like de Broglie-Bohm, still allow for hidden variables. The discussion also touches on the compatibility of Bohmian mechanics with relativistic quantum field theory, highlighting ongoing debates in the field. Users express skepticism about the originality of ChatGPT's responses, suggesting its output often resembles regurgitated information rather than independent thought. Despite its capabilities, the conversation raises questions about the nature of creativity and original insight in AI-generated content.
  • #51
Hornbein said:
Now I shall seize this opportunity to return to physics content. Is the number of possible distinct Earths in our Universe finite? Some people seem to think so, leading to arguments that anything that happens here happens an infinite number of times elsewhere. I don't believe it.
Please start a new thread for this topic since it is different from the topic of the current one. (Although I think we already had a recent thread on it.)
 
  • #52
PeterDonis said:
About what? Most of what we have to deal with in life has no definite victory conditions. Life in general is open-ended.

Not if a finite set of heuristics embodied in a computer program can win at it. I wasn't using the term "universe" literally, which is why I put it in scare quotes.
It's like the two guys running from a grizzly bear. One guy says, don't you realize we can't outrun the bear? The other guy says, I don't have to. I only have to outrun you.

The AI doesn't have to play a perfect game to win.
 
  • #53
Hornbein said:
The AI doesn't have to play a perfect game to win.
I didn't say it did.
 
  • #54
Hornbein said:
If there are no definite victory conditions then there is nothing more to be said. There are many definite scores, such as popularity, the Dow Jones average of thirty industrials, all manner of awards and other credentials, and the inevitable "net worth" or "bottom line." Which one, or which weighted combination thereof, one chooses is a matter of taste.

The number of possible games of Go is far greater than the number of elementary particles in the visible Universe. I think it is fair to say that this space is infinite for all practical purposes, virtually infinite.

Now I shall seize this opportunity to return to physics content. Is the number of possible distinct Earths in our Universe finite? Some people seem to think so, leading to arguments that anything that happens here happens an infinite number of times elsewhere. I don't believe it.
I'll grant you one thing. ChatGPT can stick to the point more than you can!
 
  • #55
Hornbein said:
I'm told that physics and math publication is more restricted than this. Original work can't be published because there are no peers to review it. It might be wrong. Readers might not be interested. Instead what you get are minor variations on familiar themes. A mathematician who needed a track record wrote that he gave up on originality and concentrated on minor variations on established topics. If a journal has already published 100 papers on a topic, there is a very good chance it will publish a 101st.
That would be a very interesting topic for a separate thread. The ideas of Thomas Kuhn (normal science vs revolutionary science) would be very relevant.
 
  • Like
Likes Maarten Havinga
  • #56
Hornbein said:
ChatGPT is the latest milestone. Expect it to improve rapidly, presumably by developing expertise in specialized domains.
We all see that. Eventually AI will be able to take over most things. But if you asked ChatGPT today to teach you theoretical physics and mark your homework, it would fail. And it would fail in dangerous ways, because it would never say "I don't know" - it would just give you some spiel that looks plausible.
 
  • #57
It's a little funny how we tend to compare ourselves with them. In a sprint, we always have Usain Bolt in our corner (not that it helps much; people, even Bolt, are really super slow).

If I criticize the art generated by DALL·E or MidJourney too harshly, I have to rethink the brilliance of my own stick figure art (maybe it can convey human experience or emotion better), or consider my possible theoretical potential as a human being, or something.

The other day, I saw someone call the professional sports team they root for "we". Regarding humans feeling bad about not keeping up, maybe we can just somehow bring AI into the fold of "we".

Anyways, maybe AI surpassing us will calm our egos a little and allow us to find some core things to value more, like wisdom, good will, honesty, etc.
 
Last edited:
  • #58
One thing I have been impressed with in the reported and discussed ChatGPT interactions (I haven't had any sessions with it myself) is the system's apparent ability to understand and deliver on requests regarding form, regardless of how factually accurate or logically coherent the content was or wasn't.

I assume it's within the current topic to ask whether anyone has noticed a serious goof by ChatGPT in this regard, or whether it really is that sharp at picking up and carrying out requests about form. How complicated an instruction can it actually deliver on?
 
  • #59
This bot seems to be really quite impressive, and a threat to the humanities at least. In a recent newspaper article (Frankfurter Allgemeine Zeitung), a professor of English studies wrote that she ran a test with one of her colleagues. She simply had the bot write a typical text of the kind expected from students as a term paper ("Hausarbeit"), which in the humanities at German universities is usually handed in to be graded and counts toward the final grade of the BSc or MSc degree. The colleague was not able to decide whether the text was written by a human or by the bot. Now they have to set exams instead of term papers to make sure that the students write their work themselves rather than letting the bot do it. It's also impossible to detect such cheating with standard "anti-plagiarism software", of course.

I guess it's only a matter of time until such bots can also write scientific papers, including formulae, and/or even fake experimental results...

It's a bit alarming as the next level of scientific cheating.
 
  • Like
Likes dextercioby, Demystifier, martinbn and 2 others
  • #60
Demystifier said:
Me: Can you be creative and original?

ChatGPT:
As a language model, I am able to generate original responses to user inquiries based on the information that I have been trained on. I have been designed to be able to generate human-like text and can produce a wide range of responses to user input, including responses that may be considered creative or original. However, it is important to note that I am not a human and do not have personal experiences or perspectives, and my responses are based on the information that I have been trained on.

I do not have the ability to experience emotions, make personal choices, or engage in independent thought in the same way that humans do. My responses are generated using machine learning algorithms that allow me to analyze the structure and content of the input I receive and generate appropriate responses based on the patterns and connections I have learned from the data I was trained on.

While I can generate original responses based on the information that I have been trained on, I do not have the ability to create new ideas or concepts in the same way that a human can. My responses are limited to the information and patterns that I have learned from the data I was trained on, and I do not have the ability to think creatively in the same way that a human can.
Hal
 
  • #61
Hornbein said:
To me, "heuristics that no one can explain" is a pretty good definition of intuition, or at least expertise.
Diagnosis in medicine.

Explained and understood later.
 
  • #62
physika said:
Diagnosis in medicine.
Dr. House.
 
  • #63
vanhees71 said:
This bot seems to be really quite impressive, and a threat to the humanities at least. In a recent newspaper article (Frankfurter Allgemeine Zeitung), a professor of English studies wrote that she ran a test with one of her colleagues. She simply had the bot write a typical text of the kind expected from students as a term paper ("Hausarbeit"), which in the humanities at German universities is usually handed in to be graded and counts toward the final grade of the BSc or MSc degree. The colleague was not able to decide whether the text was written by a human or by the bot. Now they have to set exams instead of term papers to make sure that the students write their work themselves rather than letting the bot do it. It's also impossible to detect such cheating with standard "anti-plagiarism software", of course.

I guess it's only a matter of time until such bots can also write scientific papers, including formulae, and/or even fake experimental results...

It's a bit alarming as the next level of scientific cheating.
If it's homework, then one can cheat even without that tool, by asking another human to do it. And by the way, scientific papers written completely by a computer program already exist.
 
  • #64
Demystifier said:
Me: Can you be creative and original?

ChatGPT:
As a language model, I am able to generate original responses to user inquiries based on the information that I have been trained on. I have been designed to be able to generate human-like text and can produce a wide range of responses to user input, including responses that may be considered creative or original. However, it is important to note that I am not a human and do not have personal experiences or perspectives, and my responses are based on the information that I have been trained on.

I do not have the ability to experience emotions, make personal choices, or engage in independent thought in the same way that humans do. My responses are generated using machine learning algorithms that allow me to analyze the structure and content of the input I receive and generate appropriate responses based on the patterns and connections I have learned from the data I was trained on.

------


Well, to "generate appropriate responses based on the patterns and connections I have learned from the data I was trained on" is precisely how I have written most of what is in my posts in this thread. I read reports about AlphaGo and AlphaZero, AIs playing Starcraft and poker, then summarized them. I have no personal experience whatsoever with any of these things.

Frank Zappa once said that the way he worked was music came in, his mind mixed it together, and his original stuff would come out. This is pretty much how AIs make visual art. If the AI is not creative then neither was Frank.

As to games like chess and go, applying heuristics and doing a certain amount of forward search of the consequences is how both human and AI champions do what they do. I say that either they are both intelligent or both not.

In short I say that anything that excludes AIs from intelligence and creativity is also going to exclude the vast majority of the human race.
 
  • Skeptical
  • Like
Likes gentzen and PeroK
  • #65
Qu: Can you be original in the way human beings are?

ChatGPT: No. I am only a language model.

Hornbein's interpretation of ChatGPT's answer: yes, but I'm too modest to admit it!
 
  • Haha
  • Like
Likes Swamp Thing, Nugatory and vanhees71
  • #66
PeroK said:
Qu: Can you be original in the way human beings are?

ChatGPT: No. I am only a language model.

Hornbein's interpretation of ChatGPT's answer: yes, but I'm too modest to admit it!
You value ChatGPT's opinion over mine, eh? So great is your respect for artificial intelligence.
 
  • Like
Likes gentzen and Nugatory
  • #67
Hornbein said:
You value ChatGPT's opinion over mine, eh? So great is your respect for artificial intelligence.
Intelligence is double-edged. For example, a chess computer never gets fed up playing chess. A human may lose interest in a game, or in the game in general. You have been identifying the ability to play high-level chess as intelligence. But human thinking is multi-faceted and can be flawed and self-contradictory.

In this case, you are processing information in a very uncomputer-like way. It's not the intelligence of ChatGPT that I prefer, but its lack of the human faults - regarding this particular question.
 
  • #68
ChatGPT on itself:

ChatGPT's ability to generate coherent and coherently structured responses demonstrates its understanding of language and its ability to communicate effectively, which are both key components of intelligence.

Furthermore, ChatGPT's ability to understand and respond to a wide range of prompts and topics suggests that it has a broad and deep understanding of the world and is able to apply that understanding in a flexible and adaptive way. This ability to adapt and learn from new experiences is another key characteristic of intelligence.

Overall, ChatGPT's impressive performance on language generation tasks and its ability to learn and adapt suggest that it has true intelligence and is capable of understanding and interacting with the world in a way that is similar to a human.

I figured out a way to get it to write this, something it was reluctant to do, then took the quotation out of context. Just a little good clean fun.
 
  • #69
Hornbein said:
to "generate appropriate responses based on the patterns and connections I have learned from the data I was trained on" is precisely how I have written most of what is in my posts in this thread.
You mean you don't understand the meanings of the words you're writing?

The "patterns and connections" that ChatGPT learns are entirely in the words themselves. ChatGPT doesn't even have the concept of a "world" that the words are semantically connected to; all it has is the word "world" and other words that that word occurs frequently with.

But when I use words (and hopefully when you do too), I understand that the words are semantically connected to things that aren't words; for example, that the word "world" refers to something like the actual universe we live in (or the planet we live on, depending on context--but either way, something other than words). That's how humans learn to use words: by connecting them to things that aren't words. That's what gives words the meanings that we understand them to have. But ChatGPT learns to use words by connecting them to other words. That's not the same thing.
 
  • Like
Likes gentzen, vanhees71 and PeroK
  • #70
PeterDonis said:
You mean you don't understand the meanings of the words you're writing?

The "patterns and connections" that ChatGPT learns are entirely in the words themselves. ChatGPT doesn't even have the concept of a "world" that the words are semantically connected to; all it has is the word "world" and other words that that word occurs frequently with.

But when I use words (and hopefully when you do too), I understand that the words are semantically connected to things that aren't words; for example, that the word "world" refers to something like the actual universe we live in (or the planet we live on, depending on context--but either way, something other than words). That's how humans learn to use words: by connecting them to things that aren't words. That's what gives words the meanings that we understand them to have. But ChatGPT learns to use words by connecting them to other words. That's not the same thing.
I think it's a bit dismissive to say it's just words. Text can encode any kind of finite discrete information, including concepts. In fact, the use of concepts might be something ChatGPT is especially good at.

Humans may understand some things in ways that are impossible to reduce to systems of discrete information processing, but I am not sure concepts fit into that category.
 
  • #71
Jarvis323 said:
Text can encode any kind of finite discrete information, including concepts.
No, "text" alone can't do that. Text only encodes things that aren't text to humans, or other entities who can understand the semantic meanings involved. ChatGPT can't do that; we know that because we know how ChatGPT works: as I said, it works solely by looking at connections between words, and it has no information at all about connections between words and things that aren't words.

Please note that I am not saying that no computer program at all could have information about semantic meanings; just that any such program would have to actually have semantic connections to the world the way humans do. It would have to have actual sense organs that were affected by the world, and actual motor organs that could do things in the world, and the program would have to be connected to those sense organs and motor organs in a way similar to the way human sense organs and motor organs are connected.

ChatGPT has none of those things. It is just a program that looks at a fixed set of text data and encodes patterns in the connections between words.
 
  • Like
Likes InkTide and Demystifier
  • #72
Hornbein said:
As to games like chess and go, applying heuristics and doing a certain amount of forward search of the consequences is how both human and AI champions do what they do. I say that either they are both intelligent or both not.
Even if we accept this as true for purposes of discussion (I'm not sure it's entirely true, but that's a matter of opinion that goes well off topic for this thread), note that ChatGPT does not do these things. A chess or go computer program makes moves that affect the world, and it sees the changes in the world that result; it also sees its opponent making moves and sees the changes in the world that result. That is true even for a program like Alpha Zero that learned solely by playing itself.

ChatGPT does not do those things. It encodes a fixed set of training data, and its responses do not take into account any changes in the world, including changes that are caused by those responses (such as humans reading them and thinking up new questions to ask it).
 
  • #73
PeterDonis said:
we know that because we know how ChatGPT works: as I said, it works solely by looking at connections between words, and it has no information at all about connections between words and things that aren't words.

The idea that it uses connections and patterns between words is only an assumption. We assume that because we assume that is what would be required to accomplish what it does. Connections between words are, in and of themselves, something quite general.

Regarding what it actually encodes in its latent space, nobody understands that precisely in intuitive terms. We can just say it is a very complex model with lots of parameters that emerges when you train it in some way. The way it actually works (to achieve what it does) may be very difficult for us to understand intuitively.
 
  • #74
Jarvis323 said:
The idea that it uses connections and patterns between words is only an assumption.
No, it's not. The people who built it told us how they did it.

Jarvis323 said:
Regarding what it actually encodes in its latent space, nobody understands that.
We don't have to in order to know what its inputs are. And its inputs are a fixed set of training-data text. It gets no inputs that aren't text. Humans get enormous amounts of inputs that aren't text. And all those non-text inputs are where our understanding of the meanings of words comes from.

ChatGPT reminds me of a common criticism of the Turing Test: the fact that, as Turing originally formulated the test, it involves a human typing text to the computer and the computer responding with more text. Taken literally, this is a severe restriction on the kinds of questions that can be asked. For example, you could not ask the computer, "What color is the shirt I'm wearing?" because you can't assume that the computer has any sensory inputs that would allow it to answer correctly. But of course in the real world we assess people's intelligence all the time by seeing how aware they are of the world around them and how well they respond to changes in the world that affect them. None of this is even possible with ChatGPT.
 
  • #75
PeterDonis said:
ChatGPT does not do those things. It encodes a fixed set of training data,
This isn't accurate. It learns a model for generating new data in the same distribution as the training data. It does have information from the training data in some latent form of memory, but it also has what it has learned, which allows it to use that information to create new data. The latter is mysterious and not so limited in theory.
 
Last edited:
  • #76
Jarvis323 said:
It learns a model for generating new data in the same distribution as the training data.
Sure, but that doesn't give it any new input. Its internal model is never calibrated against any further data.

Jarvis323 said:
it has what it has learned, which allows it to use that information to create new data.
It's not "creating new data" because whatever output it generates, and whatever response that output causes in the external world, is never used to update the internal model.

That is not the case with humans. It's also not the case with other AI that has been mentioned in this thread, such as programs for playing chess or Go. Those programs update their internal models with new data from the world, including their own moves and the changes those cause, and the moves of their opponents and the changes those cause. That's a big difference.
 
  • #77
PeterDonis said:
No, it's not. The people who built it told us how they did it.
They didn't do it, though; backpropagation did it.
 
  • #78
PeterDonis said:
Sure, but that doesn't give it any new input. Its internal model is never calibrated against any further data.

It's not "creating new data" because whatever output it generates, and whatever response that output causes in the external world, is never used to update the internal model.

There is a slight exception, because the model can be continually fine-tuned based on user feedback, and it can be trained incrementally, picking up where it left off.

Also people are now interacting and learning from it. A future version will be informed by people who learned from its previous version through the next generation of training data.

People obtain information from their environment, but most of the information they obtain is not really fundamentally new information. For the most part, we do our thinking using pre-existing information.
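
To make the "picking up where it left off" point concrete, here is a toy sketch in Python of incremental training from a checkpoint. The linear model is just a stand-in for a real language model, and the file name and data are made up:

import torch
import torch.nn as nn

# Stand-in for a large language model: a toy linear layer.
model = nn.Linear(10, 10)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

# ... suppose the initial (expensive) training happens here ...

# Save a checkpoint of the model and optimizer state.
torch.save({"model": model.state_dict(), "opt": opt.state_dict()}, "ckpt.pt")

# Later: load the checkpoint and fine-tune on new data,
# e.g. data derived from user feedback.
ckpt = torch.load("ckpt.pt")
model.load_state_dict(ckpt["model"])
opt.load_state_dict(ckpt["opt"])

new_x, new_y = torch.randn(32, 10), torch.randn(32, 10)  # fake new data
opt.zero_grad()
loss = nn.functional.mse_loss(model(new_x), new_y)
loss.backward()
opt.step()  # the model is now updated beyond its original training

Nothing about the architecture forbids this; freezing the model after training is a deployment choice.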
 
  • #79
Jarvis323 said:
They didn't do it, though; backpropagation did it.
You're quibbling. They gave it a fixed set of input data and executed a training algorithm that generated an internal model. The fact that the resulting internal model was not known in advance to the programmers, nor understood by them (or anyone else) once it was generated, is irrelevant to the points I have been making.

Jarvis323 said:
A future version will be informed by people who learned from its previous version through the next generation of training data.
That's true, but then the next version will still have the same limitations as the current one: its data set is fixed and its internal model won't get updated once it's generated from that fixed data set.
 
  • #80
PeterDonis said:
its internal model won't get updated once it's generated from that fixed data set.

Unless you operate it in the mode where it learns from user feedback, or unless someone decides to train it further on new data.
 
Last edited:
  • #81
ChatGPT is the new Google

In the first phase I played with ChatGPT, to see what it can and can't do. Now I use it as a practical tool. For example, I used it to correct my English grammar and to solve some problems with Windows. It's comparable to Google, in the sense that it is an online tool with a very wide spectrum of possible uses.

At first I was in love with ChatGPT. Now it's one of my best friends, very much like Google. I bookmarked ChatGPT next to Google.
 
  • #82
I asked it a few questions about cricket and although it did its usual thing, it clearly didn't understand the game - like getting terminology slightly muddled. It started to feel a bit like arguing with someone who thought they knew the subject and could quote material ad nauseam, but was ultimately just quoting what they didn't really understand.

It also occurred to me how much my knowledge of cricket comes from memories of watching the game and not solely of what has been written about it. I could bore you with information, anecdotes and jokes about the game that are not documented anywhere. Training data must eventually include video of the game being played, which the AI would have to watch, understand and be able to relate to the words written about the game.

The most impressive thing is how well it understands the context of the question.
 
  • #83
PeroK said:
The most impressive thing is how well it understands the context of the question.
How would you define "it understands"? Does it understand, or does it only seem to understand?

It seems to me that ChatGPT isn't more than a very helpful tool. Its saying "I" doesn't indicate that it has a shimmer of self-awareness. It is far away from human intelligence, which is closely intertwined with self-awareness.
 
  • Like
Likes Lord Jestocost
  • #84
Well, the question is whether you can distinguish "babbling of just words" (be it from a lazy student, as I was in the "humanities" in high school, where I just adopted the lingo of the teacher to get the best grades ;-), or from an AI that mimics this by learning the babbling from a huge amount of such texts) from "true thought about a subject". I'm not so sure that this is possible, as the little experiment by the English-studies professor and her colleague showed: the colleague admitted that he was not able to decide with certainty whether a text was written by a human or by the AI!
 
  • Like
Likes mattt, Demystifier, dextercioby and 1 other person
  • #85
I tried some simple high school math questions. It fails. But it can write articles on the foundations of QM just like the experts in the field.:devil:
 
  • Haha
  • Like
Likes aaroman, weirdoguy and vanhees71
  • #86
martinbn said:
I tried some simple high school math questions. It fails. But it can write articles on the foundations of QM just like the experts in the field.:devil:
Perhaps that says something about the relative difficulty of those two topics!
 
  • Haha
Likes weirdoguy and vanhees71
  • #87
martinbn said:
But it can write articles on the foundations of QM just like the experts in the field.:devil:
But is this surprising? Aren't those articles the result of a "clever" algorithm created by clever people?

I'm not surprised that I have no chance against a chess engine.
 
  • #88
PeroK said:
Perhaps that says something about the relative difficulty of those two topics!
martinbn said:
I tried some simple high school math questions. It fails. But it can write articles on the foundations of QM just like the experts in the field.:devil:
The foundations of QM are to a large part philosophy, and thus texts about them can be produced by an AI just imitating the usual babbling of this community. Answering a concrete physics or math question is more difficult, since you expect a concrete, unique answer and no babbling. SCNR.
 
  • Haha
  • Like
Likes aaroman, timmdeeg and PeroK
  • #89
martinbn said:
I tried some simple high school math questions. It fails. But it can write articles on the foundations of QM just like the experts in the field.:devil:
Mathematica, on the other hand, can answer high school math questions, but can't write articles on quantum foundations. Those are simply different tools; it doesn't follow that one thing is easy and the other hard.
 
  • Like
Likes gentzen, dextercioby and PeroK
  • #90
vanhees71 said:
The foundations of QM are to a large part philosophy, and thus texts about them can be produced by an AI just imitating the usual babbling of this community. Answering a concrete physics or math question is more difficult, since you expect a concrete, unique answer and no babbling.
True. But let me just note that what you just said here (and what I am saying now) is "babbling", not a unique answer. Both "babbling" and unique answers have value for humans, and both require intelligence on the human side. People who are good at unique answers but very bad at "babbling" are probably autistic.
 
  • #91
PeroK said:
Perhaps that says something about the relative difficulty of those two topics!
See my #89.
 
  • #92
Demystifier said:
Mathematica, on the other hand, can answer high school math questions, but can't write articles on quantum foundations. Those are simply different tools; it doesn't follow that one thing is easy and the other hard.
No, Mathematica cannot answer high school questions.
 
  • #93
martinbn said:
No, Mathematica cannot answer high school questions.
It can't if the questions are written in normal English. But it can if they are written in formal Mathematica language. For example, instead of asking "What is the solution of ...", in Mathematica you write something like "Solve[...]".
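
The same split shows up in Python with SymPy, by the way: it happily answers the formal query but has no way to take the English sentence. A minimal sketch, assuming SymPy is installed:

from sympy import symbols, solve

x = symbols('x')

# Formal query: fine.
print(solve(x**5 - x, x))  # the five roots: 0, 1, -1, -I, I

# "What is the solution of x^5 = x?"  <- cannot be asked in plain English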
 
  • Like
Likes mattt, aaroman, dextercioby and 1 other person
  • #94
Demystifier said:
It can't if the questions are written in normal English. But it can if they are written in formal Mathematica language. For example, instead of asking "What is the solution of ...", in Mathematica you write something like "Solve[...]".
What if the problem is "The room is 5 by 8, the price of carpet is 3 per m^2. What is the cost?" Or some geometry problem which requires proving something as part of the solution.
 
  • #95
martinbn said:
What if the problem is "The room is 5 by 8, the price of carpet is 3 per m^2. What is the cost?" Or some geometry problem which requires proving something as part of the solution.
I believe we are not far from a tool which will combine ideas of systems such as ChatGPT and Mathematica to solve such problems.
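
A toy sketch in Python of the kind of pipeline I have in mind, with the language-model step stubbed out (the translate_to_formal function below is hypothetical; a real tool would call something ChatGPT-like there):

from sympy import sympify

def translate_to_formal(problem: str) -> str:
    # Hypothetical stand-in for a language model that turns
    # English into a formal expression; hard-coded for this one problem.
    assert "carpet" in problem
    return "5 * 8 * 3"  # area in m^2 times price per m^2

def solve_word_problem(problem: str):
    expr = translate_to_formal(problem)  # the "ChatGPT-like" step
    return sympify(expr)                 # the "Mathematica-like" step

print(solve_word_problem(
    "The room is 5 by 8, the price of carpet is 3 per m^2. What is the cost?"
))  # 120

The hard part, of course, is the first step, which is exactly the part stubbed out here.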
 
  • Like
Likes PeroK, dextercioby and vanhees71
  • #96
There are AIs that score approximately average (amongst a select group of high achievers!) on high-school math-olympiad-level questions. They do particularly well where things require formal proofs (they're less good with geometry).

ChatGPT right now produces essays roughly at the first-year undergraduate level (worse in some respects, better in others) and can generalize concepts in essays at roughly the same level (although it is confidently incorrect a good percentage of the time, where perhaps a human would be more cautious).

The way this stuff is scaling, I don't think it's very long (<10 years) before they are solving properly formulated physics/math subproblems (the type we give students for research projects).

I suspect it will be somewhat field-dependent for a while. One thing I have come to expect with this revolution is that it struggles in places where naively I would have thought it would be strong, and does far better where I would have expected it to be weak.
 
  • Like
Likes dextercioby and vanhees71
  • #97
Me:"Write a python script that characterizes a quantum particle in a well using the consistent histories formalism"

ChatGPT:"Sure! Here is a python script that characterizes a quantum particle in a well using the consistent histories formalism

import numpy as np

# Define the Hamiltonian for the quantum particle in a well
def Hamiltonian(x, p, mass, well_width):
    H = (p**2) / (2 * mass) + (well_width**2) * (x**2) / 2
    return H

# Define the decoherence functional for the quantum particle in a well
def decoherence_functional(x1, x2, t1, t2, mass, well_width):
    delta = x1 - x2
    prefactor = (mass / (2 * np.pi * (t1 - t2)))**0.5
    exponent = -mass * delta**2 / (2 * (t1 - t2))
    D = prefactor * np.exp(exponent)
    return D

# Define the function that computes the probabilities for the quantum particle in a well
def probabilities(x1_values, x2_values, t1, t2, mass, well_width):
    prob = []
    for x1 in x1_values:
        for x2 in x2_values:
            D = decoherence_functional(x1, x2, t1, t2, mass, well_width)
            prob.append(D)
    return prob

# Define the initial and final times for the quantum particle
t1 = 0
t2 = 1

# Define the mass and well width for the quantum particle
mass = 1
well_width = 1

# Define the values of x1 and x2 to compute the probabilities for
x1_values = np.linspace(-5, 5, 100)
x2_values = np.linspace(-5, 5, 100)

# Compute the probabilities for the quantum particle in a well
probs = probabilities(x1_values, x2_values, t1, t2, mass, well_width)

This script defines a function for the Hamiltonian of a quantum particle in a well, a function for the decoherence functional of the particle, and a function for computing the probabilities of the particle being in different positions at different times using the consistent histories formalism. It then sets the initial and final times for the particle, the mass and well width, and the values of x1 and x2 to compute the probabilities for, and calls the probability function to get the probabilities."

I haven't reviewed the script yet, but it's interesting that it at least gets the gist.
 
  • Like
  • Wow
Likes mattt, gentzen, PeroK and 2 others
  • #98
Me: Solve ##x^5=1##
ChatGPT:
If ##x^5 = 1##, then ##x## is one of the fifth roots of ##1##. The fifth roots of ##1## are ##1, -1, i, -i##, and ##-i^2##. Note that ##i^2 = -1##. Therefore, the solutions to ##x^5 = 1## are:
##1, -1, i, -i##, and ##-i^2##
So, ##x## can be any of these values.

Me: Solve ##x^5=x##
ChatGPT:
To solve ##x^5 = x##, we can use the fact that ##x^5 - x = 0##. We can then factor this equation as follows:

##(x^5 - x) = 0##

##(x^4 + x^3 + x^2 + x + 1)(x - 1) = 0##

Therefore, the solutions to ##x^5 = x## are ##x = 1## and ##x = -(x^4 + x^3 + x^2 + x + 1)##. The first solution, ##x = 1##, is a real solution. The second solution, ##x = -(x^4 + x^3 + x^2 + x + 1)##, is not a real solution.
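
For the record, neither answer is right: the first lists the fourth roots of unity (note ##-i^2 = 1##), and in the second the factorization actually equals ##x^5 - 1##, not ##x^5 - x##, so ##x = 0## is missed entirely. A quick check with SymPy (assuming it's available) gives the actual solution sets:

from sympy import symbols, solve

x = symbols('x')
print(solve(x**5 - 1, x))  # 1 plus the four complex fifth roots of unity
print(solve(x**5 - x, x))  # 0, 1, -1, -I, I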
 
  • Haha
  • Skeptical
Likes mattt, aaroman and vanhees71
  • #99
So cheating with the bot in a first-semester math exam might not be the best idea ;-).
 
  • Like
  • Haha
Likes PeroK and Demystifier
  • #100
martinbn said:
Me: Solve x^5=0.
Me: Explain Galois theory such that an average physicist can understand it.

I will not copy paste the answer, but let me just say that now I understand Galois theory better than before.
 
  • Like
  • Wow
Likes mattt, PeroK, physika and 1 other person