Are you sure that the bot explained it right to you? Given the answer to ##x^5=1##, I'd be very skeptical.
Demystifier said:
Dr. House.

AI model determines cardiovascular risk from routine chest X-ray
vanhees71 said:
Are you sure that the bot explained it right to you? Given the answer to ##x^5=1##, I'd be very skeptical.

I don't understand your point. Maybe I should ask ChatGPT to explain to me what you meant?
Demystifier said:
Me: Explain Galois theory such that an average physicist can understand it.
I will not copy paste the answer, but let me just say that now I understand Galois theory better than before.

It obviously gave the wrong answer. The correct answer is that the average physicist cannot understand Galois theory.
Demystifier said:
I will not copy paste the answer...

Out of curiosity, is this the answer?
Galois theory is a branch of mathematics that deals with the study of field extensions. Specifically, it studies the conditions under which a field can be obtained as an extension of another field, and the properties of these extensions.
One of the main results of Galois theory is the Galois correspondence, which gives a one-to-one correspondence between subgroups of the Galois group of a field extension and intermediate fields of the extension. This result allows for a deep understanding of the symmetries inherent in field extensions, and has numerous applications in mathematics and physics.
Galois theory has found applications in a wide range of fields, including number theory, algebraic geometry, and the study of elliptic curves. It also has applications in physics, particularly in the study of quantum mechanics and the theory of solitons.
Overall, Galois theory is a fundamental tool for understanding the structure of field extensions and their symmetries, and has had a profound impact on a wide range of mathematical and physical disciplines.
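To make the correspondence concrete, here is the standard textbook baby example (added here for concreteness; it is not part of the bot's answer): for the extension ##\mathbb{Q}(\sqrt{2})/\mathbb{Q}##,
$$\mathrm{Gal}\big(\mathbb{Q}(\sqrt{2})/\mathbb{Q}\big)=\{\mathrm{id},\sigma\},\qquad \sigma\big(a+b\sqrt{2}\big)=a-b\sqrt{2},$$
and the Galois correspondence pairs the whole group with the base field ##\mathbb{Q}## and the trivial subgroup ##\{\mathrm{id}\}## with the full field ##\mathbb{Q}(\sqrt{2})##; in larger extensions, every intermediate field is matched to a subgroup of the Galois group in the same way.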
martinbn said:
It obviously gave the wrong answer. The correct answer is that the average physicist cannot understand Galois theory.

@Demystifier is not your average physicist!
PeroK said:
@Demystifier is not your average physicist!

But I'm average (in the population of theoretical physicists with a PhD) in my ability to understand abstract algebra.
martinbn said:
I was reading it as it was typing, and I thought "Damn it, it is OK. I cannot complain about it." But then it wrote about Klein and the Square and I was happy again.

Me: Why are some people happy when others are wrong?
martinbn said:
Out of curiosity, is this the answer?

No, mine was different, and more illuminating to me.
Demystifier said:
No, mine was different, and more illuminating to me.

Interesting. So it does not always generate the same answer.
Demystifier said:
Me: Why are some people happy when others are wrong?
ChatGPT:
There could be a variety of reasons why some people might feel happy when others are wrong. Here are a few possibilities:
1. Schadenfreude: This is the feeling of pleasure or joy that comes from seeing someone else experience failure or disappointment. Some people might feel happy when others are wrong because it gives them a sense of superiority or validation.
2. Competition: If someone views a situation as a competition, they might feel happy when others are wrong because it gives them an advantage.
3. Personal biases: People may feel happy when others are wrong if it confirms their preexisting beliefs or biases.
4. Misunderstandings: Sometimes, people might feel happy when others are wrong simply because they misunderstand the situation or the intentions of the other person.
It's important to note that feeling happy when others are wrong is not necessarily a healthy or positive reaction. It's usually more productive and empathetic to try to understand others' perspectives and find common ground, rather than taking pleasure in their mistakes or failures.

Oh come on. The previous posts were jokes. Wasn't it clear? I even put one of those faces.
martinbn said:
Interesting. So it does not always generate the same answer.

True. You even have a button "Regenerate Response" that gives you a different answer to the same question.
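That variability is built in: the model outputs a probability distribution over candidate next tokens, and the interface samples from it rather than always taking the single most likely token. A minimal sketch of the sampling step (illustrative logits; not ChatGPT's actual decoding code):

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Sample one token index from a logit vector via softmax sampling."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(x - m) for x in scaled]
    return random.choices(range(len(logits)), weights=weights, k=1)[0]

# Hypothetical scores for four candidate next tokens. Repeated calls can
# return different choices, which is why "Regenerate Response" can give
# a different answer to the same question.
logits = [2.0, 1.5, 0.3, -1.0]
print([sample_next_token(logits, temperature=0.8) for _ in range(10)])
```

Lowering the temperature concentrates the weights on the top token and makes the output nearly deterministic; raising it makes regeneration more varied.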
martinbn said:
Oh come on. The previous posts were jokes. Wasn't it clear? I even put one of those faces.

If it was clear that you were joking, why do you think that I was not joking too? I'm sure you had a good reason.
Demystifier said:
If it was clear that you were joking, why do you think that I was not joking too? I'm sure you had a good reason.

Because you put an "I am sad" reaction on my post. I expected a "haha".
martinbn said:
Because you put an "I am sad" reaction on my post. I expected a "haha".

You are good at math, but bad at rationalization.
timmdeeg said:
Me: how would you explain "quantum spookiness" with an everyday analogy?
ChatGPT:
Quantum mechanics is a fundamental theory in physics that describes the behavior of matter and energy at the atomic and subatomic scale. One of the most striking and counterintuitive aspects of quantum mechanics is the idea of "quantum superposition," in which a quantum system can exist in multiple states simultaneously. This concept is often referred to as "quantum spookiness."
To understand quantum superposition, it might be helpful to consider the following everyday analogy: imagine you are playing a game of "rock, paper, scissors" with a friend. At any given moment, your hand can be in one of three states: rock, paper, or scissors. But in the world of quantum mechanics, a quantum system can be in multiple states at the same time. It's as if your hand could be both rock and scissors simultaneously, and only when you make a choice and "measure" the state of your hand does it become one or the other.
This strange behavior is a consequence of the fundamental uncertainty principle in quantum mechanics, which states that it is impossible to know the precise state of a quantum system with complete accuracy. The idea of quantum superposition helps to explain many of the strange and counterintuitive phenomena that are observed at the quantum scale, such as the fact that particles can be entangled and can seemingly communicate instantaneously over vast distances.

Come to think of it, I am no longer surprised that it generates responses like this one when it comes to QM. After all, physicists do the same when they write popular articles/books about QM.
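For comparison, the textbook statement the analogy is reaching for (note that superposition is a separate idea from the uncertainty principle the bot cites): a two-state quantum system can be in
$$|\psi\rangle=\alpha\,|0\rangle+\beta\,|1\rangle,\qquad |\alpha|^2+|\beta|^2=1,$$
and a measurement yields ##|0\rangle## with probability ##|\alpha|^2## or ##|1\rangle## with probability ##|\beta|^2##, with the system thereafter in the measured state.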
martinbn said:
Me: Solve ##x^5=1##
ChatGPT:
If ##x^5 = 1##, then ##x## is one of the fifth roots of ##1##. The fifth roots of ##1## are ##1, -1, i, -i##, and ##-i^2##. Note that ##i^2 = -1##. Therefore, the solutions to ##x^5 = 1## are:
##1, -1, i, -i##, and ##-i^2##
So, ##x## can be any of these values.

It works much better when you give "him" a little help, just as with a real person.
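For the record, the correct solutions are the five fifth roots of unity:
$$x_k=e^{2\pi i k/5}=\cos\frac{2\pi k}{5}+i\sin\frac{2\pi k}{5},\qquad k=0,1,2,3,4,$$
of which only ##x_0=1## is real; the quoted ##-1##, ##i## and ##-i## satisfy ##x^2=1## or ##x^4=1##, not ##x^5=1##.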
Jarvis323 said:
The fact that it isn't very good at math isn't surprising, given that its ability to do math is basically a side effect of its ability to generate meaningful text.
Perhaps a model designed and trained specifically for doing math is just around the corner and it will blow our minds?

Exactly!
Maarten Havinga said:
Ask it to which essential question the answer is 42. If it crashes, you can blame me! ;-)

I think systems trained through machine learning generally don't crash. Once the training is over, the high-level explanation of their working is not based on logical loops (repeat until ...), but on shortcut heuristics (if it quacks like a duck, it's probably a duck). Of course, at the low level it is still based on logical circuits, but this level does not distinguish "deep" questions (what's the meaning of everything) from "mundane" ones (solve the equation x = x + 1).
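To make the "no logical loops" point concrete, here is a toy sketch (the next_token function is a stand-in for a trained network, not anyone's real code): generation is a bounded number of forward passes, so no prompt, however deep, can make it spin forever:

```python
import random

VOCAB = ["the", "duck", "quacks", "like", "a", "."]

def next_token(context):
    """Stand-in for a trained network; a real model would condition the
    distribution on the context, here we just pick uniformly."""
    return random.choice(VOCAB)

def generate(prompt, max_tokens=20):
    # The only loop is bounded by max_tokens: there is no "repeat until
    # solved", so a deep question and a mundane one cost the same work
    # and neither can hang the generator.
    tokens = prompt.split()
    for _ in range(max_tokens):
        tok = next_token(tokens)
        tokens.append(tok)
        if tok == ".":  # the model emitted an end-of-sentence token
            break
    return " ".join(tokens)

print(generate("what is the meaning of everything"))
print(generate("solve x = x + 1"))
```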
Demystifier said:
It makes errors, just like real intelligence. Moreover, it can correct itself.

Not only that, you can talk it into correcting an error and then into making the error once again. As a source of information - especially on the more difficult subjects - it is completely useless. Plus, it is dangerous, as it sounds pretty confident.
Borek said:
Not only that, you can talk it into correcting an error and then into making the error once again. As a source of information - especially on the more difficult subjects - it is completely useless.

The best way I've seen its usual output described is "fluent bullshit". It's bad at coding and math because it is, at its core, a weighted shuffling of components in the training data.
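"Weighted shuffling" undersells a transformer, but the simplest generator in that spirit is a Markov chain, which literally re-emits transitions from its training text with their observed frequencies. A minimal sketch (toy corpus; obviously not how ChatGPT is implemented):

```python
import random
from collections import defaultdict

# Toy "training data"; a real model sees vastly more text.
corpus = "the cat sat on the mat the cat ate the rat".split()

# Record every observed successor of each word; duplicates in these
# lists are exactly what makes the sampling frequency-weighted.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n=8):
    """Re-emit training-data transitions with their observed frequencies."""
    out = [start]
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
```

It fails at math and code for the same reason it reads fluently: every local transition is plausible, but nothing enforces global correctness.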
"William's syndrome?"InkTide said:described is "fluent bullshit"
InkTide said:
"fluent bullshit"

I don't agree with gendering terms like that, but I definitely agree with the core sentiment.
Demystifier said:
While I, as a language model, am able to provide information and explanations on a wide range of topics, I do not have access to the Internet and am not able to browse for or access current information about specific topics

Anticipating two ChatGPTs: since they are language models, they would be able to chat with each other in their own language more efficiently.
Delta Prime said:
Anticipating two ChatGPTs: since they are language models, they would be able to chat with each other in their own language more efficiently.

I read that this happened at Google a few years ago. They had two AI agents that were communicating with one another in their own language. Google, not knowing what was being exchanged, shut them down.
gleem said:
I read that this happened at Google a few years ago. They had two AI agents that were communicating with one another in their own language. Google, not knowing what was being exchanged, shut them down.

Siri versus Alexa.
gleem said:
I read that this happened at Google a few years ago. They had two AI agents that were communicating with one another in their own language. Google, not knowing what was being exchanged, shut them down.

Just to play it safe, I wonder if one could have access to the internet and the other not.
Demystifier said:
Exactly!
And I would add that most humans are not very good at math for a similar reason.

Intriguing! Perhaps we may gain insight into why some people who are exceptional in mathematics are savants.
gleem said:
I read that this happened at Google a few years ago. They had two AI agents that were communicating with one another in their own language. Google, not knowing what was being exchanged, shut them down.

Possibly an apocryphal story inspired by events in Colossus: The Forbin Project (1970). In the movie, US and Soviet defense computers conspire to control the world using their own indecipherable mathematical language.
renormalize said:
Possibly an apocryphal story inspired by events in Colossus: The Forbin Project (1970). In the movie, US and Soviet defense computers conspire to control the world using their own indecipherable mathematical language.

Given the track record of comic books being ahead of science by 50 years, I would say sci-fi movies are right on time. Relatively speaking. (Pun intended.)
gleem said:
I read that this happened at Google a few years ago. They had two AI agents that were communicating with one another in their own language. Google, not knowing what was being exchanged, shut them down.

I had to go back and check this out. It turns out that in 2017 Google, Facebook, and OpenAI reported AI seemingly using its own language. The one I was thinking of was Facebook's chatbots, Bob and Alice. Facebook was using them to study how two AI agents could negotiate. They were not specifically required to do this in English and apparently began using a made-up language. Facebook then required them to use English and continued their study. Apparently, some misinterpreted this and reported it in a more alarming manner.
BWV said:
ChatGPT (black) defeats Stockfish (white). I have been studying ChatGPT's moves and adopting them has raised my Elo by 1000 points already.
[attachment: chess diagram]
RebeccaSantios said:
You know ChatGPT, however inaccurate, can eventually explain complex ideas in simple language.

What good is an explanation in simple language that's wrong?
PeterDonis said:
What good is an explanation in simple language that's wrong?

Arguably, if you ask a bunch of human experts some questions, and half the answers are some impenetrable combination of snobby, off-putting, jargon-heavy, missing the point, and at an inappropriate level - you'll learn about as much as if you asked a bot and got perfectly tailored, easy-to-digest answers that are wrong half the time.
Bandersnatch said:
Arguably, if you ask a bunch of human experts some questions, and half the answers are some impenetrable combination of snobby, off-putting, jargon-heavy, missing the point, and at an inappropriate level - you'll learn about as much as if you asked a bot and got perfectly tailored, easy-to-digest answers that are wrong half the time.

In the sense that you'll learn zero, yes. But how does that help? It doesn't. Learning zero information from a wrong chatbot is no better than learning it from "unskilled and unaware of it" humans.
PeterDonis said:
In the sense that you'll learn zero

It's not always wrong, is the point.