Haborix
Someone should ask these AI models if they fear a Butlerian Jihad.
Ken G said:
if you look at a child learning language, it seems pretty clear that they are establishing frequency of association. But it is not just association with other words, it is association with experience.

Exactly, and this is what ChatGPT (or at least the version discussed in the article here) lacks.
Ken G said:
I am not a linguist, but if you look at a child learning language, it seems pretty clear that they are establishing frequency of association. But it is not just association with other words, it is association with experience.

Hmm, has this been established by linguists? I have observed association with both experience and words. Think of how a child is taught to say "you're welcome" in response to "thank you." It is just rote. The same goes for learning times tables: ask "what is five times seven?" and little Johnny says "thirty-five." He's not calculating the answer, at least not after a couple of months in class.
mathwonk said:
Properly used and implemented, AI seems to help with physics instruction, potentially replacing a section man to answer student questions. One advantage of AI in this experiment seems to be its superior accommodation to the level/speed of the student. But they had to "customize" the program appropriately.
https://news.harvard.edu/gazette/st...i-tutor-to-physics-course-engagement-doubled/

Yup. As someone who works on the back end at one of the bigger companies in the LLM space, we have been hiring "tutors" so we can mass-train on math, coding, physics, chemistry, etc. Essentially, they come up with problems and try to stump our model; when a problem does stump the model, they write the correct solution, and we collect all of these corrections as training data. The other part is that we use web scrapers to pull problems from the internet as well, and then have the tutors check whether the LLM solved each problem correctly or whether they have to correct it (sorry, physicsforums!).
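To make that workflow concrete, here is a minimal, purely illustrative sketch of how a tutor correction might be logged as a supervised fine-tuning record. The function names, field names, and JSONL format are assumptions made for illustration, not any company's actual pipeline.

```python
import json
from pathlib import Path
from typing import Callable

def log_tutor_correction(problem: str,
                         tutor_solution: str,
                         ask_model: Callable[[str], str],
                         out_path: Path = Path("sft_data.jsonl")) -> None:
    """Ask the model the tutor's problem; if its answer disagrees with the
    tutor's solution, append the corrected pair as one JSONL training record."""
    attempt = ask_model(problem)
    if attempt.strip() != tutor_solution.strip():  # crude "stumped" check
        record = {
            "prompt": problem,
            "model_attempt": attempt,
            "target": tutor_solution,  # what the model should have said
            "source": "tutor_correction",
        }
        with out_path.open("a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

# Hypothetical usage with a stand-in "model" that gets the problem wrong:
log_tutor_correction(
    "A 1.5 V battery drives 150 mA through a resistor. What is R?",
    "R = V / I = 1.5 V / 0.150 A = 10 ohms",
    ask_model=lambda p: "R = 1500 ohms",  # wrong attempt, so the pair is logged
)
```

The only idea the sketch captures is the one described above: keep the pair (problem, tutor's corrected solution) whenever the model's own attempt misses.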
Ken G said:
By the way, I really welcome ChatGPT in the classroom. If you think about it, the fact that ChatGPT often gives quite good answers to general introductory-level physics and astronomy questions (not mathematical logic, mind you, it's not good at that at all, so here I'm talking about lower-level classes that do not include much mathematical reasoning), yet does not have any deeper understanding of those answers, means that it simulates the kind of student that can get an A by parroting what they have heard without understanding it at all. The way ChatGPT fools us into thinking it understands its own explanations is exactly the problem we should be trying to avoid in our students. The worst part is, sometimes our students don't even realize they are only fooling us, because we have trained them to fool themselves as well. We tell them an answer, then ask them the question, and give them an A when they successfully repeat the answer we gave them before. They walk away thinking they learned something, and we think they learned something. But don't dig into their understanding, don't ask them a question which calls on them to think beyond what we told them, if you don't want to dispel this illusion!
Hence, the way to defeat using ChatGPT as a cheat engine is the same as the way to dispel the illusion of understanding where there is no understanding: ask the follow-on question, dig into what has been parroted. That's actually one of the things that often happens in this forum: we start with some seemingly simple question and get a seemingly straightforward answer, but after a few more posts it quickly becomes clear that there was more to the question than what the asker originally intended. If we teach students to dig, we are teaching them science. If we teach them to do what ChatGPT does, we cannot complain that ChatGPT can be used to cheat!
jedishrfu said:
He wanted me to explain them in my own terms, which I did, but then he said, well, that's not quite right. Go study some more, and when you're ready, come back.
I went back a second and third time, getting more "not quite right" responses. On my fourth and final attempt, I recited the limit definition exactly as it was stated in the book, and he said, you know, I think you've got it.
He was one of my favorite profs. This approach could work for those students using ChatGPT to do their work.

It's odd that he wanted you to explain it in "your own terms," but only accepted the literal definition. It's tricky in math, because there really isn't "your own terms" in math: the definitions are extremely precise, and proofs are, at some level, purely syntactic. Computers are used to prove very difficult theorems via brute-force methods, but for me that kind of spoils the whole point of math. Do we prove theorems because we want to use them to prove other theorems that we don't understand any better than the ones the computer proved? Or do we prove theorems because we think that if you understand a theorem's proof, you understand the theorem better?
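For reference, the "limit definition" in that story is presumably the standard epsilon-delta definition (the book's exact wording isn't given here):
$$\lim_{x \to a} f(x) = L \quad \text{means} \quad \forall \varepsilon > 0 \;\, \exists \delta > 0 \;\, \text{such that} \;\, 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$$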
jedishrfu said:
A more primitive example was when some freshmen were doing an electrical lab, measuring resistance and current, and were asked to determine the voltage of a battery.
We watched them use their newly acquired electronic calculators (circa 1973) to compute an answer of 1500 V for the battery and were shocked by their answer. We asked how they arrived at that answer, to which they said that's what the calculator said.
We had to chuckle, because we knew it to be a 1.5 V D cell wrapped in some tape to obscure the labeling.

Yes, that's the kind of mistake these AIs make: they can't tell when their answer is absurd or contradictory. They only know it's the answer they got.
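The actual lab readings aren't given, but a factor-of-1000 unit slip in Ohm's law would produce exactly that kind of answer. With hypothetical readings of ##R = 10\ \Omega## and ##I = 150\ \mathrm{mA}##:
$$V = IR = (0.150\ \mathrm{A})(10\ \Omega) = 1.5\ \mathrm{V},$$
whereas typing the milliamp reading in as amperes gives ##150 \times 10 = 1500\ \mathrm{V}##, the calculator's answer, off by the usual factor of 1000.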