Effects of LLMs on math learning, self-taught or otherwise

  • Thread starter: elias001
AI Thread Summary
The discussion highlights the transformative potential of large language models (LLMs) in mathematics education, particularly in proof-based courses. With tools like Lean and AI-driven platforms, students can learn to code proofs and receive instant feedback, addressing challenges faced in large university classes. LLMs can serve as personal tutors, providing explanations and quizzes, but users must verify the accuracy of the information provided. The integration of LLMs raises questions about their role in education, emphasizing that while they can assist learning, they cannot replace the insights of experienced educators. Overall, the technology is evolving, and its impact on math learning will continue to unfold.
elias001
There has been a lot of buzz about LLMs being able to do math research.

Here are a few links:

I am not sure which of the LLMs, from Google, Microsoft, or whichever big tech companies, did really well in this year's math IMO.

Also, ever since the theorem-proving language Lean, and especially its version 4, came on the scene, there has been a lot of excitement in the tech-savvy corner of the research math community. Now, with the rise of LLMs, their excitement on X/Twitter is palpable, to the point of feeling almost orgasmic. Some universities offer intro-to-proofs courses where students learn to do proofs using Lean. There are efforts such as https://github.com/danieleschmidt/autoformalize-math-lab to translate mathematical statements into code readable by any of the proof-checking languages. Not to mention there is an existing startup that uses AI to read a math paper and translate it back into LaTeX code: https://mathpix.com/

OK, here is something to think about: how will all these technological innovations affect the teaching and learning of mathematics at all levels, from kids to the graduate level?

Think about how many parents are not good at math but need to find tutors for their kids for whatever reason. I am not going to lay the blame on bad math teachers, limited school resources, etc. However, one thing that will have an impact is parents who have limited financial means, or parents who don't speak English or whatever language their children use in school. These situations apply from grade school all the way to high school.

Then there is what happens at universities, in first-year calculus or linear algebra courses. If a course has 600-plus students, there are only so many hours a course instructor or teaching assistant can be made available. There is also emailing and waiting for a reply, or waiting until the weekly office hours.

The fun gets even more suspenseful in any math course where proofs are involved. Now, with Lean, students can learn to code in the language, write out their own proofs, and have them checked for correctness. Why is this last part important and relevant?
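To make this concrete, here is a toy example of my own (not from any course) of what a machine-checked proof looks like in Lean 4. The checker verifies the proof mechanically, so a student gets an unambiguous correct/incorrect verdict:

```lean
-- Toy example: Lean mechanically verifies this proof that addition
-- on the natural numbers is commutative, using a core library lemma.
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

If the proof term were wrong, Lean would reject it at compile time; no human marker needed.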

In a math course involving proofs, suppose there are 50 students, weekly problem sets, and each problem set has 10 questions, all involving proofs. A teaching assistant can mark two questions from each handed-in problem set; the other eight don't get looked at. Tests and exams usually come with an announcement from the instructor along the following lines: students are expected to know all material covered in lectures, assigned textbook readings, and homework assignments. With a 13-week course, that is 13 problem sets at 10 proof questions each, for a total of 130 proof questions per student. A teaching assistant has one weekly office hour, plus a certain allotted time for marking. There are, of course, the instructors/professors, but they too have a limited amount of time per week.

Oh, I haven't even mentioned the engineering-oriented math courses. But such issues don't apply to engineers: they are in constant training to be resourceful in finding solutions to problems they don't know how to solve, and an LLM is another tool they can add to their tool belt.
 
LLMs are useful tools that can quiz you and prepare you for tests.

There is a caveat here. It's assumed that you have some understanding of the subject matter to decide if the LLM is giving you a truthful answer.

As an example, for differential calculus, you could ask for a quiz specifically on the product rule, where it would provide you with a multiple-choice list of answers. If you ask it, the LLM will show why your answer is either true or false with some calculations. Now it is up to you to read and review the answer to understand where you went astray or if the LLM has hallucinated an answer.
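For instance (a made-up check of my own, not something the LLM provides), a student can sanity-check a claimed product-rule derivative numerically in a few lines of Python, comparing it against a finite-difference estimate:

```python
import math

def f(x):
    # f(x) = x^2 * sin(x)
    return x**2 * math.sin(x)

def f_prime_claimed(x):
    # product rule: (u*v)' = u'*v + u*v', with u = x^2, v = sin(x)
    return 2 * x * math.sin(x) + x**2 * math.cos(x)

def f_prime_numeric(x, h=1e-6):
    # central finite difference, an independent numerical check
    return (f(x + h) - f(x - h)) / (2 * h)

# If the claimed derivative were wrong, these would disagree.
for x in (0.5, 1.0, 2.0):
    assert abs(f_prime_claimed(x) - f_prime_numeric(x)) < 1e-5
```

The same trick catches a hallucinated derivative: evaluate both at a few points and see whether they agree.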

If the LLM has given you an answer and you really can't decide, then that is a good time to contact your professor and ask what is wrong here.

---

I used to tutor my son in high school math. The usual refrain I got was "that's not how the teacher did it," followed by a wild burst of exasperation and impatience. After a few minutes of arguing, he'd say, "Oh, is that how it works? Okay, I got it." Later, we got another student two grades older to tutor him, and we never heard a peep out of him.

My point here is that the LLM may solve a problem in a way that differs from your expectations, and now you must decide whether to trust it or consult your professor.

---

Bottom line: Use it to help you study and not to do your homework. You can use it to do some research or provide you with an outline for an essay, but you, as a student, need to do the work of writing the essay. If you have trouble with specific small areas, you could use the LLM to help there.

An LLM can digest entire articles and summarize them for you. However, it is up to you to determine if the summary is accurate by reading the article and comparing it to the summary.

A case in point: I used it to summarize one day of a criminal trial, and the LLM got witnesses confused with one another, conflated facts, and just messed things up in the summary. I had to redo the summary by asking the LLM to summarize smaller chunks, i.e., one witness at a time.
 
Likes robphy, jack action and FactChecker
@jedishrfu I am including the possibility of these LLMs being personal tutors that can explain a concept to someone who is 100% new to the subject. Yes, LLMs make mistakes now, but that won't stay true forever. An LLM can explain things to a novice student who has, say, grade 11 level math in the US: it can teach that student calculus, or explain any concept from linear algebra to a beginner, be it by giving illustrative examples, explaining the steps in proofs of theorems, or literally helping the student solve homework problems.
 
elias001 said:
@jedishrfu I am including the possibility of these LLMs being personal tutors that can explain a concept to someone who is 100% new to the subject. Yes, LLMs make mistakes now, but that won't stay true forever. An LLM can explain things to a novice student who has, say, grade 11 level math in the US: it can teach that student calculus, or explain any concept from linear algebra to a beginner, be it by giving illustrative examples, explaining the steps in proofs of theorems, or literally helping the student solve homework problems.
Do LLM services have enough continuity from day to day to be consistent in their explanations?
Also, if there are other classes, books, or teachers involved, I would recommend staying with one approach until the concepts are understood.
 
@FactChecker Say you pick Google Gemini and initiate a conversation: as long as you save the URL of your tab session, you can always continue where you left off. Say you asked five different questions on a topic; you can pick up the next day. You can even reboot your computer, open Google Chrome or Firefox, and reload the URL where you had your conversation. You can also ask the LLM to give its answer as a LaTeX document, which can be compiled to a PDF and printed out.

Also, if you are reading something in a text that you don't understand, the more information you can give the LLM, the better: enter the definitions and theorems that precede the passage, where relevant, and it can give an answer that relates directly to what you are not understanding, in the context of what you have read before. This technology is new, and one can't expect the LLM to keep making mistakes forever. Nor should we expect it to be perfect in the answers it gives 100% of the time.

I am not saying that the AI won't ever give a nonsensical answer or "hallucinate". We should keep in mind that LLMs have access to the math libraries of Lean, HOL, Isabelle, Coq, and Mizar. If not now, someone will make them publicly accessible to the various LLMs eventually. Also, these libraries are constantly being updated.

Then there are Lean and the other proof checkers. They are not very user friendly, and the learning curve is steeper than learning C++ pointer management. But imagine LLMs and something like Lean together as a studying tool for students.

I remember being told that before the popularity of Maple, MATLAB, or Mathematica, mathematicians used to post to various journals or online forums asking for help with evaluating complicated integrals. Ever since the arrival of CAS, such activity has largely stopped, or become less and less common. Does that mean Maple, MATLAB, and Mathematica are correct and helpful 100% of the time? Of course not, but it is always nice to have additional tools available.
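In the same spirit (again a made-up illustration, not tied to any particular CAS), one can verify a claimed antiderivative without trusting the tool: differentiate the claimed result numerically and compare it with the integrand.

```python
import math

def integrand(x):
    # the function being integrated: x * e^x
    return x * math.exp(x)

def antiderivative_claimed(x):
    # claimed result of the integral of x*e^x dx: (x - 1) * e^x
    return (x - 1) * math.exp(x)

def numeric_derivative(g, x, h=1e-6):
    # central finite difference
    return (g(x + h) - g(x - h)) / (2 * h)

# The derivative of the claimed antiderivative should match the integrand.
for x in (0.0, 1.0, 2.5):
    assert abs(numeric_derivative(antiderivative_claimed, x) - integrand(x)) < 1e-4
```

This kind of independent spot check is exactly the habit a student needs, whether the answer came from Maple or from an LLM.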

There seem to be two separate issues here. On Twitter/X, among the math community, there are mathematicians wondering if these new AI tools will ever replace mathematicians. One lady who does research in software verification and teaches at an R1 university keeps trying to tell her math counterparts that it won't; that is her professional assessment.

The second issue is whether these new AI tools will replace teaching assistants or math instructors. I personally don't think so either. An AI, even one that is self-aware and fully conscious, can never replace the insight and experience a human expert has acquired over a lifetime. That is just my personal two cents; others might disagree.
 
Likes FactChecker
elias001 said:
their excitement on X/Twitter is palpable, to the point of feeling almost orgasmic.
You seem to be drinking from the same well.

elias001 said:
How will all these technological innovations affect the teaching and learning of mathematics at all levels, from kids to the graduate level?
As @jedishrfu said, an LLM is a tool for retrieving information and summarizing it in a particular format. It is an "assistant" to help someone do their job. That someone must already know how to do the job and must verify the work before submitting it. So an LLM could be an "assistant" for a teacher, if used correctly.

You cannot expect someone to learn by themselves with an LLM any more than you can expect someone to learn by themselves with books.

elias001 said:
Yes, LLMs make mistakes now, but that won't stay true forever.
elias001 said:
This technology is new, and one can't expect the LLM to keep making mistakes forever.
What are you basing these statements on? It sounds more like wishful thinking.
 
@jack action Then students should not be using Maple, MATLAB, Mathematica, or any CAS either. I specifically said LLMs can improve, and there are also proof-checking languages that will hopefully become user friendly enough that learning to program in them is as easy as learning to code in Python. Many people on this site and on other science forums learn things by themselves by reading books. I also said we should not expect it to be 100% perfect all of the time. It's not as if the most advanced LLMs cannot explain the concept of a limit to a grade 11 high school student, or will only give out gibberish when asked to explain it in plain English, or will only give wrong examples. Also, whether one likes it or not, the technology is here to stay. The genie can't be put back into the bottle.


The question is: how will the various math and statistics departments deal with it?
 
A book or class should have been well thought out, checked by experts, and proofread. Even if an LLM's output can be a starting point, you still have to check it, and as a student you don't yet know the subject. It would be much better if an expert started from an LLM result and edited it. If you can do that, experts can do it too, and much more reliably.
 
MIT is offering a wide range of what appear to be AI-taught courses.

I don't know anything about it.
 
Likes FactChecker
Hornbein said:
MIT is offering a wide range of what appear to be AI-taught courses.

I don't know anything about it.

This looks like a great example of beneficial AI.
 
elias001 said:
@jedishrfu I am including the possibility of these LLMs being personal tutors that can explain a concept to someone who is 100% new to the subject. Yes, LLMs make mistakes now, but that won't stay true forever. An LLM can explain things to a novice student who has, say, grade 11 level math in the US: it can teach that student calculus, or explain any concept from linear algebra to a beginner, be it by giving illustrative examples, explaining the steps in proofs of theorems, or literally helping the student solve homework problems.
I think the LLM will always be suspect. It’s too big to test completely.

In fact, full testing may take so long that it won't be feasible to do. Instead, companies will test against known problems.

Sometimes small developer tweaks can result in bizarre behavior.
 