elias001
I have a question for anyone familiar with the technical side of the various LLMs.
Given their current capabilities and limitations, how well can they be used to come up with examples in a particular subdiscipline of higher mathematics?
Let me explain: say one picks a theorem at random out of a typical undergraduate textbook, then asks the LLM to come up with examples illustrating that theorem.
For instance, say we ask ChatGPT to come up with examples illustrating the various theorems on exact sequences in module theory: the 3×3 lemma, the 4×4 lemma, the snake lemma. Then we could move on to examples for the two derived functors Tor and Ext, and then to the homological algebra chapter in Dummit and Foote.
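For concreteness, here is the kind of standard worked example one would hope an LLM could reliably produce for this material (a textbook computation, sketched here for illustration):

```latex
% A standard short exact sequence of abelian groups (Z-modules):
\[
0 \longrightarrow \mathbb{Z} \xrightarrow{\ \times 2\ } \mathbb{Z}
  \longrightarrow \mathbb{Z}/2\mathbb{Z} \longrightarrow 0
\]
% Tensoring with Z/2Z is right exact but not exact: the multiplication-by-2
% map becomes the zero map, and the long exact sequence for Tor yields
\[
\operatorname{Tor}_1^{\mathbb{Z}}(\mathbb{Z}/2\mathbb{Z},\,\mathbb{Z}/2\mathbb{Z})
  \cong \mathbb{Z}/2\mathbb{Z},
\qquad
\operatorname{Ext}^1_{\mathbb{Z}}(\mathbb{Z}/2\mathbb{Z},\,\mathbb{Z})
  \cong \mathbb{Z}/2\mathbb{Z}.
\]
```

The interesting question is whether an LLM can not only state such a computation but also verify each exactness claim along the way.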
I know many of you have reservations about the reliability of AIs. But either this technology has benefits to offer science and math education, or all the PhDs in AI and machine learning have been wasting their time. I am not expecting a 100% correct answer every time you ask an LLM. But if you think that, across the entirety of undergraduate and beginning graduate courses and textbooks, the chance of any LLM being accurate is less than 50%, then when do you think it will reach at least 80% accuracy?
Also, why am I asking this question? Because many of you who have studied pure math know that after a certain point in a subject, the examples get fewer and fewer, even among topics that have made it to the undergraduate level.