Forums
The Lounge
Feedback and Announcements
Challenges in Training AI for Complex Scientific Questions
[QUOTE="Nugatory, post: 6837623, member: 382138"] From the PF mission statement, PF's role is: [INDENT]Our mission is to provide a place for people (whether students, professional scientists, or others interested in science) to learn and discuss science as it is currently generally understood and practiced by the professional scientific community. As our name suggests, our main focus is on physics, but we also have forums for most other academic areas including engineering, chemistry, biology, social sciences, etc.[/INDENT] and I don't see any reason why that should change. Instead, the question we should be asking is: what role does AI play in executing that mission?

"Learn" and "discuss" are different things, and I expect that chatbot interactions will differ in those two contexts. I can imagine an AI (like Reddit's modbots, but more sophisticated) giving initial responses to many of the more common questions. We don't really need a human being to point people to "rest frame of a photon" or Twin Paradox FAQs, or to give a first response to B-level "conscious observer" quantum mechanics questions. I wouldn't be surprised to find that a properly trained AI would do a pretty good job of setting the A/I/B prefixes on incoming threads - correcting these is the single most common moderator action I take, and although it's only a few mouse clicks, it's not something that obviously needs a human for the first response.

I expect we will start seeing posts along the lines of "Chatbot said this and I don't understand. Help." These are analogous to the "I read this pop-sci source and now I'm confused" questions that we get all the time and that keep the SAs busy. That's another path to our "learn" mission, and something we should welcome.

These are all more or less positive. The most likely negative I see is that we may be flooded with bad contributions to technical threads. We get these today when some kid, drunk on their most recent encounter with their favorite pop-sci video, jumps into a thread to explain that "it's just wave-particle duality" or whatever; we take those posts down as soon as we notice them or they are reported, and we're done. A chatbot will make it a lot easier to construct these well-intentioned but self-aggrandizing contributions - human moderators may not be able to keep up. [/QUOTE]