It's not just at Physics Forums; see, for example, this graph of the monthly question rate at Stack Overflow:
jack action said: But since LLMs take their information from the web, if there are no questions or answers there anymore, how will they be able to answer questions?
Killtech said: So I found people are more often subject to hallucinations than AI is.
Borek said: And how do you judge if the answer is beyond your knowledge?

This is an example of a misunderstanding an AI wouldn't make. A discussion is broader than a single question, and exploring any topic beyond established knowledge requires a good understanding of the current state of everything related to it. Barely any human has all the related knowledge and its history, so people easily claim something inaccurate or even false. Asking for an AI's opinion helps to clear things up and to find sources that contradict a given statement. Often there are issues with people claiming that things "are" a certain way, not realizing that this is often merely one valid interpretation and ignoring that other, equivalent interpretations exist in which things are the other way around. They therefore make a lot of false statements when they talk about a framework based on another interpretation. People often get stuck in a single perspective, and that's a real problem.
Greg Bernhardt said: First it was Reddit, Exchange and Quora eating into PF. Now it's the LLMs and AI search including Google AI Overviews. PF will always be here for when people snap back and realize they actually want to talk with humans.

We can't compete on the speed of response, if nothing else. It's more likely that university lectures will be done by AI in the relatively near future.
Greg Bernhardt said: First it was Reddit, Exchange and Quora eating into PF. Now it's the LLMs and AI search including Google AI Overviews. PF will always be here for when people snap back and realize they actually want to talk with humans.

I do use AI for some of my more intricate questions, but if I want to make sure some detail is correct I turn to PF for that human connection and education.
PeroK said: In fact, it's possible that higher education for the masses will be seen as no longer required or desirable. Then, it will come down to a battle between the oligarchs and the AI that they've used to oppress the vast majority of humanity.

Well, education is slow, while integration could be much faster. Why spend decades learning so much knowledge and how to solve very complicated differential equations and integrals, when we could have the option to expand our natural wiring with additional hardware where this comes pre-trained? It would still take time to learn to use the newly integrated knowledge. We have to realize that our biological hardware is at its limits here, so either we evolve further or we accept the risk of becoming outdated and no longer playing a relevant role in shaping the future. But today's AI is still too static and cannot actively learn from daily experiences, so it's not ready for this either.
PeroK said: One of the most difficult things to do is to accept that something may be successful

But, from your link, it is not successful. You are supposed to use AI as an assistant, not as an expert:

Another student says: “There are some useful things in the presentation. But it’s like, 5% is useful nuggets, and a lot is repetition. There is some gold in the bottom of this pan. But presumably we could get the gold ourselves, by asking ChatGPT.”
The lecturer laughs uncomfortably. “I appreciate people being candid …” he says, then he changes the subject to another tutorial he made – using ChatGPT. “I’ve done this short notice, to be honest,” he says.
Without the AI work reviewed by experts, without a "responsible and ethical use of digital technologies", it is as successful as a professor who would ask their assistant to pick and read snippets from multiple books in class. I can read a book by myself, so why am I paying for a professor?

Eventually, the course head told James that two human lecturers would be going over the material for the final session, "so you don't get an AI experience".
In response to a query from the Guardian, the University of Staffordshire said “academic standards and learning outcomes were maintained” on the course.
It said: “The University of Staffordshire supports the responsible and ethical use of digital technologies in line with our guidance. AI tools may support elements of preparation, but they do not replace academic expertise and must always be used in ways that uphold academic integrity and sector standards.”
jack action said: But, from your link, it is not successful:

I was using success in the sense that it ultimately makes more money than the alternative, or simply comes to dominate the market. This is different from gaining general consumer approval.