Why has the number of questions declined over recent years?

  • Thread starter: rcgldr (Homework Helper)
It's not just at Physics Forums. See, for example, this graph of the monthly question rate at Stack Overflow:

[Graph: monthly question rate at Stack Overflow]
Several processes are at work. One is that at some point "all questions were already answered": quite often it was enough to google the problem to find several discussions and explanations. Second, demographics changed: people asking questions increasingly expected to be treated nicely, so they started to avoid places with a somewhat toxic culture of "question asked, google it, thread closed" or "RTFM" (or with completely toxic cultures). Third, AI/LLM/ChatGPT can deliver good-looking answers (whether they are correct is another matter) that are never toxic, which makes them even more attractive. On top of that, in the long run the number of questions asked depends on the number of people giving answers (you won't post on a site where questions go unanswered). Social media sites like Facebook/Twitter actively use sociological techniques to keep people engaged and on the site; forums and StackOverflow-type sites never implemented these techniques, so people migrated and the traffic shifted.
 
But since LLMs take their information from the web, if there are no longer any questions or answers there, how will they be able to answer questions? That will be interesting in a few years, when there will be questions about new technologies, processes, protocols, etc.
 
jack action said:
But since LLMs take their information from the web, if there are no longer any questions or answers there, how will they be able to answer questions?

They won't. The process has already started.
 
I must admit that for me LLMs have revolutionized learning and exploring certain subjects beyond established knowledge. I find that LLMs are far more proficient at understanding exactly what I am asking, and how I approach it, than people ever could be. Removing the constant misunderstandings really makes a huge difference. This allows the AI to point you directly to the proper background source material for further study and to pull the relevant key points from it. It also does not judge you when you make mistakes, unlike people, so you feel far more encouraged to ask further questions to deepen your understanding. But the biggest issue I found is this: when you go deep into physics topics, people who understand both the underlying mathematics and the physics at the deepest level are practically a null set; at best you get an expert in one with a rudimentary understanding of the other. LLMs don't suffer from this restriction, and they can easily catch an analogy to another field and give a proficient analysis of how deep the analogy goes and of what use it can be. People, on the other hand, don't have this extremely broad knowledge and hence make false claims, because they barely understand the other field (or don't even know it is a large subject of study). So I found people are more often subject to hallucinations than AI is.

Finally, the latest LLMs are good even at answering novel questions to which the answers aren't known. They need to think it through step by step, ideally alongside someone to help guide them, but they do good work approaching a solution, considerably accelerating the process of getting there.
 
Killtech said:
So I found people are more often subject to hallucinations than AI is.

And how do you judge that, if the answer is beyond your knowledge?
 
Borek said:
And how do you judge that, if the answer is beyond your knowledge?
This is an example of a misunderstanding an AI wouldn't make. A discussion is broader than a single question, and exploring any topic beyond the established requires a good understanding of the current state of everything related to it. Barely any human has all the related knowledge and its history, so people easily claim things that are inaccurate or even false. Asking for an AI's opinion helps to clear things up and to find sources that contradict a given statement. Often there is the issue of people claiming that things "are" a certain way, not realizing that this is often merely one valid interpretation, and ignoring that other, equivalent interpretations exist in which things are the other way around. They therefore make a lot of false statements when they talk about a framework based on another interpretation. People often get stuck in a single perspective; that is really a problem.
 
