Is it just me or is PF dying?

  • Thread starter: gravenewworld
Summary:
The discussion centers on concerns that Physics Forums (PF) is experiencing a decline in traffic and engagement, with long-time members noting a loss of popular contributors and a perceived increase in toxicity. Users express frustration with the site's moderation policies, feeling that threads are often closed too harshly, which may deter participation from new and existing members. The decline in forum activity is attributed to various factors, including competition from social media platforms and changes in user behavior, particularly on mobile devices. While some members believe PF is not dying, they acknowledge a loss of vibrancy compared to previous years and suggest that the site could benefit from a more inclusive and less abrasive atmosphere. The conversation highlights the need for PF to adapt to maintain relevance in a changing online landscape.
  • #61
WWGD said:
Is it possible to do some sort of data mining or analytics on PF? I am not sure what to look for, but maybe there are proxies for good posts and good questions? I was thinking of a sort of logistic regression for the probability that a post is good.
Does anyone use the "Top Threads" feature?
 
  • #62
Greg Bernhardt said:
Does anyone use the "Top Threads" feature?
I was not even aware it existed. Ref., please? Maybe if we have a data bank of good posts, we can mine them.
 
  • #63
WWGD said:
I was not even aware it existed. Ref., please? Maybe if we have a data bank of good posts, we can mine them.
To the right of top pagination at the forum thread list level. It's a filter button.
 
  • #64
Greg Bernhardt said:
To the right of top pagination at the forum thread list level. It's a filter button.
It filters away your announcement of the new space news forum :p.

Data mining is possible, of course, but I don't see any automatic way to judge the quality of posts.
Number of likes received is a very weak and variable indicator and not useful for posts older than a year.
 
  • #65
mfb said:
It filters away your announcement of the new space news forum :p.
Must not be interesting :)
 
  • #66
WWGD said:
Is it possible to do some sort of data mining or analytics on PF? I am not sure what to look for, but maybe there are proxies for good posts and good questions?
Stack Overflow has an 'Answered' flag that the OP can set. It helps subsequent readers skip to the answer.

I don't think that would work here, since, unlike in programming, answers here are not definitive.
 
  • #67
I like PF. I don't think PF needs to compete against any other service. PF and the posters here have helped me tremendously through a lot of courses.

I think the discipline/rigor is very high here, but it is high in any science-related field or lab setting. I don't want to be on a forum with people discussing outlandish conspiracy theories -- I have Facebook for that. I come to PF to learn, and out of all of my tabs and internet searches, PF is always #1 for my learning process.

High-quality content, and posters who help and are a pleasure to read. I also highly enjoy the Insights section.
 
  • Likes: Drakkith, WWGD, mfb and 3 others
  • #68
Would it be too difficult to implement some measurements, like, say, a rating of 0-5 by the OP on quality of answer/satisfaction, and similar ratings from the moderators involved for quality, originality, etc., and then look for correlates/proxies? Maybe then a databank of the higher-rated posts and one of the lower-rated posts could be kept and analyzed?
 
Last edited:
  • #69
WWGD said:
a rating of 0-5 by the OP on quality of answer/satisfaction
We had thread ratings for the first 8 years and they were rarely used.
 
  • #70
Greg Bernhardt said:
We had thread ratings for the first 8 years and they were rarely used.
Do you think it would be worthwhile mining them? I am no expert, but maybe I can look into it, learn more myself, and hopefully help in the process?
 
  • #71
WWGD said:
Do you think it would be worthwhile mining them?
Mining what and looking for what?
 
  • #72
Greg Bernhardt said:
Mining what and looking for what?
Just doing some regressions to find proxies for good posts, e.g., patterns of traits that seem to correlate with highly-rated posts. Or maybe logistic regression, which assigns a probability (of a post being high-quality) from given traits? I am not 100% sure, but AFAIK this is done in many corporate settings; maybe it can be adapted here?
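For what it's worth, here is a minimal sketch (in Python) of the kind of logistic-regression "proxy" model I have in mind. The features (word count, likes received, LaTeX usage, the poster's total post count) and the labels are entirely made up for illustration; PF has no such labeled dataset, so this only shows the shape of the idea.

```python
# Toy logistic regression on invented post features.
# Nothing here uses real PF data; the features and labels are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Hypothetical feature matrix: [word_count, likes, has_latex, poster_post_count]
X = np.column_stack([
    rng.integers(5, 400, n),    # word count of the post
    rng.poisson(1.5, n),        # likes received
    rng.integers(0, 2, n),      # contains LaTeX (0/1)
    rng.integers(1, 5000, n),   # poster's total post count
])

# Hypothetical label: 1 = "good post" (e.g. mentor-rated 4-5), 0 = not
y = rng.integers(0, 2, n)

model = LogisticRegression(max_iter=1000)
model.fit(X, y)

# Estimated probability that a new 200-word, 3-like, LaTeX-containing post
# by a member with 150 posts is "good", according to this toy model
print(model.predict_proba([[200, 3, 1, 150]])[0, 1])
```

With real ratings as labels, the fitted coefficients would indicate which traits actually correlate with a post being rated highly.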
 
  • #73
WWGD said:
Just doing some regressions to find proxies for good posts, e.g., patterns of traits that seem to correlate with highly-rated posts. Or maybe logistic regression, which assigns a probability (of a post being high-quality) from given traits? I am not 100% sure, but AFAIK this is done in many corporate settings; maybe it can be adapted here?

I don't think we have that much data...
 
  • #74
micromass said:
I don't think we have that much data...
I don't mean to be pushy, but is it possible to somehow set aside posts that are good in order to be analyzed? I know mentors are already overworked,
so maybe this can be done gradually until a relatively large amount of data is available?
 
  • #75
WWGD said:
I don't mean to be pushy, but is it possible to somehow set aside posts that are good in order to be analyzed? I know mentors are already overworked,
so maybe this can be done gradually until a relatively large amount of data is available?
Feedback and suggestions are good. How would you analyze/evaluate a post?
 
  • #76
Greg Bernhardt said:
Feedback and suggestions are good. How would you analyze/evaluate a post?
Maybe a combination of numbers: one of them a measure of the OP's satisfaction with the answer, the others measures by mentors of originality, quality, opening up new avenues, being a good example, etc., each on a 0-5 scale. Then we could average the scores given by all mentors, select those scoring 4-5 as the good posts and those scoring 0-1 as the worst ones, and look for patterns and shared qualities. Maybe we could also link to these from the main page to illustrate what we consider quality posts. These types of analyses are done at the corporate level under the general term "metrics", with, e.g., measures of product satisfaction.
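To make the bookkeeping concrete, here is a rough sketch of the averaging and bucketing step, with invented mentor ratings; nothing below reflects an existing PF feature.

```python
# Average per-post mentor ratings (0-5) and split into "good" and "poor" buckets.
# The post IDs and scores are invented for illustration.
from statistics import mean

# post_id -> list of per-mentor scores (each already averaged over criteria)
ratings = {
    101: [4.5, 5.0, 4.0],   # three mentors rated this post
    102: [1.0, 0.5],
    103: [3.0, 3.5, 2.5],
}

good, poor = [], []
for post_id, scores in ratings.items():
    avg = mean(scores)
    if avg >= 4.0:
        good.append(post_id)
    elif avg <= 1.0:
        poor.append(post_id)

print("high-rated posts:", good)   # candidates for the "quality" databank
print("low-rated posts:", poor)    # candidates for the comparison set
```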
 
  • #77
WWGD said:
Maybe a combination of numbers: one of them a measure of the OP's satisfaction with the answer, the others measures by mentors of originality, quality, opening up new avenues, being a good example, etc., each on a 0-5 scale. Then we could average the scores given by all mentors, select those scoring 4-5 as the good posts and those scoring 0-1 as the worst ones, and look for patterns and shared qualities. Maybe we could also link to these from the main page to illustrate what we consider quality posts. These types of analyses are done at the corporate level under the general term "metrics", with, e.g., measures of product satisfaction.
Identifying good posts is not hard; what is hard is developing an automatic system. Companies spend millions to develop such technology, and I think it's likely beyond our capabilities. Furthermore, I'm starting to get lost on the purpose of this. Is it to teach people to write good posts, or to list good posts for people to read? And are we talking about good posts or good threads? We already have the featured thread area. I don't think linking random good posts from various points in different threads would be cohesive.
 
  • #78
Also, one of the best and easiest ways to measure a post's worth is by "Liking" it.
 
  • #79
I would say the idea, or at least an idea, is to get people to write better posts and to allow mentors and others, if possible, to steer low-quality posts up into better ones by having a better idea of the traits that are conducive to better quality. This way we create a positive feedback loop of motivation and quality posts. And, yes, another measure of quality would be the "liking". Other measures could be, e.g., enlightening/motivating, clear explanation, etc. If we could find commonalities among these high-quality posts, could that be used to try to steer the "lower-numbered" posts into being better?
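As a toy illustration of what "finding commonalities" could look like: extract a few surface traits from post text and compare their averages between liked and unliked posts. The example posts, like counts, and traits below are invented guesses, not real PF data.

```python
# Compare simple surface traits of liked vs. unliked posts (invented data).
import re

posts = [
    ("Here is a derivation: $$E = mc^2$$ with a reference [1].", 5),
    ("lol idk maybe", 0),
    ("See this link https://example.org and the worked example below.", 3),
    ("thanks", 0),
]

def traits(text):
    return {
        "words": len(text.split()),
        "has_math": int("$$" in text),
        "has_link": int(bool(re.search(r"https?://", text))),
    }

liked = [traits(t) for t, likes in posts if likes > 0]
unliked = [traits(t) for t, likes in posts if likes == 0]

for key in ("words", "has_math", "has_link"):
    avg = lambda group: sum(d[key] for d in group) / len(group)
    print(f"{key}: liked={avg(liked):.2f}  unliked={avg(unliked):.2f}")
```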
 
  • #80
WWGD said:
I would say the idea, or at least an idea, is to get people to write better posts and to allow mentors and others, if possible, to steer low-quality posts up into better ones by having a better idea of the traits that are conducive to better quality. This way we create a positive feedback loop of motivation and quality posts. And, yes, another measure of quality would be the "liking".
We do have measures to inform members when we think their post is low quality. Unfortunately, given human nature and the flighty nature of online communities, it's very difficult to actually make big gains in post quality. The best way is to simply find a way to attract those who already have it. Those low-quality posters aren't going to be very receptive and suddenly turn into journal-quality writers.
 
  • #81
Greg Bernhardt said:
We do have measures to inform members when we think their post is low quality. Unfortunately, given human nature and the flighty nature of online communities, it's very difficult to actually make big gains in post quality. The best way is to simply find a way to attract those who already have it. Those low-quality posters aren't going to be very receptive and suddenly turn into journal-quality writers.

Yes, I guess the 80/20 rule applies: https://en.wikipedia.org/wiki/Pareto_principle
 
  • Likes: Greg Bernhardt
  • #82
The idea of assigning some objective quantity of "goodness" to posts raises some alarm bells in my head.

Seems to me, the spirit of scientific inquiry is a democratic and merit-based one. By merit, I mean 'any good post' as opposed to 'someone who has a high goodness score'.

An inquisitive mind should read as much as possible before drawing a conclusion as to what the most helpful or wise answer is. A goodness rating - especially an automated one - interrupts that, and inches us down the slippery slope of the "appeal to popularity" and "appeal to authority" fallacies.

I think it will damage PF.
 
  • #83
DaveC426913 said:
The idea of assigning some objective quantity of "goodness" to posts raises some alarm bells in my head.

Seems to me, the spirit of scientific inquiry is a democratic and merit-based one. An inquisitive mind should read as much as possible before drawing a conclusion as to what the most helpful or wise answer is. A goodness rating - especially an automated one - interrupts that, and inches us down the slippery slope of the "appeal to popularity" and "appeal to authority" fallacies.

I think it will damage PF.
These are supposed to be used more as rules of thumb than as deterministic rules: they help guide, but can always be overruled if the context suggests the rule does not apply.
 
  • #84
WWGD said:
These are supposed to be used more as rules of thumb than as deterministic rules: they help guide, but can always be overruled if the context suggests the rule does not apply.
I'm afraid I don't see how that alters my point at all. Popularity is a snowball effect. A nobody writing a good post isn't enough anymore; now they also need a 'reputation' to be heard over the 'high goodness score' people.

We already have Insights and FAQs, if people are looking for succinct, approved posts.
 
  • #85
DaveC426913 said:
I'm afraid I don't see how that alters my point at all. Popularity is a snowball effect. A nobody writing a good post isn't enough anymore; now they also need a 'reputation' to be heard over the 'high goodness score' people.

But the measure is intended to be a weighted average that will not give full weight to popularity alone; other factors will also be considered.
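For example, a minimal sketch of such a weighted average, with invented factor names and weights (popularity counts, but only partially):

```python
# Weighted combination of quality factors; the names and weights are made up.
scores = {"likes": 4.0, "op_satisfaction": 5.0, "mentor_quality": 3.5, "originality": 4.5}
weights = {"likes": 0.2, "op_satisfaction": 0.3, "mentor_quality": 0.3, "originality": 0.2}

overall = sum(scores[k] * weights[k] for k in scores)
print(f"overall score: {overall:.2f}")  # likes contribute only 20% of the total
```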
 
  • #86
DaveC426913 said:
The idea of assigning some objective quantity of "goodness" to posts raises some alarm bells in my head.

Seems to me, the spirit of scientific inquiry is a democratic and merit-based one. By merit, I mean 'any good post' as opposed to 'someone who has a high goodness score'.

An inquisitive mind should read as much as possible before drawing a conclusion as to what the most helpful or wise answer is. A goodness rating - especially an automated one - interrupts that, and inches us down the slippery slope of the "appeal to popularity" and "appeal to authority" fallacies.

I think it will damage PF.

We are always on this slippery slope (for example, we already have mentors and science advisors), but I do agree with the spirit of your post.
 
  • #87
atyy said:
We are always on this slippery slope (for example, we already have mentors and science advisors), but I do agree with the spirit of your post.
But again, this is intended to serve as an aid, not as a substitute for judgement.
 
  • #88
DaveC426913 said:
PF is trying to "compete" with Reddit??
Reddit is the most common website visited immediately prior to hitting PF (aside from Google).
 
  • #89
I don't know the timescale they use for the analysis. That number could be a bit biased by the nearly 400,000 hits this article got from being on the Reddit front page recently.
 
  • #90
mfb said:
I don't know the timescale they use for the analysis. That number could be a bit biased by the nearly 400,000 hits this article got from being on the Reddit front page recently.
And Alexa data is nearly useless because it relies on people using their toolbar, which almost no one does.
 
