Study: Social media probably can’t be fixed

  • Thread starter: Filip Larsen
Summary:
A recent study suggests that the structural dynamics of social media platforms inherently foster negative outcomes, making proposed intervention strategies largely ineffective. The discussion highlights that the architecture of social media, rather than algorithms or user behavior, is to blame for toxicity and misinformation. Effective moderation is often overlooked, despite its success in smaller online communities, due to the challenges posed by scale and user engagement. Australia has introduced a ban on social media for users under 16 to protect young people from these harms. Overall, the conversation emphasizes the need for responsible platform management and the complexities of moderating vast online spaces.
Filip Larsen
Gold Member
As someone who has never really used social media, and who has thus been a bit baffled by the public discussion of the toxicity and other issues it brings about, I find that this study explains a lot, if it holds up:
https://arxiv.org/abs/2508.03385

Jennifer at Ars Technica also had a nice interview with one of the authors:
https://arstechnica.com/science/2025/08/study-social-media-probably-cant-be-fixed
Numerous platform-level intervention strategies have been proposed to combat these issues [with social media], but according to a preprint posted to the physics arXiv, none of them are likely to be effective. And it's not the fault of much-hated algorithms, non-chronological feeds, or our human proclivity for seeking out negativity. Rather, the dynamics that give rise to all those negative outcomes are structurally embedded in the very architecture of social media.
 
  • Like
  • Informative
Likes sbrothy, bhobba, jack action and 1 other person
The monster from the id has been loosed.
 
  • Haha
  • Like
Likes diogenesNY, bhobba and jedishrfu
PeroK said:
The monster from the id has been loosed.
And the Krell are rolling over in their graves.
 
  • Wow
Likes bhobba and jedishrfu
renormalize said:
And the Krell are rolling over in their graves.
They'll not feel so bad, seeing it happening to us as well!
 
PeroK said:
The monster from the id has been loosed.
I am only vaguely aware of what this title/meme implies. Is it meant here as an analogy to the observation made in the interview that fixing social media, even if it were technically possible, might also be complicated by the fact that the generation that has grown up exposed to all these issues, believing social media really reflects what the world looks like, will simply "stick" to forming society according to those "skewed" views?
 
Filip Larsen said:
I am only vaguely aware of what this title/meme implies. Is it meant here as an analogy to the observation made in the interview that fixing social media, even if it were technically possible, might also be complicated by the fact that the generation that has grown up exposed to all these issues, believing social media really reflects what the world looks like, will simply "stick" to forming society according to those "skewed" views?
It's a reference to Forbidden Planet (1956). The Krell civilization built a machine that granted their every wish. But when they went to sleep, a dormant, subconscious hatred for each other was unleashed; and, in the night, these monsters from the id were summoned and they tore the Krell limb from limb!

Many of us have independently thought we recognised the Krell machine in social media.
 
  • Like
  • Agree
Likes dwarde, diogenesNY, bhobba and 2 others
My favorite movie, with my grandfather Robby the Robot.

Forbidden Planet is loosely based on the Shakespearean play The Tempest.
 
Last edited:
I know exchanging movie memes and preferences always seems to bring people together, but as the OP (for once) I would like to encourage discussion relevant to the issues apparent with social media, or the study thereof.

For instance, I am slightly puzzled not to see "effective moderation" on the list of mechanisms that might be considered candidates for reducing the negative effects, e.g. toxicity. Well-established moderation has (as far as I understand it) been very much a requirement for openly available online communities (like PF Forums) since the early days of BBSs, Usenet news, etc., and for those communities it "seems to work". However, I also understand that social media today already employ various moderation mechanisms, apparently with less success, and I wonder whether this is due to ineffective moderation "rules" (i.e. social media choose to still allow content that sites like PF would moderate out) or whether it is simply the scale of the network that makes the difference. It is obviously going to be more practically difficult to achieve "perfect" moderation when 2 billion people hang out, compared to, say, only 500, but I am curious whether some of the issues with global social media could in principle be sufficiently reduced if only social media would employ the right amount of moderation (ignoring here all the incentives social media obviously have for not wanting to do this).
 
Australia has banned all social media for those under 16:

Yes, Australia has indeed banned social media access for individuals under the age of 16. This landmark legislation, the Online Safety Amendment (Social Media Minimum Age) Act 2024, aims to protect young people from potential harms associated with social media use. The ban includes platforms like TikTok, X, Facebook, and Instagram.
 
  • Like
  • Wow
Likes sbrothy, mcastillo356, OmCheeto and 2 others
  • #10
Filip Larsen said:
For instance, I am slightly puzzled not to see "effective moderation" on the list of mechanisms that might be considered candidates for reducing the negative effects, e.g. toxicity. Well-established moderation has (as far as I understand it) been very much a requirement for openly available online communities (like PF Forums) since the early days of BBSs, Usenet news, etc., and for those communities it "seems to work".
Strict moderation is rare because it takes a lot of effort* and reduces user engagement. What self-respecting capitalist CEO/investor wants that?

*May be fixable with automation.
 
  • Like
Likes DaveE and jedishrfu
  • #11
Consider the number of posts to be moderated. Sure, they can use AI to sift through them and then have a human moderator decide on rejected posts.
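For what it's worth, here is a minimal sketch (in Python, with an invented placeholder classifier, word list, and thresholds rather than any platform's actual system) of what such a two-stage pipeline might look like: an automated first pass scores each post, and only the gray zone reaches a human moderator.

```python
from collections import deque

# Toy two-stage moderation pipeline: an automated first pass scores each post,
# and only the uncertain ones are queued for a human moderator.

def toxicity_score(text: str) -> float:
    """Placeholder classifier returning a score in [0, 1]; a real system
    would call a trained model here."""
    blocked_words = {"threat", "slur"}  # illustrative only
    words = text.lower().split()
    hits = sum(1 for w in words if w in blocked_words)
    return min(1.0, 10 * hits / max(len(words), 1))

APPROVE_BELOW = 0.2   # auto-approve clearly fine posts
REJECT_ABOVE = 0.8    # auto-reject clearly bad posts

human_review_queue = deque()  # everything in between goes to humans

def triage(post: str) -> str:
    score = toxicity_score(post)
    if score < APPROVE_BELOW:
        return "approved"
    if score > REJECT_ABOVE:
        return "rejected"
    human_review_queue.append(post)
    return "pending human review"

if __name__ == "__main__":
    for p in ["nice weather today", "this is a threat and a slur"]:
        print(triage(p), "->", p)
```

The point is only the shape of the flow; the hard parts, classifier quality and the size of the human queue, are exactly what the rest of this post worries about.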

There was a documentary about the toll on these special moderators, who saw truly horrific things, enough to give them PTSD or something worse.



Banning a user just means they create a new account. Checking if the IP matches a banned user means they start using a VPN...

Sadly, we have no self-destruct code we can send a banned user to brick their computer. Or maybe we could, if they installed the social media app, but then there would be lawsuits claiming social media bricked their computer for no reason.

Can you recall the Sony fiasco with rootkits being installed on all machines that played any Sony CD?

The idea was to prevent piracy via a software rootkit named Extended Copy Protection (XCP). However, XCP exposed the client machine to malware attacks, compromised system stability, and created potential privacy breaches.

Sony never told the user that this software was being installed. The court ruled that Sony went too far and consequently was punished in civil court, paying out damages to all affected parties.

I believe the only solution is a way to track all users through a verified ID, maybe even a Real ID, but people prefer to stay anonymous, which reduces the risk of cyberstalking by government or private entities.
 
  • #12
russ_watters said:
Strict moderation is rare because it takes a lot of effort* and reduces user engagement. What self-respecting capitalist CEO/investor wants that?

*May be fixable with automation.
I've believed from the beginning that any platform, whether TV, print or Internet must ultimately be responsible for everything they publish or present.

For example, there's a British tennis player who received her first death threat at the age of 16. No one has shown the will to prevent this.

In fact, attempts to control the excesses of social media are increasingly seen as attacks on free speech.
 
  • Like
  • Agree
Likes BillTre, phinds and russ_watters
  • #13
jedishrfu said:
Consider the number of posts to be moderated. Sure, they can use AI to sift through them and then have a human moderator decide on rejected posts.

There was a documentary about the toll on these special moderators, who saw truly horrific things, enough to give them PTSD or something worse.



Banning a user just means they create a new account. Checking if the IP matches a banned user means they start using a VPN...

Sadly, we have no self-destruct code we can send a banned user to brick their computer. Or maybe we could, if they installed the social media app, but then there would be lawsuits claiming social media bricked their computer for no reason.

Can you recall the Sony fiasco with rootkits being installed on all machines that played any Sony CD?

The idea was to prevent piracy via a software rootkit named Extended Copy Protection (XCP). However, XCP exposed the client machine to malware attacks, compromised system stability, and created potential privacy breaches.

Sony never told the user that this software was being installed. The court ruled that Sony went too far and consequently was punished in civil court, paying out damages to all affected parties.

I believe the only solution is a way to track all users through a verified ID, maybe even a Real ID, but people prefer to stay anonymous, which reduces the risk of cyberstalking by government or private entities.

If the platform cannot identify who wrote it, then it is responsible. Someone, for example, would be able to charge X with making a death threat against them.

PS: Saying that it's not possible to moderate the platform you've created should never have been accepted as a defence.
 
  • Like
  • Agree
Likes symbolipoint, BillTre, martinbn and 2 others
  • #14
Yeah, Zuckerberg's "move fast and break things" has ended up breaking a lot of people.
 
  • #15
In many ways, Z is a psychopath in his thinking, as are many leaders who place corporate profits over humanity.

There was a recent leaked document from Meta concerning GenAI on their platform. Several independent news organizations reviewed it.

Here are the Reuters and Axios takes on the document:

https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/

https://www.axios.com/2025/08/15/meta-children-chatbot-flirt-reuters-hawley-investigation

And here's a review of some older leaked policies from FB:

https://www.reuters.com/technology/...instagram-harm-teens-senators-say-2021-09-30/

https://www.reuters.com/article/tec...-of-content-it-allows-guardian-idUSKBN18I0HG/
 
  • #16
Back in the 90s I used Usenet, the first social medium. It was very nice at first. It had about 1,000 newsgroups. Then AOL brought in ordinary people. Every unmoderated group -- the great majority -- sooner or later turned into a cesspool. All decent people left. Today's social media moderated by AI is far superior.

I learned that the political discourse of the great majority of Internet users consists entirely of insults. If you want to experience this for yourself, go to Rumble.

You can't make a silk purse out of a sow's ear.
 
Last edited:
  • Like
  • Sad
Likes TensorCalculus and collinsmark
  • #17
Hornbein said:
-- sooner or later turned into a cesspool. All decent people left. Today's social media moderated by AI is far superior.

Of course, it depends on the AI. I don't think I'd want Grok as a moderator.
https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content
Honestly, I'm not sure about any AI being a moderator at this stage, except maybe as an initial sweep.

Other than that, I agree with your post. I too started with Usenet and had a similar experience. Your description of Usenet degrading into a cesspool is right on the mark.
 
  • Like
Likes TensorCalculus
  • #18
There are definitely products out there aiming to "fix" social media:
One such tool is Topaz, developed by an old acquaintance of mine. It's definitely an interesting idea... but the extent to which it will actually help change social media is debatable...

Online forums can get really bad. And for now, I think I prefer human moderators. Sometimes the problem is that there aren't enough of them, or the rules aren't strict enough.
PF is the only (public) online forum that I'm active on, precisely because it's moderated heavily enough that I don't have to worry much about seeing really bad stuff. On other platforms, like Reddit, one would have to be much more careful.
 
  • Like
Likes russ_watters, BillTre, fresh_42 and 1 other person
  • #19
I understand that examples of social networks (over time) where moderation apparently didn't work are abundant, but since moderation does seem to work in some contexts (e.g. PF) I am still puzzled about the key enabler for why this is so. Is it network size alone, or do other factors play an essential part? For instance, on PF there is topic restriction, which (I understand) makes moderation a lot easier simply by avoiding the topics that most often incite toxicity, but in other contexts (e.g. European cross-political discussion meetings like the Danish in-person People's Meeting) people are encouraged to discuss politics without things getting out of hand, though I do not know whether that can be replicated online with the same success.

I strongly suspect that the broken windows theory also applies to online communities and moderation, since it's all about human nature (and not about the actual windows); that is, for moderation to work it has to catch quickly when a window is being broken and take steps to clean up (just as PF does), including efforts to stop repeat offenders. But again, if such moderation works for small networks, why doesn't it work for larger networks? Or does it work, but, as several here seem to say, social media are just in general "too lax" with such cleanup? Or does it perhaps become too difficult to detect and handle the multitude of ways people can (intentionally) break a window without appearing to throw a stone?

To make that last point trickier, we know there will probably always be trolls who, for some reason, primarily seem to join in order to wreak havoc or cause disruption. I understand some platforms work with up/down voting (on posts at least), and I assume that to some extent works to hide trolling behavior when it is done by only a few, but I wonder whether any platforms have also tried account voting? The idea sounds naive, because it would obviously also allow for misuse in "bi-partisan" contexts where larger subgroups are in "strong disagreement" with each other. With this mechanism, a stone to "break a window" with would then also be to down-vote someone because you disagree with them rather than because they really misbehaved.
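A toy sketch (in Python, with invented thresholds and no real platform's policy behind it) mostly to make the misuse worry concrete: under naive account voting, a bloc that merely disagrees with someone is indistinguishable from genuine reports of misbehavior.

```python
from collections import defaultdict

# Toy "account voting": votes target accounts instead of posts, and an
# account whose net score falls below a threshold gets restricted.
# The threshold and vote weights are invented for illustration.

account_scores = defaultdict(int)
RESTRICT_BELOW = -5

def vote_on_account(target: str, up: bool) -> None:
    account_scores[target] += 1 if up else -1

def is_restricted(target: str) -> bool:
    return account_scores[target] < RESTRICT_BELOW

# Six accounts that merely disagree with "alice" push her below the
# threshold just as easily as six honest reports of trolling would.
for _ in range(6):
    vote_on_account("alice", up=False)

print(is_restricted("alice"))  # True
```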

Is the conclusion on moderation really that it is only effective in small enough networks, and that in larger networks effective moderation would mean social media were no longer social media? If that is true, I guess it is somewhat equivalent to saying that humans in large enough groups cannot be expected to cooperate constructively with each other on an individual level without organizing into hierarchies or sub-groups.
 
  • #20
Filip Larsen said:
I understand that examples of social networks (over time) where moderation apparently didn't work are abundant, but since moderation does seem to work in some contexts (e.g. PF) I am still puzzled about the key enabler for why this is so. Is it network size alone, or do other factors play an essential part? For instance, on PF there is topic restriction, which (I understand) makes moderation a lot easier simply by avoiding the topics that most often incite toxicity...
Topic restriction does help, and over time we've narrowed the focus, eliminating politics, philosophy, "theory development" and pseudoscience. But more so, it's a commitment to quality over quantity, and a recognition that it's never going to be huge (and I'm not sure it is even profitable at all*).
I strongly suspect that the broken windows theory also applies to online communities and moderation, since it's all about human nature (and not about the actual windows)...
It does.
But again, if such moderation works for small networks, why doesn't it work for larger networks?
It can, except insofar as it might prevent them from getting large to begin with, and interfere with profit due to the much lower engagement and much higher moderation workload. Improved automation can help with the latter.

*I'm strongly opposed to the idea of unpaid moderators in a significantly profitable company. It's exploitive.
 
  • Like
Likes DaveC426913, BillTre, PeroK and 2 others
  • #21
I bet the popular social media get a billion posts a day. Automated censorship is the only possibility. YouTube bans 45% of comments. Evidently they have an "if in doubt throw it out" style. Usually I can't figure out why a comment of mine is banned.
 
  • Like
Likes russ_watters
  • #22
Hornbein said:
I bet the popular social media get a billion posts a day. Automated censorship is the only possibility. YouTube bans 45% of comments. Evidently they have an "if in doubt throw it out" style. Usually I can't figure out why a comment of mine is banned.
Do you view that as censorship?
And as an affront to a basic given right such as free speech?

If one ( not pinpointing you yourself in particular, but a general person ) is in favour of free speech, and anti-censorship in general society, then why on social media, which is a function of society, should the rules of society not apply?

The people who utter 'that's just them (the social media) clouding the issue under cover of free speech' most likely enjoy the cover of free speech in general society themselves, not realizing that such blanket free-speech statements, about this or any other topic, have not been thoroughly thought through and are rather unintellectual. It could nevertheless be that they really are in favour of the 'thought police', at least for your thoughts, not their own, but who is to know, as no expansion on the utterance is given for one to feel out their actual opinion(s). It is left for one to decipher the meaning, to agree or disagree, but in doing so one has accepted the utterance as basic fact, which it may or may not be.
If one agrees, then the slippery slope can apply: from curtailment of free speech on social media over and above what is necessary, to enhanced curtailment in general society far beyond things such as 'yelling fire' in a crowded movie theatre.
 
  • #23
I'll wager that in the future you cannot get elected to a public post unless virtually your entire life is on video.

EDIT: "You can see I didn't inhale!" :smile:
 
  • #24
There is a lot of focus on the bad sides of SoMe. Surely there must be some upsides, no?

EDIT: Although, I'm drawing a blank for any serious ones......

EDIT2: I admit I jumped to a conclusion for now, but it's like the age-old dynamic of who(m?) yells loudest and has the most sociological/political backers behind them.
 
  • #25
Filip Larsen said:
Is the conclusion on moderation really that it is only effective in small enough networks, and that in larger networks effective moderation would mean social media were no longer social media? If that is true, I guess it is somewhat equivalent to saying that humans in large enough groups cannot be expected to cooperate constructively with each other on an individual level without organizing into hierarchies or sub-groups.
Sounds to me like general society in action, where groups and sub-groups exist. Since social media is a function of society, it should not be surprising that clumpings will also exist.

In general society there are rules and regulations for people to follow to be considered a good citizen.
If one of those laws is broken, and the offence is found out, there are consequences for the bad behavior. For example, not all speeders are charged with travelling over the speed limit. Those that are caught are caught by the moderators of the street and highway system, in this case the police. If the traffic is light and the network is small (a town), just a few moderators can catch most, if not all, speeders over the expanse of the jurisdiction, as all infractions will be nearly in line of sight, and the moderator might happen to know just about everyone, including those with a propensity to speed.

As the town and traffic network increases in size, more moderators will be needed for patrol.
https://en.wikipedia.org/wiki/List_of_countries_and_dependencies_by_number_of_police_officers
Note the relationship for Canada and the United States, two countries more similar than different in other aspects of economic representation as a basis of comparison: 184 and 422 per 100,000 inhabitants. (As a side note, the Vatican rate is 15,439 - go figure.) We can't say anything about the efficiency of catching speeders in either jurisdiction, as the number of uncaught speeders is unknown.

If general society wanted every speeder to be caught, the number of necessary moderators (police officers) would be a hard figure to reach. A high number, such as an officer on every corner, might be necessary, or then again overkill. Nevertheless, it still would not do away with those who want to beat the system and speed anyway, whether as part of a speeding convoy or as someone who will speed whenever they feel 'safe' enough to do so. Instead, general society has decided that catching some percentage of speeders, not all, is a reasonable outcome for the use of tax dollars. Nonetheless, that 'some' can be increased through the use of technology, such as hand-held radar and photo radar, where every vehicle can be tracked as it passes by, with nary a thought given by most of the driving population, since they are 'not doing anything wrong', only 'a little bit wrong', and it's for a good cause such as to 'save the children' (who could argue against that?).

For social media policing, catching each and every infraction seems to be a desired goal ( contrary to the reasoning applied to general society ), with each and every posting on social media monitored.
Just as much as one does not want to be under investigation for saying 'fire fly' in a crowded movie theatre, one would not welcome a wellness check for saying the same in a posting in forest fire season.

The same goes for breaking up the clumps. Clumps that have 'undesirable' traits would, or should, be under scrutiny. 'Undesirable' would then have to be defined, and by whom, with the definition open to interpretation due to the politics of the day.
 
  • #26
Here is one group phenomenon I just came across that looks kind of weird from the outside, including self-censorship, or rather a ban on posting during the event. An expensive undertaking for the young women.

https://www.independent.co.uk/news/...sa-tiktok-university-of-alabama-b2808808.html
Rushtok
Across the country, rush is typically a 10-day event where “prospective new members” try out sororities through rounds of activities prescribing a strict slate of outfits and etiquette. In the lead-up, girls often submit "social resumes" and letters of recommendation from sorority alums.

Participation often requires an eye-opening price tag.

After spending sometimes tens of thousands of dollars on outfits, makeup and plane tickets, each of this week's 2,600 recruits paid $550 to participate. It's non-refundable if they don't get picked. If accepted, they'll pay an average $8,400 a semester to live in the sorority house, or $4,100 if they live elsewhere, according to the Alabama Panhellenic Association.

The pressure can be so intense that an industry of consultants now helps girls navigate the often mysterious criteria for landing a desired sorority. Some charge up to $10,000 for months of services that can begin in high school.
 
  • #27
256bits said:
Do you view that as censorship?
And as an affront to a basic given right such as free speech?

If one ( not pinpointing you yourself in particular, but a general person ) is in favour of free speech, and anti-censorship in general society, then why on social media, which is a function of society, should the rules of society not apply?

The people who utter 'that's just them (the social media) clouding the issue under cover of free speech' most likely enjoy the cover of free speech in general society themselves...
Freedom of speech is a government-guaranteed right, meant to prevent government suppression of dissenting ideas. It has nothing to do with social media beyond the fact that the government can't do or mandate the censorship. [Late edit] Except for protection from criminal abuse/harassment.

PF is heavily moderated. Yes, that goes against a principle of free speech. No, it does not violate any "basic given right such as free speech" because such a thing does not exist beyond the narrow scope I described above.

If someone insults/berates you in your own house, wouldn't you kick them out? If someone insults/berates you in their house, wouldn't you leave?
 
Last edited:
  • #28
256bits said:
If one ( not pinpointing you yourself in particular, but a general person ) is in favour of free speech, and anti-censorship in general society, then why on social media, which is a function of society, should the rules of society not apply?
Those 'rules of society' also have something to say about responsibility (and occasionally about consequences, too).
Why is it always only about free speech, then?
 
  • Like
Likes russ_watters
  • #29
Rive said:
Those 'rules of society' also have something to say about responsibility (and occasionally about consequences, too).
Why is it always only about free speech, then?
Because it is a catchy phrase, and who can be against free speech?

And by 'rules of society' I do not mean only those written down in a constitution, or passed by majority vote by lawmakers. Unwritten rules, such as pleasantries ( handshakes, greetings upon meeting someone ) as a simple example, also apply.

As far as I know, it is that the government cannot limit free speech, except under exceptional circumstances. And it seems that claims of free-speech infringement are attempting to extend their fingers into the private arena, as well as changing the concepts of acceptable moral and ethical behavior so that a recipient's 'feelings' can be disregarded as irrelevant.
A case in point: at one time in the past, giving the finger to a policeman gave him or her the opportunity to investigate for disorderly conduct. Now it has become a protected legal free-speech expression, at least in Canada and the US.
 
Last edited:
  • #30
russ_watters said:
PF is heavily moderated. Yes, that goes against a principle of free speech. No, it does not violate any "basic given right such as free speech" because such a thing does not exist beyond the narrow scope I described above.
Exactly.
Yet how many people actually know the laws of the land, and where they can and cannot be applied?
Boorish behavior gained some credence as acceptable through the argument that the concept of free speech extends to, and applies in, general society.

russ_watters said:
If someone insults/berates you in your own house, wouldn't you kick them out? If someone insults/berates you in their house, wouldn't you leave?
For sane, rational people that is what would happen, and that would be the end of it.
For others, emotions take over, conflict ensues, and the outcomes are at worst tragic.