Study: Social media probably can’t be fixed

  • Thread starter: Filip Larsen
AI Thread Summary
A recent study suggests that the structural dynamics of social media platforms inherently foster negative outcomes, making proposed intervention strategies largely ineffective. The discussion highlights that the architecture of social media, rather than algorithms or user behavior, is to blame for toxicity and misinformation. Effective moderation is often overlooked, despite its success in smaller online communities, due to the challenges posed by scale and user engagement. Australia has introduced a ban on social media for users under 16 to protect young people from these harms. Overall, the conversation emphasizes the need for responsible platform management and the complexities of moderating vast online spaces.
Filip Larsen
Gold Member
As someone who has never really used social media and has thus been a bit baffled by the public discussion of the toxicity and other issues it brings about, I find that this study explains a lot, if it holds up:
https://arxiv.org/abs/2508.03385

Jennifer at Ars Technica also had a nice interview with one of the authors:
https://arstechnica.com/science/2025/08/study-social-media-probably-cant-be-fixed
Numerous platform-level intervention strategies have been proposed to combat these issues [with social media], but according to a preprint posted to the physics arXiv, none of them are likely to be effective. And it's not the fault of much-hated algorithms, non-chronological feeds, or our human proclivity for seeking out negativity. Rather, the dynamics that give rise to all those negative outcomes are structurally embedded in the very architecture of social media.
 
  • Like
  • Informative
Likes sbrothy, bhobba, jack action and 1 other person
The monster from the id has been loosed.
 
  • Haha
  • Like
Likes diogenesNY, bhobba and jedishrfu
PeroK said:
The monster from the id has been loosed.
And the Krell are rolling over in their graves.
 
  • Wow
Likes bhobba and jedishrfu
renormalize said:
And the Krell are rolling over in their graves.
They'll not feel so bad, seeing it happening to us as well!
 
PeroK said:
The monster from the id has been loosed.
I am only vaguely aware of what this title/meme implies. Is it meant here as an analogy to the observation made in the interview that fixing social media, even if it were technically possible, might also be complicated by the fact that the generation that has now grown up exposed to all these issues, believing social media really reflects what the world looks like, will simply "stick" to forming society according to those "skewed" views?
 
Filip Larsen said:
I am only vaguely aware of what this title/meme implies. Is it meant here as an analogy to the observation made in the interview that fixing social media, even if it were technically possible, might also be complicated by the fact that the generation that has now grown up exposed to all these issues, believing social media really reflects what the world looks like, will simply "stick" to forming society according to those "skewed" views?
It's a reference to Forbidden Planet (1956). The Krell civilization built a machine that granted their every wish. But when they went to sleep, a dormant, subconscious hatred for each other was unleashed; and, in the night, these monsters from the id were summoned and they tore the Krell limb from limb!

Many of us have independently thought we recognised the Krell machine in social media.
 
  • Like
  • Agree
Likes dwarde, diogenesNY, bhobba and 2 others
My favorite movie, with my grandfather Robby the Robot.

Forbidden Planet is loosely based on the Shakespearean play The Tempest.
 
I know exchanging movie memes and preferences always seems to bring people together, but as the OP (for once) I would like to encourage discussion relevant to the issues apparent with social media or the study thereof.

For instance, I am slightly puzzled not to see "effective moderation" on the list of mechanisms that might be considered candidates to reduce the negative effects, e.g. toxicity, since well-established moderation has (as far as I understand it) been very much a requirement for openly available online communities (like PF Forums) since the early days of BBS, Usenet news, etc., and for those communities it "seems to work". However, I also understand that social media today already employ various mechanisms to moderate, but apparently with less success, and I wonder if this is due to ineffective moderation "rules" (i.e. social media choose to still allow some content that sites like PF would moderate out) or if it is simply the scale of the network that makes the difference. It is obviously going to be more difficult in practice to achieve "perfect" moderation when 2 billion people hang out, as compared to, say, only 500, but I am curious whether some of the issues with global social media could in principle be sufficiently reduced if "only" social media would employ the right amount of moderation (ignoring here all the incentives social media obviously have for not wanting to do this).
 
Australia has banned all social media for those under 16:

Yes, Australia has indeed banned social media access for individuals under the age of 16. This landmark legislation, the Online Safety Amendment (Social Media Minimum Age) Act 2024, aims to protect young people from potential harms associated with social media use. The ban includes platforms like TikTok, X, Facebook, and Instagram.
 
  • Like
  • Wow
Likes sbrothy, mcastillo356, OmCheeto and 2 others
  • #10
Filip Larsen said:
For instance, I am slightly puzzled not to see "effective moderation" on the list of mechanisms that might be considered candidates to reduce the negative effects, e.g. toxicity, since well-established moderation has (as far as I understand it) been very much a requirement for openly available online communities (like PF Forums) since the early days of BBS, Usenet news, etc., and for those communities it "seems to work".
Strict moderation is rare because it takes a lot of effort* and reduces user engagement. What self-respecting capitalist CEO/investor wants that?

*May be fixable with automation.
 
  • Like
Likes DaveE and jedishrfu
  • #11
Consider the number of posts to be moderated. Sure, they can use AI to sift through them and then have a human moderator decide on rejected posts.
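Very roughly, that kind of two-stage triage might look something like the sketch below. To be clear, the scoring function is just a keyword placeholder standing in for a real toxicity classifier, and the thresholds and flag list are made up for illustration, not any platform's actual rules.

Python:
# Hypothetical two-stage moderation triage: an automated scorer does the
# first pass, and only borderline posts land in a human review queue.
# The scorer below is a trivial keyword placeholder, NOT a real classifier,
# and the thresholds are made-up numbers for illustration only.

AUTO_REMOVE = 0.8   # assumed: at or above this score, remove without review
NEEDS_HUMAN = 0.3   # assumed: scores in [NEEDS_HUMAN, AUTO_REMOVE) go to humans

FLAGGED_TERMS = {"idiot", "scum"}  # stand-in for a trained model's signal


def toxicity_score(text: str) -> float:
    """Placeholder scorer: fraction of words that hit the flag list."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in FLAGGED_TERMS for w in words) / len(words)


def triage(posts):
    """Split posts into published / human-review / auto-removed buckets."""
    buckets = {"published": [], "review_queue": [], "removed": []}
    for post in posts:
        score = toxicity_score(post)
        if score >= AUTO_REMOVE:
            buckets["removed"].append(post)
        elif score >= NEEDS_HUMAN:
            buckets["review_queue"].append(post)
        else:
            buckets["published"].append(post)
    return buckets


if __name__ == "__main__":
    sample = ["Nice result, thanks for sharing.",
              "You absolute idiot.",
              "idiot scum"]
    print(triage(sample))

Even in this toy version, the human queue only sees the middle band, which is the whole point: the volume problem is pushed onto the automated first pass, and the humans are left with the genuinely ambiguous (and often most disturbing) material.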

There was a documentary about the toll on these special moderators, who saw some truly horrific things, enough to give them PTSD or something worse.



Banning a user just means they create a new account. Checking if the IP matches a banned user means they start using a VPN...

Sadly, we have no self-destruct code we can send a banned user to brick their computer. Or maybe we could if they install the social media app, but then there would be lawsuits claiming social media bricked their computer for no reason.

Can you recall the Sony fiasco with rootkits being installed on all machines that played any Sony CD?

The idea was to prevent piracy via a software rootkit named Extended Copy Protection (XCP). However, XCP exposed the client machine to malware attacks, compromised system stability, and created potential privacy breaches.

Sony never told the user that this software was being installed. The court ruled that Sony went too far and consequently was punished in civil court, paying out damages to all affected parties.

I believe the only solution is a way to track all users through a verified ID, maybe even a Real ID, but people prefer to stay anonymous, since anonymity reduces the risk of cyberstalking by government or private entities.
 
  • #12
russ_watters said:
Strict moderation is rare because it takes a lot of effort* and reduces user engagement. What self-respecting capitalist CEO/investor wants that?

*May be fixable with automation.
I've believed from the beginning that any platform, whether TV, print, or Internet, must ultimately be responsible for everything it publishes or presents.

For example, there's a British tennis player who received her first death threat at the age of 16. No one has shown the will to prevent this.

In fact, attempts to control the excesses of social media are increasingly seen as attacks on free speech.
 
  • Like
  • Agree
Likes BillTre, phinds and russ_watters
  • #13
jedishrfu said:
I believe the only solution is a way to track all users through a verified ID, maybe even a Real ID, but people prefer to stay anonymous, since anonymity reduces the risk of cyberstalking by government or private entities.

If the platform cannot identify who wrote it, then it is responsible. Someone, for example, would be able to charge X with making a death threat against them.

PS: Saying that it's not possible to moderate the platform you've created should never have been accepted as a defence.
 
  • Like
  • Agree
Likes symbolipoint, BillTre, martinbn and 2 others
  • #14
Yeah, Zuckerberg's "move fast and break things" has ended up breaking a lot of people.
 
  • #15
In many ways, Z is a psychopath in his thinking, as are many leaders who place corporate profits over humanity.

There was a recent leaked document from Meta concerning GenAI on their platform. Several independent news organizations reviewed it.

Here are Reuters' and Axios' takes on the document:

https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/

https://www.axios.com/2025/08/15/meta-children-chatbot-flirt-reuters-hawley-investigation

And here's a review of some older leaked policies from FB:

https://www.reuters.com/technology/...instagram-harm-teens-senators-say-2021-09-30/

https://www.reuters.com/article/tec...-of-content-it-allows-guardian-idUSKBN18I0HG/
 
  • #16
Back in the 90s I used Usenet, the first social medium. It was very nice at first. It had about 1000 subgroups. Then AOL brought in ordinary people. Every unmoderated group -- the great majority -- sooner or later turned into a cesspool. All decent people left. Today's social media moderated by AI is far superior.

I learned that the political discourse of the great majority of Internet users consists entirely of insults. If you want to experience this for yourself, go to Rumble.

You can't make a silk purse out of a sow's ear.
 
  • Like
  • Sad
Likes TensorCalculus and collinsmark
  • #17
Hornbein said:
-- sooner or later turned into a cesspool. All decent people left. Today's social media moderated by AI is far superior.

Of course, it depends on the AI. I don't think I'd want Grok as a moderator.
https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content
Honestly, I'm not sure about any AI being a moderator at this stage, except maybe as an initial sweep.

Other than that, I agree with your post. I too started with Usenet and had a similar experience. Your description of Usenet degrading into a cesspool is right on the mark.
 
  • Like
Likes TensorCalculus
  • #18
There are definitely products out there aiming to "fix" social media:
One such tool is Topaz, developed by an old acquaintance of mine. It's definitely an interesting idea... but the extent to which it will actually help change social media is debatable...

Online forums can get really bad. And for now, I think I prefer human moderators. Sometimes the problem is that there aren't enough of them, or the rules aren't strict enough.
PF is the only (public) online forum that I'm active on, precisely because it's moderated heavily enough that I don't have to worry a whole lot about seeing too much really bad stuff. In other places, like Reddit, one would have to be much more careful.
 
  • Like
Likes russ_watters, BillTre, fresh_42 and 1 other person
  • #19
I understand that examples of social networks (over time) where moderation apparently didn't work are abundant, but since moderation does seem to work in some contexts (e.g. PF) I am still puzzled to identify the key enabler for why this is so. Is it network size alone, or do other factors play an essential part? For instance, on PF there is topic restriction, which (I understand) makes moderation a lot easier simply by avoiding the topics that most often incite toxicity, but in other contexts (e.g. in European cross-political discussion meetings like the Danish in-person People's Meeting) people are encouraged to discuss politics without things getting out of hand, though I do not know if that can be replicated online with the same success.

I strongly suspect that the broken windows theory also applies to online communities and moderation, since it's all about human nature (and not about the actual windows); that is, for moderation to work it quickly has to catch when a window is being broken and take steps to clean up (just like PF does), including efforts to stop repeat offenders. But again, if such moderation works for small networks, why doesn't it work for larger networks? Or does it work, but, as several here seem to say, social media are just in general "too lax" with such cleanup? Or perhaps it becomes too difficult to detect and handle the multitude of ways people (intentionally) can break a window without appearing to throw a stone?

To make that last one more tricky, we know there will probably always be trolls that for some reason primarily seem to join in order to wreak havoc or cause disruption. I understand some platforms try to work with up/down voting (on posts at least), and I assume that to some extent works to hide trolling behavior when it is done by only a few, but I wonder if any platforms have also tried account voting? The idea sounds naive, because this would obviously also allow for misuse in "bi-partisan" contexts where larger subgroups are in "strong disagreement" with each other. A stone to "break a window" with this mechanism would then also be to down-vote someone because you disagree with that person rather than because the person really misbehaved.
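As a toy illustration of what such "account voting" might look like, here is a minimal sketch in which votes on posts are weighted by, and feed back into, a per-account reputation score. All the names, weights, and the feedback factor are invented for the sketch; it is not any platform's actual mechanism.

Python:
# Hypothetical reputation-weighted "account voting" sketch. Every number
# here (starting weight, feedback factor, floor) is an assumption made up
# for illustration, not a description of any real platform.

from collections import defaultdict

reputation = defaultdict(lambda: 1.0)   # every account starts with weight 1.0
post_scores = defaultdict(float)


def vote(voter, author, post_id, up):
    """Apply one up/down vote, weighted by the voter's current reputation."""
    delta = reputation[voter] if up else -reputation[voter]
    post_scores[post_id] += delta
    # A small fraction of each vote feeds back into the author's reputation,
    # clamped to a floor so it can never go negative.
    reputation[author] = max(0.1, reputation[author] + 0.05 * delta)


# A few accounts genuinely flag a troll's post...
for voter in ("alice", "bob", "carol"):
    vote(voter, author="troll", post_id="p1", up=False)

# ...but the very same mechanism lets a disagreeing bloc pile onto a
# well-behaved author, which is the "breaking a window by down-voting" worry.
for i in range(20):
    vote(f"bloc{i}", author="goodfaith", post_id="p2", up=False)

print(dict(post_scores))
print(reputation["troll"], reputation["goodfaith"])

Even with a reputation floor, nothing in the mechanism itself distinguishes "this post breaks the rules" from "I disagree with this person", which is exactly the naive part of the idea.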

Is the conclusion on moderation really that it is only effective enough in small enough networks, and that in larger networks effective moderation would mean social media is no longer social media? If that is true, I guess that is somewhat equivalent to saying that humans in large enough groups cannot be expected to cooperate constructively with each other on an individual level without organizing into hierarchies or sub-groups.
 
  • #20
Filip Larsen said:
I understand that examples of social networks (over time) where moderation apparently didn't work are abundant, but since moderation does seem to work in some contexts (e.g. PF) I am still puzzled to identify the key enabler for why this is so. Is it network size alone, or do other factors play an essential part? For instance, on PF there is topic restriction, which (I understand) makes moderation a lot easier simply by avoiding the topics that most often incite toxicity...
Topic restriction does help, and over time we've narrowed the focus, eliminating politics, philosophy, "theory development" and pseudoscience. But more than that, it's a commitment to quality over quantity, and a recognition that it's never going to be huge (and I'm not sure it is even profitable at all*).
I strongly suspect that the broken windows theory also applies to online communities and moderation, since it's all about human nature (and not about the actual windows)...
It does.
But again, if such moderation works for small networks, why doesn't it work for larger networks?
It can, except insofar as it might prevent them from getting large to begin with, and interfere with profit due to the much lower engagement and much higher moderation workload. Improved automation can help with the latter.

*I'm strongly opposed to the idea of unpaid moderators in a significantly profitable company. It's exploitive.
 
  • Like
Likes DaveC426913, BillTre, PeroK and 2 others
  • #21
I bet the popular social media get a billion posts a day. Automated censorship is the only possibility. YouTube bans 45% of comments. Evidently they have an "if in doubt throw it out" style. Usually I can't figure out why a comment of mine is banned.
 
  • Like
Likes russ_watters
  • #22
Hornbein said:
I bet the popular social media get a billion posts a day. Automated censorship is the only possibility. YouTube bans 45% of comments. Evidently they have an "if in doubt throw it out" style. Usually I can't figure out why a comment of mine is banned.
Do you view that as censorship?
And as an affront to a basic given right such as free speech?

If one ( not pinpointing you yourself in particular, but a general person ) is in favour of free speech, and anti-censorship in general society, then why on social media, which is a function of society, should the rules of society not apply?

People who utter 'that's just them (the social media) clouding the issue under cover of free speech' most likely enjoy the cover of free speech in general society, not realizing that such blanket free-speech statements, about this or any other topic, have not been thoroughly thought through and are rather unintellectual. It could be, nevertheless, that they really are in favour of the 'thought police', at least for your thoughts, not their own, but who is to know, as no expansion on the utterance is given for one to feel out their actual opinion(s). It is left for one to decipher the meaning, to agree or disagree, but in doing so one has accepted the utterance as basic fact, which it may or may not be.
If one agrees, then the slippery slope can apply: from the curtailment of free speech on social media over and above what is necessary, to enhanced curtailment in general society far beyond limits such as 'yelling fire' in a crowded movie theatre.
 
  • #23
I'll wager that in the future you cannot get elected to a public post unless virtually your entire life is on video.

EDIT: "You can see I didn't inhale!" :smile:
 
  • #24
There is a lot of focus on the bad sides of SoMe. Surely there must be some upsides, no?

EDIT: Although, I'm drawing a blank for any serious ones......

EDIT2: I admit I jumped to a conclusion for now, but it's the age-old dynamic of whoever yells loudest and has the most sociological/political backers behind them.
 
  • #25
Filip Larsen said:
Is the conclusion on moderation really that it is only effective enough in small enough networks, and that in larger networks effective moderation would mean social media is no longer social media? If that is true, I guess that is somewhat equivalent to saying that humans in large enough groups cannot be expected to cooperate constructively with each other on an individual level without organizing into hierarchies or sub-groups.
Sounds to me like general society in action, where groups and sub-groups exist. Since social media is a function of society, it should not be surprising that clumpings will also exist.

In general society there are rules and regulations for people to follow to be considered a good citizen.
If one of those laws is broken, and the offender is found out, there are consequences for the bad behavior. For example, not all speeders are charged with travelling over the speed limit. Those that are found out are found out by the moderators of the street and highway system, in this case the police. If the traffic is light and the network small (a town), just a few moderators can catch most, if not all, speeders over the expanse of the jurisdiction, as all infractions will be nearly in line of sight, and the moderator might just happen to know just about everyone, including those with a propensity to speed.

As the town and traffic network increases in size, more moderators will be needed for patrol.
https://en.wikipedia.org/wiki/List_of_countries_and_dependencies_by_number_of_police_officers
Note the relationship for Canada and the United States, two countries more similar than different in other aspects of economic representation as a basis of comparison - 184 and 422 per 100,000 inhabitants. (As a side note, the Vatican rate is 15,439 - go figure.) We can't say anything as to the efficiency of catching speeders in either jurisdiction, as the number of uncaught speeders is unknown.

If general society wanted every speeder to be caught, then the number of necessary moderators (policemen) would be a hard figure to pin down. A high number, such as a policeman on every corner, might be necessary, or then again overkill. Nevertheless, it still does not do away with those who might want to attempt to beat the system and do it anyway, or as part of a speeding convoy group, or as one who will speed if they feel 'safe' enough to do so anyway. Instead, general society has decided that catching some percentage of speeders, not all, is a reasonable outcome for the use of tax dollars. Nonetheless, that 'some' can be increased through the use of technology, such as hand-held radar and photo radar, where every vehicle can be tracked as it passes by, with nary a thought given by most of the population of drivers, as they are 'not doing anything wrong', just doing 'a little bit wrong', and it's for a good cause such as to 'save the children' (who could argue against that).

For social media policing, catching each and every infraction seems to be a desired goal (contrary to the reasoning applied to general society), with each and every posting on social media monitored.
Just as much as one does not want to be under investigation for saying 'fire fly' in a crowded movie theatre, one would not welcome a wellness check for saying the same in a posting in forest fire season.

The same goes for breaking up the clumps. Clumps that have 'undesirable' traits would, or should, be under scrutiny. 'Undesirable' would then have to be defined, and by whom, with the definition open to interpretation due to the politics of the day.
 
  • #26
Here is one group thing I just came across that looks kind of weird from the outside, including self-censorship, or rather a ban on postings during the event. An expensive undertaking for the young women.

https://www.independent.co.uk/news/...sa-tiktok-university-of-alabama-b2808808.html
Rushtok
Across the country, rush is typically a 10-day event where “prospective new members” try out sororities through rounds of activities prescribing a strict slate of outfits and etiquette. In the lead-up, girls often submit "social resumes" and letters of recommendation from sorority alums.

Participation often requires an eye-opening price tag.

After spending sometimes tens of thousands of dollars on outfits, makeup and plane tickets, each of this week's 2,600 recruits paid $550 to participate. It's non-refundable if they don't get picked. If accepted, they'll pay an average $8,400 a semester to live in the sorority house, or $4,100 if they live elsewhere, according to the Alabama Panhellenic Association.

The pressure can be so intense that an industry of consultants now helps girls navigate the often mysterious criteria for landing a desired sorority. Some charge up to $10,000 for months of services that can begin in high school.
 
  • #27
256bits said:
Do you view that as censorship?
And as an affront to a basic given right such as free speech?

If one ( not pinpointing you yourself in particular, but a general person ) is in favour of free speech, and anti-censorship in general society, then why on social media, which is a function of society, should the rules of society not apply?

People who utter 'that's just them (the social media) clouding the issue under cover of free speech' most likely enjoy the cover of free speech in general society...
Freedom of speech is a government-guaranteed right, meant to prevent government suppression of dissenting ideas. It has nothing to do with social media beyond the fact that the government can't perform or mandate the censorship. [Late edit] Except for protection from criminal abuse/harassment.

PF is heavily moderated. Yes, that goes against a principle of free speech. No, it does not violate any "basic given right such as free speech" because such a thing does not exist beyond the narrow scope I described above.

If someone insults/berates you in your own house, wouldn't you kick them out? If someone insults/berates you in their house, wouldn't you leave?
 
  • #28
256bits said:
If one ( not pinpointing you yourself in particular, but a general person ) is in favour of free speech, and anti-censorship in general society, then why on social media, which is a function of society, should the rules of society not apply?
Those 'rules of society' also have something to say about responsibility (and occasionally: consequences too).
Why is it always only about free speech, then?
 
  • Like
Likes russ_watters
  • #29
Rive said:
Those 'rules of society' also have something to say about responsibility (and occasionally: consequences too).
Why is it always only about free speech, then?
Because it is a catchy phrase, and who can be against free speech?

And by 'rules of society' I do not mean only those written down in a constitution, or passed by majority vote by lawmakers. Unwritten rules, such as pleasantries ( handshakes, greetings upon meeting someone ) as a simple example, also apply.

As far as I know, it is the government that cannot limit free speech, except under exceptional circumstances. And it seems that the arena for free-speech infringement claims is attempting to extend its fingers into the private arena, as well as changing the concepts of acceptable moral and ethical behavior so that a recipient's 'feelings' can be disregarded as irrelevant.
A case in point - at one time in the past, giving the finger to a policeman gave him/her the opportunity to investigate you for disorderly conduct. Now, it has become a protected legal free-speech expression, at least in Canada and the US.
 
  • #30
russ_watters said:
PF is heavily moderated. Yes, that goes against a principle of free speech. No, it does not violate any "basic given right such as free speech" because such a thing does not exist beyond the narrow scope I described above.
Exactly.
Yet how many people actually know the laws of the land, and where they can and cannot be applied?
Boorish behavior gained some credence as acceptable through the argument that the concept of free speech extends to, and applies in, general society.

russ_watters said:
If someone insults/berates you in your own house, wouldn't you kick them out? If someone insults/berates you in their house, wouldn't you leave?
For sane, rational people that would happen and it would be the end of it.
For others, emotions take over, conflict ensues, and the outcomes are at worst tragic.
 
  • #31
256bits said:
And by 'rules of society' I do not mean only those written down in a constitution, or passed by majority vote by lawmakers. Unwritten rules, such as pleasantries ( handshakes, greetings upon meeting someone ) as a simple example, also apply.

But you must agree that even outside of law enforcement/government, "rules of society" dictate that words and actions do have consequences.

Free speech means that you won't be arrested by law enforcement (or prosecuted under the legal system) for voicing your opinions. It does not mean that you are free of social and societal consequences from voicing your opinion.

256bits said:
A case in point - at one time in the past, giving the finger to a policeman gave him/her the opportunity to investigate you for disorderly conduct. Now, it has become a protected legal free-speech expression, at least in Canada and the US.

True. But now imagine walking into a coffee shop and giving the finger to the barista, followed by giving the finger to patrons sitting around at tables. At the very least, by the rules of society, you can expect to be called out for being a jerk. And more likely than not you'll be kicked out of the coffee shop. Protesting the consequences by crying, "I shouldn't have been kicked out of the coffee shop. Free speech, free speech," misses the whole point of what "free speech" is really about.
 
  • Like
  • Informative
Likes russ_watters, symbolipoint and 256bits
  • #32
collinsmark said:
Free speech, free speech," misses the whole point of what "free speech" is really about.
For the general public, is this it?
Voltaire — ‘I may not agree with what you have to say, but I will defend to the death your right to say it.’

I am not advocating boorish behavior in private (or public) settings.
Invoking a trespass upon the individual(s) can deal with the physical situation, at a specific location such as a cafe, or the home.
In the digital world, the banning of the account, and to a lesser degree, the removal of postings accomplishes a similar outcome.

The free-speech aspect in the media world, be it digital, print, video, radio, or the auditorium, comes down to the censorship of ideas that may offend, and there is always a likelihood that someone out there will be offended.

The 'gatekeeper of ideas' is problematic in that "Will the gatekeeper be completely impartial in the selection process?"
Should the gatekeeper be an agency directed by the government? In that case, the legal aspect of the right to free speech arises, and questions arise as to whether the agency is immune to the whims of the political flavour of the day.
Should the gatekeeper be a private enterprise, which will have its own leanings as to what is proper, with differences from one enterprise to the next?

In either case, funds have to be allocated (as stated in a posting earlier), either with tax dollars or designated fees from the enterprise (or user) paid to the government-run agency, or by the enterprise(s) themselves running their own system. A lot of people consider the tax-dollar system to be free. The fee system, or the enterprise system, gets pushback.
 
  • #33
256bits said:
For the general public, is this it?
Voltaire — ‘I may not agree with what you have to say, but I will defend to the death your right to say it.’
https://en.wikipedia.org/wiki/Evelyn_Beatrice_Hall

In The Friends of Voltaire, Hall wrote: "I disapprove of what you say, but I will defend to the death your right to say it" as an illustration of Voltaire's beliefs. This quotation – which is sometimes misattributed to Voltaire himself – is often cited to describe the principle of freedom of speech.
 
  • #34
256bits said:
For the general public, is this it?

[...]

Invoking a trespass upon the individual(s) can deal with the physical situation, at a specific location such as a cafe, or the home.

In the digital world, the banning of the account, and to a lesser degree, the removal of postings accomplishes a similar outcome.

So far I agree. But hold on. Two things.

  1. In my earlier posts, I thought we were talking about social media platforms like Facebook or Twitter (X or whatever it's called now) or YouTube, etc. All privately owned. But rereading your recent post, it sounds like you're discussing a taxpayer-funded, community-owned, public social media site. I didn't know such places existed. Is that what you propose (a future platform maybe)?
  2. "Freedom of speech" applies online as it does anywhere else. There's no need for a distinction. If you live in a country that has freedom of speech, you can post your opinions all you want on Facebook or Twitter, or the feedback section of your local government's website, and rest assured that cops won't knock on your door and arrest you because they don't like your opinion. But that doesn't mean there are no consequences for such actions. If you're a jerk, expect to get called out for it (just like anywhere else). Your behavior might be discouraged and/or even mocked. If you break the rules of the site, expect repercussions (i.e., similar to no placing advertisements or skateboarding in the public park) . If it's a privately owned site (all social media sites that I presently know of are), they may refuse you service or even ban you altogether. Just like a coffee shop.
Takeaways:
Freedom of speech, where applicable, is no different online than it is anywhere else.
Freedom of speech does not imply freedom from consequences.
 
  • #35
Freedom of speech looks different from freedom of 'the press'. The latter is the freedom of press owners to promote their views widely whilst suppressing (or mocking or misrepresenting) others, including politically partisan ones, with minimal constraints. They can be held accountable for slander... if the ones slandered have deep enough pockets, but telling the truth is not a requirement, and telling lies appears to be not just permitted but was normalised long before social media.

Where social media owners use their 'freedom' to promote some views and suppress others, it looks to me like they are exercising freedom of the press. I am not as unthinkingly in favour of freedom of the press as I am of freedom of speech for individuals.

In order to influence - whether to induce spending money on advertisers' products and services or to promote political and other viewpoints - getting attention is the crucial first step.

Pressing people's buttons, triggering their triggers, is a powerful way to grab attention. I do fear the use of AI for this purpose; reacting to what presses our buttons seems to switch off the capacity to take in new information or to judge its validity and value.
It is the attention that counts most with social media and being nasty gets lots of it. Sometimes that is what it takes to induce participation.

Or we can be distracted from thinking about what is important... did that cute cat really do that? Now we can add the question of whether it was AI-generated to our distractions.

As an Australian I don't know how the new age requirements will work out - or to what extent they can be circumvented. Or to what extent the companies will encourage such circumventions. They should be able to block accounts coming from VPNs. It seems like some kind of age restrictions are widely supported in principle - there really is a lot of unsavoury material around that children should not be exposed to. In practice, people of all ages will probably find the signing-up and signing-in processes (more) frustrating and annoying.
 
  • #36
collinsmark said:
"Freedom of speech" applies online as it does anywhere else.
Except that 'IRL' people have only one physical presence, and that natural limitation severely hinders everybody from doing plenty of things that are completely 'natural' online.

The whole framework of discussing the 'freedom of speech' was set up and based on that physical presence.

256bits said:
‘I may not agree with what you have to say, but I will defend to the death your right to say it.’
Sometimes I wonder what that sentence would have been about with the other party being a deaf ghost set at high respawn rate and with a loudspeaker.
 
  • #37
collinsmark said:
it sounds like you're discussing a taxpayer-funded, community-owned, public social media site.
That could be a third way to regulate content - direct nationalisation of the media industry - so that only messaging supportive of government initiatives is expressed.

The government involvement that I was discussing is the development of a separate body with the powers of regulation, similar to what is now in place for a wide range of industries. The transportation boards, in one form or another, regulate the movement of vehicles, whether by land, air, or sea. Elon Musk, for example, requires approval of a launch window before sending a rocket into space. The CRTC in Canada is given the power to regulate the airwaves, giving out licences for broadcast. The regulation was extended to cable. The extension to the internet becomes problematic due to the international aspect of that segment of the media industry.
 
  • #38
Rive said:
Sometimes I wonder what that sentence would have been about with the other party being a deaf ghost set at high respawn rate and with a loudspeaker.
Cute phrases are only cute in their own context, outliers proving the exception rather than the rule.

"The early bird gets the worm"
"It is never too late"
 
  • #39
Filip Larsen said:
I understand that examples of social networks (over time) where moderation apparently didn't work are abundant, but since moderation does seem to work in some contexts (e.g. PF) I am still puzzled to identify the key enabler for why this is so. Is it network size alone, or do other factors play an essential part?
If the internet were a brick-and-mortar corporation, the normal process would be for it to grow geometrically, but in doing so, it would set up barriers between its processes and its consumers, perpetually distancing itself.

Think about a giant company like Coca-Cola (in which Berkshire Hathaway apparently holds a large stake). No cola consumers could even get onto the grounds of the corporation - or its parent company - let alone into its meeting rooms to give their opinion.

But the internet is the great equalizer, and puts the consumer right up rubbing shoulders with the owner.

Imagine having direct, real-time access to every employee and user of the corporation. Imagine you, a user in Sheboygan, WI, saying "I think that Cola-drinkers all over the country should be offered name badges that display our gender pronouns" and the CEO of Coca-Cola in Atlanta responding with "That's a great idea. We'll do that ASAP."


That, and the fact that it costs comparatively nothing to set up an online service if what someone is offering is not exactly to your taste. If PF were flooded with millions of users, you can bet that there would be a LOT more friction about how it should be run. The larger an organization, the harder it is to please everyone. Why try? Let them go to a Politics / Climate Change / Philosophy forum if that's what they want.

And that keeps numbers down. And that's a good thing. Focus. Specialization.
 
  • #40
Has this passed in the US?
The article states it has been presented a few times with some changes; a few tech companies and platform giants have voiced support, while other entities have voiced disagreement.

The article is from May 2025 (bravo to Time for dating their content).
https://time.com/7288539/kids-online-safety-act-status-what-to-know/

I believe Australia, as well as the UK, has passed a bill addressing the issue of age verification.
@Ken Fabian
 
  • #41
Rive said:
Why is it always only about free speech, then?
It is frequently misunderstood (in the US anyway) to be both absolute and universal. It's a cultural quirk.
 
  • #42
256bits said:
The 'gatekeeper of ideas' is problematic in that "Will the gatekeeper be completely impartial in the selection process?"
It's really not, unless someone is over-interpreting the concept of freedom of speech, as I described before: it doesn't exist outside of government, so there's no problem if people/companies want to do censorship. There's nobody to require anybody to be impartial.
Should the gatekeeper be an agency directed by the government? In that case, the legal aspect of the right to free speech arises, and questions arise as to whether the agency is immune to the whims of the political flavour of the day. Should the gatekeeper be a private enterprise, which will have its own leanings as to what is proper, with differences from one enterprise to the next?
Except, narrowly, for criminal levels of abuse there's no legal requirement. Beyond that it's up to the company/forum. The company/forum does the moderation, and the government just judges if it is sufficient to prevent criminal abuse. This can be explicitly mandated, but right now is mostly just via lawsuits.
 
  • #43
I just read an article about Reanne Evans, a 12-time women’s world snooker champion.
The abuse she receives on social media has been a constant source of distress, intensifying even when she wins. “It’s the usual stuff, but I get even more abuse when I win,” she shares. “I could post a picture of a tree, and I’d still get abuse.”
Source: https://sportgoal.com.ng/unexpected-health-issues-and-online-abuse-forces-snooker-queen-to-quit/

This is so ridiculous that I wonder what kind of people these are. Snooker? Really? Women's snooker? I understand that dopamine is the key to a lot of decisions people make, from supermarkets to the "like" systems on social media. But how can posting such abusive comments release dopamine?
 
  • Like
  • Sad
Likes collinsmark and Filip Larsen
  • #44
256bits said:
That could be a third way to regulate content - direct nationalisation of the media industry - so that only messaging supportive of government initiatives is expressed.
Government-funded media can be 'at arm's length'. Australia's experience (possibly the UK's and elsewhere too) is that such media are not government mouthpieces and can and do criticise governments, parties, and initiatives - I think more often with more freedom and less bias than commercial media. In some cases commercial media are clearly, unequivocally politically partisan and on some issues show much more bias; e.g., climate-science denial, and economic and social/political fears of climate initiatives and emissions-reduction efforts, have been promoted primarily via commercial media.

Politicians are always looking for a sniff of bias - but are often seeking bias in their favour as much as opposing bias per se. Commercial media themselves are in some cases constant critics, some mastheads having staff writers that relentlessly berate and criticise taxpayer-funded media.

Taxpayer-funded media like we have is not perfect, no, but is it intrinsically seeking to evade criticism and please Australian governments? No. Do elected governments try to 'stack' the board in pursuit of partisanship? Yes, that happens too. But if anything I think they take their day-to-day cues from other media and are inclined to join in the sensationalist media pile-ons and witch hunts.
 