The Jobs People Do to Protect Us from Ourselves

Thread starter: jedishrfu
Tags: Jobs
In summary: social media companies employ well over 100,000 content moderators to remove disturbing content from their sites in order to protect users. Moderators often suffer from PTSD after seeing hundreds of beheadings and other disturbing videos.
  • #2
Really sad. :frown:
 
  • #3
So glad Facebook will protect me (even though I never asked for it) from all the scary things in the world so I can just continue to live in a bubble of ignorance.

Maybe if there are hundreds of beheading videos out there, the problem isn't that there are too many beheading videos, but rather that there are too many beheadings happening. Maybe if people were forced to confront the realities of the world, we'd find solutions to our problems much quicker?
 
  • Like
Likes OmCheeto and Bystander
  • #4
dipole said:
Maybe if there are hundreds of beheading videos out there, the problem isn't that there are too many beheading videos, but rather that there are too many beheadings happening. Maybe if people were forced to confront the realities of the world, we'd find solutions to our problems much quicker?
You may well be correct, but speaking as both a shareholder and a user: that's not their job, and it is unfair to pin that responsibility on them.
 
  • #5
This reminds me of a three-year-old seeing news coverage of 9/11. He got really scared because, with each replay of the news clip, he saw building after building falling down and people running away. I can imagine there are many sick people out there who keep reposting these same videos over and over again for the moderators to delete.

The sad part is that the moderators have no psychological support from the companies they work for and will likely suffer the effects of PTSD for a long time to come.
 
  • Like
Likes russ_watters
  • #6
After some googling, I discovered that Wired did an article on this 4 years ago.
Quite brutal reading. Not for the faint of heart.

...the number of content moderators scrubbing the world's social media sites, mobile apps, and cloud storage services runs to “well over 100,000”...

Eight years after the fact, Jake Swearingen can still recall the video that made him quit. ...

“Everybody hits the wall, generally between three and five months,”
 
  • #7
OmCheeto said:
After some googling, I discovered that Wired did an article on this 4 years ago.
Quite brutal reading. Not for the faint of heart.

...the number of content moderators scrubbing the world's social media sites, mobile apps, and cloud storage services runs to “well over 100,000”...

Eight years after the fact, Jake Swearingen can still recall the video that made him quit. ...

“Everybody hits the wall, generally between three and five months,”

Yes, I remember that article. No one should have to work a job like that, and yet perhaps everyone should work a job like that, if only to realize the cruelty that exists in this world and that we should actively find ways to stop it.
 
  • Like
Likes russ_watters
  • #8
jedishrfu said:
...
The sad part is that the moderators have no psychological support from the companies they work for and will likely suffer the effects of PTSD for a long time to come.

Speaking of PTSD, the following is from an article by The Atlantic, dated March 2017;

One example of this difficulty is in a recent and first-of-its-kind lawsuit filed by two Microsoft CCM workers who are now on permanent disability, they claim, due to their exposure to disturbing content as a central part of their work.

I've read at least 4 different articles this afternoon about this "Content Moderator" position.

jedishrfu said:
Yes, I remember that article. No one should have to work a job like that, and yet perhaps everyone should work a job like that, if only to realize the cruelty that exists in this world and that we should actively find ways to stop it.

Good idea, about making everyone do it, if even only for an hour or two.
 
  • #9
Yes, actually I was thinking it could be a community responsibility. But another thing we need to do is to automatically ban folks who post this content as quickly as we can and to coordinate across all social media platforms to make the ban a hard ban and to even track them down if possible.

However, we know all technology can be spoofed or that bad actors will always find a way to undermine any protections we put in place.

I guess the promise of AI is our only hope: identify bad content when it's uploaded, and delete it and ban the poster at that point, since other schemes are doomed to fail too.
 
  • Like
Likes russ_watters
  • #10
jedishrfu said:
Yes, actually I was thinking it could be a community responsibility. But another thing we need to do is to automatically ban folks who post this content as quickly as we can and to coordinate across all social media platforms to make the ban a hard ban and to even track them down if possible.
...
I think this is why I read so many articles. I couldn't believe nothing was being done about the "freaks" posting that stuff.

Fortunately, the article embedded in my last post gave me a little bit of peace of mind.

https://www.mcclatchydc.com/news/nation-world/national/article125953194.html
2017.01.11

As part of their job, moderators for social websites have to view some of the most disturbing videos and photos on the internet. Once the employees have determined that the images violate the company’s community standards and the law, they delete the accounts of the people who posted them and report the incidents to the National Center for Missing & Exploited Children, per federal law.

My question now is, are the contracted employees in foreign countries doing anything similar? Or are they just clicking "Delete" on crimes?
 
  • #11
Wow, that's a good question. I bet the answer is no. So many times when we think people are doing the right thing we find we've been misled.

I had a thought on how to reduce the moderated content (a rough sketch of the matching step follows the list):

1) A bad actor posts a bad video.
2) People flag it as bad, and after X flags it's removed from view.
3) A moderator decides it's bad.
4) A CRC of the file is taken and registered with a cross-social-platform database of deleted content (in a sense, we are pooling all moderators together).
5) Each time a new upload matches a CRC from the content database, it's removed from view.
6) Each time a file is viewed, its CRC is checked against the content database; it's removed if flagged there, and the uploading account is deleted.
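
For concreteness, here is a minimal Python sketch of steps 2-6. Everything in it is hypothetical: ContentBlocklist stands in for the shared cross-platform database, handle_upload for a platform's upload path, and the flag threshold is arbitrary. A SHA-256 digest stands in for the CRC mentioned above (same fingerprinting role, much harder to collide).

```python
import hashlib

FLAG_THRESHOLD = 10  # hypothetical number of user flags ("X clicks") before a post is hidden


def file_digest(data: bytes) -> str:
    """Fingerprint the raw file bytes; byte-identical re-uploads hash identically."""
    return hashlib.sha256(data).hexdigest()


def flag_post(post_id: str, flag_counts: dict) -> bool:
    """Step 2: return True (hide pending moderator review) once enough users flag a post."""
    flag_counts[post_id] = flag_counts.get(post_id, 0) + 1
    return flag_counts[post_id] >= FLAG_THRESHOLD


class ContentBlocklist:
    """Stand-in for the shared cross-platform database of moderator-confirmed content."""

    def __init__(self):
        self._banned = set()

    def register(self, digest: str) -> None:
        """Steps 3-4: a moderator confirmed the content is bad, so pool its digest."""
        self._banned.add(digest)

    def is_banned(self, digest: str) -> bool:
        return digest in self._banned


def handle_upload(data: bytes, uploader: str,
                  blocklist: ContentBlocklist, delete_account) -> bool:
    """Steps 5-6: refuse any upload whose digest is already known bad and drop the account."""
    if blocklist.is_banned(file_digest(data)):
        delete_account(uploader)  # uploading known-bad content costs the account
        return False              # never shown to anyone
    return True                   # visible, subject to the usual flagging and moderation
```

One caveat that matches the last paragraph of this post: an exact digest (CRC or SHA-256) only catches byte-identical re-uploads, so a re-encoded or trimmed video slips past it, which is reportedly why real systems such as Microsoft's PhotoDNA rely on perceptual hashes rather than exact file hashes.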

Alternatively, we could place the bad actor accounts into a kind of limbo where the bad actor thinks everything is cool but he/she is really interacting with AI characters and no one is seeing their content. Like the little child answering the phone and never giving it to the parent. We might even glean valuable info from the bad actor based on the AI interaction that can be used to finger them on other accounts.

This works for static files that are re-uploaded after being deleted. Video files that have been altered are harder to catch. But this strategy would slow the multi-site re-uploading process. The other problem is that laws vary across countries, and content found illegal here may not be illegal there.
 
  • Like
Likes OmCheeto
  • #12
jedishrfu said:
Good idea, about making everyone do it, if even only for an hour or two.
I've watched exactly two such videos (one a beheading, the other a burning). I felt like I needed to. It's one thing to talk about evil in the abstract/hypothetical, but it's another thing entirely to see it.

However, I think seeing it in person would be another level entirely. For the jobs that require experiencing it in person (military, police/detectives, EMTs, firefighters, etc.), it must stay with you for life.
 
  • #13
You can get PTSD just from seeing horrific scenes that you know or believe to be real. Sometimes this happens to jurors who view horrific crime scene photos and who are unprepared to process them.

We are somewhat jaded by our horror movie genre, where they strive for ever more realism; yet we know it's a movie, so we cringe, but it doesn't really affect us the same way. However, little children can be quite frightened by movies because they believe them to be real.
 
  • #14
jedishrfu said:
but he/she is really interacting with AI characters and no one is seeing their content.
Multiple identities/sockpuppets are how much of a problem for the PF mods? And you can wager many marbles of chalk that genuinely bad actors are very familiar with all the phony ID tricks.
 
  • #15
Yes, we get them sometimes. However, @berkeman is ever vigilant, as is the software that identifies and reports them.

It was interesting to see how the reporters identified the two suspects in the Skripal attack. They used a number of schemes, including photo analysis and biography analysis, where a fake bio keeps the same birthday (but not the year), or the home town is moved to somewhere near where the operative actually lived as a child. The idea is that it's easier to make a fake bio from the operative's real bio with certain identifying pieces changed, and there is then much less to remember while on a mission.

https://www.bellingcat.com/news/uk-...ing-suspect-dr-alexander-mishkin-hero-russia/

Unlike the case of Anatoliy Chepiga, “Petrov”’s cover identity retained most of the biographical characteristics of the authentic Mishkin – such as the exact birth date, first and patronymic name, and first names of his parents. The family name was changed to Petrov, and the birthplace was moved to Kotlas, town approximately 100 km from his actual place of birth (reaching Kotlas from Loyga by car, ironically enough, takes 10 hours as it requires a 350-km detour). Under his cover identity, Mishkin was registered at a Moscow address occupied by a different individual who is likely unrelated to him and unaware of his existence. The real Mishkin, under his authentic identity, lived with his wife and two children at a different address in Moscow.

https://www.dw.com/en/bellingcat-identifies-skripal-poisoning-suspect-as-russian-colonel/a-45651916

https://www.cnn.com/2018/10/08/uk/skripal-russia-gru-petrov-suspect-intl/index.html
 
  • Like
Likes berkeman
  • #16
When did the moderation of content actually become mainstream? Recently, I suppose.
Hasn't Facebook, say, been raked over the coals for not doing enough to combat "fake news" from users, and had to put in place more monitoring of content? So say hello to the Philippines.

It's an extra expense the social media companies would rather not have, and shutting down users cuts into their bottom line as the audience available to advertisers shrinks.
I don't think they are doing this just to keep the end user safe from unsuitable content; they were dragged into it by accusations of irresponsibility (whether they should be liable for content, or are merely its carriers rather than its initiators, is another argument). So censorship is brought in to fix the "problem" without regard for the people who actually have to view the content, a slight oversight on the part of the complainers: we want sanitized content for ourselves.
Wasn't there a post here on PF about educational chemical-reaction videos being terminated (on YouTube, I think), just because, swept up as collateral damage in the movement to sanitize? Who decides what is acceptable to pass through (and what qualifications the monitor job requires) is a big question. So some of the good goes out with the bad; how much, no one really knows.

I remember the stupid naked-picture scandals of days past, where parents were brought up on charges of child porn because the developer of the film, on seeing a picture of a two-year-old splashing in the bathtub, provided enough "evidence" for the authorities to be brought in. Good citizenship run amok. Well, you really don't know, do you, so why take a chance: flag anything and everything as suspicious. It seems to have leveled off to some extent as the smell test has become more rational.

I take the social media purge to be of the same ilk: check everything that looks gory, as one category of possible censorship, since it may be offensive to some but not all.
The companies do not want to look as if they are promoting any antisocial behavior, for their own survival and to evade bad publicity. Our protection was the last rung of their checklist, I would presume, as witnessed by the previous "wild west, anything goes" era. I guess it has moved up a notch.

I have seen a video or two of a liposuction procedure, which I found gross. Are medical procedures of that sort the type of stuff the monitors watch? I think one can get PTSD just from watching the doctor move the wand around under the skin. Should a video of something like that be banned from the general public?
I also saw a bus run over a pedestrian, with the end result that the victim under the bus, unless they were double, triple, or quadruple jointed, had no chance of survival. I am sorry that I clicked on that one, as it still gives me the jeepers just thinking and reflecting about it.
 
  • #17
256bits said:
When did the moderation of content actually become mainstream? Recently I suppose.
Without knowing specifically what content on what forums you are referring to, I'll nevertheless answer: forever.
Hasn't Facebook, say, been raked over the coals for not doing enough to combat "fake news" from users, and had to put in place more monitoring of content?
Yes, but that does not mean it was ever unmoderated.
It's an extra expense the social media companies would rather not have, and shutting down users cuts into their bottom line as the audience available to advertisers shrinks.
You're missing the point: The purpose of moderation is to improve quality to attract more users. Same goes for PF, and it's much of the reason why PF is so successful.
 
  • #18
I hope that the condition of PF mentors is not so bad...
 
  • Like
Likes berkeman
  • #19
russ_watters said:
The purpose of moderation is to improve quality to attract more users.
I'm not so sure about that when it comes to 'social media' in general. Getting clicks is quite an art, and it has plenty of dark sides. PF has chosen quality (and they got me, for sure), but, for example, I'm from a different forum where at some point it became policy to accept crackpots and worse (and drive away any real experts) to get more 'customers' through endless quarrels. Or to occasionally let a questionable post slip through and then watch everybody pile onto it... It became just a common practice to pepper up the click counts.

I think @256bits gets this right about 'social media' (companies).
 
  • #20
Wrichik Basu said:
I hope that the condition of PF mentors is not so bad...
You wouldn't believe the things we see: things that travel faster than the speed of light, machines that never stop working, spooky action at a distance...

And we're not even allowed to remove pictures of dead (and alive) cats.
 
  • Like
Likes StoneTemplePython, BillTre, berkeman and 4 others
  • #21
And cracked pots... :oldeek:
 
  • Like
Likes Wrichik Basu and DrClaude
  • #22
russ_watters said:
Without knowing specifically what content on what forums you are referring to, I'll nevertheless answer: forever.

Yes, but that does not mean it was ever unmoderated.

You're missing the point: The purpose of moderation is to improve quality to attract more users. Same goes for PF, and it's much of the reason why PF is so successful.
The article doesn't mention specifically to whom the work is contracted out, so I'm keeping it general, as they did.
Remember file-sharing way back? Pretty much gone due to copyright infringement (i.e., Napster), and most other sites like that have been shut down or curtailed: censorship backed up legally, for obvious reasons. Privacy issues and the sharing of personal information come up in the news. In some ways the legal system is playing catch-up to the fast-changing digital age. I tend to think that the social media enterprises are realizing that they do owe a responsibility to the public, and that they had better play by some better rules so as not to attract undue attention and get the finger pointed at them. Wasn't there a recent uproar, during an election, about the mining of personal information whereby the individuals had not given consent for that usage? It seems to have resulted in the hollowing out of a company or two that thought what they were doing was just fine. Looking back over their shoulders, several ultra-conspiratorial sites have recently been shut down for questionable content. So no, moderation of content was not a prime concern until recently.

PF is in a niche, and has tweaked its rules and user guidelines from suggestions and feedback from its shareholders, who by the way are its users. It is probably unique in that way, in that it does follow through in a decent and thoughtful manner.
Gotta run again...
 
  • #23
Facebook also "moderates" opinions and pages for more obscure reasons, but that's another story...
 
  • #24
jedishrfu said:
The other problem is laws vary across countries and content found illegal here may not be there.
That's for sure.
A couple of months ago on Facebook, I mentioned some small Asian country. One of my "buddies" chimed in that they are all a bunch of child molesting perverts. Having never heard of such a thing, I googled around until I found "Trafficking of children" at wiki. Wow! That little country has a lot of company. (No link provided, as it's kind of depressing)

I also found out, just a couple of days ago, that a "country in the news" employs capital punishment for "witchcraft and/or sorcery". :olduhh:

Anyways, one of the (now 9 I think) articles I read about "content moderators" stated that the Philippines was chosen as they have a lot of contact with America, and would therefore share some of our moral values. Of course, the fact that they only make about $2/hr is why these companies choose to outsource to 3rd world countries.

My maths says that even at those salaries, it's costing them about $2 billion/year, vs. about $6 billion per year at the U.S. minimum wage:
cost/yr = 24 hr/day × 365 days/year × 100,000 employees × $(wage)/hr
(Though that's the federal minimum wage, and it looks like there are a couple of states where it is actually lower: $4.5 billion/yr [ref]. Not sure how that works. Maybe we shouldn't get pedantic about that, and just accept that it's cheaper to outsource to the Philippines.)
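
As a quick sanity check, here is the same arithmetic in a few lines of Python. The 24/7 staffing assumption, the 100,000 headcount, and the $2/hr figure come from the posts above; the $5.15/hr rate is simply back-solved from the $4.5 billion figure and is only illustrative.

```python
# Rough check of the payroll arithmetic above: cost/yr = hours/yr * headcount * hourly wage.
HOURS_PER_YEAR = 24 * 365      # assumes round-the-clock coverage, as in the estimate above
MODERATORS = 100_000           # the "well over 100,000" figure quoted from Wired


def annual_cost(hourly_wage: float) -> float:
    """Total yearly payroll in dollars for the assumed moderator workforce."""
    return HOURS_PER_YEAR * MODERATORS * hourly_wage


print(f"Philippines @ $2.00/hr : ${annual_cost(2.00) / 1e9:.1f} billion/yr")  # ~1.8
print(f"US federal  @ $7.25/hr : ${annual_cost(7.25) / 1e9:.1f} billion/yr")  # ~6.4
print(f"Lower rate  @ $5.15/hr : ${annual_cost(5.15) / 1e9:.1f} billion/yr")  # ~4.5
```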

So if I'm interpreting correctly what wiki says:

Alphabet Inc. (Google LLC (YouTube, LLC))
  net income: (-)$12.7 billion/year
Facebook Inc.
  net income: $15.9 billion/year

it kind of makes sense, economically.
 
  • #26
jedishrfu said:
There's a related article too about journalists in the Philippines used by a company called Journatic that would generate content for community newspapers so they could keep their staff of reporters underpaid and to a minimum. They would write articles under false bylines using police reports.

https://www.thisamericanlife.org/468/switcheroo/act-two-0

https://www.thisamericanlife.org/extras/updates-about-journatic

Fascinating.

ps. Here's a transcript of the first link, which aired June 29, 2012, btw. (Old news!)
But, as I always say, old news is new news, if I haven't heard of it before.

pps. "Journatic" is now called "LocalLabs".
 
  • #27
OmCheeto said:
it looks like there are a couple of states where it is actually lower

Facebook and all the other internet companies will be covered under federal law, so the federal minimum applies. I can't imagine anyone subjecting themselves to this type of hazard for less than the federal minimum wage.

BoB
 

