Working for Facebook Can Give You PTSD

Beheadings, infant rape, animal torture: content moderators are filtering these disturbing images and videos from your feeds every day. Moderating such brutal content takes a severe psychological toll on workers, but tech companies are doing little to improve their working conditions.

Working for Facebook can cause post-traumatic stress disorder (PTSD). The Financial Times recently received documents showing that Accenture, a global professional services firm that provides content moderation for Facebook in Europe, asked its employees to sign a waiver acknowledging that screening content for the social media company could result in PTSD.

Facebook claims it neither knew about the waiver nor asked Accenture to distribute it to moderators in Warsaw, Lisbon, and Dublin. But the company is well aware that sorting through flagged content on its site can be bad for one’s health. Facebook is facing lawsuits in California and Ireland from former moderators who say the work has left them with severe psychological damage.

Content moderators aren’t talked about much — they’re part of the invisible workforce that makes modern digital platforms go. When we scroll through our algorithmically generated feeds, we don’t realize that these content streams are being managed by real people working tirelessly to ensure our screens remain (generally) horror-free.

“Horror” is the only word to describe the messages, images, and videos that content moderators see all day, every day at work: unspeakable child abuse, cruelty to animals, murder and other instances of gruesome violence, hate crimes and virulent racism — not to mention an endless stream of pornography, much of it deeply misogynistic.

Facebook has roughly 1.6 billion “daily active users,” so staying on top of the posting proclivities of the worst humanity has to offer is a gargantuan task. The social media giant employs an estimated 15,000 content moderators in countries around the world, both directly and indirectly through subcontractors.

And Facebook isn’t the only company that employs content moderators. Google, Microsoft, Twitter, Pinterest, and many other tech companies rely on these people — many of whom are paid a paltry wage — to maintain their family-friendly image. Adrian Chen wrote a piece for Wired back in 2014 that described life for content moderators in both the United States and the Philippines, where a great deal of this work is offshored for a fraction of the cost.

Chen describes an incredibly labor-intensive process. Content moderators are bombarded with comments, photos, and videos that have been flagged for review, and are expected to decide quickly whether to remove the content from user feeds. As more and more people have become regular social media users, the need for content moderators has grown dramatically. Tech companies are developing better machine-learning algorithms to flag questionable content, but in many cases, human reviewers are still needed to look at messages, photographs, and videos and decide whether they should be removed.

The pressure on content moderators to make rapid and correct decisions that uphold tech companies’ constantly evolving “community guidelines” is intense. Casey Newton, who conducted an illuminating investigation of Cognizant, a Facebook subcontractor in Phoenix, says that “while Facebook employees enjoy a wide degree of freedom in how they manage their days, Cognizant workers’ time is managed down to the second.” Workers are routinely fired for making the wrong decisions about which content to remove or retain.

The psychological toll of the social media shop floor is stark. Moderators find they can no longer sleep at night. Many suffer from intrusive thoughts, perpetually re-seeing the beheadings, infant rape, and animal torture. Some workers fall into a deep depression, withdrawing from friends and family, struggling to get out of bed. Others self-medicate with drugs and alcohol.

A few years ago, two men and their families filed a lawsuit against Microsoft in Washington, accusing the company of “negligent infliction of emotional distress.” The men worked on Microsoft’s “online safety team,” tasked with reporting child abuse and other crimes. One of the plaintiffs (who claimed he had been transferred to the unit involuntarily) said that, as a result of his work, he suffered from an “internal video screen in his head” that replayed the disturbing images over and over. Benign elements of life — seeing his son, looking at any computer — began to trigger hallucinations and panic attacks.

Tech companies publicly praise the work performed by their content moderators and insist that they provide appropriate psychological support. Microsoft has a “wellness program”; Facebook provides “resiliency training”; YouTube has counselors. But the impact of these efforts — whether they are actually mitigating the psychological damage caused by this work — is unclear, because tech companies are intensely secretive.

The public is not privy to even the most basic information: How many content moderators are employed by American tech companies, directly and indirectly through subcontractors? How much are these workers paid? What is the exact nature of their work? How many workers suffer psychological damage as a result of their work? What meaningful steps are tech companies taking to make their workplaces safer?

Instead, we get snippets of information volunteered by companies, or insights gleaned from investigative journalists and academic researchers, or information that comes to light during lawsuits. Those who could provide the clearest picture — content moderators themselves — are often required to sign nondisclosure agreements, preventing them from speaking publicly about their experiences at work.

These aren’t hard questions to answer, yet clarity remains elusive. The problem of managing horrific content is presented as a Sisyphean task: tech companies promise that someday algorithms will do the dirty work, but for now, we’re forced to rely on people.

This is a false narrative. Right now, the vast majority of US tech company resources are directed toward perfecting the “user experience,” ensuring that people spend as much time as possible on social media platforms and generate the data that makes targeted advertising so lucrative. The intense suffering of the workers who make that user experience possible is swept under the rug.

Workers shouldn’t be sacrificed to the gods of user experience. Tech companies have the know-how and resources to greatly improve the working conditions of content moderators — they simply choose not to use them.