Content Moderators: The Silent Victims of AI Development
- Amy Marquis

- Jan 25
- 2 min read
It took 75 years for the telephone to reach 100M users. Mobile phones hit the same milestone in 16 years. The internet took seven. ChatGPT took just two months.
As the use of Large Language Models (LLMs) explodes around the world, it is imperative to consider the ethics of how generative AI is developed.
Many don't realize that human moderators are responsible for sifting through content to cull "bad data" and identify "good data" to use in training AI models. This content can include child sexual abuse, bestiality, murder, suicide, torture, self-harm, and incest.
The reality of reviewing this kind of content daily is enough to shock anyone, yet these moderators are rarely discussed in conversations about the ethics of AI. The people responsible for making AI training possible are often harmed by the work, and many are located in developing countries. These circumstances make for a concerning ethical landscape at the very foundation of AI.
Content Moderators and AI Training
AI models are trained on massive datasets that are collected from various online sources. However, not all content is suitable for training—some of it may be harmful, explicit, violent, or otherwise problematic. To filter out undesirable data, companies rely on human content moderators who manually review and flag it.
A closer look at this process reveals the concerning exposure these moderators endure, reviewing explicit and damaging content day after day. They also carry the enormous responsibility of judging what counts as acceptable content. The consequences are heavy, ranging from emotional distress to outright exploitation. So what do solutions look like?
Transparency and Accountability: Because AI is so new, companies have gotten away with not sharing what's behind the curtain. This is a call for the industry to become more transparent about the role of content moderators and their working conditions, including acknowledging their labor and disclosing how content moderation policies are developed to protect workers.
Fair Compensation and Support: Moderators' contributions are essential to the safe development of AI systems, making it imperative that they receive fair wages and access to mental health resources at minimum. For example, Zevo Health provides tailored mental health support programs for content moderators.
Developing Solutions
Companies should be working to reduce reliance on human moderators to cull harmful content by improving automated filtering tools. A hybrid approach, combining ethical use of automation and better conditions for moderators, could go a long way toward mitigating harm.
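To make the idea of a hybrid approach concrete, here is a minimal, purely illustrative sketch. The classifier, thresholds, and function names are hypothetical placeholders, not any company's actual system; the point is that automation would handle clear-cut cases and escalate only ambiguous content to people.

```python
# Hypothetical hybrid moderation pipeline: an automated classifier decides
# high-confidence cases, and only uncertain items reach human reviewers.
from dataclasses import dataclass


@dataclass
class ModerationResult:
    item_id: str
    decision: str   # "approved", "rejected", or "needs_human_review"
    score: float    # estimated probability that the content is harmful


def score_content(text: str) -> float:
    """Stand-in for a real classifier; returns a harm score in [0, 1].

    In practice this would be a trained model, not a keyword check.
    """
    flagged = {"violence", "abuse", "torture"}
    words = text.lower().split()
    return min(1.0, sum(w in flagged for w in words) / max(len(words), 1) * 10)


def triage(item_id: str, text: str,
           reject_above: float = 0.95,
           approve_below: float = 0.05) -> ModerationResult:
    """Auto-decide confident cases; send everything else to a human."""
    score = score_content(text)
    if score >= reject_above:
        decision = "rejected"            # confidently harmful: filtered out
    elif score <= approve_below:
        decision = "approved"            # confidently benign: kept for training
    else:
        decision = "needs_human_review"  # ambiguous: a person makes the call
    return ModerationResult(item_id, decision, score)
```

Under a design like this, moderators would see only the fraction of content that automation cannot confidently classify, shrinking their exposure rather than eliminating their role.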
The reliance on content moderators shows that AI development is far from clean or self-sufficient: it depends on a significant amount of invisible human labor, much of it emotionally taxing and ethically complex. Addressing this issue is critical for creating AI systems that are not just technically advanced but also ethically sound.
This article was originally published on LinkedIn.