Content moderation is a tricky and brutal business.
Nick Clegg, Meta’s President of Global Affairs, recently admitted the company’s systems have been, well, overzealous.
Too much content is being removed.
Too many innocent posts are being flagged.
And too many users are being penalized for doing nothing wrong.
But here’s the real question:
Is this about cleaning up mistakes—or navigating a political minefield?
Clegg didn’t mince words.
“We overdid it,” he said, reflecting on Meta’s heavy-handed moderation during the COVID-19 pandemic.
Large volumes of content were removed under the guise of safety.
But hindsight is 20/20, and now, Meta is acknowledging what users have been saying for years: the company’s automated systems and policies are far from perfect.
Examples of moderation failures have surfaced recently on Threads, Meta’s text-based platform.
One high-profile blunder? Suppressing images of Donald Trump surviving an assassination attempt – a mistake that sent shockwaves through the platform’s user base.
Even Meta’s own Oversight Board has sounded the alarm, warning that these errors could stifle political speech ahead of key elections.
Right now, Meta spends billions annually on moderation, using a mix of AI and human reviewers.
Still, Clegg’s admission highlights a deeper issue:
Even with cutting-edge tools, mistakes are rampant.
So, what’s the fix?
Clegg described Meta’s content rules as a “living, breathing document,” suggesting ongoing adjustments.
But what does that really mean?
Are we heading toward clearer guidelines – or just more Band-Aid solutions?
This isn’t just about algorithms or oversight.
It’s also about politics.
During the pandemic, the Biden administration urged platforms to crack down on misinformation – pressure Meta now says it acted on too aggressively.
This back-and-forth between tech giants and governments creates a murky landscape.
Who decides what’s acceptable?
And how much influence should politics have on platforms designed for free expression?
We know this to be true: Moderation is a tightrope walk.
Platforms like X have taken the opposite approach, allowing everything that isn’t illegal to remain online.
The result?
A chaotic free-for-all where community notes and user moderation reign.
Meta, on the other hand, has tried to curate its spaces, but at what cost?
If moderation goes too far, platforms risk alienating users and suppressing critical voices.
But too little control, and you create a playground for harmful content.
The bigger picture…
Moderation woes aren’t just about technology – they’re about trust.
And trust differs from person to person and from region to region.
Users want a platform where they feel safe, but they also want their voices heard.
Can social media platforms find the sweet spot between over-enforcement and under-regulation?
Or are we destined for a future where moderation becomes the new battleground for politics, profit, and public discourse?
For now, Meta’s errors are a reminder that even the biggest tech companies – the ones spending billions on a solution – don’t have all the answers.
The question is:
What will fix this… technology… AI… the community… common sense?
This is what Elias Makos and I discussed on CJAD 800 AM. Listen in right here.
Before you go… ThinkersOne is a new way for organizations to buy bite-sized and personalized thought leadership video content (live and recorded) from the best Thinkers in the world. If you’re looking to add excitement and big smarts to your meetings, corporate events, company off-sites, “lunch & learns” and beyond, check it out.