What is Content Moderation?

Content Moderation Services, particularly active moderation, are a large and growing segment of the outsourcing industry. Content moderation has grown out of the proliferation of new websites and apps set up to allow free expression and the sharing of ideas, artistic creation, and views. While the majority of people posting are doing so in the spirit of the application they are engaging with, some are nefarious and need to be managed.

From bullying, through trolling, to peddling hate, some people see the anonymity of these great new channels as an opportunity. And somewhere out there, the channels fight back through rigorous vetting of new content as it is uploaded (hence active moderation), using technology and good old-fashioned human judgment to clean it all up.

We all want the opportunity to share, vent, and be creative, and we all benefit from the work being done by what experts estimate to be over 100,000 professional content moderators, predominantly in the Philippines.

What media are affected?

We’ve seen a number of social media platforms aimed at promoting specific types of content. From disappearing images on Snap, through short-form videos on Snakt, to memes on Whisper, there is no shortage of different media that need to be moderated. And of course, there are platforms like Twitter, Facebook, and Medium too.

Beyond social media, brands that engage with their fans (and enemies) also open themselves up to abuse.

What constitutes offensive content?

Every app is different, and it depends on the target audience. Apps targeted at the young, particularly high school students, mostly see abuse and bullying in all its ugly shapes and forms. Sexual content, particularly pornography, may be acceptable in some apps but not in others.

We have yet to see an app that has anything but zero tolerance for racial hatred, torture, and murder.

What else needs to be moderated?

We have seen some very smart people try to subvert apps and use them for other purposes, hookups being a great example. While there are plenty of dating apps, some people seem to prefer other routes to finding their next friend with benefits, exploiting app features, such as geolocation, in ways that can have unintended consequences.
In addition, copyright infringement can be a major problem for successful platforms, although it is now mostly being solved by technology.

Why do firms turn to outsourcers?

While technology is good at highlighting content that is blatantly unsuitable, certain types of content are hard to flag or delete automatically. Take racial slurs and swearing as examples. You could inadvertently end up with what is known as the “Scunthorpe Problem”, named after the town that found itself on the wrong end of AOL’s profanity filter back in the 1990s. So technology can flag some content, and detect and delete some, but the people abusing platforms always find subtle ways around it.
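To see why, here is a minimal sketch in Python (the block list, function names, and test message are illustrative only, not any real platform’s filter): a raw substring check flags the town name because it happens to contain a banned string, a whole-word check lets it through, and neither copes with deliberate misspellings or spacing tricks.

    import re

    # Toy block-list filter, sketched to illustrate the Scunthorpe Problem.
    # The blocked term and the test message are illustrative only.
    BLOCKED_TERMS = ["cunt"]

    def naive_filter(text):
        # Flags text if any blocked term appears as a raw substring.
        lowered = text.lower()
        return any(term in lowered for term in BLOCKED_TERMS)

    def word_boundary_filter(text):
        # Flags text only when a blocked term appears as a whole word.
        lowered = text.lower()
        return any(re.search(r"\b" + re.escape(term) + r"\b", lowered)
                   for term in BLOCKED_TERMS)

    message = "I grew up in Scunthorpe."
    print(naive_filter(message))          # True  -- a false positive on the town name
    print(word_boundary_filter(message))  # False -- whole-word matching avoids it
    # Whole-word matching is no cure-all: misspellings, spacing tricks,
    # and look-alike characters slip straight past both checks.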

Outsourcers can bring their experience and current book of tricks to help new clients eliminate offensive content more quickly.
In addition, social media platforms run 24×7, and the reality is that offshore outsourcers are still the most cost-effective way to maintain constant vigilance.

What else can the platforms do?

I have seen many things I disagree with on LinkedIn, but I have never seen anything that I would deem in need of moderation. Yes, there is some abuse and excessive self-promotion, but the fact that people are registered as themselves, and are held accountable through their connections, keeps this at very low levels (unless LinkedIn also employs an army of moderators and I simply don’t know it).

So enforcing real names and verifying some details through independent channels has been a great help to some platforms. But others, where verification has proven hard, remain open to abuse, and that is where the content moderator plies his or her trade.
