
The Future of Moderation? It’s all in the AI and Human Blend.

Written on April 12th

Content Moderation

The Rise of AI

As the volume of user-generated content continues to grow, content moderators have increasingly turned to AI to cope with the scale of the challenge. Under scrutiny from governments proposing legislation to remove hateful and illegal content, online platforms are using everything in their AI toolbox to make sites safer.

From Instagram using AI to detect bullying in photos and captions, to Facebook using AI to catch content linked to organized hate groups, all the big players are now incorporating toxicity detection algorithms. “We’re getting to the point where most of our systems are probably close to, as good [as], or possibly better than an untrained person in that domain,” explains Mike Schroepfer, Facebook’s CTO. “My goal is to get us as good as experts. We’ll get there as fast as we can.”

Other technologies deployed include image recognition software, digital hashing, metadata filters, and natural language processing tools, the last of these frequently used to analyze text for hate speech and extremist content.

Some AI tools and hash-matching software are now helping moderators detect child abuse images with around 99 percent accuracy. But this is just one of many toxic subject areas, and their breadth is driving the development of a wide range of AI architectures.
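To make the hash-matching idea concrete, the sketch below fingerprints an uploaded image and compares it against a database of fingerprints of known harmful material. This is a minimal illustration, not any platform’s actual system: the KNOWN_HASHES set is a hypothetical stand-in for a curated industry database, and production tools such as Microsoft’s PhotoDNA use perceptual hashes that survive resizing and re-encoding, whereas the exact SHA-256 digest here only catches byte-identical copies.

```python
import hashlib

# Hypothetical stand-in for a shared industry database of fingerprints
# of known harmful images (real databases are curated and access-controlled).
KNOWN_HASHES = {
    "d2a84f4b8b650937ec8f73cd8be2c74add5a911ba64df27458ed8229da804a26",
}

def is_known_harmful(image_bytes: bytes) -> bool:
    """Return True if the image's fingerprint matches a known-bad entry.

    SHA-256 is an exact hash: any re-encoding changes the digest, which is
    why deployed systems rely on perceptual hashing instead.
    """
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_HASHES
```

Matching against a curated database is what makes near-perfect accuracy possible for previously seen material; novel content still has to pass through classifiers and, ultimately, human review.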

Covid-19 has been a primary driver of this shift in priorities, and 2020 marked a sea change for content moderation. Technology companies recognized that misinformation posed a clear health risk and began taking a much more proactive approach to filtering content.

AI has been subject to considerable criticism in recent years for being unable to deal with nuanced cases and failing to understand context. Like any new technology, it will take time to embed fully, but the latest advances can do a lot of the heavy lifting in the moderation battle. Getting the right blend of human and AI-empowered solutions is critical.

“You need a smart blend of both to make a difference,” explains Enshored COO Sang Won Hwang. “AI provides great efficiency and deals with scale, but it doesn’t deal with creativity too well. It can deal with a lot of low-hanging fruit in a tier system. So in tier one and two, for example, AI can flag harmful content and then funnel it. You’re pushing human beings to do tier three, tier four work, which requires a more complex understanding and a deft touch. AI is pushing our workforce to be more significant and demanding emotionally intelligent, analytically savvy and creative people.”
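A minimal sketch of the tiered funnel Hwang describes might look like the following, assuming an upstream model that assigns each item a toxicity score between 0 and 1. The thresholds and routing labels are illustrative assumptions, not Enshored’s actual workflow: high-confidence cases are handled automatically, while ambiguous ones are escalated to human moderators.

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    item_id: str
    text: str
    toxicity: float  # score in [0, 1] from an upstream AI classifier

def triage(item: ContentItem) -> str:
    """Route content by model confidence (thresholds are illustrative)."""
    if item.toxicity >= 0.95:   # tier 1: near-certain violation, remove
        return "auto_remove"
    if item.toxicity >= 0.70:   # tier 2: likely violation, flag and funnel
        return "auto_flag"
    if item.toxicity >= 0.30:   # tiers 3-4: nuanced, needs human judgment
        return "human_review"
    return "allow"              # low risk: publish without intervention

# Example: an ambiguous post lands with human moderators, not the machine.
print(triage(ContentItem("post-123", "borderline sarcasm", 0.55)))  # human_review
```

The design point is the one Hwang makes: the machine clears the low-hanging fruit at either end of the confidence range, so human time is spent only on the cases that genuinely require judgment.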

The AI algorithms Hwang refers to are highly reliable at performing single, well-defined tasks within a limited context. Their main advantage is that they can do this at scale, sifting through millions of items of content quickly. And the latest improvements mean they are getting better all the time.

One area where AI has begun to achieve measurable success is the removal of hate speech. Facebook is among the major platforms leaning on AI and machine learning to identify hate speech and online abuse, eliminating millions of pieces of hateful content. For Hwang, the holy grail for content moderation now is doing it at scale, and scale cannot be achieved without AI. “Imagine there’s a riot in the Capitol,” he said. “The spike in volume on social media will be incredible. If you’re to mobilize human beings, you simply won’t be able to manage the outpouring of misinformation and hate. However many people you have at one center, you just can’t do it without AI. It gives us the capability to inherently deal with uncertainties at a much higher level than what human content moderators can achieve.”

As Hwang looks to the future of content moderation, he says we should all get used to the fact that AI has a vital role to play. He envisages that the next level of AI will not be boundary-based, but a system that enables scenario planning and pattern modeling to provide more transparency, better data, and improved options. This development will pose many questions for policymakers: is an algorithm fair, and how do we strike the right balance with censorship? AI should be seen more as a partner than as a threat, explains Hwang. “The uncertainty level has gone up,” he says. “We’re dealing with a much higher level of complexity now. When society evolves, it naturally has to deal with more complexity. The pandemic has shown us that. AI is helping us, and it’s becoming indispensable. AI and humans can work well together, and we should see it as an ally in the fight against toxic content.”
