Is content moderation a dead end?

Published on April 16, 2021 by Benedict Evans

In the late 1990s, Microsoft was the evil empire, and a big part of ‘evil’ was that it was too closed - it made things too hard for developers. But then came the great malware explosion, which at one point shut down half the Pentagon, and we realised that the real problem was that Windows and Office were too open. Microsoft had built them as fluid and extensible platforms, where any developer could do pretty much whatever they wanted once they were on your PC, and when we combined that with the internet, this was a problem. Microsoft had to pivot to ‘trustworthy computing’ - it put a lot of effort into closing off APIs that could be abused, and checking for bugs that could be exploited, and it also had to create a whole infrastructure of scanning and monitoring. Microsoft made it much harder to do bad stuff, and wrote software to look for bad stuff.

This is exactly what happened to social in general and Facebook in particular in the last 5 years. Until 2016 or so, Facebook was ‘evil’ because it was too closed - because it was too hard for developers to access information. It even tried to make people use their real names - that was especially evil. Then, just as for Microsoft, that turned upside down, and we realised that the real problem was it was too open. This time the malware was aimed at cognitive biases instead of buffer overflows, but the effect was the same. Facebook, like Microsoft, had to turn off half the APIs and lock down the other half, and had to create a whole infrastructure of scanning and monitoring (it now has over 30,000 human moderators), and every other social platform has had to do the same. Virus scanners and content moderation are essentially the same thing - they look for people abusing the system (and consume a lot of resources in the process).

However, it often now seems that content moderation is a Sisyphean task, where we can certainly reduce the problem, but almost by definition cannot solve it. The internet is people: all of society is online now, and so all of society’s problems are expressed, amplified and channelled in new ways by the internet. We can try to control that, but perhaps a certain level of bad behaviour on the internet and on social might just be inevitable, and we have to decide what we want, just as we did for cars or telephones - we require seat belts and safety standards, and speed limits, but don’t demand that cars be unable to exceed the speed limit.

In other words, it’s sometimes said that the internet is the densest city on earth, and cities have problems. Anyone can find their tribe, and in the 90s this made AOL a way for gay people to meet, but the internet is also a way for Nazis or jihadis to find each other. One could also think of big European cities before modern policing - 18th century London or Paris were awash with sin and prone to mob violence, because they were cities.

On the other hand, something came after trustworthy computing. Even by 2002 the development environment was shifting from native Windows apps to the web, and after 2008 it also shifted to smartphones. If your applications run in the cloud and are opened in a web browser, there’s not much point hacking your PC. On an iPhone, an app can’t run in the background, watch what you do, and steal your bank details - the sandboxed model simply doesn’t allow it. Half the point of a Chromebook was that it didn’t have apps at all. Moving to the cloud and to smartphones removed whole layers of attack that no longer had to be guarded against or detected, because they became physically impossible. The answer was not, in the end, trustworthy computing - it was changing the model.

Hence, I wonder how far the answer to our problems with social media is not more moderators, just as the answer to PC security was not virus scanners, but changing the model - removing whole layers of mechanics that enable abuse. So, for example, Instagram doesn’t have links, and Clubhouse doesn’t have replies, quotes or screenshots. Email newsletters don’t seem to have virality. Some people argue that the problem is ads, or algorithmic feeds (both ideas I disagree with pretty strongly - I wrote about newsfeeds here), but this gets at the same underlying point: instead of looking for bad stuff, perhaps we should change the paths that bad stuff can abuse. The wave of anonymous messaging apps that appeared a few years ago exemplified this - it turned out that bullying was such an inherent effect of the basic concept that they all had to shut down. Hogarth contrasted dystopian Gin Lane with utopian Beer Street - alcohol is good, so long as it’s the right kind.

Of course, if the underlying problem is human nature, then you can still only channel it. No-one robs payroll trucks anymore, but I get lots of messages asking me to send my life savings to Nigeria. Moving enterprise applications to the cloud created phishing, and a sandboxed OS creates a bigger market for zero-day exploits. But we did manage to fix cities, mostly. So I wonder how differently newsfeeds and sharing will work in 5 years, and how many more new social companies will shift assumptions about mechanics and abuse. I wonder if crypto can be used to create different incentives, though it will have to do it without saying 'Crypto!' (and avoid Goodhart’s Law). But I’m not sure that the answer to much of this is better virus scanners.

Benedict Evans is a Venture Partner at Mosaic Ventures and previously a partner at a16z. You can read more from Benedict here, or subscribe to his newsletter.