What tech regulation can teach us

Published on August 23, 2020 by Benedict Evans

Technology was a small industry until very recently. It was exciting and interesting, and it was on lots of magazine covers, but it wasn't actually an important part of most people's lives. When Bill Gates was on every magazine cover, Microsoft was a small company that sold accounting tools to big companies. When Netscape kicked off the consumer internet in 1994, there were only 100m or so PCs on earth, and most of them were in offices. Today 4bn people have a smartphone - three quarters of all the adults on earth. In most developed countries, 90% of the adult population is online.

The change isn't just that almost all of us have a computer now, but that we've changed how we use them. This is my favourite chart to show this - in 2017, 40% of new couples in the USA met online. It's probably over 50% now. Everyone does everything online now.

Tech has gone from being just one of many industries to being systemically important to society. My old colleague Marc Andreessen liked to say that ‘software is eating the world’ - well, it did.

The trouble is, when software becomes part of society, all of society’s problems get expressed in software. We connected everyone, so we connected the bad people, and more importantly we connected all of our own worst instincts. All the things we worried about before now happen online, and are amplified, changed and channeled in new ways. Meanwhile, the problems that tech always had matter much more, because they become so much bigger and touch so many more people.  And then, of course, all of these combine and feed off each other, and generate new externalities. The internet had hate speech in 1990, but it didn’t affect elections, and it didn’t involve foreign intelligence agencies.

When something is systemically important to society and has systemically important problems, this brings attention from governments and regulators. All industries are subject to general legislation, but some also have industry-specific legislation. All companies have to follow employment law, and accounting law, and workplace safety law, and indeed criminal law. But some also have their own laws as well, because they raise very specific and important questions that general law doesn't cover. This chart is an attempt to capture some of this industry-specific law. Banks, airlines and oil refineries are regulated industries, and technology is going to become a regulated industry as well.

Part of the point of this chart is that regulation isn't simple, and that it can't be. Each of these industries has lots of different issues, in different places, with different people in positions to do something and different kinds of solution. We regulate 'banks', but that's not one thing - we regulate credit cards, mortgages, futures & options and the money supply, and these are different kinds of question with different kinds of solution. De La Rue is good at making banknotes that are hard to forge, but we don't ask it what affordability tests should be applied to mortgages.

To take this point further, cars brought many different kinds of problem, and we understood that responsibility for doing something about them sat in different places, and that solutions are probably limited and probably have tradeoffs. We can tell car companies to make their cars safer, and punish them if they cut corners, but we can’t tell them to end all accidents or make gasoline that doesn’t burn. We can tell Ford to reduce emissions, but we can’t tell it to fix parking or congestion, or build more bike lanes - someone else has to do that, and we might or might not decide to pay for it out of taxes on cars. We worry that criminals use cars, but that’s a social problem and a law enforcement problem, not a mechanical engineering problem. And we mandate speed limits, but we don’t build them into cars. This is how policy works: there are many largely unrelated problems captured by words like ‘cars’ or ‘banking’ or ‘tech’, some things are impossible, most things are tradeoffs, there are generally unintended consequences, and everything is complicated.

‘Tech’, of course, has all of this complexity, but we’re having to work this out a lot more quickly. It took 75 years for seatbelts to become compulsory, but tech has gone from interesting to crucial only in the last five to ten years. That speed means we have to form opinions about things we didn’t grow up with and don’t always understand quite so well as, say, supermarkets.

In addition, in the US and UK I’d suggest the triggers for the change in awareness of just how much tech suddenly mattered were the election of Donald Trump and the Brexit referendum, both in 2016, and the roles that social media may or may not have played in those. This has polarised and intensified some of these conversations around tech by linking them to much broader polarising themes, and sometimes to a tendency to displacement - it can be tempting to blame an external force rather than ask yourself why your fellow-citizens didn’t vote the right way.

All of this means that the move towards regulation has sometimes been accompanied by a moral panic, and a rush for easy answers. That's a lot of the appeal of a phrase like 'break them up!' - it has a comforting simplicity, but doesn't really give us a route to solutions. Indeed, 'break them up' reminds me a lot of 'Brexit' - it sounds simple until you ask questions. Break them up into what, and what problems would that solve? The idea that you can solve the social issues connected to the internet with anti-trust intervention is rather like thinking that you can solve the social issues that come from cars by breaking up GM and Ford. Competition tends to produce better cars, but we didn't rely on it to reduce emissions or improve safety, and breaking up GM wouldn't solve traffic congestion. Equally, changing Instagram's ownership would give advertisers more leverage, but it's not a path to stopping teenage girls from seeing self-harm content. Not everything is captured by the pricing system, which is why economics textbooks talk about 'market failure'. And if you ask the average person on the street why they worry about 'big tech', they're unlikely to reply that Facebook and Google might be overcharging Unilever for video prerolls.

Part of the appeal of applying anti-trust to any problem connected to 'tech' is that it sounds simple - it's a way to avoid engaging with the complexity of real policy - but it's also worth noting that the rise of tech to systemic importance has coincided with the second half of an industry cycle. Smartphones and social media have matured, and the leading companies in those industries mostly have dominant market shares, just as IBM did in mainframes in the late 1970s or Microsoft in PC software in the late 1990s (and because tech itself is so large, and global, being a leading company in tech makes you very big). This lends itself to a post hoc ergo propter hoc fallacy: these companies have gained big market shares at the same time as the problems emerged, so they must be the cause of the problems.

The more one thinks about real policy, though, the more one sees that many of the most important debates we have around technology have deeply embedded tradeoffs. At the beginning of this year I attended a conference of European competition regulators: the head of one agency said that they tell a tech company that it must do X, and then the company goes down the road to the privacy regulator, who says 'you must not under any circumstances do X'. Competition policy says 'remove friction in moving data between competing services' and privacy policy says, well, 'add friction'. In other words, policy objectives come with conflicts. We are probably about to get into another big argument about how Apple controls what you do on an iPhone, and there's a Venn diagram to be drawn here: there are Apple policies that protect the user's privacy and security, policies that protect Apple's competitive position (or just make it money), policies that do both, and policies that really just reflect Apple's preferences for the kind of apps it would like to see. How exactly do these intersect? You might not want to let privacy regulators or competition regulators have the only word on this.

Of course, this is how policy works - you have to pick tradeoffs. You can have cheaper food or more sustainable food supply chains; you can make home-owning a wealth-building asset class or you can have cheaper housing. As voters, of course, we want both - I want my parents’ home to appreciate and the home I plan to buy to get cheaper. A UK minister recently told me that his constituents complain about two aspects of government data collection: the government knows too much about them, and also they have to enter the same information into too many different government websites.

This is how policy works, but in the past these were all national debates. The UK, France and USA have very different models of libel law, but that wasn’t a big problem because no-one was really publishing a newspaper in all three countries. But network effects are global: any software platform of any scale grows globally, so how does it follow local law? For its first 25 years, the consumer internet has operated by default on US ideas of free speech, regulation, privacy law (or the absence of it) and competition, partly because that’s where most of the internet came from and partly because the internet wasn’t really important enough for clashes of these cultures to move from irritation to legislation. That’s clearly changed, and we’re moving to a world of multiple, overlapping regulatory spheres.

Some of this is undoubtedly nationalism and protectionism. Some rests on concerns about national security, either a fear of spying or of media tools being used to promote particular narratives by unfriendly states. But the core of it, I'd suggest, is the rather basic Westphalian principle that a country's government has the right to say what can happen in that country. This isn't just about 'China' versus 'the west' - different liberal democracies have different views on how free speech works, for example, and no-one outside the USA cares or even knows what the US constitution says about it. Equally, each sphere will find its own approach to the liability questions that America groups under 'section 230', and again, no-one will care what solution the US comes up with. The same variance applies to privacy, competition itself and a whole bunch of other issues, right down to very micro issues like whether an Uber driver is legally an employee or Airbnb's impact on house prices.

Indeed, even the basic mechanisms of regulation themselves can look very different in different places. To simplify hugely, the US has a rules-based, lawyer-led system in which the basic unit of regulation is generally a court case, with a guilty or innocent verdict, and a punishment. Conversely, both the UK and EU have outcome-based, practitioner-led systems that focus on principles and ‘reasonableness tests’ rather than detailed rules and may never go to a court - this can look very alien to US lawyers, and vice versa.

These regulatory spheres are probably going to start bumping into each other. GDPR made it clear that rules would increasingly apply no matter where your servers are: if your users are in the EU, you have to obey EU rules, and for practical reasons that probably means you have to obey them for all of your users. CCPA effectively does the same in the USA, where California has increasingly become the national privacy regulator by default. An intriguing further step came from this case, in which an EU court held that Facebook must take down libellous content not just in Austria, where the case began, but globally. Meanwhile, the new Hong Kong security law appears to apply to behaviour by non-HK residents outside HK, which is truly extra-territorial. The obvious next question is what happens when an extraterritorial rule collides with a trade-off. What happens when the UK says you must do something and Germany says you must not?  

So, we have divergence in regulatory policy. We also have divergence in where the companies themselves are coming from. In this, TikTok is the symptom of a broader change, or perhaps a catalyst for realising it. For most of the last 25 years the internet was American by default, partly because that was where the companies came from, and that may be changing. In 2008 the US was 80% of global VC investing; now it's 50%.

Silicon Valley is still the global cluster, but it's no longer the only place you can make great products - software is diffusing. So what happens if now lots of people love a Chinese product? (For a fun contrast, note that the Russians tried to subvert US social networks but the Chinese built their own.) TikTok claims 800m MAUs outside China. The US talks about banning it, but you can't ban every cool new app, and yet we need to be conscious that any Chinese company (or company with people in China) can be told to do anything by the Chinese state, and they don't have a choice. So, what are the scalable, repeatable rules?

This really just scratches the surface of the complexity we might see, as companies used to thinking in terms of inherently borderless platforms collide with five or ten or fifty different regulators and governments around the world. Complexity itself is an important consequence: this chart attempts to capture the cost of regulation for different US retail banks by size: the smaller the bank, the higher the proportionate cost of compliance. This isn’t an original observation: regulation is a regressive tax that tends to slow down innovation and impede startups.

Regulation, of course, is another tradeoff. Some time between 1850 and 1900 or so the industrial world worked out that regulating industry is necessary, and since then we've been arguing about how and how much, industry by industry, from industrial food to banking to airlines. Now the same argument comes to tech. 'How should we regulate tech?' isn't a great book title, but it's probably the next decade or two. I started my career, in 1999, at an investment bank in London that had 900 people and one (1) compliance person. Today it might have 100x that. So, how many compliance people will Google have in five years?


Benedict Evans is a Venture Partner at Mosaic Ventures and previously a partner at A16Z. You can read more from Benedict here, or subscribe to his newsletter.