🤖 EU, China, US: Three Different Attitudes Toward AI Regulation
The US is the most open, the EU is the least open, and China (knowing its redlines) is somewhere in between
One interesting aspect of the fast-evolving landscape of generative AI is the global consensus on the need to regulate it. This was not the case at the dawn of the Internet or in the early days of mobile social media. Perhaps precisely because regulators regretted their “go with the flow” attitude toward those two previous platform shifts, as generative AI ushers in a new one (at least, that is the consensus in tech circles), the desire to regulate has never been stronger.
Even though it has only been nine months since ChatGPT first launched, we already have a good sense of how the three largest economic bodies – the EU, China, and the US – plan to regulate generative AI. There are many good analyses out there on the substance of these regulations, so instead I will compare the “why” behind the attitudes and motivations of European, Chinese, and American regulators.
It is arguably more important for both large companies and startups to develop a sense of the attitudes and incentives that drive regulators than to simply know what the rules say. Attitudes and incentives are, after all, what portend future behaviors, and the future of generative AI is by no means clear or settled.
EU: Chasing the Brussels Effect
EU regulators were the first to start regulating generative AI, and that “need to be first” is driven by their desire to achieve the so-called “Brussels effect”. This is not speculation on my part; EU lawmakers have explicitly cited the Brussels effect as a key reason to quickly pass the EU’s AI Act, while spurring AI innovation is a lesser, secondary goal. That’s why, when the European Parliament passed the AI Act in June, it took pains to tell everyone that this is the “world’s first comprehensive AI law.”
Why is the Brussels effect such a big deal? At the risk of sounding cynical: it is the most direct way for EU regulators to gain power and relevance globally. And they’ve done it at least once before, with GDPR (the General Data Protection Regulation).
Let’s illustrate the Brussels effect with GDPR. The effect manifests itself in two ways: de facto (in practice) and de jure (by law).
By being the first, most comprehensive, and most stringent set of rules on data privacy, GDPR forced every company, big and small, to comply as long as it has a non-trivial number of European users and a website that may collect cookies. That’s why we all see those accept-or-reject cookie notifications – an example of GDPR’s de facto Brussels effect. Even though these companies don’t necessarily have to comply in the same way for their users in Brazil or India, internally, it is more straightforward to comply with the toughest standard everywhere than to maintain different levels of compliance for different countries. Both Facebook and Microsoft did exactly that in 2018 – applying GDPR standards across all their users in all geographies. In practice, GDPR is a global standard, not just a European one.
GDPR also became the basis (or at least the inspiration) for other data privacy laws passed since. The best example is the California Consumer Privacy Act. This is an instance of the de jure effect, where the “first to market” law becomes the influential foundation for similar laws in the future.
As you can see, to achieve the full Brussels effect, you have to be both the first and the toughest. And that’s exactly what is happening with the EU AI Act.
The Act’s risk-first approach, which enumerates a list of “unacceptable risks” and “high risks” covering a wide swath of industries and use cases, is arguably the toughest framework we’ve seen yet. It’s so tough that Stanford’s Center for Research on Foundation Models has concluded that none of the foundation models on the market, from GPT-4 to LLaMA, comes anywhere close to compliance. The EU lawmakers also appear to be dictating what they deem unacceptable or high risks without much input from, or collaboration with, industry players – in stark contrast to the approaches of both China and the US. Procedurally, the European Commission is pushing hard to wrangle all the EU member states into adopting the AI Act before the end of this year, so the rules can take effect in early 2024.
We won’t know for a few years whether the EU AI Act will achieve the Brussels effect; this analysis from Brookings is skeptical of that potential. But there is little doubt that the EU regulators’ main motivation is to relive the power trip they once felt with GDPR, while promoting technology innovation comes in a distant second.
China: Regulate Like A Startup
China has also solidified its generative AI regulations at record speed, though that speed is not motivated by a desire to achieve the Brussels effect.
The Cyberspace Administration of China (CAC) first released a set of draft rules in April. Three months later, after taking in comments and input from industry players, a set of interim (or provisional) rules was released, scheduled to take effect on August 15. Regulations that will set the limits and guardrails on how 1.4 billion people use generative AI came together in about four months. Six other agencies, in addition to the CAC, also signed on, to give the rules more enforcement teeth and consistency.
This speed of execution is more like a startup’s. This “startup-like” quality of Chinese tech regulators was well-articulated by Kendra Schaefer in a recent Sinica podcast episode. Because Chinese regulators have previously released rules on synthetic content and deepfakes, they are not starting from scratch (in contrast to American regulators), so they have the foundation to move at “startup speed.”
Many of the rules in the first draft were very stringent, some impossibly so, but the regulators put them out anyway to get feedback (or pushback). The consensus analysis of the end result, released earlier this month, is that the rules are less stringent, more reasonable, and more watered-down. This post has a solid comparison of the changes between the draft and the soon-to-be-promulgated interim version. Matt Sheehan also did a good, fast-twitch thread on the differences.

If I were to sum up Chinese regulators’ attitude in one line, it would be: startup pragmatism with redlines.
ChatGPT’s release made it painfully obvious that China is still behind the US in AI innovation, and US export controls on high-end GPUs are widening that gap. But left unregulated, generative AI would quickly touch many of the redlines that no company can cross in China, when it comes to products or outputs that could shape or influence public opinion. All Chinese entrepreneurs and executives are well aware of what those lines are. Most multinationals are also aware of them, and used to put up with them in order to access the Chinese market, though they are less willing to do so these days.
So the Chinese regulators need to thread the needle – acting fast but not crudely – in order to reinforce the redline guardrails without dampening innovation. The initial draft in April was most certainly crude, but it got the process started. The current interim version limits the compliance hurdles to generative AI services with the capacity to guide public opinion or mobilize society – reinforcing the redline while leaving room to innovate in other use cases, like enterprise B2B software. When this version takes effect in August, edge cases will pop up, and regulators will keep changing the rules as they see fit.
The rulemaking around generative AI is by no means finished in China, but the attitude and iterative approach of the rulemakers is rather clear.
US: Laissez-Faire and Learn
The attitude of US regulators and lawmakers, thus far, is what I call “laissez-faire and learn”. Even though there has been a lot of activity – from congressional hearings and White House meetings to a hodgepodge of bill proposals – nothing close to a concrete set of rules has been put forward by either the legislative or the executive branch to regulate generative AI.
Last month, Senate Majority Leader Chuck Schumer announced a framework he is personally pushing to help Congress come up with comprehensive legislation on AI. However, the most tangible next step is a series of listening sessions in the fall, for members of Congress to learn more about AI’s potential and risks. Schumer admitted that, unlike its Chinese counterparts, Congress is “starting from scratch” when it comes to legislating generative AI.
Last week, the White House convened the executives of seven leading companies that make foundation AI models (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, OpenAI) to commit to making their AI safe. The pledge basically amounted to these companies saying “we promise to do the right thing”, while the resulting eight commitments are mostly things these companies are already doing.
I don’t think this lack of concrete regulatory action is necessarily a bad thing or a dereliction of duty. I can think of a few reasons why US regulators are currently motivated to just not do much.
First, it is entirely plausible that legislators and executive branch officials honestly don’t know enough about AI to make rules. In that case, learning before regulating is the right thing to do.
Second, there is real fear and insecurity in Washington that regulation could kill innovation and diminish America’s technological lead in AI, especially vis-à-vis China. While this fear may be unfounded, as this Foreign Affairs article argued, it is a powerful narrative that US regulators are very susceptible to. It is also a line of argument that tech incumbents (like the ones who showed up at the White House to make those voluntary commitments) are likely pushing. And why not? It has worked before with social media. When Mark Zuckerberg showed up in Congress in 2018 to try to stave off regulation of Facebook, one of his main arguments was the fierce competition he was feeling from Chinese Internet companies and that he was the lesser of two evils (would you rather have Xi or Zuck?).
Third, pre-emptive regulation is perhaps just not in the American DNA. We like to wait for problems to get really bad before we regulate. The Great Financial Crisis needed to happen before Congress passed Dodd-Frank. Even then, that regulation got watered down over time, just enough for another financial crisis, albeit a more minor one with SVB and First Republic Bank, to happen.
This decidedly laissez-faire attitude certainly has many drawbacks, and many voices from the media and think tank world will harshly criticize it. It’s worth keeping in mind, though, that the media industry in particular feels very threatened by generative AI, is already filing lawsuits, and would like to see strong regulations ASAP to protect itself. However, as I noted in a previous post, the US’s lighter-touch, grassroots-oriented way is distinct from the Chinese and EU approaches, which are both top-down. And it is just as legitimate an approach as any other to strike the elusive balance between innovation and safety (broadly defined).
Whether you are a large tech firm with plenty of legal resources or a young startup, it’s hard enough to stay updated on the latest regulatory movements in the EU, China, and the US, let alone any new variations that may pop up from India, Japan, Brazil, or Abu Dhabi, each trying to exert its own sovereignty over generative AI. Thus, knowing the attitudes and the “why” behind different national regulators’ actions is a helpful shorthand.
If you forget everything you’ve read so far, just remember: the US is the most open, the EU is the least open, and China (knowing its redlines) is somewhere in between.