The Case for Big Action to Regulate Artificial Intelligence
Federal oversight is necessary—not just to keep people safe, but also to scale technology in a sustainable way.
This essay originally appeared on The Information.
In technology circles, people balk at the mere mention of government regulation. The naysayers argue it interferes with innovation and is a bad byproduct of big government.
But history tells a different story. Transparent, accountable and expert oversight—even when implemented late or ineffectively at first—has proven to be an important part of scaling an economy.
For better or worse, regulation is a necessary and proven boundary condition of capitalism. So why does the topic of regulation ignite such a visceral reaction, especially as it relates to what might be humankind’s biggest technological leap yet—artificial intelligence?
Given all the noise being made about this issue lately, I thought it would be useful to run through some of the arguments against AI regulation and offer my rebuttals.
Argument #1: It’s premature to regulate something that barely exists.
Mark these down as famous last words. While it’s admittedly early in the life of AI and generative pre-trained transformers, progress is compounding at a rate that’s measured in days.
We have gone from breathlessly awaiting a new iPhone each year to anticipating awe-inspiring new AI innovations on a weekly basis. This cycle is only a few months old, but we can only guess where we will be a year from now.
Yet it takes just minutes to imagine the dangers that could arise. For example, the emergence of Auto-GPT creates a range of possibilities: Phishing attacks that relentlessly go after our finances and personal data. Hacking access to critical hardware like GPS navigation and utility infrastructure. Even the ability to generate politically and economically debilitating fake news stories corroborated by lifelike photos and videos.
If we can imagine it, bad actors will do it. And it would be naive not to think about managing and preventing these risks in both the near and long term. Society may be better off regretting an early leap into regulation than one that comes too late.
Argument #2: Regulation will censor a free and open internet and hinder America’s ability to compete globally.
Put this in the pro-America and anti-China bucket. We don’t want a closed internet or our own version of the Great Firewall. We want to preserve a free, open internet where everyone is welcome.
Reality check: As much as we like to think our internet is completely open and free, it’s not. It’s monitored and tracked, perhaps not as closely as China’s, but it happens nonetheless.
The Children’s Online Privacy Protection Act tells us what’s allowed when targeting services to kids. In the EU, the General Data Protection Regulation completely upended how websites collect data from their users. And California passed its own version with the California Consumer Privacy Act. Onerous? Perhaps. But laws like these have been and will continue to be introduced to protect a range of internet participants, whether governments, businesses or citizens.
If anything, this is America’s opportunity to set and lead global standards instead of allowing them to be written for us.
Argument #3: Government doesn’t have the credibility or track record to effectively regulate technology.
Experience suggests the contrary. In well-regulated industries like drugs and pharmaceuticals, for example, the Food and Drug Administration maintains a tight hold over what can be sold to ensure the safety of consumers.
To do its job effectively, the FDA employs subject matter experts who understand what’s at stake. These people come from industry and academia, and they are well equipped to evaluate clinical data, manage risk and take a view on the ultimate benefit to society of the products they’re tasked with evaluating.
The FDA also offers multiple approval pathways for different types of drugs. Some drugs can be designated for fast-track approval, while others require more rigorous testing and a larger quantum of clinical data to prove their safety and efficacy.
What’s the alternative… allowing companies to self-regulate? The tobacco industry spent decades telling the public cigarettes were safe. How did that work out?
Greed wins and the profit motives of capitalism are too strong, which is why we need reasonable safeguards designed to protect society’s best interests.
Argument #4: Oversight bodies create unnecessary bureaucracy.
It’s true that this is a risk when it comes to working with big government and all their underlying entities. But it’d be careless not to try.
This is probably the weakest of the arguments against regulating AI, in part because it’s shortsighted but also because it’s just lazy. If slowing progress by months or even years via a regulatory mechanism means we can prevent irreparable mistakes, we should have an opportunity to explicitly decide whether it’s a small price to pay to cultivate a healthy, durable framework for innovation to occur.
Argument #5: Regulation kills innovation.
This is perhaps the most obvious and closely-held argument—and there’s no point in arguing otherwise. But that is the point. The reason for regulation is not to slow down the rate of advancement merely for the sake of doing so, but to give ourselves time to evaluate the implications of those advancements.
As a reminder, you don’t need to be first to win. Google wasn’t the first search engine. Facebook wasn’t the first social network. Expectations and guardrails can become a trellis on which later blossomings of life can grow.
Learning from Existing Models
The next logical question is, of course, what does AI regulation actually look like?
Building a lasting regulatory framework, smart processes and sound organizational oversight can only happen with much careful debate. But we aren’t starting from scratch.
The fact is that gatekeepers already exist, including effective models for safeguarding new software development. But instead of relying on public sector agencies like the FDA, they exist in the private sector—in Apple’s App Store and on Google Play, for instance.
Any developer can build whatever they want, but third parties determine their ability to distribute it, depending on a number of factors. Is it an entirely new app or simply an iteration on a well-established model? Are there fundamentally new aspects? Is it doing something that hasn’t been done before?
When I was at Facebook, we would sandbox apps to observe their behavior before they launched publicly. Apple and Google employ similar techniques, which give them a reasonable level of certainty as to the risks each new app would pose to the broader platform, and how users could utilize or exploit it.
Given the emergence of so many different AI players and platforms, we need a public gatekeeper. With an effective regulatory framework, enforced by a new federal oversight body, we would be able to investigate new AI models, stress-test them for worst-case scenarios and determine whether they are safe for use in the wild.
Learning From Past Mistakes
Look no further than Section 230 for a case study in failing to establish the right regulatory framework for a rapidly evolving industry.
With the inflexibility of Section 230, we have lost the ability for lawmakers to pass reasonable regulation around internet content and its distribution. Our fate now lies with the nine individuals of the U.S. Supreme Court, four of whom are past the typical retirement age and none of whom has any technology background.
As a person in the room where we were developing some of the core features that took advantage of a largely ungoverned digital media landscape, I can say two things:
We did what we did to grow at all costs and beat the competition.
We all wish we had been more careful.
For something as important as AI, which is already upending entire sectors of the economy, rewriting the composition of our workforce and fundamentally changing how we live our daily lives, I think it’s more than reasonable to consider carefully how we can limit its potential for harm.
There’s no obvious right answer, and there are certainly mistakes to be made. But the alternative of waiting and hoping it all works itself out is simply too naive, too risky and too dangerous.
Hard not to see the Regulator being held captive to the same type of special interests that blight all industry such as Sugar, Banking, Pharma.........
if you use FDA as a positive example, you are in a fool's paradise. Maybe you should use FAA.
FDA completely failed to ensure that the mrna vaccine companies do proper trials to analyze safety or even effectiveness. The mrna randomized controlled trials (the only reliable method of assessing causality) only showed the reduction in symptomatic PCR-positivity, not in more meaningful outcomes like actually reducing hospitalizations, overall deaths, or transmission: the claims FDA itself promoted without good evidence https://www.bmj.com/content/371/bmj.m4037 . Even after data showing that young men are more at risk after hospitalization after the mrna covid vaccine than covid itself, the FDA continued to promote those vaccines and their mandates.
FDA routinely approves expensive drugs based on surrogate endpoints, not showing effectiveness in reducing actually meaningful outcomes: e.g. https://www.science.org/content/blog-post/aducanumab-approval
As another example, they approved Nexetol just based on LDL lowering, not based on actual reduction in cardivascular deaths or heart attacks (https://www.acc.org/latest-in-cardiology/articles/2020/02/24/10/09/fda-approves-bempedoic-acid-for-treatment-of-adults-with-hefh-or-established-ascvd read the last sentence), even though it is well known that many drugs that lower LDL actually increase cardiovascular deaths, e.g. Clofibrate https://www.nejm.org/doi/full/10.1056/NEJMoa0706628
FDA routinely hinders cheap but effective agents like NMN (https://twitter.com/davidasinclair/status/1603513997768138756)
and NAC, which is effective in many ways and has decades of proven safety trackrecord.
In summary, the FDA is more about protecting the financial interests of the pharma companies they have CoI with, much less about protecting consumers.
I do see a need for an agency that ensures that supplements/medicines have no contamination and have exactly what is stated in the label. But that could be done by market forces, such as "Amazon/ConsumerReports Certified Integrity" label