Should the AI makers also be the AI regulators?
Executives of seven tech companies gathered at the White House last month and committed to voluntarily address the risks posed by artificial intelligence. Just days later, a subset of those industry players, including OpenAI, Anthropic and Google, announced the formation of their own regulatory body called the Frontier Model Forum, which they said is focused on the responsible development of powerful AI tools.
The forum is set to have plenty of bells and whistles, including an advisory board and a public library of solutions to help support “best practices.”
But concrete targets to determine whether the oversight effort is working? Those are a bit more TBD.
Marketplace’s Lily Jamali asked Rumman Chowdhury, CEO and co-founder of Humane Intelligence and a responsible AI fellow at Harvard’s Berkman Klein Center for Internet & Society, about the pros and cons of this kind of group.
The following is an edited transcript of their conversation.
Rumman Chowdhury: A lot of the leaders of these companies will say that in order to understand how best to regulate this technology, you have to have deep, hands-on experience building these models and accessing this data. However, you get a different perspective when your mission and goal is not to maximize profit and not to build a product that serves your bottom line. So, it’s kind of odd to say that the best people to understand the harms of these models are the ones building the models themselves. It’s as if you’re asking oil and gas companies to solve climate change. They’re incentivized very differently.
Lily Jamali: If we don’t want to cede control to the industry and have them essentially regulate themselves, what is a better way to do this?
Chowdhury: In my opinion, any governance body like this needs to be independent. Companies should be engaged, but the body should not be beholden to them. Companies do need a safe space to test, experiment and share best practices, but the organization’s existence should not be wholly reliant on the goodwill of those companies’ CEOs.
I believe an organization like this should be composed of governments, civil society organizations, companies that are building these technologies, companies that are building things on top of these technologies, as well as representatives of global entities and global bodies, a truly worldwide audience.
Jamali: In a perfect world, what would a group like this focus on?
Chowdhury: I actually think a group like this should focus on the concept of “human flourishing.” It’s a term from Aristotle, and it’s actually a very specifically defined thing. A lot of the questions around the impact of AI are about harm mitigation and making sure bad things don’t happen. But there is a very big difference between making sure bad things don’t happen and making sure good things do happen. So, I think a global governance body should be uniquely tasked to ensure that good things happen for humanity using these AI systems.
Companies are for-profit organizations, and benefiting humanity is a byproduct of those companies making a product people will use. They want to engage in harm mitigation because they want people to use their products, and people don’t want to use harmful products. I have not seen a global-level body with that level of knowledge and expertise in AI systems tasked with ensuring that technology is built to help us.
Jamali: What you’re describing is reminiscent of the Oversight Board used by Meta. It’s a panel of about 20 researchers and advocates who advise the company on content moderation. What is your assessment of that body a couple of years in?
Chowdhury: I wrote an article for Wired in April advocating for a global AI governance body, and there are two examples I used. One is the International Atomic Energy [Agency] and the other is the Oversight Board. There are a couple of things I find intriguing about this board that should be considered in creating global governance bodies.
One is that these are independent individuals who are paid via a trust. Meta does not control the money; it is paid to a trust, and the trust determines the allocation and who’s on the board. The second thing that’s really important about the Oversight Board is that the decisions it makes are enforceable, which is very important here. We don’t need yet another global advisory body; we have enough advice. And the third is that these people are compensated quite well; they are paid as if this is a job. I think this part is critically important. So many similar groups rely on volunteer work or the assumption that people with a particular level of expertise will simply donate their time, when actually this should be, for some people, a full-time endeavor. This level of compensation levels the playing field and enables more people to participate. It also underscores the responsibility of the task at hand.
Of course, people have plenty of critiques of the Oversight Board. Some have said it’s overly narrow in scope, but I do see it as a really interesting paradigm. So, we have done this before. We’ve tried to do this; let’s use that as a steppingstone rather than starting over from scratch and saying the companies can self-regulate.
Jamali: Is it fair to say the tech industry doesn’t exactly have the best record when it comes to overseeing itself?
Chowdhury: That is a very fair assessment. I think folks who are unfamiliar with how tech governance works, especially if they work in an industry like banking or health care, are often shocked to learn what companies are allowed to do and how they’re able to handle data, create products and influence our lives in ways no other industry really has before.
Jamali: Is the Frontier Model Forum at the very least a step in the right direction? Or should we kind of just turn our backs on this model?
Chowdhury: I suppose any attempt at creating a collaborative organization is a step in the right direction. What I hope is that they quickly realize that companies alone can’t and shouldn’t write this regulation, and that they very quickly open membership to civil society and governments.
The Frontier Model Forum defines “frontier models” as “large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing model.” Membership in the forum is open to any organization that has developed and deployed a frontier model.
I guess that rules me out.
You can read more about Rumman Chowdhury’s hopes for an AI governance board in the piece she wrote for Wired. In addition to the Meta Oversight Board that she mentioned in our conversation, she brings up the history of the International Atomic Energy Agency as a template for how bodies without government or corporate affiliations can work on problems as complex as preventing global disasters.