
Is AI an Existential Threat?

Gillian Hadfield, Pedro Domingos and Jérémie Harris speak to Steve Paikin

Many people were surprised by the March 2023 open letter from tech leaders requesting a moratorium on AI development. Three thought leaders debate whether or not AI is an existential threat to humanity.


Steve Paikin: In March of 2023, an open letter was released by technology leaders calling for a six-month pause on AI development. Elon Musk signed it, as did Apple co-founder Steve Wozniak. It said, in part:

AI systems with human-competitive intelligence can pose profound risks to society and humanity…recent months have seen AI labs locked in an out-of-control race to develop and deploy ever-more powerful digital minds that no one — not even their creators — can understand, predict or reliably control.

Soon after, Geoffrey Hinton, known as the Godfather of AI, announced his resignation from Google, saying: “Look at how it was five years ago and how it is now. Take the difference. Propagate it forwards. That’s scary.” Gillian, how would you characterize this moment in history for AI?

 

Gillian Hadfield: I think we are at a real inflection point in thinking about AI. As you pointed out, we are seeing tremendous advances. What we saw in the fall of 2022 with ChatGPT was exciting and new, and I think people are now saying, ‘Hey folks, maybe we’re going a bit too fast. Maybe there’s a lot more happening here than we’ve understood.’ I think that’s what the pause letter is about.

SP: Pedro, what is your view on this?
Pedro Domingos:
I don’t think we’re going too fast at all. In fact, I don’t think we’re going fast enough. If AI is going to do things like cure cancer, do we want to have the cure years from now or yesterday? What is the point of a six-month moratorium? To me, that letter is a piece of hysteria. The bigger worry for me — and most AI researchers — is not that AI will exterminate us, it’s that a lot of harm will be done by putting in restrictions, regulations and moratoria that are not needed.

SP: Jérémie, what’s your take?
Jérémie Harris:
It’s clear that we’ve taken some significant steps towards human-level AI — in the last three years in particular. So much so that we have many of the world’s top AI researchers, including two out of three of its earliest pioneers, wondering aloud about it. I might not go that far, personally, but this is being talked about through that lens. By throwing more data and processing power at these techniques, we might be able to achieve something like human-level or even superhuman AI. If that is even a ballpark possibility, we need to contemplate some fairly radical shifts in the risk landscape that society is exposed to by this technology.

There are many dimensions to consider, but a key one is malicious use. As these systems become more powerful, the destructive footprint of malicious actors that use them will only grow. We’ve already seen, for example, China using powerful AI systems to interfere in Taiwan’s electoral process. We’ve seen cybersecurity breaches and malware being generated by people who don’t even know how to code. Then, of course, there’s the risk of catastrophic AI accidents, which folks like Geoff Hinton are flagging. Those two broad categories of risk are very significant.

 


 

SP: Gillian, let’s circle back to Pedro’s initial comment, that if you want a cure for cancer, you don’t move slower, you move faster. What do you say to that?
GH:
I actually agree. There are lots of potential benefits to be had and we absolutely want them to materialize. We want to continue doing our research and building the system. That’s a really important point. But if you think about medical research, it takes place within a regulated structure. We have ways of testing it and there are clinical trials. We have ways of deciding which pharmaceuticals, medical devices and treatments to put out there. With AI, we’re seeing such a leap in capability that the ways in which it transforms research and work have basically outstripped our existing regulatory environments — and we haven’t yet built a system that ensures AI is working the way we want it to.

SP: Pedro, on the topic of searching for a cure for cancer, what about the notion that patient-protection regulations don’t yet exist for AI, and therefore we ought to be careful?
PD:
I think the analogy between AI, medicine and drug approval is mistaken. Each area faces problems of its own. The drug approval mechanisms that we have in place are costing lives, and even the Food and Drug Administration in the U.S. understands that it needs to change. So that is hardly a good model. Regulating AI is not like regulating drugs. It’s more like regulating quantum mechanics or mechanical engineering. You can regulate the nuclear industry, cars or planes; you can regulate — and should and do regulate — specific applications of AI. But regulating AI itself doesn’t make sense.

I think we definitely need to ‘ask questions before we shoot,’ make sure we understand the technology and figure out what needs to be regulated and what doesn’t. I think what OpenAI has been doing is great. The best way to make AI safe is to put it in the hands of everybody, so everybody can find the bugs. We know this from computer science: the more complex the system, the more people need to look at it. What we don’t need is to put this in the hands of a committee of regulators or experts to figure out what’s wrong with AI. I think that’s the wrong approach.

SP: Jérémie, do you think AI poses an existential risk to humanity?
JH:
I think the argument that it does is backed by a lot more evidence than most people realize. It’s not a coincidence that Geoff Hinton is on board here. It’s not a coincidence that when you talk to folks at the world’s leading AI labs — the ones that are building the world’s most powerful AI systems, the GPT-4s — you hear people talking about the probability that this will amount to an existential risk.

One of the things these people are discussing is the concept of ‘power-seeking’ in sufficiently advanced systems. That’s one concern that Geoff Hinton put on the table, and it’s been written up and studied empirically. I think it’s something we should take seriously. Nothing is guaranteed. That’s part of the unique challenge of this moment. We’ve never before built intelligent systems smarter than we are. We’ve never lived in a world where such systems exist. So we have to deal with that uncertainty in the best way we can. Part of that entails consulting with folks who actually understand these systems and who are experts in technological safety.

SP: Gillian, what was your reaction when you heard Geoffrey Hinton’s comments?
GH:
I think Geoff was truly surprised by the advances he’d seen in the previous six months. Obviously, he’s been tremendously close to this. I think a number of people didn’t realize that scaling up large language models and generative models would produce the kind of capabilities we’re seeing. I’ve had lots of discussions with Geoff — he’s on my Advisory Board at the Schwartz Reisman Institute — and this was an important moment for him in terms of how he understands the nature of the risk. If you listen to what he has to say, it’s not ‘I know for certain that there is an existential risk,’ it’s ‘There is so much uncertainty about the way these things behave that we should be studying that problem and not getting ahead of it.’ That is an important clarification.

SP: Pedro, when someone like Geoffrey Hinton rings the existential bell, does it not give you pause?
PD:
I’ve known Geoff for a long time. He’s a great researcher, but he’s also an anarchist from way back and a bit other-worldly, and I think we need to be careful about overinterpreting what he says. People need to know that most AI researchers do not think AI poses an existential threat. But of course, it’s interesting to try to understand why some people do believe that.

There are many researchers who don’t think we’ll ever get to human-level AI, which is quite possible. I do happen to think we’ll get there, but it won’t happen tomorrow. People need to understand that, number one, we are still very far from human-level AI. Geoff has said things like, “What if an AI wants to take over?” To which Meta’s Head of AI Yann LeCun responded, “But AIs don’t want anything.” It’s an algorithm; it’s not something beyond our control.

When you use machine learning, the AI does things you can’t predict, but it does them to optimize functions that you have determined. That’s where the debate needs to be: how we define and optimize those functions. AI is for solving problems that are intractable, meaning it would take exponential time to solve them exactly; that’s the technical definition. But it’s easy to check the solution.
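
A minimal sketch of that point, purely illustrative (the numbers and function names below are assumptions, not anything from the conversation): the search a learning system performs is unpredictable, but the objective it optimizes is something a developer wrote down, and any candidate answer is cheap to verify.

import random

# Developer-specified objective: pick a subset of `items` whose sum lands as
# close to `target` as possible. Exhaustive search is exponential in len(items),
# but scoring any one candidate takes a single line.
items = [12, 7, 3, 9, 14, 5, 8, 2]   # illustrative values
target = 30

def objective(subset):
    """Lower is better: distance between the subset's sum and the target."""
    return abs(sum(subset) - target)

def random_search(trials=10_000, seed=0):
    """The 'unpredictable' part: a stochastic search over candidates.
    Whatever it returns was steered entirely by `objective`."""
    rng = random.Random(seed)
    best, best_score = [], objective([])
    for _ in range(trials):
        candidate = [x for x in items if rng.random() < 0.5]
        score = objective(candidate)
        if score < best_score:
            best, best_score = candidate, score
    return best, best_score

solution, score = random_search()
print(solution, score)                # we cannot predict which subset comes back...
assert objective(solution) == score   # ...but checking it is trivial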

GH: I want to put the existential threat question in a broader lens, as well. It’s true that malicious use is something to be concerned about, but I don’t think the risks are around a ‘rogue AI’ that develops its own goals and, Terminator-style, sets out to kill us all.

The thing I worry about is our complex systems — our financial systems, economic systems and social systems. Autonomous agents are out there already, by the way: trading on our financial markets, participating in our labour markets, reviewing candidates for jobs. When we start introducing more and more of them into the world, how do we make sure they don’t wreak havoc on our systems?

SP: Jérémie, do you worry that we are entering Terminator territory?
JH:
A couple of things. First off, on the question of ‘These are just algorithms, what do they really want?’ and so on, this is where the entire domain of power-seeking comes up. As I indicated, this is an entire subfield in AI safety that is well researched. These objections are very robustly addressed, at least in my opinion. This is a real thing. And the closer you get to the centres of expertise at the world’s top labs — the Google DeepMinds and OpenAIs, the very labs building ChatGPT and the next-generation systems — the more you see the emphasis on this risk class.

A poll from a few months ago showed that 48 per cent of general AI researchers estimate a 10 per cent or greater probability of catastrophic risk from AI. Imagine if you were looking to get on a plane and 50 per cent of the engineers that built it said, ‘There is a 10 per cent chance that this plane is going to crash.’

SP: Yann LeCun recently tweeted the following: “We can design AI systems to be both super-intelligent and submissive to humans. I always wonder why people assume that intelligent entities will necessarily want to dominate. That’s just plain false, even with the human species.” Jérémie, why do you assume that if AI does become more intelligent than us, it will automatically want to conquer us?
JH:
This is actually not an assumption; it’s an inference based on a body of evidence in the domain of power-seeking. Pioneering work at frontier labs suggests that the default path for these systems is to look for situations to occupy and position themselves in. Basically, the systems seek high optionality, because that is useful for whatever objective they might be trained or programmed to pursue. As AI systems get more intelligent, the concern is that they will start to recognize and act more and more on these incentives.


 

SP: While you were speaking, Pedro had a big smile on his face. What’s behind that smile, Pedro?
PD:
I just have a hard time taking all of this seriously. The people who tend to be most hysterical about AI are the ones who are furthest from actually using it in practice. There’s room for a few of these people in academia; the world needs that. But once you start making important societal decisions based on them, you need to think twice.

I agree with Gillian that these Terminator concerns are taking attention away from the real risk we need to be talking about, which is the risk of malicious use. We are going to need something like ‘AI cops’ to deal with AI criminals. We have to face the problem of AI in the hands of totalitarian regimes, and democracies need to start making better use of the technology.

The biggest problem with AI today, not tomorrow, is that of incompetent, stupid AI making consequential decisions that hurt people. The mantra of the people trying to control AI is that we need to restrict it and slow it down, but it’s the opposite: Stupid AI is unsafe AI. The way to make AI safer is by making it smarter, which is precisely the opposite of what the moratorium letter is calling for.

GH: As indicated earlier, I signed that letter, and I signed it so that we would have these very conversations — not because I think it’s essential that we stop progressing. I don’t believe we’re on a precipice; but I do think it’s critical to think carefully about all this. These systems are being built almost exclusively inside private technology labs. I work with folks at OpenAI, so I know there is a lot of concern there about safety. They’re making decisions internally about how to train AI so it ‘speaks better’ to people, when to release it, to whom and what limits and guardrails should be put in place. These are things we should be deciding publicly and democratically, with expertise outside of the engineering expertise that is currently dominating this arena. As a social scientist, an economist and a legal scholar, I think about the legal and regulatory infrastructure that we need to build, and I don’t see us paying enough attention to that set of questions.

SP: When the printing press was invented, we didn’t know what impact it would have. We had to use it for a while in order to understand that. Same goes for the Internet, the cotton gin and the steam engine. Why is this technological moment any different from previous discoveries?
JH:
Because we are currently on a trajectory to build something potentially smarter than ourselves. That may or may not happen, but if it does, we’re going to find ourselves in a place where we just can’t predict anything important about how the future is going to unfold.

Throughout history, human intellectual capacity has been the one constant: we’re all born with biological thinking hardware, and that’s all we’ve had to work with. If that constant is surpassed, we just don’t have a point of reference, which is why it’s so important to track this and think deeply about where it might go.

PD: It’s true that AI introduces uncertainty because it is powerful and can be used for lots of different things, and we can’t possibly predict them all. But that is good! The best technology is like the examples you gave earlier. Most of the best applications are things no one could have anticipated. What happens is, we all work to optimize good applications and to contain bad ones. AI is still subject to the laws of physics and the laws of computation, and to the sociology of sociotechnical systems. We’re actually going to have a lot of control over it, even if there are aspects of it that we don’t yet understand.

A good analogy here is a car. As a user, you don’t feel an urgent need to understand exactly how the engine works just because it could blow up one day. That’s for the mechanic. What you do need to know is where the steering wheel and pedals are, so you can drive it. And AI does have a ‘steering wheel’ — it’s called the objective function, and we need to understand how to ‘drive’ that.

I very much agree that we need more than just the technologists thinking about this. The limiting factor in the progress of AI is actually not on the technical side, it’s around the human ability to use it and come to terms with it. That’s what’s going to set the right limits.
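
A minimal sketch of the ‘steering wheel’ idea, purely illustrative (the toy objectives below are assumptions, not anything discussed here): the same generic optimizer, pointed at two different objective functions, ends up in two very different places.

def gradient_descent(grad, x=0.0, lr=0.1, steps=200):
    """A generic optimizer: it has no goals of its own, it just follows `grad`."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Objective A: loss (x - 3)^2, so gradient 2(x - 3); the optimizer settles near 3
print(gradient_descent(lambda x: 2 * (x - 3)))   # ~3.0

# Objective B: loss (x + 7)^2, so gradient 2(x + 7); same optimizer, different destination
print(gradient_descent(lambda x: 2 * (x + 7)))   # ~-7.0

Change the objective and you change where the system goes; the optimizer itself has no destination of its own.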

SP: Gillian, do you think this technological moment is different from previous ones?
GH:
I do. What is critical here is the speed with which AI transforms entire markets. For example, in the legal domain, I’ve seen how, in just a few minutes, tools like ChatGPT can do what would usually take a lawyer a week. That is going to be very disruptive, and the potential scale is massive. Because it’s a general-purpose technology, it can and will show up everywhere.

To return to the analogy of the automobile, what I am concerned about is that we live in a world with copious regulation around how to build and drive automobiles. It took 50 years before we built all that regulatory structure. There was basically nothing in the beginning. And that’s just one part of our economy. I don’t think we can approach AI as we have with previous technologies and say, ‘Let’s just put it out there, find out how it works and then figure out how to regulate it.’ I think we have to be much more proactive about it.

SP: There appears to be consensus that AI must be developed in a responsible way. How do we do that?
PD:
We don’t even know what a real AI is going to look like, because we don’t have one yet. So trying to regulate in advance is almost guaranteed to be a mistake. What needs to happen is that governments and regulatory organizations have their own AI, whose job it is to deal with the AIs of the Googles and the Amazons and so on.

There is no fixed, old-fashioned set of regulations that will work with AI. We need something as adaptive on the government side as it is on the corporate side, so AIs can talk to each other. This is already starting to happen in the financial markets, because there is no choice. There’s a lot of bad activity going on and you’ve got to have the AI in place to deal with it.

GH: There are two things I want to pick up on. First, I agree that we’re going to need AI to regulate AI. I actually think we need to build that as a competitive sector unto itself. But it’s important to recognize that there are some building blocks that we don’t have in place currently that allow us to regulate other parts of the economy. For example, one thing we can do right now is create a national registration body — a registry system so that we have eyes on where AIs are, how they’ve been trained and what they look like.

I think this should be a government function. Every corporation in the country has to register with a government agency. They have an address on record; they have the name of someone who is responsible. The government can say, ‘Okay, we know you’re out there.’ We register our cars so we know where all the cars are. Right now we don’t have that kind of visibility into AIs. So that’s the starting point. People would have to disclose basic pieces of information about the models to government — not publicly, not on the Internet. That would give us visibility into this as a collective. It would also provide us with the tools needed if a dangerous method of training or a dangerous capability emerges. We don’t even have that basic infrastructure in place yet.

SP: Last word goes to Jérémie. How do we need to proceed?
JH:
I think it’s worth noting that the most advanced AI capabilities require giant processing-power budgets: OpenAI’s latest model cost on the order of hundreds of millions of dollars, and we’re seeing that cost rise and rise. That immediately implies a bunch of counter-proliferation levers. OpenAI has also led by example by inviting third parties to audit its models for behaviours like power-seeking and malicious capability. It would be great to see a lot more of that in the AI community.

 


Gillian Hadfield is a Professor of Law at the University of Toronto and Professor of Strategic Management at the Rotman School of Management. She is Director of the Schwartz Reisman Institute for Technology and Society at the UofT as well as AI Chair of the Canadian Institute for Advanced Research (CIFAR). Pedro Domingos is a Professor Emeritus of Computer Science and Engineering at the University of Washington and author of The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (Basic Books, 2018). Jérémie Harris is Co-founder of Gladstone AI and the author of Quantum Physics Made Me Do It: A Simple Guide to the Fundamental Nature of Everything (Viking, 2023).

This article is a condensed version of an interview from TVO’s The Agenda, hosted by Steve Paikin. Video of the entire conversation is available on the TVO website: www.tvo.org/theagenda

