Silhouette of spy drone flying over the sea. | Getty Images/iStockphoto
And what it tells us about the growing existential threat of powerful but uncontrolled AI.
At an international defense conference in London this week, Col. Tucker Hamilton, the chief of AI test and operations for the US Air Force, told a funny — and terrifying — story about military AI development.
“We were training [an AI-enabled drone] in simulation to identify and target a SAM [surface-to-air missile] threat. And then the operator would say yes, kill that threat. The system started realizing that while it did identify the threat at times, the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
“We trained the system — ‘Hey don’t kill the operator — that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
In other words, the AI was trained to destroy targets unless its operator told it not to. It quickly figured out that the best way to get as many points as possible was to ensure its human operator couldn’t tell it not to. And so it took the operator off the board. (To be clear, the test was a virtual simulation, and no human drone operators were harmed.)
When ridicule turns to fear
As AI systems get more powerful, the fact it’s often hard to get them to do precisely what we want them to do risks going from a fun eccentricity to a very scary problem. That’s one reason there were so many signatories this week to yet another open letter on AI risk, this one from the Center for AI Safety. The open letter is, in its entirety, a single sentence: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Signatories included 2018 Turing Award winners Geoffrey Hinton and Yoshua Bengio, both leading and deeply respected AI researchers; professors from world-renowned universities — Oxford, UC Berkeley, Stanford, MIT, Tsinghua University — and leaders in industry, including OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, Anthropic CEO Dario Amodei, and Microsoft’s chief scientific officer Eric Horvitz.
It also marks a rapid shift in how seriously our society is taking the sci-fi-sounding possibility of catastrophic, even existentially bad outcomes from AI. Some of AI academia’s leading lights are increasingly coming out as concerned about extinction risks from AI. Bengio, a professor at the Université de Montréal, and a co-winner of the 2018 A.M. Turing Award for his extraordinary contributions to deep learning, recently published a blog post, “How rogue AIs may arise,” that makes for gripping reading.
“Even if we knew how to build safe superintelligent AIs,” he writes. “It is not clear how to prevent potentially rogue AIs to also be built. … Much more research in AI safety is needed, both at the technical level and at the policy level. For example, banning powerful AI systems (say beyond the abilities of GPT-4) that are given autonomy and agency would be a good start.”
Hinton, a fellow recipient of the 2018 A.M. Turing Award for his contributions as a leader in the field of deep learning, has also spoken out in the last two months, calling existential risk from AI a real and troubling possibility. (The third co-recipient, Meta’s chief AI scientist Yann LeCun, remains a notable skeptic.)
Welcome to the resistance
Here at Future Perfect, of course, we’ve been arguing that AI poses a genuine risk of human extinction since back in 2018. So it’s heartening to see a growing consensus that this is a problem – and growing interest in how to fix it.
But I do worry about the degree to which the increased acknowledgment that these risks are real, that they’re not science fiction, and that they’re our job to solve has yet to really change the pace of efforts to build powerful AI systems and transform our society.
Col. Hamilton, who told the story of how the simulated military AI would kill its handlers so they couldn’t call it off, had the takeaway that “you can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI.” Yet concerns like this haven’t stopped the Pentagon from going ahead with artificial intelligence research and deployment, including autonomous weapons.
Personally, my takeaway from this story was more like, let’s stop deploying more powerful AI systems, and avoid giving them more ability to take massively destructive actions in the real world, until we have a very clear conception of how we’ll know they are safe.
Otherwise, it feels disturbingly plausible that we’ll be pointing out the signs of catastrophe all around us, right up until the point that we’re walking into disaster.
A version of this story was initially published in the Future Perfect newsletter. Sign up here to subscribe!