Sam Altman, chief executive officer and co-founder of OpenAI, during a Senate Judiciary Subcommittee hearing in Washington, DC, on May 16, 2023. | Eric Lee/Bloomberg via Getty Images
There’s something missing at the heart of the conversation about AI.
Recently, a number of viral stories — including one by Vox — described an Air Force simulation in which an autonomous drone identified its operator as a barrier to executing its mission and then sought to eliminate the operator. This story featured everything that prominent individuals have been sounding the alarm over: misaligned objectives, humans out of the loop, and an eventual killer robot. The only problem? The “simulation” never happened — the Air Force official who related the story later said that it was only a “thought exercise,” not an actual simulation.
The proliferation of sensationalist narratives surrounding artificial intelligence — fueled by interest, ignorance, and opportunism — threatens to derail essential discussions on AI governance and responsible implementation. The demand for AI stories has created a perfect storm for misinformation, as self-styled experts peddle exaggerations and fabrications that perpetuate sloppy thinking and flawed metaphors. Tabloid-style reporting on AI only serves to fan the flames of hysteria further.
Such common exaggerations ultimately detract from effective policymaking aimed at addressing both immediate risks and potential catastrophic threats posed by certain AI technologies. For instance, one of us was able to trick ChatGPT into giving precise instructions on how to build explosives made out of fertilizer and diesel fuel, as well as how to adapt that combination into a dirty bomb using radiological materials.
If machine learning were merely an academic curiosity, we could shrug this off. But as its potential applications extend into government, education, medicine, and national defense, it’s vital that we all push back against hype-driven narratives and put our weight behind sober scrutiny. To responsibly harness the power of AI, it’s essential that we strive for nuanced regulations and resist simplistic solutions that might strangle the very potential we’re striving to unleash.
But what we are seeing too often is a calorie-free media panic where prominent individuals — including scientists and experts we deeply admire — keep showing up in our push alerts because they vaguely liken AI to nuclear weapons or the future risk from misaligned AI to pandemics. Even if their concerns are accurate in the medium to long term, getting addicted to the news cycle in the service of prudent risk management becomes counterproductive very quickly.
AI and nuclear weapons are not the same
From ChatGPT to the proliferation of increasingly realistic AI-generated images, there’s little doubt that machine learning is progressing rapidly. Yet there’s often a striking lack of understanding about what exactly is happening. This curious blend of keen interest and vague comprehension has fueled a torrent of chattering-class clickbait, teeming with muddled analogies. Take, for instance, the pervasive comparison likening AI to nuclear weapons — a trope that continues to sweep through media outlets and congressional chambers alike.
While AI and nuclear weapons are both capable of ushering in consequential change, they remain fundamentally distinct. Nuclear weapons are a specific class of technology developed for destruction on a massive scale, and — despite some ill-fated and short-lived Cold War attempts to use nuclear weapons for peaceful construction — they have no utility other than causing (or threatening to cause) destruction. Moreover, any potential use of nuclear weapons lies entirely in the hands of nation-states. In contrast, AI covers a vast field ranging from social media algorithms to national security to advanced medical diagnostics. It can be employed by both governments and private citizens with relative ease.
As a result, regulatory approaches for these two technologies take very different forms. Broadly speaking, the frameworks for nuclear risk reduction come in two distinct, and often competing, flavors: pursuing complete elimination and pursuing incremental regulation. The former is best exemplified by the Treaty on the Prohibition of Nuclear Weapons, which entered into force in 2021 and effectively banned nuclear weapons under international law. Although it is unlikely to yield tangible steps towards disarmament in the short term — largely because no current nuclear powers, including the US, Russia, or China, have signed on — the treaty constitutes a defensible use case for a wholesale ban on a specific existential technology.
In contrast, the latter approach to nuclear regulation is exemplified by New START — the last remaining bilateral US-Russia nuclear arms control agreement — which limited the number of warheads both sides could deploy, but in doing so enshrined and validated both countries’ continued possession of nuclear weapons.
The unfortunate conflation of AI and nuclear weapons has prompted some advocates to suggest that both of these approaches could be adapted to the regulation of AI; however, only the latter translates cleanly. Given the ubiquity of artificial intelligence and its wide range of practitioners, regulation must focus on how the technology is applied rather than attempt a wholesale ban. Trying to regulate artificial intelligence indiscriminately would be akin to regulating the concept of nuclear fission itself. And, as with most tools, AI is initially governed by the ethical frameworks and objectives of its developers and users (though a model pursuing misaligned objectives could diverge from human-intended goals): the technology is neither inherently good nor evil. Philosophers, ethicists, and even the pope have argued that the same cannot necessarily be said of nuclear weapons, whose mere possession constitutes a standing threat to kill millions of people.
In contrast to a wholesale ban, the most tangible risk reduction efforts surrounding nuclear weapons over the past several decades have come through hard-won negotiations and international agreements surrounding nuclear testing, proliferation, and export controls. To that end, if we draw lessons from the decades of nuclear arms control, it should be that transparency, nuance, and active dialogue matter most to meaningful risk reduction.
Others call attention to potential extinction-level risks, asking that these be taken just as seriously as those from nuclear weapons or pandemics. OpenAI CEO Sam Altman, for example, along with his fellow CEOs from Google DeepMind and Anthropic and several prominent AI researchers, signed a recent open letter warning that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
While it is essential not to dismiss altogether those genuinely worried about catastrophic risks, leveraging such towering claims in every conversation distracts from the grounded conversations necessary to develop well-informed policies around AI governance. There are genuine catastrophic risks surrounding AI that we might encounter: rogue actors using large AI models to dismantle cybersecurity around critical infrastructure; political parties using disinformation at scale to destabilize fragile democratic governments; domestic terrorists using these models to learn how to build homemade weapons; and dictatorial regimes using them to surveil their populations or build dystopian social credit systems, among others.
But by labeling AI as an “extinction-level” threat, the conversation around such risks gets mired in unprecedented alarmism rather than focusing on addressing these more proximate — and much more likely — challenges.
Do we really need — or want — a “Manhattan Project” for AI safety?
These existential concerns have provoked calls for a Manhattan Project-like undertaking to address the “alignment problem,” the fear that powerful AI models might not behave in the ways we intend, or to advance mechanistic interpretability, the ability to understand the function of each neuron in a neural network.
“A Manhattan Project for X” is one of those clichés of American politics that seldom merit the hype. And AI is no exception. Many people have called for large-scale governmental research projects targeting potential existential risks resulting from an alignment problem. Such projects demand vast investments without offering concrete solutions and risk diverting resources from more pressing matters.
Moreover, the “Manhattan Project”-like approach is a wholly inappropriate analogy for what we actually need to make AI safer. As historian Alex Wellerstein has written, the Manhattan Project was undertaken with virtually zero external oversight in near-complete secrecy, such that only a handful of people had a clear view of the goal, while thousands of the individuals actually doing the hands-on work didn’t even know what it was they were building. While the Manhattan Project did ultimately accomplish its goal, hindsight obscures the fact that the project itself was a tremendous financial and technological gamble with far-reaching consequences that could not have been foreseen at its inception.
Furthermore, while the Manhattan Project’s ultimate goal was relatively singular — design and build the atomic bomb — AI safety encompasses numerous ambiguities, from what “mechanistic interpretability” means to what “value alignment” requires. Developing a thorough understanding of these concepts demands academia’s exploratory capabilities rather than an exploitation-oriented mega-project.
Another problem with a Manhattan Project-like approach for “AI safety,” though, is that ten thousand researchers have ten thousand different ideas on what it means and how to achieve it. Proposals for centralized government-backed projects underestimate the sheer diversity of opinions among AI researchers. There is no one-size-fits-all answer to what exactly defines “interpretability” or how to achieve it; discussions require meticulous consideration rooted in diverse perspectives from ethicists and engineers to policymakers themselves. Bureaucracy-laden mega-projects simply cannot offer the freedom of exploration needed to surmount current theoretical challenges.
While pouring funds into government-backed research programs may seem advantageous in theory, real progress demands nuance: Academic institutions boast a wealth of expertise when it comes to exploring and iterating novel concepts, fine-tuning definitions, and allowing projects to evolve organically. This mode of exploration is especially appropriate given that there exists no consensus concerning what the end goal for such AI safety projects ought to be; therefore, funneling funds toward top-down, singular-aim initiatives seems disproportionate, if not outright detrimental.
The path forward
The prevailing alarmist sentiment is inadvertently diverting attention from efforts to enhance our capacity for responsible technological governance. Instead of dystopian nightmares à la the Terminator, a wiser approach would prioritize creating stringent risk management frameworks and ethical guidelines, fostering transparent operations, and enforcing accountability within AI applications. Some open letters raise genuine concerns but suffer from overly dramatic language — and dampen innovation in the process.
Acknowledging these issues while steering clear of speculation would promote a more precise understanding of AI in the public conversation. But what it would not generate is clicks, likes, and retweets.
Various recommendations have already been outlined for responsible governance of AI: instituting stronger risk management frameworks and liability regimes; implementing export controls; increasing investments in standard-setting initiatives; and deploying skilled talent within the government, among others.
Building on these suggestions, there are several additional measures that could effectively bolster AI governance in the face of emerging risks.
First, the government must limit abuse across applications by enforcing existing laws, such as those governing data privacy and discrimination. Then it should establish a comprehensive “compute governance” framework to regulate access to the infrastructure required to develop powerful models like GPT-4, though it is important to balance that framework with the needs of open source development.
Second, it is paramount that we implement retention and reproducibility requirements for AI research. By doing so, researchers and technology users would not only be able to reproduce study findings in an academic context but could also furnish evidence in litigation arising from misuse or negligent application of AI systems.
Third, addressing data privacy reform is essential. This involves updating existing data protection regulations and adopting new measures that protect user privacy while ensuring responsible AI development and deployment. Such reforms must strike a balance between maintaining data security, respecting individuals’ privacy rights, and fostering innovation.
Fourth, there should be a strategic shift in the allocation of National Science Foundation (NSF) funding toward responsible AI research. Currently, resources are directed primarily toward enhancing capabilities — what if we reversed this investment pattern and prioritized safety-related initiatives that may lead to more sustainable innovations and fewer unintended consequences?
Last but not least, the United States must modernize its immigration system to attract and retain top AI talent. China has been explicit in its desire to be the world’s leader in AI by 2030. With the best minds working on AI here, we will be able to design it responsibly and set the rules of the road.
Developing effective policy measures also depends on strong collaborations between academia and industry partners worldwide. By instituting new frameworks to foster accountability and transparency within these collaborations, we can minimize risks while proactively addressing issues as they arise.
By refocusing the heart of the conversation to better balance critical considerations against the desire for progress in unexplored areas, we might lay the foundations for practical policies that make a difference. We should prioritize targeted regulation for specific applications — recognizing that each domain comes with its own set of ethical dilemmas and policy challenges.
Simultaneously, in eschewing sensationalistic rhetoric, we must not dismiss legitimate concerns regarding the alignment problem. While there may not be policy solutions immediately available to tackle this issue, governments still have a critical role to play in spearheading research projects aimed at better understanding the long-term risks as AI becomes more deeply integrated into society.
Our organization — the Federation of American Scientists — was founded over 75 years ago by many of the same scientists who built the world’s first atomic weapons. After the devastating bombings of Hiroshima and Nagasaki, they created an organization committed to using science and technology to benefit humanity and to minimize the risks of global catastrophic threats. These individuals understood that true risk reduction was best achieved through collaborative policymaking based on factual and clear-eyed analysis — not sensationalism.
By acknowledging that artificial intelligence is an ever-evolving tool imbued with ethical considerations too complex for a top-down, one-size-fits-all solution, we can chart a more robust course toward sustainable progress. To that end, instigating constructive dialogue focused on responsible governance and ethics — rather than fetishizing dystopian conjecture — provides the requisite foundation to harness AI’s tremendous potential as an engine of change guided by sound principles and shared human values.
Divyansh Kaushik is the associate director for emerging technologies and national security at the Federation of American Scientists, and holds a PhD in machine learning from Carnegie Mellon University.
Matt Korda is a senior research associate and project manager for the Nuclear Information Project at the Federation of American Scientists, where he co-authors the Nuclear Notebook — an authoritative open source estimate of global nuclear forces and trends.