By Sophy Macartney
Computers are passing the exascale threshold, performing as many calculations in a single second as an individual could in 31,688,765,000 years. In 2017, experts stated that “the national security implications of AI will be revolutionary, not merely different.” And that was six years ago, a lifetime in technology. AI is advancing far faster than regulation can keep pace. These advances can indisputably deliver great gains in efficiency and expand access to information, but alongside these opportunities comes a potential for disaster that must be weighed as AI policy moves forward.
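For scale, the headline figure follows from simple arithmetic, sketched below under the assumption (illustrative, not from any source) that a person performs one calculation per second:

```python
# Sanity-checking the exascale comparison. The one-calculation-per-second
# human rate is an illustrative assumption, not a measured figure.
exa_calcs_per_second = 10**18          # exascale: a quintillion calculations/s
seconds_per_year = 365.25 * 24 * 3600  # Julian year in seconds
years = exa_calcs_per_second / seconds_per_year
print(f"{years:,.0f} years")           # ~31,688,764,615, matching the rounded figure
```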
The current Oppenheimer era should push us to learn from past experience. Alongside experts in the field, director Christopher Nolan has warned of the eerie parallels between AI and nuclear weapons, except this time the threat is harder to track, because AI is easier to develop covertly than nuclear weapons are to build. AI experts are calling this their “Oppenheimer moment”: the time to take responsibility for mitigating unintended consequences amid the great advantages AI can offer. Regulation must be treated as at least as important as developing beneficial uses, if not more so.
Despite warnings, Pentagon officials chose to continue developing the next generation of AI because of the perceived risk of falling behind Russia and China. Sound familiar? Development has also been justified by pointing to the silver lining: the ways AI can genuinely help in national security situations.
Computers are not swayed by emotion and do not read data through the lens of confirmation bias, which may produce more logical decisions. Because AI can enhance reliable communication, support decision-making, and increase situational awareness, it could serve as a supplementary resource in high-stakes decisions, as opposed to being the sole decision-maker. AI etiquette is still up for debate, especially in military and nuclear weapons applications: no treaties or international agreements yet keep pace with AI advancements in these areas, if they ever will.
Despite AI’s undeniable upside, the dangers center on the fact that AI could lessen the need for humans in the decision-making process, leaving decisions to pre-programmed systems that could compound human error. AI poses threats at the security nexus when combined with nuclear weapons, especially given the lack of adequate regulation of AI advancements and uses.
Even with the general consensus that AI should not be in charge of missile-launch decisions, other nuclear applications raise concern.
Learning-based AI uses large amounts of data to train its decision-making, but training data may not account for every condition the system will face. If AI is applied to nuclear attack evaluations and relies on satellite imagery, environmental conditions like fog and rain can corrupt the imagery, producing incomplete and inaccurate inputs that lead to misinterpretation.
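The mechanism can be seen in a deliberately toy sketch, with invented weights and noise levels standing in for a trained model and bad weather; it illustrates the failure mode only and describes no real system:

```python
import numpy as np

# Toy linear "attack signature" classifier on synthetic satellite-pixel
# features. Weights, features, and noise scale are all invented for
# illustration; no real early-warning system works this simply.
rng = np.random.default_rng(0)
weights = rng.normal(size=16)   # stand-in for learned weights
scene = rng.normal(size=16)     # stand-in for clear-weather features

def flags_attack(features):
    # A positive linear score means the classifier flags an attack signature.
    return float(features @ weights) > 0.0

baseline = flags_attack(scene)

# Re-score the same scene 10,000 times through fog/rain-like noise and
# count how often the verdict flips from the clear-weather answer.
trials = 10_000
noise = rng.normal(scale=1.0, size=(trials, 16))
flips = sum(flags_attack(scene + n) != baseline for n in noise)
print(f"verdict flipped on {flips / trials:.1%} of noisy passes")
```

The point is not the specific flip rate, which depends entirely on the assumed noise, but that a model trained only on clean imagery has no principled answer for degraded inputs.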
Test missiles are fired frequently, and there have been more nuclear false alarms than actual attacks. Since nuclear weapons have been used in war only twice, there is little concrete data with which to train AI for nuclear weapons use, creating a greater risk of error, false positives, and skewed, unpredictable results.
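Base-rate arithmetic shows why false positives dominate when real attacks are vanishingly rare. In the sketch below, the prior and error rates are assumed for illustration and describe no actual early-warning system:

```python
# Bayes' rule with assumed, illustrative numbers.
prior_attack = 1e-6   # assumed chance a given alert window holds a real attack
sensitivity = 0.99    # assumed P(alarm | real attack)
false_alarm = 0.01    # assumed P(alarm | no attack)

p_alarm = sensitivity * prior_attack + false_alarm * (1 - prior_attack)
posterior = sensitivity * prior_attack / p_alarm
print(f"P(real attack | alarm) = {posterior:.5f}")  # ~0.00010
```

Under these assumptions, even a detector that is 99% accurate in both directions produces alarms that are genuine far less than 1% of the time.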
On the other hand, AI is changing conflict and raising the costs of military aggression. Since Russia’s invasion of Crimea in 2014, Ukraine has undertaken a massive technical modernization effort, fielding AI in the current war that in some ways surpasses Western military capabilities and has delivered sustained successes. Ukraine launched an AI platform in 2022 that enables it to use only the data it needs, and UAVs have helped identify war criminals and track troops. If military applications of AI can help push back against irresponsible nuclear states like Russia, regulating AI use in these cases becomes even trickier.
This introduces the idea of ethical AI: How do we determine that one use of AI is “good” while another military application is not? Who makes the call, what criteria should be followed, and when would an actor be condemned for misuse? Defining ethical AI use is tricky, but it is essential to establishing AI etiquette and what is acceptable, not just in nuclear uses but in national and human security contexts as well.
There has indeed been progress in AI ethics with UNESCO’s Recommendation on the Ethics of AI, the National Artificial Intelligence Initiative, and the Blueprint for an AI Bill of Rights, but these developments are not legally enforceable; they are more symbolic. Congress is working to pass a bill that would prohibit the use of artificial intelligence in U.S. military nuclear launch decisions, but this is not a comprehensive regulation strategy.
Traditional export control regimes were not designed for intangible and continuously developing technology like AI, so regulation is not straightforward. This follows the familiar trend of dual-use technologies proliferating faster than regulations. Not only does this contribute to the challenge of fitting AI and other emerging technologies into export controls, but Russia’s inclusion in the Wassenaar Arrangement makes it even more difficult to reach consensus on how to manage the proliferation of such technologies, including AI.
Although investments are being dedicated to AI research and development, there is a glaring gap in the attention given to legislation and horizontal regulatory change, and little prospect of a comprehensive approach to AI legislation replacing the current incohesive, sporadic one anytime soon. AI advancements create several battles. First, ensuring ethical and safe uses of AI in nuclear, national, and human security applications is key. Second, adequate management and regulation of which types of AI are permitted, and for whom, should be at the forefront of policy considerations. The benefits of AI cannot be fully realized until appropriate regulations are in place. It is a long road, but rejecting the “horse is already out of the barn” mentality will propel progress forward.