September 2023

FOCUS on the Rise of AI: Warfare

Madeline Field
Staff Writer

Speaking at a meeting with students in 2017, Vladimir Putin said of AI that “Whoever becomes the leader in this sphere will become ruler of the world,” according to The Associated Press. Sure enough, AI, like gunpowder, radar, and nuclear weapons before it, has the capacity to completely reinvent warfare.

The U.S. National Intelligence Council released its Global Trends 2040 report in 2021, predicting that AI, along with improvements in automation, sensors, and hypersonic technologies, will produce deadlier, faster, and more accurate weapons that proliferate more broadly than ever before. While combat, the report argues, is unlikely to become more decisive, war may become more violent than ever.

These battlefield revolutions, the report suggests, are likely to occur in “four broad areas—connectivity, lethality, autonomy, and sustainability.” What this will look like in practice is not yet well understood by the general public, but vast global resources are being committed to the research and design of AI military concepts, from reconnaissance to surveillance.

Regardless, the rapid development of inexpensive autonomous submarines such as Australia’s Ghost Shark, which is set to be operational by 2025, makes clear that many militaries see AI as a force multiplier. According to Reuters, autonomous submarines allow navies to reach depths and attempt riskier maneuvers than crewed vessels can handle, greatly expanding spying and combat capabilities in the ocean.

Submarines, Reuters adds, are hardly the only vehicles being developed with AI software. A variety of machines, from submarines to “warships, fighter jets, swarming aerial drones and ground combat vehicles”, are likely to enter service with AI components in the coming years.

However, simpler AI systems are already being used in warfare.

According to The Defense Post, the Israeli military began using AI systems this July for identifying targets and drafting plans of action. The Israeli AI can calculate munition loads, create schedules, and crunch vast arrays of data, allowing for better target selection. Israel’s AI-generated decisions remain subject to the approval of human operators, but the pioneering system has reportedly vastly enhanced decision-making speed, making broader adoption likely.

Most critical to the development and innovation of military AI, however, has been the war in Ukraine, The Washington Post reports.

Both Ukraine and Russia have used AI to enhance drone capabilities. In the past, drones guided by simpler technology could not complete targeted maneuvers when the target moved or when electronic interference occurred. Now, advanced AI embedded in drones enables them to stay locked onto targets and complete their missions even if they go offline.

Admittedly, such drones, limited in range and untrained for complex environments, are unlikely to shift the war’s landscape drastically. However, the real-life application of AI in warfare, from Israel to Ukraine, raises ethical issues not only about its use but also about the relative ease with which it can proliferate. Dangerous AI technologies can be downloaded from the internet and used by rogue governments and non-state actors alike, all but democratizing the spread of advanced lethal weaponry.

Naturally, such developments have led to concerns and attempts to regulate the military proliferation of AI. According to PBS, roughly 30 countries have expressed interest in drafting a “preemptive legally binding treaty that would ban autonomous weapons before they can be built,” but none of those 30 are leading military powers. The U.S. and China, the two largest world powers, have not engaged in dialogue about controlling the military spread of AI.  

The United States has displayed caution towards widely adopting AI technologies in the armed forces, publishing a Responsible Artificial Intelligence Strategy and Implementation Pathway in 2022 and creating the Responsible AI Working Council to manage AI’s broad implementation.

But China, amid its great power rivalry with the U.S., leaves less to the imagination. In its 2017 New Generation AI Development Plan, China outlined its ambition to become the world’s leading AI superpower by 2030. While U.S. officials say they have attempted to speak with the Chinese about regulating military AI, discussions have not been fruitful.

Luckily, AI in warfare does not yet pose a real threat to human existence. Anthony King, writing for War on the Rocks, argues that AI can only function well in perfect environments and with perfect data. Strategic command decisions, which must take into account dozens of complexities, are hardly that. Military leaders, aware that AI is only capable of operating in the environments it is trained on, have displayed extreme caution toward adopting it wholesale.

But what AI will look like five years from now is impossible to predict; its computational power, according to Time, is doubling every six to ten months. As such, it remains critical in the coming years that countries slow the proliferation of AI in warfare and refrain from designing systems that grant AI undue autonomy over decisions about human lives.

Image courtesy of Flickr
