‘Slaughterbots’: The AI Arms Race Begins – The American Spectator


While open-source artificial intelligence is still in its infancy, the military has explored more advanced AI for years, applying it to everything from gathering intelligence and acquiring targets to streamlining decision-making and analyzing data.

Of course, we’re not the only ones interested in using the technology: Drones equipped with AI are being employed in Ukraine; the Chinese People’s Liberation Army is investing just as much as the Pentagon in AI; and Russia’s Vladimir Putin said in 2017 that “whoever becomes the leader in this sphere will become the ruler of the world.” (READ MORE: AI Comes for Fast Food)

With potentially opposed nations competing to develop and accumulate AI weapons, we’ve entered the dawn of an AI arms race.

Concerns Surrounding AI Development in the Military

Armies have been using AI on the battlefield for a while, but as the technology develops, concerns surrounding lethal autonomous weapons (LAWS) and the use of AI to streamline decision-making processes have risen to the top of international discussions.

In 2020, a Turkish-manufactured drone carried out the first fully autonomous kill of a human target, a capability now seen frequently in Ukraine's ongoing war against Russia. Ukrainian drone companies rely on AI software to keep drones locked on preselected targets even when the drones lose contact with their human operators or the targets move. These improvements have enabled the Ukrainian military to "destroy Russian vehicles, blow up surveillance posts," and target specific building projects.

Meanwhile, in May, the Department of Defense clarified that while the United States does not currently possess any LAWS, it would be open to developing that technology if “U.S. competitors choose to do so.” It also insisted that all AI systems, including LAWS, must allow humans to exercise some level of judgment in every scenario.

It's certainly concerning to think that a machine with no moral sense could operate a drone according to parameters humans set in advance, with no regard for the specifics of the situation at hand. But even more worrying is the use of AI to make war-related strategic decisions.

In April, Michael Hirsch from Foreign Policy noted that the latest AI technology, generative pre-trained transformers (GPTs), promised to “utterly transform the geopolitics of war and deterrence.” A war fought between unmanned drones on behalf of humans could save lives, although it would be an economic drain. In addition, AI could cut down on the time the military takes to make decisions — including when to release an atomic bomb. (READ MORE: Will There Be a New Government Agency for AI?)

“AI-driven software could lead the major powers to cut down their decision-making window to minutes instead of hours or days,” Hirsch said. “[The military] could come to depend far too much on AI strategic and tactical assessments, even when it comes to nuclear war.”

Are ‘Killer Bots’ Immoral or a Necessity?

During a Senate hearing on July 25, Sen. Richard Blumenthal (D-Conn.), chairman of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, noted that the U.S. would need to invest in countermeasures to prevent AI from ever making the decision to start a nuclear war.

Yoshua Bengio, the founder and scientific director of the Montreal Institute for Learning Algorithms, agreed, adding that the United States needs to work with international partners to develop “highly secure and decentralized labs operating under multilateral oversight to mitigate an AI arms race.”

In a recent interview with MIT Technology Review, Bengio noted that “we need to make it immoral to have killer robots. We need to change the culture, and that includes changing laws and treaties. That can go a long way.”

Bengio's call for international cooperation will be especially important in the years to come, even as the United States still leads the world in developing AI technology.

Stuart Russell, a computer science professor at the University of California, Berkeley, pointed out during the July Senate hearing that, right now, the U.K. is perhaps the United States’ closest competitor in developing advanced forms of AI. Meanwhile, countries like China — which could pose a threat — are still years behind: “[T]hey have mostly been building copycat systems that turn out not to be nearly as good as the systems” U.S. companies are creating.

However, a report from the think tank Center for a New American Security suggests that although Chinese investment in AI currently focuses primarily on tracking and identifying its citizens by recognizing their faces, voices, and gaits, the capabilities now being developed could prove revolutionary down the line. "China provides little transparency on its military modernization efforts, including for AI," the report states, "which could someday lead to strategic surprise for the United States."

Meanwhile, Russia claimed its “first kill by artificial intelligence” in its war against Ukraine just a month ago. Putin clearly believes that becoming a global player in the near future requires investment in AI. (READ MORE: Are Robot Rights Next?)

And, of course, Putin is right. At the moment, no single country seems willing to call attention to its own interest in developing AI weapons, perhaps out of fear that doing so would truly start the race. But the very fact that so many major global players are investing in AI suggests we are witnessing the beginning of a global AI arms race, whether or not we are willing to admit it.




