Line of Sight

Brass Tacks About Artificial Intelligence

By Robert C. Engen  - November 21, 2021

Content from Canadian Forces College

 

Here are a few considerations related to the military applications of artificial intelligence (A.I.) to use as a starting point for thinking about what it may mean for the profession of arms.

What It Is. Artificial intelligence is usefully defined as the science and engineering of making computers do the sorts of things that human minds can do.[1] The most important thing for service members to remember about A.I. in 2021 is that it cannot reason, understand, comprehend, or think in any sense that is meaningfully comparable to human beings. Data-driven algorithms are inherently brittle and cannot generalize.[2]

A.I. is just an algorithmic weighting scheme for probabilities. Any judgment (or lack thereof) behind it remains either depressingly human or distressingly opaque.
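
To make the "weighting scheme" point concrete, here is a minimal sketch in Python. It is illustrative only and is not drawn from any fielded system: a trained model is nothing more than learned weights multiplied against input features and squashed into a probability. The weights and inputs below are invented for the example.

import math

# A model's "judgment" is a weighted sum of inputs pushed through a
# squashing function to yield a probability between 0 and 1. Nothing in
# this arithmetic understands what the features mean.
def predict_probability(features, weights, bias):
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-score))

# Invented numbers: the weights come from training data, and the model
# applies them just as confidently to nonsense inputs as to sensible ones.
weights = [0.8, -1.2, 0.3]
print(predict_probability([1.0, 0.5, 2.0], weights, bias=-0.1))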

Expense. A.I. systems come with a huge price tag.[3] They cannot discern anything without computationally intensive training. In this new machine-learning era, the computational power required to keep up with the state of the art doubles every 3.4 months, and this “prodigious appetite” for computing power imposes hard limits on further improvement. The headline-grabbing “breakthroughs” in A.I. over the past decade have come from an economically unattractive method: running algorithms for more time on more machines, not running them more efficiently.[4] This comes at a staggering price in energy consumption and carbon emissions. Training one large neural network model on natural language processing tasks in 2019 generated as much CO2 as the total lifetime emissions of five automobiles, and training an A.I. accounts for only ten to twenty percent of the total cost of running it in real-world production settings.[5]
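
The scale of that appetite is easier to grasp with some back-of-envelope arithmetic. The short Python sketch below simply compounds the “doubling every 3.4 months” figure; the time spans chosen are illustrative, not taken from the cited studies.

# Compounding the doubling rate: compute demand grows by 2 ** (months / 3.4).
def compute_growth(months, doubling_period=3.4):
    return 2 ** (months / doubling_period)

print(f"~{compute_growth(12):.0f}x more compute after one year")     # ~12x
print(f"~{compute_growth(36):.0f}x more compute after three years")  # ~1,500x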

“Grid” versus “Battery” Approach. One of the most useful analogies for A.I. in recent years comes from Kai-Fu Lee, a technology entrepreneur and author of the recent book A.I. Superpowers. He characterizes the A.I. field in terms of how one distributes the “electricity” of A.I. across the economy. The “grid” approach commoditizes A.I., turning machine learning into a standardized service that any company can purchase and access via cloud computing platforms; the platforms act as the grid, performing machine-learning optimization on whatever data problems users pay to upload. The alternative is the “battery” approach, in which highly specific A.I. products and algorithms are developed for each use case and embodied within particular devices rather than hosted on the cloud.[6] Military systems that leverage A.I. will need either incredibly powerful onboard computational resources (the “battery” approach) or a stable and secure connection to the A.I. “grid” that runs the algorithms, as sketched below. Both approaches have dangers. The “grid” approach will no doubt be cheaper, but it will probably be prohibitively insecure and vulnerable for military field use.
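
The distinction can also be sketched in code. The fragment below is purely schematic: the endpoint URL, the function names, and the local_model object are hypothetical stand-ins for whatever cloud service or embedded model a real system would use.

import json
import urllib.request

def classify_via_grid(sensor_data, endpoint="https://example.com/v1/classify"):
    # "Grid" approach: ship the data to a cloud provider and wait for an
    # answer. This presumes a live, secure link back to the grid.
    request = urllib.request.Request(
        endpoint, data=sensor_data,
        headers={"Content-Type": "application/octet-stream"})
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

def classify_via_battery(sensor_data, local_model):
    # "Battery" approach: a trained model already embedded on the platform,
    # so inference works without connectivity but demands onboard compute.
    return local_model.predict(sensor_data)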

Contractors. As the complexity of a piece of equipment increases, so does the total support required to maintain it. Professor Daniel Lake, author of The Pursuit of Technological Superiority, noted that for a piece of equipment, “as the number of parts (its complexity) increases, the probability that they will all be working at the same time goes down … and in addition, complex equipment takes longer to repair when it fails” (the sketch below puts illustrative numbers to that point). Recent generations of military hardware have become so complex that their support needs have chained militaries to contractors to fulfill these functions.[7] A.I. systems may be prohibitively expensive to develop or own, and smaller armed forces will likely have to contract with the Big Tech companies for these capabilities rather than develop them in-house. Outsourcing itself widens various threat windows and arguably creates as many problems as it solves.[8]
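
Lake’s observation about parts and probabilities can be put in rough numbers. Assuming, purely for illustration, that every part works independently with the same probability p, the chance that all n parts work at once is p raised to the power n, and it collapses quickly as n grows:

def probability_all_working(p, n):
    # If each part works independently with probability p, the whole
    # system works only when all n parts do: p ** n.
    return p ** n

print(probability_all_working(0.99, 10))    # ~0.90
print(probability_all_working(0.99, 100))   # ~0.37
print(probability_all_working(0.99, 500))   # ~0.007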

 

Reliability. In August 2018, the science-themed webcomic xkcd, written by former NASA roboticist Randall Munroe, addressed software engineers:[9]

 

[Image: xkcd, “Voting Software”: https://imgs.xkcd.com/comics/voting_software.png]

 

Bugs, errors, and failures are inescapable in modern software, and they become much more likely as the complexity of these systems increases, particularly if time and resources are not spent on robust security certification and rigorous testing.[10] The more complex a system is, the more likely it is to fail in unpredictable ways, with many “downstream” consequences.[11] As economist Gary Smith has put it, complexity and reliability problems both plague A.I.: when we do not know how an A.I. system is reaching its conclusions (and we normally do not), “we have no way of assessing whether [errors] are logical mistakes, programming errors, or other problems … the inputs are numerous, the process is mysterious, and the output is dubious.”[12]

Deployment. The “grid” approach to A.I. will probably work for institutional number crunching, intelligence, analysis, and cyber. But when it comes to weapon systems for field forces, that dog will not hunt. The security and connectivity issues on an electromagnetic (EM) battlefield make a “grid” approach that requires calling home from the field for A.I. services a non-starter. We will need a “battery” approach at the sharp end if we want our forces ever to operate in a contested EM environment. Barring unforeseen fundamental breakthroughs (which do not happen predictably),[13] it will take a while to adapt A.I. systems to kinetic warfighting, at least on small platforms like individual soldiers or fighting vehicles (larger platforms such as warships are a different story). Complex, computationally intensive A.I. systems therefore also need to be ruggedized for real-world use if they are going forward of headquarters. Dr. Sarah Taber, an industrial safety expert, has highlighted how poorly fruit-picking machines that combine A.I. and robotics have performed in actual service.

If the problems of marrying artificial intelligence and robotics in fruit-picking machines are bad, the problems for the military applications of A.I. are far worse. The expense of a “battery” approach and the problems of software reliability make the leap from development to operational implementation of A.I. systems a vast one, and to date there have been few successes in bringing these technologies to maturity.[14]

Robert C. Engen is an assistant professor in the Department of Defence Studies at the Canadian Forces College and deputy chair of Military Planning and Operations. He covers most of CFC’s content on artificial intelligence. He is the author of three books on combat motivation and of the Canadian Land Warfare Centre’s forthcoming novel of future warfare, Crisis in Baltika. He is also the official regimental historian for Princess Patricia’s Canadian Light Infantry.

 

Notes

[1] Margaret A. Boden, “Artificial Intelligence,” in What’s Next?, ed. Jim Al-Khalili (London: Profile Books, 2017), 1.

[2] Mary Cummings, “Artificial Intelligence and the Future of Warfare,” Research Paper (London: Chatham House, 2017), 7.

[3] Meredith Broussard, Artificial Unintelligence: How Computers Misunderstand the World (Cambridge: MIT Press, 2018), 191.

[4] Neil C. Thompson, et al., “The Computational Limits of Deep Learning,” MIT Initiative on the Digital Economy Research Brief (MIT, 2020), https://ide.mit.edu/wp-content/uploads/2020/09/RBN.Thompson.pdf.

[5] Emma Strubell, Ananya Ganesh, and Andrew McCallum, “Energy and Policy Considerations for Deep Learning in NLP,” in 57th Annual Meeting of the Association for Computational Linguistics (Florence, Italy, 2019), https://arxiv.org/abs/1906.02243.

[6] Kai-Fu Lee, A.I. Superpowers: China, Silicon Valley, and the New World Order (New York: Houghton Mifflin Harcourt, 2018), 94–95.

[7] Daniel R. Lake, The Pursuit of Technological Superiority and the Shrinking American Military (London: Palgrave Macmillan, 2019), 52–54.

[8] André Tchokogué, Jean Nollet, and Julie Fortin, “Outsourcing Canadian Armed Forces Logistics in a Foreign Theatre,” Canadian Journal of Administrative Sciences 32, no. 2 (2015): 113–27; Paul R. Camacho, “Privatization of Military Capability Has Gone Too Far: A Response to Lindy Heinecken’s ‘Outsourcing Public Security: The Unforeseen Consequences for the Military Profession,’” Armed Forces & Society 41, no. 1 (2015): 174–88.

[9] Randall Munroe, “Voting Software,” Webcomic, xkcd, August 8, 2018, https://xkcd.com/2030/.

[10] Bruce Schneier, Click Here to Kill Everybody: Security and Survival in a Hyper-Connected World (New York: Norton, 2018), 34–35.

[11] Kartik Hosanagar, A Human’s Guide to Machine Intelligence (New York: Viking, 2019), 61.

[12] Gary Smith, The A.I. Delusion (Oxford: Oxford University Press, 2018), 81.

[13] Kai-Fu Lee, A.I. Superpowers, 91–92.

[14] Cummings, “Artificial Intelligence and the Future of Warfare,” 9.

