Dealing with ChatGPT 4.0 is not particularly helpful for gaining knowledge of what AI represents or will represent. Like the Stockfish chess program, or even AlphaZero, it is a fairly primitive, limited AI program: a Neanderthal of AI development, which is now being made accessible to the public in an apparently advertising-laden manner, so that people think they are already up to date. At the same time, technological development has progressed significantly further. To understand this, we recommend the book „The Age of AI“ by Henry Kissinger, Eric Schmidt and the MIT computer scientist Daniel Huttenlocher. At the same time, the authors admit that a lot is not yet foreseeable. It is also surprising that the book is only as thick as a thumb, because Henry Kissinger prefers to write epic books that are at least 1,000 pages long, be it “Memoirs” or “Diplomacy”. The question comes up whether Henry was put in a straitjacket or whether the book was already written by ChatGPT. In any case, one must read very carefully, as each sentence is compressed information. The first chapter alone is an eye-opener and shows that there are already many AI programs vastly more capable than their predecessors. Take biomedicine, where it is not six types of chess pieces with limited abilities and interactions that are simulated, but millions of DNA sequences and molecules, including all their possible combinations and properties, from which an AI identified the antibiotic halicin with pinpoint accuracy. The antibiotic works, although to this day bioscientists do not know how the AI reached its conclusion or exactly how the antibiotic itself works. Now the decoding of the AI black box and its neural logic can begin; maybe in the future another AI will be needed to decode the logic of the first AI. Exciting.
Above all, however, Kissinger and his co-authors ask to what extent AI will soon revolutionize entire military strategies and the nature of warfare, if the AlphaZero program already transcends previous linear human logic in chess and could perform previously impossible and unimaginable variations on the battlefield and launch surprise attacks. In the Global Review article „The Second or Third Nuclear Age, Hermann Kahn's Escalation Ladder and Chess“, in connection with the CSBA study „Rethinking Armageddon“ published at the time, we had already asked to what extent military strategists and the military are still capable of mastering the increasingly complex battlefields of the future. We also thought at the time that this would no longer be possible without the use of AI, even though AlphaZero had not yet been developed. Citation:
„During the Cold War, chess players were seen as the avant-garde of logical thinking: not only the players as rational, analytical, scientific creatures, but also the game itself. It was the era when the bipolar superpower competition was symbolized in chess contests, Bobby Fischer against Spassky or later Kasparov, when everyone knew you had two rational thinkers on both sides and everything was calculable. Therefore I want to claim that chess was the appropriate kind of thinking in the First Nuclear Age of the Cold War. You had a bipolar structure (black and white, US versus the Soviet Union), a limited number of chess pieces with limited mobility and also limited skills, escalation potential and capacities, while a chess game in the Second or Third Nuclear Age would have more players (multipolarity) with many more chess pieces (weapons), which would also have more skills, mobility and escalation potential, and would be played not on one playground but on different levels of playgrounds, unlike the old and familiar chess. The old chess would correspond with the First Nuclear Age, Hermann Kahn's escalation ladder or Kissinger's brinkmanship; the new chess would be played as in Star Trek, with many players, on different playgrounds (theaters of war) and with many more chess pieces which have much more interaction, mobility and skills. And the other new parameter the CSBA points out is that you might not have two rational thinkers on both sides, as a result of new studies about cognitive thinking and the fact that the new world is multipolar and there are no longer only two actors. This is the new situation.
How to deal with this situation? What is needed is the promotion of military think tanks that think through this new situation. As computers are prevailing over human beings not only in chess, a new strategy academy has to be equipped with the most advanced artificial intelligence technology and with programmers who interact with strategic thinkers and can work out a network-centric strategy and a flexible escalation ladder, maybe with algorithms based on game theory or other models. However, to rely on the old strategic thinking, the old theories and strategies, and inflexible bipolar, binary, under-complex escalation ladders is the recipe for a new Armageddon and for a disaster.
The Revolution in Military Affairs (RMA) in the Gulf War surprised China and Russia, who did not think that the US would win this war so quickly and easily. The book “The Commanders” by Bob Woodward also exemplifies the RMA, the new reconnaissance-strike capability and the focus on attacking the opponent's C4I systems. That was the wake-up call for the two military powers China and Russia, catalyzed by the Yugoslavia war. Since then, their reconnaissance-strike capability has rapidly improved, while the USA, conditioned by the War on Terror, focused on minor opponents, on wars against adversaries like the Taliban or Saddam Hussein rather than against technologically savvy major powers like China or Russia.
For China, so far there is only a concept, AirSea Battle, and a strategy, Offshore Control. For a Sino-American war there do not seem to be many other strategic options for the USA. The CSBA study Rethinking Armageddon points to this Achilles heel. Thus, strategically, there is no such thing as a real strategy for such a war or for deterrence against Russia or China. In addition, no comprehensive strategy has yet been developed that would have modernized Hermann Kahn's escalation ladder for a militarily and technologically more complex multipolar world. Whether the USA today has ingenious strategic thinkers who are intellectually flexible enough is the question, or whether US strategists think more like the French military before WW II and hold on to old strategies, a mental Maginot Line. Michael O'Hanlon's book The Senkaku Paradox, CSBA's studies AirSea Battle and Rethinking Armageddon, and T.X. Hammes' Offshore Control are indicators that the USA has just started to rethink and modernize its military strategies. To make the point: I get more and more the impression that strategic thinkers, politicians and societies cannot keep pace with all the new technologies, RMAs and whatever else, which makes the situation more unstable and dangerous. Equilibriums are harder to achieve as we are in a revolutionary transition period to a new world order.“
The Second or Third Nuclear Age, Hermann Kahn's Escalation Ladder and Chess
The US military is probably already developing or using AI for the development of military strategies, battle plans and new types of combined multidimensional warfare. Of course, this will be a state secret of the first order. It would be interesting to know whether the surprise victories of the Ukrainians are perhaps the result of the first AI-generated battle plans or even of an AI-generated military strategy. Is the Ukraine war the first AI war? Still speculation; no one has asked this question in the media yet, but such an AI will probably come at some point in the future. And with the AI of ChatGPT 4.0 you will not get very much further.
Age of AI, military strategies and AI surprise victory in Ukraine
Do you know the book and contents of the book „The Age of AI“ by Henry Kissinger and Eric Schmidt? The authors show that the Stockfish chess program, which was already considered unbeatable but was based on linear human logic, the input of past games and the limited number and functions of the six types of chess pieces, was surpassed by AlphaZero, which had only the general rules of chess as input but then developed completely new game options that initially contradicted the previous logic, such as sacrificing the queen, which in conventional chess was considered the most valuable piece to be protected; despite her sacrifice, AlphaZero carried out previously unknown variations and surprise attacks, which then led to victory. The authors speak of an AI black box that transcended the previously known human linear logic. They then imagined that if this AI were used in the military, warfare and military strategies could change in a revolutionary way, but there was a risk that the AI could then sacrifice large cities or millions of people in a war in order to win. To what extent is AI already being used in the US military for military strategies and new concepts of warfare, and are there already new AI-generated military strategies, also taking into account the questions of the CSBA study Rethinking Armageddon, which says that the Cold War followed more of a binary, human-linear logic like the old chess of Stockfish, while the Second Nuclear Age would need an AI similar to AlphaZero due to the multitude of new weapons, interactions and actors?
ChatGPT 4.0
„The Age of AI“ is a book that I don’t know personally because, to the best of my knowledge, it didn’t exist as of September 2021. However, it is possible that the book has since been published. The topics and examples you describe, such as the development of Alpha Zero compared to Stockfish and possible applications of AI in the military field, are interesting and relevant. AI is already being used in the US military in a variety of areas, including improving reconnaissance and surveillance systems, increasing the accuracy of weapon systems, and automating logistical processes. It is possible that in the future AI will also play a role in the development of military strategies and concepts of warfare. Indeed, CSBA’s Rethinking Armageddon study addresses some of the challenges associated with the use of AI in warfare. The study suggests that the Second Nuclear Age may require a type of AI like AlphaZero to develop effective strategies due to the increasing complexity of the security environment, the multitude of new weapons, interactions and actors. It’s important to note that the development and use of AI in the military brings with it ethical and security challenges, as you raised in the example of AlphaZero and the possible consequences of its use in war. While such an AI “black box” that goes beyond human linear logic could lead to revolutionary military strategies, it also poses the risk of the AI making decisions that could have serious humanitarian consequences. It is therefore important to carefully consider the development and use of AI in the military and to develop international norms and standards to mitigate such risks.
While Kissinger, Schmidt and the MIT professor already describe in more detail what is and is not possible with AI and banish science-fiction scenarios such as the Terminator with its Skynet to the realm of legend, a new book has now been published that also clears up some myths about what AI is supposed to be able to do in the military field. The author Paul Scharre corrects his earlier views from his book Army of None, in which he thought it possible that massed autonomous drone swarms would determine the battlefield of the future using AI, and that AI itself would give orders and replace commanders. In his new book, he rules this out for the foreseeable future, especially since the uncertainty described by Clausewitz in the course of a war, the fog of war, persists. So it seems that AI-generated military strategies and battle plans might not be implemented as easily as Kissinger suggests with his AlphaZero comparison, and the Ukraine war might then not be the first AI war in this sense, but in a different sense, namely that of intelligence superiority. In any case, in his new book „Four Battlegrounds: Power in the Age of AI“, Paul Scharre thoroughly cleans up the scenario myths he created in his previous book Army of None and looks more at the regulatory and institutional framework of AI with regard to the great-power competition between the US and China:
„In it, he retreats from the cataclysmic picture that he painted in Army of None. If Army of None were an essay in science fiction, Four Battlegrounds is a work of political economy. It addresses the concrete issues of great-power competition and the industrial strategies and regulatory systems that underpin it. The book describes the implications of digitized intelligence for military competition. Scharre analyses the regulatory environment required to harness the power of data. He plausibly claims that superiority in data, and the AI to process it, will be militarily decisive in the superpower rivalry between the United States and China. Data will afford a major intelligence advantage. For Scharre, there are four critical resources that will determine who wins this intelligence race: “Nations that lead in these four battlegrounds — data, compute, talent, and institutions [tech companies] — will have a major advantage in AI power.”
Here is a book review by Anthony King in War on the Rocks, who explains it more rationally, in more detail and less addicted to headlines than our fashionable, hip and trendy German cyberpunk Sascha Lobo, especially since in Germany anyone who can turn on a computer, "has to do something with computers", has a red mohawk and a SPIEGEL column is already perceived as a dedicated IT and AI expert. Or take such an incompetent, hollow nut as the showgirl Dorothee Baer, the digitization officer of the Merkel government, who came up with such idiotic ideas as air taxis as the mass transportation vehicles of the future (greetings from Metropolis, Blade Runner, Dubai Disneyland and the helipads on the high-rise buildings in Seoul that are still empty and unused to this day, a result of the hallucinatory visions of the then trendy futuristic RAND futurologist Hermann Kahn), and the handsome Andi Scheuer, who wanted to project digital and youthful competence in front of pubescent gamers at Gamescom in a Wonder Woman dress and with a Star Wars lightsaber, and could not even manage a simple toll system.
AI At War
Paul Scharre, Four Battlegrounds: Power in the Age of Artificial Intelligence (New York: W. W. Norton & Company, 2023).
It is widely believed that the world is on the brink of another military revolution. AI is about to transform the character of warfare, as gunpowder, tanks, aircraft, and the atomic bomb have in previous eras. Today, states are actively seeking to harness the power of AI for military advantage. China, for instance, has announced its intention to become the world leader in AI by 2030. Its “New Generation AI Plan” proclaimed that: “AI is a strategic technology that will lead the future.” Similarly, Russian President Vladimir Putin declared: “Whoever becomes the leader in this sphere will become ruler of the world.” In response to the challenge posed by China and Russia, the United States has committed to a “third offset” strategy. It will invest heavily in AI, autonomy, and robotics to sustain its advantage in defense.
In light of these dramatic developments, military commentators have become deeply interested in the question of the military application of AI. For instance, in a recent monograph, Ben Buchanan and Andrew Imrie have claimed that AI is the “new fire.” Autonomous weapons controlled by AI — not by humans — will become increasingly accurate, rapid, and lethal. They represent the future of war. Many other scholars and experts concur with them. For instance, Stuart Russell, the eminent computer scientist and AI pioneer, dedicated one of his 2020 BBC Reith Lectures to the military potential of AI. He professed the rise of slaughterbots and killer robots. He described a scenario in which a lethal quad-copter the size of a jar could be armed with an explosive device: “Anti-personnel mines could wipe out all the males in a city between 16 and 60 or all the Jewish citizens in Israel and unlike nuclear weapons, it would leave the city infrastructure intact.” Russell concluded: “There will be 8 million people wondering why you can’t give them protection against being hunted down and killed by robots.” Many other scholars, including Christian Brose, Ken Payne, John Arquilla, David Hambling, and John Antal, share Russell’s belief that with the development of second-generation AI, lethal autonomous weapons — such as killer drone swarms — may be imminent.
Military revolutions have often been less radical than initially presumed by their advocates. The revolution of military affairs of the 1990s was certainly important in opening up new operational possibilities, but it did not eliminate uncertainty. Similarly, some of the debate about lethal autonomy and AI has been hyperbolic. It has misrepresented how AI currently works, and what its potential effects on military operations might, therefore, be in any conceivable future. Although remote and autonomous systems are becoming increasingly important, there is little chance of autonomous drone swarms substituting troops on the battlefield, or supercomputers replacing human commanders. AI became a major research program in the 1950s. At that time, it operated on the basis of symbolic logic — programmers coded input for AI to process. This system was known as good old fashioned artificial intelligence. AI made some progress, but because it was based on the manipulation of assigned symbols, its utility was very limited, especially in the real world. An AI “winter,” therefore, closed in from the late 1970s and throughout the 1980s.
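The "good old fashioned AI" described above, symbolic logic over programmer-coded input, can be sketched in a few lines. The facts and rules below are invented toy examples (nothing here comes from any real system); the point is that the program can only ever derive symbols a human has already coded, which is why the approach broke down in the open-ended real world:

```python
# Minimal sketch of GOFAI-style forward chaining: hand-coded symbolic
# rules are applied to hand-coded facts until nothing new follows.
# All symbols below are invented for illustration.

RULES = [
    # (premises, conclusion): if all premises hold, assert the conclusion
    ({"enemy_radar_active", "aircraft_detected"}, "aircraft_tracked"),
    ({"aircraft_tracked", "missile_in_range"}, "engagement_possible"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new symbol can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain(
    {"enemy_radar_active", "aircraft_detected", "missile_in_range"}, RULES)
print("engagement_possible" in derived)  # True, but only for symbols we coded
```

Anything not anticipated by the programmer simply does not exist for such a system, which is the limitation the paragraph above describes.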
Since the late 1990s, second-generation AI has produced some remarkable breakthroughs on the basis of big data, massive computing power, and algorithms. There were three seminal events. On May 11, 1997, IBM’s Deep Blue beat Garry Kasparov, the world chess champion. In 2011, IBM’s Watson won Jeopardy!. Even more remarkably, in March 2016, AlphaGo beat the world champion Go player, Lee Sedol, 4-1.
Deep Blue, Watson, and AlphaGo were important waypoints on an extraordinary trajectory. Within two decades, AI had gone from disappointment and failure to unimagined triumphs. However, it is important to recognize what second-generation AI can and cannot do. It has been developed around neural networks. Machine learning programs process huge amounts of data through their networks, re-calibrating the weight that a program assigns to particular pieces of data, until, finally, it generates coherent answers. The system is probabilistic and inductive. Programs and algorithms know nothing. They are unaware of the real world and, in a human sense, unaware of the meaning of the data they process. Using algorithms, machine learning AI simply builds models of statistical probability from massively reiterated trials. In this way, second-generation AI identifies multiple correlations in the data. As long as it has enough data, probabilistic induction has become a powerful predictive tool. Yet, AI does not recognize causation or intention. Peter Thiel, a leading Silicon Valley tech entrepreneur, has articulated AI’s limitations eloquently: “Forget science-fiction fantasy, what is powerful about actually existing AI is its application to relatively mundane tasks like computer vision and data analysis.” Consequently, although machine learning is far superior to a human at limited, bounded, mathematizable tasks, it is very brittle. Utterly dependent on the data on which it has been trained, even the tiniest change in the actual environment — or the data — renders it useless.
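The probabilistic, inductive character of machine learning, and the brittleness that follows from it, can be illustrated with a deliberately tiny sketch: a decision threshold learned from synthetic data stands in for a neural network. All numbers here are invented for illustration, not taken from any real system:

```python
import random

random.seed(0)

def sample(label, n):
    # toy data: class 0 clusters around 0.0, class 1 around 1.0
    return [(random.gauss(label, 0.3), label) for _ in range(n)]

train = sample(0, 500) + sample(1, 500)

# "Training" is pure statistical induction: estimate a decision
# threshold from the data. The model has no notion of what the
# numbers mean, only of their distribution.
mean0 = sum(x for x, y in train if y == 0) / 500
mean1 = sum(x for x, y in train if y == 1) / 500
threshold = (mean0 + mean1) / 2

def accuracy(data, shift=0.0):
    # apply the frozen model to (possibly shifted) inputs
    correct = sum((x + shift > threshold) == (y == 1) for x, y in data)
    return correct / len(data)

test = sample(0, 500) + sample(1, 500)
print(round(accuracy(test), 2))              # high on the training distribution
print(round(accuracy(test, shift=-1.0), 2))  # collapses when the environment shifts
```

The "model" predicts well as long as new inputs come from the same distribution it was trained on; a shift in the input, with the model itself unchanged, reduces it to near chance. This is the brittleness the review attributes to data-based inductive systems.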
The brittleness of data-based inductive machine learning is very significant to the prospect of an AI military revolution. Proponents or opponents of AI imply that, in the near future, it will be relatively easy for autonomous drones to fly through, identify, and engage targets in urban areas, for instance. After all, autonomous drone swarms have already been demonstrated — in admittedly contrived and controlled environments. However, in reality, it will be very hard to train a drone to operate autonomously for combat in land warfare. The environment is dynamic and complex, especially in towns and cities — civilians and soldiers are intermixed. There do not seem to be any obvious data on which to train a drone swarm reliably — the situation is too fluid. Similarly, it is not easy to see how an algorithm could make command decisions. Command decisions require the interpretation of heterogeneous information, balancing political and military factors, all of which require judgement. In a recent article, Avi Goldfarb and Jon R. Lindsay have argued that data and AI are best for simple decisions with perfect data. Almost by definition, military command decisions involve complexity and uncertainty. It is notable that, while Google and Amazon are the pre-eminent data companies, their managers do not envisage a day when an algorithm will make their strategic and operational decisions for them. Data, processed rapidly with algorithms, helps their executives to understand the market to a depth and fidelity that their competitors cannot match. Information advantage has propelled them to dominance. However, machine learning has not superseded the executive function.
It is, therefore, very unlikely that lethal autonomous drones or killer robots enabled by AI will take over the battlefield in the near future. It is also improbable that commanders will be replaced by computers or supercomputers. However, this does not mean that AI, data, and machine learning are not crucial to contemporary and future military operations. They are. However, the function of AI and data is not primarily lethality — it is not the new fire, as some claim. Data — digitized information stored in cyberspace — are crucial because they provide states with a wider, deeper, and more faithful understanding of themselves and their competitors. When massive data sets are processed effectively by AI, this will allow military commanders to perceive the battlespace to a hitherto unachievable depth, speed and resolution. Data and AI are also crucial for cyber operations and informational campaigns. They have become indispensable for defense and attack. AI and data are not so much the new fire as a new form of digitized military intelligence, therefore, exploiting cyberspace as a vast new resource for information. AI is a revolutionary way of seeing “the other side of the hill.” Data and AI are a — maybe even the — critical intelligence function for contemporary warfare.
Paul Scharre, the well-known military commentator, once argued that AI would inevitably lead to lethal autonomy. In 2019, he published his best-selling book, Army of None, which plotted the rise of remote and autonomous weapon systems. There, Scharre proposed that AI was about to revolutionize warfare: “In future wars, machines may make life and death decisions.” Even if the potential of AI still enthuses him, he has now substantially changed his mind. Scharre’s new book, Four Battlegrounds, published in February 2023, represents a profound revision of his original argument. In it, he retreats from the cataclysmic picture that he painted in Army of None. If Army of None were an essay in science fiction, Four Battlegrounds is a work of political economy. It addresses the concrete issues of great-power competition and the industrial strategies and regulatory systems that underpin it. The book describes the implications of digitized intelligence for military competition. Scharre analyses the regulatory environment required to harness the power of data. He plausibly claims that superiority in data, and the AI to process it, will be militarily decisive in the superpower rivalry between the United States and China. Data will afford a major intelligence advantage. For Scharre, there are four critical resources that will determine who wins this intelligence race: “Nations that lead in these four battlegrounds — data, compute, talent, and institutions [tech companies] — will have a major advantage in AI power.” He argues that the United States and China are locked into a mortal struggle for these four resources. Both China and the United States are now fully aware that whoever gains the edge in AI will, therefore, be significantly advantaged politically, economically, and, crucially, militarily. They will know more than their adversary. They will be more efficient in the application of military force. They will dominate the information and cyber spaces. They will be more lethal.
Four Battlegrounds plots this emerging competition for data and AI between China and the United States. It lays out recent developments and assesses the relative strengths of both nations. China is still behind the United States in several areas. The United States has the leading talent, and is ahead in terms of research and technology: “China is a backwater in chip production.” However, Scharre warns against U.S. complacency. Indeed, the book is animated by the fear that the United States will fall behind in the data race. Scharre, therefore, highlights China’s advantages — and its rapid advances. With 900 million internet users already, China has far more data than the United States. Some parts of the economy, such as ride-hailing, are far more digitized than in the United States. WeChat, for instance, has no American parallel. Many Chinese apps are superior to U.S. ones. In addition, the Chinese state is also uninhibited by legal constraints or by civil concerns about privacy. The Chinese Communist Party actively monitors the digital profiles of its citizens — it harvests their data and logs their activities. In cities, it employs facial recognition technology to identify individuals.
State control has benefited Chinese tech companies: “The CCP’s massive investment in intelligence surveillance and social control boosted Chinese AI companies and tied them close to government.” The synergies between government and tech in China are close. China also has significant regulatory advantages over the United States. The Chinese Communist Party has underwritten tech giants like Baidu and Alibaba: “Chinese investment in technology is paying dividends.” Scharre concludes: “China is not just forging a new model of digital authoritarianism but is actively exporting it.”
How will the U.S. government oppose China’s bid for data and AI dominance? Here Four Battlegrounds is very interesting — and it contrasts markedly with Scharre’s speculations in Army of None. In order for the U.S. government to be able to harness the military potential of data, there needs to be a major regulatory change. The armed forces need to form deep partnerships with the tech sector. They “will have to look beyond traditional defense contractors and engage with start-ups.” This is not easy. Scharre documents the challenging regulatory environment in the United States in comparison with China: “in the U.S., the big tech corporations Amazon, Apple, Meta (formerly Facebook) and Google are independent centers of power, often at odds with government on specific issues.” Indeed, Scharre discusses the notorious protest at Google in 2017, when employees refused to work on the Department of Defense’s Project Maven contract. Skepticism about military applications of AI remains in some parts of the U.S. tech sector.
American tech companies may have been reluctant to work with the armed forces but the Department of Defense has not helped. It has unwittingly obstructed military partnerships with the tech sector. The Department of Defense has always had a close relationship with the defense industry. For instance, in 1961, President Dwight D. Eisenhower warned about the threat that the “military-industrial complex” posed to democracy. The Department of Defense has developed an acquisition and contracting process that has been primarily designed for the procurement of exquisite platforms: tanks, ships, and aircraft. Lockheed Martin and Northrop Grumman have become adept at delivering weapon systems to discrete Department of Defense specifications. Tech companies do not work like this. As one of Scharre’s interviewees noted: “You don’t buy AI like you buy ammunition.” Tech companies are not selling a specific capability, like a gun. They are selling data, software, computing power — ultimately, they are selling expertise. Algorithms and programs are best developed iteratively in relation to a very specific problem. The full potential of some software or algorithms to a military task may not be immediately obvious even to a tech company. Operating in competitive markets, tech companies, therefore, prefer a more flexible, open-ended contractual system with the Department of Defense — they need security and quick financial returns. Tech companies are looking for collaborative engagement, rather than just a contract to build a platform.
The U.S. military, and especially the Department of Defense, has not always found this novel approach to contracting easy. In the past, the bureaucracy was too sluggish to respond to their needs — the acquisition process took seven to 10 years. However, although many tensions exist and the system is far from perfect, Scharre records a transforming regulatory environment. He describes the rise of a new military-tech complex in the United States. Project Maven, of course, exemplifies the process. In 2017, Bob Work issued a now famous memo announcing the “Algorithmic Warfare Cross Functional Team” — Project Maven. Since the emergence of surveillance drones and military satellites during the Global War on Terror, the U.S. military had been inundated with full-motion video feeds. That footage was invaluable. For instance, using Gorgon Stare, a 24-hour aerial surveillance system, the U.S. Air Force had been able to plot back from a car bomb explosion in Kabul in 2019, which killed 126 civilians, to find the location of safe houses used to execute the attack. Yet, the process was very slow for humans. Consequently, the Air Force started to experiment with computer vision algorithms to sift through their full-motion videos. Project Maven sought to scale up the Air Force’s successes. It required a new contracting environment, though. Instead of a long acquisition process, Work introduced 90-day sprints. Companies had three months to show their utility. If they made progress, their contracts were extended — if not, they were out. At the same time, Work de-classified drone footage in order that Project Maven could train its algorithms. By July 2017, Project Maven had an initial operating system, able to detect 38 different classes of object. By the end of the year, it was deployed on operations against ISIS: “the tool was relatively simple, and identified and tracked people, vehicles, and other objects in video from ScanEagle drones used by special operators.”
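The kind of computer-vision triage that Project Maven scaled up can be sketched abstractly. The detector below is a hypothetical stub standing in for a trained vision model, and the frames are invented toy data; the point is simply that software forwards only relevant frames to human analysts instead of making them watch everything:

```python
# Abstract sketch of full-motion-video triage. The detector is a
# hypothetical stub (a real system would run a trained neural network
# here), and the frame records are invented for illustration.

def detect_objects(frame):
    """Stub detector: returns the object labels present in a frame."""
    return frame.get("objects", [])

CLASSES_OF_INTEREST = {"person", "vehicle"}

def triage(frames):
    """Forward only frames containing relevant detections to analysts."""
    return [f for f in frames
            if CLASSES_OF_INTEREST & set(detect_objects(f))]

video = [
    {"t": 0, "objects": []},
    {"t": 1, "objects": ["vehicle"]},
    {"t": 2, "objects": ["tree"]},
    {"t": 3, "objects": ["person", "vehicle"]},
]
print([f["t"] for f in triage(video)])  # [1, 3]: analysts see a fraction of the footage
```

The value is workload reduction, not autonomy: a human still looks at every flagged frame, which is consistent with the review's argument that AI here functions as intelligence, not as the trigger.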
Since Project Maven, the Department of Defense has introduced some other initiatives to catalyze military-tech partnerships. The Defense Innovation Unit has accelerated relations between the department and companies in Silicon Valley, offering contracts in 26 days rather than in months or years. In its first five years, the Defense Innovation Unit issued contracts to 120 “non-traditional” companies. Under Lt. Gen. Jack Shanahan, the Joint Artificial Intelligence Center has played an important role in advancing the partnership between the armed forces and tech companies for humanitarian assistance and disaster relief operations, developing software to map wildfires and post-disaster assessments — whether these examples in Scharre’s text imply more military applications is unclear. After early difficulties, the Joint Enterprise Defense Infrastructure, created by Gen. James Mattis when he was secretary of defense, has reformed the acquisition system for tech. For instance, in 2021, the Department of Defense helped Anduril develop an AI-based counter-drone system with nearly $100 million.
Four Battlegrounds is an excellent and informative addition to the current literature on AI and warfare. It complements the recently published works of Lindsay, Goldfarb, Benjamin Jensen, Christopher Whyte, and Scott Cuomo. The central message of this literature is clear. Data and AI are and will be very important for the armed forces. However, data and AI will not radically transform combat itself — humans will still overwhelmingly operate the lethal weapon systems, including remote ones, which kill people, as the savage war in Ukraine shows. The situation in combat is complex and confusing. Human judgement, skill, and cunning are required to employ weapons to their greatest effect there. However, any military force that wants to prevail on the battlefields of the future will need to harness the potential of big data — it will have to master digitized information flooding through the battlespace. Humans simply do not have the capacity to do this. Headquarters will, therefore, need algorithms and software to process that data. They will need close partnerships with tech companies to create these systems and data scientists, engineers, and programmers in operational command posts themselves to make them work. If the armed forces are able to do this, data will allow them to see across the depth and breadth of the battlespace. It will not solve the problems of military operations — fog and friction will persist. However, empowered by data, commanders might be able to employ their forces more effectively and efficiently. Data will enhance the lethality of the armed forces and their human combat teams. The Russo-Ukrainian War already gives a pre-emptive insight into the advantages that data-centric military operations afford over an opponent still operating in analogue. Scharre’s book is a call to ensure that the fate of the Russian army in Ukraine does not befall the United States when its next war comes.
Anthony King is the Chair of War Studies at the University of Warwick. His latest book, Urban Warfare in the Twenty-First Century, was published by Polity Press in July 2021. He currently holds a Leverhulme Major Research Fellowship and is researching into AI and urban operations. He is planning to write a book on this topic in 2024.