Artificial Intelligence, China, Russia, and the Global Order

Air University Press and Air University Library have relaunched the Fairchild Series, an academic series that publishes cutting-edge research.

The series is named after General Muir Stephen Fairchild, who served as the first leader of the Air University, located at the Maxwell Air Force Base in Alabama.

This timely volume examines how advances in artificial intelligence (AI) will enable panoptic surveillance and directly contribute to highly authoritarian forms of political control.

This edited volume aims to prepare Anglo-American security practitioners for the impact of AI-related technologies on a country’s domestic political system.

This book contains 27 chapters, divided into six sections, with 24 expert contributors drawing their insights from varied professional backgrounds.

In particular, the book traces the differential impact of AI technology on competing domestic regime types.

Chapters in the book describe how China will seek to further increase its authoritarian control by utilizing AI, while making its citizens prosperous and shielding them from external knowledge influences.

The Chinese model of digital authoritarianism or digital social and political control is likely to emerge as a major and direct rival to free, open, and democratic society — a model championed by the Anglo-American alliance.

The Russian model offers a hybrid approach that relies on a variety of manipulative digital tools to destabilize challenger regimes while maintaining tight state control over critical resources and quashing political rivals.

Part one of the book, with four framing chapters authored by the editor, Nicholas D. Wright, focuses on the impact of AI technologies on domestic politics and their far-reaching consequences for the evolving global order.

The remaining five sections of the book are filled with contributions from 23 authors, who are some of the world’s leading experts in the field of AI and Internet technologies.

Part two of the book, with five chapters, focuses on how the Chinese and Russian models of digital authoritarianism are shaping domestic political regimes with tools of surveillance and monitoring, big-data-fueled, AI-led governance, facial recognition, and behavioral pattern recognition.

Collectively these technologies are leading to intensifying political control of citizens. The third section of the book is on the export and emulation of Chinese and Russian models of digital authoritarianism to other parts of the world.

Part four contains four chapters on how AI technologies influence China’s domestic and foreign policy decision making.

The fifth section, with five chapters, focuses on the various military dimensions of AI and its application to the development of modern weapon systems, such as hypersonic glide weapons, and to the enhancement of Chinese command authority through artificial intelligence.

Probably the most provocative section of the book is the final part, which focuses on Artistic Perspectives and the Humanities.

This section draws on science fiction writings, movies, and art to present various telling scenarios of the future.

This set of five chapters offers a vivid and frightening rendering of AI-driven technological futures: precognition to prevent crime, drones that monitor public spaces and summarily execute offenders, a color-coded social credit ranking system that categorizes people by obedience to authority, and AI applications that go beyond facial recognition to diagnose depression and mood conditions in individuals.

Drawing linkages between AI technologies and terrifying dystopian futures, these chapters issue a clarion call to policy makers to develop robust rules and regulations for democratic governance of the digital world, without which corporate and authoritarian control will become the norm.

For the purposes of this book, AI is defined as a “constellation of new technologies” that combines big data, machine learning, and digital things (e.g., the “Internet of Things”).

Applying AI implies the analysis of data in which inferences from models are used to “predict and anticipate possible future events” (p. 3).

Critically, “AI programs do not simply analyze data in the way they were originally programmed”; instead, they respond “intelligently to new data and adapt their outputs accordingly” (p. 3).

Ultimately AI is understood as giving computers new behaviors and knowledge “which would be thought intelligent in human beings” (p. 3).

The authors argue that AI’s greatest strengths are primarily perceptual: the ability to process images, speech, and other patterns of behavior and to choose bounded actions that guide decision making.

Google’s DeepMind AI is one such example: it draws data from Google’s data centers, accurately predicts when the data load will increase or decrease, and adjusts the centers’ cooling systems accordingly (p. 7).

This book raises legitimate concerns about the singularity, which represents the fear that “exponentially accelerating technological progress will create an AI that exceeds human intelligence and escapes our control” (p. 18).

AI systems will self-learn from data without any human input or management. The precise concern is that AI will become super-intelligent, which may “then deliberately or inadvertently destroy humanity” or usher changes that are outside the control of humans (p. 18).

The terror of the singularity is well captured in the five excellent chapters of the concluding section, which draw on sources from reality, fiction, and art to depict an Orwellian dystopia in which conscious human beings either fight back, as depicted in the film series The Matrix and The Terminator, or become mindless tools of self-thinking, self-regenerating machines (p. 194).

The middle sections of the book, which focus on the Chinese model of digital authoritarianism, the hybrid Russian model of authoritarianism, and the American model of digital openness (albeit one dependent on corporate control), offer predictions of near-term AI usage.

The Chinese, Russian, and American models assume that governments could, should, and will be able to control AI and perhaps deploy it toward social control and military applications.

“Given the rate of progress, the singularity may occur at some point this century” (p. 18).

The lead author, Wright, adds that although the singularity is “clearly momentous,” nobody knows “when, if or how a possible singularity will occur,” and “limits clearly exist on what can sensibly be said or planned for now” (p. 18).

The authors hope that humans will be able to master and control AI in the same way that we have been (so far) successful in controlling the use and spread of nuclear weapons, albeit imperfectly.

The key assertion here is that much like nuclear weapons, singularity issues related to AI “will require managing within the international order as best we can, although our best will inevitably be grossly imperfect” (p. 18).

Our solutions are likely to be incomplete, inadequate, imperfect, and potentially counterproductive because “singularity potentially represents a qualitatively new challenge for humanity that we need to think through and discuss internationally” (p. 18). This is a serious and major claim of the book that readers should take note of.

At a more immediate level, the contributors to this important volume proffer three key recommendations: (1) the United States must pursue robust policies to stay ahead of the digital curve while preventing the emergence of a military-industrial complex managed by an AI corporate oligopoly and a surveillance state; (2) the United States must build a new global order of norms and institutions to persuade the world that the American model of free and open digital democracy offers an attractive and viable alternative to the Chinese and Russian models of digital authoritarianism; and (3) the United States should fight back against digital authoritarianism and hybridism so that it manages the risks associated with a multifaceted interstate AI competition.

(The author is a professor of security studies at the DOD’s Daniel K. Inouye Center for Asia-Pacific Studies in Honolulu, Hawaii)
(The Air Force Journal of Indo-Pacific Affairs (JIPA) of the United States Air Force and Khabarhub, Nepal’s popular news portal, have agreed on a sole partnership to disseminate JIPA research-based articles from Nepal. This article appears courtesy of the Journal of Indo-Pacific Affairs and may be found in its original form here: https://www.airuniversity.af.edu/JIPA/)