Power in the Age of AI

With all the recent talk of AI, I picked up "Four Battlegrounds: Power in the Age of Artificial Intelligence" (2023) by Paul Scharre to see where the new developments might fit into courses I have recently covered (political economy, ethics, evaluation). A complaint to start: many figures in my copy of this book were blank, grey, useless boxes. Clearly a printing error on the part of the publisher, but frustrating nonetheless. The book has eight parts. Part 1 is a good summary of the key issues (data, computational power, talent, institutions), and is probably the most useful high-level part of the book. Part 2 begins to cover ethics, introduces a number of stories, and considers the role of corporate engagement. Parts 3 and 5 are largely anti-China (that is not to defend China; it is only to point out that the negative examples this ex-US Army Ranger chooses are Chinese ones; when discussing consent, privacy, and monitoring we learn all about Chinese evils, but there is no mention of the NSA, Snowden, or others – this consistent inclusion/exclusion bias makes the book largely pro-American and anti-Chinese). Part 4 covers the unhealthy information environment we all live within, but is largely outdated (expectedly, books are slow to publish). Part 6 returns to US military projects (a common theme of the book, unsurprising given the author's background). Part 7 outlines some of the problems with AI: bias, risks, limitations, vulnerabilities. Part 8 concludes with perspectives on the future of war. A few notes:

"AI has many constructive applications. AI will save lives and increase efficiency and productivity. It is also being used as a weapon of repression and to gain military advantage. This book is about the darker side of AI." (p. 4)

"In fact, machine learning systems are often so narrowly constrained by the datasets on which they've been trained that their performance can often drop if they are used for tasks that are not well-represented in the training data. For example, a facial recognition system may perform poorly on people of races or ethnicities that are not adequately represented in its training data. A machine learning algorithm used for predictive maintenance on one aircraft won't work on another aircraft—it would need to be retrained on data for the new aircraft. It may not even be effective at predicting maintenance needs on the same aircraft in a new environment, since maintenance needs may differ based on environmental conditions, such as in a desert where sand can clog parts or in a maritime environment where there is saltwater corrosion." (p. 21)

"Facial recognition systems are being merged with other tools for big data analysis in the Ministry of Public Security's "Police Cloud" system. Police cloud computing data centers, which are being implemented in numerous cities and provinces across China, include not only criminal records, facial recognition, and other biometric data, but also addresses, religious affiliations, medical records, birth control records, travel bookings, online purchases, package deliveries, and social media comments. These databases are not merely repositories of information but are intended to automatically fuse and analyze data for police. They could be used to monitor and track individuals of interest and also connect them to associates." (p. 89) 
