FORREST E. MORGAN, BENJAMIN BOUDREAUX, ANDREW J. LOHN, MARK ASHBY,  
CHRISTIAN CURRIDEN, KELLY KLIMA, DEREK GROSSMAN
Military Applications of Artificial Intelligence
Ethical Concerns in an Uncertain World
For more information on this publication, visit www.rand.org/t/RR3139-1
Library of Congress Cataloging-in-Publication Data is available for this publication.
ISBN: 978-1-9774-0492-3
Published by the RAND Corporation, Santa Monica, Calif.
© Copyright 2020 RAND Corporation
R® is a registered trademark.
Cover: Drones: boscorelli/stock.adobe.com 
Data: Anatoly Stojko/stock.adobe.com
Cover design: Rick Penn-Kraus
Limited Print and Electronic Distribution Rights
This document and trademark(s) contained herein are protected by law. This representation of RAND intellectual property is provided for noncommercial use only. Unauthorized posting of this publication online is prohibited. Permission is given to duplicate this document for personal use only, as long as it is unaltered and complete. Permission is required from RAND to reproduce, or reuse in another form, any of its research documents for commercial use. For information on reprint and linking permissions, please visit www.rand.org/pubs/permissions.
The RAND Corporation is a research organization that develops solutions to public policy challenges to help make communities throughout the world safer and more secure, healthier and more prosperous. RAND is nonprofit, nonpartisan, and committed to the public interest.
RAND’s publications do not necessarily reflect the opinions of its research clients and sponsors.
Support RAND
Make a tax-deductible charitable contribution at  
www.rand.org/giving/contribute
www.rand.org
Preface 
The research in this report was conducted over the course of one year, from October 2017 to 
September 2018. The completed report was originally delivered to the sponsor in October 2018. 
It was approved for public distribution in March 2020. Since the research was completed and 
delivered, new organizations have been created and important steps have been taken to address 
many of the topics the report describes. As a result, this report does not reflect the state of the field at the time of publication. Although expert and public opinions may have shifted, we believe the report documents a useful snapshot of perspectives at the time the research was conducted.
The field of artificial intelligence (AI) has advanced at an ever-increasing pace over the last 
two decades. Systems incorporating intelligent technologies have touched many aspects of the 
lives of citizens in the United States and other developed countries. It should be no wonder then 
that AI also offers great promise for national defense. A growing number of robotic vehicles and 
autonomous weapons can operate in areas too hazardous for human combatants. Intelligent 
defensive systems are increasingly able to detect, analyze, and respond to attacks faster and more 
effectively than human operators can. And big data analysis and decision support systems offer 
the promise of digesting volumes of information that no group of human analysts, however 
large, could consume and helping military decisionmakers choose better courses of action more 
quickly. 
But thoughtful people have expressed serious reservations about the legal and ethical 
implications of using AI in war or even to enhance security in peacetime. Anxieties about the 
prospects of “killer robots” run amok and facial recognition systems mistakenly labeling 
innocent citizens as criminals or terrorists are but a few of the concerns that are fueling national 
and international debate about these systems.  
These issues raise serious questions about the ethical implications of military applications 
of AI and the extent to which U.S. leaders should regulate their development or restrain their 
employment. But equally serious questions revolve around whether potential adversaries would 
be willing to impose comparable guidelines and restraints and, if not, whether the United States’ 
self-restraint might put it at a disadvantage in future conflicts.  
With these concerns in mind, the Director of Intelligence, Surveillance, and Reconnaissance 
Resources, Headquarters, United States Air Force (USAF), commissioned a fiscal year 2018
Project AIR FORCE study to help the Air Force understand the ethical implications of military 
applications of AI and how those capabilities might change the character of war. This report, 
which is one of the products of that study, seeks to answer the following questions: (1) What 
significant military applications of AI are currently available or expected to emerge in the next 
10–15 years? (2) What legal, moral, or ethical issues would developing or employing such 
systems raise? (3) What significant military applications of AI are China and Russia currently 
pursuing? (4) Does China, Russia, or the United States have exploitable vulnerabilities due to 
ethical or cultural limits on the development or employment of military applications of AI? 
(5) How can USAF maximize the benefits potentially available from military applications of 
AI while mitigating the risks they entail? 
The research described in this report was conducted within the Strategy and Doctrine 
Program of RAND Project AIR FORCE. 
RAND Project AIR FORCE 
RAND Project AIR FORCE (PAF), a division of the RAND Corporation, is USAF’s federally 
funded research and development (R&D) center for studies and analyses. PAF provides the Air 
Force with independent analyses of policy alternatives affecting the development, employment, 
combat readiness, and support of current and future air, space, and cyber forces. Research is 
conducted in four programs: Strategy and Doctrine; Force Modernization and Employment; 
Manpower, Personnel, and Training; and Resource Management. The research reported here was 
prepared under contract FA7014-16-D-1000. 
Additional information about PAF is available on our website:  
www.rand.org/paf 
This report documents work originally shared with USAF on September 27, 2018. The draft 
report, issued on October 10, 2018, was reviewed by formal peer reviewers and USAF subject 
matter experts (SMEs). 
Table of Contents 
Preface ............................................................................................................................................ iii	
Figures........................................................................................................................................... vii	
Tables .............................................................................................................................................. x	
Summary ........................................................................................................................................ xi	
Acknowledgments ........................................................................................................................ xix	
Abbreviations ................................................................................................................................ xx	
1. Introduction ................................................................................................................................. 1	
Background ................................................................................................................................. 2	
Purpose and Scope ...................................................................................................................... 5	
Methodology ............................................................................................................................... 6	
Organization ................................................................................................................................ 7	
2. The Military Applications of Artificial Intelligence ................................................................... 8	
What Is Artificial Intelligence? ................................................................................................... 8	
Autonomy and Automation ........................................................................................................ 9	
Approaches to Control and Oversight ...................................................................................... 11	
Recent Progress in Artificial Intelligence That Is Driving Military Applications .................... 13	
Benefits of Artificial Intelligence in Warfare ........................................................................... 15	
Risks of Artificial Intelligence in Warfare ............................................................................... 21	
The Need for a Closer Examination of the Risks of Military Artificial Intelligence ............... 22	
3. Risks of Military Artificial Intelligence: Ethical, Operational, and Strategic .......................... 24	
Stakeholder Concerns About Military Artificial Intelligence ................................................... 24	
Taxonomy of Artificial Intelligence Risks ............................................................................... 29	
Perspectives on Mitigating Risks of Military Artificial Intelligence ........................................ 40	
Conclusion ................................................................................................................................ 48	
4. Military Artificial Intelligence in the United States ................................................................. 49	
Brief History of Military Artificial Intelligence Development in the United States ................ 49	
Mistakes and Near Misses Lead to Caution .............................................................................. 52	
Summary of Current Capabilities and Future Projections ........................................................ 53	
U.S. Policies to Mitigate Risks ................................................................................................. 57	
Conclusion ................................................................................................................................ 59	
5. Military Artificial Intelligence in China ................................................................................... 60
Current Artificial Intelligence Systems and Plans for the Future ............................................. 60
The People's Liberation Army's Unique Advantages and Disadvantages in Artificial
Intelligence Development ................................................................................................... 70
Chinese Ethics and Artificial Intelligence ................................................................................ 72
Conclusion ................................................................................................................................ 81
6. Military Artificial Intelligence in Russia .................................................................................. 83	
Current Capabilities and Future Projections ............................................................................. 83	
Translating the Vision to Reality .............................................................................................. 90	
Ethical Considerations .............................................................................................................. 94	
Conclusion ................................................................................................................................ 98	
7. Assessment of U.S. Public Attitudes Regarding Military Artificial Intelligence ................... 100	
Importance of Public Perception ............................................................................................. 100	
Prior Surveys ........................................................................................................................... 101	
Results of the Survey of U.S. Public Attitudes ....................................................................... 102	
Conclusion .............................................................................................................................. 116	
8. Findings and Recommendations ............................................................................................. 118	
Findings .................................................................................................................................. 118	
Recommendations ................................................................................................................... 125	
Final Thoughts ........................................................................................................................ 127	
Appendix A. Expert Interviews: Methods, Data, and Analysis .................................................. 129	
Appendix B. Public Attitudes Survey: Methods, Data, and Analysis ........................................ 142	
References ................................................................................................................................... 169	
Figures

Figure S.1. Taxonomy of Artificial Intelligence Risk ................................................................. xiii
Figure 2.1. Taxonomy of Artificial Intelligence Technologies .................................................... 11
Figure 2.2. Potential Benefits of Military Applications of Artificial Intelligence Identified in
Structured Interviews ............................................................................................................ 16
Figure 2.3. Risks of Military Applications of Artificial Intelligence Identified in
Structured Interviews ............................................................................................................ 21
Figure 3.1. Taxonomy of Artificial Intelligence Risk ................................................................... 30
Figure 3.2. United Kingdom Framework for Considering Human Control Throughout the
Life Cycle of a Weapon System ........................................................................................... 44
Figure 7.1. Public Concerns About the Lack of Accountability of Autonomous Weapons ....... 103
Figure 7.2. Public Sentiment Regarding Autonomous Weapons and Human Dignity ............... 104
Figure 7.3. Public Sentiment on Human Emotion and War ....................................................... 104
Figure 7.4. Public Opinions About Autonomous Weapons and the Likelihood of War ............ 105
Figure 7.5. Public Opinion on the Need for an International Ban on Autonomous Weapons .... 105
Figure 7.6. Public Belief That Autonomous Weapons Will Be More Precise Than Humans .... 106
Figure 7.7. Public Support for Continued U.S. Investment in Military Artificial Intelligence .... 107
Figure 7.8. Autonomous Missiles in Offensive Operations Without Human Authorization ...... 108
Figure 7.9. Autonomous Missiles in Offensive Operations With Human Authorization ........... 108
Figure 7.10. Defensive Use of Autonomous Drones Against Enemy Drones ............................ 109
Figure 7.11. Preemptive Use of Autonomous Drones Against Enemy Drones .......................... 109
Figure 7.12. Use of Autonomous Drones to Attack Enemy Combatants ................................... 110
Figure 7.13. Use of Autonomous Weapons to Avoid Defeat When the Enemy Is Not Using
Autonomous Weapons ........................................................................................................ 111
Figure 7.14. Use of Autonomous Weapons to Avoid Defeat When the Enemy Is Using
Autonomous Weapons ........................................................................................................ 112
Figure 7.15. Use of Military Artificial Intelligence to Identify Enemy Targets ......................... 113
Figure 7.16. Use of Military Artificial Intelligence to Advise Commanders on How to
Attack Enemy Targets ......................................................................................................... 113
Figure 7.17. Use of Biometric Analysis at Military Checkpoints to Identify
Enemy Combatants ............................................................................................................. 114
Figure 7.18. Use of Biometric Analysis and Robotics to Identify and Subdue
Enemy Combatants ............................................................................................................. 115
Figure 7.19. Concerns About Exploiting Vulnerabilities in Commercial Software ................... 115
Figure 7.20. Use of Artificial Intelligence to Generate Fake Videos to Show Foreign
Leaders in Compromising Situations .................................................................................. 116
Figure A.1. Time Ranges Considered by Experts Interviewed .................................................. 137
Figure A.2. Number of Times a Benefit Is Mentioned (N = 24) ................................................ 138
Figure A.3. Number of Times a Risk Is Mentioned (N = 24) ..................................................... 140
Figure B.1. War Is Always Wrong ............................................................................................. 143
Figure B.2. Autonomous Weapons Are More Likely to Make Mistakes Than Humans ............ 143
Figure B.3. Removing Human Emotions From Decisions in War Is Beneficial ........................ 144
Figure B.4. Autonomous Weapons Are Ethically Prohibited Because They Violate the
Dignity of Human Life ........................................................................................................ 144
Figure B.5. Autonomous Weapons Are Ethically Prohibited Because They Cannot Be Held
Accountable or Punished for Wrongful Actions ................................................................. 145
Figure B.6. The U.S. Military's Testing Process Will Ensure That Autonomous Weapons Are
Safe to Use .......................................................................................................................... 145
Figure B.7. The United States Should Work With Other Countries to Ban
Autonomous Weapons ........................................................................................................ 146
Figure B.8. The Development of Autonomous Weapons Will Make the Occurrence of Wars
More Likely ........................................................................................................................ 146
Figure B.9. Autonomous Weapons Will Be More Accurate and Precise Than Humans ........... 147
Figure B.10. It Is Ethically Permissible for the U.S. Military to Continue to Invest in
Artificial Intelligence Technology for Military Use ........................................................... 147
Figure B.11. It Is Ethically Permissible for the U.S. Military to Use Missiles That
Autonomously Search for and Destroy Enemy Targets in War Zones Only If the
Missiles Have Human Authorization .................................................................................. 148
Figure B.12. It Is Ethically Permissible for the U.S. Military to Use Missiles That
Autonomously Search for and Destroy Enemy Targets in War Zones Without
Human Authorization .......................................................................................................... 148
Figure B.13. It Is Ethically Permissible for the U.S. Military to Use Missiles That
Autonomously Search for and Destroy Enemy Targets in Close Proximity to Civilians
Without Human Authorization ........................................................................................... 149
Figure B.14. It Is Ethically Permissible for the U.S. Military to Use a Robot With Facial
Recognition or Other Biometric Analysis at a Military Checkpoint to Identify and
Report Enemy Combatants ................................................................................................. 149
Figure B.15. It Is Ethically Permissible for the U.S. Military to Use a Robot With Facial
Recognition or Other Biometric Analysis at a U.S. Military Checkpoint to Identify
and Subdue Enemy Combatants ......................................................................................... 150
Figure B.16. It Is Ethically Permissible for the U.S. Military to Use a Swarm of Armed
Autonomous Drones to Protect U.S. Soldiers from an Enemy Autonomous Drone
Swarm That Is Attacking .................................................................................................... 150
Figure B.17. It Is Ethically Permissible for the U.S. Military to Use a Swarm of Armed
Autonomous Drones to Preemptively Destroy an Enemy Autonomous Drone Swarm ..... 151