
Bruce Swett, PhD

Chief Artificial Intelligence Architect & NG Fellow
Northrop Grumman Corporation

Bruce Swett is the Chief Artificial Intelligence Architect within the Mission Systems sector of Northrop Grumman, a leading global provider of security systems and solutions. In this role, he is responsible for the design and implementation of integrated cloud computing and artificial intelligence (AI) capabilities across the enterprise, dramatically accelerating the transition of innovations from Northrop Grumman’s commercial and academic partners into fieldable systems. Swett serves as a subject matter expert and consultant, both nationally and internationally, in the areas of AI, brain-computer interfaces, and robotics. He recently served on the Pontifical Academy of Sciences, advising Pope Francis on AI and robotic technologies, and he has created intellectual property and patent applications on seven topics related to neurally inspired AI. Swett completed his PhD in Neuroscience and Cognitive Sciences at the University of Maryland, College Park, and completed his postdoctoral studies at the National Institute on Deafness and Other Communication Disorders at the National Institutes of Health. His experimental and computational research focused on using high-performance computing to understand how the brain learns and automates sequences, a topic that applies to novel forms of AI.

Northrop Grumman solves the toughest problems in space, aeronautics, defense and cyberspace to meet the ever-evolving needs of our customers worldwide. Our 90,000 employees define possible every day using science, technology and engineering to create and deliver advanced systems, products and services.

ABSTRACT

Responsible Artificial Intelligence (RAI): An Approach to Policy, Requirements, & Contracting

As the Department of Defense (DoD) seeks to gain new capabilities through the rapid deployment of Artificial Intelligence (AI), the responsible development and use of AI (Responsible AI, or RAI) has become increasingly important. RAI is intended to ensure that DoD uses of AI comply with democratic values, the Law of Armed Conflict, Rules of Engagement, and U.S. law and policy. The U.S. Deputy Secretary of Defense recently highlighted the importance of RAI for the DoD and assigned implementation of the DoD’s five Ethical Principles for AI to the Joint Artificial Intelligence Center (JAIC). This presentation starts from the known AI challenges and vulnerabilities in order to define the AI risk landscape, reviewing issues of data security and bias, vulnerability to adversarial attack, AI model corruption, automated AI model testing and characterization, operational effectiveness testing, cyber security, and explainability. From this risk analysis, we work backward to identify the tests, processes, decisions, and auditable information needed to give the DoD justified confidence in AI models and systems. This data-driven foundation enables an examination of the types of tests, information gathering, stakeholder gates, and evidence that policy must require in order to mitigate AI vulnerabilities. The AI risk landscape is then used to align the proposed DoD policies with specific AI procurement contracting requirements language. The levels of AI governance, auditing, and oversight, from both the government (DoD) and defense-contractor perspectives, are also presented. An important element of this analysis is connecting the technical test results that address elements of the AI risk landscape to decision processes involving non-technical AI stakeholders performing AI governance functions.
Finally, an overview of the contracting, sustainment, and Intellectual Property (IP) implications for the procurement of AI systems is provided, and a framework for contracting involving AI systems and sub-systems is proposed.