Are There Limits to Artificial Intelligence? What Challenges Does It Introduce?

The Department of Defense recently released a Strategic Multilayer Assessment on great power competition in AI


By Ross Rustici - Senior Warrior Maven Writer

Ross previously served as Technical Lead - DoD, East Asia Cyber Lead - DoD, and Intrusion Analyst - DoD.

AI: The False Prophet

The Department of Defense recently released a Strategic Multilayer Assessment on great power competition in Artificial Intelligence. The assessment compiles viewpoints from the United States and the United Kingdom, focusing on the divergence in systems, resources, and implementations of Artificial Intelligence across the US, Russia, and China. The primary question it asks is: who is winning the AI race, and what are the implications for the Western Liberal Order? That is a fascinating question, and one that deserves serious consideration, but it is the wrong one for the military to be concerned with. Instead, we should focus on the ways adversaries adopt AI, in order to understand the new and unique set of asymmetric vulnerabilities being created as a result.

One essay by Martin Libicki in this volume hints at, but does not explore in enough detail, the far more pertinent question: how does AI reduce control, and thus increase vulnerability, for those who employ it? Currently, and for the foreseeable future, AI is essentially a series of rapid calculations distilled into a pair of binary questions: does this input match previously known data, and if so, is it classified as true or false? The lack of control over the decision process, combined with its “black box” nature, makes AI remarkably easy to manipulate. Fundamentally, statistics do not handle edge cases well. As a result, current iterations of AI are prone to error when dealing with data that falls outside the expected input.
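
To make the edge-case problem concrete, the sketch below is a minimal, hypothetical illustration, using synthetic data and the scikit-learn library rather than any fielded system: a simple binary classifier is trained on two tight clusters and then asked about an input far outside anything it has seen. It still returns a hard yes-or-no answer, often with near-total confidence, because nothing in the underlying math distinguishes "unfamiliar" from "certain."

```python
# Minimal sketch with synthetic data: a classifier trained on two
# well-separated clusters still gives a confident binary answer for an
# input far outside anything it was trained on.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
class_a = rng.normal([0.0, 0.0], 0.5, size=(100, 2))   # "expected" data, class 0
class_b = rng.normal([3.0, 3.0], 0.5, size=(100, 2))   # "expected" data, class 1
X = np.vstack([class_a, class_b])
y = np.array([0] * 100 + [1] * 100)

model = LogisticRegression().fit(X, y)

edge_case = np.array([[50.0, -40.0]])   # nothing like the training data
print(model.predict(edge_case))         # still a hard 0-or-1 answer
print(model.predict_proba(edge_case))   # typically near-certain, despite knowing nothing about this region
```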

In practice, we have already seen how algorithmic approaches to decision making can be easily and dramatically influenced by malign actors. In April 2013, the Associated Press Twitter account was hacked and used to post a tweet claiming that the White House had been attacked and President Obama injured. That single Twitter message was picked up by Wall Street trading algorithms and triggered a flash crash of roughly 143 points within minutes. This type of manipulation is crude but highly effective. If an adversary understands the data types an AI is trained on and can process, slipping in false information to reduce its efficacy or produce an unexpected or negative result is fairly easy.
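
A toy illustration of that dynamic, using an entirely made-up keyword list rather than any real trading system, shows how little it takes: a signal that reacts to headline text can be flipped by a single forged message, because the logic has no notion of source credibility or verification.

```python
# Toy, hypothetical sketch of a headline-driven signal: one forged
# message is enough to flip the output, since nothing in the logic
# weighs the credibility of the source.
NEGATIVE_TERMS = {"explosion", "explosions", "attack", "attacked", "injured", "crash"}

def headline_signal(headline: str) -> str:
    """Return a crude trading action based only on the words in a headline."""
    words = set(headline.lower().split())
    return "SELL" if words & NEGATIVE_TERMS else "HOLD"

real_headline = "Markets steady ahead of earnings season"
fake_headline = "Breaking: explosions at the White House, president injured"

print(headline_signal(real_headline))   # HOLD
print(headline_signal(fake_headline))   # SELL -- one fabricated input moves the system
```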

The main advantage of AI is its ability to process more inputs, faster, than a human can. In a complex, rapidly changing environment, that seems like a significant advantage. However, once an adversary understands which weaknesses the AI is designed to compensate for, it becomes a simple game of feeding bad data into the system to exacerbate and compound the weaknesses inherent in its decision-making processes. This allows for more subtle and effective manipulation: the manipulated data is more likely not merely to break the system, but to move the force relying on it in a direction the attacker desires.
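
As a rough sketch of what moving, rather than breaking, a system can look like, the example below again uses synthetic data and scikit-learn, not any real pipeline: a small batch of poisoned training points shifts a simple model's decision boundary so that one chosen input is read the way the attacker wants, while accuracy on the clean data barely changes.

```python
# Hypothetical poisoning sketch with synthetic data: a small batch of
# mislabeled points slipped into the training set can steer a simple
# model so that one chosen input is classified the way the attacker
# wants, while accuracy on the clean data stays essentially unchanged.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
clean_a = rng.normal([0.0, 0.0], 0.5, size=(100, 2))
clean_b = rng.normal([3.0, 3.0], 0.5, size=(100, 2))
X_clean = np.vstack([clean_a, clean_b])
y_clean = np.array([0] * 100 + [1] * 100)

target = np.array([[1.8, 1.8]])   # the input the attacker wants read as class 0

# Poisoned samples: class-0 labels placed around the target location.
X_poison = rng.normal([1.8, 1.8], 0.2, size=(20, 2))
y_poison = np.zeros(20, dtype=int)

clean_model = LogisticRegression().fit(X_clean, y_clean)
poisoned_model = LogisticRegression().fit(
    np.vstack([X_clean, X_poison]),
    np.concatenate([y_clean, y_poison]),
)

print(clean_model.predict(target), poisoned_model.predict(target))   # prediction on the target shifts
print(clean_model.score(X_clean, y_clean),
      poisoned_model.score(X_clean, y_clean))                        # clean-data accuracy barely moves
```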

More research must be done not on who is winning the race to implement the new technology fastest, but on how each actor chooses to leverage it. Understanding the motivation behind the adoption is the first step in understanding the underlying vulnerabilities that can be exploited if necessary. Information is the most important asset today, and the presumption is often that whoever can make the most sense of the vast stores of data will win. The fundamental problem with this is that data manipulation is one of the easiest cyber attacks to carry out and one of the hardest to detect. This means the promise of AI is fundamentally flawed: the data required to make it useful is corruptible, and worse, the corruption may go undetected until after the system has failed.
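
A small, entirely hypothetical example of why such corruption is hard to catch: a single altered value that stays within the normal range passes routine validation checks, and the tampering only surfaces when a downstream decision comes out wrong.

```python
# Hypothetical sketch: one tampered reading in a sensor log stays inside
# the "plausible" range, so a simple validation check passes -- the
# manipulation only shows up when a downstream threshold decision flips.
readings = [101.2, 99.8, 100.5, 100.1, 99.9, 100.3]   # clean telemetry
tampered = list(readings)
tampered[3] = 97.6   # altered, but still within normal operating bounds

def passes_validation(values, low=95.0, high=105.0):
    # The kind of range check many data pipelines rely on.
    return all(low <= v <= high for v in values)

def system_decision(values, trigger=100.0):
    # Downstream logic: act only if the average stays at or above the trigger.
    return "ENGAGE" if sum(values) / len(values) >= trigger else "STAND DOWN"

print(passes_validation(readings), passes_validation(tampered))   # True True
print(system_decision(readings), system_decision(tampered))       # ENGAGE STAND DOWN
```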


The contents of this essay reflect only the views of the author and do not represent DoD or any US government entity.

