Intelligent cyber-physical systems (iCPS) integrate computational and networking resources with physical components, such as sensors and actuators, and with learning-based controllers. Examples include autonomous vehicles, robotic swarms and smart grids.
While iCPS are increasingly being deployed in various applications with significant benefits, their learning-based controllers are vulnerable to cyberattacks. Professor Vignesh Narayanan of the Artificial Intelligence Institute at the University of South Carolina recently started a research project that aims to understand these vulnerabilities and how adversaries can exploit them.
Narayanan’s three-year, $300,000 research project is funded by the National Science Foundation. While his previous research has included cyber-physical systems, Narayanan wanted to pursue this project because protecting iCPS is more complicated: the learning component introduces additional variables that give adversaries another avenue to exploit.
“This is an important group of systems that needs to be understood,” Narayanan says. “With the increased integration of artificial intelligence models, the iCPS and their vulnerabilities should be thoroughly understood and secured to ensure that adversaries do not exploit them.”
Narayanan will study reinforcement learning-based controllers: decision-making components that interact with a physical system to modify its behavior and that rely on feedback, or incentives, from the system. Adversarial attacks on cyber-physical systems are often studied in terms of compromised physical sensors; here, it is the incentive mechanism itself that can be manipulated. As a result, an adversary could feed the controller incentives that do not match the system’s state or reflect the decision maker’s true performance.
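To make this attack surface concrete, the following is a minimal sketch, assuming a tabular Q-learning controller on a toy chain environment; the environment, the poisoning model and all names are illustrative and not drawn from Narayanan’s project.

```python
import random

# Minimal tabular Q-learning loop illustrating incentive (reward)
# poisoning. The chain environment, attack model and all names are
# illustrative assumptions, not the project's actual formulation.

N_STATES, N_ACTIONS = 5, 2
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def true_reward(state, action):
    """Honest incentive: reward moving right (action 0) at the goal state."""
    return 1.0 if state == N_STATES - 1 and action == 0 else 0.0

def poisoned_reward(state, action):
    """Poisoned incentive: the adversary flips which action is rewarded,
    so the feedback no longer reflects actual performance."""
    return 1.0 if state == N_STATES - 1 and action == 1 else 0.0

def step(state, action):
    """Toy dynamics: action 0 moves right, action 1 moves left."""
    return min(state + 1, N_STATES - 1) if action == 0 else max(state - 1, 0)

for episode in range(500):
    s = 0
    for _ in range(20):
        if random.random() < EPS:
            a = random.randrange(N_ACTIONS)
        else:
            a = max(range(N_ACTIONS), key=lambda x: Q[s][x])
        s_next = step(s, a)
        r = poisoned_reward(s, a)  # swap in true_reward for the honest run
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s_next]) - Q[s][a])
        s = s_next

# Under poisoned feedback the learned policy prefers action 1 at the
# goal state, even though the physical dynamics never changed.
print([max(range(N_ACTIONS), key=lambda a: Q[st][a]) for st in range(N_STATES)])
```

Nothing in the physical plant is touched; corrupting only the reward signal is enough to steer what the controller learns.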
Narayanan’s research will begin with a single, standalone system and eventually expand to a network of interconnected systems to determine if cyberattacks can be detected.
“The fundamental goal is to understand this information asymmetry. The decision-making body may receive some information about a system, but it could be an adversary trying to manipulate the information flow,” Narayanan says. “We’re trying to understand if there are any fundamental limits that cannot be overcome in order to secure the system.”
Manipulating performance feedback, which is fundamental to the controllers’ learning process, so that reinforcement learning-based controllers gradually adopt an adversary’s policy is an example of a simple yet devastating attack that undermines safety and security. Narayanan and his collaborators from the University of Alabama in Huntsville (UAH) will work on strategies to protect against these attacks. This includes developing a foundational theory incorporating secure-by-design principles, such as secure learning and adversarial intent estimation for real-time attack detection. The research aims to model, characterize and synthesize adversarial information patterns.
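One building block such a theory might formalize is a residual test on the incentive channel. The sketch below applies a standard CUSUM statistic to flag persistent deviation between the rewards a controller observes and those a nominal model predicts; the nominal model, drift and threshold are assumptions made here for illustration.

```python
# A simple residual test on the incentive channel: a CUSUM statistic
# flags persistent deviation between the reward the controller observes
# and the reward a nominal model predicts. The nominal model, drift and
# threshold are assumptions made for illustration.

def cusum_detector(observed_rewards, predicted_rewards,
                   drift=0.05, threshold=2.0):
    """Return the first time step at which the cumulative residual
    statistic crosses the threshold, or None if no alarm fires."""
    s = 0.0
    for t, (r_obs, r_pred) in enumerate(zip(observed_rewards,
                                            predicted_rewards)):
        s = max(0.0, s + abs(r_obs - r_pred) - drift)
        if s > threshold:
            return t  # alarm: incentives deviate from the nominal model
    return None

# Example: honest feedback for 50 steps, then a poisoned offset.
predicted = [1.0] * 100
observed = [1.0] * 50 + [1.3] * 50
print(cusum_detector(observed, predicted))  # alarm fires shortly after step 50
```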
In many safety-critical applications, such as flight and driving, there is still hesitancy about integrating AI because there is a lack of understanding of its vulnerability to the unknown.
- Vignesh Narayanan
Taking the adversary’s perspective, Narayanan and his collaborators will attempt to force learning-based controllers to adopt a predetermined adversarial policy without detection, which would result in biased decision-making and increased control costs. To study this problem, the iCPS will be reformulated and analyzed using a distributed control and differential game framework. The team will also develop methods to gain insights into attack strategies and attack models for real-time detection without requiring offline training. Success in this research would foster technological breakthroughs toward secure and trustworthy autonomy, precise control and safe operations.
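For readers familiar with the area, the reformulation resembles a classical two-player zero-sum differential game, written below in illustrative notation (the project’s actual formulation may differ): the controller input u minimizes a cost that the adversarial signal w, acting through the information channel, tries to maximize.

```latex
% Illustrative zero-sum differential game: controller u vs. adversary w
\dot{x} = f(x) + g(x)\,u + k(x)\,w,
\qquad
J(u, w) = \int_{0}^{\infty} \left( q(x) + u^{\top} R\, u
          - \gamma^{2} \lVert w \rVert^{2} \right) dt,
\qquad
u^{*} = \arg\min_{u} \max_{w} J(u, w).
```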
“Adversaries can manipulate incentives. Since it’s not easy to monitor directly with physical sensors, we need software sensors that can identify or detect this manipulation,” Narayanan says. “To understand this, we want to take the role of an adversary to figure out the best way to manipulate these incentives without getting detected.”
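A toy version of that adversarial exercise appears below, restating the CUSUM detector from the earlier sketch in compact form: a persistent bias smaller than the detector’s per-step drift allowance corrupts the incentive stream without ever raising an alarm. All quantities are illustrative assumptions.

```python
# Taking the adversary's role: a persistent bias smaller than the
# detector's per-step drift allowance corrupts the incentive stream
# without ever raising an alarm. The detector from the earlier sketch
# is restated compactly; all quantities are illustrative assumptions.

def cusum_alarm(observed, predicted, drift=0.05, threshold=2.0):
    """True if the cumulative residual statistic ever crosses the threshold."""
    s = 0.0
    for r_obs, r_pred in zip(observed, predicted):
        s = max(0.0, s + abs(r_obs - r_pred) - drift)
        if s > threshold:
            return True
    return False

nominal = [1.0] * 1000
poisoned = [r + 0.04 for r in nominal]  # 4% bias, just under the 0.05 drift
print(cusum_alarm(poisoned, nominal))   # False: the manipulation goes unflagged
```

No alarm fires, yet the small bias steadily skews the controller’s value estimates over those 1,000 steps; characterizing such fundamental limits of detectability is exactly what the project sets out to do.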
Assistant Professor Avimanyu Sahoo, Narayanan’s collaborator at UAH, will use a microgrid testbed for testing and evaluation. Additional evaluations will use robots at the University of South Carolina. Narayanan believes that if they can predict the behavior of AI components in a standalone system, such as a robotic car, the approach can then scale up to a network, such as a microgrid simulator.
“To validate the results, we have chosen the Smart Installation simulation platform, which consists of a microgrid with generators connected to various types of loads,” Sahoo says. “We will first develop learning-based controllers for the generating plants to control their voltage and frequency under varying loading conditions. Then, we will develop an attack scheme to manipulate these controllers.”
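As a rough illustration of the control task Sahoo describes, the sketch below regulates a toy generator’s frequency under a step change in load. A classical PI regulator stands in for the learning-based controller, since the interface (frequency error in, power command out) is the same; the plant model, gains and units are invented for illustration and do not describe the Smart Installation platform.

```python
# Toy sketch of generator frequency regulation: hold 60 Hz under a
# step change in load. A classical PI regulator stands in for the
# learning-based controller; plant model, gains and units are invented.

DT = 0.01          # simulation time step (s)
F_NOM = 60.0       # nominal frequency (Hz)
INERTIA = 5.0      # lumped inertia constant (toy units)
KP, KI = 8.0, 4.0  # PI gains

freq, integ = F_NOM, 0.0
for k in range(3000):                     # simulate 30 s
    load = 1.0 if k < 1000 else 1.3       # step load increase at t = 10 s
    err = F_NOM - freq
    integ += err * DT
    power = 1.0 + KP * err + KI * integ   # mechanical power command
    # Toy swing dynamics: frequency falls when load exceeds generation.
    freq += DT * (power - load) / INERTIA

print(round(freq, 3))  # settles back near 60.0 despite the load change
```

An attack scheme of the kind Sahoo describes would tamper with the feedback this loop learns from, rather than with the generator itself.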
If successful, the work will have implications across applications such as autonomous connected vehicles, multi-robot systems collaborating in uncertain environments, and wireless sensor networks. Narayanan is excited to work on the different testbeds (robots and microgrids), where he can watch his work being implemented in practice.
“The success of this project would be to have a better understanding of when and how these learning or AI components that are used along with the cyber-physical system can be made secure,” Narayanan says. “If that happens, we should be able to provide the metrics that the learning component must meet in order to certify a system that uses this component as secure.”