
Ruchkin Seeks Greater Confidence (in Cyber-physical Systems)

Ivan Ruchkin, Ph.D.

Imagine that you’ve deployed a cyber-physical system (CPS), a physical device controlled and monitored by computer algorithms, such as a self-driving car. Something feels off. Is the system glitching? If it is indeed encountering an anomaly, it would be reasonable, even obligatory, to ask: How much does the anomaly affect the system? How is safety affected? Ivan Ruchkin, Ph.D., is not satisfied with the answers provided by state-of-the-art systems and is determined to find a better way. He’s on a mission to improve the operation and safety of CPS.

A project recently funded by the National Science Foundation aims to fill the gaps in current methodologies. “Confidently Safe Autonomy Under Anomalies” will develop a framework called Methodology for Anomalous Safety Confidence (MASC), which will adjust the system’s confidence in its own safety and correctness on the fly, based on the anomalies it detects.

Ruchkin with his students in the TEA Lab

An Example

At the first sign that the car is behaving strangely, a whole series of questions comes up.

The obvious first question would be: Is the car glitching? Researchers call this anomaly detection, and according to Ruchkin, those methodologies are relatively well developed. Let’s say, for this example, that the system determines that yes, the car is glitching.

Next question: what’s causing it? Dirt on the camera? An unexpected visual pattern in the road throwing off the neural network tasked with identifying objects? A problem with the tires? This type of determination is difficult to make on the fly, in the middle of a road. And it may not be possible to bring the car to a shop and do exhaustive testing and analysis.

Now that we know there is a glitch but not its cause, what will happen next? This question is the heart of Ruchkin’s work. Since the cause of the problem is unknown, it’s difficult to predict what will happen. The system needs to translate all the available information into some type of confidence-aware prediction (e.g., the car will probably not be able to stay perfectly in its lane, but it is very unlikely to drift into oncoming traffic).
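As a toy illustration (not Ruchkin’s actual framework), one way to picture a confidence-aware prediction is as probability ranges over safety-relevant outcomes, rather than a single yes/no answer. All outcomes and numbers below are invented for this article:

```python
# Illustrative sketch only: the outcomes and probability ranges
# are made up for this article, not taken from the MASC framework.
prediction = {
    "drifts out of lane":      (0.40, 0.70),  # likely, with wide uncertainty
    "enters oncoming traffic": (0.00, 0.02),  # very unlikely under any cause
}

# Report each outcome with its lower and upper probability bound.
for outcome, (lo, hi) in prediction.items():
    print(f"{outcome}: {lo:.0%}-{hi:.0%} likely")
```

The width of each range conveys how much the unknown cause of the glitch matters for that outcome: a wide band for lane-keeping, a narrow one for the catastrophic case.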

His unique response to this problem—how to create confidence-aware predictions in the face of anomalous data—is to inject uncertainty into the system.

Uncertainty Injectors

Yes, dear reader, Ruchkin has developed uncertainty injectors. Why? Despite recent advances in anomaly detection, there is little connection between anomaly severity and safety violations in a CPS—beyond vague statistical correlation. Ruchkin’s work seeks to close this gap by designing a general methodology to compute safety confidence in learning-enabled CPS encountering anomalous behavior.

The key idea? State-of-the-art anomaly measures can be aligned with typical CPS components to inform online safety prediction. Leveraging this insight, Ruchkin aims to develop a collection of modular, meaningful, and safety-relevant anomaly scores for perception, dynamics, and control components. These scores are then used to inject uncertainty into the safety monitoring layer using symbolic functions that provide formal guarantees of calibrated prediction. What kind of uncertainty? The system essentially incorporates questions like, “How sure am I that I actually see a stop sign?” or “Given what I know, will I be safe a few seconds from now?” The end result is that the system operates less confidently, perhaps more cautiously, more accurately reflecting the uncertainty of the actual environment. The car might slow down, for example, or reduce the sharpness of an upcoming turn.
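To make the idea concrete, here is a minimal sketch of uncertainty injection. Everything in it is a simplifying assumption for illustration: the function names, the linear shrink-toward-0.5 rule, the weakest-link combination, and the numbers are all hypothetical, not the MASC methodology itself:

```python
# Hypothetical sketch of "uncertainty injection" (not the actual MASC
# framework): per-component anomaly scores reduce the system's
# confidence in its own safety, which can trigger cautious behavior.

def inject_uncertainty(base_confidence: float, anomaly_score: float) -> float:
    """Shrink a component's confidence toward 0.5 (maximal uncertainty)
    in proportion to its anomaly score in [0, 1]."""
    return 0.5 + (base_confidence - 0.5) * (1.0 - anomaly_score)

def safety_confidence(confidences: dict, anomaly_scores: dict) -> float:
    """Combine per-component confidences (perception, dynamics, control)
    into one conservative estimate: take the weakest link."""
    adjusted = [
        inject_uncertainty(conf, anomaly_scores.get(name, 0.0))
        for name, conf in confidences.items()
    ]
    return min(adjusted)

# Example: a camera anomaly (say, dirt on the lens) hits perception hardest.
conf = safety_confidence(
    {"perception": 0.95, "dynamics": 0.90, "control": 0.92},
    {"perception": 0.6, "dynamics": 0.1, "control": 0.0},
)
if conf < 0.8:
    print("reduce speed")  # act more cautiously under the anomaly
```

In this sketch, the dirty camera drags overall safety confidence down to 0.68, below the (arbitrary) 0.8 threshold, so the car slows down, mirroring the cautious behavior described above.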

The proposed methodology will be validated on small-scale autonomous racing cars. To aid other researchers, the code and data will be publicly released. As part of the work, Ruchkin will host physical demonstrations of anomaly-aware small-scale racing cars at a variety of venues. In addition, the research findings will be disseminated through a graduate course module on anomaly detection for safety. Finally, to involve undergraduate students in hands-on lab research, Ruchkin will partner with two well-established programs for undergraduate research mentoring at the University of Florida.