Ruchkin & Príncipe get visual

Professors Ivan Ruchkin, Ph.D., and José Príncipe, Ph.D., are collaborating to create an end-to-end methodology to model, analyze, quantify, detect and adapt to changes in the visual environment of an autonomous cyber-physical system. 

Funded by the National Science Foundation, the $550,000 project, “VISUALS: Verifiable Information-Theoretic Safety Under Augmented Latent Shifts,” brings together techniques and insights from formal methods, information theory and uncertainty quantification. Former ECE assistant professor Yuheng Bu, Ph.D., is also on the project.

Overview

Deep learning-based perception and control are increasingly popular in autonomous systems due to their state-of-the-art performance. These systems feed sensor data (what a camera sees) into a machine learning (ML) system, which repeatedly interprets each image against what it has “learned” from complex models, and the resulting estimates drive the vehicle’s control decisions. For many common situations and environments, these ML systems work well.
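As a toy illustration of this loop, here is a minimal Python sketch of the cycle described above: a camera frame goes to a learned perception component, whose state estimate drives a controller, which in turn changes what the camera sees next. Every name here (fake_camera, perception, controller) and the simplified lane-keeping dynamics are illustrative placeholders, not components of the actual project.

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_camera(true_offset):
    """Stand-in for a camera: a noisy observation of the car's
    lateral offset from the lane center (meters)."""
    return true_offset + rng.normal(0.0, 0.05)

def perception(image):
    """Stand-in for a learned perception model mapping an image to a
    state estimate. Here the 'image' already encodes the offset."""
    return image  # a real network would infer this from pixels

def controller(offset_estimate, gain=0.8):
    """Simple proportional steering command pushing the offset to zero."""
    return -gain * offset_estimate

offset = 0.5  # start half a meter off-center
for step in range(10):
    estimate = perception(fake_camera(offset))
    steering = controller(estimate)
    offset += 0.3 * steering  # simplified lateral dynamics
    print(f"step {step}: offset={offset:+.3f} m, steering={steering:+.3f}")
```

In this closed loop, any degradation of the perception component feeds directly into the control commands, which is exactly why the visual changes discussed next are dangerous.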

However, deep learning is vulnerable to various changes in the visual environment. Researchers call these changes visual distribution shifts; they can trigger unpredictable behaviors in closed-loop control systems and threaten their performance and safety. At the same time, it is particularly difficult to provide guarantees for such systems because the shifts are usually latent: they lack an explicit, first-principles representation in dynamical models.

Ruchkin aims to ensure the safety and performance of vision-based autonomous systems subject to latent shifts through an end-to-end methodology that leverages information-theoretic and statistical techniques in modeling, analysis, training, control and adaptation.

At the foundation of their methodology is a robust framework for probabilistic verification and control synthesis, which will provide conservative models and safety estimates under latent shifts. Building on these models is a neurosymbolic training process that bridges the gap between visual perception, safety and control.
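To give a flavor of what a conservative, sampling-based safety estimate can look like, the sketch below simulates closed-loop runs under randomly sampled visual shifts and reports a high-confidence lower bound on the safety probability using a one-sided Hoeffding inequality. The toy dynamics, the shift model and all numerical constants are assumptions made for illustration; the project’s actual verification machinery is substantially richer than this.

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def rollout_is_safe(shift_severity):
    """Simulate one closed-loop run under a sampled visual shift and
    report whether the car stayed within a 1 m lane corridor."""
    offset = 0.5
    for _ in range(50):
        noise_sd = 0.05 * (1.0 + 4.0 * shift_severity)  # shift inflates noise
        estimate = offset + rng.normal(0.0, noise_sd)
        offset += 0.3 * (-0.8 * estimate)
        if abs(offset) > 1.0:
            return False
    return True

n, delta = 2000, 1e-3
successes = sum(rollout_is_safe(rng.uniform(0, 1)) for _ in range(n))
p_hat = successes / n
# One-sided Hoeffding bound: with probability >= 1 - delta, the true
# safety probability is at least this conservative estimate.
p_lower = p_hat - math.sqrt(math.log(1 / delta) / (2 * n))
print(f"empirical safety: {p_hat:.3f}, conservative bound: {p_lower:.3f}")
```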

In order to protect the system from overreacting to unseen shifts, Ruchkin plans to develop an online adaptation framework based on quick-change detection and perception/control switching. The team will validate their methodology on a variety of autonomous systems from different domains, including physical experiments using small-scale autonomous racing cars.
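Quick-change detection has classic lightweight instantiations; one standard example is the CUSUM statistic, sketched below on simulated perception residuals. The drift and threshold parameters, and the simple switch to a cautious controller on alarm, are illustrative assumptions rather than the project’s actual design.

```python
import numpy as np

rng = np.random.default_rng(2)

def cusum_monitor(residuals, drift=0.2, threshold=5.0):
    """Return the first index where the cumulative sum of
    (residual - drift) exceeds the threshold, else None."""
    s = 0.0
    for t, r in enumerate(residuals):
        s = max(0.0, s + r - drift)
        if s > threshold:
            return t
    return None

# Perception residuals: nominal for 100 steps, then a latent shift
# (e.g., nightfall) inflates the mean error.
nominal = np.abs(rng.normal(0.0, 0.1, size=100))
shifted = np.abs(rng.normal(0.5, 0.2, size=100))
residuals = np.concatenate([nominal, shifted])

alarm = cusum_monitor(residuals)
mode = "cautious controller" if alarm is not None else "nominal controller"
print(f"shift flagged at step {alarm}; switching to {mode}")
```

The appeal of a detector like this is that it reacts quickly to persistent shifts while ignoring one-off noisy frames, which is what keeps the system from overreacting.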

Some Concrete Examples

While some of Ruchkin’s work can be very broad in scope, this project’s focus is quite narrow. Essentially, it aims to help cyber-physical systems make more intelligent choices when faced with a visual anomaly, with safe outcomes as the priority. Instead of focusing on changes in system dynamics (e.g., what if the car gets a flat tire) or anomalous agents (e.g., avoiding a driver under the influence), it focuses on the shifting style of visual inputs, and on how the rest of the system needs to adapt to them to stay safe.

For example, assume a team of engineers is building an autonomous car with a camera and a perception neural network to interpret the images.

First, the engineers will make a compressed description (a formal model) of how the perception neural network works at different times of the day, and how it affects the car’s behavior. This description will not be complete: for some conditions (say, at night), the model will have less information (data uncertainty).

Second, the engineers will enhance the neural network to announce how certain it is about its interpretation of the image. When seeing a daytime image, it should be more certain on average because it had more daytime training data.

Third, the engineers will train another neural network, a controller, to drive the car safely given rich information about how the car moves, how the perception works, and what (and how well) it can see at the current moment.

Fourth, to adapt to previously unforeseen conditions (e.g., a polar night), the engineers will build in an extra layer of intelligence. When the car faces unexpected visual inputs, this layer will improve the car’s behavior by slotting in the most suitable combination of perception and control neural networks.
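Here is a rough sketch of the second and fourth steps above: a perception component that reports its own uncertainty (approximated here by the disagreement within a small ensemble of models) and a supervisor that slots in a perception/control pair matched to the current conditions. The brightness model, the thresholds and the mode names are all hypothetical, chosen only to make the idea concrete.

```python
import numpy as np

rng = np.random.default_rng(3)

def ensemble_perception(image_brightness, n_models=5):
    """Each ensemble member's error grows as the scene darkens, so
    their disagreement (std) serves as self-reported uncertainty."""
    noise_sd = 0.05 + 0.4 * (1.0 - image_brightness)
    estimates = rng.normal(0.3, noise_sd, size=n_models)
    return estimates.mean(), estimates.std()

def select_mode(uncertainty, low=0.1, high=0.3):
    """Pick the perception/control pair suited to the reported uncertainty."""
    if uncertainty < low:
        return "daytime perception + nominal controller"
    if uncertainty < high:
        return "low-light perception + cautious controller"
    return "fallback perception + minimal-risk controller"

for brightness in (1.0, 0.5, 0.1):  # noon, dusk, polar night
    estimate, unc = ensemble_perception(brightness)
    print(f"brightness {brightness:.1f}: estimate {estimate:+.2f}, "
          f"uncertainty {unc:.2f} -> {select_mode(unc)}")
```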

Collaborators

Also on the project is ECE Distinguished Professor José Príncipe, whose decades-long career has focused on adaptive systems, neural perception and uncertainty quantification. His research starts from the view that perception is an active process, shaped by what the system wants to accomplish, the available data (e.g., training images from the daytime) and additional prior knowledge (e.g., no matter what image the car is seeing, the car itself remains a rigid object with a certain mass and inertia on Earth’s surface).

Príncipe’s computationally efficient methods will provide crucial insights on the uncertainty of the visual inputs at high frequency (up to 30 times per second). 

Yuheng Bu, Ph.D., previously an assistant professor with ECE and now an assistant professor in the Department of Computer Science (CS) at UC Santa Barbara, also contributed to the work. His involvement was chiefly in information-theoretic techniques that enable cyber-physical systems to extract the most value from the observed images in order to drive the car safely.

Expected Impacts

The project has two broad-impact goals: make visual autonomy safer in the automotive industry and increase awareness of the complex, non-monotonic relationship between control performance, safety and perception accuracy.

The former goal will be pursued by collaborating with a group of Toyota researchers led by Danil Prokhorov, Ph.D., and transferring the results to their product groups. Another key deliverable is a public dataset of visually shifted trajectories and a car-racing simulator instrumented with visual degradations, which will help other researchers further develop visual safety techniques.

For the second goal, Ruchkin plans a series of hackathons where university students will get hands-on experience in developing safe perception and control for small-scale racing cars. The investigators will also incorporate the resulting techniques, data and simulations into their four courses, reaching over 100 students every year.