Researchers at TU Graz in Austria, in collaboration with Infineon, have developed an AI system for automotive radar sensors that filters out interfering signals caused by other radar sensors, which they say dramatically improves object detection. Research is now focusing on making the system more robust against weather and environmental influences as well as new types of interference.
“The better the denoising of interfering signals works, the more reliably the position and speed of objects can be determined,” explained Franz Pernkopf from the Institute of Signal Processing and Speech Communication. Together with his team and with partners from Infineon, he has developed an AI system based on neural networks that mitigates mutual interference in radar signals, far surpassing the current state of the art. They now want to optimize this model so that it also works outside of learned patterns and recognizes objects even more reliably.
To this end, the researchers first developed model architectures for automatic noise suppression based on convolutional neural networks (CNNs). “These architectures are modeled on the layer hierarchy of our visual cortex and are already being used successfully in image and signal processing,” said Pernkopf.
CNNs filter the visual information, recognize connections and complete the image using familiar patterns. Due to their structure, they consume considerably less memory than other neural networks, but their memory requirements still exceed the capacities available on radar sensors for autonomous driving.
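As an illustration, here is a minimal sketch of such a denoising CNN in PyTorch, assuming the radar data arrives as range-Doppler maps whose real and imaginary parts form two input channels; the layer sizes are illustrative, not the published architecture:

```python
import torch
import torch.nn as nn

class RadarDenoisingCNN(nn.Module):
    """Small CNN mapping an interfered range-Doppler map to a clean one.

    Input/output shape: (batch, 2, H, W), where the two channels hold the
    real and imaginary parts of the complex radar spectrum. Layer sizes
    are illustrative, not the published architecture.
    """
    def __init__(self, channels: int = 16, num_layers: int = 3):
        super().__init__()
        layers = [nn.Conv2d(2, channels, kernel_size=3, padding=1), nn.ReLU()]
        for _ in range(num_layers - 2):
            layers += [nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(channels, 2, kernel_size=3, padding=1)]  # linear output layer
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)
```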
To increase efficiency, the TU Graz team trained several such networks on noisy data paired with the desired output values. In experiments, they identified particularly small and fast model architectures by analyzing the memory footprint and the number of computing operations required per denoising pass. The most efficient models were then compressed further by reducing the bit widths, i.e. the number of bits used to store the model parameters.
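A sketch of what such supervised training and bit-width reduction could look like, continuing the hypothetical model above; the paired maps here are random stand-in data, and the naive uniform quantization is only a placeholder for the team’s actual compression scheme:

```python
import torch

model = RadarDenoisingCNN()  # hypothetical model from the sketch above
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

# Stand-in data: pairs of (interfered, clean) range-Doppler maps.
dataloader = [(torch.randn(4, 2, 64, 64), torch.randn(4, 2, 64, 64))
              for _ in range(10)]

# Supervised denoising: noisy input, desired (clean) output as target.
for interfered, clean in dataloader:
    optimizer.zero_grad()
    loss = loss_fn(model(interfered), clean)
    loss.backward()
    optimizer.step()

def quantize_weights(model: torch.nn.Module, bits: int = 8) -> None:
    """Naive post-training quantization: snap each weight tensor onto a
    uniform grid with 2**bits levels (illustrative, not the paper's scheme)."""
    with torch.no_grad():
        for p in model.parameters():
            scale = p.abs().max() / (2 ** (bits - 1) - 1)
            if scale > 0:
                q = (p / scale).round().clamp(-(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
                p.copy_(q * scale)

quantize_weights(model, bits=8)  # reduce bit width after training
```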
The result, they say, was an AI model that combines high filter performance with low energy consumption. The denoising results are excellent: with an F1 score (a measure of a test’s accuracy) of 89%, object detection almost matches that achieved on undisturbed radar signals. The interfering signals are thus almost completely removed from the measurement signal.
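For reference, the F1 score is the harmonic mean of precision and recall, so a high value requires both few missed objects and few false detections:

```latex
F_1 = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}
```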
Expressed in figures: with a bit width of 8 bits, the model achieves the same performance as comparable models with a bit width of 32 bits, yet requires only 218 kB of memory. This corresponds to a storage space reduction of 75%, putting the model well ahead of the current state of the art.
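The arithmetic behind the 75% figure, assuming the 218 kB refers purely to parameter storage:

```latex
\frac{8\,\mathrm{bit}}{32\,\mathrm{bit}} = \frac{1}{4}
\quad\Rightarrow\quad
1 - \frac{1}{4} = 75\%\ \text{saved},
\qquad
4 \times 218\,\mathrm{kB} \approx 872\,\mathrm{kB}\ \text{at 32 bit}.
```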
In the FFG project REPAIR (Robust and ExPlainable AI for Radarsensors), Pernkopf and his team will work with Infineon over the next three years to optimize the system.
“For our successful tests, we used data (note: interfering signals) similar to what we used for the training,” explained Pernkopf. “We now want to improve the model so that it still works when the input signal deviates significantly from learned patterns.”
This would make radar sensors many times more robust with respect to interference from the environment. After all, the sensor is also confronted with different, sometimes unknown situations in reality. “Until now, even the smallest changes to the measurement data were enough for the output to collapse and objects not to be detected or to be detected incorrectly, something which would be devastating in the autonomous driving use case,” he said.
The system has to be able to cope with such challenges and to notice when its own predictions are uncertain; it could then fall back on a safeguarded emergency routine, for example. To this end, the researchers want to find out how the system arrives at its predictions and which influencing factors are decisive. Until now, this complex process within the network has been comprehensible only to a limited extent.
To do this, the complicated model architecture is approximated by a simplified linear model. In Pernkopf’s words: “We want to make CNNs’ behavior a bit more explainable. We are not only interested in the output result, but also in its range of variation. The smaller the variance, the more certain the network is.”
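One common way to obtain such a range of variation, shown here purely as an illustration of the idea rather than the project’s actual method, is Monte Carlo dropout: keep dropout active at inference time and read the spread of several stochastic forward passes as the network’s uncertainty. This assumes the model contains dropout layers, which the sketch above does not:

```python
import torch

def predict_with_uncertainty(model: torch.nn.Module, x: torch.Tensor,
                             samples: int = 20):
    """Estimate predictive mean and variance via Monte Carlo dropout.

    Assumes the model contains dropout layers; model.train() keeps them
    stochastic during inference. High variance across passes signals an
    uncertain prediction, which could trigger an emergency routine.
    """
    model.train()  # keep dropout active (no weights are updated here)
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(samples)])
    return preds.mean(dim=0), preds.var(dim=0)
```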