Arvind Srivastav, a software engineer at Zoox, discusses the use of radars in autonomy, ahead of his presentation on the same theme at the ADAS & Autonomous Vehicle Technology Expo California Conference, which takes place September 20 & 21, 2023, in Santa Clara. Arvind currently leads R&D projects on perception at Zoox, which is on a mission to provide safe transportation via its purpose-built robotaxis.
A graduate of Stanford University, Arvind specializes in radar perception and early sensor fusion. Prior to Stanford, he led the development of an advanced underground imaging radar system in Cleveland, Ohio. This novel technology facilitated proactive monitoring of urban infrastructure, reducing financial and environmental costs and improving the safety of the city’s aging assets. A fervent advocate of a safer, more sustainable future for our cities supported by autonomous vehicles, Arvind has authored six publications and holds seven patents.
Can you describe your presentation?
My presentation aims to shed light on the importance of radars in autonomous driving and the challenges inherent in their use. Radars are known for their ability to detect objects from long distances, under occlusions, and in adverse weather conditions. However, harnessing their full potential through deep learning methods is no easy task. Imagine trying to identify objects in a foggy environment; while the human eye might struggle, radars can “see” in these conditions, but interpreting this radar data accurately presents a significant challenge.
One of the strategies I will explore is early sensor fusion. Here, we combine radar data with information from other sensors like cameras and lidar at an early stage to achieve a more robust perception of objects in a scene. It’s like piecing together a jigsaw puzzle of scene perception, where each sensor provides different pieces. Radars might offer velocity information, lidars contribute precise shape details, and cameras add color and fine-grained class data. We’ll also delve into occupancy estimation, which involves predicting where objects could be in the environment around the vehicle – essentially reconstructing the scene from sensor data and estimating which regions are safe for driving.
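To make the early-fusion idea concrete, here is a minimal sketch of rasterizing lidar and radar points into a shared bird’s-eye-view grid that a downstream occupancy or detection network could consume. The grid size, cell resolution and channel layout are illustrative assumptions, not a description of any production system.

```python
import numpy as np

# Sketch of early fusion: rasterize lidar and radar points into one shared
# bird's-eye-view (BEV) grid so a downstream network sees both modalities at once.
GRID = 200   # 200 x 200 cells
RES = 0.5    # metres per cell -> roughly 100 m x 100 m around the vehicle

def to_cell(xy):
    """Map metric x/y coordinates (vehicle frame) to integer grid indices."""
    idx = np.floor(xy / RES).astype(int) + GRID // 2
    return np.clip(idx, 0, GRID - 1)

def fuse_bev(lidar_xyz, radar_xyv):
    """lidar_xyz: (N, 3) points; radar_xyv: (M, 3) points with radial velocity."""
    # Channels: lidar occupancy, max height per cell, radar radial velocity.
    bev = np.zeros((3, GRID, GRID), dtype=np.float32)
    li = to_cell(lidar_xyz[:, :2])
    bev[0, li[:, 1], li[:, 0]] = 1.0
    np.maximum.at(bev[1], (li[:, 1], li[:, 0]), lidar_xyz[:, 2])
    ri = to_cell(radar_xyv[:, :2])
    bev[2, ri[:, 1], ri[:, 0]] = radar_xyv[:, 2]
    return bev   # fed to a BEV occupancy/detection network
```

Each sensor fills different channels of the same grid, which is the “jigsaw puzzle” intuition: velocity from radar, shape and height from lidar, with camera features typically added as further channels after projection.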
What are some common misconceptions about radars and their use in ADAS and autonomous driving?
A common misunderstanding is that all radars are similar. However, a radar’s capability can vary greatly based on specifications such as imaging range, resolution and other technical parameters. For instance, new-generation 4D radars provide far superior imaging resolution and data compared with previous-generation 3D radars.
Another misconception is that radar perception models need to provide full scene perception, just like camera and lidar models do. However, radar models inherit not only the strengths of radars but also their weaknesses – which are many (sparsity, low resolution and noisy data, to name a few). Arguably, it’s better to use radar data in a complementary fashion with other sensors’ data so that we inherit the strengths of radars and compensate for their weaknesses. An emerging approach is early sensor fusion, in which perception models leverage radar strengths such as velocity measurement, long-range imaging and robustness to adverse weather, and compensate for radar’s weaknesses using data from other sensors such as lidar and cameras.
Lastly, radars have high uncertainty in imaging, which permeates all of their data. For example, a car could very well appear like a bike under certain imaging conditions. It’s therefore important to accurately estimate a radar model’s detection uncertainty and bake it into the final output.
What are some of radar’s key weaknesses and how are they best resolved?
One of the key weaknesses of radar is its lower resolution compared to cameras or lidar. This can be mitigated to some extent by accumulating radar data over multiple measurement cycles, which improves the shape and location estimation of detected objects.
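As a rough illustration of accumulation, the sketch below merges the last few radar sweeps into the current vehicle frame using per-sweep ego poses. The pose representation (4x4 world-from-sweep transforms) and function names are assumptions for illustration only.

```python
import numpy as np

# Sketch: accumulate several radar sweeps into the current vehicle frame using
# ego poses, yielding a denser cloud with better shape and position estimates.
def accumulate_sweeps(sweeps, poses, current_pose):
    """sweeps: list of (N_i, 3) point arrays; poses: list of 4x4 world-from-sweep
    transforms; current_pose: 4x4 world-from-current transform."""
    current_from_world = np.linalg.inv(current_pose)
    merged = []
    for pts, pose in zip(sweeps, poses):
        homo = np.hstack([pts, np.ones((len(pts), 1))])       # homogeneous coords
        in_current = (current_from_world @ pose @ homo.T).T   # sweep frame -> current frame
        merged.append(in_current[:, :3])
    return np.vstack(merged)
```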
High uncertainty in radar measurements presents another challenge. This can be addressed by incorporating uncertainty estimates into the perception models themselves, ensuring safe and reliable perception even when the data alone is ambiguous or insufficient.
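One common way to bake uncertainty into a model’s output is to have the regression head predict a variance alongside each value and train with a Gaussian negative log-likelihood, so noisy radar returns are automatically down-weighted. This is a generic sketch; the layer sizes and feature dimension are illustrative, not taken from any specific stack.

```python
import torch
import torch.nn as nn

# Regression head that predicts a mean and a log-variance (aleatoric uncertainty).
class HeadWithUncertainty(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.mean = nn.Linear(feat_dim, 1)     # e.g. range or position offset
        self.log_var = nn.Linear(feat_dim, 1)  # predicted uncertainty

    def forward(self, feats):
        return self.mean(feats), self.log_var(feats)

def gaussian_nll(mean, log_var, target):
    # 0.5 * exp(-log_var) * (target - mean)^2 + 0.5 * log_var
    return (0.5 * torch.exp(-log_var) * (target - mean) ** 2 + 0.5 * log_var).mean()
```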
Radars frequently generate ghost objects due to their imaging physics. Advanced signal processing techniques can assist in filtering out most of these ghost objects, thereby enhancing the reliability of the detections.
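Production ghost filters are considerably more sophisticated, but as a highly simplified illustration of the idea: multipath ghosts often appear behind a strong real target, at roughly twice its range and a similar azimuth. The heuristic and thresholds below are arbitrary assumptions, shown only to make the concept tangible.

```python
import numpy as np

# Toy heuristic: suppress weaker detections that sit at ~2x the range of a
# stronger detection on a similar bearing (a classic multipath-ghost pattern).
def suppress_multipath_ghosts(dets, az_tol=0.03, range_tol=1.5):
    """dets: (N, 3) array of [range_m, azimuth_rad, power_db]."""
    keep = np.ones(len(dets), dtype=bool)
    for i in np.argsort(-dets[:, 2]):          # strongest first
        if not keep[i]:
            continue
        r, az = dets[i, 0], dets[i, 1]
        same_bearing = np.abs(dets[:, 1] - az) < az_tol
        ghost_range = np.abs(dets[:, 0] - 2.0 * r) < range_tol
        weaker = dets[:, 2] < dets[i, 2]
        keep &= ~(same_bearing & ghost_range & weaker)
        keep[i] = True                         # never suppress the anchor itself
    return dets[keep]
```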
Finally, the scarcity of large public radar data sets for developing and validating models is a significant challenge. This situation underscores the need for increased availability of better data sets.
What is the future for autonomous radar?
The future holds immense promise for radars in autonomous driving. New 4D radars offer denser data with higher resolution and lower uncertainty than previous-generation radars, leading to superior radar perception. Additionally, a new type of radar, the terahertz radar, is under development; it promises even greater imaging resolution, resulting in improved perception of smaller objects such as cyclists and pedestrians.
Moreover, transformers, the architecture that powers ChatGPT, are gaining wide adoption in perception as well. Transformers are emerging as universal learners with the ability to learn from disparate multi-modality data. They are increasingly playing an instrumental role in designing early sensor fusion models that better leverage the strengths of radar in tandem with other sensor data and offer a unified, robust perception of the surroundings.
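A minimal sketch of how transformer-style fusion can work: a set of scene queries cross-attends to token sequences from each sensor, so radar velocity cues and camera/lidar appearance cues land in one shared representation. The dimensions, token counts and two-stage attention order are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Scene queries cross-attend to radar tokens, then to lidar/camera tokens.
class FusionBlock(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn_radar = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_lidar_cam = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))

    def forward(self, queries, radar_tokens, lidar_cam_tokens):
        q, _ = self.attn_radar(queries, radar_tokens, radar_tokens)        # velocity / long-range cues
        q, _ = self.attn_lidar_cam(q, lidar_cam_tokens, lidar_cam_tokens)  # shape / appearance cues
        return q + self.ffn(q)                                             # fused scene queries

# queries: (B, num_queries, 256); radar_tokens / lidar_cam_tokens: (B, N, 256)
```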
What is your key message to the audience in Santa Clara?
Safety and reliability form the cornerstone of our envisioned autonomous driving future, and radars, with their unique capabilities and potential for continuous advancement, are set to play a crucial role in this endeavor. As we continue to innovate and push the boundaries of what’s possible with radars, we move one step closer to making the dream of safe and reliable autonomous driving a reality.
Don’t miss Arvind Srivastav’s presentation, which is part of the ‘Sensor test, development, fusion, calibration and data’ session taking place on Day 2 (Sept 21) of the conference (rates apply), alongside the free-to-attend exhibition at ADAS & Autonomous Vehicle Technology Expo California.