Engineers at the University of California San Diego say they have developed a way to improve the imaging capability of existing radar sensors so that they can accurately predict the shape and size of objects in a scene.
“It’s a lidar-like radar,” explained Dinesh Bharadia, a professor of electrical and computer engineering at the UC San Diego Jacobs School of Engineering. “It’s an inexpensive approach to achieving bad weather perception in self-driving cars. Fusing lidar and radar can also be done with our techniques, but radars are cheap. This way, we don’t need to use expensive lidars.”
The system consists of two radar sensors placed on the hood of a vehicle and spaced an average car's width apart (1.5 m). Having two radar sensors arranged this way is key: together they give the system a wider field of view and provide more detail than a single radar sensor.
During test drives on clear days and nights, its developers claim the system performed as well as a lidar sensor at determining the dimensions of cars moving in traffic. Its performance did not change in tests simulating foggy weather. The team 'hid' another vehicle using a fog machine and say the system accurately predicted its 3D geometry. For all practical purposes, the lidar sensor failed this test.
Radar traditionally suffers from poor imaging quality because, when radio waves are transmitted and bounced off objects, only a small fraction of the signals are ever reflected back to the sensor. As a result, vehicles, pedestrians and other objects appear as a sparse set of points.
“This is the problem with using a single radar for imaging. It receives just a few points to represent the scene, so the perception is poor. There can be other cars in the environment that you don’t see,” said Kshitiz Bansal, a computer science and engineering PhD student at UC San Diego. “So if a single radar is causing this blindness, a multi-radar setup will improve perception by increasing the number of points that are reflected back.”
The team found that spacing two radar sensors 1.5 m apart was the optimal arrangement. "By having two radars at different vantage points with an overlapping field of view, we create a region of high resolution, with a high probability of detecting the objects that are present," Bansal noted.
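The article does not spell out the geometry or calibration the team uses, but the basic idea can be illustrated with a minimal sketch. It assumes each radar reports 2D (x, y) returns in its own sensor frame and that the two sensors differ only by a lateral offset of half the 1.5 m baseline; the function names and the pure-translation simplification are illustrative, not the team's actual implementation.

```python
import numpy as np

# Illustrative sketch: merge returns from two hood-mounted radars into one
# vehicle-centred point cloud. Each radar reports (x, y) points in its own
# sensor frame; the sensors sit 1.5 m apart, so each frame is assumed to be
# offset laterally from the vehicle centre line.

BASELINE_M = 1.5  # spacing between the two radars (an average car's width)

def to_vehicle_frame(points_xy: np.ndarray, lateral_offset_m: float) -> np.ndarray:
    """Shift a radar's (x, y) returns into the shared vehicle frame.

    points_xy        -- (N, 2) array, x forward and y to the left, in metres
    lateral_offset_m -- the sensor's y-offset from the vehicle centre line
    """
    shifted = points_xy.copy()
    shifted[:, 1] += lateral_offset_m
    return shifted

def merge_radars(left_xy: np.ndarray, right_xy: np.ndarray) -> np.ndarray:
    """Stack both radars' returns in one frame: the overlapping field of
    view ends up with roughly twice the point density of either sensor."""
    left = to_vehicle_frame(left_xy, +BASELINE_M / 2)
    right = to_vehicle_frame(right_xy, -BASELINE_M / 2)
    return np.vstack([left, right])
```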
Its developers also say the system overcomes another problem with radar – noise. It is common to see random points, which do not belong to any object, appear in radar images. The sensor can also pick up what are called echo signals, which are reflections of radio waves that do not come directly from the objects being detected.
More radars mean more noise, Bharadia pointed out. To address this issue, the team developed new algorithms that fuse the information from the two radar sensors and produce an image free of noise.
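The team's fusion algorithms are not described in detail in the article. One plausible way to see why two overlapping views help suppress noise is a cross-consistency check: keep only the returns corroborated by a nearby return from the other radar, since random noise and multipath echoes rarely show up at the same physical location in both views. The sketch below assumes both clouds are already in a shared frame; the tolerance and function names are illustrative assumptions.

```python
import numpy as np

def cross_consistent(points_a: np.ndarray, points_b: np.ndarray,
                     tol_m: float = 0.5) -> np.ndarray:
    """Keep only the points in `points_a` that have a neighbour in `points_b`
    within `tol_m` metres (both clouds in the vehicle frame). Noise and echo
    artefacts tend to appear in only one radar's view, so they rarely pass."""
    # Pairwise distances between the two clouds (fine for a few hundred points).
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    keep = d.min(axis=1) <= tol_m
    return points_a[keep]

def fuse_and_denoise(left_xy: np.ndarray, right_xy: np.ndarray) -> np.ndarray:
    """Fuse the two clouds, retaining only mutually supported returns."""
    return np.vstack([
        cross_consistent(left_xy, right_xy),
        cross_consistent(right_xy, left_xy),
    ])
```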
They also had to construct the first data set combining data from two radars. “There are currently no publicly available data sets with this kind of data, from multiple radars with an overlapping field of view,” Bharadia commented. “We collected our own data and built our own data set for training our algorithms and for testing.”
The data set consists of 54,000 radar frames of driving scenes during the day and night in live traffic, and in simulated fog conditions. Future work will include collecting more data in the rain, but the developers will first need to build better protective covers for the hardware.
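The team has not published the layout of this data set, so the record below is only a rough, hypothetical picture of what a paired-radar training sample might contain; every field name is an assumption rather than the team's actual schema.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class RadarFramePair:
    """One synchronised training sample: the same instant seen by both radars.
    Hypothetical layout; the team's actual dataset format is not public."""
    timestamp_s: float
    left_points: np.ndarray   # (N, 2) returns from the left radar
    right_points: np.ndarray  # (M, 2) returns from the right radar
    condition: str            # e.g. "day", "night", "fog"
```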
The team is now working with Toyota to fuse the new radar technology with cameras, an approach its developers say could potentially replace lidar entirely. "Radar alone cannot tell us the color, make or model of a car. These features are also important for improving perception in self-driving cars," Bharadia concluded.