At its recent global AI conference, US technology corporation Nvidia revealed several new developments directly relating to its automotive and autonomous driving solutions.
The most notable announcements were the introduction of Drive Map, described as a multimodal mapping platform, and Drive Hyperion 9, the company’s next-generation platform for software-defined autonomous vehicles.
According to Nvidia, Drive Map combines the accuracy of the maps developed by DeepMap (a startup that Nvidia acquired in 2021) with the scale and flexibility of crowdsourced AI-based mapping. By combining three localization layers built from camera, lidar and radar data, Drive Map is claimed to provide the redundancy and flexibility needed for advanced autonomous operations.
Nvidia states that the new system will provide survey-level ground-truth mapping coverage of 500,000 kilometers of roadway across North America, Europe and Asia by 2024. The map will also be continuously refreshed with new data from consumers' vehicles.
Detailing the various localization layers, the company said that the camera layer consists of map attributes such as lane dividers, road markings, road boundaries, traffic lights, signs and poles. The radar localization layer is an aggregate point cloud of radar returns. This is particularly useful in poor lighting conditions, which are challenging for cameras, and in poor weather conditions, which are challenging for cameras and lidars.
Finally, the lidar voxel layer provides a precise and reliable representation of the physical environment. It builds a 3D model of the world at 5cm resolution, an accuracy that Nvidia says is impossible to achieve with camera and radar. Once localized to the map, the vehicle's AI can use the detailed semantic information the map provides to plan ahead and execute driving decisions safely.
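Nvidia has not published Drive Map's internal data model, so the following Python sketch is purely illustrative: it assumes hypothetical types for each of the three layers and a simple weighted fusion of per-layer pose estimates, which is one way the described redundancy could work in practice. None of the names here reflect Nvidia's actual software.

```python
from dataclasses import dataclass, field

import numpy as np


# Hypothetical, heavily simplified stand-ins for the three Drive Map
# localization layers described above. Field names are illustrative.
@dataclass
class CameraLayer:
    # Semantic attributes: lane dividers, road markings, signs, poles...
    features: dict


@dataclass
class RadarLayer:
    # Aggregate point cloud of radar returns, shape (N, 3).
    points: np.ndarray


@dataclass
class LidarVoxelLayer:
    # Occupancy set over a 5cm voxel grid of the environment.
    resolution_m: float = 0.05
    occupied: set = field(default_factory=set)  # (i, j, k) voxel indices


def fuse_poses(estimates: dict) -> np.ndarray:
    """Weighted average of per-layer (x, y, heading) pose estimates.

    The redundancy claim amounts to this: a degraded layer is
    down-weighted rather than causing localization to fail outright.
    (A real system would use a proper filter and circular statistics
    for heading; this is only a sketch.)
    """
    poses = np.array([pose for pose, _ in estimates.values()])
    weights = np.array([w for _, w in estimates.values()])
    return (poses * weights[:, None]).sum(axis=0) / weights.sum()


# Example: the camera layer is unreliable at night, so the radar and
# lidar layers dominate the fused estimate.
fused = fuse_poses({
    "camera": (np.array([10.2, 4.1, 0.31]), 0.2),
    "radar":  (np.array([10.0, 4.0, 0.30]), 0.9),
    "lidar":  (np.array([10.1, 4.0, 0.30]), 0.9),
})
print(fused)  # stays close to the radar/lidar estimates
```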
Drive Map is built with two map engines: a ground truth survey map engine and a crowdsourced map engine. Nvidia claims that this approach combines the best of both worlds, achieving centimeter-level accuracy with dedicated survey vehicles, as well as the freshness and scale that can only be achieved with millions of passenger vehicles continuously updating and expanding the map.
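As a rough illustration of how a dual-engine design might reconcile fleet observations with survey ground truth, the sketch below keeps the survey baseline authoritative and promotes a crowdsourced change only once several independent vehicles report the same thing. The tile layout, the voting threshold and every name in it are assumptions made for this example, not Nvidia's published design.

```python
from collections import defaultdict

# Survey-grade baseline: centimeter-accurate, but only as fresh as the
# last pass of a dedicated survey vehicle. Keys are invented here.
survey_map = {("tile_42", "speed_limit"): 50}

# Hypothetical rule: accept a crowdsourced change only when this many
# independent vehicles agree on the same new value.
MIN_INDEPENDENT_REPORTS = 3

crowd_reports = defaultdict(list)  # (tile, attribute) -> [(vehicle, value)]


def report(tile: str, attribute: str, value, vehicle_id: str) -> None:
    """Record one vehicle's observation and promote it on consensus."""
    key = (tile, attribute)
    if vehicle_id not in [v for v, _ in crowd_reports[key]]:
        crowd_reports[key].append((vehicle_id, value))
    values = [val for _, val in crowd_reports[key]]
    consensus = max(set(values), key=values.count)
    if values.count(consensus) >= MIN_INDEPENDENT_REPORTS:
        survey_map[key] = consensus  # fleet freshness wins


# Three vehicles independently see a changed speed limit; the map
# updates without waiting for a survey vehicle to revisit the road.
for vid in ("veh_a", "veh_b", "veh_c"):
    report("tile_42", "speed_limit", 30, vid)
print(survey_map[("tile_42", "speed_limit")])  # -> 30
```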
The next generation of Nvidia's Hyperion platform, Hyperion 9, is a programmable architecture slated for 2026 production vehicles. It is built on multiple Drive Atlan computers to provide intelligent driving and in-cabin functionality.
The platform includes the computer architecture, the sensor set and the full Drive Chauffeur and Drive Concierge applications. It is designed to be open and modular, so customers can select only what they need. Current-generation systems scale from NCAP to Level 3 driving and Level 4 parking, with advanced AI cockpit capabilities.
Thanks to the Atlan SoC, Nvidia says the platform will deliver double the performance of its current Orin-based systems at the same power consumption. Combining Nvidia's GPU architecture, Arm CPU cores and dedicated deep learning and computer vision accelerators, it will run multiple deep neural networks concurrently, with spare capacity for future developments to be added.
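One way to picture that spare capacity is as a reserved slice of the compute budget that launch-day networks are not allowed to consume. The figures and names below are invented for illustration; Nvidia has not published Atlan's final performance envelope, and this is not its API.

```python
# Invented figures: an assumed total budget for a multi-Atlan platform
# and the fraction held back for features delivered after launch.
PLATFORM_BUDGET_TOPS = 2000.0
RESERVED_HEADROOM = 0.30

# Hypothetical networks deployed at the start of production.
deployed_tops = {
    "perception": 600.0,
    "prediction": 300.0,
    "occupant_monitoring": 150.0,
}


def can_deploy(cost_tops: float) -> bool:
    """Admit a new network only if it fits under the ceiling that
    preserves the reserved headroom for future updates."""
    ceiling = PLATFORM_BUDGET_TOPS * (1 - RESERVED_HEADROOM)
    return sum(deployed_tops.values()) + cost_tops <= ceiling


print(can_deploy(200.0))  # True: 1,050 + 200 fits under the 1,400 ceiling
print(can_deploy(500.0))  # False: would eat into the reserved headroom
```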
The platform will use this added compute capacity to harness a wider range of sensors than current systems. Its upgraded sensor suite will include surround imaging radar, enhanced cameras with higher frame rates, two additional side lidars and improved undercarriage sensing through better camera and ultrasonic placement.
In total, the Hyperion 9 architecture includes 14 cameras, nine radars, three lidars and 20 ultrasonics for automated and autonomous driving, as well as three cameras and one radar for interior occupant sensing.
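Those published counts can be captured in a simple configuration record, as in the sketch below. The class and field names are illustrative; Nvidia's actual platform configuration format is not public.

```python
from dataclasses import dataclass


# The sensor counts Nvidia has announced for Hyperion 9, held in a
# hypothetical config record (the structure itself is illustrative).
@dataclass(frozen=True)
class SensorSuite:
    cameras: int
    radars: int
    lidars: int
    ultrasonics: int


HYPERION_9 = {
    "driving":  SensorSuite(cameras=14, radars=9, lidars=3, ultrasonics=20),
    "interior": SensorSuite(cameras=3, radars=1, lidars=0, ultrasonics=0),
}

total = sum(
    s.cameras + s.radars + s.lidars + s.ultrasonics
    for s in HYPERION_9.values()
)
print(total)  # 50 sensors across exterior driving and interior sensing
```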