Nvidia has released details of its Drive Concierge, a self-parking system that the company claims redefines not only the in-vehicle experience but also the chore of parking. This, it says, is thanks to advanced intelligent summon capabilities that enable seamless drop-off and pick-up operations.
According to the company, Drive Concierge combines a variety of sensors, high-performance AI compute and flexible, modular software to handle a range of driving conditions, from picking up and dropping off drivers to finding a parking spot in which to wait in between.
Nvidia highlights that a reliable parking solution begins with a diverse hardware setup that has redundancy for safe operation. At its base is the company's Hyperion 8 platform, which pairs its Orin SoC with a comprehensive sensor architecture. The SoC achieves 254 trillion operations per second (TOPS) and is designed to handle the large number of applications and deep neural networks that run simultaneously in autonomous vehicles, while meeting systematic safety standards such as ISO 26262 ASIL-D.
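For a feel of what that budget means in practice, the back-of-envelope sketch below checks whether a set of concurrently running DNNs fits within the SoC's compute envelope. The network names and per-inference costs are entirely invented for illustration; only the 254 TOPS figure comes from the article.

```python
# Illustrative only: the DNN names and per-inference costs below are
# invented assumptions, not Nvidia's actual workloads.
from dataclasses import dataclass

@dataclass
class DnnWorkload:
    name: str
    tera_ops_per_inference: float  # compute cost of one forward pass
    hz: float                      # how often the network must run

ORIN_TOPS = 254.0  # trillion operations per second (per the article)

workloads = [
    DnnWorkload("surround_camera_perception", 2.0, 30.0),
    DnnWorkload("evidence_grid_map", 1.5, 30.0),
    DnnWorkload("parknet", 1.0, 10.0),
]

# Sustained demand in TOPS is cost-per-inference times inference rate.
demand = sum(w.tera_ops_per_inference * w.hz for w in workloads)
print(f"demand: {demand:.1f} of {ORIN_TOPS} TOPS "
      f"({100 * demand / ORIN_TOPS:.0f}% utilised)")
```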
The sensor suite combines high-fidelity surround cameras, radars, ultrasonics and front lidar to provide 360° scene interpretation around the vehicle. Nvidia says that this diversity of sensor types delivers the redundancy required for safe and robust parking capabilities, especially in complex urban scenarios.
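As a toy illustration of the redundancy argument, the following sketch checks that every bearing around a vehicle is observed by at least two sensor modalities. The fields of view are invented placeholders, not Hyperion 8's actual geometry.

```python
# Hypothetical sensor layout; the arcs and counts below do not reflect
# the real Hyperion 8 configuration.
SENSORS = {
    "camera":     [(0, 360)],               # surround cameras, full ring
    "radar":      [(0, 120), (240, 360)],   # front and rear coverage
    "ultrasonic": [(0, 360)],               # short-range ring
    "lidar":      [(300, 360), (0, 60)],    # front lidar only
}

def modalities_at(bearing_deg: float) -> set[str]:
    """Return the sensor modalities that observe a given bearing."""
    seen = set()
    for modality, arcs in SENSORS.items():
        for start, end in arcs:
            if start <= bearing_deg % 360 < end:
                seen.add(modality)
    return seen

# Redundancy check: every 1-degree sector should be covered by
# at least two independent modalities.
redundant = all(len(modalities_at(b)) >= 2 for b in range(360))
print("fully redundant 360° coverage:", redundant)
```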
The Concierge software runs on top of Orin and the Drive SDK, which comprises the DriveWorks middleware compute-graph framework and the Drive OS safe operating system, so a wide range of deep neural networks can run simultaneously with low runtime latency.
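DriveWorks' actual API is not shown here, but the dataflow style it represents can be sketched generically: processing stages form a directed acyclic graph, and any stages with no path between them are free to run in parallel. The node names below mirror the perception and planning modules described later in this article.

```python
# Generic compute-graph sketch; this is not the DriveWorks API, just an
# illustration of the dataflow style such middleware enables.
from graphlib import TopologicalSorter

# Nodes are processing stages; each maps to the set of stages it depends on.
graph = {
    "camera_capture":    set(),
    "ultrasonic_read":   set(),
    "radar_read":        set(),
    "evidence_grid":     {"camera_capture", "ultrasonic_read"},
    "parknet":           {"camera_capture"},
    "sparse_perception": {"camera_capture", "radar_read"},
    "world_model":       {"evidence_grid", "sparse_perception"},
    "planner":           {"world_model", "parknet"},
}

# A topological order gives one valid single-threaded schedule; stages with
# no dependency path between them could be dispatched in parallel instead.
order = list(TopologicalSorter(graph).static_order())
print(order)
```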
Nvidia notes that traditionally, automated parking functions have used high-level features from ultrasonic sensors to build a sparse representation of the environment around the vehicle. However, this approach struggles in environments containing dynamic actors, such as pedestrians, or obstacles close to the vehicle. To address this, Concierge fuses the data from ultrasonic sensors and fish-eye cameras. Its Evidence Grid Map DNN then uses this data to generate a real-time dense grid map of the vehicle's immediate surroundings.
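Evidence (occupancy) grids are a long-established representation in robotics. While Concierge learns the fusion with a DNN, the classical log-odds update below gives a feel for what such a grid encodes; the cell size and sensor-model constants are assumptions, and the hand-written ray update stands in for the learned fusion.

```python
import numpy as np

# Minimal evidence-grid sketch: a log-odds occupancy grid updated from
# range measurements. The constants below are illustrative assumptions.
GRID = np.zeros((200, 200))   # log-odds, 0 = unknown (p = 0.5)
CELL = 0.05                   # 5 cm cells -> a 10 m x 10 m local map

L_OCC, L_FREE = 0.85, -0.4    # log-odds increments (assumed sensor model)

def update_ray(grid, x0, y0, x1, y1):
    """Mark cells along the ray as free and the endpoint as occupied."""
    n = int(max(abs(x1 - x0), abs(y1 - y0)) / CELL)
    for i in range(n):
        t = i / max(n, 1)
        cx = int((x0 + t * (x1 - x0)) / CELL) + 100
        cy = int((y0 + t * (y1 - y0)) / CELL) + 100
        grid[cy, cx] += L_FREE        # space the echo passed through
    grid[int(y1 / CELL) + 100, int(x1 / CELL) + 100] += L_OCC

update_ray(GRID, 0.0, 0.0, 1.5, 2.0)  # one ultrasonic echo at (1.5, 2.0)
prob = 1 - 1 / (1 + np.exp(GRID))     # log-odds back to probability
print(prob.max())                     # endpoint cell is now likely occupied
```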
Dense fusion estimates whether the space surrounding the vehicle is free, and the likelihood that any occupied space contains a dynamic or a stationary obstacle. The ParkNet DNN then fuses images from multiple cameras and provides a list of candidate parking spaces to choose from. Finally, parking-spot perception fuses data from multiple cameras to associate parking spaces with parking signs and determine which space to use.
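ParkNet's real output format is not public, so the sketch below invents a minimal structure for its candidate spots and the associated signs, simply to show the kind of selection logic the final stage implies.

```python
# Hypothetical data structures: ParkNet's actual outputs are not public,
# so every field below is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class ParkingSpot:
    spot_id: int
    confidence: float      # detector confidence that this is a real spot
    sign_id: int | None    # nearest associated parking sign, if any

@dataclass
class ParkingSign:
    sign_id: int
    allows_parking: bool   # e.g. False for "no parking" or permit-only

def choose_spot(spots, signs, min_conf=0.8):
    """Pick the highest-confidence spot whose sign permits parking."""
    rules = {s.sign_id: s.allows_parking for s in signs}
    legal = [
        s for s in spots
        if s.confidence >= min_conf
        and (s.sign_id is None or rules.get(s.sign_id, False))
    ]
    return max(legal, key=lambda s: s.confidence, default=None)

spots = [ParkingSpot(1, 0.95, sign_id=7), ParkingSpot(2, 0.90, sign_id=None)]
signs = [ParkingSign(7, allows_parking=False)]   # spot 1 is restricted
print(choose_spot(spots, signs))                 # -> spot 2 is selected
```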
In addition to dense perception, the parking function includes sparse perception, which detects far-range objects using both camera and radar data. The dense- and sparse-perception modules use redundant sensor data, complementing each other to build an accurate 4D world model for the downstream planning stage. Lastly, the vehicle's path and trajectory planner uses this world model to plan parking maneuvers, avoiding collisions with both nearby objects and high-speed far-range obstacles.
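To make the near/far split concrete, here is a hypothetical collision gate a planner might apply to a candidate trajectory: near-field points are checked against the dense grid, while far-range tracks are predicted forward under a constant-velocity assumption. All shapes, thresholds and numbers are invented.

```python
import numpy as np

# Sketch of a planner's collision gate, combining the two sources the
# article describes: a dense near-field grid and sparse far-range tracks.
# Grid shape, thresholds and the motion model are assumptions.

def trajectory_is_safe(traj, grid, cell, tracks, margin=0.5, p_occ=0.65):
    """traj: list of (t, x, y) ego waypoints; grid: occupancy probabilities
    centred on the ego; tracks: (x, y, vx, vy) far-range camera+radar objects."""
    h, w = grid.shape
    for t, x, y in traj:
        # Near field: reject if the trajectory enters an occupied cell.
        cx, cy = int(x / cell) + w // 2, int(y / cell) + h // 2
        if 0 <= cx < w and 0 <= cy < h and grid[cy, cx] > p_occ:
            return False
        # Far field: predict each track forward and keep a safety margin.
        for ox, oy, vx, vy in tracks:
            px, py = ox + vx * t, oy + vy * t
            if np.hypot(px - x, py - y) < margin:
                return False
    return True

grid = np.full((200, 200), 0.1)              # mostly free near field
tracks = [(20.0, 5.0, -10.0, 0.0)]           # car passing 5 m to the side
traj = [(t, 1.0 * t, 0.0) for t in np.arange(0, 3, 0.5)]
print(trajectory_is_safe(traj, grid, 0.05, tracks))   # -> True, stays clear
```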