The latest edition of Nvidia’s Drive Labs series takes a special look at how Nvidia Drive AV Software combines the essential building blocks of perception, localization and planning to drive autonomously on public roads around the company’s headquarters in Santa Clara, California.
With a safety-certified driver at the wheel and a co-pilot monitoring the system, the vehicle handles highway interchanges, lane changes and other maneuvers, testing the various software components in the real world.
Perception
At the core of Nvidia’s perception building blocks are deep neural networks (DNNs), mathematical models inspired by the human brain that learn from experience. The company uses its DriveNet DNN to enable a data-driven understanding of obstacles (for example, cars versus pedestrians) as well as to compute the distance to them. LaneNet is used to detect lane information, while an ensemble of DNNs perceives drivable paths.
WaitNet, LightNet and SignNet detect and classify wait conditions — that is, intersections, traffic lights and traffic signs, respectively. Nvidia’s ClearSightNet DNN also runs in the background and assesses whether the cameras see clearly, are occluded or are blocked.
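Conceptually, the outputs of these networks can be gathered into a single per-frame perception result. The sketch below is only illustrative: the class, field and method names are hypothetical and do not reflect Nvidia’s actual Drive AV software interfaces.

```python
# Hypothetical sketch of how per-frame outputs from the DNNs named above
# might be collected into one structure. Names are illustrative only and
# are not Nvidia's actual Drive AV APIs.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class PerceptionFrame:
    """Aggregated perception results for one camera frame."""
    obstacles: List[Dict] = field(default_factory=list)       # DriveNet: class + distance
    lanes: List[Dict] = field(default_factory=list)           # LaneNet: lane geometry
    drivable_paths: List[Dict] = field(default_factory=list)  # path-perception ensemble
    wait_conditions: List[Dict] = field(default_factory=list) # WaitNet/LightNet/SignNet
    camera_visibility: float = 1.0                             # ClearSightNet: 0 = blocked


def run_perception(image, nets: Dict) -> PerceptionFrame:
    """Run each perception DNN on the same camera image and collect results."""
    return PerceptionFrame(
        obstacles=nets["drivenet"].infer(image),
        lanes=nets["lanenet"].infer(image),
        drivable_paths=nets["path_ensemble"].infer(image),
        wait_conditions=(
            nets["waitnet"].infer(image)     # intersections
            + nets["lightnet"].infer(image)  # traffic lights
            + nets["signnet"].infer(image)   # traffic signs
        ),
        camera_visibility=nets["clearsightnet"].infer(image),
    )
```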
For certain functions, such as object tracking, traditional computer vision techniques are also used, bringing important efficiency benefits. With multi-camera surround perception, DNN-based and traditional computer vision capabilities together provide simultaneous 360-degree coverage around the car.
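As a rough illustration of why classical techniques help with tracking, the sketch below matches new detections to existing object tracks with a simple intersection-over-union check. This is a generic computer vision approach, assumed here purely for illustration; it is not Nvidia’s implementation.

```python
# Minimal sketch of lightweight classical tracking: associate new detections
# with existing tracks by IoU overlap instead of re-identifying every object
# from scratch each frame. Generic technique, not Nvidia's implementation.
from typing import Dict, List, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)


def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0


def associate(tracks: Dict[int, Box], detections: List[Box],
              threshold: float = 0.3) -> Dict[int, Box]:
    """Greedily match detections to existing tracks by IoU.

    Unmatched detections start new tracks; unmatched tracks are dropped.
    """
    next_id = max(tracks, default=-1) + 1
    updated: Dict[int, Box] = {}
    unmatched = list(detections)
    for track_id, box in tracks.items():
        if not unmatched:
            break
        best = max(unmatched, key=lambda d: iou(box, d))
        if iou(box, best) >= threshold:
            updated[track_id] = best
            unmatched.remove(best)
    for det in unmatched:  # leftover detections become new tracks
        updated[next_id] = det
        next_id += 1
    return updated
```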
Planning
Using the inputs provided by perception and localization, the planning and control layer enables the self-driving car to drive itself. Planning software consumes perception and localization results to determine the trajectory the car must follow to complete a particular maneuver.
For example, for the autonomous lane changes demonstrated on the drive, planning software first performs a lane change safety check, using surround camera and radar perception to ensure the intended maneuver can be executed.
Next, it computes the longitudinal speed profile as well as the lateral path plan needed to move from the center of the current lane to the center of the target lane. Control software then issues the acceleration/deceleration and left/right steering commands to execute the lane change plan.
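The sketch below walks through those steps in simplified form: a gap check in the target lane, followed by a smooth lateral path from the current lane center to the target lane center. The function names, the 30-meter gap and the quintic blend profile are illustrative assumptions, not details of Nvidia’s planner.

```python
# Simplified, illustrative lane-change planning: safety check, then a lateral
# path plan from lane center to lane center. All thresholds and profiles are
# assumptions for the sketch, not Nvidia's actual planner.
from typing import List


def lane_change_is_safe(target_lane_tracks: List[dict],
                        min_gap_m: float = 30.0) -> bool:
    """Safety check: require a clear longitudinal gap in the target lane,
    based on surround camera/radar tracks (distances in meters)."""
    return all(abs(t["longitudinal_distance_m"]) > min_gap_m
               for t in target_lane_tracks)


def lateral_offset(progress: float, lane_width_m: float = 3.7) -> float:
    """Lateral offset from the current lane center at progress in [0, 1].

    A quintic 'smoothstep' blend keeps lateral velocity and acceleration
    at zero at both ends of the maneuver.
    """
    s = min(max(progress, 0.0), 1.0)
    blend = 6 * s**5 - 15 * s**4 + 10 * s**3
    return blend * lane_width_m


def plan_lane_change(target_lane_tracks: List[dict],
                     steps: int = 50) -> List[float]:
    """Return the lateral path plan (one offset per step), or an empty plan
    if the safety check fails and the maneuver is aborted."""
    if not lane_change_is_safe(target_lane_tracks):
        return []
    return [lateral_offset(i / (steps - 1)) for i in range(steps)]
```

In this simplified view, the resulting lateral offsets, combined with the longitudinal speed profile, form the plan that control software converts into acceleration/deceleration and steering commands.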
The engine that runs these components is the Nvidia Drive AGX platform. Drive AGX makes it possible to simultaneously run feature-rich, 360-degree surround perception, localization, and planning and control software in real time.
Together with localization technologies, these elements enable Nvidia’s vehicles to drive autonomously.
For more on Nvidia’s work in self-driving vehicles, attend the presentation by Kevin Williams, Nvidia’s enterprise automotive manager, at the Autonomous Vehicle Test & Development Symposium in Novi, MI, in October. Full details and rates here: https://www.autonomousvehiclesymposium.com/detroit/