“Today, only companies have software like the type of simulation environments and capabilities of VISTA 2.0, and this software is proprietary,” explained MIT Professor and CSAIL director Daniela Rus, senior author on a paper about the research. “With this release, the research community will have access to a powerful new tool for accelerating the research and development of adaptive robust control for autonomous driving.”
Building on the team’s previous model, VISTA, MIT claims VISTA 2.0 is fundamentally different from existing AV simulators because it is data-driven, meaning it was built and photorealistically rendered from real-world data, which enables direct transfer to reality. While the initial iteration supported only single-car lane following with one camera sensor, achieving high-fidelity, data-driven simulation required rethinking the foundations of how different sensors and behavioral interactions can be synthesized.
Enter VISTA 2.0: a data-driven system that can simulate complex sensor types and massively interactive scenarios and intersections at scale. Using much less data than previous models required, the team was able to train autonomous vehicles that could be substantially more robust than those trained on large amounts of real-world data.
“This is a massive jump in capabilities of data-driven simulation for autonomous vehicles, as well as the increase of scale and ability to handle greater driving complexity,” said Alexander Amini, CSAIL PhD student and co-lead author on two new papers, together with fellow PhD student Tsun-Hsuan Wang. “VISTA 2.0 demonstrates the ability to simulate sensor data far beyond 2D RGB cameras, but also extremely high dimensional 3D lidars with millions of points, irregularly timed event-based cameras, and even interactive and dynamic scenarios with other vehicles as well.”
The team was able to scale up the complexity of interactive driving tasks such as overtaking, following, and negotiating, including multi-agent scenarios, in highly photorealistic environments.
Recently, there has been a shift away from more classic, human-designed simulation environments toward those built up from real-world data. The latter offer immense photorealism, but the former can easily model virtual cameras and lidars. With this paradigm shift, a key question has emerged: can the richness and complexity of all of the sensors that autonomous vehicles need, such as lidar and event-based cameras, which are sparser, be accurately synthesized?
Lidar sensor data is much harder to interpret in a data-driven world: the simulator effectively has to generate brand-new 3D point clouds with millions of points from only sparse views of the world. To synthesize 3D lidar point clouds, the team took the data the car collected, projected the lidar returns into 3D space, and then let a new virtual vehicle drive around locally relative to where the original vehicle was. Finally, with the help of neural networks, they projected all of that sensory information back into the frame of view of the new virtual vehicle.
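As a rough illustration of that reprojection step, here is a minimal Python sketch, not the team’s actual code: the recorded point cloud is rigidly transformed into the frame of a virtual vehicle at a new, hypothetical pose, while the neural-network view synthesis the researchers describe is omitted.

```python
# Minimal sketch of the reprojection idea described above, not MIT's implementation.
# Assumes: `points` is an (N, 3) array of lidar returns in the recorded car's frame,
# and the virtual car's pose is given as a yaw angle plus a translation offset
# relative to that frame. Neural-network hole-filling is left out.
import numpy as np

def reproject_lidar(points: np.ndarray, yaw: float, offset: np.ndarray) -> np.ndarray:
    """Express recorded lidar points in the frame of a displaced virtual vehicle."""
    # Rotation that maps points from the recorded frame into the virtual vehicle's heading.
    c, s = np.cos(-yaw), np.sin(-yaw)
    rotation = np.array([[c, -s, 0.0],
                         [s,  c, 0.0],
                         [0.0, 0.0, 1.0]])
    # Shift points so the virtual vehicle sits at the origin, then rotate into its frame.
    return (points - offset) @ rotation.T

# Example: view the same scene from a car 2 m to the left, turned 5 degrees.
recorded = np.random.rand(100_000, 3) * 100.0            # placeholder point cloud
virtual_view = reproject_lidar(recorded, np.deg2rad(5.0), np.array([0.0, 2.0, 0.0]))
```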
Together with the simulation of event-based cameras, which operate at rates of thousands of events per second or more, the simulator was capable not only of simulating this multimodal information, but of doing so in real time. That makes it possible to train neural nets offline and also to test them online on the car, in augmented-reality setups, for safe evaluation. “The question of if multisensor simulation at this scale of complexity and photorealism was possible in the realm of data-driven simulation was very much an open question,” continued Amini.
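Event-based cameras are commonly emulated from successive rendered frames by thresholding per-pixel changes in log intensity. The short Python sketch below illustrates that general idea under assumed frame inputs; it is not the VISTA 2.0 implementation.

```python
# Hedged sketch of a standard way to emulate an event camera from two rendered
# grayscale frames (log-intensity thresholding), not the VISTA 2.0 code.
import numpy as np

def frames_to_events(prev_frame: np.ndarray, next_frame: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Return (row, col, polarity) events where log intensity changed by at least `threshold`."""
    eps = 1e-6  # avoid log(0)
    delta = np.log(next_frame + eps) - np.log(prev_frame + eps)
    rows, cols = np.nonzero(np.abs(delta) >= threshold)
    polarity = np.sign(delta[rows, cols]).astype(np.int64)  # +1 brighter, -1 darker
    return np.stack([rows, cols, polarity], axis=1)

# Example with two placeholder grayscale frames in [0, 1].
f0, f1 = np.random.rand(480, 640), np.random.rand(480, 640)
events = frames_to_events(f0, f1)
```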
VISTA 2.0 allows the vehicle to move around, use different types of controllers, simulate different types of events, create interactive scenarios, and simply drop in brand-new vehicles that weren’t even in the original data. It can test lane following, lane turning, car following, and riskier scenarios such as static and dynamic overtaking (seeing obstacles and maneuvering around them to avoid a collision). With multiple agents, both real and simulated agents interact, and new agents can be dropped into the scene and controlled in any way, according to the team at MIT.
Taking their full-scale car out into the ‘wild’ – Devens, Massachusetts to be precise – the team saw immediate transferability of results, with both failures and successes. “The central algorithm of this research is how we can take a data set and build a completely synthetic world for learning and autonomy,” concluded Amini. “It’s a platform that I believe one day could extend in many different axes across robotics. Not just autonomous driving, but many areas that rely on vision and complex behaviors. We’re excited to release VISTA 2.0 to help enable the community to collect their own data sets and convert them into virtual worlds where they can directly simulate their own virtual autonomous vehicles, drive around these virtual terrains, train autonomous vehicles in these worlds, and then can directly transfer them to full-sized, real self-driving cars.”
Amini and Wang wrote the paper alongside Zhijian Liu, MIT CSAIL PhD student; Igor Gilitschenski, assistant professor in computer science at the University of Toronto; Wilko Schwarting, AI research scientist and MIT CSAIL PhD ’20; Song Han, associate professor at MIT’s Department of Electrical Engineering and Computer Science; Sertac Karaman, associate professor of aeronautics and astronautics at MIT; and Daniela Rus, MIT professor and CSAIL director. The researchers presented the work at the IEEE International Conference on Robotics and Automation (ICRA) in Philadelphia, Pennsylvania.
This work was supported by the National Science Foundation and Toyota Research Institute. The team acknowledges the support of Nvidia with the donation of the Drive AGX Pegasus.