Visual terrain-relative navigation (VTRN), the process by which autonomous systems locate themselves by comparing nearby terrain to high-resolution satellite images, was first developed in the 1960s.
The problem, according to researchers at NASA’s Jet Propulsion Laboratory (JPL) and Caltech, is that the current generation of VTRN requires the terrain it is viewing to closely match the images in its database. Anything that alters or obscures the terrain, such as snow cover or fallen leaves, prevents the images from matching and confounds the system. So, unless there is a database of landscape images under every conceivable condition, VTRN systems are easily confused.
To overcome this challenge, a team from the lab of Soon-Jo Chung, Bren Professor of Aerospace and Control and Dynamical Systems and research scientist at JPL, which Caltech manages for NASA, turned to deep learning and artificial intelligence (AI) to remove seasonal content that hinders current VTRN systems.
“The rule of thumb is that both images, the one from the satellite and the one from the autonomous vehicle, have to have identical content for current techniques to work. The differences that they can handle are about what can be accomplished with an Instagram filter that changes an image’s hues,” explained Anthony Fragoso, lecturer and staff scientist and lead author of a recent Science Robotics paper on the research. “In real systems, however, things change drastically based on season because the images no longer contain the same objects and cannot be directly compared.”
The process, developed by Chung and Fragoso in collaboration with graduate student Connor Lee and undergraduate student Austin McCoy, uses what is known as ‘self-supervised learning’. While most computer-vision strategies rely on human annotators who carefully curate large data sets to teach an algorithm how to recognize what it is seeing, this one instead lets the algorithm teach itself: the AI looks for patterns in images by teasing out details and features that humans would likely miss.
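To make the idea concrete, the sketch below shows one common form of self-supervised training, assuming access to co-registered image pairs of the same terrain captured in different seasons; the pairing itself supplies the training signal, with no human annotation needed. The network, loss, and stand-in data here are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

# A small convolutional "transform" meant to map a seasonal image to a
# season-invariant representation. Layer sizes are arbitrary for illustration.
class SeasonalTransform(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = SeasonalTransform()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stand-in data: aligned summer/winter patches of the same terrain
# (hypothetical; a real system would load co-registered aerial imagery).
summer = torch.rand(8, 1, 64, 64)
winter = torch.rand(8, 1, 64, 64)

for step in range(100):
    # Self-supervision: both seasonal views of the same place should
    # transform to the same representation, so their disagreement is the loss.
    loss = nn.functional.mse_loss(model(summer), model(winter))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

One caveat worth noting: a network could trivially minimize this loss by outputting a constant image, so practical self-supervised objectives also include a term that rewards preserving terrain detail, which is what keeps the transformed images useful for matching.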
The team found that supplementing the current generation of VTRN with the new system yields more accurate localization. In one experiment, the researchers attempted to localize images of summer foliage against winter leaf-off imagery using a correlation-based VTRN technique, and found that performance was no better than a coin flip, with 50% of attempts resulting in navigation failures. In contrast, inserting the new algorithm into the VTRN pipeline worked far better: 92% of attempts were correctly matched, and the remaining 8% could be identified as problematic in advance and then easily managed using other established navigation techniques.
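For context, correlation-based VTRN of the kind used in this experiment localizes an onboard image by sliding it across the satellite map and scoring each offset; the highest score marks the estimated position, and a weak or ambiguous peak is exactly the coin-flip failure described above. Below is a minimal sketch using plain normalized cross-correlation; the function and data are illustrative assumptions, not the paper's exact pipeline. In the full system, both images would first pass through the learned seasonal transform before this matching step.

```python
import numpy as np

def normalized_cross_correlation(reference, patch):
    """Slide `patch` over `reference`, returning a map of match scores in [-1, 1]."""
    ph, pw = patch.shape
    p = (patch - patch.mean()) / (patch.std() + 1e-8)
    out_h = reference.shape[0] - ph + 1
    out_w = reference.shape[1] - pw + 1
    scores = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = reference[i:i + ph, j:j + pw]
            w = (window - window.mean()) / (window.std() + 1e-8)
            scores[i, j] = (p * w).mean()  # 1.0 indicates a perfect match
    return scores

# Hypothetical usage: recover where a 32x32 onboard view sits in a larger map.
satellite = np.random.rand(256, 256)       # stand-in for satellite imagery
view = satellite[100:132, 80:112].copy()   # patch "seen" by the vehicle
scores = normalized_cross_correlation(satellite, view)
row, col = np.unravel_index(np.argmax(scores), scores.shape)
print(row, col)  # -> 100 80, the true location of the patch
```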
“Computers can find obscure patterns that our eyes can’t see and can pick up even the smallest trend,” said Lee, who explained that VTRN was in danger of turning into an infeasible technology in common but challenging environments. “We rescued decades of work in solving this problem.”
Beyond the utility for autonomous systems on Earth, the system also has applications for space missions. The entry, descent and landing (EDL) system on JPL’s Mars 2020 Perseverance rover mission, for example, used VTRN for the first time on the Red Planet to land at Jezero Crater, a site that was previously considered too hazardous for a safe entry. With rovers such as Perseverance, “a certain amount of autonomous driving is necessary,” Chung noted, “since transmissions could take 20 minutes to travel between Earth and Mars, and there is no GPS on Mars.” The team also considered the Martian polar regions, which undergo intense seasonal changes similar to those on Earth; there, the new system could allow for improved navigation in support of scientific objectives, including the search for water.
Next, Fragoso, Lee and Chung are looking to expand the technology to account for changes in the weather as well: fog, rain, snow and so on. If successful, their work could help improve navigation for autonomous driving systems.
The Science Robotics paper is titled “A Seasonally Invariant Deep Transform for Visual Terrain-Relative Navigation”. This project was funded by the Boeing Company and the National Science Foundation. McCoy participated through Caltech’s Summer Undergraduate Research Fellowship (SURF) program.