For autonomous, self-driving cars to become a reality on public roads around the world, manufacturers need methodical, effective and accurate detection systems that increase safety and reliability for passengers. To this end, an international research team led by Professor Gwanggil Jeon of Incheon National University, Korea, has developed an end-to-end neural network that, in conjunction with Internet-of-Things technology, detects objects in 2D and 3D with high accuracy (>96%). The new method reportedly outperforms current approaches, paving the way toward practical 2D and 3D detection systems for AVs.
To enable vehicles to navigate autonomously, safely and reliably in a wide range of environments, an array of technologies relating to signal processing, image processing, deep learning, edge computing, and IoT needs to be utilized. To ensure a safe driving experience, it is imperative that AVs accurately monitor and distinguish their surroundings and understand potential threats to passenger safety. For this reason, AVs rely on sensors such as lidar, radar and RGB cameras, which produce large amounts of data in the form of RGB images and sets of 3D measurement points known as point clouds.
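To make the data volumes concrete, the two sensor outputs can be sketched as arrays: a camera frame is a height-by-width grid of color pixels, while a lidar sweep is an unordered list of 3D points, often with a per-point intensity. The shapes and point count below are illustrative assumptions, not figures from the paper.

```python
import numpy as np

# Hypothetical single sensor frame (shapes are illustrative, not the paper's).
rgb_image = np.zeros((720, 1280, 3), dtype=np.uint8)  # H x W x 3 color image

# A point cloud is simply a set of 3D measurement points; lidar returns often
# also carry a per-point reflectance (intensity), giving an N x 4 array.
num_points = 100_000
point_cloud = np.zeros((num_points, 4), dtype=np.float32)  # x, y, z, intensity

# Raw payload per frame, before any compression or processing.
frame_bytes = rgb_image.nbytes + point_cloud.nbytes
print(frame_bytes)  # a few megabytes per frame, dozens of times per second
```

Multiplied by the sensor frame rates, this is the data stream that on-board (edge) processing has to keep up with in real time.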
Efficiently and effectively processing and interpreting this collected information is crucial if AVs are to identify pedestrians and other road users. This can be achieved by integrating advanced computing methods and the Internet-of-Things (IoT) into these vehicles, enabling efficient on-site data processing and faster navigation of varied environments and obstacles.
A study published in the journal IEEE Transactions on Intelligent Transportation Systems on October 17, 2022, outlines how a team of researchers, led by Incheon National University's Professor Gwanggil Jeon, developed a smart IoT-enabled end-to-end system for real-time 3D object detection, based on deep learning and specialized for autonomous driving situations.
“For autonomous vehicles, environment perception is critical to answer a core question, ‘What is around me?’ It is essential that an autonomous vehicle can effectively and accurately understand its surrounding conditions and environments in order to perform a responsive action,” explained Professor Jeon. “We devised a detection model based on YOLOv3, a well-known identification algorithm. The model was first used for 2D object detection and then modified for 3D objects.”
The collected RGB images and point cloud data were then fed as input to YOLOv3, which output classification labels and bounding boxes with confidence scores. The team subsequently tested its performance on the Lyft data set, and the early results revealed that YOLOv3 achieved an extremely high detection accuracy (>96%) for both 2D and 3D objects.
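The detector's output format described above (a class label plus a bounding box and a confidence score per detection) lends itself to a simple post-processing step: discarding low-confidence predictions. The sketch below illustrates that step; the function name, the tuple layout, the 0.5 threshold and the sample detections are all illustrative assumptions, not the paper's settings.

```python
# Hypothetical post-processing of a YOLO-style detector's raw output.
# Each detection is (class_label, confidence, x, y, w, h).

def filter_detections(detections, conf_threshold=0.5):
    """Keep only detections whose confidence meets the threshold."""
    return [d for d in detections if d[1] >= conf_threshold]

raw = [
    ("car",        0.97, 120, 80, 60, 40),
    ("pedestrian", 0.88, 300, 90, 20, 50),
    ("cyclist",    0.31, 410, 70, 25, 45),  # low confidence, discarded
]

for label, conf, *box in filter_detections(raw):
    print(f"{label}: {conf:.2f} at {box}")
```

In a full pipeline this thresholding is typically followed by non-maximum suppression to merge overlapping boxes for the same object.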
The method can be applied to autonomous vehicles, autonomous parking, autonomous delivery and future autonomous robots, as well as in applications where object and obstacle detection, tracking and visual localization are required.
“At present, autonomous driving is being performed through lidar-based image processing, but it is predicted that a general camera will replace the role of lidar in the future. As such, the technology used in autonomous vehicles is changing every moment, and we are at the forefront,” said Professor Jeon. “Based on the development of element technologies, autonomous vehicles with improved safety should be available in the next 5-10 years.”
Authors: Imran Ahmed, Gwanggil Jeon, and Abdellah Chehri
Title of original paper: A Smart IoT Enabled End-to-End 3D Object Detection System for Autonomous Vehicles
Journal: IEEE Transactions on Intelligent Transportation Systems