California-based Helm.ai, a provider of AI software for ADAS, autonomous driving and robotics automation, has announced the release of neural network-based virtual scenario generation models for perception simulation.
The new technology expands the company’s AI software suite for developing advanced ADAS (Levels 2 and 3) and Level 4 autonomous driving systems.
The simulation models were created by training neural networks on extensive image data sets and can produce highly realistic images of virtual driving environments, with controllable parameters such as illumination, weather conditions, time of day, geography, highway and urban scenarios, road geometry and road markings.
The generated synthetic images include accurate label information for surrounding agents and obstacles, including pedestrians, vehicles, lane markings and traffic cones. This makes it possible to produce realistic, labeled synthetic image data at scale for training and validation, particularly for rare corner cases.
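As an illustration of what such labeled synthetic data might look like, the short Python sketch below defines a per-frame record holding an image path, object annotations and lane-marking polylines. The schema, field names and object classes are illustrative assumptions, not Helm.ai's actual label format.

```python
from dataclasses import dataclass, field

# Hypothetical schema for a labeled synthetic frame; the field names and
# object classes are illustrative assumptions, not Helm.ai's actual format.

@dataclass
class Annotation:
    category: str                                   # e.g. "pedestrian", "vehicle", "traffic_cone"
    bbox_xywh: tuple[float, float, float, float]    # pixel-space bounding box (x, y, width, height)
    instance_id: int

@dataclass
class LabeledFrame:
    image_path: str                                 # rendered synthetic image
    annotations: list[Annotation] = field(default_factory=list)
    lane_markings: list[list[tuple[float, float]]] = field(default_factory=list)  # polylines

# Example corner-case frame: a pedestrian stepping out between parked vehicles at night
frame = LabeledFrame(
    image_path="synthetic/night_urban_000042.png",
    annotations=[
        Annotation("pedestrian", (512.0, 300.0, 40.0, 110.0), 1),
        Annotation("vehicle", (200.0, 280.0, 180.0, 120.0), 2),
        Annotation("traffic_cone", (640.0, 350.0, 18.0, 30.0), 3),
    ],
    lane_markings=[[(0.0, 400.0), (640.0, 380.0), (1280.0, 400.0)]],
)
```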
Users can input text- or image-based prompts to instantly create lifelike driving scenes that replicate real-world situations or entirely synthetic environments. Such AI-based simulation capabilities facilitate scalable training and validation of perception software for autonomous systems.
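For a rough sense of how prompt-driven scene generation works in general, the sketch below conditions an off-the-shelf text-to-image diffusion model on a scene description built from weather, time-of-day and road-geometry parameters. The use of the open-source diffusers library and the Stable Diffusion checkpoint are assumptions made for illustration only; Helm.ai has not published an interface for its generative simulation models.

```python
# Illustrative only: conditions a generic text-to-image model on a scene
# description. Helm.ai's models and interface are not public; the library,
# model ID and prompt wording here are assumptions.
import torch
from diffusers import StableDiffusionPipeline

scene = {
    "weather": "heavy rain",
    "time_of_day": "dusk",
    "setting": "urban intersection",
    "road_geometry": "four-way junction with faded lane markings",
}
prompt = (
    "dashcam photo of a {setting} at {time_of_day} in {weather}, "
    "{road_geometry}, pedestrians crossing, photorealistic".format(**scene)
)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("synthetic_scene.png")
```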
Simulation is important for the development and validation of ADAS and autonomous driving systems, especially for rare corner cases like challenging lighting conditions, complex road geometries or encounters with unusual obstacles.
Unlike physics-based simulators, which are limited by the difficulty of accurately modeling physical interactions and realistic appearances, generative AI-based simulation learns directly from real image data, Helm.ai says, enabling realistic appearance modeling, rapid asset generation and scalability across diverse driving scenarios.
The company says the simulation models can be developed further to construct any object class or environmental condition, meeting automakers' development and validation requirements.