Add pointcloud_densifier package #10225
Labels
- `component:perception` (auto-assigned): Advanced sensor data processing and environment understanding.
- `component:sensing` (auto-assigned): Data acquisition from sensors, drivers, preprocessing.
- `perception`
- `run:build-and-test-differential` (used-by-ci): Mark to enable build-and-test-differential workflow.
Description
This PR adds a PointCloud Densifier component for Autoware, which enhances sparse LiDAR point cloud data by leveraging information from previous frames. Long-range LiDAR data is often sparse (beyond 80-90 meters), limiting the effectiveness of downstream perception algorithms. This component overcomes that limitation by selectively integrating points from previous frames into the current frame, producing a denser representation of the environment.
Purpose
The main purpose is to detect long-range stationary vehicles on the road, such as those stopped at an intersection or traffic light.
This package increases the density of long-range points so that rule-based LiDAR algorithms can detect stationary vehicles. Additionally, we plan to create a follow-up pull request (PR) adding a roi_excluded_downsample_filter that prevents the downsampling of long-range points, extending OT128 stationary-object detection to around 150-200 meters.
Possible approaches
The algorithm works as follows:
1- ROI Filtering: First filters the input point cloud to only keep points in a specific region of interest (ROI), typically focused on the distant area in front of the vehicle.
2- Occupancy Grid Creation: Creates a basic 2D occupancy grid from the filtered points to track which areas contain valid points in the current frame.
3- Previous Frame Integration: Transforms points from previous frames into the current frame's coordinate system using TF transformations.
4- Selective Point Addition: Adds points from previous frames only if they fall into grid cells that are occupied in the current frame. This ensures that only relevant points are added, avoiding ghost points from dynamic objects.
5- Combined Output: Returns a combined point cloud that includes both the current frame's points and selected points from previous frames.
The implementation follows Autoware's filtering architecture by inheriting from the `Filter` base class, providing compatibility with the existing preprocessing pipeline and support for point cloud indices.

Definition of done