The `lidar_transfusion` package is used for 3D object detection based on lidar data (x, y, z, intensity).
The implementation is based on the TransFusion [1] work. It uses the TensorRT library for data processing and network inference.
We trained the models using <https://github.com/open-mmlab/mmdetection3d>.
| Name                 | Type                            | Description       |
| -------------------- | ------------------------------- | ----------------- |
| `~/input/pointcloud` | `sensor_msgs::msg::PointCloud2` | Input pointcloud. |
| Name                                   | Type                                             | Description                 |
| -------------------------------------- | ------------------------------------------------ | --------------------------- |
| `~/output/objects`                     | `autoware_perception_msgs::msg::DetectedObjects` | Detected objects.           |
| `debug/cyclic_time_ms`                 | `tier4_debug_msgs::msg::Float64Stamped`          | Cyclic time (ms).           |
| `debug/pipeline_latency_ms`            | `tier4_debug_msgs::msg::Float64Stamped`          | Pipeline latency time (ms). |
| `debug/processing_time/preprocess_ms`  | `tier4_debug_msgs::msg::Float64Stamped`          | Preprocess time (ms).       |
| `debug/processing_time/inference_ms`   | `tier4_debug_msgs::msg::Float64Stamped`          | Inference time (ms).        |
| `debug/processing_time/postprocess_ms` | `tier4_debug_msgs::msg::Float64Stamped`          | Postprocess time (ms).      |
| `debug/processing_time/total_ms`       | `tier4_debug_msgs::msg::Float64Stamped`          | Total processing time (ms). |
{{ json_to_markdown("perception/lidar_transfusion/schema/transfusion.schema.json") }}
{{ json_to_markdown("perception/lidar_transfusion/schema/detection_class_remapper.schema.json") }}
The `lidar_transfusion` node has a `build_only` option to build the TensorRT engine file from the ONNX file.
Although it is preferred to move all the ROS parameters to the `.param.yaml` file in Autoware Universe, the `build_only` option has not been moved to the `.param.yaml` file for now, because it may be used as a flag to execute the build as a pre-task. You can execute the build with the following command:
```bash
ros2 launch lidar_transfusion lidar_transfusion.launch.xml build_only:=true
```
The default logging severity level for `lidar_transfusion` is `info`. For debugging purposes, the developer may decrease the severity level using the `log_level` parameter:
```bash
ros2 launch lidar_transfusion lidar_transfusion.launch.xml log_level:=debug
```
This library operates on raw cloud data (bytes). It is assumed that the input pointcloud message has the following format:
```python
[
  sensor_msgs.msg.PointField(name='x', offset=0, datatype=7, count=1),
  sensor_msgs.msg.PointField(name='y', offset=4, datatype=7, count=1),
  sensor_msgs.msg.PointField(name='z', offset=8, datatype=7, count=1),
  sensor_msgs.msg.PointField(name='intensity', offset=12, datatype=2, count=1)
]
```
The input may contain other fields as well; the format shown above is the required minimum. For debugging purposes, you can validate your pointcloud topic with a simple command:

```bash
ros2 topic echo <input_topic> --field fields
```
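To make the byte layout above concrete, here is a minimal, illustrative sketch (not part of the package) of how one point is unpacked from a raw `PointCloud2` data buffer. Per `sensor_msgs/msg/PointField`, datatype `7` is FLOAT32 and datatype `2` is UINT8; the `point_step` of 16 bytes is an assumption for this example (the real value comes from the message itself).

```python
import struct

POINT_STEP = 16  # assumed bytes per point; in practice read msg.point_step

def unpack_point(data: bytes, index: int):
    """Return (x, y, z, intensity) of the point at `index` in a raw buffer."""
    base = index * POINT_STEP
    # '<fff' -> three little-endian FLOAT32 values at offsets 0, 4, 8
    x, y, z = struct.unpack_from("<fff", data, base)
    # '<B' -> one UINT8 at offset 12
    (intensity,) = struct.unpack_from("<B", data, base + 12)
    return x, y, z, intensity

# Example buffer holding a single point (1.0, 2.0, 3.0, 255), padded to 16 bytes
buf = struct.pack("<fffB3x", 1.0, 2.0, 3.0, 255)
print(unpack_point(buf, 0))  # -> (1.0, 2.0, 3.0, 255)
```

Any extra fields in a real message simply occupy the bytes between offset 13 and `point_step`, which this minimal layout ignores.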
You can download the ONNX files of the trained models by clicking on the links below.

- TransFusion: transfusion.onnx

The model was trained on TIER IV's internal database (~11k lidar frames) for 20 epochs.
[1] Xuyang Bai, Zeyu Hu, Xinge Zhu, Qingqiu Huang, Yilun Chen, Hongbo Fu and Chiew-Lan Tai. "TransFusion: Robust LiDAR-Camera Fusion for 3D Object Detection with Transformers." arXiv preprint arXiv:2203.11496 (2022).
[2] https://github.com/wep21/CUDA-TransFusion
[3] https://github.com/open-mmlab/mmdetection3d