
Commit 2fd344a

kaancolak authored and KhalilSelyan committed
refactor(lidar_centerpoint): add training docs (#5570)
Signed-off-by: Kaan Çolak <kaancolak95@gmail.com>
1 parent d4460e2 commit 2fd344a

File tree

1 file changed: +207 -1 lines changed

perception/lidar_centerpoint/README.md

+207-1
@@ -64,12 +64,207 @@ ros2 launch lidar_centerpoint lidar_centerpoint.launch.xml model_name:=centerpoi

You can download the ONNX format of trained models by clicking on the links below.

- Centerpoint: [pts_voxel_encoder_centerpoint.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_voxel_encoder_centerpoint.onnx), [pts_backbone_neck_head_centerpoint.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_backbone_neck_head_centerpoint.onnx)
- Centerpoint tiny: [pts_voxel_encoder_centerpoint_tiny.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_voxel_encoder_centerpoint_tiny.onnx), [pts_backbone_neck_head_centerpoint_tiny.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_backbone_neck_head_centerpoint_tiny.onnx)

`Centerpoint` was trained in `nuScenes` (~28k lidar frames) [8] and TIER IV's internal database (~11k lidar frames) for 60 epochs.
`Centerpoint tiny` was trained in `Argoverse 2` (~110k lidar frames) [9] and TIER IV's internal database (~11k lidar frames) for 20 epochs.

## Training CenterPoint Model and Deploying to Autoware

### Overview

This guide provides instructions on training a CenterPoint model using the **mmdetection3d** repository
and seamlessly deploying it within Autoware.

### Installation

#### Install prerequisites

**Step 1.** Download and install Miniconda from the [official website](https://mmpretrain.readthedocs.io/en/latest/get_started.html).

**Step 2.** Create a conda virtual environment and activate it:

```bash
conda create --name train-centerpoint python=3.8 -y
conda activate train-centerpoint
```

**Step 3.** Install PyTorch

Please ensure that the PyTorch version you install is compatible with CUDA 11.6, as this is a requirement of current Autoware.

```bash
conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia
```
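
After installation, you can optionally confirm that PyTorch sees your GPU. This quick check is not part of the original steps, just a common sanity test:

```bash
# Optional sanity check: print the PyTorch version, its CUDA build, and GPU visibility.
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```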

#### Install mmdetection3d

**Step 1.** Install MMEngine, MMCV, and MMDetection using MIM

```bash
pip install -U openmim
mim install mmengine
mim install 'mmcv>=2.0.0rc4'
mim install 'mmdet>=3.0.0rc5, <3.3.0'
```

**Step 2.** Install the forked mmdetection3d repository

We have introduced several valuable enhancements in our fork of the mmdetection3d repository.
Notably, we've made the PointPillar z voxel feature input optional to maintain compatibility with the original paper.
In addition, we've integrated a PyTorch-to-ONNX converter and a T4 format reader for added functionality.

```bash
git clone https://github.com/autowarefoundation/mmdetection3d.git
cd mmdetection3d
pip install -v -e .
```
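
To confirm the editable install succeeded, an import check such as the following (not part of the original instructions) can be run:

```bash
# Optional: verify that the forked mmdetection3d package is importable.
python -c "import mmdet3d; print(mmdet3d.__version__)"
```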

#### Use Training Repository with Docker

Alternatively, you can use Docker to run the mmdetection3d repository. We provide a Dockerfile to build a Docker image with the mmdetection3d repository and its dependencies.

Clone the fork of the mmdetection3d repository:

```bash
git clone https://github.com/autowarefoundation/mmdetection3d.git
```

Build the Docker image by running the following command:

```bash
cd mmdetection3d
docker build -t mmdetection3d -f docker/Dockerfile .
```

Run the Docker container:

```bash
docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmdetection3d/data mmdetection3d
```
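
Here `{DATA_DIR}` is a placeholder for the host directory that holds your datasets. As an illustration only (the host path below is hypothetical), mounting a local dataset folder might look like this:

```bash
# Same command with a concrete, hypothetical host path substituted for {DATA_DIR}.
docker run --gpus all --shm-size=8g -it \
  -v /home/user/datasets:/mmdetection3d/data \
  mmdetection3d
```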

### Preparing NuScenes dataset for training

**Step 1.** Download the NuScenes dataset from the [official website](https://www.nuscenes.org/download) and extract the dataset to a folder of your choice.

**Note:** The NuScenes dataset is large and requires significant disk space. Ensure you have enough storage available before proceeding.

**Step 2.** Create a symbolic link to the dataset folder

```bash
ln -s /path/to/nuscenes/dataset/ /path/to/mmdetection3d/data/nuscenes/
```
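
Before generating the info files, it may help to confirm that the linked folder exposes the usual NuScenes layout; the exact subfolders depend on which splits you downloaded:

```bash
# Sanity check: the linked folder is typically expected to contain
#   maps/  samples/  sweeps/  v1.0-trainval/  (plus v1.0-test/ if downloaded)
ls /path/to/mmdetection3d/data/nuscenes/
```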

**Step 3.** Prepare the NuScenes data by running:

```bash
cd mmdetection3d
python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes
```
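
When the script finishes, the output directory should contain the generated annotation files. The file names mentioned below are typical for recent mmdetection3d versions and may differ slightly in your checkout:

```bash
# Typical artifacts produced by create_data.py (names may vary between versions):
#   nuscenes_infos_train.pkl, nuscenes_infos_val.pkl, and a ground-truth database for augmentation.
ls ./data/nuscenes/
```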

### Training CenterPoint with NuScenes Dataset

#### Prepare the config file

The configuration file that illustrates how to train the CenterPoint model with the NuScenes dataset is
located at `mmdetection3d/projects/AutowareCenterPoint/configs`. This configuration file is a derived version of
[this centerpoint configuration file](https://github.com/autowarefoundation/mmdetection3d/blob/5c0613be29bd2e51771ec5e046d89ba3089887c7/configs/centerpoint/centerpoint_pillar02_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d.py)
from mmdetection3d.
In this custom configuration, the **use_voxel_center_z** parameter is set to **False** to deactivate the z coordinate of the voxel center,
aligning with the original paper's specifications and making the model compatible with Autoware. Additionally, the filter size is set to **[32, 32]**.

The CenterPoint model can be tailored to your specific requirements by modifying various parameters within the configuration file.
This includes adjustments related to preprocessing operations, training, testing, model architecture, dataset, optimizer, learning rate scheduler, and more.

#### Start training

```bash
python tools/train.py projects/AutowareCenterPoint/configs/centerpoint_custom.py --work-dir ./work_dirs/centerpoint_custom
```
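
If you prefer not to edit the config file directly, mmengine-based training scripts generally accept `--cfg-options` overrides, and the repository ships a `tools/dist_train.sh` helper for multi-GPU runs. Treat the commands below as a sketch and check `python tools/train.py --help` for the flags supported by your version:

```bash
# Override selected settings from the command line instead of editing the config
# (flag support may vary between versions).
python tools/train.py projects/AutowareCenterPoint/configs/centerpoint_custom.py \
  --work-dir ./work_dirs/centerpoint_custom \
  --cfg-options train_dataloader.batch_size=2

# Multi-GPU training via the bundled launcher (example: 2 GPUs).
bash tools/dist_train.sh projects/AutowareCenterPoint/configs/centerpoint_custom.py 2 \
  --work-dir ./work_dirs/centerpoint_custom
```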

#### Evaluation of the trained model

For evaluation purposes, we have included a sample dataset captured from a vehicle equipped with the following LiDAR sensors:
1 x Velodyne VLS128, 4 x Velodyne VLP16, and 1 x Robosense RS Bpearl. This dataset comprises 600 LiDAR frames and encompasses 5 distinct classes with 6905 car, 3951 pedestrian,
75 cyclist, 162 bus, and 326 truck 3D annotations. In the sample dataset, frames are annotated at 2 frames per second. You can employ this dataset for a wide range of purposes,
including training, evaluation, and fine-tuning of models. It is organized in the T4 format.

##### Download the sample dataset

```bash
wget https://autoware-files.s3.us-west-2.amazonaws.com/dataset/lidar_detection_sample_dataset.tar.gz
# Extract the dataset to a folder of your choice
tar -xvf lidar_detection_sample_dataset.tar.gz
# Create a symbolic link to the dataset folder
ln -s /PATH/TO/DATASET/ /PATH/TO/mmdetection3d/data/tier4_dataset/
```

##### Prepare dataset and evaluate trained model

Create `.pkl` files for training, evaluation, and testing.

The dataset was formatted according to T4Dataset specifications, with 'sample_dataset' designated as one of its versions.

```bash
python tools/create_data.py T4Dataset --root-path data/sample_dataset/ --out-dir data/sample_dataset/ --extra-tag T4Dataset --version sample_dataset --annotation-hz 2
```

Run the evaluation:

```bash
python tools/test.py projects/AutowareCenterPoint/configs/centerpoint_custom_test.py /PATH/OF/THE/CHECKPOINT --task lidar_det
```

Evaluation results could be relatively low due to variations in sensor modalities between the sample dataset
and the training dataset. The model's training parameters are originally tailored to the NuScenes dataset, which employs a single lidar
sensor positioned atop the vehicle. In contrast, the provided sample dataset comprises concatenated point clouds positioned at
the base link location of the vehicle.

### Deploying CenterPoint model to Autoware

#### Convert CenterPoint PyTorch model to ONNX Format

The lidar_centerpoint implementation requires two ONNX models as input: the voxel encoder and the backbone-neck-head of the CenterPoint model. Other aspects of the network,
such as preprocessing operations, are implemented externally. Under the fork of the mmdetection3d repository,
we have included a script that converts the CenterPoint model to an Autoware-compatible ONNX format.
You can find it in the `mmdetection3d/projects/AutowareCenterPoint` directory.

```bash
python projects/AutowareCenterPoint/centerpoint_onnx_converter.py --cfg projects/AutowareCenterPoint/configs/centerpoint_custom.py --ckpt work_dirs/centerpoint_custom/YOUR_BEST_MODEL.pth --work-dir ./work_dirs/onnx_models
```
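
The node resolves the two ONNX files from `model_path` and `model_name`; judging from the pretrained links above, the expected pattern is `pts_voxel_encoder_<model_name>.onnx` and `pts_backbone_neck_head_<model_name>.onnx`. A placement step along these lines is therefore usually needed; the exported file names and destination path below are illustrative, so adjust them to what the converter actually produced:

```bash
# Place the exported models where the launch file will look for them
# (illustrative paths; rename the converter's outputs if they differ).
mkdir -p /PATH/TO/ONNX/FILE/
cp work_dirs/onnx_models/pts_voxel_encoder_centerpoint_custom.onnx /PATH/TO/ONNX/FILE/
cp work_dirs/onnx_models/pts_backbone_neck_head_centerpoint_custom.onnx /PATH/TO/ONNX/FILE/
```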

#### Create the config file for the custom model

Create a new config file named **centerpoint_custom.param.yaml** under the config file directory of the lidar_centerpoint node. Set the parameters of the config file, such as
`point_cloud_range`, `point_feature_size`, `voxel_size`, etc., according to the training config file.

```yaml
/**:
  ros__parameters:
    class_names: ["CAR", "TRUCK", "BUS", "BICYCLE", "PEDESTRIAN"]
    point_feature_size: 4
    max_voxel_size: 40000
    point_cloud_range: [-51.2, -51.2, -3.0, 51.2, 51.2, 5.0]
    voxel_size: [0.2, 0.2, 8.0]
    downsample_factor: 1
    encoder_in_feature_size: 9
    # post-process params
    circle_nms_dist_threshold: 0.5
    iou_nms_target_class_names: ["CAR"]
    iou_nms_search_distance_2d: 10.0
    iou_nms_threshold: 0.1
    yaw_norm_thresholds: [0.3, 0.3, 0.3, 0.3, 0.0]
```
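
Assuming a source checkout where the package sits under `perception/lidar_centerpoint` (the exact location depends on your workspace layout), placing the file and rebuilding the package could look like the sketch below:

```bash
# Illustrative: copy the parameter file into the package's config directory and rebuild.
cp centerpoint_custom.param.yaml \
  /YOUR/AUTOWARE/PATH/Autoware/src/universe/autoware.universe/perception/lidar_centerpoint/config/
cd /YOUR/AUTOWARE/PATH/Autoware
colcon build --symlink-install --packages-select lidar_centerpoint
```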

#### Launch the lidar_centerpoint node

```bash
cd /YOUR/AUTOWARE/PATH/Autoware
source install/setup.bash
ros2 launch lidar_centerpoint lidar_centerpoint.launch.xml model_name:=centerpoint_custom model_path:=/PATH/TO/ONNX/FILE/
```
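
After launching, standard ROS 2 introspection commands (not specific to this package) can confirm that the node is up and publishing; actual node and topic names depend on your launch configuration:

```bash
# Confirm the node is running and list candidate detection output topics.
ros2 node list | grep -i centerpoint
ros2 topic list | grep -i objects
# Then echo one of the listed topics once to verify detections are being published, e.g.:
# ros2 topic echo /your/objects/topic --once
```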

### Changelog

#### v1 (2022/07/06)
@@ -144,3 +339,14 @@ Example:
[v1-head-centerpoint]: https://awf.ml.dev.web.auto/perception/models/centerpoint/v1/pts_backbone_neck_head_centerpoint.onnx
[v1-encoder-centerpoint-tiny]: https://awf.ml.dev.web.auto/perception/models/centerpoint/v1/pts_voxel_encoder_centerpoint_tiny.onnx
[v1-head-centerpoint-tiny]: https://awf.ml.dev.web.auto/perception/models/centerpoint/v1/pts_backbone_neck_head_centerpoint_tiny.onnx

## Acknowledgment: deepen.ai's 3D Annotation Tools Contribution

Special thanks to [Deepen AI](https://www.deepen.ai/) for providing their 3D Annotation tools, which have been instrumental in creating our sample dataset.

## Legal Notice

_The nuScenes dataset is released publicly for non-commercial use under the Creative
Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License.
Additional Terms of Use can be found at <https://www.nuscenes.org/terms-of-use>.
To inquire about a commercial license, please contact nuscenes@motional.com._
