
Commit 48afb5c

refactor(lidar_centerpoint): add link and small fix
Signed-off-by: Kaan Çolak <kaancolak95@gmail.com>
1 parent 20ef77c commit 48afb5c

File tree

1 file changed: +12 −12 lines changed

perception/lidar_centerpoint/README.md

@@ -73,7 +73,7 @@ You can download the onnx format of trained models by clicking on the links below
 ### Overview

 This guide provides instructions on training a CenterPoint model using the **mmdetection3d** repository
-and seamlessly deploying it within the Autoware.
+and seamlessly deploying it within Autoware.

 ### Installation

@@ -90,15 +90,15 @@ conda activate train-centerpoint

 **Step 3.** Install PyTorch

-Please ensure you have PyTorch installed, compatible with CUDA 11.6, as it is a requirement for current Autoware.
+Please ensure you have PyTorch installed and compatible with CUDA 11.6, as it is a requirement for the current version of Autoware.

 ```bash
 conda install pytorch==1.13.1 torchvision==0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia
 ```

 #### Install mmdetection3d

-**Step 1.** Install MMEngine, MMCV and MMDetection using MIM
+**Step 1.** Install MMEngine, MMCV, and MMDetection using MIM

 ```bash
 pip install -U openmim
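A quick sanity check for the PyTorch step above — a minimal sketch, assuming the conda installation completed without errors, that confirms the installed build actually targets CUDA 11.6:

```bash
# Print the PyTorch version, the CUDA version it was built against, and
# whether a GPU is visible. Expected output: 1.13.1 11.6 True
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
```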
@@ -114,7 +114,7 @@ Notably, we've made the PointPillar z voxel feature input optional to maintain compatibility
 In addition, we've integrated a PyTorch to ONNX converter and a Tier4 Dataset format reader for added functionality.

 ```bash
-git clone https://github.com/autowarefoundation/mmdetection3d.git -b dev-1.x-autoware
+git clone https://github.com/autowarefoundation/mmdetection3d.git
 cd mmdetection3d
 pip install -v -e .
 ```
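To confirm the editable install of the fork succeeded before moving on — a minimal check, assuming nothing beyond the package name:

```bash
# Confirm mmdetection3d is importable from the active environment.
python -c "import mmdet3d; print(mmdet3d.__version__)"
```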
@@ -157,25 +157,25 @@ python tools/train.py configs/centerpoint/centerpoint_custom.py --work-dir ./wor

 #### Evaluation of the trained model

-For evaluation purposes, we have included a sample dataset captured from vehicle which consists of the following LiDAR sensors:
+For evaluation purposes, we have included a sample dataset captured from a vehicle equipped with the following LiDAR sensors:
 1 x Velodyne VLS128, 4 x Velodyne VLP16, and 1 x Robosense RS Bpearl. This dataset comprises 600 LiDAR frames and encompasses 5 distinct classes, 6905 cars, 3951 pedestrians,
-75 cyclists, 162 buses, and 326 trucks 3D annotation. In the sample dataset, frames annotatated as a 2 frame, each second. You can employ this dataset for a wide range of purposes,
+75 cyclists, 162 buses, and 326 trucks annotated in 3D. In the sample dataset, frames are annotated at 2 frames per second. You can employ this dataset for a wide range of purposes,
 including training, evaluation, and fine-tuning of models. It is organized in the Tier4Dataset format.

 ##### Download the sample dataset

 ```bash
-TODO(kaancolak): add the link to the sample dataset

+wget https://autoware-files.s3.us-west-2.amazonaws.com/dataset/lidar_detection_sample_dataset.tar.gz
 #Extract the dataset to a folder of your choice
-
+tar -xvf lidar_detection_sample_dataset.tar.gz
 #Create a symbolic link to the dataset folder
 ln -s /PATH/TO/DATASET/ /PATH/TO/mmdetection3d/data/tier4_dataset/
 ```

 ##### Prepare dataset and evaluate trained model

-Create .pkl files for the purposes of training, evaluation, and testing.
+Create .pkl files for training, evaluation, and testing.

 ```bash

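The hunk above ends before the .pkl generation command itself is visible. As a rough sketch only: stock mmdetection3d creates these info files with `tools/create_data.py`, so a Tier4-format invocation might look like the following — the `tier4` dataset key and the paths are assumptions, not the fork's documented CLI:

```bash
# Hypothetical info-file generation for the Tier4-format sample dataset.
# The dataset key and paths below are illustrative; check the fork's
# tools/create_data.py for the actual supported arguments.
python tools/create_data.py tier4 \
    --root-path ./data/tier4_dataset \
    --out-dir ./data/tier4_dataset \
    --extra-tag tier4
```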
@@ -188,7 +188,7 @@ Run evaluation
 python tools/test.py ./configs/centerpoint/test-centerpoint.py /PATH/OF/THE/CHECKPOINT --task lidar_det
 ```

-Evaluation result could be relatively low because of the e to variations in sensor modalities between the sample dataset
+Evaluation results could be relatively low due to variations in sensor modalities between the sample dataset
 and the training dataset. The model's training parameters are originally tailored to the NuScenes dataset, which employs a single lidar
 sensor positioned atop the vehicle. In contrast, the provided sample dataset comprises concatenated point clouds positioned at
 the base link location of the vehicle.
@@ -199,14 +199,14 @@ the base link location of the vehicle.

 The lidar_centerpoint implementation requires two ONNX models as input: the voxel encoder and the backbone-neck-head of the CenterPoint model; other aspects of the network,
 such as preprocessing operations, are implemented externally. Under the fork of the mmdetection3d repository,
-we have included a script that converts the CenterPoint model to Autoware compitible ONNX format.
+we have included a script that converts the CenterPoint model to an Autoware-compatible ONNX format.
 You can find it in the `mmdetection3d/tools/centerpoint_onnx_converter.py` file.

 ```bash
 python tools/centerpoint_onnx_converter.py --cfg configs/centerpoint/centerpoint_custom.py --ckpt work_dirs/centerpoint_custom/YOUR_BEST_MODEL.pth --work-dir ./work_dirs/onnx_models
 ```

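A minimal way to sanity-check the two exported models, assuming the `onnx` Python package is available; the output filenames below are assumptions, so substitute whatever the converter actually writes to the work directory:

```bash
# Structurally validate both exported ONNX files (hypothetical filenames).
python -c "
import onnx
for f in ('work_dirs/onnx_models/pts_voxel_encoder_centerpoint_custom.onnx',
          'work_dirs/onnx_models/pts_backbone_neck_head_centerpoint_custom.onnx'):
    onnx.checker.check_model(onnx.load(f))
    print(f, 'ok')
"
```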
-#### Create the config file for custom model
+#### Create the config file for the custom model

 Create a new config file named **centerpoint_custom.param.yaml** under the config file directory of the lidar_centerpoint node. Set the parameters of the config file, such as
 point_cloud_range, point_feature_size, voxel_size, etc., according to the training config file.
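A minimal sketch of what such a file could look like, assuming the usual ROS 2 parameter layout; every key and value below is an illustrative placeholder to be replaced with the settings from your own training config, not a canonical schema:

```bash
# Hypothetical centerpoint_custom.param.yaml; mirror your training config here.
cat > centerpoint_custom.param.yaml << 'EOF'
/**:
  ros__parameters:
    class_names: ["CAR", "TRUCK", "BUS", "BICYCLE", "PEDESTRIAN"]
    point_feature_size: 4
    point_cloud_range: [-76.8, -76.8, -4.0, 76.8, 76.8, 6.0]
    voxel_size: [0.32, 0.32, 10.0]
EOF
```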
