Alternatively, you can use Docker to run the mmdetection3d repository. We provide a Dockerfile to build a Docker image with the mmdetection3d repository and its dependencies.
```shell
docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmdetection3d/data mmdetection3d
```
### Preparing NuScenes dataset for training
**Step 1.** Download the NuScenes dataset from the [official website](https://www.nuscenes.org/download) and extract the dataset to a folder of your choice.
**Note:** The NuScenes dataset is large and requires significant disk space. Ensure you have enough storage available before proceeding.
**Step 2.** Create a symbolic link to the dataset folder.
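For example, assuming the dataset was extracted to a directory pointed to by `NUSCENES_DIR` (a placeholder used here for illustration), you can link it into the repository's `data` folder from the mmdetection3d checkout:

```shell
# NUSCENES_DIR is a placeholder for wherever you extracted the dataset.
NUSCENES_DIR="${NUSCENES_DIR:-$HOME/datasets/nuscenes}"
# Create the data directory inside the mmdetection3d checkout and link the dataset into it.
mkdir -p data
ln -sfn "$NUSCENES_DIR" data/nuscenes
```

This keeps the large dataset on whatever disk you extracted it to while exposing it at the path the training configs expect.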
In this custom configuration, the **use_voxel_center_z** parameter is set to **False** to deactivate the z coordinate of the voxel center, aligning with the original paper's specifications and making the model compatible with Autoware. Additionally, the filter size is set to **[32, 32]**.

The CenterPoint model can be tailored to your specific requirements by modifying various parameters within the configuration file. This includes adjustments related to preprocessing operations, training, testing, model architecture, dataset, optimizer, learning rate scheduler, and more.
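As a sketch, the overrides above might look like the following fragment of an mmdetection3d-style Python config; the encoder type name and exact keys are assumptions based on the fork and should be checked against its actual config files:

```python
# Illustrative config fragment only; the encoder type name and exact keys
# are assumptions and may differ in the fork's actual configuration.
model = dict(
    pts_voxel_encoder=dict(
        type='PillarFeatureNetAutoware',  # assumed custom voxel encoder in the fork
        use_voxel_center_z=False,         # drop the voxel-center z feature, per the paper
        feat_channels=[32, 32],           # filter size set to [32, 32]
    ),
)
```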
For evaluation purposes, we have included a sample dataset captured from the vehicle, which consists of the following LiDAR sensors: 1 x Velodyne VLS128, 4 x Velodyne VLP16, and 1 x Robosense RS Bpearl. This dataset comprises 600 LiDAR frames and encompasses 5 distinct classes with 3D annotations: 6905 cars, 3951 pedestrians, 75 cyclists, 162 buses, and 326 trucks. In the sample dataset, frames are annotated at 2 frames per second. You can employ this dataset for a wide range of purposes, including training, evaluation, and fine-tuning of models. It is organized in the T4 format.
Evaluation results could be relatively low due to variations in sensor modalities between the sample dataset and the training dataset. The model's training parameters are originally tailored to the NuScenes dataset, which employs a single lidar sensor positioned atop the vehicle. In contrast, the provided sample dataset comprises concatenated point clouds positioned at the base link location of the vehicle.
### Deploying CenterPoint model to Autoware
#### Convert CenterPoint PyTorch model to ONNX Format
The lidar_centerpoint implementation requires two ONNX models as input: the voxel encoder and the backbone-neck-head of the CenterPoint model. Other aspects of the network, such as preprocessing operations, are implemented externally. Under the fork of the mmdetection3d repository, we have included a script that converts the CenterPoint model to an Autoware-compatible ONNX format. You can find it in the `mmdetection3d/projects/AutowareCenterPoint` directory.
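A conversion run might look like the following; the script name and flags shown here are assumptions (check the `projects/AutowareCenterPoint` directory for the actual entry point), and the config and checkpoint paths are placeholders for your own files:

```shell
# Hypothetical invocation; the script name, flags, and paths are illustrative
# and should be verified against the fork's AutowareCenterPoint project.
python projects/AutowareCenterPoint/centerpoint_onnx_converter.py \
    --cfg projects/AutowareCenterPoint/configs/centerpoint_custom.py \
    --ckpt work_dirs/centerpoint_custom/epoch_20.pth \
    --work-dir ./centerpoint_onnx
```

The run is expected to emit the two ONNX files (voxel encoder and backbone-neck-head) consumed by the lidar_centerpoint node.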
Create a new config file named **centerpoint_custom.param.yaml** under the config directory of the lidar_centerpoint node. Set the parameters of the config file, such as point_cloud_range, point_feature_size, and voxel_size, according to the training config file.
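A minimal sketch of such a param file is shown below. The parameter names follow the lidar_centerpoint node's conventions, but every value here is a placeholder that must be replaced with the values from your training config:

```yaml
# centerpoint_custom.param.yaml -- sketch only; all values are placeholders
/**:
  ros__parameters:
    class_names: ["CAR", "TRUCK", "BUS", "BICYCLE", "PEDESTRIAN"]
    point_feature_size: 4 # number of features per point used in training
    max_voxel_size: 40000
    point_cloud_range: [-76.8, -76.8, -4.0, 76.8, 76.8, 6.0]
    voxel_size: [0.32, 0.32, 10.0]
    downsample_factor: 1
    encoder_in_feature_size: 9
```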
## Acknowledgment: deepen.ai's 3D Annotation Tools Contribution
Special thanks to [Deepen AI](https://www.deepen.ai/) for providing their 3D Annotation tools, which have been instrumental in creating our sample dataset.
## Legal Notice
_The nuScenes dataset is released publicly for non-commercial use under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License. Additional Terms of Use can be found at <https://www.nuscenes.org/terms-of-use>. To inquire about a commercial license please contact nuscenes@motional.com._