@@ -62,7 +62,7 @@ ros2 launch lidar_centerpoint lidar_centerpoint.launch.xml model_name:=centerpoi
You can download the trained models in ONNX format via the links below, or fetch them from the command line as shown after the list.
- - Centerpoint : [pts_voxel_encoder_centerpoint.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_voxel_encoder_centerpoint.onnx), [pts_backbone_neck_head_centerpoint.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_backbone_neck_head_centerpoint.onnx)
+ - Centerpoint: [pts_voxel_encoder_centerpoint.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_voxel_encoder_centerpoint.onnx), [pts_backbone_neck_head_centerpoint.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_backbone_neck_head_centerpoint.onnx)
- Centerpoint tiny: [pts_voxel_encoder_centerpoint_tiny.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_voxel_encoder_centerpoint_tiny.onnx), [pts_backbone_neck_head_centerpoint_tiny.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_backbone_neck_head_centerpoint_tiny.onnx)
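For example, assuming `wget` is available, the base model's files can be fetched from the command line (same URLs as above):

```bash
# Download both ONNX files for the base Centerpoint model
wget https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_voxel_encoder_centerpoint.onnx
wget https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_backbone_neck_head_centerpoint.onnx
```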
`Centerpoint` was trained on `nuScenes` (~28k lidar frames) [8] and TIER IV's internal database (~11k lidar frames) for 60 epochs.
@@ -121,22 +121,22 @@ pip install -v -e .
#### Use Training Repository with Docker
- Alternatively, you can use Docker to run the mmdetection3d repository.We provide a Dockerfile to build a Docker image with the mmdetection3d repository and its dependencies.
+ Alternatively, you can use Docker to run the mmdetection3d repository. We provide a Dockerfile to build a Docker image with the mmdetection3d repository and its dependencies.
Clone the fork of the mmdetection3d repository:
```bash
git clone https://github.com/autowarefoundation/mmdetection3d.git
```
- Build the Docker image by running the following command
+ Build the Docker image by running the following command:
```bash
cd mmdetection3d
docker build -t mmdetection3d -f docker/Dockerfile .
```
- Run the Docker container
+ Run the Docker container:
```bash
docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmdetection3d/data mmdetection3d
@@ -166,9 +166,10 @@ python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./dat
#### Prepare the config file
The configuration file that illustrates how to train the CenterPoint model with the NuScenes dataset is
- located at mmdetection3d/projects/AutowareCenterPoint/configs. This configuration file is a derived version of the
- `centerpoint_pillar02_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d.py` configuration file from mmdetection3D.
- In this custom configuration, the **use_voxel_center_z parameter** is set to **False** to deactivate the z coordinate of the voxel center,
+ located at `mmdetection3d/projects/AutowareCenterPoint/configs`. This configuration file is a derived version of
+ [this centerpoint configuration file](https://github.com/autowarefoundation/mmdetection3d/blob/5c0613be29bd2e51771ec5e046d89ba3089887c7/configs/centerpoint/centerpoint_pillar02_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d.py)
+ from mmdetection3D.
+ In this custom configuration, the **use_voxel_center_z parameter** is set as **False** to deactivate the z coordinate of the voxel center,
aligning with the original paper's specifications and making the model compatible with Autoware. Additionally, the filter size is set as **[32, 32]**.
The CenterPoint model can be tailored to your specific requirements by modifying various parameters within the configuration file.
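For illustration, a derived configuration of this kind typically inherits the base file and overrides only the parameters discussed above. The sketch below follows mmdetection3d's standard config-inheritance conventions and is not the exact contents of the Autoware file; in particular, the relative `_base_` path and the mapping of the filter size onto `feat_channels` are assumptions:

```python
# Minimal sketch of a derived CenterPoint config (illustrative, not the
# actual Autoware file). It inherits the upstream base config and overrides
# only the parameters mentioned above.
_base_ = [
    # Assumed relative path from projects/AutowareCenterPoint/configs/
    # to the upstream config directory.
    '../../../configs/centerpoint/'
    'centerpoint_pillar02_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d.py'
]

model = dict(
    pts_voxel_encoder=dict(
        # Deactivate the z coordinate of the voxel center, per the original
        # paper and for Autoware compatibility (Autoware-fork parameter).
        use_voxel_center_z=False,
        # Filter size [32, 32]; assumed here to map onto feat_channels.
        feat_channels=[32, 32],
    )
)
```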
@@ -190,7 +191,6 @@ including training, evaluation, and fine-tuning of models. It is organized in th
##### Download the sample dataset
```bash
-
wget https://autoware-files.s3.us-west-2.amazonaws.com/dataset/lidar_detection_sample_dataset.tar.gz
# Extract the dataset to a folder of your choice
tar -xvf lidar_detection_sample_dataset.tar.gz
@@ -200,10 +200,9 @@ ln -s /PATH/TO/DATASET/ /PATH/TO/mmdetection3d/data/tier4_dataset/
##### Prepare dataset and evaluate trained model
- Create .pkl files for training, evaluation, and testing.
+ Create `.pkl` files for training, evaluation, and testing.
```bash
-
python tools/create_data.py T4Dataset --root-path data/sample_dataset/ --out-dir data/sample_dataset/ --extra-tag T4Dataset --version sample_dataset --annotation-hz 2
```
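Once the `.pkl` files are generated, a trained checkpoint can be evaluated with mmdetection3d's standard test script. The config and checkpoint paths below are illustrative placeholders, not files shipped with this guide:

```bash
# Evaluate a trained checkpoint on the prepared dataset
# (replace the config and checkpoint paths with your own)
python tools/test.py projects/AutowareCenterPoint/configs/<your_config>.py work_dirs/<your_run>/epoch_20.pth
```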