
Commit: ef15632
Parent: 2f780bd
Author: M. Fatih Cırıt

small fixes and spellcheck

Signed-off-by: M. Fatih Cırıt <mfc@leodrive.ai>

1 file changed (+9 −10): perception/lidar_centerpoint/README.md
@@ -62,7 +62,7 @@ ros2 launch lidar_centerpoint lidar_centerpoint.launch.xml model_name:=centerpoi

 You can download the onnx format of trained models by clicking on the links below.

-- Centerpoint : [pts_voxel_encoder_centerpoint.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_voxel_encoder_centerpoint.onnx), [pts_backbone_neck_head_centerpoint.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_backbone_neck_head_centerpoint.onnx)
+- Centerpoint: [pts_voxel_encoder_centerpoint.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_voxel_encoder_centerpoint.onnx), [pts_backbone_neck_head_centerpoint.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_backbone_neck_head_centerpoint.onnx)
 - Centerpoint tiny: [pts_voxel_encoder_centerpoint_tiny.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_voxel_encoder_centerpoint_tiny.onnx), [pts_backbone_neck_head_centerpoint_tiny.onnx](https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_backbone_neck_head_centerpoint_tiny.onnx)

 `Centerpoint` was trained in `nuScenes` (~28k lidar frames) [8] and TIER IV's internal database (~11k lidar frames) for 60 epochs.
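
For reference, the ONNX files linked in the hunk above can also be fetched non-interactively. A minimal sketch using the same URLs (the destination directory is an arbitrary choice; the tiny variant follows the analogous links):

```bash
# Download the Centerpoint ONNX models referenced in the README (target directory is arbitrary)
mkdir -p ~/centerpoint_models && cd ~/centerpoint_models
wget https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_voxel_encoder_centerpoint.onnx
wget https://awf.ml.dev.web.auto/perception/models/centerpoint/v2/pts_backbone_neck_head_centerpoint.onnx
```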
@@ -121,22 +121,22 @@ pip install -v -e .

 #### Use Training Repository with Docker

-Alternatively, you can use Docker to run the mmdetection3d repository.We provide a Dockerfile to build a Docker image with the mmdetection3d repository and its dependencies.
+Alternatively, you can use Docker to run the mmdetection3d repository. We provide a Dockerfile to build a Docker image with the mmdetection3d repository and its dependencies.

 Clone fork of the mmdetection3d repository

 ```bash
 git clone https://github.com/autowarefoundation/mmdetection3d.git
 ```

-Build the Docker image by running the following command
+Build the Docker image by running the following command:

 ```bash
 cd mmdetection3d
 docker build -t mmdetection3d -f docker/Dockerfile .
 ```

-Run the Docker container
+Run the Docker container:

 ```bash
 docker run --gpus all --shm-size=8g -it -v {DATA_DIR}:/mmdetection3d/data mmdetection3d
@@ -166,9 +166,10 @@ python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./dat

 #### Prepare the config file

 The configuration file that illustrates how to train the CenterPoint model with the NuScenes dataset is
-located at mmdetection3d/projects/AutowareCenterPoint/configs. This configuration file is a derived version of the
-`centerpoint_pillar02_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d.py` configuration file from mmdetection3D.
-In this custom configuration, the **use_voxel_center_z parameter** is set to **False** to deactivate the z coordinate of the voxel center,
+located at `mmdetection3d/projects/AutowareCenterPoint/configs`. This configuration file is a derived version of
+[this centerpoint configuration file](https://github.com/autowarefoundation/mmdetection3d/blob/5c0613be29bd2e51771ec5e046d89ba3089887c7/configs/centerpoint/centerpoint_pillar02_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d.py)
+from mmdetection3D.
+In this custom configuration, the **use_voxel_center_z parameter** is set as **False** to deactivate the z coordinate of the voxel center,
 aligning with the original paper's specifications and making the model compatible with Autoware. Additionally, the filter size is set as **[32, 32]**.

 The CenterPoint model can be tailored to your specific requirements by modifying various parameters within the configuration file.
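
Once the parameters are adjusted, training with the customized configuration would go through mmdetection3d's standard training entry point. A minimal sketch, assuming a config file in the directory mentioned above (the filename below is a placeholder, not taken from the README):

```bash
# Train CenterPoint with the Autoware-specific config (replace the placeholder filename with the actual config file)
python tools/train.py projects/AutowareCenterPoint/configs/<your_centerpoint_config>.py --work-dir work_dirs/centerpoint_autoware
```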
@@ -190,7 +191,6 @@ including training, evaluation, and fine-tuning of models. It is organized in th

 ##### Download the sample dataset

 ```bash
-
 wget https://autoware-files.s3.us-west-2.amazonaws.com/dataset/lidar_detection_sample_dataset.tar.gz
 #Extract the dataset to a folder of your choice
 tar -xvf lidar_detection_sample_dataset.tar.gz
@@ -200,10 +200,9 @@ ln -s /PATH/TO/DATASET/ /PATH/TO/mmdetection3d/data/tier4_dataset/

 ##### Prepare dataset and evaluate trained model

-Create .pkl files for training, evaluation, and testing.
+Create `.pkl` files for training, evaluation, and testing.

 ```bash
-
 python tools/create_data.py T4Dataset --root-path data/sample_dataset/ --out-dir data/sample_dataset/ --extra-tag T4Dataset --version sample_dataset --annotation-hz 2
 ```

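With the `.pkl` info files in place, a trained checkpoint can then be evaluated through mmdetection3d's standard test entry point. A minimal sketch (the config and checkpoint paths are placeholders, not taken from the README):

```bash
# Evaluate a trained CenterPoint checkpoint on the prepared dataset (paths below are placeholders)
python tools/test.py projects/AutowareCenterPoint/configs/<your_centerpoint_config>.py work_dirs/centerpoint_autoware/<your_checkpoint>.pth
```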