For evaluation purposes, we have included a sample dataset captured from the vehicle, which is equipped with the following LiDAR sensors:
1 x Velodyne VLS128, 4 x Velodyne VLP16, and 1 x Robosense RS Bpearl. This dataset comprises 600 LiDAR frames and encompasses 5 distinct classes with 3D annotations: 6905 cars, 3951 pedestrians,
75 cyclists, 162 buses, and 326 trucks. In the sample dataset, frames are annotated at 2 frames per second. You can employ this dataset for a wide range of purposes,
including training, evaluation, and fine-tuning of models. It is organized in the Tier4Dataset format.
##### Download the sample dataset
```bash
TODO(kaancolak): add the link to the sample dataset
```
Evaluation results could be relatively low due to variations in sensor modalities between the sample dataset
and the training dataset. The model's training parameters were originally tailored to the NuScenes dataset, which employs a single LiDAR
sensor positioned atop the vehicle. In contrast, the provided sample dataset comprises concatenated point clouds positioned at
the `base_link` location of the vehicle.
The lidar_centerpoint implementation requires two ONNX models as input: the voxel encoder and the backbone-neck-head of the CenterPoint model. Other aspects of the network,
such as preprocessing operations, are implemented externally. In our fork of the mmdetection3d repository,
we have included a script that converts the CenterPoint model to an Autoware-compatible ONNX format.
You can find it in the `mmdetection3d/tools/centerpoint_onnx_converter.py` file.
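
As a rough sketch of how the conversion might be invoked, assuming the script follows the usual mmdetection3d tool convention of taking a training config and a checkpoint (the paths and the `--work-dir` flag below are assumptions, not the script's confirmed interface):

```bash
# Hypothetical invocation; the argument layout mirrors common mmdetection3d
# tools, but verify the real interface with:
#   python tools/centerpoint_onnx_converter.py --help
python tools/centerpoint_onnx_converter.py \
    configs/centerpoint/centerpoint_custom.py \
    work_dirs/centerpoint_custom/latest.pth \
    --work-dir work_dirs/centerpoint_onnx
```

The conversion should yield the two ONNX files described above, the voxel encoder and the backbone-neck-head, which the lidar_centerpoint node then loads.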
Create a new config file named **centerpoint_custom.param.yaml** under the config file directory of the lidar_centerpoint node. Set the parameters of the config file, such as
`point_cloud_range`, `point_feature_size`, `voxel_size`, etc., according to the training config file.
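
As an illustrative sketch only, the file might look like the following; the values shown are placeholders that must be copied from the training config you actually used, and the exact key set should be checked against the lidar_centerpoint node's parameter schema:

```yaml
# centerpoint_custom.param.yaml -- illustrative placeholder values; copy the
# real numbers from the training config used to export your ONNX models.
/**:
  ros__parameters:
    class_names: ["CAR", "TRUCK", "BUS", "BICYCLE", "PEDESTRIAN"]
    point_feature_size: 4          # number of features per input point
    max_voxel_size: 40000          # maximum number of voxels per frame
    point_cloud_range: [-76.8, -76.8, -4.0, 76.8, 76.8, 6.0]  # [x_min, y_min, z_min, x_max, y_max, z_max]
    voxel_size: [0.32, 0.32, 10.0] # voxel dimensions in meters [x, y, z]
    downsample_factor: 1
    encoder_in_feature_size: 9
```

If these values do not match the ones used at training time, the exported ONNX models and the node's preprocessing will disagree and detection quality will degrade silently.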