# Autoware artifacts

The Autoware perception stack uses models for inference. These models are automatically downloaded as part of the `setup-dev-env.sh` script.

The models are hosted by Web.Auto.

The default `data_dir` location is `~/autoware_data`.

## Download instructions

### Requirements

Install ansible by following the [ansible installation guide](../../README.md#ansible-installation).

### Download artifacts

#### Install ansible collections

```bash
cd ~/autoware # The root directory of the cloned repository
ansible-galaxy collection install -f -r "ansible-galaxy-requirements.yaml"
```

This step should be repeated when a new playbook is added.

#### Run the playbook

```bash
ansible-playbook autoware.dev_env.download_artifacts -e "data_dir=$HOME/autoware_data" --ask-become-pass
```

This will download and extract the artifacts to the specified directory and validate the checksums.
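
The checksum validation uses standard SHA-256 sums, so you can also spot-check any downloaded file yourself with `sha256sum -c`. A minimal sketch of the mechanism, using an illustrative stand-in file rather than a real artifact:

```shell
# Create a stand-in file, record its checksum, then verify it --
# the same sha256sum -c mechanism used to validate artifacts.
# "example.onnx" is illustrative, not an actual model file.
echo "example artifact contents" > example.onnx
sha256sum example.onnx > SHA256SUMS
sha256sum -c SHA256SUMS  # prints "example.onnx: OK" when the file is intact
```

If a file is corrupted or truncated, `sha256sum -c` reports `FAILED` for it and exits non-zero, which is a quick way to tell whether a download needs to be repeated.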