Commit cee5c76

Author: gz-chenxiangrong
Commit message: cpp_update

File tree: 318 files changed, 46368 lines added, 0 deleted

.gitignore (+7 lines)

__pycache__
datasets/
models/__pycache__
runs
utils/__pycache__
utils/wandb_logging/__pycache__
wandb

LICENSE (+674 lines)

Large diffs are not rendered by default.

README.md (+316 lines)

# YOLOv5-Lite: Lighter, faster and easier to deploy ![](https://zenodo.org/badge/DOI/10.5281/zenodo.5241425.svg)

![paper figure](https://user-images.githubusercontent.com/82716366/167448925-a431d3a4-ad5d-491d-be95-c90701122a54.png)

YOLOv5-Lite applies a series of ablation experiments to YOLOv5 to make it lighter (smaller FLOPs, lower memory footprint, and fewer parameters), faster (channel shuffle and a channel-reduced YOLOv5 head; it runs at 10+ FPS on a Raspberry Pi 4B with 320×320 input), and easier to deploy (the Focus layer and its four slice operations are removed, which keeps the quantization accuracy loss within an acceptable range).

![image](https://user-images.githubusercontent.com/82716366/135564164-3ec169c8-93a7-4ea3-b0dc-40f1059601ef.png)
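
For context on the Focus removal mentioned above, here is a minimal PyTorch sketch (illustrative only, not the repo's exact modules) contrasting YOLOv5's Focus stem, whose four strided slice ops are awkward for mobile inference engines and quantizers, with the plain strided convolution that deploy-friendly variants use instead:

```python
# Illustrative sketch, not the repo's exact modules: YOLOv5-style Focus
# (space-to-depth via four strided slices, then conv) vs. a plain stride-2 conv.
import torch
import torch.nn as nn

class Focus(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv2d(c_in * 4, c_out, 3, 1, 1)

    def forward(self, x):
        # Four slices halve H and W and quadruple the channels; these slice
        # ops are what complicate deployment and quantization.
        return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                                    x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1))

class StemConv(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, 3, 2, 1)  # one stride-2 conv, no slicing

    def forward(self, x):
        return self.conv(x)

x = torch.randn(1, 3, 320, 320)
print(Focus(3, 32)(x).shape, StemConv(3, 32)(x).shape)  # both: [1, 32, 160, 160]
```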

## Comparison of ablation experiment results

ID|Model|Input size|FLOPs|Params|Size (M)|mAP@0.5|mAP@0.5:0.95
:-----:|:-----:|:-----:|:----------:|:----:|:----:|:----:|:----:
001|yolo-fastest|320×320|0.25G|0.35M|1.4|24.4|-
002|YOLOv5-Lite<sub>e</sub><sup>ours</sup>|320×320|0.73G|0.78M|1.7|35.1|-
003|NanoDet-m|320×320|0.72G|0.95M|1.8|-|20.6
004|yolo-fastest-xl|320×320|0.72G|0.92M|3.5|34.3|-
005|YOLOX<sub>Nano</sub>|416×416|1.08G|0.91M|7.3 (fp32)|-|25.8
006|yolov3-tiny|416×416|6.96G|6.06M|23.0|33.1|16.6
007|yolov4-tiny|416×416|5.62G|8.86M|33.7|40.2|21.7
008|YOLOv5-Lite<sub>s</sub><sup>ours</sup>|416×416|1.66G|1.64M|3.4|42.0|25.2
009|YOLOv5-Lite<sub>c</sub><sup>ours</sup>|512×512|5.92G|4.57M|9.2|50.9|32.5
010|NanoDet-EfficientLite2|512×512|7.12G|4.71M|18.3|-|32.6
011|YOLOv5s (6.0)|640×640|16.5G|7.23M|14.0|56.0|37.2
012|YOLOv5-Lite<sub>g</sub><sup>ours</sup>|640×640|15.6G|5.39M|10.9|57.6|39.1

See the wiki: https://github.com/ppogg/YOLOv5-Lite/wiki/Test-the-map-of-models-about-coco

## Comparison on different platforms

Equipment|Computing backend|System|Input|Framework|v5lite-e|v5lite-s|v5lite-c|v5lite-g|YOLOv5s
:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:
Intel|@i5-10210U|Windows (x86)|640×640|openvino|-|-|46ms|-|131ms
Nvidia|@RTX 2080Ti|Linux (x86)|640×640|torch|-|-|-|15ms|14ms
Redmi K30|@Snapdragon 730G|Android (armv8)|320×320|ncnn|27ms|38ms|-|-|163ms
Xiaomi 10|@Snapdragon 865|Android (armv8)|320×320|ncnn|10ms|14ms|-|-|163ms
Raspberry Pi 4B|@ARM Cortex-A72|Linux (arm64)|320×320|ncnn|-|84ms|-|-|371ms
Raspberry Pi 4B|@ARM Cortex-A72|Linux (arm64)|320×320|mnn|-|76ms|-|-|356ms
AXera-Pi|Cortex A7@CPU<br />3.6TOPs @NPU|Linux (arm64)|640×640|axpi|-|-|-|22ms|22ms

* All results above are 4-thread test benchmarks.
* The Raspberry Pi 4B tests enable bf16s optimization on [Raspberry Pi 64-bit OS](http://downloads.raspberrypi.org/raspios_arm64/images/raspios_arm64-2020-08-24/).

### QQ group: 993965802

Answer to join the group: pruning, distillation, quantization, or low-rank decomposition (any one of them works)

## ·Model Zoo·

#### @v5lite-e:

Model|Size|Backbone|Head|Framework|Design for
:---:|:---:|:---:|:---:|:---:|:---
v5Lite-e.pt|1.7m|shufflenetv2 (Megvii)|v5Litee-head|Pytorch|Arm-cpu
v5Lite-e.bin<br />v5Lite-e.param|1.7m|shufflenetv2|v5Litee-head|ncnn|Arm-cpu
v5Lite-e-int8.bin<br />v5Lite-e-int8.param|0.9m|shufflenetv2|v5Litee-head|ncnn|Arm-cpu
v5Lite-e-fp32.mnn|3.0m|shufflenetv2|v5Litee-head|mnn|Arm-cpu
v5Lite-e-fp32.tnnmodel<br />v5Lite-e-fp32.tnnproto|2.9m|shufflenetv2|v5Litee-head|tnn|arm-cpu
v5Lite-e-320.onnx|3.1m|shufflenetv2|v5Litee-head|onnxruntime|x86-cpu

#### @v5lite-s:

Model|Size|Backbone|Head|Framework|Design for
:---:|:---:|:---:|:---:|:---:|:---
v5Lite-s.pt|3.4m|shufflenetv2 (Megvii)|v5Lites-head|Pytorch|Arm-cpu
v5Lite-s.bin<br />v5Lite-s.param|3.3m|shufflenetv2|v5Lites-head|ncnn|Arm-cpu
v5Lite-s-int8.bin<br />v5Lite-s-int8.param|1.7m|shufflenetv2|v5Lites-head|ncnn|Arm-cpu
v5Lite-s.mnn|3.3m|shufflenetv2|v5Lites-head|mnn|Arm-cpu
v5Lite-s-int4.mnn|987k|shufflenetv2|v5Lites-head|mnn|Arm-cpu
v5Lite-s-fp16.bin<br />v5Lite-s-fp16.xml|3.4m|shufflenetv2|v5Lites-head|openvino|x86-cpu
v5Lite-s-fp32.bin<br />v5Lite-s-fp32.xml|6.8m|shufflenetv2|v5Lites-head|openvino|x86-cpu
v5Lite-s-fp16.tflite|3.3m|shufflenetv2|v5Lites-head|tflite|arm-cpu
v5Lite-s-fp32.tflite|6.7m|shufflenetv2|v5Lites-head|tflite|arm-cpu
v5Lite-s-int8.tflite|1.8m|shufflenetv2|v5Lites-head|tflite|arm-cpu
v5Lite-s-416.onnx|6.4m|shufflenetv2|v5Lites-head|onnxruntime|x86-cpu

#### @v5lite-c:

Model|Size|Backbone|Head|Framework|Design for
:---:|:---:|:---:|:---:|:---:|:---:
v5Lite-c.pt|9m|PPLcnet (Baidu)|v5s-head|Pytorch|x86-cpu / x86-vpu
v5Lite-c.bin<br />v5Lite-c.xml|8.7m|PPLcnet|v5s-head|openvino|x86-cpu / x86-vpu
v5Lite-c-512.onnx|18m|PPLcnet|v5s-head|onnxruntime|x86-cpu

#### @v5lite-g:

Model|Size|Backbone|Head|Framework|Design for
:---:|:---:|:---:|:---:|:---:|:---:
v5Lite-g.pt|10.9m|Repvgg (Tsinghua)|v5Liteg-head|Pytorch|x86-gpu / arm-gpu / arm-npu
v5Lite-g-int8.engine|8.5m|Repvgg-yolov5|v5Liteg-head|Tensorrt|x86-gpu / arm-gpu / arm-npu
v5lite-g-int8.tmfile|8.7m|Repvgg-yolov5|v5Liteg-head|Tengine|arm-npu
v5Lite-g-640.onnx|21m|Repvgg-yolov5|yolov5-head|onnxruntime|x86-cpu
v5Lite-g-640.joint|7.1m|Repvgg-yolov5|yolov5-head|axpi|arm-npu

#### Download Link:

> - [ ] `v5lite-e.pt`: | [Baidu Drive](https://pan.baidu.com/s/1bjXo7KIFkOnB3pxixHeMPQ) | [Google Drive](https://drive.google.com/file/d/1_DvT_qjznuE-ev_pDdGKwRV3MjZ3Zos8/view?usp=sharing) |<br>
>> |──────`ncnn-fp16`: | [Baidu Drive](https://pan.baidu.com/s/1_QvWvkhHB7kdcRZ6k4at1g) | [Google Drive](https://drive.google.com/drive/folders/1w4mThJmqjhT1deIXMQAQ5xjWI3JNyzUl?usp=sharing) |<br>
>> |──────`ncnn-int8`: | [Baidu Drive](https://pan.baidu.com/s/1JO8qbbVx6zJ-6aq5EgM6PA) | [Google Drive](https://drive.google.com/drive/folders/1YNtNVWlRqN8Dwc_9AtRkN0LFkDeJ92gN?usp=sharing) |<br>
>> └──────`onnx-fp32`: | [Baidu Drive](https://pan.baidu.com/s/1gwLqiPLTDjlSqWJ7AnEB1A) | [Google Drive](https://drive.google.com/file/d/15_z6VlbWuonsHak-7bdtw-QOcvOaMddB/view?usp=sharing) |<br>
> - [ ] `v5lite-s.pt`: | [Baidu Drive](https://pan.baidu.com/s/1j0n0K1kqfv1Ouwa2QSnzCQ) | [Google Drive](https://drive.google.com/file/d/1ccLTmGB5AkKPjDOyxF3tW7JxGWemph9f/view?usp=sharing) |<br>
>> |──────`ncnn-fp16`: | [Baidu Drive](https://pan.baidu.com/s/1kWtwx1C0OTTxbwqJyIyXWg) | [Google Drive](https://drive.google.com/drive/folders/1w4mThJmqjhT1deIXMQAQ5xjWI3JNyzUl?usp=sharing) |<br>
>> |──────`ncnn-int8`: | [Baidu Drive](https://pan.baidu.com/s/1QX6-oNynrW-f3i0P0Hqe4w) | [Google Drive](https://drive.google.com/drive/folders/1YNtNVWlRqN8Dwc_9AtRkN0LFkDeJ92gN?usp=sharing) |<br>
>> |──────`mnn-fp16`: | [Baidu Drive](https://pan.baidu.com/s/12lOtPTl4xujWm5BbFJh3zA) | [Google Drive](https://drive.google.com/drive/folders/1PpFoZ4b8mVs1GmMxgf0WUtXUWaGK_JZe?usp=sharing) |<br>
>> |──────`mnn-int4`: | [Baidu Drive](https://pan.baidu.com/s/11fbjFi18xkq4ltAKUKDOCA) | [Google Drive](https://drive.google.com/drive/folders/1mSU8g94c77KKsHC-07p5V3tJOZYPQ-g6?usp=sharing) |<br>
>> |──────`onnx-fp32`: | [Baidu Drive](https://pan.baidu.com/s/1gwLqiPLTDjlSqWJ7AnEB1A) | [Google Drive](https://drive.google.com/file/d/123feVchyuqCRZV038I1Gn1gpJEVK4GFh/view?usp=sharing) |<br>
>> └──────`tengine-fp32`: | [Baidu Drive](https://pan.baidu.com/s/123r630O8Fco7X59wFU1crA) | [Google Drive](https://drive.google.com/drive/folders/1VWmI2BC9MjH7BsrOz4VlSDVnZMXaxGOE?usp=sharing) |<br>
> - [ ] `v5lite-c.pt`: | [Baidu Drive](https://pan.baidu.com/s/1obs6uRB79m8e3uASVR6P1A) | [Google Drive](https://drive.google.com/file/d/1lHYRQKjqKCRXghUjwWkUB0HQ8ccKH6qa/view?usp=sharing) |<br>
>> |──────`onnx-fp32`: | [Baidu Drive](https://pan.baidu.com/s/1gwLqiPLTDjlSqWJ7AnEB1A) | [Google Drive](https://drive.google.com/file/d/1VJBfZPikTce5vUatC2ZsAWQlmMdcArs2/view?usp=sharing) |<br>
>> └──────`openvino-fp16`: | [Baidu Drive](https://pan.baidu.com/s/18p8HAyGJdmo2hham250b4A) | [Google Drive](https://drive.google.com/drive/folders/1s4KPSC4B0shG0INmQ6kZuPLnlUKAATyv?usp=sharing) |<br>
> - [ ] `v5lite-g.pt`: | [Baidu Drive](https://pan.baidu.com/s/14zdTiTMI_9yTBgKGbv9pQw) | [Google Drive](https://drive.google.com/file/d/1oftzqOREGqDCerf7DtD5BZp9YWELlkMe/view?usp=sharing) |<br>
>> |──────`onnx-fp32`: | [Baidu Drive](https://pan.baidu.com/s/1gwLqiPLTDjlSqWJ7AnEB1A) | [Google Drive](https://drive.google.com/file/d/1bJByk9eoS6pv8Z3N4bcLRCV3i7uk24aU/view?usp=sharing) |<br>
>> └──────`axpi-int8`: | [GitHub](https://github.com/AXERA-TECH/ax-models/blob/main/ax620/v5Lite-g-sim-640.joint) |<br>

Baidu Drive Password: `pogg`

#### v5lite-s model: TFLite Float32, Float16, INT8, dynamic-range quantization, ONNX, TFJS, TensorRT, OpenVINO IR FP32/FP16, Myriad Inference Engine Blob, CoreML
[https://github.com/PINTO0309/PINTO_model_zoo/tree/main/180_YOLOv5-Lite](https://github.com/PINTO0309/PINTO_model_zoo/tree/main/180_YOLOv5-Lite)

#### Thanks to PINTO0309: [https://github.com/PINTO0309/PINTO_model_zoo/tree/main/180_YOLOv5-Lite](https://github.com/PINTO0309/PINTO_model_zoo/tree/main/180_YOLOv5-Lite)
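
As a hedged sketch of how one of these ONNX exports can be consumed with onnxruntime: the 416×416 input size matches `v5Lite-s-416.onnx`, and the `[1, N, 5 + num_classes]` output layout is the usual YOLOv5-style prediction format, which should be verified against the actual export; letterboxing and NMS are omitted for brevity.

```python
# Minimal onnxruntime inference sketch for an exported v5Lite ONNX model.
# Assumes 416x416 input and a YOLOv5-style [1, N, 5 + num_classes] output.
import cv2
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("v5Lite-s-416.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name

img = cv2.imread("sample.jpg")                       # BGR, HxWx3
blob = cv2.resize(img, (416, 416))[:, :, ::-1]       # resize, BGR -> RGB
blob = blob.transpose(2, 0, 1)[None].astype(np.float32) / 255.0  # NCHW, 0..1

pred = sess.run(None, {input_name: blob})[0]         # [1, N, 5 + num_classes]
obj_conf = pred[0, :, 4]                             # objectness per candidate box
print("boxes above 0.5 objectness:", int((obj_conf > 0.5).sum()))
```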

## How to use

<details open>
<summary>Install</summary>

[**Python>=3.6.0**](https://www.python.org/) is required, with all
[requirements.txt](https://github.com/ppogg/YOLOv5-Lite/blob/master/requirements.txt) dependencies installed, including
[**PyTorch>=1.7**](https://pytorch.org/get-started/locally/):
<!-- $ sudo apt update && apt install -y libgl1-mesa-glx libsm6 libxext6 libxrender-dev -->

```bash
$ git clone https://github.com/ppogg/YOLOv5-Lite
$ cd YOLOv5-Lite
$ pip install -r requirements.txt
```

</details>
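
A quick post-install sanity check (a hedged snippet, not a repo script) to confirm the PyTorch>=1.7 requirement and see whether CUDA is available:

```python
# Environment sanity check: verifies the PyTorch >= 1.7 requirement and
# reports CUDA availability (training simply falls back to CPU without it).
import torch

version = tuple(int(v) for v in torch.__version__.split("+")[0].split(".")[:2])
assert version >= (1, 7), f"PyTorch >= 1.7 required, found {torch.__version__}"
print(f"torch {torch.__version__} | CUDA available: {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```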

<details>
<summary>Inference with detect.py</summary>

`detect.py` runs inference on a variety of sources, downloading models automatically from
the [latest YOLOv5-Lite release](https://github.com/ppogg/YOLOv5-Lite/releases) and saving results to `runs/detect`.

```bash
$ python detect.py --source 0  # webcam
                   file.jpg  # image
                   file.mp4  # video
                   path/  # directory
                   path/*.jpg  # glob
                   'https://youtu.be/NUsoVlDFqZg'  # YouTube
                   'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream
```

</details>

<details open>
<summary>Training</summary>

```bash
$ python train.py --data coco.yaml --cfg v5lite-e.yaml --weights v5lite-e.pt --batch-size 128
                                         v5lite-s.yaml           v5lite-s.pt              128
                                         v5lite-c.yaml           v5lite-c.pt               96
                                         v5lite-g.yaml           v5lite-g.pt               64
```

Multi-GPU training is several times faster:

```bash
$ python -m torch.distributed.launch --nproc_per_node 2 train.py
```

</details>

<details open>
<summary>DataSet</summary>

Training and validation set layout (the paths point at the xx.jpg images):

```bash
train: ../coco/images/train2017/
val: ../coco/images/val2017/
```
```bash
├── images  # xx.jpg example
│   ├── train2017
│   │   ├── 000001.jpg
│   │   ├── 000002.jpg
│   │   └── 000003.jpg
│   └── val2017
│       ├── 100001.jpg
│       ├── 100002.jpg
│       └── 100003.jpg
└── labels  # xx.txt example
    ├── train2017
    │   ├── 000001.txt
    │   ├── 000002.txt
    │   └── 000003.txt
    └── val2017
        ├── 100001.txt
        ├── 100002.txt
        └── 100003.txt
```

</details>
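
Each `xx.txt` above follows the standard YOLOv5 label convention: one object per line as `class x_center y_center width height`, normalized to [0, 1] by the image size. A small sketch with a hypothetical helper (not a repo script; paths and class ids are examples) that writes one label line from a pixel-space box:

```python
# Converts a pixel-space box (x1, y1, x2, y2) into one YOLO-format label line.
# Hypothetical helper for illustration only.
def to_yolo_line(cls_id, x1, y1, x2, y2, img_w, img_h):
    xc = (x1 + x2) / 2 / img_w   # normalized box center x
    yc = (y1 + y2) / 2 / img_h   # normalized box center y
    w = (x2 - x1) / img_w        # normalized width
    h = (y2 - y1) / img_h        # normalized height
    return f"{cls_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# Example: class 0 box on a 640x480 image, written to a train2017 label file.
with open("../coco/labels/train2017/000001.txt", "w") as f:
    f.write(to_yolo_line(0, 120, 80, 360, 400, img_w=640, img_h=480) + "\n")
```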

<details open>
<summary>Auto LabelImg</summary>

[**Link**](https://github.com/ppogg/AutoLabelImg)

You can use AutoLabelImg, built on YOLOv5-5.0 and YOLOv5-Lite, to auto-annotate your data, biubiubiu 🚀 🚀 🚀
<img src="https://user-images.githubusercontent.com/82716366/177030174-dc3a5827-2821-4d8c-8d78-babe83c42fbf.JPG" width="950"/><br/>

</details>

<details open>
<summary>Model Hub</summary>

Here, the original components of YOLOv5 and the reproduced components of YOLOv5-Lite are organized and stored in the [model hub](https://github.com/ppogg/YOLOv5-Lite/tree/master/models/hub).

![modelhub](https://user-images.githubusercontent.com/82716366/146787562-e2c1c4c1-726e-4efc-9eae-d92f34333e8d.jpg)

</details>

<details open>
<summary>Heatmap Analysis</summary>

```bash
$ python main.py --type all
```

![paper figure 2](https://user-images.githubusercontent.com/82716366/167449474-3689c2bf-197a-4403-849c-b85db6bcc476.png)

Updating ...

</details>

## How to deploy

[**ncnn**](https://github.com/ppogg/YOLOv5-Lite/blob/master/cpp_demo/ncnn/README.md) for arm-cpu

[**mnn**](https://github.com/ppogg/YOLOv5-Lite/blob/master/cpp_demo/mnn/README.md) for arm-cpu

[**openvino**](https://github.com/ppogg/YOLOv5-Lite/blob/master/python_demo/openvino/README.md) for x86-cpu or x86-vpu

[**tensorrt(C++)**](https://github.com/ppogg/YOLOv5-Lite/blob/master/cpp_demo/tensorrt/README.md) for arm-gpu, arm-npu, or x86-gpu

[**tensorrt(Python)**](https://github.com/ppogg/YOLOv5-Lite/tree/master/python_demo/tensorrt) for arm-gpu, arm-npu, or x86-gpu

[**Android**](https://github.com/ppogg/YOLOv5-Lite/blob/master/android_demo/ncnn-android-v5lite/README.md) for arm-cpu

## Android_demo

This demo runs on a Redmi phone with a Snapdragon 730G processor, using YOLOv5-Lite for detection. The performance is as follows:

link: https://github.com/ppogg/YOLOv5-Lite/tree/master/android_demo/ncnn-android-v5lite

Android_v5Lite-s: https://drive.google.com/file/d/1CtohY68N2B9XYuqFLiTp-Nd2kuFWgAUR/view?usp=sharing

Android_v5Lite-g: https://drive.google.com/file/d/1FnvkWxxP_aZwhi000xjIuhJ_OhqOUJcj/view?usp=sharing

New Android app: [link] https://pan.baidu.com/s/1PRhW4fI1jq8VboPyishcIQ [password] pogg

<img src="https://user-images.githubusercontent.com/82716366/149959014-5f027b1c-67b6-47e2-976b-59a7c631b0f2.jpg" width="650"/><br/>

## More detailed explanation

#### Detailed model links:

What is the YOLOv5-Lite S/E model:
zhihu link (Chinese): https://zhuanlan.zhihu.com/p/400545131

What is the YOLOv5-Lite C model:
zhihu link (Chinese): https://zhuanlan.zhihu.com/p/420737659

What is the YOLOv5-Lite G model:
zhihu link (Chinese): https://zhuanlan.zhihu.com/p/410874403

How to deploy on ncnn with fp16 or int8:
csdn link (Chinese): https://blog.csdn.net/weixin_45829462/article/details/119787840

How to deploy on onnxruntime:
zhihu link (Chinese): https://zhuanlan.zhihu.com/p/476533259

How to deploy on tensorrt:
zhihu link (Chinese): https://zhuanlan.zhihu.com/p/478630138

How to optimize on tensorrt:
zhihu link (Chinese): https://zhuanlan.zhihu.com/p/463074494

## Reference

https://github.com/ultralytics/yolov5

https://github.com/megvii-model/ShuffleNet-Series

https://github.com/Tencent/ncnn

## Citing YOLOv5-Lite

If you use YOLOv5-Lite in your research, please cite our work and give it a star ⭐:

```
@misc{yolov5lite2021,
  title = {YOLOv5-Lite: Lighter, faster and easier to deploy},
  author = {Xiangrong Chen and Ziman Gong},
  doi = {10.5281/zenodo.5241425},
  year = {2021}
}
```

android_demo/ncnn-android-v5lite/README.md (+53 lines)

The YOLOv5-Lite object detection demo

This is a sample ncnn Android project; it depends on the ncnn library and opencv-mobile:

https://github.com/Tencent/ncnn

https://github.com/nihui/opencv-mobile

## model_zoo

https://github.com/ppogg/ncnn-android-v5lite/tree/master/app/src/main/assets

## how to build and run
### step1
https://github.com/Tencent/ncnn/releases

* Download ncnn-YYYYMMDD-android-vulkan.zip or build ncnn for Android yourself
* Extract ncnn-YYYYMMDD-android-vulkan.zip into **app/src/main/jni** and change the **ncnn_DIR** path to yours in **app/src/main/jni/CMakeLists.txt**

### step2
https://github.com/nihui/opencv-mobile

* Download opencv-mobile-XYZ-android.zip
* Extract opencv-mobile-XYZ-android.zip into **app/src/main/jni** and change the **OpenCV_DIR** path to yours in **app/src/main/jni/CMakeLists.txt**

### step3
```
cd ncnn_Android/ncnn-android-yolov5/app/src/main/assets
# download (e.g. with wget) all the *.param and *.bin model files into this directory
```

### step4
* Open this project with Android Studio, build it and enjoy!

## some notes
* The Android NDK camera is used for best efficiency
* Crashes may happen on very old devices that lack the HAL3 camera interface
* All models are manually modified to accept dynamic input shapes
* Most small models run slower on GPU than on CPU; this is common
* FPS may be lower in dark environments because of longer camera exposure time

## screenshot
<img src="https://user-images.githubusercontent.com/82716366/151705519-de3ad1f1-e297-4125-989a-04e49dcf2876.jpg" width="600"/><br/>

<img src="https://pic1.zhimg.com/80/v2-c013df3638fd41d10103ea259b18e588_720w.jpg" width="300"/><br/>

## reference
https://github.com/nihui/ncnn-android-yolov5

https://github.com/FeiGeChuanShu/ncnn-android-yolox

https://github.com/ppogg/YOLOv5-Lite
