feat: add yolov10 node #9198

Open: wants to merge 63 commits into `main`.

Commits
f93a2f3
feat: add yolov10 node
Oct 30, 2024
be204c7
style(pre-commit): autofix
pre-commit-ci[bot] Oct 30, 2024
763063e
feat: add yolov10 node
Oct 30, 2024
f5ef6ba
Merge branch 'yolov10' of github.com:storrrrrrrrm/autoware.universe i…
Oct 30, 2024
032ac7f
style(pre-commit): autofix
pre-commit-ci[bot] Oct 30, 2024
b5bbede
fix ci error
Oct 31, 2024
f997928
Merge branch 'yolov10' of github.com:storrrrrrrrm/autoware.universe i…
Oct 31, 2024
7261a5b
style(pre-commit): autofix
pre-commit-ci[bot] Oct 31, 2024
a20535e
fix ci error
Oct 31, 2024
0f623b5
Merge branch 'yolov10' of github.com:storrrrrrrrm/autoware.universe i…
Oct 31, 2024
4a44dc4
style(pre-commit): autofix
pre-commit-ci[bot] Oct 31, 2024
b27a55b
fix ci error
Oct 31, 2024
46662df
Merge branch 'yolov10' of github.com:storrrrrrrrm/autoware.universe i…
Oct 31, 2024
bba9239
style(pre-commit): autofix
pre-commit-ci[bot] Oct 31, 2024
80b6ac8
add cls in result
Nov 1, 2024
eeb76f3
add cls in result
Nov 1, 2024
10a5a85
Merge branch 'yolov10' of github.com:storrrrrrrrm/autoware.universe i…
Nov 1, 2024
5bb1c57
style(pre-commit): autofix
pre-commit-ci[bot] Nov 1, 2024
fad0f0b
fix ci error
Nov 1, 2024
6757148
Merge branch 'yolov10' of github.com:storrrrrrrrm/autoware.universe i…
Nov 1, 2024
63c5898
fix ci error
Nov 1, 2024
cd2470c
style(pre-commit): autofix
pre-commit-ci[bot] Nov 1, 2024
df9a7cc
fix ci error
Nov 1, 2024
3fffd23
Merge branch 'yolov10' of github.com:storrrrrrrrm/autoware.universe i…
Nov 1, 2024
515f864
style(pre-commit): autofix
pre-commit-ci[bot] Nov 1, 2024
743caa5
Merge branch 'main' into yolov10
storrrrrrrrm Nov 4, 2024
76ce9f0
update readme
Nov 18, 2024
08f21d6
Merge branch 'yolov10' of github.com:storrrrrrrrm/autoware.universe i…
Nov 18, 2024
726c120
style(pre-commit): autofix
pre-commit-ci[bot] Nov 18, 2024
46c14a3
update readme
Nov 26, 2024
b6da7ac
Merge branch 'yolov10' of github.com:storrrrrrrrm/autoware.universe i…
Nov 26, 2024
1e8a265
style(pre-commit): autofix
pre-commit-ci[bot] Nov 26, 2024
1309d74
Update perception/autoware_tensorrt_yolov10/README.md
storrrrrrrrm Dec 2, 2024
74273f5
Merge branch 'main' into yolov10
storrrrrrrrm Dec 2, 2024
c964c1d
Merge branch 'main' of github.com:storrrrrrrrm/autoware.universe
Jan 22, 2025
9718c1f
Merge branch 'main' into yolov10
storrrrrrrrm Feb 10, 2025
3dee24b
Merge branch 'main' into yolov10
Feb 14, 2025
c071078
feat: fix ci
Feb 14, 2025
b3d8e16
apply autoware_prefix to dependency
xmfcx Feb 14, 2025
79d3833
style(pre-commit): autofix
pre-commit-ci[bot] Feb 14, 2025
0d36bee
feat: adpt to latest tensorrt_common
Feb 18, 2025
9490304
feat: adpt to latest tensorrt_common
Feb 18, 2025
805c8df
Merge branch 'main' into yolov10
storrrrrrrrm Feb 18, 2025
7b7e8c8
style(pre-commit): autofix
pre-commit-ci[bot] Feb 18, 2025
7ab0b9a
feat: fix ci
Feb 18, 2025
8378ad6
Merge branch 'yolov10' of github.com:storrrrrrrrm/autoware.universe i…
Feb 18, 2025
62ca2f8
style(pre-commit): autofix
pre-commit-ci[bot] Feb 18, 2025
15f5d4e
feat: fix ci
Feb 19, 2025
dd43168
Merge branch 'main' into yolov10
storrrrrrrrm Feb 19, 2025
8d535e4
feat: fix ci
Feb 19, 2025
fbf3f72
Merge branch 'yolov10' of github.com:storrrrrrrrm/autoware.universe i…
Feb 19, 2025
055ea8a
style(pre-commit): autofix
pre-commit-ci[bot] Feb 19, 2025
ee78a5c
feat: fix ci
Mar 4, 2025
207f819
feat: fix ci
Mar 4, 2025
33d784f
Merge branch 'main' into yolov10
storrrrrrrrm Mar 4, 2025
c513eaf
feat: adapt to latest code
Mar 5, 2025
3a7d481
style(pre-commit): autofix
pre-commit-ci[bot] Mar 5, 2025
82274dd
Merge branch 'main' into yolov10
storrrrrrrrm Mar 5, 2025
1ce4ccf
Merge branch 'main' into yolov10
storrrrrrrrm Mar 5, 2025
4381b95
Merge branch 'main' into yolov10
storrrrrrrrm Mar 6, 2025
e38d87a
Merge branch 'main' into yolov10
storrrrrrrrm Mar 14, 2025
f175e06
Merge branch 'main' into yolov10
storrrrrrrrm Mar 17, 2025
d70fff8
Merge branch 'main' into yolov10
storrrrrrrrm Mar 18, 2025
177 changes: 177 additions & 0 deletions perception/autoware_tensorrt_yolov10/CMakeLists.txt
cmake_minimum_required(VERSION 3.17)
project(autoware_tensorrt_yolov10)

find_package(autoware_tensorrt_common)
if(NOT autoware_tensorrt_common_FOUND)
message(WARNING "The autoware_tensorrt_common package is not found. Please check its dependencies.")
return()
endif()

find_package(autoware_cmake REQUIRED)
autoware_package()

add_compile_options(-Wno-deprecated-declarations)

find_package(OpenCV REQUIRED)

option(CUDA_VERBOSE "Verbose output of CUDA modules" OFF)

# set flags for CUDA availability
option(CUDA_AVAIL "CUDA available" OFF)
find_package(CUDA)
if(CUDA_FOUND)
find_library(CUBLAS_LIBRARIES cublas HINTS
${CUDA_TOOLKIT_ROOT_DIR}/lib64
${CUDA_TOOLKIT_ROOT_DIR}/lib
)
if(CUDA_VERBOSE)
message("CUDA is available!")
message("CUDA Libs: ${CUDA_LIBRARIES}")
message("CUDA Headers: ${CUDA_INCLUDE_DIRS}")
endif()
# Note: cublas_device was deprecated in CUDA version 9.2
# https://forums.developer.nvidia.com/t/where-can-i-find-libcublas-device-so-or-libcublas-device-a/67251/4
# In LibTorch, CUDA_cublas_device_LIBRARY is used.
unset(CUDA_cublas_device_LIBRARY CACHE)
set(CUDA_AVAIL ON)
else()
message("CUDA NOT FOUND")
set(CUDA_AVAIL OFF)
endif()

# set flags for TensorRT availability
option(TRT_AVAIL "TensorRT available" OFF)
# try to find the tensorRT modules
find_library(NVINFER nvinfer)
find_library(NVONNXPARSER nvonnxparser)
if(NVINFER AND NVONNXPARSER)
if(CUDA_VERBOSE)
message("TensorRT is available!")
message("NVINFER: ${NVINFER}")
message("NVONNXPARSER: ${NVONNXPARSER}")
endif()
set(TRT_AVAIL ON)
else()
message("TensorRT is NOT Available")
set(TRT_AVAIL OFF)
endif()

# set flags for CUDNN availability
option(CUDNN_AVAIL "CUDNN available" OFF)
# try to find the CUDNN module
find_library(CUDNN_LIBRARY
NAMES libcudnn.so${__cudnn_ver_suffix} libcudnn${__cudnn_ver_suffix}.dylib ${__cudnn_lib_win_name}
PATHS $ENV{LD_LIBRARY_PATH} ${__libpath_cudart} ${CUDNN_ROOT_DIR} ${PC_CUDNN_LIBRARY_DIRS} ${CMAKE_INSTALL_PREFIX}
PATH_SUFFIXES lib lib64 bin
DOC "CUDNN library."
)
if(CUDNN_LIBRARY)
if(CUDA_VERBOSE)
message(STATUS "CUDNN is available!")
message(STATUS "CUDNN_LIBRARY: ${CUDNN_LIBRARY}")
endif()
set(CUDNN_AVAIL ON)
else()
message("CUDNN is NOT Available")
set(CUDNN_AVAIL OFF)
endif()

find_package(OpenMP)
if(OpenMP_FOUND)
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${OpenMP_C_FLAGS}")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${OpenMP_CXX_FLAGS}")
endif()

##########
# tensorrt_yolov10
ament_auto_add_library(${PROJECT_NAME} SHARED
src/tensorrt_yolov10.cpp
src/tensorrt_yolov10_node.cpp
)

ament_target_dependencies(${PROJECT_NAME}
OpenCV
)

if(TRT_AVAIL AND CUDA_AVAIL AND CUDNN_AVAIL)
# Officially, add_library supports .cu file compilation.
# However, as of cmake 3.22.1, it seems to fail compilation because compiler flags for
# C++ are directly passed to nvcc (they are originally space separated
# but nvcc assume comma separated as argument of `-Xcompiler` option).
# That is why `cuda_add_library` is used here.
# cuda_add_library(${PROJECT_NAME}_gpu_preprocess
# SHARED
# src/preprocess.cu
# )

# target_include_directories(${PROJECT_NAME}_gpu_preprocess PUBLIC
# $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
# $<INSTALL_INTERFACE:include/${PROJECT_NAME}>
# )

# target_link_libraries(${PROJECT_NAME}
# ${autoware_tensorrt_common_LIBRARIES}
# ${PROJECT_NAME}_gpu_preprocess
# )
else()
target_link_libraries(${PROJECT_NAME}
${autoware_tensorrt_common_LIBRARIES}
)
endif()

target_compile_definitions(${PROJECT_NAME} PRIVATE
TENSORRT_VERSION_MAJOR=${TENSORRT_VERSION_MAJOR}
)

ament_auto_add_library(yolov10_single_image_inference_node SHARED
src/yolov10_single_image_inference_node.cpp
)

ament_target_dependencies(yolov10_single_image_inference_node
OpenCV
)

target_link_libraries(yolov10_single_image_inference_node
${PROJECT_NAME}
stdc++fs
)

target_compile_definitions(yolov10_single_image_inference_node PRIVATE
TENSORRT_VERSION_MAJOR=${TENSORRT_VERSION_MAJOR}
)

rclcpp_components_register_node(yolov10_single_image_inference_node
PLUGIN "autoware::tensorrt_yolov10::Yolov10SingleImageInferenceNode"
EXECUTABLE yolov10_single_image_inference
)

ament_auto_add_library(${PROJECT_NAME}_node SHARED
src/tensorrt_yolov10_node.cpp
)

ament_target_dependencies(${PROJECT_NAME}_node
OpenCV
)

target_link_libraries(${PROJECT_NAME}_node
${PROJECT_NAME}
)

target_compile_definitions(${PROJECT_NAME}_node PRIVATE
TENSORRT_VERSION_MAJOR=${TENSORRT_VERSION_MAJOR}
)

rclcpp_components_register_node(${PROJECT_NAME}_node
PLUGIN "autoware::tensorrt_yolov10::TrtYolov10Node"
EXECUTABLE ${PROJECT_NAME}_node_exe
)

if(BUILD_TESTING)
find_package(ament_lint_auto REQUIRED)
ament_lint_auto_find_test_dependencies()
endif()

ament_auto_package(INSTALL_TO_SHARE
launch
config
)
136 changes: 136 additions & 0 deletions perception/autoware_tensorrt_yolov10/README.md
# autoware_tensorrt_yolov10

## Purpose

This package detects target objects, e.g., cars, trucks, bicycles, and pedestrians, in an image based on the [YOLOv10](https://github.com/THU-MIG/yolov10) model.

## Inputs / Outputs

### Input

| Name | Type | Description |
| ---------- | ------------------- | --------------- |
| `in/image` | `sensor_msgs/Image` | The input image |

### Output

| Name | Type | Description |
| ------------- | -------------------------------------------------- | -------------------------------------------------- |
| `out/objects` | `tier4_perception_msgs/DetectedObjectsWithFeature` | The detected objects with 2D bounding boxes |
| `out/image` | `sensor_msgs/Image` | The image with 2D bounding boxes for visualization |

## Assumptions / Known limits

The label contained in the detected 2D bounding boxes (i.e., `out/objects`) will be one of the following:

- CAR
- PEDESTRIAN ("PERSON" will also be categorized as "PEDESTRIAN")
- BUS
- TRUCK
- BICYCLE
- MOTORCYCLE

If other labels (case-insensitive) are contained in the file specified via the `label_file` parameter,
they are labeled as `UNKNOWN`, while the detected rectangles are still drawn in the visualization result (`out/image`).
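A minimal sketch of this label normalization (a hypothetical helper for illustration, not the package's actual C++ implementation; the mapping of `PERSON` to `PEDESTRIAN` and the `UNKNOWN` fallback follow the behavior described above):

```python
# Hypothetical sketch of the label handling described above;
# not the package's actual implementation.
KNOWN_LABELS = {"CAR", "PEDESTRIAN", "BUS", "TRUCK", "BICYCLE", "MOTORCYCLE"}


def normalize_label(raw: str) -> str:
    """Map a raw label (case-insensitive) to one of the output categories."""
    label = raw.strip().upper()
    if label == "PERSON":  # "PERSON" is categorized as "PEDESTRIAN"
        return "PEDESTRIAN"
    return label if label in KNOWN_LABELS else "UNKNOWN"
```

For example, `normalize_label("horse")` falls back to `UNKNOWN`, while `normalize_label("person")` returns `PEDESTRIAN`.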

## Onnx model

You can download `yolov10m.onnx` from the [releases](https://github.com/THU-MIG/yolov10/releases) page.

## Label file

This file defines the correspondence between the class index (an integer output by the YOLOv10 network) and
the class label (a human-readable string). This package maps class IDs (incremented from 0)
to labels according to the order of lines in this file.

Currently, this file is the COCO label list, which contains the following labels:

```text
person
bicycle
car
motorcycle
airplane
bus
train
truck
boat
traffic light
fire hydrant
stop sign
parking meter
bench
bird
cat
dog
horse
sheep
cow
elephant
bear
zebra
giraffe
backpack
umbrella
handbag
tie
suitcase
frisbee
skis
snowboard
sports ball
kite
baseball bat
baseball glove
skateboard
surfboard
tennis racket
bottle
wine glass
cup
fork
knife
spoon
bowl
banana
apple
sandwich
orange
broccoli
carrot
hot dog
pizza
donut
cake
chair
couch
potted plant
bed
dining table
toilet
tv
laptop
mouse
remote
keyboard
cell phone
microwave
oven
toaster
sink
refrigerator
book
clock
vase
scissors
teddy bear
hair drier
```
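The index-to-label mapping described above can be sketched as follows (a hypothetical helper, assuming one label per line with IDs assigned by line order starting from 0):

```python
# Hypothetical sketch: build the class-index -> label mapping by line order,
# as described above (IDs increment from 0).
def load_labels(text: str) -> dict[int, str]:
    labels = [line.strip() for line in text.splitlines() if line.strip()]
    return dict(enumerate(labels))


# With the COCO label file above, index 0 maps to "person", index 2 to "car".
mapping = load_labels("person\nbicycle\ncar\n")
```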

## Reference repositories

- <https://github.com/THU-MIG/yolov10>

## Legal Notice

The inference code is licensed under Apache 2.0, while the model and training code are licensed under AGPL-3.0. See <https://github.com/THU-MIG/yolov10?tab=AGPL-3.0-1-ov-file> for details. To inquire about a commercial license for use of trained model weights, please contact the YOLOv10 authors.
41 changes: 41 additions & 0 deletions perception/autoware_tensorrt_yolov10/config/yolov10.param.yaml
# cspell:ignore semseg
/**:
ros__parameters:

# refine segmentation mask by overlay roi class
# disable when semantic segmentation accuracy is good enough
is_roi_overlap_segment: true

# minimum existence_probability of detected roi considered to replace segmentation
overlap_roi_score_threshold: 0.3

# publish color mask for result visualization
is_publish_color_mask: false

roi_overlay_segment_label:
UNKNOWN : true
CAR : false
TRUCK : false
BUS : false
MOTORCYCLE : true
BICYCLE : true
PEDESTRIAN : true
ANIMAL: true

image_path: "$(var data_path)/tensorrt_yolov10/000000000307.jpg"
model_path: "$(var data_path)/tensorrt_yolov10/$(var model_name).onnx" # The onnx file name for the YOLOv10 model.
label_path: "$(var data_path)/tensorrt_yolov10/label.txt" # The label file path for the YOLOv10 model.
color_map_path: "$(var data_path)/tensorrt_yolov10/semseg_color_map.csv"
score_threshold: 0.35 # Objects with a score lower than this value will be ignored. This threshold is ignored if the specified model contains an EfficientNMS_TRT module.
nms_threshold: 0.7 # Detections whose IoU with a higher-scoring detection exceeds this value will be suppressed. This threshold is ignored if the specified model contains an EfficientNMS_TRT module.

precision: "fp16" # Operation precision to be used on inference. Valid value is one of: [fp32, fp16, int8].
calibration_algorithm: "MinMax" # Calibration algorithm to be used for quantization when precision==int8. Valid value is one of: [Entropy, (Legacy | Percentile), MinMax].
dla_core_id: -1 # If a positive ID value is specified, the node assigns the inference task to that DLA core.
quantize_first_layer: false # If true, set the operating precision of the first (input) layer to fp16. This option is valid only when precision==int8.
quantize_last_layer: false # If true, set the operating precision of the last (output) layer to fp16. This option is valid only when precision==int8.
profile_per_layer: false # If true, the profiler is enabled. Since profiling may affect execution speed, it is recommended to enable this flag only for development purposes.
clip_value: 0.0 # If a positive value is specified, each layer's output is clipped to the range [0.0, clip_value]. This option is valid only when precision==int8 and is used to manually specify the dynamic range instead of using calibration.
preprocess_on_gpu: true # If true, pre-processing is performed on GPU.
gpu_id: 0 # GPU ID to select CUDA Device
calibration_image_list_path: "" # Path to a file that contains paths to images. Those images will be used for int8 quantization.
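The `score_threshold` and `nms_threshold` parameters above correspond to the usual confidence filter and IoU-based non-maximum suppression. A minimal pure-Python sketch (not the package's TensorRT implementation; the `(x1, y1, x2, y2, score)` box format is an assumption for illustration):

```python
# Sketch of score filtering + greedy NMS, as configured by
# score_threshold and nms_threshold above. Hypothetical helpers,
# not the package's actual implementation.
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0


def filter_detections(boxes, score_threshold=0.35, nms_threshold=0.7):
    """Drop low-score boxes, then greedily suppress boxes whose IoU with
    an already-kept higher-scoring box reaches the threshold."""
    boxes = sorted((b for b in boxes if b[4] >= score_threshold), key=lambda b: -b[4])
    kept = []
    for b in boxes:
        if all(iou(b[:4], k[:4]) < nms_threshold for k in kept):
            kept.append(b)
    return kept
```

As the comments above note, models that already embed an `EfficientNMS_TRT` module perform this step inside the network, so the node's thresholds are ignored in that case.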