Files changed:

- `launch/tier4_perception_launch/launch/object_recognition/detection/camera_lidar_fusion_based_detection.launch.xml`
- `launch/tier4_perception_launch/launch/object_recognition/detection/camera_lidar_radar_fusion_based_detection.launch.xml`
| Name              | Type   | Description                                   | Default     |
| ----------------- | ------ | --------------------------------------------- | ----------- |
| `output_frame_id` | string | The header `frame_id` of the output topic. | `base_link` |
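Since `output_frame_id` is an ordinary node parameter, it can be overridden at launch time. A minimal sketch, assuming hypothetical package, executable, and node names (`your_package`, `your_node`, `detection_node`) — only the parameter name and its `base_link` default come from the table above:

```xml
<!-- Hypothetical launch file: package/executable/node names are placeholders. -->
<launch>
  <node pkg="your_package" exec="your_node" name="detection_node" output="screen">
    <!-- Overrides the header frame_id stamped on the output topic (default: base_link). -->
    <param name="output_frame_id" value="base_link"/>
  </node>
</launch>
```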
## Assumptions / Known limits

<!-- Write the assumptions and limitations of your implementation.

Example:
This algorithm assumes obstacles are not moving, so if they move rapidly after the vehicle has started to avoid them, it might collide with them.
Also, this algorithm does not account for blind spots. In general, since obstacles that are too close are not visible due to sensing performance limits, keep a sufficient margin from obstacles.
-->

## (Optional) Error detection and handling

<!-- Write how to detect errors and how to recover from them.

Example:
This package can handle up to 20 obstacles. If more obstacles are found, this node gives up and raises diagnostic errors.
-->

## (Optional) Performance characterization

<!-- Write performance information such as complexity. If this component is not a bottleneck, this section is unnecessary.

Example:

### Complexity

This algorithm is O(N).

### Processing time

...
-->

## (Optional) References/External links

<!-- Write links you referred to during implementation.

Example:
[1] {link_to_a_thesis}
[2] {link_to_an_issue}
-->

## (Optional) Future extensions / Unimplemented parts