
Commit 104ca70

feat: #370 annotation less perception evaluator (#373)
1 parent 5026960 · commit 104ca70

27 files changed: +54624 −4 lines

`.driving_log_replayer.cspell.json` (+2 −1)

```diff
@@ -28,6 +28,7 @@
 "pydantic",
 "Kotaro",
 "Uetake",
-"conlist"
+"conlist",
+"annotationless"
 ]
 }
```

`docs/quick_start/.pages` (+2)

```diff
@@ -17,3 +17,5 @@ nav:
 - perception.ja.md
 - performance_diag.en.md
 - performance_diag.ja.md
+- annotationless_perception.en.md
+- annotationless_perception.ja.md
```
`docs/quick_start/annotationless_perception.en.md` (new file, +35)

# Annotationless Perception

## Preparation

1. Copy the sample scenario

   ```shell
   mkdir -p ~/driving_log_replayer_data/annotationless_perception/sample
   cp -r ~/autoware/src/simulator/driving_log_replayer/sample/annotationless_perception/scenario.yaml ~/driving_log_replayer_data/annotationless_perception/sample
   ```

2. Copy the bag file from the dataset

   ```shell
   cp -r ~/driving_log_replayer_data/sample_dataset/input_bag ~/driving_log_replayer_data/annotationless_perception/sample
   ```

## How to run

1. Run the simulation

   ```shell
   dlr simulation run -p annotationless_perception -l "play_rate:=0.5"
   ```

2. Check the results

   Results are displayed in the terminal as shown below.

   ```shell
   scenario: sample
   --------------------------------------------------
   TestResult: Passed
   Failed: AnnotationlessPerception Deviation (Success)
   ```
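Detailed results are also written to a `result.jsonl` file. The sketch below prints the last record of the newest one; the search path is an assumption, so adjust it to wherever your results are written.

```python
# Sketch: print the last line of the newest result.jsonl produced by the run.
# The search path below is an assumption; adjust it to your output directory.
import json
from pathlib import Path

base = Path.home() / "driving_log_replayer_data/annotationless_perception/sample"
latest = max(base.glob("**/result.jsonl"), key=lambda p: p.stat().st_mtime)
last_record = json.loads(latest.read_text().splitlines()[-1])
print(latest)
print(json.dumps(last_record, indent=2))
```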
`docs/quick_start/annotationless_perception.ja.md` (new file, +35)

# Evaluation of Annotationless Perception

## Preparation

1. Copy the sample scenario

   ```shell
   mkdir -p ~/driving_log_replayer_data/annotationless_perception/sample
   cp -r ~/autoware/src/simulator/driving_log_replayer/sample/annotationless_perception/scenario.yaml ~/driving_log_replayer_data/annotationless_perception/sample
   ```

2. Copy the sample dataset

   ```shell
   cp -r ~/driving_log_replayer_data/sample_dataset/input_bag ~/driving_log_replayer_data/annotationless_perception/sample
   ```

## How to run

1. Run the simulation

   ```shell
   dlr simulation run -p annotationless_perception -l "play_rate:=0.5"
   ```

2. Check the results

   Results like the following are displayed in the terminal.

   ```shell
   scenario: sample
   --------------------------------------------------
   TestResult: Passed
   Failed: AnnotationlessPerception Deviation (Success)
   ```

`docs/use_case/.pages` (+2)

```diff
@@ -15,3 +15,5 @@ nav:
 - perception.ja.md
 - performance_diag.en.md
 - performance_diag.ja.md
+- annotationless_perception.en.md
+- annotationless_perception.ja.md
```
`docs/use_case/annotationless_perception.en.md` (new file, +297)

# Evaluate Annotationless Perception

Evaluate Autoware's recognition features (perception) without annotations, using the perception_online_evaluator.

Requires an Autoware build that includes the following PR:
<https://github.com/autowarefoundation/autoware.universe/pull/6493>

## Evaluation method

The annotationless_perception evaluation is executed by launching the `annotationless_perception.launch.py` file.
Launching the file executes the following steps:

1. Launch the evaluation node (`annotationless_perception_evaluator_node`), the `logging_simulator.launch` file, and the `ros2 bag play` command.
2. Autoware receives the sensor data output from the input rosbag, and the perception module performs recognition.
3. The perception_online_evaluator publishes diagnostics to the `/diagnostic/perception_online_evaluator/metrics` topic.
4. The evaluation node subscribes to that topic and evaluates the data. The results are dumped to a file.
5. When playback of the rosbag finishes, Autoware's launch is automatically terminated and the evaluation is completed.
## Evaluation results

The results are calculated for each subscription. The format and the available states are described below.

### Deviation Normal

The following two values, specified in the scenario or as launch arguments, are used for the judgment:

- Threshold
  - Threshold for judging the success or failure of each item
- PassRange (coefficient used to correct the threshold)
  - The range from `threshold * lower_limit` to `threshold * upper_limit` is considered a pass.

For each `status.name` in `/diagnostic/perception_online_evaluator/metrics`, the min, max, and mean values are accumulated and their averages are calculated.
If `threshold * lower_limit` <= `calculated_average` <= `threshold * upper_limit`, the item is judged as normal.

An illustration is shown below.

![metrics](./images/annotationless_metrics.drawio.svg)
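As a rough illustration of this judgment, the sketch below (not the evaluator's actual implementation; names and data handling are simplified assumptions) accumulates the received min/max/mean values for one metric, averages them, and compares the averages against the corrected threshold range.

```python
# Illustrative sketch of the Deviation Normal judgment described above.
# Not the actual evaluator code; names and structures are assumptions.
from dataclasses import dataclass, field


@dataclass
class MetricAccumulator:
    received: list = field(default_factory=list)  # list of (min, max, mean) tuples

    def add(self, min_v: float, max_v: float, mean_v: float) -> None:
        self.received.append((min_v, max_v, mean_v))

    def averages(self) -> dict:
        n = len(self.received)
        return {
            "min": sum(v[0] for v in self.received) / n,
            "max": sum(v[1] for v in self.received) / n,
            "mean": sum(v[2] for v in self.received) / n,
        }


def is_normal(averages: dict, threshold: dict, lower: float, upper: float) -> bool:
    """Pass when threshold * lower <= average <= threshold * upper for every item."""
    return all(
        threshold[key] * lower <= averages[key] <= threshold[key] * upper
        for key in ("min", "max", "mean")
    )


# Example: lateral_deviation with Threshold {min: 10.0, max: 10.0, mean: 10.0}
# and PassRange 0.5-1.05, as in the sample scenario below.
acc = MetricAccumulator()
acc.add(6.0, 10.0, 8.0)
acc.add(6.0, 10.0, 9.0)
print(is_normal(acc.averages(), {"min": 10.0, "max": 10.0, "mean": 10.0}, 0.5, 1.05))
```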
### Deviation Error

Reported when the Deviation Normal condition is not met.

## Topic name and data type used by evaluation node

Subscribed topics:

| Topic name                                       | Data type                             |
| ------------------------------------------------ | ------------------------------------- |
| /diagnostic/perception_online_evaluator/metrics  | diagnostic_msgs::msg::DiagnosticArray |

Published topics:

| Topic name | Data type |
| ---------- | --------- |
| N/A        | N/A       |
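For reference, a minimal listener for this topic could look like the following sketch. It is illustrative only; the assumption that min/max/mean arrive as key/value pairs on each `DiagnosticStatus` should be checked against the actual perception_online_evaluator output.

```python
# Minimal sketch: subscribe to the metrics topic and print each status entry.
# Assumes min/max/mean arrive as KeyValue pairs on each DiagnosticStatus.
import rclpy
from rclpy.node import Node
from diagnostic_msgs.msg import DiagnosticArray


class MetricsListener(Node):
    def __init__(self) -> None:
        super().__init__("metrics_listener")
        self.create_subscription(
            DiagnosticArray,
            "/diagnostic/perception_online_evaluator/metrics",
            self.on_metrics,
            10,
        )

    def on_metrics(self, msg: DiagnosticArray) -> None:
        for status in msg.status:
            values = {kv.key: kv.value for kv in status.values}
            self.get_logger().info(f"{status.name}: {values}")


def main() -> None:
    rclpy.init()
    rclpy.spin(MetricsListener())


if __name__ == "__main__":
    main()
```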
### Method of specifying conditions

Conditions can be given in two ways.

#### Describe in the scenario

```yaml
Evaluation:
  UseCaseName: annotationless_perception
  UseCaseFormatVersion: 0.1.0
  Conditions:
    # Threshold: {} # If Metrics from the result.jsonl of a previous test are specified, the values here are overwritten. Since it is a dictionary type, it can be left empty.
    Threshold:
      lateral_deviation: { min: 10.0, max: 10.0, mean: 10.0 }
      yaw_deviation: { min: 10.0, max: 10.0, mean: 10.0 }
      predicted_path_deviation_5.00: { min: 10.0, max: 10.0, mean: 10.0 }
      predicted_path_deviation_3.00: { min: 10.0, max: 10.0, mean: 10.0 }
      predicted_path_deviation_2.00: { min: 10.0, max: 10.0, mean: 10.0 }
      predicted_path_deviation_1.00: { min: 10.0, max: 10.0, mean: 10.0 }
    PassRange: 0.5-1.05 # lower[<=1.0]-upper[>=1.0] # The test passes when `threshold * lower <= Σ deviation / len(deviation) <= threshold * upper`
```
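The `PassRange` value is a single `lower-upper` string. A small helper to parse and validate it might look like this (a sketch, not the package's actual validator):

```python
# Sketch: parse a PassRange string such as "0.5-1.05" into (lower, upper)
# and enforce lower <= 1.0 <= upper, as described in the comment above.
def parse_pass_range(pass_range: str) -> tuple[float, float]:
    lower_str, upper_str = pass_range.split("-")
    lower, upper = float(lower_str), float(upper_str)
    if lower > 1.0:
        raise ValueError(f"lower limit must be <= 1.0, got {lower}")
    if upper < 1.0:
        raise ValueError(f"upper limit must be >= 1.0, got {upper}")
    return lower, upper


print(parse_pass_range("0.5-1.05"))  # (0.5, 1.05)
```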
#### Specify by launch argument

This is expected to be the main way of specifying the conditions.
If the file path of a result.jsonl output by a past test is specified, the metrics values from that test are used as the threshold values.
The pass range can also be specified as an argument.

An illustration of this usage is shown below.

![threshold](./images/annotationless_threshold.drawio.svg)
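The sketch below only illustrates the idea of reusing a previous run's Metrics as thresholds; the exact record nesting in result.jsonl is an assumption based on the result format described later, so adjust it to your file.

```python
# Sketch: take the Metrics of the last recorded frame in a previous
# result.jsonl and reuse them as Threshold values for the next run.
# The exact nesting of the records is an assumption; adjust to your file.
import json
from pathlib import Path


def metrics_as_threshold(result_jsonl: Path) -> dict:
    last_metrics = {}
    with result_jsonl.open() as f:
        for line in f:
            record = json.loads(line)
            # Frames produced by this use case carry a "Deviation" entry;
            # keep the most recent Metrics seen in the file.
            deviation = record.get("Frame", {}).get("Deviation")
            if deviation and "Metrics" in deviation:
                last_metrics = deviation["Metrics"]
    return last_metrics


threshold = metrics_as_threshold(Path("previous_test/result.jsonl"))
print(threshold.get("lateral_deviation"))
```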
##### driving-log-replayer-cli

```shell
dlr simulation run -p annotationless_perception -l "annotationless_threshold_file:=${previous_test_result.jsonl_path},annotationless_pass_range:=${lower-upper}"
```

##### WebAutoCLI

```shell
webauto ci scenario run --project-id ${project-id} --scenario-id ${scenario-id} --scenario-version-id ${scenario-version-id} --simulator-parameter-overrides annotationless_threshold_file=${previous_test_result.jsonl_path},annotationless_pass_range=${lower-upper}
```

##### Autoware Evaluator

Add to the parameters in the simulator configuration in `.webauto-ci.yml`.

```yaml
simulations:
  - name: annotationless_perception
    type: annotationless_perception
    simulator:
      deployment:
        type: container
        artifact: main
      runtime:
        type: simulator/standard1/amd64/medium
      parameters:
        annotationless_threshold_file: ${previous_test_result.jsonl_path}
        annotationless_pass_range: ${lower-upper}
```
## Arguments passed to logging_simulator.launch

To reduce the resources consumed by Autoware, modules that are not relevant to the evaluation are disabled by passing `false` as a launch argument.
The following settings are used:

- perception: true
- planning: false
- control: false
- sensing: false / true (default false, set by the launch argument)
### How to specify the sensing argument

#### driving-log-replayer-cli

```shell
dlr simulation run -p annotationless_perception -l "sensing:=true"
```

#### WebAutoCLI

```shell
webauto ci scenario run --project-id ${project-id} --scenario-id ${scenario-id} --scenario-version-id ${scenario-version-id} --simulator-parameter-overrides sensing=true
```

#### Autoware Evaluator

Add to the parameters in the simulator configuration in `.webauto-ci.yml`.

```yaml
simulations:
  - name: annotationless_perception
    type: annotationless_perception
    simulator:
      deployment:
        type: container
        artifact: main
      runtime:
        type: simulator/standard1/amd64/medium
      parameters:
        sensing: "true"
```
## simulation

This section describes the information required to run the simulation.

### Topics to be included in the input rosbag

| Topic name                             | Data type                                    |
| -------------------------------------- | -------------------------------------------- |
| /gsm8/from_can_bus                     | can_msgs/msg/Frame                           |
| /localization/kinematic_state          | nav_msgs/msg/Odometry                        |
| /sensing/gnss/ublox/fix_velocity       | geometry_msgs/msg/TwistWithCovarianceStamped |
| /sensing/gnss/ublox/nav_sat_fix        | sensor_msgs/msg/NavSatFix                    |
| /sensing/gnss/ublox/navpvt             | ublox_msgs/msg/NavPVT                        |
| /sensing/imu/tamagawa/imu_raw          | sensor_msgs/msg/Imu                          |
| /sensing/lidar/concatenated/pointcloud | sensor_msgs/msg/PointCloud2                  |
| /sensing/lidar/\*/velodyne_packets     | velodyne_msgs/VelodyneScan                   |
| /tf                                    | tf2_msgs/msg/TFMessage                       |

Vehicle topics can be included instead of the CAN topic.

| Topic name                             | Data type                                           |
| -------------------------------------- | --------------------------------------------------- |
| /localization/kinematic_state          | nav_msgs/msg/Odometry                               |
| /sensing/gnss/ublox/fix_velocity       | geometry_msgs/msg/TwistWithCovarianceStamped        |
| /sensing/gnss/ublox/nav_sat_fix        | sensor_msgs/msg/NavSatFix                           |
| /sensing/gnss/ublox/navpvt             | ublox_msgs/msg/NavPVT                               |
| /sensing/imu/tamagawa/imu_raw          | sensor_msgs/msg/Imu                                 |
| /sensing/lidar/concatenated/pointcloud | sensor_msgs/msg/PointCloud2                         |
| /sensing/lidar/\*/velodyne_packets     | velodyne_msgs/VelodyneScan                          |
| /tf                                    | tf2_msgs/msg/TFMessage                              |
| /vehicle/status/control_mode           | autoware_auto_vehicle_msgs/msg/ControlModeReport    |
| /vehicle/status/gear_status            | autoware_auto_vehicle_msgs/msg/GearReport           |
| /vehicle/status/steering_status        | autoware_auto_vehicle_msgs/msg/SteeringReport       |
| /vehicle/status/turn_indicators_status | autoware_auto_vehicle_msgs/msg/TurnIndicatorsReport |
| /vehicle/status/velocity_status        | autoware_auto_vehicle_msgs/msg/VelocityReport       |
### Topics that must not be included in the input rosbag

| Topic name | Data type               |
| ---------- | ----------------------- |
| /clock     | rosgraph_msgs/msg/Clock |

The clock is published by the `--clock` option of `ros2 bag play`; if it were also recorded in the bag it would be published twice, so it must not be included in the bag.
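To double-check that `/clock` is not recorded, you can inspect the bag's rosbag2 `metadata.yaml`, for example with the sketch below (the bag path shown is an example):

```python
# Sketch: verify that /clock is not among the recorded topics of input_bag.
# Reads the rosbag2 metadata.yaml stored alongside the bag's data files.
from pathlib import Path

import yaml

bag_dir = Path.home() / "driving_log_replayer_data/annotationless_perception/sample/input_bag"
metadata = yaml.safe_load((bag_dir / "metadata.yaml").read_text())
topics = [
    entry["topic_metadata"]["name"]
    for entry in metadata["rosbag2_bagfile_information"]["topics_with_message_count"]
]
print("/clock recorded:", "/clock" in topics)
```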
## evaluation

This section describes the information necessary for the evaluation.

### Scenario Format

See the [sample](https://github.com/tier4/driving_log_replayer/blob/main/sample/annotationless_perception/scenario.yaml).

### Evaluation Result Format

See the [sample](https://github.com/tier4/driving_log_replayer/blob/main/sample/annotationless_perception/result.json).

The format of each frame and the metrics format are shown below.
**NOTE: the common part of the result file format, which has already been explained, is omitted.**
```json
{
  "Deviation": {
    "Result": { "Total": "Success or Fail", "Frame": "Success or Fail" }, // The results for Total and Frame are the same. The same values are output to keep the data structure consistent with other evaluations.
    "Info": {
      "lateral_deviation": {
        "min": "Minimum distance",
        "max": "Maximum distance",
        "mean": "Mean distance"
      },
      "yaw_deviation": {
        "min": "Minimum Angle Difference",
        "max": "Maximum Angle Difference",
        "mean": "Mean Angle Difference"
      },
      "predicted_path_deviation_5.00": {
        "min": "Minimum distance",
        "max": "Maximum distance",
        "mean": "Mean distance"
      },
      "predicted_path_deviation_3.00": {
        "min": "Minimum distance",
        "max": "Maximum distance",
        "mean": "Mean distance"
      },
      "predicted_path_deviation_2.00": {
        "min": "Minimum distance",
        "max": "Maximum distance",
        "mean": "Mean distance"
      },
      "predicted_path_deviation_1.00": {
        "min": "Minimum distance",
        "max": "Maximum distance",
        "mean": "Mean distance"
      }
    },
    "Metrics": {
      "lateral_deviation": {
        "min": "Average Minimum distance",
        "max": "Average Maximum distance",
        "mean": "Average Mean distance"
      },
      "yaw_deviation": {
        "min": "Average Minimum Angle Difference",
        "max": "Average Maximum Angle Difference",
        "mean": "Average Mean Angle Difference"
      },
      "predicted_path_deviation_5.00": {
        "min": "Average Minimum distance",
        "max": "Average Maximum distance",
        "mean": "Average Mean distance"
      },
      "predicted_path_deviation_3.00": {
        "min": "Average Minimum distance",
        "max": "Average Maximum distance",
        "mean": "Average Mean distance"
      },
      "predicted_path_deviation_2.00": {
        "min": "Average Minimum distance",
        "max": "Average Maximum distance",
        "mean": "Average Mean distance"
      },
      "predicted_path_deviation_1.00": {
        "min": "Average Minimum distance",
        "max": "Average Maximum distance",
        "mean": "Average Mean distance"
      }
    }
  }
}
```
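To make the relation between `Info` and `Metrics` concrete, the toy example below (with made-up numbers) treats `Metrics` as the running average of the per-frame `Info` values:

```python
# Toy example: Metrics are the running averages of per-frame Info values.
# The numbers are made up for illustration only.
frames = [
    {"lateral_deviation": {"min": 0.1, "max": 0.6, "mean": 0.3}},
    {"lateral_deviation": {"min": 0.2, "max": 0.8, "mean": 0.4}},
]

totals = {"min": 0.0, "max": 0.0, "mean": 0.0}
for i, frame in enumerate(frames, start=1):
    info = frame["lateral_deviation"]
    for key in totals:
        totals[key] += info[key]
    metrics = {key: totals[key] / i for key in totals}
    print(f"frame {i}: Info={info} Metrics={metrics}")
```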
See the figures below for the meaning of each item.

![lateral_deviation](./images/lateral_deviation.png)

![predicted_path_deviation](./images/predicted_path_deviation.png)
