feat: #370 annotation less perception evaluator #373

Merged
merged 44 commits on Mar 7, 2024
Commits
7890e0a
feat: add annotation_less_perception files
hayato-m126 Feb 28, 2024
da3efee
feat: update node
hayato-m126 Feb 29, 2024
be77183
fix: typo
hayato-m126 Feb 29, 2024
0adaca0
fix: bug
hayato-m126 Feb 29, 2024
b26214d
feat: update threshold
hayato-m126 Feb 29, 2024
27d432f
docs: update
hayato-m126 Feb 29, 2024
d662ba2
feat: update cli
hayato-m126 Feb 29, 2024
9775b4b
fix: lint
hayato-m126 Mar 1, 2024
081eafd
revert: LaunchSensing
hayato-m126 Mar 1, 2024
ae9c58d
docs: update use case annotation_less_perception
hayato-m126 Mar 1, 2024
c28f561
docs: update
hayato-m126 Mar 1, 2024
695703a
feat: update set frame
hayato-m126 Mar 1, 2024
2194b5c
docs: update document
hayato-m126 Mar 1, 2024
8b9ab45
feat: override threshold
hayato-m126 Mar 4, 2024
ccbc061
debug: pass argument
hayato-m126 Mar 4, 2024
ef4784d
fix: pre-commit
hayato-m126 Mar 4, 2024
5afc8c5
fix: pre-commit
hayato-m126 Mar 4, 2024
582141a
chore: add word to dict
hayato-m126 Mar 4, 2024
6191eb9
fix: typo
hayato-m126 Mar 4, 2024
0d1cd28
fix: set additional_parameter to evaluator node
hayato-m126 Mar 4, 2024
f864c39
feat: If the threshold value for judgment does not exist, set a provi…
hayato-m126 Mar 4, 2024
b9da37a
chore: update result.json
hayato-m126 Mar 4, 2024
65641b7
docs: update document
hayato-m126 Mar 4, 2024
bcefc99
chore: format
hayato-m126 Mar 4, 2024
da2c4e5
docs: update
hayato-m126 Mar 4, 2024
3efb6a0
chore: replace annotation_less with annotationless
hayato-m126 Mar 5, 2024
8e77722
fix: launch module
hayato-m126 Mar 5, 2024
a7972f3
fix: typo
hayato-m126 Mar 5, 2024
9673e8c
fix: typo
hayato-m126 Mar 5, 2024
c899221
docs: add quick start
hayato-m126 Mar 5, 2024
2b0503c
feat: add PassRange
hayato-m126 Mar 5, 2024
a9db904
feat: support update pass_range using launch argument
hayato-m126 Mar 5, 2024
6305532
feat: update pass_range
hayato-m126 Mar 5, 2024
b1260ca
docs: add English document
hayato-m126 Mar 6, 2024
197275d
docs: add index
hayato-m126 Mar 6, 2024
395c909
docs: fix file name
hayato-m126 Mar 6, 2024
96a20a9
fix: pre-commit
hayato-m126 Mar 6, 2024
bb009f6
fix: file name
hayato-m126 Mar 6, 2024
8bf8a14
fix: file name
hayato-m126 Mar 6, 2024
24b9ac3
chore: annotation less -> annotationless
hayato-m126 Mar 6, 2024
becaa29
chore: title
hayato-m126 Mar 6, 2024
be94a21
chore: add result.jsonl for unit test
hayato-m126 Mar 7, 2024
68f93f7
chore: add unit test
hayato-m126 Mar 7, 2024
b1eb897
fix: pre-commit
hayato-m126 Mar 7, 2024
3 changes: 2 additions & 1 deletion .driving_log_replayer.cspell.json
@@ -28,6 +28,7 @@
"pydantic",
"Kotaro",
"Uetake",
"conlist"
"conlist",
"annotationless"
]
}
2 changes: 2 additions & 0 deletions docs/quick_start/.pages
@@ -17,3 +17,5 @@ nav:
- perception.ja.md
- performance_diag.en.md
- performance_diag.ja.md
- annotationless_perception.en.md
- annotationless_perception.ja.md
35 changes: 35 additions & 0 deletions docs/quick_start/annotationless_perception.en.md
@@ -0,0 +1,35 @@
# Annotationless Perception

## Preparation

1. Copy sample scenario

```shell
mkdir -p ~/driving_log_replayer_data/annotationless_perception/sample
cp -r ~/autoware/src/simulator/driving_log_replayer/sample/annotationless_perception/scenario.yaml ~/driving_log_replayer_data/annotationless_perception/sample
```

2. Copy bag file from dataset

```shell
cp -r ~/driving_log_replayer_data/sample_dataset/input_bag ~/driving_log_replayer_data/annotationless_perception/sample
```

## How to run

1. Run the simulation

```shell
dlr simulation run -p annotationless_perception -l "play_rate:=0.5"
```

2. Check the results

Results like the following are displayed in the terminal.

```shell
scenario: sample
--------------------------------------------------
TestResult: Passed
Failed: AnnotationlessPerception Deviation (Success)
```
35 changes: 35 additions & 0 deletions docs/quick_start/annotationless_perception.ja.md
@@ -0,0 +1,35 @@
# Annotationless Perception Evaluation

## Preparation

1. Copy the sample scenario

```shell
mkdir -p ~/driving_log_replayer_data/annotationless_perception/sample
cp -r ~/autoware/src/simulator/driving_log_replayer/sample/annotationless_perception/scenario.yaml ~/driving_log_replayer_data/annotationless_perception/sample
```

2. Copy the sample dataset

```shell
cp -r ~/driving_log_replayer_data/sample_dataset/input_bag ~/driving_log_replayer_data/annotationless_perception/sample
```

## How to run

1. Run the simulation

```shell
dlr simulation run -p annotationless_perception -l "play_rate:=0.5"
```

2. Check the results

Results like the following are displayed in the terminal.

```shell
scenario: sample
--------------------------------------------------
TestResult: Passed
Failed: AnnotationlessPerception Deviation (Success)
```
2 changes: 2 additions & 0 deletions docs/use_case/.pages
@@ -15,3 +15,5 @@ nav:
- perception.ja.md
- performance_diag.en.md
- performance_diag.ja.md
- annotationless_perception.en.md
- annotationless_perception.ja.md
Collaborator Author (hayato-m126) commented:
I'll add the English document today.

297 changes: 297 additions & 0 deletions docs/use_case/annotationless_perception.en.md
@@ -0,0 +1,297 @@
# Evaluate Annotationless Perception

Evaluate Autoware's recognition features (perception) without annotations using the perception_online_evaluator.

This evaluation requires an Autoware build that includes the following PR:
<https://github.com/autowarefoundation/autoware.universe/pull/6493>

## Evaluation method

The annotationless_perception evaluation is executed by launching the `annotationless_perception.launch.py` file.
Launching the file executes the following steps:

1. Launch the evaluation node (`annotationless_perception_evaluator_node`), the `logging_simulator.launch` file, and the `ros2 bag play` command.
2. Autoware receives the sensor data played back from the input rosbag, and the perception module performs recognition.
3. The perception_online_evaluator publishes diagnostics to `/diagnostic/perception_online_evaluator/metrics`.
4. The evaluation node subscribes to the topic, evaluates the data, and dumps the results to a file.
5. When the playback of the rosbag is finished, Autoware's launch is automatically terminated, and the evaluation is completed.

## Evaluation results

The results are calculated for each subscription. The format and available states are described below.

### Deviation Normal

The following two values, specified in the scenario or as launch arguments, are used to judge success or failure:

- Threshold
  - The threshold for judging the success or failure of each item
- PassRange (a coefficient used to relax the threshold)
  - The range from `threshold * lower_limit` to `threshold * upper_limit` is considered a pass.

For each status.name in `/diagnostic/perception_online_evaluator/metrics`, the min, max, and mean values are accumulated and their averages are calculated.
If `threshold * lower_limit <= calculated_average <= threshold * upper_limit`, the item is judged as normal.

An illustration is shown below.

![metrics](./images/annotationless_metrics.drawio.svg)
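
As a concrete illustration of this judgment, here is a minimal sketch in Python. It is not the evaluator's actual implementation; the function name, the data layout, and the per-item comparison against the corresponding `min`/`max`/`mean` thresholds are assumptions based on the description above.

```python
from statistics import mean


def is_deviation_normal(
    samples: dict[str, list[float]],  # accumulated min/max/mean values for one metric
    threshold: dict[str, float],      # e.g. {"min": 10.0, "max": 10.0, "mean": 10.0}
    lower: float,                     # PassRange lower coefficient (<= 1.0)
    upper: float,                     # PassRange upper coefficient (>= 1.0)
) -> bool:
    """Return True if every item's average lies within threshold * [lower, upper]."""
    for key, values in samples.items():
        average = mean(values)  # Σ deviation / len(deviation)
        if not (threshold[key] * lower <= average <= threshold[key] * upper):
            return False
    return True


# Hypothetical lateral_deviation samples accumulated from the metrics topic
samples = {"min": [9.0, 10.0], "max": [10.0, 11.0], "mean": [9.5, 10.5]}
print(is_deviation_normal(samples, {"min": 10.0, "max": 10.0, "mean": 10.0}, 0.5, 1.05))  # True
```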

### Deviation Error

Reported when the Deviation Normal condition is not met.

## Topic name and data type used by evaluation node

Subscribed topics:

| Topic name | Data type |
| ----------------------------------------------- | ------------------------------------- |
| /diagnostic/perception_online_evaluator/metrics | diagnostic_msgs::msg::DiagnosticArray |

Published topics:

| Topic name | Data type |
| ---------- | --------- |
| N/A | N/A |
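
For illustration only, here is a minimal rclpy node that subscribes to the metrics topic listed above. The node and callback names are hypothetical and the key names inside each status are assumptions; the actual evaluator node is `annotationless_perception_evaluator_node`.

```python
import rclpy
from diagnostic_msgs.msg import DiagnosticArray
from rclpy.node import Node


class MetricsListener(Node):
    """Hypothetical listener; prints the key/value pairs of every DiagnosticStatus."""

    def __init__(self) -> None:
        super().__init__("metrics_listener")
        self._sub = self.create_subscription(
            DiagnosticArray,
            "/diagnostic/perception_online_evaluator/metrics",
            self.on_metrics,
            1,
        )

    def on_metrics(self, msg: DiagnosticArray) -> None:
        for status in msg.status:
            # status.name is a metric such as lateral_deviation; the key/value pairs
            # are expected to carry entries such as min/max/mean.
            values = {kv.key: kv.value for kv in status.values}
            self.get_logger().info(f"{status.name}: {values}")


def main() -> None:
    rclpy.init()
    rclpy.spin(MetricsListener())


if __name__ == "__main__":
    main()
```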

### Method of specifying conditions

The conditions can be given in two ways:

#### Describe in scenario

```yaml
Evaluation:
  UseCaseName: annotationless_perception
  UseCaseFormatVersion: 0.1.0
  Conditions:
    # Threshold: {} # If Metrics are specified from result.jsonl of a previous test, the value here will be overwritten. If it is a dictionary type, it can be empty.
    Threshold:
      lateral_deviation: { min: 10.0, max: 10.0, mean: 10.0 }
      yaw_deviation: { min: 10.0, max: 10.0, mean: 10.0 }
      predicted_path_deviation_5.00: { min: 10.0, max: 10.0, mean: 10.0 }
      predicted_path_deviation_3.00: { min: 10.0, max: 10.0, mean: 10.0 }
      predicted_path_deviation_2.00: { min: 10.0, max: 10.0, mean: 10.0 }
      predicted_path_deviation_1.00: { min: 10.0, max: 10.0, mean: 10.0 }
    PassRange: 0.5-1.05 # lower[<=1.0]-upper[>=1.0] # The test passes if `threshold * lower <= Σ deviation / len(deviation) <= threshold * upper`
```
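
As a small sketch of how the `PassRange` string could be interpreted (the function name is illustrative; the validation mirrors the `lower[<=1.0]-upper[>=1.0]` comment above):

```python
def parse_pass_range(pass_range: str) -> tuple[float, float]:
    """Split a PassRange such as "0.5-1.05" into (lower, upper) coefficients."""
    lower_text, upper_text = pass_range.split("-")
    lower, upper = float(lower_text), float(upper_text)
    if lower > 1.0 or upper < 1.0:
        msg = f"invalid PassRange {pass_range!r}: expected lower <= 1.0 <= upper"
        raise ValueError(msg)
    return lower, upper


print(parse_pass_range("0.5-1.05"))  # (0.5, 1.05)
```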

#### Specify by launch argument

This method is expected to be the primary way of specifying conditions.
If the file path of result.jsonl output from a past test is specified, the metrics values from past tests can be used as threshold values.
The passing range can also be specified as an argument.

An image of its use is shown below.

![threshold](./images/annotationless_threshold.drawio.svg)

##### driving-log-replayer-cli

```shell
dlr simulation run -p annotationless_perception -l "annotationless_threshold_file:=${previous_test_result.jsonl_path},annotationless_pass_range:=${lower-upper}"
```
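
For reference, a minimal sketch of how the metrics of a past test could be pulled out of its `result.jsonl` and reused as thresholds. The exact key layout of each line is not spelled out here, so the sketch simply searches the final line for the `Metrics` dictionary described in the Evaluation Result Format section; the function names and the choice of the last line are assumptions.

```python
import json
from pathlib import Path
from typing import Any


def find_metrics(node: Any) -> dict | None:
    """Recursively search a parsed JSON structure for a "Metrics" dictionary."""
    if isinstance(node, dict):
        if isinstance(node.get("Metrics"), dict):
            return node["Metrics"]
        children = node.values()
    elif isinstance(node, list):
        children = node
    else:
        return None
    for child in children:
        found = find_metrics(child)
        if found is not None:
            return found
    return None


def thresholds_from_previous_result(jsonl_path: str) -> dict | None:
    """Read the last line of a previous result.jsonl and extract its Metrics."""
    last_line = Path(jsonl_path).expanduser().read_text().splitlines()[-1]
    return find_metrics(json.loads(last_line))


# Usage (path is illustrative):
# thresholds = thresholds_from_previous_result("~/out/annotationless_perception/result.jsonl")
```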

##### WebAutoCLI

```shell
webauto ci scenario run --project-id ${project-id} --scenario-id ${scenario-id} --scenario-version-id ${scenario-version-id} --simulator-parameter-overrides annotationless_threshold_file=${previous_test_result.jsonl_path},annotationless_pass_range=${lower-upper}
```

##### Autoware Evaluator

Add to parameters in the simulator configuration in `.webauto-ci.yml`.

```yaml
simulations:
  - name: annotationless_perception
    type: annotationless_perception
    simulator:
      deployment:
        type: container
        artifact: main
      runtime:
        type: simulator/standard1/amd64/medium
      parameters:
        annotationless_threshold_file: ${previous_test_result.jsonl_path}
        annotationless_pass_range: ${lower-upper}
```

## Arguments passed to logging_simulator.launch

To reduce the processing load on Autoware, modules that are not relevant to the evaluation are disabled via launch arguments.
The following arguments are passed (a small sketch follows the list below):

- perception: true
- planning: false
- control: false
- sensing: false / true (default false, can be set by a launch argument)
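
As a rough sketch of how these defaults fit together, assuming only the values listed above (the function and dictionary are illustrative, not the actual launch code):

```python
def logging_simulator_arguments(sensing: bool = False) -> dict[str, str]:
    """Build the module on/off arguments; only sensing is switchable from the command line."""
    return {
        "perception": "true",
        "planning": "false",
        "control": "false",
        "sensing": "true" if sensing else "false",
    }


print(logging_simulator_arguments())              # default: sensing disabled
print(logging_simulator_arguments(sensing=True))  # equivalent to passing sensing:=true
```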

### How to specify the sensing argument

#### driving-log-replayer-cli

```shell
dlr simulation run -p annotationless_perception -l "sensing:=true"
```

#### WebAutoCLI

```shell
webauto ci scenario run --project-id ${project-id} --scenario-id ${scenario-id} --scenario-version-id ${scenario-version-id} --simulator-parameter-overrides sensing=true
```

#### Autoware Evaluator

Add to parameters in the simulator configuration in `.webauto-ci.yml`.

```yaml
simulations:
  - name: annotationless_perception
    type: annotationless_perception
    simulator:
      deployment:
        type: container
        artifact: main
      runtime:
        type: simulator/standard1/amd64/medium
      parameters:
        sensing: "true"
```

## simulation

State the information required to run the simulation.

### Topic to be included in the input rosbag

| Topic name | Data type |
| -------------------------------------- | -------------------------------------------- |
| /gsm8/from_can_bus | can_msgs/msg/Frame |
| /localization/kinematic_state | nav_msgs/msg/Odometry |
| /sensing/gnss/ublox/fix_velocity | geometry_msgs/msg/TwistWithCovarianceStamped |
| /sensing/gnss/ublox/nav_sat_fix | sensor_msgs/msg/NavSatFix |
| /sensing/gnss/ublox/navpvt | ublox_msgs/msg/NavPVT |
| /sensing/imu/tamagawa/imu_raw | sensor_msgs/msg/Imu |
| /sensing/lidar/concatenated/pointcloud | sensor_msgs/msg/PointCloud2 |
| /sensing/lidar/\*/velodyne_packets | velodyne_msgs/VelodyneScan |
| /tf | tf2_msgs/msg/TFMessage |

The vehicle topics can be included instead of CAN.

| Topic name | Data type |
| -------------------------------------- | --------------------------------------------------- |
| /localization/kinematic_state | nav_msgs/msg/Odometry |
| /sensing/gnss/ublox/fix_velocity | geometry_msgs/msg/TwistWithCovarianceStamped |
| /sensing/gnss/ublox/nav_sat_fix | sensor_msgs/msg/NavSatFix |
| /sensing/gnss/ublox/navpvt | ublox_msgs/msg/NavPVT |
| /sensing/imu/tamagawa/imu_raw | sensor_msgs/msg/Imu |
| /sensing/lidar/concatenated/pointcloud | sensor_msgs/msg/PointCloud2 |
| /sensing/lidar/\*/velodyne_packets | velodyne_msgs/VelodyneScan |
| /tf | tf2_msgs/msg/TFMessage |
| /vehicle/status/control_mode | autoware_auto_vehicle_msgs/msg/ControlModeReport |
| /vehicle/status/gear_status | autoware_auto_vehicle_msgs/msg/GearReport |
| /vehicle/status/steering_status        | autoware_auto_vehicle_msgs/msg/SteeringReport        |
| /vehicle/status/turn_indicators_status | autoware_auto_vehicle_msgs/msg/TurnIndicatorsReport |
| /vehicle/status/velocity_status | autoware_auto_vehicle_msgs/msg/VelocityReport |

### Topics that must not be included in the input rosbag

| Topic name | Data type |
| ---------- | ----------------------- |
| /clock | rosgraph_msgs/msg/Clock |

The clock is published by the `--clock` option of `ros2 bag play`. If the bag itself also contained `/clock`, it would be published twice, so it must not be included in the bag.

## evaluation

State the information necessary for the evaluation.

### Scenario Format

See [sample](https://github.com/tier4/driving_log_replayer/blob/main/sample/annotationless_perception/scenario.yaml)

### Evaluation Result Format

See [sample](https://github.com/tier4/driving_log_replayer/blob/main/sample/annotationless_perception/result.json)

The format of each frame and the metrics format are shown below.
**NOTE: the common parts of the result file format, which have already been explained, are omitted.**

```json
{
  "Deviation": {
    "Result": { "Total": "Success or Fail", "Frame": "Success or Fail" }, // The results for Total and Frame are the same. The same values are output to make the data structure the same as other evaluations.
    "Info": {
      "lateral_deviation": {
        "min": "Minimum distance",
        "max": "Maximum distance",
        "mean": "Mean distance"
      },
      "yaw_deviation": {
        "min": "Minimum Angle Difference",
        "max": "Maximum Angle Difference",
        "mean": "Mean Angle Difference"
      },
      "predicted_path_deviation_5.00": {
        "min": "Minimum distance",
        "max": "Maximum distance",
        "mean": "Mean distance"
      },
      "predicted_path_deviation_3.00": {
        "min": "Minimum distance",
        "max": "Maximum distance",
        "mean": "Mean distance"
      },
      "predicted_path_deviation_2.00": {
        "min": "Minimum distance",
        "max": "Maximum distance",
        "mean": "Mean distance"
      },
      "predicted_path_deviation_1.00": {
        "min": "Minimum distance",
        "max": "Maximum distance",
        "mean": "Mean distance"
      }
    },
    "Metrics": {
      "lateral_deviation": {
        "min": "Average Minimum distance",
        "max": "Average Maximum distance",
        "mean": "Average Mean distance"
      },
      "yaw_deviation": {
        "min": "Average Minimum Angle Difference",
        "max": "Average Maximum Angle Difference",
        "mean": "Average Mean Angle Difference"
      },
      "predicted_path_deviation_5.00": {
        "min": "Average Minimum distance",
        "max": "Average Maximum distance",
        "mean": "Average Mean distance"
      },
      "predicted_path_deviation_3.00": {
        "min": "Average Minimum distance",
        "max": "Average Maximum distance",
        "mean": "Average Mean distance"
      },
      "predicted_path_deviation_2.00": {
        "min": "Average Minimum distance",
        "max": "Average Maximum distance",
        "mean": "Average Mean distance"
      },
      "predicted_path_deviation_1.00": {
        "min": "Average Minimum distance",
        "max": "Average Maximum distance",
        "mean": "Average Mean distance"
      }
    }
  }
}
```

See the figures below for the meaning of each item.

![lateral_deviation](./images/lateral_deviation.png)

![predicted_path_deviation](./images/predicted_path_deviation.png)