Commit 05e0d1f

chore: update URLs for v1.0 in README

Signed-off-by: Ryohsuke Mitsudome <ryohsuke.mitsudome@tier4.jp>

1 parent 90d5b94 · commit 05e0d1f

36 files changed: +68 -68 lines changed

common/tier4_logging_level_configure_rviz_plugin/README.md

+1-1
@@ -6,4 +6,4 @@ This package provides an rviz_plugin that can easily change the logger level of
 
 This plugin dispatches services to the "logger name" associated with "nodes" specified in YAML, adjusting the logger level.
 
-As of November 2023, in ROS 2 Humble, users are required to initiate a service server in the node to use this feature. (This might be integrated into ROS standards in the future.) For easy service server generation, you can use the [LoggerLevelConfigure](https://github.com/autowarefoundation/autoware.universe/blob/main/common/tier4_autoware_utils/include/tier4_autoware_utils/ros/logger_level_configure.hpp) utility.
+As of November 2023, in ROS 2 Humble, users are required to initiate a service server in the node to use this feature. (This might be integrated into ROS standards in the future.) For easy service server generation, you can use the [LoggerLevelConfigure](https://github.com/autowarefoundation/autoware.universe/blob/v1.0/common/tier4_autoware_utils/include/tier4_autoware_utils/ros/logger_level_configure.hpp) utility.
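
For reference, a minimal sketch of how a node can stand up this service server with the utility above (the constructor signature is assumed from the linked `logger_level_configure.hpp`; verify it against your checkout):

```cpp
// Minimal sketch, assuming LoggerLevelConfigure takes a pointer to the owning node
// (as suggested by the header linked above); check logger_level_configure.hpp for
// the exact signature in your version.
#include <rclcpp/rclcpp.hpp>
#include <tier4_autoware_utils/ros/logger_level_configure.hpp>

#include <memory>

class ExampleNode : public rclcpp::Node
{
public:
  ExampleNode() : Node("example_node")
  {
    // Creates the service server that the rviz plugin calls to change this node's logger level.
    logger_configure_ = std::make_unique<tier4_autoware_utils::LoggerLevelConfigure>(this);
  }

private:
  std::unique_ptr<tier4_autoware_utils::LoggerLevelConfigure> logger_configure_;
};
```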

localization/pose_initializer/README.md

+1-1
@@ -50,6 +50,6 @@ This node depends on the map height fitter library.
 
 ## Connection with Default AD API
 
-This `pose_initializer` is used via default AD API. For detailed description of the API description, please refer to [the description of `default_ad_api`](https://github.com/autowarefoundation/autoware.universe/blob/main/system/default_ad_api/document/localization.md).
+This `pose_initializer` is used via default AD API. For detailed description of the API description, please refer to [the description of `default_ad_api`](https://github.com/autowarefoundation/autoware.universe/blob/v1.0/system/default_ad_api/document/localization.md).
 
 <img src="../../system/default_ad_api/document/images/localization.drawio.svg" alt="drawing" width="800"/>

localization/yabloc/yabloc_pose_initializer/README.md

+1-1
@@ -4,7 +4,7 @@ This package contains a node related to initial pose estimation.
 
 - [camera_pose_initializer](#camera_pose_initializer)
 
-This package requires the pre-trained semantic segmentation model for runtime. This model is usually downloaded by `ansible` during env preparation phase of the [installation](https://autowarefoundation.github.io/autoware-documentation/main/installation/autoware/source-installation/).
+This package requires the pre-trained semantic segmentation model for runtime. This model is usually downloaded by `ansible` during env preparation phase of the [installation](https://autowarefoundation.github.io/autoware-documentation/v1.0/installation/autoware/source-installation/).
 It is also possible to download it manually. Even if the model is not downloaded, initialization will still complete, but the accuracy may be compromised.
 
 To download and extract the model manually:

map/map_loader/README.md

+7-7
@@ -22,14 +22,14 @@ NOTE: **We strongly recommend to use divided maps when using large pointcloud ma
 
 You may provide either a single .pcd file or multiple .pcd files. If you are using multiple PCD data and either of `enable_partial_load`, `enable_differential_load` or `enable_selected_load` are set true, it MUST obey the following rules:
 
-1. **The pointcloud map should be projected on the same coordinate defined in `map_projection_loader`**, in order to be consistent with the lanelet2 map and other packages that converts between local and geodetic coordinates. For more information, please refer to [the readme of `map_projection_loader`](https://github.com/autowarefoundation/autoware.universe/tree/main/map/map_projection_loader/README.md).
+1. **The pointcloud map should be projected on the same coordinate defined in `map_projection_loader`**, in order to be consistent with the lanelet2 map and other packages that converts between local and geodetic coordinates. For more information, please refer to [the readme of `map_projection_loader`](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/map/map_projection_loader/README.md).
 2. **It must be divided by straight lines parallel to the x-axis and y-axis**. The system does not support division by diagonal lines or curved lines.
 3. **The division size along each axis should be equal.**
-4. **The division size should be about 20m x 20m.** Particularly, care should be taken as using too large division size (for example, more than 100m) may have adverse effects on dynamic map loading features in [ndt_scan_matcher](https://github.com/autowarefoundation/autoware.universe/tree/main/localization/ndt_scan_matcher) and [compare_map_segmentation](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/compare_map_segmentation).
+4. **The division size should be about 20m x 20m.** Particularly, care should be taken as using too large division size (for example, more than 100m) may have adverse effects on dynamic map loading features in [ndt_scan_matcher](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/localization/ndt_scan_matcher) and [compare_map_segmentation](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/perception/compare_map_segmentation).
 5. **All the split maps should not overlap with each other.**
 6. **Metadata file should also be provided.** The metadata structure description is provided below.
 
-Note that these rules are not applicable when `enable_partial_load`, `enable_differential_load` and `enable_selected_load` are all set false. In this case, however, you also need to disable dynamic map loading mode for other nodes as well ([ndt_scan_matcher](https://github.com/autowarefoundation/autoware.universe/tree/main/localization/ndt_scan_matcher) and [compare_map_segmentation](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/compare_map_segmentation) as of June 2023).
+Note that these rules are not applicable when `enable_partial_load`, `enable_differential_load` and `enable_selected_load` are all set false. In this case, however, you also need to disable dynamic map loading mode for other nodes as well ([ndt_scan_matcher](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/localization/ndt_scan_matcher) and [compare_map_segmentation](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/perception/compare_map_segmentation) as of June 2023).
 
 #### Metadata structure
 
@@ -88,28 +88,28 @@ The node publishes the downsampled pointcloud map loaded from the `.pcd` file(s)
 
 #### Publish metadata of pointcloud map (ROS 2 topic)
 
-The node publishes the pointcloud metadata attached with an ID. Metadata is loaded from the `.yaml` file. Please see [the description of `PointCloudMapMetaData.msg`](https://github.com/autowarefoundation/autoware_msgs/tree/main/autoware_map_msgs#pointcloudmapmetadatamsg) for details.
+The node publishes the pointcloud metadata attached with an ID. Metadata is loaded from the `.yaml` file. Please see [the description of `PointCloudMapMetaData.msg`](https://github.com/autowarefoundation/autoware_msgs/tree/v1.0/autoware_map_msgs#pointcloudmapmetadatamsg) for details.
 
 #### Send partial pointcloud map (ROS 2 service)
 
 Here, we assume that the pointcloud maps are divided into grids.
 
 Given a query from a client node, the node sends a set of pointcloud maps that overlaps with the queried area.
-Please see [the description of `GetPartialPointCloudMap.srv`](https://github.com/autowarefoundation/autoware_msgs/tree/main/autoware_map_msgs#getpartialpointcloudmapsrv) for details.
+Please see [the description of `GetPartialPointCloudMap.srv`](https://github.com/autowarefoundation/autoware_msgs/tree/v1.0/autoware_map_msgs#getpartialpointcloudmapsrv) for details.
 
 #### Send differential pointcloud map (ROS 2 service)
 
 Here, we assume that the pointcloud maps are divided into grids.
 
 Given a query and set of map IDs, the node sends a set of pointcloud maps that overlap with the queried area and are not included in the set of map IDs.
-Please see [the description of `GetDifferentialPointCloudMap.srv`](https://github.com/autowarefoundation/autoware_msgs/tree/main/autoware_map_msgs#getdifferentialpointcloudmapsrv) for details.
+Please see [the description of `GetDifferentialPointCloudMap.srv`](https://github.com/autowarefoundation/autoware_msgs/tree/v1.0/autoware_map_msgs#getdifferentialpointcloudmapsrv) for details.
 
 #### Send selected pointcloud map (ROS 2 service)
 
 Here, we assume that the pointcloud maps are divided into grids.
 
 Given IDs query from a client node, the node sends a set of pointcloud maps (each of which attached with unique ID) specified by query.
-Please see [the description of `GetSelectedPointCloudMap.srv`](https://github.com/autowarefoundation/autoware_msgs/tree/main/autoware_map_msgs#getselectedpointcloudmapsrv) for details.
+Please see [the description of `GetSelectedPointCloudMap.srv`](https://github.com/autowarefoundation/autoware_msgs/tree/v1.0/autoware_map_msgs#getselectedpointcloudmapsrv) for details.
 
 ### Parameters
 
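
To illustrate how a client node might call one of these services, here is a hedged C++ sketch for the partial-map query. The service name and the request/response field layout used below (`area` with `center_x`/`center_y`/`radius`, and `new_pointcloud_with_ids` in the response) are assumptions based on the linked `autoware_map_msgs` description; check the `.srv` definitions in your checkout before relying on them.

```cpp
// Hedged sketch of a GetPartialPointCloudMap client.
// Assumptions to verify against autoware_map_msgs: the service name
// ("/map/get_partial_pointcloud_map") and the request/response field names used below.
#include <autoware_map_msgs/srv/get_partial_point_cloud_map.hpp>
#include <rclcpp/rclcpp.hpp>

#include <chrono>
#include <memory>

using GetPartialPointCloudMap = autoware_map_msgs::srv::GetPartialPointCloudMap;

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = rclcpp::Node::make_shared("partial_map_client_example");
  auto client = node->create_client<GetPartialPointCloudMap>("/map/get_partial_pointcloud_map");

  while (!client->wait_for_service(std::chrono::seconds(1))) {
    RCLCPP_INFO(node->get_logger(), "waiting for the partial pointcloud map service...");
  }

  auto request = std::make_shared<GetPartialPointCloudMap::Request>();
  // Assumed request layout: query a 100 m radius around a position in map coordinates.
  request->area.center_x = 100.0;
  request->area.center_y = 200.0;
  request->area.radius = 100.0;

  auto future = client->async_send_request(request);
  if (rclcpp::spin_until_future_complete(node, future) == rclcpp::FutureReturnCode::SUCCESS) {
    // Assumed response field: the map cells overlapping the queried area.
    RCLCPP_INFO(
      node->get_logger(), "received %zu map cells", future.get()->new_pointcloud_with_ids.size());
  }

  rclcpp::shutdown();
  return 0;
}
```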

map/map_projection_loader/src/map_projection_loader.cpp

+1-1
@@ -73,7 +73,7 @@ MapProjectionLoader::MapProjectionLoader() : Node("map_projection_loader")
     this->get_logger(),
     "DEPRECATED WARNING: Loading map projection info from lanelet2 map may soon be deleted. "
     "Please use map_projector_info.yaml instead. For more info, visit "
-    "https://github.com/autowarefoundation/autoware.universe/blob/main/map/map_projection_loader/"
+    "https://github.com/autowarefoundation/autoware.universe/blob/v1.0/map/map_projection_loader/"
     "README.md");
   msg = load_info_from_lanelet2_map(lanelet2_map_filename);
 }

perception/lidar_apollo_segmentation_tvm/README.md

+1-1
@@ -7,7 +7,7 @@
 #### Neural network
 
 This package will not run without a neural network for its inference.
-The network is provided by ansible script during the installation of Autoware or can be downloaded manually according to [Manual Downloading](https://github.com/autowarefoundation/autoware/tree/main/ansible/roles/artifacts).
+The network is provided by ansible script during the installation of Autoware or can be downloaded manually according to [Manual Downloading](https://github.com/autowarefoundation/autoware/tree/v1.0/ansible/roles/artifacts).
 This package uses 'get_neural_network' function from tvm_utility package to create and provide proper dependency.
 See its design page for more information on how to handle user-compiled networks.

perception/lidar_centerpoint/launch/centerpoint_vs_centerpoint-tiny/README.md

+2-2
@@ -4,7 +4,7 @@ This tutorial is for showing `centerpoint` and `centerpoint_tiny`models’ resul
 
 ## Setup Development Environment
 
-Follow the steps in the Source Installation ([link](https://autowarefoundation.github.io/autoware-documentation/main/installation/autoware/source-installation/)) in Autoware doc.
+Follow the steps in the Source Installation ([link](https://autowarefoundation.github.io/autoware-documentation/v1.0/installation/autoware/source-installation/)) in Autoware doc.
 
 If you fail to build autoware environment according to lack of memory, then it is recommended to build autoware sequentially.
 
@@ -42,7 +42,7 @@ ros2 bag play /YOUR/ROSBAG/PATH/ --clock 100
 
 Don't forget to add `clock` in order to sync between two rviz display.
 
-You can also use the sample rosbag provided by autoware [here](https://autowarefoundation.github.io/autoware-documentation/main/tutorials/ad-hoc-simulation/rosbag-replay-simulation/).
+You can also use the sample rosbag provided by autoware [here](https://autowarefoundation.github.io/autoware-documentation/v1.0/tutorials/ad-hoc-simulation/rosbag-replay-simulation/).
 
 If you want to merge several rosbags into one, you can refer to [this tool](https://github.com/jerry73204/rosbag2-merge).
 

perception/object_merger/README.md

+1-1
@@ -33,7 +33,7 @@ The successive shortest path algorithm is used to solve the data association pro
 | `min_area_matrix` | double | Minimum area table for data association |
 | `max_rad_matrix` | double | Maximum angle table for data association |
 | `base_link_frame_id` | double | association frame |
-| `distance_threshold_list` | `std::vector<double>` | Distance threshold for each class used in judging overlap. The class order depends on [ObjectClassification](https://github.com/tier4/autoware_auto_msgs/blob/tier4/main/autoware_auto_perception_msgs/msg/ObjectClassification.idl). |
+| `distance_threshold_list` | `std::vector<double>` | Distance threshold for each class used in judging overlap. The class order depends on [ObjectClassification](https://github.com/tier4/autoware_auto_msgs/blob/tier4/v1.0/autoware_auto_perception_msgs/msg/ObjectClassification.idl). |
 | `generalized_iou_threshold` | `std::vector<double>` | Generalized IoU threshold for each class |
 
 ## Tips

perception/radar_crossing_objects_noise_filter/README.md

+1-1
@@ -24,7 +24,7 @@ Velocity estimation fails on static objects, resulting in ghost objects crossing
 
 - 2. Turning around by ego vehicle affect the output from radar.
 
-When the ego vehicle turns around, the radars outputting at the object level sometimes fail to estimate the twist of objects correctly even if [radar_tracks_msgs_converter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/radar_tracks_msgs_converter) compensates by the ego vehicle twist.
+When the ego vehicle turns around, the radars outputting at the object level sometimes fail to estimate the twist of objects correctly even if [radar_tracks_msgs_converter](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/perception/radar_tracks_msgs_converter) compensates by the ego vehicle twist.
 So if an object detected by radars has circular motion viewing from base_link, it is likely that the speed is estimated incorrectly and that the object is a static object.
 
 The example is below figure.
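
To make the reasoning concrete, the check this implies can be sketched as below (illustrative only; it is not necessarily the package's exact filter, and the threshold names are invented here): an object whose velocity is nearly perpendicular to its line of sight from base_link appears to move on a circle around the ego vehicle and is therefore likely a mis-estimated static object.

```cpp
// Illustrative sketch only, not the package's actual implementation:
// flag a radar object as likely crossing noise when its velocity is nearly
// perpendicular to the line of sight from base_link and its speed is non-negligible.
#include <cmath>

struct Object2D
{
  double x;   // position in base_link [m]
  double y;
  double vx;  // velocity in base_link [m/s]
  double vy;
};

bool is_likely_crossing_noise(
  const Object2D & obj, const double angle_threshold_rad, const double velocity_threshold_mps)
{
  const double speed = std::hypot(obj.vx, obj.vy);
  if (speed < velocity_threshold_mps) {
    return false;  // slow objects are not judged by this heuristic
  }
  // Angle between the line-of-sight vector (x, y) and the velocity vector (vx, vy).
  const double los_angle = std::atan2(obj.y, obj.x);
  const double vel_angle = std::atan2(obj.vy, obj.vx);
  double diff = std::fabs(vel_angle - los_angle);
  if (diff > M_PI) {
    diff = 2.0 * M_PI - diff;
  }
  // Near 90 degrees: the object appears to circle base_link, so treat it as noise.
  return std::fabs(diff - M_PI / 2.0) < angle_threshold_rad;
}
```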

perception/radar_object_clustering/README.md

+2-2
@@ -2,7 +2,7 @@
 
 This package contains a radar object clustering for [autoware_auto_perception_msgs/msg/DetectedObject](https://gitlab.com/autowarefoundation/autoware.auto/autoware_auto_msgs/-/blob/master/autoware_auto_perception_msgs/msg/DetectedObject.idl) input.
 
-This package can make clustered objects from radar DetectedObjects, the objects which is converted from RadarTracks by [radar_tracks_msgs_converter](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/radar_tracks_msgs_converter) and is processed by noise filter.
+This package can make clustered objects from radar DetectedObjects, the objects which is converted from RadarTracks by [radar_tracks_msgs_converter](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/perception/radar_tracks_msgs_converter) and is processed by noise filter.
 In other word, this package can combine multiple radar detections from one object into one and adjust class and size.
 
 ![radar_clustering](docs/radar_clustering.drawio.svg)
@@ -44,7 +44,7 @@ When the size information from radar outputs lack accuracy, `is_fixed_size` para
 If the parameter is true, the size of a clustered object is overwritten by the label set by `size_x`, `size_y`, and `size_z` parameters.
 If this package use for faraway dynamic object detection with radar, the parameter is recommended to set to
 `size_x`, `size_y`, `size_z`, as average of vehicle size.
-Note that to use for [multi_objects_tracker](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/multi_object_tracker), the size parameters need to exceed `min_area_matrix` parameters of it.
+Note that to use for [multi_objects_tracker](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/perception/multi_object_tracker), the size parameters need to exceed `min_area_matrix` parameters of it.
 
 ### Limitation
 

perception/simple_object_merger/README.md

+5-5
@@ -7,9 +7,9 @@ This package can merge multiple topics of [autoware_auto_perception_msgs/msg/Det
 ### Background
 
 This package can merge multiple DetectedObjects without matching processing.
-[Object_merger](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/object_merger) solve data association algorithm like Hungarian algorithm for matching problem, but it needs computational cost.
-In addition, for now, [object_merger](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/object_merger) can handle only 2 DetectedObjects topics and cannot handle more than 2 topics in one node.
-To merge 6 DetectedObjects topics, 6 [object_merger](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/object_merger) nodes need to stand.
+[Object_merger](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/perception/object_merger) solve data association algorithm like Hungarian algorithm for matching problem, but it needs computational cost.
+In addition, for now, [object_merger](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/perception/object_merger) can handle only 2 DetectedObjects topics and cannot handle more than 2 topics in one node.
+To merge 6 DetectedObjects topics, 6 [object_merger](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/perception/object_merger) nodes need to stand.
 
 So this package aim to merge DetectedObjects simply.
 This package do not use data association algorithm to reduce the computational cost, and it can handle more than 2 topics in one node to prevent launching a large number of nodes.
@@ -27,7 +27,7 @@ The timeout parameter should be determined by sensor cycle time.
 - Post-processing
 
 Because this package does not have matching processing, so it can be used only when post-processing is used.
-For now, [clustering processing](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/radar_object_clustering) can be used as post-processing.
+For now, [clustering processing](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/perception/radar_object_clustering) can be used as post-processing.
 
 ### Use case
 
@@ -36,7 +36,7 @@ Use case is as below.
 - Multiple radar detection
 
 This package can be used for multiple radar detection.
-Since [clustering processing](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/radar_object_clustering) will be included later process in radar faraway detection, this package can be used.
+Since [clustering processing](https://github.com/autowarefoundation/autoware.universe/tree/v1.0/perception/radar_object_clustering) will be included later process in radar faraway detection, this package can be used.
 
 ## Input
 
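
Since the merge is plain concatenation rather than association, the core of the idea can be sketched as below (a hedged sketch; the real node's timeout handling, topic wiring, and frame handling may differ):

```cpp
// Hedged sketch of merging without data association: concatenate the object arrays of
// all inputs that are still fresh, instead of solving a matching problem.
#include <autoware_auto_perception_msgs/msg/detected_objects.hpp>
#include <rclcpp/time.hpp>

#include <vector>

using autoware_auto_perception_msgs::msg::DetectedObjects;

DetectedObjects merge_without_association(
  const std::vector<DetectedObjects> & inputs, const rclcpp::Time & now, const double timeout_sec)
{
  DetectedObjects merged;
  if (inputs.empty()) {
    return merged;
  }
  merged.header = inputs.front().header;  // assumes all inputs share the same frame_id
  for (const auto & input : inputs) {
    // Skip a topic whose latest message is older than the timeout
    // (the timeout should be chosen from the sensor cycle time, as noted above).
    if ((now - rclcpp::Time(input.header.stamp)).seconds() > timeout_sec) {
      continue;
    }
    merged.objects.insert(merged.objects.end(), input.objects.begin(), input.objects.end());
  }
  return merged;
}
```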

perception/tensorrt_yolo/README.md

+1-1
@@ -72,7 +72,7 @@ This package includes multiple licenses.
 
 All YOLO ONNX models are converted from the officially trained model. If you need information about training datasets and conditions, please refer to the official repositories.
 
-All models are downloaded during env preparation by ansible (as mention in [installation](https://autowarefoundation.github.io/autoware-documentation/main/installation/autoware/source-installation/)). It is also possible to download them manually, see [Manual downloading of artifacts](https://github.com/autowarefoundation/autoware/tree/main/ansible/roles/artifacts) . When launching the node with a model for the first time, the model is automatically converted to TensorRT, although this may take some time.
+All models are downloaded during env preparation by ansible (as mention in [installation](https://autowarefoundation.github.io/autoware-documentation/v1.0/installation/autoware/source-installation/)). It is also possible to download them manually, see [Manual downloading of artifacts](https://github.com/autowarefoundation/autoware/tree/v1.0/ansible/roles/artifacts) . When launching the node with a model for the first time, the model is automatically converted to TensorRT, although this may take some time.
 
 ### YOLOv3
 
