We provide lots of useful tools under the `tools/` directory.
`tools/analysis/mot/mot_param_search.py` can search the parameters of the `tracker` in MOT models. It is used in the same manner as `tools/test.py` but differs in the configs.

Here is an example that shows how to modify the configs:
- Define the desirable evaluation metrics to record. For example, you can define the search metrics as

  ```python
  search_metrics = ['MOTA', 'IDF1', 'FN', 'FP', 'IDs', 'MT', 'ML']
  ```
- Define the parameters and the values to search. Assume you have a tracker like

  ```python
  model = dict(
      tracker=dict(
          type='BaseTracker',
          obj_score_thr=0.5,
          match_iou_thr=0.5))
  ```

  If you want to search the parameters of the tracker, just change the value to a list as follows

  ```python
  model = dict(
      tracker=dict(
          type='BaseTracker',
          obj_score_thr=[0.4, 0.5, 0.6],
          match_iou_thr=[0.4, 0.5, 0.6, 0.7]))
  ```

  Then the script will test 12 cases in total and log the results.
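As a sanity check, the number of cases is simply the Cartesian product of the listed values. The sketch below only illustrates how the 3 × 4 grid expands; the actual enumeration is handled internally by `mot_param_search.py`.

```python
import itertools

# Illustration only: the listed values form a 3 x 4 grid, i.e. 12 cases in total.
# The actual enumeration and testing is done inside mot_param_search.py.
obj_score_thrs = [0.4, 0.5, 0.6]
match_iou_thrs = [0.4, 0.5, 0.6, 0.7]

for obj_score_thr, match_iou_thr in itertools.product(obj_score_thrs, match_iou_thrs):
    print(f'case: obj_score_thr={obj_score_thr}, match_iou_thr={match_iou_thr}')
```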
`tools/analysis/sot/sot_siamrpn_param_search.py` can search the test-time tracking parameters in SiameseRPN++: `penalty_k`, `lr` and `window_influence`. You need to pass the search range of each parameter to the argument parser.
Example on UAV123 dataset:

```shell
./tools/analysis/sot/dist_sot_siamrpn_param_search.sh [${CONFIG_FILE}] [$GPUS] \
    [--checkpoint ${CHECKPOINT}] [--log ${LOG_FILENAME}] [--eval ${EVAL}] \
    [--penalty-k-range 0.01,0.22,0.05] [--lr-range 0.4,0.61,0.05] [--win-infu-range 0.01,0.22,0.05]
```
Example on OTB100 dataset:

```shell
./tools/analysis/sot/dist_sot_siamrpn_param_search.sh [${CONFIG_FILE}] [$GPUS] \
    [--checkpoint ${CHECKPOINT}] [--log ${LOG_FILENAME}] [--eval ${EVAL}] \
    [--penalty-k-range 0.3,0.45,0.02] [--lr-range 0.35,0.5,0.02] [--win-infu-range 0.46,0.55,0.02]
```
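The three comma-separated numbers in each range are most naturally read as start, stop and step. Assuming that interpretation (check the script's argparser for the exact semantics), the UAV123 ranges above expand roughly like this:

```python
import numpy as np

# Assumption: each range string means start,stop,step, expanded e.g. with np.arange.
# Verify against the argparser of sot_siamrpn_param_search.py before relying on this.
penalty_k_candidates = np.arange(0.01, 0.22, 0.05)         # 0.01, 0.06, 0.11, 0.16, 0.21
lr_candidates = np.arange(0.4, 0.61, 0.05)                 # 0.40, 0.45, 0.50, 0.55, 0.60
window_influence_candidates = np.arange(0.01, 0.22, 0.05)  # same grid as penalty_k

print(len(penalty_k_candidates) * len(lr_candidates) * len(window_influence_candidates),
      'parameter combinations to evaluate')
```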
`tools/analysis/analyze_logs.py` plots loss/mAP curves given a training log file.

```shell
python tools/analysis/analyze_logs.py plot_curve [--keys ${KEYS}] [--title ${TITLE}] [--legend ${LEGEND}] [--backend ${BACKEND}] [--style ${STYLE}] [--out ${OUT_FILE}]
```
Examples:

- Plot the classification loss of some run.

  ```shell
  python tools/analysis/analyze_logs.py plot_curve log.json --keys loss_cls --legend loss_cls
  ```

- Plot the classification and regression loss of some run, and save the figure to a pdf.

  ```shell
  python tools/analysis/analyze_logs.py plot_curve log.json --keys loss_cls loss_bbox --out losses.pdf
  ```

- Compare the bbox mAP of two runs in the same figure.

  ```shell
  python tools/analysis/analyze_logs.py plot_curve log1.json log2.json --keys bbox_mAP --legend run1 run2
  ```

- Compute the average training speed.

  ```shell
  python tools/analysis/analyze_logs.py cal_train_time log.json [--include-outliers]
  ```
The output is expected to be like the following.

```text
-----Analyze train time of work_dirs/some_exp/20190611_192040.log.json-----
slowest epoch 11, average time is 1.2024
fastest epoch 1, average time is 1.1909
time std over epochs is 0.0028
average iter time: 1.1959 s/iter
```
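The training log is a JSON-lines file (one dict per logged step), so you can also post-process it yourself. Below is a minimal sketch, assuming keys such as `mode` and `time` (key names can vary between versions), that reproduces an average iteration time similar to `cal_train_time`:

```python
import json

# Minimal sketch: average per-iteration time from a JSON-lines training log.
# Assumes training records have mode == 'train' and a per-iteration 'time' field;
# adjust the key names if your log format differs.
def average_iter_time(log_file):
    times = []
    with open(log_file) as f:
        for line in f:
            record = json.loads(line)
            if record.get('mode') == 'train' and 'time' in record:
                times.append(record['time'])
    return sum(times) / len(times) if times else 0.0

print(f"average iter time: {average_iter_time('log.json'):.4f} s/iter")
```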
`tools/analysis/publish_model.py` helps users to prepare their model for publishing.
Before you upload a model to AWS, you may want to
- convert model weights to CPU tensors
- delete the optimizer states and
- compute the hash of the checkpoint file and append the hash id to the filename.
```shell
python tools/analysis/publish_model.py ${INPUT_FILENAME} ${OUTPUT_FILENAME}
```

E.g.,

```shell
python tools/analysis/publish_model.py work_dirs/dff_faster_rcnn_r101_dc5_1x_imagenetvid/latest.pth dff_faster_rcnn_r101_dc5_1x_imagenetvid.pth
```
The final output filename will be `dff_faster_rcnn_r101_dc5_1x_imagenetvid_20201230-{hash id}.pth`.
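Conceptually, the script carries out the three steps listed above. A simplified sketch of the idea (not the actual implementation, which also inserts a date stamp such as `_20201230` before the hash) could look like:

```python
import hashlib
import os

import torch

# Simplified sketch of the publishing steps above, not the actual script:
# load the checkpoint onto CPU, drop optimizer states, and append a short hash id.
def publish_model(in_file, out_file):
    checkpoint = torch.load(in_file, map_location='cpu')  # ensure CPU tensors
    checkpoint.pop('optimizer', None)                      # delete optimizer states
    torch.save(checkpoint, out_file)
    with open(out_file, 'rb') as f:
        sha = hashlib.sha256(f.read()).hexdigest()[:8]     # short hash id
    final_file = out_file.replace('.pth', f'-{sha}.pth')
    os.rename(out_file, final_file)
    return final_file
```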
`tools/analysis/print_config.py` prints the whole config verbatim, expanding all its imports.

```shell
python tools/analysis/print_config.py ${CONFIG} [-h] [--options ${OPTIONS [OPTIONS...]}]
```
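If you only need the expanded config for a single file, a roughly equivalent snippet (assuming the `mmcv` Config API; the script itself may handle extra options) is:

```python
from mmcv import Config

# Roughly equivalent to print_config.py for one config file; replace the path
# with your own config. All _base_ files are merged into the printed result.
cfg = Config.fromfile('path/to/your_config.py')
print(cfg.pretty_text)
```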