Useful Tools

We provide lots of useful tools under the tools/ directory.

MOT Error Visualize

tools/analysis_tools/mot/mot_error_visualize.py can visualize errors for multiple object tracking. This script requires the inference results as input. By default, a red bounding box denotes a false positive, a yellow bounding box denotes a false negative, and a blue bounding box denotes an ID switch.

python tools/analysis_tools/mot/mot_error_visualize.py \
    ${CONFIG_FILE} \
    --input ${INPUT} \
    --result-dir ${RESULT_DIR} \
    [--out-dir ${OUTPUT}] \
    [--fps ${FPS}] \
    [--show] \
    [--backend ${BACKEND}]

The RESULT_DIR contains the inference results of all videos, and each inference result is a txt file.

Optional arguments:

  • OUTPUT: Output of the visualized demo. If not specified, --show must be set to display the video on the fly.

  • FPS: FPS of the output video.

  • --show: Whether to show the video on the fly.

  • BACKEND: The backend used to visualize the boxes. Options are cv2 and plt.
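
For RESULT_DIR, here is a minimal sketch of checking the expected layout (one txt result file per video); the directory name is hypothetical and the check makes no assumption about the column format inside each file:

from pathlib import Path

# Hypothetical result directory; one txt file per video is expected.
result_dir = Path('./results')
for txt in sorted(result_dir.glob('*.txt')):
    with open(txt) as f:
        num_lines = sum(1 for _ in f)
    print(f'{txt.name}: {num_lines} result lines')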

Log Analysis

tools/analysis_tools/analyze_logs.py plots loss/mAP curves given a training log file.

python tools/analysis_tools/analyze_logs.py plot_curve [--keys ${KEYS}] [--title ${TITLE}] [--legend ${LEGEND}] [--backend ${BACKEND}] [--style ${STYLE}] [--out ${OUT_FILE}]

Examples:

  • Plot the classification loss of some run.

    python tools/analysis_tools/analyze_logs.py plot_curve log.json --keys loss_cls --legend loss_cls
    
  • Plot the classification and regression loss of some run, and save the figure to a pdf.

    python tools/analysis_tools/analyze_logs.py plot_curve log.json --keys loss_cls loss_bbox --out losses.pdf
    
  • Compare the bbox mAP of two runs in the same figure.

    python tools/analysis_tools/analyze_logs.py plot_curve log1.json log2.json --keys bbox_mAP --legend run1 run2
    
  • Compute the average training speed.

    python tools/analysis_tools/analyze_logs.py cal_train_time log.json [--include-outliers]
    

    The output is expected to look like the following:

    -----Analyze train time of work_dirs/some_exp/20190611_192040.log.json-----
    slowest epoch 11, average time is 1.2024
    fastest epoch 1, average time is 1.1909
    time std over epochs is 0.0028
    average iter time: 1.1959 s/iter
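
If you prefer to inspect a training log programmatically instead of through plot_curve, here is a minimal sketch; it assumes the log is a JSON-lines file in which each non-empty line is a dict of logged scalars (such as loss_cls), and it only approximates what the script does:

import json

import matplotlib.pyplot as plt

# Hypothetical log path; each non-empty line is assumed to be a JSON dict of scalars.
losses = []
with open('log.json') as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        if 'loss_cls' in record:
            losses.append(record['loss_cls'])

plt.plot(losses)
plt.xlabel('logged iteration')
plt.ylabel('loss_cls')
plt.savefig('loss_cls.png')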
    

Browse dataset

tools/analysis_tools/browse_dataset.py can visualize the training dataset to check whether the dataset configuration is correct.

Examples:

python tools/analysis_tools/browse_dataset.py ${CONFIG_FILE} [--show-interval ${SHOW_INTERVAL}]

Optional arguments:

  • SHOW_INTERVAL: The interval (in seconds) between showing two images.

  • --not-show: Do not show the images on the fly.
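
Before running the script, you can also sanity-check the dataset part of the config programmatically. A minimal sketch, assuming an OpenMMLab 2.x style config that defines a train_dataloader field; the config path is hypothetical:

from mmengine.config import Config

# Hypothetical config path; replace with your CONFIG_FILE.
cfg = Config.fromfile('configs/my_config.py')

# Print the dataset settings that browse_dataset.py will visualize.
print(cfg.train_dataloader.dataset)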

Show SOT evaluation results at the video level

The SOT evaluation results are sorted at the video level from largest to smallest by the Success metric. You can selectively show the results of good or bad cases by setting eval_show_video_indices.

test_evaluator=dict(
    type='SOTMetric',
    options_after_eval=dict(eval_show_video_indices=10))

Here, eval_show_video_indices is used to index a numpy.ndarray. It can be an int (positive or negative) or a list. A positive number k means the top-k results, while a negative number -k means the bottom-k results.
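
The snippet below is a rough illustration of this indexing behaviour with plain NumPy, not the actual SOTMetric implementation; the Success scores are made up, and the list case is assumed to select explicit video indices:

import numpy as np

# Made-up per-video Success scores, already sorted from largest to smallest.
success = np.array([0.81, 0.77, 0.70, 0.64, 0.52, 0.41])

print(success[:3])      # eval_show_video_indices=3   -> top-3 results
print(success[-2:])     # eval_show_video_indices=-2  -> bottom-2 results
print(success[[0, 4]])  # eval_show_video_indices=[0, 4] -> explicitly chosen videos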

Save SOT evaluation results and plot them

Save the SOT evaluation result by setting the SOTMetric in the config.

test_evaluator = dict(
    type='SOTMetric',
    options_after_eval=dict(
        tracker_name='SiamRPN++',
        saved_eval_res_file='./results/sot_results.json'))

The saved result is a dict in the format:

dict(tracker_name=dict(
    success=np.ndarray,
    norm_precision=np.ndarray,
    precision=np.ndarray))

The metrics have shape (M, ), where M is the number of values corresponding to different thresholds.
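
If you want to inspect the saved file directly, here is a minimal sketch; it assumes the arrays are serialized as plain lists inside the JSON file and reuses the saved_eval_res_file path from the example config above:

import json

import numpy as np

# Path taken from the example config above; adjust to your own saved_eval_res_file.
with open('./results/sot_results.json') as f:
    results = json.load(f)

for tracker_name, metrics in results.items():
    success = np.asarray(metrics['success'])
    precision = np.asarray(metrics['precision'])
    print(tracker_name, success.shape, precision.shape)  # expected to be (M,)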

Given the saved results, you can plot them using the following command:

python ./tools/analysis_tools/sot/sot_plot_curve.py ./results --plot_save_path ./results

Save tracked results and play them back

Save the tracked result by setting the SOTMetric in the config.

test_evaluator = dict(
    type='SOTMetric',
    options_after_eval=dict(saved_track_res_path='./tracked_results'))

Play back the tracked results using the following command:

python ./tools/analysis_tools/sot/sot_playback.py data/OTB100/data/Basketball/img/ tracked_results/basketball.txt --show --output results/basketball.mp4 --fps 20 --gt_bboxes data/OTB100/data/Basketball/groundtruth_rect.txt

Visualization of feature map

Here is an example of calling the Visualizer in MMEngine:

from mmengine.visualization import Visualizer

# call visualizer at any position
visualizer = Visualizer.get_current_instance()
# set the image (an RGB np.ndarray) as background
visualizer.set_image(image=image)
# draw the feature map (a torch.Tensor of shape (C, H, W)) on the image
drawn_img = visualizer.draw_featmap(feature_map, image, channel_reduction='squeeze_mean')
# show
visualizer.show(drawn_img)
# saved as ${saved_dir}/vis_data/vis_image/feature_map_0.png
visualizer.add_image('feature_map', drawn_img)

More details about feature map visualization can be found in the visualizer docs and the draw_featmap function.
