We provide many useful tools under the tools/ directory.

Log Analysis

tools/analysis/ plots loss/mAP curves given a training log file.

python tools/analysis/ plot_curve [--keys ${KEYS}] [--title ${TITLE}] [--legend ${LEGEND}] [--backend ${BACKEND}] [--style ${STYLE}] [--out ${OUT_FILE}]


  • Plot the classification loss of some run.

    python tools/analysis/ plot_curve log.json --keys loss_cls --legend loss_cls
  • Plot the classification and regression loss of some run, and save the figure to a pdf.

    python tools/analysis/ plot_curve log.json --keys loss_cls loss_bbox --out losses.pdf
  • Compare the bbox mAP of two runs in the same figure.

    python tools/analysis/ plot_curve log1.json log2.json --keys bbox_mAP --legend run1 run2
  • Compute the average training speed.

    python tools/analysis/ cal_train_time log.json [--include-outliers]

    The output is expected to be like the following.

    -----Analyze train time of work_dirs/some_exp/20190611_192040.log.json-----
    slowest epoch 11, average time is 1.2024
    fastest epoch 1, average time is 1.1909
    time std over epochs is 0.0028
    average iter time: 1.1959 s/iter
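The statistics above can also be reproduced by hand. The sketch below is a hypothetical `cal_train_time` helper, assuming the mmcv-style log format (one JSON dict per line, with `mode`, `epoch` and `time` fields, where `time` is seconds per iteration); it finds the slowest and fastest epochs and the overall average iteration time.

```python
import json
from collections import defaultdict

def cal_train_time(log_path):
    """Average per-iter training time, overall and per epoch.

    Assumes one JSON dict per line with 'mode', 'epoch' and
    'time' fields, as written by mmcv's JSON logger hook.
    """
    times = defaultdict(list)
    with open(log_path) as f:
        for line in f:
            record = json.loads(line)
            if record.get('mode') == 'train' and 'time' in record:
                times[record['epoch']].append(record['time'])
    epoch_avgs = {ep: sum(ts) / len(ts) for ep, ts in times.items()}
    slowest = max(epoch_avgs, key=epoch_avgs.get)
    fastest = min(epoch_avgs, key=epoch_avgs.get)
    all_times = [t for ts in times.values() for t in ts]
    return slowest, fastest, sum(all_times) / len(all_times)
```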

Model Conversion

Prepare a model for publishing

tools/analysis/ helps users prepare their models for publishing.

Before you upload a model to AWS, you may want to

  1. convert the model weights to CPU tensors,

  2. delete the optimizer states, and

  3. compute the hash of the checkpoint file and append the hash id to the filename.

python tools/analysis/ ${INPUT_FILENAME} ${OUTPUT_FILENAME}


python tools/analysis/ work_dirs/dff_faster_rcnn_r101_dc5_1x_imagenetvid/latest.pth dff_faster_rcnn_r101_dc5_1x_imagenetvid.pth

The final output filename will be dff_faster_rcnn_r101_dc5_1x_imagenetvid_20201230-{hash id}.pth.
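The three steps can be sketched roughly as follows. This is a simplified stand-in, not the actual script: it uses `pickle` in place of `torch.load`/`torch.save` (a real checkpoint would also need its tensors mapped to CPU first), and the `publish_model` name is hypothetical.

```python
import hashlib
import os
import pickle

def publish_model(in_file, out_file):
    """Strip optimizer states and append a content hash to the filename.

    pickle stands in for torch.load/torch.save here; a real checkpoint
    would also need its tensors moved to CPU (step 1).
    """
    with open(in_file, 'rb') as f:
        checkpoint = pickle.load(f)
    checkpoint.pop('optimizer', None)  # step 2: drop optimizer states
    with open(out_file, 'wb') as f:
        pickle.dump(checkpoint, f)
    # step 3: hash the saved file and append the short hash id
    with open(out_file, 'rb') as f:
        sha = hashlib.sha256(f.read()).hexdigest()
    final_file = out_file.replace('.pth', f'-{sha[:8]}.pth')
    os.rename(out_file, final_file)
    return final_file
```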


Model Serving

To serve an MMTracking model with TorchServe, follow the steps below:

1. Convert model from MMTracking to TorchServe

python tools/torchserve/ ${CONFIG_FILE} ${CHECKPOINT_FILE} \
--output-folder ${MODEL_STORE} \
--model-name ${MODEL_NAME}

${MODEL_STORE} needs to be an absolute path to a folder.
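Since a relative path will not work here, a small guard like the hypothetical `resolve_model_store` helper below can normalize the path (and create the folder) before running the converter.

```python
import os

def resolve_model_store(path):
    """Return an absolute model-store path, creating the folder if needed."""
    store = os.path.abspath(os.path.expanduser(path))
    os.makedirs(store, exist_ok=True)
    return store
```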

2. Build mmtrack-serve docker image

docker build -t mmtrack-serve:latest docker/serve/

3. Run mmtrack-serve

Check the official docs for running TorchServe with docker.

To run on GPUs, you need to install nvidia-docker. You can omit the --gpus argument to run on the CPU.


docker run --rm \
--cpus 8 \
--gpus device=0 \
-p8080:8080 -p8081:8081 -p8082:8082 \
--mount type=bind,source=$MODEL_STORE,target=/home/model-server/model-store \
mmtrack-serve:latest

Read the docs about the Inference (8080), Management (8081) and Metrics (8082) APIs.
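As a quick smoke test before sending real requests, TorchServe's inference API exposes a /ping health-check route. The hypothetical `ping_torchserve` helper below simply reports whether the server answers as healthy.

```python
import json
import urllib.error
import urllib.request

def ping_torchserve(host='127.0.0.1', port=8080, timeout=2.0):
    """Return True if TorchServe's inference API answers /ping as Healthy."""
    url = f'http://{host}:{port}/ping'
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            # TorchServe replies with {"status": "Healthy"} when up
            return json.load(resp).get('status') == 'Healthy'
    except (urllib.error.URLError, OSError):
        return False
```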

4. Test deployment

curl http://127.0.0.1:8080/predictions/${MODEL_NAME} -T demo/demo.mp4 -o result.mp4

The response will be a “.mp4” video.

You can visualize the output as follows:

import cv2

cap = cv2.VideoCapture('result.mp4')
fps = cap.get(cv2.CAP_PROP_FPS)
while cap.isOpened():
    flag, frame = cap.read()
    if not flag:
        break
    cv2.imshow('result.mp4', frame)
    if cv2.waitKey(int(1000 / fps)) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
You can also use the following script to compare the results of TorchServe and PyTorch, and to visualize them.

python tools/torchserve/ ${VIDEO_FILE} ${CONFIG_FILE} ${CHECKPOINT_FILE} ${MODEL_NAME}
[--inference-addr ${INFERENCE_ADDR}] [--result-video ${RESULT_VIDEO}] [--device ${DEVICE}]
[--score-thr ${SCORE_THR}]


python tools/torchserve/ \
demo/demo.mp4 \
configs/vid/selsa/ \
checkpoint/selsa_faster_rcnn_r101_dc5_1x_imagenetvid_20201218_172724-aa961bcc.pth \
selsa