|
We provide many useful tools under the `tools/` directory.
|
|
|
## Log Analysis |
|
|
|
You can plot loss/mAP curves given a training log file. Run `pip install seaborn` first to install the dependency. |
|
|
|
|
|
|
```shell |
|
python tools/analysis_tools/analyze_logs.py plot_curve ${JSON_LOGS} [--keys ${KEYS}] [--title ${TITLE}] [--legend ${LEGEND}] [--backend ${BACKEND}] [--style ${STYLE}] [--out ${OUT_FILE}] [--mode ${MODE}] [--interval ${INTERVAL}]
|
``` |
|
|
|
**Notice**: If the metric you want to plot is computed in the eval stage, you need to add the flag `--mode eval`. If you perform evaluation with an interval of `${INTERVAL}`, you need to add the argument `--interval ${INTERVAL}`.
|
|
|
Examples: |
|
|
|
- Plot the classification loss of some run. |
|
|
|
```shell |
|
python tools/analysis_tools/analyze_logs.py plot_curve log.json --keys loss_cls --legend loss_cls |
|
``` |
|
|
|
- Plot the classification and regression loss of some run, and save the figure to a PDF.
|
|
|
```shell |
|
python tools/analysis_tools/analyze_logs.py plot_curve log.json --keys loss_cls loss_bbox --out losses.pdf |
|
``` |
|
|
|
- Compare the bbox mAP of two runs in the same figure. |
|
|
|
```shell |
|
# compare PartA2 and SECOND on KITTI according to Car_3D_moderate_strict
|
python tools/analysis_tools/analyze_logs.py plot_curve tools/logs/PartA2.log.json tools/logs/second.log.json --keys KITTI/Car_3D_moderate_strict --legend PartA2 second --mode eval --interval 1 |
|
# compare PointPillars trained on car only and on 3 classes on KITTI according to Car_3D_moderate_strict
|
python tools/analysis_tools/analyze_logs.py plot_curve tools/logs/pp-3class.log.json tools/logs/pp.log.json --keys KITTI/Car_3D_moderate_strict --legend pp-3class pp --mode eval --interval 2 |
|
``` |
|
|
|
You can also compute the average training speed. |
|
|
|
```shell |
|
python tools/analysis_tools/analyze_logs.py cal_train_time log.json [--include-outliers] |
|
``` |
|
|
|
The output is expected to look like the following.
|
|
|
``` |
|
-----Analyze train time of work_dirs/some_exp/20190611_192040.log.json----- |
|
slowest epoch 11, average time is 1.2024 |
|
fastest epoch 1, average time is 1.1909 |
|
time std over epochs is 0.0028 |
|
average iter time: 1.1959 s/iter |
|
``` |
|
|
|
|
|
|
## Model Serving |
|
|
|
**Note**: This tool is still experimental. Currently only SECOND is supported for serving with [`TorchServe`](https://pytorch.org/serve/). We will support more models in the future.
|
|
|
In order to serve an `MMDetection3D` model with [`TorchServe`](https://pytorch.org/serve/), you can follow these steps:
|
|
|
### 1. Convert the model from MMDetection3D to TorchServe |
|
|
|
```shell |
|
python tools/deployment/mmdet3d2torchserve.py ${CONFIG_FILE} ${CHECKPOINT_FILE} \ |
|
--output-folder ${MODEL_STORE} \ |
|
--model-name ${MODEL_NAME} |
|
``` |
|
|
|
**Note**: `${MODEL_STORE}` needs to be an absolute path to a folder.
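
For example, converting the SECOND checkpoint used later in this section might look like the following (the output folder is illustrative; replace it with your own absolute path):

```shell
python tools/deployment/mmdet3d2torchserve.py \
  configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py \
  checkpoints/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth \
  --output-folder /home/user/model-store \
  --model-name second
```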
|
|
|
### 2. Build `mmdet3d-serve` docker image |
|
|
|
```shell |
|
docker build -t mmdet3d-serve:latest docker/serve/ |
|
``` |
|
|
|
### 3. Run `mmdet3d-serve` |
|
|
|
Check the official docs for [running TorchServe with docker](https://github.com/pytorch/serve/blob/master/docker/README.md#running-torchserve-in-a-production-docker-environment). |
|
|
|
To run it on the GPU, you need to install [nvidia-docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html). You can omit the `--gpus` argument to run on the CPU.
|
|
|
Example: |
|
|
|
```shell |
|
docker run --rm \ |
|
--cpus 8 \ |
|
--gpus device=0 \ |
|
-p8080:8080 -p8081:8081 -p8082:8082 \ |
|
--mount type=bind,source=$MODEL_STORE,target=/home/model-server/model-store \ |
|
mmdet3d-serve:latest |
|
``` |
|
|
|
[Read the docs](https://github.com/pytorch/serve/blob/072f5d088cce9bb64b2a18af065886c9b01b317b/docs/rest_api.md/) about the Inference (8080), Management (8081) and Metrics (8082) APIs.
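
Once the container is up, a quick sanity check is to hit TorchServe's standard health endpoint (a stock TorchServe API, not specific to MMDetection3D):

```shell
curl http://127.0.0.1:8080/ping
```

A healthy server responds with `{"status": "Healthy"}`.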
|
|
|
### 4. Test deployment |
|
|
|
You can use `test_torchserver.py` to compare the results from TorchServe and PyTorch.
|
|
|
```shell |
|
python tools/deployment/test_torchserver.py ${POINTS_FILE} ${CONFIG_FILE} ${CHECKPOINT_FILE} ${MODEL_NAME} \
    [--inference-addr ${INFERENCE_ADDR}] [--device ${DEVICE}] [--score-thr ${SCORE_THR}]
|
``` |
|
|
|
Example: |
|
|
|
```shell |
|
python tools/deployment/test_torchserver.py demo/data/kitti/kitti_000008.bin configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py checkpoints/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth second |
|
``` |
|
|
|
|
|
|
## Model Complexity |
|
|
|
You can use `tools/analysis_tools/get_flops.py` in MMDetection3D, a script adapted from [flops-counter.pytorch](https://github.com/sovrasov/flops-counter.pytorch), to compute the FLOPs and params of a given model. |
|
|
|
```shell |
|
python tools/analysis_tools/get_flops.py ${CONFIG_FILE} [--shape ${INPUT_SHAPE}] |
|
``` |
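
For example, a minimal invocation on a point-cloud model might look like this (config path reused from the serving section above; we leave `--shape` at its default):

```shell
python tools/analysis_tools/get_flops.py configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py
```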
|
|
|
You will get results like the following (the numbers vary by model and input shape).
|
|
|
```text |
|
============================== |
|
Input shape: (40000, 4) |
|
Flops: 5.78 GFLOPs |
|
Params: 953.83 k |
|
============================== |
|
``` |
|
|
|
**Note**: This tool is still experimental and we do not guarantee that the number is absolutely correct. You can use the result for simple comparisons, but double-check it before adopting it in technical reports or papers.
|
|
|
1. FLOPs are related to the input shape while parameters are not. The default |
|
input shape is (1, 40000, 4). |
|
2. Some operators are not counted into FLOPs like GN and custom operators. Refer to [`mmcv.cnn.get_model_complexity_info()`](https://github.com/open-mmlab/mmcv/blob/master/mmcv/cnn/utils/flops_counter.py) for details. |
|
3. We currently only support FLOPs calculation of single-stage models with single-modality input (point cloud or image). We will support two-stage and multi-modality models in the future. |
|
|
|
  |
|
|
|
## Model Conversion |
|
|
|
### RegNet model to MMDetection |
|
|
|
`tools/model_converters/regnet2mmdet.py` converts keys in pycls pretrained RegNet models to MMDetection style.
|
|
|
```shell |
|
python tools/model_converters/regnet2mmdet.py ${SRC} ${DST} [-h] |
|
``` |
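
For example (both filenames are illustrative):

```shell
python tools/model_converters/regnet2mmdet.py regnetx_800mf.pth regnetx_800mf_mmdet.pth
```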
|
|
|
### Detectron ResNet to PyTorch
|
|
|
`tools/detectron2pytorch.py` in MMDetection converts keys in the original Detectron pretrained ResNet models to PyTorch style.
|
|
|
```shell |
|
python tools/detectron2pytorch.py ${SRC} ${DST} ${DEPTH} [-h] |
|
``` |
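
For example, converting a Detectron ResNet-50 checkpoint might look like this (the filenames are illustrative; `${DEPTH}` is the ResNet depth):

```shell
python tools/detectron2pytorch.py R-50.pkl resnet50_pytorch.pth 50
```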
|
|
|
### Prepare a model for publishing |
|
|
|
`tools/model_converters/publish_model.py` helps users to prepare their model for publishing. |
|
|
|
Before you upload a model to AWS, you may want to |
|
|
|
1. convert model weights to CPU tensors |
|
2. delete the optimizer states and |
|
3. compute the hash of the checkpoint file and append the hash id to the |
|
filename. |
|
|
|
```shell |
|
python tools/model_converters/publish_model.py ${INPUT_FILENAME} ${OUTPUT_FILENAME} |
|
``` |
|
|
|
E.g., |
|
|
|
```shell |
|
python tools/model_converters/publish_model.py work_dirs/faster_rcnn/latest.pth faster_rcnn_r50_fpn_1x_20190801.pth |
|
``` |
|
|
|
The final output filename will be `faster_rcnn_r50_fpn_1x_20190801-{hash id}.pth`. |
|
|
|
|
|
|
## Dataset Conversion |
|
|
|
`tools/dataset_converters/` contains tools for converting datasets to other formats. Most of them convert datasets to pickle-based info files, e.g. for KITTI, nuScenes, and Lyft. The Waymo converter reorganizes Waymo raw data into a KITTI-style layout. You can refer to these converters for our approach to data format conversion; it is also convenient to adapt them into standalone scripts, as with the nuImages converter.
|
|
|
To convert the nuImages dataset into COCO format, please use the command below (a concrete invocation follows the option list):
|
|
|
```shell |
|
python -u tools/dataset_converters/nuimage_converter.py --data-root ${DATA_ROOT} --version ${VERSIONS} \ |
|
--out-dir ${OUT_DIR} --nproc ${NUM_WORKERS} --extra-tag ${TAG} |
|
``` |
|
|
|
- `--data-root`: the root of the dataset, defaults to `./data/nuimages`. |
|
- `--version`: the version of the dataset, defaults to `v1.0-mini`. To convert the full dataset, please use `--version v1.0-train v1.0-val v1.0-mini`.
|
- `--out-dir`: the output directory of annotations and semantic masks, defaults to `./data/nuimages/annotations/`. |
|
- `--nproc`: number of workers for data preparation, defaults to `4`. A larger number could reduce the preparation time, as images are processed in parallel.
|
- `--extra-tag`: extra tag of the annotations, defaults to `nuimages`. This can be used to separate annotations processed at different times for different studies.
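
For example, a full-dataset conversion with the defaults spelled out might look like this (the worker count is illustrative):

```shell
python -u tools/dataset_converters/nuimage_converter.py --data-root ./data/nuimages \
    --version v1.0-train v1.0-val v1.0-mini \
    --out-dir ./data/nuimages/annotations/ --nproc 8 --extra-tag nuimages
```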
|
|
|
For more details, please refer to the [doc](https://mmdetection3d.readthedocs.io/en/latest/data_preparation.html) for dataset preparation and the [README](https://github.com/open-mmlab/mmdetection3d/blob/main/configs/nuimages/README.md/) for the nuImages dataset.
|
|
|
|
|
|
## Miscellaneous |
|
|
|
### Print the entire config |
|
|
|
`tools/misc/print_config.py` prints the whole config verbatim, expanding all its |
|
imports. |
|
|
|
```shell |
|
python tools/misc/print_config.py ${CONFIG} [-h] [--options ${OPTIONS [OPTIONS...]}] |
|
``` |
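
For example (config path reused from the serving section):

```shell
python tools/misc/print_config.py configs/second/hv_second_secfpn_6x8_80e_kitti-3d-car.py
```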
|
|