update for regression test
README.md CHANGED

@@ -43,23 +43,24 @@ You can use the raw model for object detection. See the [model hub](https://hugg

The dataset MSCOCO2017 contains 118287 images for training and 5000 images for validation.

- Download COCO dataset and create directories in your code like this:
+ Download COCO dataset and create/mount directories in your code like this:
```plain
+ └── yolov8m
+     └── datasets
+         └── coco
+             ├── annotations
+             |   ├── instances_val2017.json
+             |   └── ...
+             ├── labels
+             |   ├── val2017
+             |   |   ├── 000000000139.txt
+             |   |   ├── 000000000285.txt
+             |   |   └── ...
+             ├── images
+             |   ├── val2017
+             |   |   ├── 000000000139.jpg
+             |   |   └── 000000000285.jpg
+             └── val2017.txt
```
1. put the val2017 image folder under the images directory or use a softlink
2. the labels folder and val2017.txt above are generated by **general_json2yolo.py**

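The softlink route in step 1 above can be scripted. A minimal sketch, assuming the COCO val2017 images were already extracted elsewhere; both paths below are placeholders, not paths taken from this repo:

```python
import os

# Placeholder locations -- adjust to wherever COCO was actually extracted and
# to the dataset tree expected above.
extracted_val2017 = "/data/coco/val2017"      # extracted COCO val2017 images
dataset_root = "./yolov8m/datasets/coco"      # root of the tree shown above

images_dir = os.path.join(dataset_root, "images")
os.makedirs(images_dir, exist_ok=True)

link_path = os.path.join(images_dir, "val2017")
if not os.path.exists(link_path):
    # Softlink instead of copying ~5000 validation images.
    # On Windows this may require Developer Mode or administrator rights.
    os.symlink(extracted_val2017, link_path, target_is_directory=True)
```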
@@ -87,8 +88,8 @@ for batch in dataset:
    im = preprocess(im)
    if len(im.shape) == 3:
        im = im[None]
-   outputs = onnx_model.run(None, {onnx_model.get_inputs()[0].name: im.cpu().numpy()})
-   outputs = [torch.tensor(item) for item in outputs]
+   outputs = onnx_model.run(None, {onnx_model.get_inputs()[0].name: im.permute(0, 2, 3, 1).cpu().numpy()})
+   outputs = [torch.tensor(item).permute(0, 3, 1, 2) for item in outputs]
    preds = post_process(outputs)
    preds = non_max_suppression(
        preds, 0.25, 0.7, agnostic=False, max_det=300, classes=None

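The two changed lines above swap the tensor layout around the ONNX call: the input is permuted from NCHW to NHWC before `onnx_model.run`, and the raw outputs are permuted back so `post_process` and `non_max_suppression` keep seeing NCHW tensors, presumably because the quantized model exported for the IPU takes NHWC data. A minimal standalone sketch of the same round trip; the model file name and the 640x640 input size are assumptions, only the permute pattern mirrors the diff:

```python
import onnxruntime as ort
import torch

# Assumed file name and resolution -- see the lead-in note above.
session = ort.InferenceSession("yolov8m.onnx", providers=["CPUExecutionProvider"])

im = torch.rand(1, 3, 640, 640)              # NCHW, as produced by preprocess()
nhwc = im.permute(0, 2, 3, 1).cpu().numpy()  # NCHW -> NHWC for the exported model

raw = session.run(None, {session.get_inputs()[0].name: nhwc})

# Convert the NHWC outputs back to NCHW so the downstream post_process() /
# non_max_suppression() code from the README continues to work unchanged.
outputs = [torch.tensor(item).permute(0, 3, 1, 2) for item in raw]
```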
@@ -105,12 +106,12 @@ for batch in dataset:

- Run inference for a single image
  ```python
-  python onnx_inference.py -m ./
+  python onnx_inference.py -m ./yolov8m.onnx -i /Path/To/Your/Image --ipu --provider_config /Path/To/Your/Provider_config
  ```
  *Note: __vaip_config.json__ is located in the setup package of Ryzen AI (refer to [Installation](#installation))*
- Test accuracy of the quantized model
  ```python
-  python onnx_eval.py -m ./
+  python onnx_eval.py -m ./yolov8m.onnx --ipu --provider_config /Path/To/Your/Provider_config
  ```

### Performance

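Both updated commands pass `--ipu` and `--provider_config`, which in the usual Ryzen AI flow select the VitisAI execution provider and hand it `vaip_config.json`. A sketch of how such a session is typically created; the function and argument names here are illustrative, not the actual interface of onnx_inference.py or onnx_eval.py:

```python
import onnxruntime as ort

def create_session(model_path: str, use_ipu: bool = False, provider_config: str = None):
    """Build an ONNX Runtime session, optionally targeting the Ryzen AI IPU."""
    if use_ipu:
        # Typical Ryzen AI setup: VitisAI EP configured by vaip_config.json.
        providers = ["VitisAIExecutionProvider"]
        provider_options = [{"config_file": provider_config}]
    else:
        providers = ["CPUExecutionProvider"]
        provider_options = None
    return ort.InferenceSession(model_path,
                                providers=providers,
                                provider_options=provider_options)

# Example mirroring the commands above (paths are placeholders):
# session = create_session("./yolov8m.onnx", use_ipu=True,
#                          provider_config="/Path/To/Your/vaip_config.json")
```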