|
# easy_ViTPose
|
<p align="center">
  <img src="https://user-images.githubusercontent.com/24314647/236082274-b25a70c8-9267-4375-97b0-eddf60a7dfc6.png" width=375> easy_ViTPose
</p>
|
|
|
## Accurate 2D human pose estimation, fine-tuned on the 25-keypoint COCO skeleton + feet
|
### Easy-to-use SOTA `ViTPose` [Y. Xu et al., 2022] models for fast inference.
|
|
|
This repository hosts only the model weights; refer to https://github.com/JunkyByte/easy_ViTPose for the actual code.
|
|
|
|
|
| Format | Path |
| :----: | :----: |
| TORCH | [Folder](https://huggingface.co/JunkyByte/easy_ViTPose/tree/main/torch) |
| ONNX | [Folder](https://huggingface.co/JunkyByte/easy_ViTPose/tree/main/onnx) |
| TENSORRT | [Folder](https://huggingface.co/JunkyByte/easy_ViTPose/tree/main/tensorrt) |
|
|
|
You can also download the YOLOv5 models:
|
| Models | Path |
| :----: | :----: |
| YOLOv5 | [Folder](https://huggingface.co/JunkyByte/easy_ViTPose/tree/main/yolov5) |
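To fetch a checkpoint programmatically rather than through the web UI, you can build its direct download URL from the repo name, the format folder, and the file name. A minimal sketch; the checkpoint filename `vitpose-b-coco_25.onnx` is a placeholder, check the linked folders for the actual names:

```python
# Sketch: resolving a direct download URL for an easy_ViTPose checkpoint
# hosted on the Hugging Face Hub. Only the repo id below comes from this
# model card; the example filename is hypothetical.
REPO_ID = "JunkyByte/easy_ViTPose"

def model_url(fmt: str, filename: str) -> str:
    """Return the Hub 'resolve' URL for a checkpoint inside a format folder."""
    return f"https://huggingface.co/{REPO_ID}/resolve/main/{fmt}/{filename}"

# Placeholder filename for illustration:
url = model_url("onnx", "vitpose-b-coco_25.onnx")
print(url)
```

Alternatively, `huggingface_hub.hf_hub_download(repo_id="JunkyByte/easy_ViTPose", filename="onnx/<name>")` downloads and caches the file for you, assuming the `huggingface_hub` package is installed.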
|
|
|
### License

The models are subject to the official ViTPose license; see https://github.com/ViTAE-Transformer/ViTPose/blob/main/LICENSE.