Commit 25e9ef7 · Clement Vachet committed · 1 Parent(s): 5eda9a8

doc: add menu and deployment section
README.md CHANGED
@@ -14,18 +14,32 @@ short_description: Object detection via Gradio

Aim: AI-driven object detection (on COCO image dataset)

-
+Machine learning models:
+- facebook/detr-resnet-50,
+- facebook/detr-resnet-101,
+- hustvl/yolos-tiny,
+- hustvl/yolos-small

+### <b>Table of contents:</b>
+- [Execution via command line](#1-execution-via-command-line)
+- [Execution via User Interface ](#2-execution-via-user-interface)
+- [Execution via Gradio client API](#3-execution-via-gradio-client-api)
+- [Deployment on Hugging Face](#4-deployment-on-hugging-face)
+- [Deployment on Docker Hub](#5-deployment-on-docker-hub)
+
+
+## 1. Execution via command line
+
-### 1. Use of torch library
+### 1.1. Use of torch library
> python detect_torch.py

-### 2. Use of transformers library
+### 1.2. Use of transformers library
> python detect_transformers.py

-### 3. Use of HuggingFace pipeline library
+### 1.3. Use of HuggingFace pipeline library
> python detect_pipeline.py

-##
+## 2. Execution via User Interface
Use of Gradio library for web interface

Command line:

@@ -33,11 +47,11 @@ Command line:

<b>Note:</b> The Gradio app should now be accessible at http://localhost:7860

-##
+## 3. Execution via Gradio client API

<b>Note:</b> Use of existing Gradio server (running locally, in a Docker container, or in the cloud as a HuggingFace space or AWS)

-### 1. Creation of docker container
+### 3.1. Creation of docker container

Command lines:
> sudo docker build -t gradio-app .

@@ -46,6 +60,21 @@ Command lines:

The Gradio app should now be accessible at http://localhost:7860

-### 2. Direct inference via API
+### 3.2. Direct inference via API
Command line:
> python inference_API.py
+
+
+## 4. Deployment on Hugging Face
+
+This web application is available on Hugging Face, via a Gradio space
+
+URL: https://huggingface.co/spaces/cvachet/object_detection_gradio
+
+
+## 5. Deployment on Docker Hub
+
+This web application is available as a container on Docker Hub
+
+URL: https://hub.docker.com/r/cvachet/object-detection-gradio
+
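To make the command-line steps in the updated README more concrete: section 1.2 runs detection through the transformers library. A minimal sketch of what a script such as detect_transformers.py typically boils down to, using the standard DETR API with one of the models listed above (the image path and the 0.9 threshold are assumptions, not taken from the repository):

```python
# Sketch: DETR object detection with the transformers library.
# "image.jpg" and the 0.9 threshold are placeholder assumptions.
import torch
from PIL import Image
from transformers import DetrImageProcessor, DetrForObjectDetection

image = Image.open("image.jpg")

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# Convert raw model outputs into (label, score, box) predictions above a confidence threshold.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(
    outputs, target_sizes=target_sizes, threshold=0.9
)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```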
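Section 1.3 condenses the same workflow into a single call through the high-level pipeline API; a sketch of what detect_pipeline.py presumably does (again, the image path and threshold are assumptions):

```python
# Sketch: object detection via the transformers pipeline API.
from transformers import pipeline

detector = pipeline("object-detection", model="facebook/detr-resnet-50")

# The pipeline accepts a file path, URL, or PIL image and returns a list of
# {"score", "label", "box"} dictionaries.
for pred in detector("image.jpg", threshold=0.9):
    print(pred["label"], round(pred["score"], 3), pred["box"])
```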
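Section 2 wraps the detectors in a Gradio web interface served on port 7860. A minimal sketch of such an app, assuming an app.py entry point and a JSON output component; the real app may differ, for instance by letting the user pick among the models listed above:

```python
# Sketch: minimal Gradio front end around an object-detection pipeline.
import gradio as gr
from transformers import pipeline

detector = pipeline("object-detection", model="facebook/detr-resnet-50")

def detect(image):
    # Return the raw predictions; gr.JSON renders the list of dicts directly.
    return detector(image)

demo = gr.Interface(
    fn=detect,
    inputs=gr.Image(type="pil"),
    outputs=gr.JSON(),
    title="Object detection demo",
)

# Binding to 0.0.0.0:7860 matches the "http://localhost:7860" notes in the README.
demo.launch(server_name="0.0.0.0", server_port=7860)
```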
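Section 3.2 calls a running Gradio server programmatically instead of through the browser. A hedged sketch of such a client built on the gradio_client package; the endpoint name, argument list, and server URL are assumptions, and client.view_api() reports the real signature of the deployed app:

```python
# Sketch: query a running Gradio server through its client API.
from gradio_client import Client

# Point at a local server, or at the Space, e.g. Client("cvachet/object_detection_gradio").
client = Client("http://localhost:7860")

# Print the endpoints the app exposes and the parameters they expect.
client.view_api()

# Hypothetical call: the actual api_name and inputs depend on how the app is defined,
# and recent gradio_client versions expect gradio_client.handle_file("image.jpg") for file inputs.
result = client.predict("image.jpg", api_name="/predict")
print(result)
```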
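For section 5, the image published on Docker Hub can be pulled and run like any other container. Typical commands, with the port taken from the README's localhost:7860 notes (the exact run flags used for the published image are not shown in this diff):

> sudo docker pull cvachet/object-detection-gradio

> sudo docker run -p 7860:7860 cvachet/object-detection-gradio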