Koro33 committed · commit 8ebf733 · 1 parent: 7fba0ff

feat: :sparkles: add docker support

Files changed:
- .dockerignore +6 -0
- Dockerfile +22 -0
- README.md +18 -0
.dockerignore ADDED
@@ -0,0 +1,6 @@
+venv/
+ui/__pycache__/
+outputs/
+modules/__pycache__/
+models/
+modules/yt_tmp.wav
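These entries keep local artifacts (the virtualenv, Python bytecode caches, downloaded models, transcription outputs, and a temporary audio file) out of the Docker build context. A rough local sketch, not part of the commit, for gauging how much those paths would otherwise add to the context (paths taken from the list above; missing ones are simply skipped):

```sh
# Sum the sizes of the ignored paths; errors for paths that do not exist
# locally are suppressed.
du -sh venv ui/__pycache__ outputs modules/__pycache__ models modules/yt_tmp.wav 2>/dev/null
```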
Dockerfile ADDED
@@ -0,0 +1,22 @@
+FROM nvidia/cuda:12.3.2-cudnn9-runtime-ubuntu22.04 AS runtime
+
+VOLUME [ "/Whisper-WebUI/models" ]
+VOLUME [ "/Whisper-WebUI/outputs" ]
+
+RUN apt-get update && \
+    apt-get install -y curl ffmpeg git python3 python3-pip python3-venv && \
+    rm -rf /var/lib/apt/lists/* /var/cache/apt/archives/* && \
+    mkdir -p /Whisper-WebUI
+
+WORKDIR /Whisper-WebUI
+
+COPY . .
+
+RUN python3 -m venv venv && \
+    . venv/bin/activate && \
+    pip install --no-cache-dir -r requirements.txt
+
+ENV PATH="/Whisper-WebUI/venv/bin:$PATH"
+ENV LD_LIBRARY_PATH=/Whisper-WebUI/venv/lib64/python3.10/site-packages/nvidia/cublas/lib:/Whisper-WebUI/venv/lib64/python3.10/site-packages/nvidia/cudnn/lib
+
+ENTRYPOINT [ "python", "app.py" ]
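In this image the virtualenv's `bin/` is prepended to `PATH` and `LD_LIBRARY_PATH` points at the cuBLAS/cuDNN libraries installed into the venv, so the bare `python` in the `ENTRYPOINT` resolves to the venv interpreter. A minimal sanity-check sketch, assuming the image has been built as `whisper-webui:latest` (the tag used in the README section below):

```sh
# Override the entrypoint with a shell to confirm that `python` resolves to
# the venv interpreter and to print the CUDA library search path set above.
docker run --rm --entrypoint /bin/sh whisper-webui:latest -c \
  'command -v python && echo "$LD_LIBRARY_PATH"'
```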
README.md CHANGED
@@ -44,6 +44,24 @@ If you have satisfied the prerequisites listed above, you are now ready to start

And you can also run the project with command line arguments if you like by running `user-start-webui.bat`, see [wiki](https://github.com/jhj0517/Whisper-WebUI/wiki/Command-Line-Arguments) for a guide to arguments.

+## Using Docker
+
+1. build the image
+
+```sh
+docker build -t whisper-webui:latest .
+```
+
+2. run the container
+
+```sh
+docker run --gpus all -d \
+-v /path/to/models:/Whisper-WebUI/models \
+-v /path/to/outputs:/Whisper-WebUI/outputs \
+-p 7860:7860 \
+whisper-webui:latest --server_name 0.0.0.0 --server_port 7860
+```
+
# VRAM Usages
This project is integrated with [faster-whisper](https://github.com/guillaumekln/faster-whisper) by default for better VRAM usage and transcription speed.
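Not part of the commit, but a quick follow-up sketch for step 2: since the container is started detached (`-d`), you can check that it is running and follow its logs until the server reports the listening address (the container ID is whatever `docker ps` prints):

```sh
# List containers started from the image built above, then follow the logs
# of the one you want; replace <container-id> with the ID from `docker ps`.
docker ps --filter ancestor=whisper-webui:latest
docker logs -f <container-id>
```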