LAP-DEV committed
Commit 119a0fb · verified · 1 Parent(s): fc2ac0e

Upload README.md

Files changed (1):
  1. README.md (+18 -19)
README.md CHANGED
@@ -26,8 +26,7 @@ A Gradio-based browser interface for [Whisper](https://github.com/openai/whisper
 - ## Run Locally
 
 ### Prerequisite
-To run this WebUI, you need to have `git`, `python` version 3.8 ~ 3.10, `FFmpeg`.<BR>
-If you're not using an Nvida GPU, or using a different `CUDA` version than 12.4, edit the `file requirements.txt` to match your environment.
+To run this WebUI, you need to have `git`, `python` version 3.8 ~ 3.10, and `FFmpeg`.<BR>If you're not using an Nvidia GPU, or are using a `CUDA` version other than 12.4, edit the file `requirements.txt` to match your environment.
 
 Please follow the links below to install the necessary software:
 - git : [https://git-scm.com/downloads](https://git-scm.com/downloads)
@@ -46,25 +45,25 @@ A Gradio-based browser interface for [Whisper](https://github.com/openai/whisper
 - ## Running with Docker
 
 1. Install and launch [Docker-Desktop](https://www.docker.com/products/docker-desktop/)
-
+
 2. Get the repository
-
-3. Build the image ( Image is about ~7GB)
-
-```sh
-docker compose build
-```
-
-4. Run the container
-
-```sh
-docker compose up
-```
-
+
+3. If needed, update the `docker-compose.yaml` to match your environment
+
+4. Docker commands:
+
+Build the image (the image is about 7 GB)
+```sh
+docker compose build
+```
+
+Run the container
+```sh
+docker compose up
+```
+
 5. Connect to the WebUI with your browser at `http://localhost:7860`
 
-Note: If needed, update the `docker-compose.yaml` to match your environment
-
 # VRAM Usages
 - This project is integrated with [faster-whisper](https://github.com/guillaumekln/faster-whisper) by default for better VRAM usage and transcription speed.<BR>According to faster-whisper, the efficiency of the optimized whisper model is as follows:
 | Implementation | Precision | Beam size | Time | Max. GPU memory | Max. CPU memory |
@@ -81,4 +80,4 @@ A Gradio-based browser interface for [Whisper](https://github.com/openai/whisper
 | medium | 769 M | `medium.en` | `medium` | ~5 GB | ~2x |
 | large | 1550 M | N/A | `large` | ~10 GB | 1x |
 
-Note: `.en` models are for English only, and you can use the `Translate to English` option from the other models
+Note: `.en` models are for English only; you can use the `Translate to English` option with the other models
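The prerequisite list in the README above (`git`, `python` 3.8 ~ 3.10, `FFmpeg`) can be verified up front with a small pre-flight script. This is an illustrative sketch, not part of the repository; it only checks for the standard CLIs the README links to and the Python version range it states.

```sh
# Illustrative pre-flight check for the README's prerequisites:
# git, Python 3.8 ~ 3.10, and FFmpeg. Not part of the repository.
for cmd in git python3 ffmpeg; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: missing"
  fi
done

# The README supports Python 3.8 ~ 3.10; warn if outside that range.
py_ver=$(python3 -c 'import sys; print("%d.%d" % sys.version_info[:2])')
case "$py_ver" in
  3.8|3.9|3.10) echo "Python $py_ver is in the supported range" ;;
  *) echo "Python $py_ver is outside the supported 3.8 ~ 3.10 range" ;;
esac
```

If `python3` or `ffmpeg` is reported missing, install it from the links in the README before cloning the repository.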