LAP-DEV committed (verified) · Commit 62c3596 · 1 parent: c202b1e

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED
@@ -27,7 +27,7 @@ A Gradio-based browser interface for [Whisper](https://github.com/openai/whisper)
 
 ### Prerequisite
 To run this WebUI, you need to have `git`, `python` version 3.8 ~ 3.10, `FFmpeg`.<BR>
-If you're not using an Nvidia GPU, or using a different `CUDA` version than 12.4, edit the **file requirements.txt** to match your environment.
+If you're not using an Nvidia GPU, or using a different `CUDA` version than 12.4, edit the `file requirements.txt` to match your environment.
 
 Please follow the links below to install the necessary software:
 - git : [https://git-scm.com/downloads](https://git-scm.com/downloads)
@@ -35,7 +35,7 @@ A Gradio-based browser interface for [Whisper](https://github.com/openai/whisper)
 - FFmpeg : [https://ffmpeg.org/download.html](https://ffmpeg.org/download.html)
 - CUDA : [https://developer.nvidia.com/cuda-downloads](https://developer.nvidia.com/cuda-downloads)
 
-After installing FFmpeg, make sure to **add** the `FFmpeg/bin` folder to your system **PATH**
+After installing `FFmpeg`, make sure to **add** the `FFmpeg/bin` folder to your system `PATH`
 
 ### Installation using the script files
 
@@ -63,7 +63,7 @@ A Gradio-based browser interface for [Whisper](https://github.com/openai/whisper)
 
 5. Connect to the WebUI with your browser at `http://localhost:7860`
 
-Note: If needed, update the **docker-compose.yaml** to match your environment
+Note: If needed, update the `docker-compose.yaml` to match your environment
 
 # VRAM Usages
 - This project is integrated with [faster-whisper](https://github.com/guillaumekln/faster-whisper) by default for better VRAM usage and transcription speed.<BR>According to faster-whisper, the efficiency of the optimized whisper model is as follows:
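The line changed in the first hunk tells users to edit `requirements.txt` when they have no Nvidia GPU or a CUDA version other than 12.4. The project's actual `requirements.txt` is not shown in this diff, so the excerpt below is a hypothetical sketch of how such an edit usually looks: switching the PyTorch wheel index via `--extra-index-url`.

```
# Hypothetical excerpt – not the project's actual requirements.txt.
# Assumed default: torch wheels built against CUDA 12.4
--extra-index-url https://download.pytorch.org/whl/cu124
torch

# No Nvidia GPU: point at the CPU-only wheel index instead
# --extra-index-url https://download.pytorch.org/whl/cpu

# Different CUDA version, e.g. 12.1:
# --extra-index-url https://download.pytorch.org/whl/cu121
```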
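For the `FFmpeg/bin` PATH step in the second hunk, one common way to persist the change is shown below; the install locations are examples and should be adjusted to wherever FFmpeg was actually unpacked.

```
# Linux/macOS: append to ~/.bashrc or ~/.zshrc (example path, adjust it)
export PATH="$PATH:/opt/ffmpeg/bin"

# Windows (cmd), example path; note that setx truncates values longer than 1024 characters
# setx PATH "%PATH%;C:\ffmpeg\bin"

# Either way, verify FFmpeg is reachable afterwards:
ffmpeg -version
```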
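The note about `docker-compose.yaml` in the third hunk typically means adjusting the published port or the GPU reservation. The sketch below is an assumption for illustration only: the service name, image, and overall structure are placeholders, not the project's actual compose file.

```yaml
# Hypothetical docker-compose.yaml excerpt; names and structure are assumptions.
services:
  whisper-webui:
    image: whisper-webui:latest    # placeholder image name
    ports:
      - "7860:7860"                # change the host-side port if 7860 is already taken
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia       # remove this devices block on machines without an Nvidia GPU
              count: 1
              capabilities: [gpu]
```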