<div align="center">

<h1>GPT-SoVITS-WebUI</h1>
A Powerful Few-shot Voice Conversion and Text-to-Speech WebUI.<br><br>

[**GitHub**](https://github.com/RVC-Boss/GPT-SoVITS)

<img src="https://counter.seku.su/cmoe?name=gptsovits&theme=r34" /><br>

[**Open In Colab**](https://colab.research.google.com/github/RVC-Boss/GPT-SoVITS/blob/main/colab_webui.ipynb)
[**License**](https://github.com/RVC-Boss/GPT-SoVITS/blob/main/LICENSE)
[**Hugging Face**](https://huggingface.co/lj1995/GPT-SoVITS/tree/main)
[**Discord**](https://discord.gg/dnrgs5GHfG)

**English** | [**中文简体**](./docs/cn/README.md) | [**日本語**](./docs/ja/README.md) | [**한국어**](./docs/ko/README.md) | [**Türkçe**](./docs/tr/README.md)

</div>

---
## Features:

1. **Zero-shot TTS:** Input a 5-second vocal sample and experience instant text-to-speech conversion.
2. **Few-shot TTS:** Fine-tune the model with just 1 minute of training data for improved voice similarity and realism.
3. **Cross-lingual Support:** Inference in languages different from the training dataset, currently supporting English, Japanese, and Chinese.
4. **WebUI Tools:** Integrated tools include vocal/accompaniment separation, automatic training-set segmentation, Chinese ASR, and text labeling, assisting beginners in creating training datasets and GPT/SoVITS models.

**Check out our [demo video](https://www.bilibili.com/video/BV12g4y1m7Uw) here!**

Unseen speakers few-shot fine-tuning demo:

https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb

**User guide: [简体中文](https://www.yuque.com/baicaigongchang1145haoyuangong/ib3g1e) | [English](https://rentry.co/GPT-SoVITS-guide#/)**
## Installation

For users in the China region, you can [click here](https://www.codewithgpu.com/i/RVC-Boss/GPT-SoVITS/GPT-SoVITS-Official) to use AutoDL Cloud Docker to experience the full functionality online.

### Tested Environments

- Python 3.9, PyTorch 2.0.1, CUDA 11
- Python 3.10.13, PyTorch 2.1.2, CUDA 12.3
- Python 3.9, PyTorch 2.2.2, macOS 14.4.1 (Apple silicon)
- Python 3.9, PyTorch 2.2.2, CPU devices

_Note: numba==0.56.4 requires py<3.11_
### Windows

If you are a Windows user (tested with win>=10), you can [download the integrated package](https://huggingface.co/lj1995/GPT-SoVITS-windows-package/resolve/main/GPT-SoVITS-beta.7z?download=true) and double-click on _go-webui.bat_ to start GPT-SoVITS-WebUI.

Users in the China region can [download the package](https://www.icloud.com.cn/iclouddrive/030K8WjGJ9xMXhpzJVIMEWPzQ#GPT-SoVITS-beta0706fix1) by clicking the link and then selecting "Download a copy." (Log out if you encounter errors while downloading.)
### Linux

```bash
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
bash install.sh
```
### macOS

**Note: The models trained with GPUs on Macs result in significantly lower quality compared to those trained on other devices, so we are temporarily using CPUs instead.**

1. Install Xcode command-line tools by running `xcode-select --install`.
2. Install FFmpeg by running `brew install ffmpeg`.
3. Install the program by running the following commands:

```bash
conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
pip install -r requirements.txt
```
### Install Manually

#### Install FFmpeg

##### Conda Users

```bash
conda install ffmpeg
```

##### Ubuntu/Debian Users

```bash
sudo apt install ffmpeg
sudo apt install libsox-dev
conda install -c conda-forge 'ffmpeg<7'
```

##### Windows Users

Download and place [ffmpeg.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffmpeg.exe) and [ffprobe.exe](https://huggingface.co/lj1995/VoiceConversionWebUI/blob/main/ffprobe.exe) in the GPT-SoVITS root.

Install [Visual Studio 2022](https://visualstudio.microsoft.com/downloads/) (Korean TTS only)

##### MacOS Users

```bash
brew install ffmpeg
```
#### Install Dependencies

```bash
pip install -r requirements.txt
```
### Using Docker

#### docker-compose.yaml configuration

0. Regarding image tags: Due to rapid updates in the codebase and the slow process of packaging and testing images, please check [Docker Hub](https://hub.docker.com/r/breakstring/gpt-sovits) for the currently packaged latest images and select as per your situation, or alternatively, build locally using a Dockerfile according to your own needs.
1. Environment variables:
   - `is_half`: Controls half precision vs. full precision. This is typically the cause if the content under the directories `4-cnhubert`/`5-wav32k` is not generated correctly during the "SSL extracting" step. Adjust to True or False based on your actual situation.
2. Volumes configuration: The application's root directory inside the container is set to `/workspace`. The default docker-compose.yaml lists some practical examples for uploading/downloading content.
3. `shm_size`: The default available memory for Docker Desktop on Windows is too small, which can cause abnormal operation. Adjust according to your own situation.
4. Under the `deploy` section, GPU-related settings should be adjusted cautiously according to your system and actual circumstances.
#### Running with docker compose

```bash
docker compose -f "docker-compose.yaml" up -d
```
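To verify that the stack is up, a quick status and log check like the one below can help (a minimal sketch; the port numbers assume the mappings used in the `docker run` example in the next subsection, so adjust them to your compose file):

```bash
# Check that the GPT-SoVITS container is running and follow its logs
docker compose -f "docker-compose.yaml" ps
docker compose -f "docker-compose.yaml" logs -f

# The main WebUI is usually exposed on port 9874 (see the port mappings below),
# so it should be reachable at http://localhost:9874 once startup finishes.
```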
#### Running with docker command

As above, modify the corresponding parameters based on your actual situation, then run the following command:

```bash
docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-DockerTest\output:/workspace/output --volume=G:\GPT-SoVITS-DockerTest\logs:/workspace/logs --volume=G:\GPT-SoVITS-DockerTest\SoVITS_weights:/workspace/SoVITS_weights --workdir=/workspace -p 9880:9880 -p 9871:9871 -p 9872:9872 -p 9873:9873 -p 9874:9874 --shm-size="16G" -d breakstring/gpt-sovits:xxxxx
```
## Pretrained Models

Download pretrained models from [GPT-SoVITS Models](https://huggingface.co/lj1995/GPT-SoVITS) and place them in `GPT_SoVITS/pretrained_models`.

Download G2PW models from [G2PWModel_1.1.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/g2p/G2PWModel_1.1.zip), unzip and rename the folder to `G2PWModel`, and then place it in `GPT_SoVITS\text`. (Chinese TTS only)

For UVR5 (Vocals/Accompaniment Separation & Reverberation Removal, additionally), download models from [UVR5 Weights](https://huggingface.co/lj1995/VoiceConversionWebUI/tree/main/uvr5_weights) and place them in `tools/uvr5/uvr5_weights`.
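One possible way to fetch these from the command line is sketched below; it assumes the `huggingface_hub` CLI plus `wget`/`unzip` are available and that you run it from the repository root. The folder name inside the G2PW zip may differ, so rename it manually if needed.

```bash
# Hugging Face CLI (ships with huggingface_hub)
pip install -U "huggingface_hub[cli]"

# Main GPT/SoVITS pretrained models -> GPT_SoVITS/pretrained_models
# (downloads the whole model repo, including the v2 subfolder)
huggingface-cli download lj1995/GPT-SoVITS --local-dir GPT_SoVITS/pretrained_models

# UVR5 weights -> tools/uvr5/uvr5_weights
huggingface-cli download lj1995/VoiceConversionWebUI --include "uvr5_weights/*" --local-dir tools/uvr5

# G2PW models (Chinese TTS only) -> GPT_SoVITS/text/G2PWModel
wget https://paddlespeech.bj.bcebos.com/Parakeet/released_models/g2p/G2PWModel_1.1.zip
unzip G2PWModel_1.1.zip -d GPT_SoVITS/text/
# rename the extracted directory to G2PWModel if its name differs
```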
Users in the China region can download these models via the links below. For the iCloud links, click "Download a copy" (log out if you encounter errors while downloading).

- [GPT-SoVITS Models](https://www.icloud.com/iclouddrive/044boFMiOHHt22SNr-c-tirbA#pretrained_models)
- [UVR5 Weights](https://www.icloud.com.cn/iclouddrive/0bekRKDiJXboFhbfm3lM2fVbA#UVR5_Weights)
- [G2PWModel_1.1.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/g2p/G2PWModel_1.1.zip) (download, unzip and rename the folder to `G2PWModel`, then place it in `GPT_SoVITS\text`)

For Chinese ASR (additionally), download models from [Damo ASR Model](https://modelscope.cn/models/damo/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch/files), [Damo VAD Model](https://modelscope.cn/models/damo/speech_fsmn_vad_zh-cn-16k-common-pytorch/files), and [Damo Punc Model](https://modelscope.cn/models/damo/punc_ct-transformer_zh-cn-common-vocab272727-pytorch/files) and place them in `tools/asr/models`.

Or download the FunASR models from [FunASR Model](https://www.icloud.com/iclouddrive/0b52_7SQWYr75kHkPoPXgpeQA#models), unzip, and replace `tools/asr/models`. (Log out if you encounter errors while downloading.)

For English or Japanese ASR (additionally), download models from [Faster Whisper Large V3](https://huggingface.co/Systran/faster-whisper-large-v3) and place them in `tools/asr/models`. Also, [other models](https://huggingface.co/Systran) may have a similar effect with a smaller disk footprint.

Users in the China region can download this model via the links below:

- [Faster Whisper Large V3](https://www.icloud.com/iclouddrive/00bUEp9_mcjMq_dhHu_vrAFDQ#faster-whisper-large-v3) (click "Download a copy"; log out if you encounter errors while downloading)
- [Faster Whisper Large V3](https://hf-mirror.com/Systran/faster-whisper-large-v3) (Hugging Face mirror site)
## Dataset Format

The TTS annotation .list file format:

```
vocal_path|speaker_name|language|text
```

Language dictionary:

- 'zh': Chinese
- 'ja': Japanese
- 'en': English
- 'ko': Korean
- 'yue': Cantonese

Example:

```
D:\GPT-SoVITS\xxx/xxx.wav|xxx|en|I like playing Genshin.
```
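As a slightly larger illustration, a `.list` covering two speakers and two languages could look like this (all paths and names below are made up):

```
dataset/speaker_a/0001.wav|speaker_a|zh|今天天气不错。
dataset/speaker_b/0001.wav|speaker_b|en|The weather is nice today.
```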
## Finetune and inference

### Open WebUI

#### Integrated Package Users

Double-click `go-webui.bat` or use `go-webui.ps1`.

If you want to switch to V1, double-click `go-webui-v1.bat` or use `go-webui-v1.ps1`.

#### Others

```bash
python webui.py <language(optional)>
```

If you want to switch to V1, then

```bash
python webui.py v1 <language(optional)>
```

Or manually switch the version in the WebUI.
### Finetune

#### Path Auto-filling is now supported

1. Fill in the audio path
2. Slice the audio into small chunks
3. Denoise (optional)
4. ASR
5. Proofread the ASR transcriptions
6. Go to the next Tab, then finetune the model
### Open Inference WebUI

#### Integrated Package Users

Double-click `go-webui-v2.bat` or use `go-webui-v2.ps1`, then open the inference WebUI at `1-GPT-SoVITS-TTS/1C-inference`.

#### Others

```bash
python GPT_SoVITS/inference_webui.py <language(optional)>
```

Or

```bash
python webui.py
```

then open the inference WebUI at `1-GPT-SoVITS-TTS/1C-inference`.
## V2 Release Notes

New features:

1. Support Korean and Cantonese
2. An optimized text frontend
3. Pre-trained model extended from 2k hours to 5k hours
4. Improved synthesis quality for low-quality reference audio

[more details](https://github.com/RVC-Boss/GPT-SoVITS/wiki/GPT%E2%80%90SoVITS%E2%80%90v2%E2%80%90features-(%E6%96%B0%E7%89%B9%E6%80%A7))

Use v2 from the v1 environment (a combined sketch of these steps is shown below):

1. `pip install -r requirements.txt` to update some packages
2. Clone the latest code from github
3. Download the v2 pretrained models from [huggingface](https://huggingface.co/lj1995/GPT-SoVITS/tree/main/gsv-v2final-pretrained) and put them into `GPT_SoVITS\pretrained_models\gsv-v2final-pretrained`

Chinese v2 additional: download the G2PW models from [G2PWModel_1.1.zip](https://paddlespeech.bj.bcebos.com/Parakeet/released_models/g2p/G2PWModel_1.1.zip), unzip and rename the folder to `G2PWModel`, and then place it in `GPT_SoVITS\text`.
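A minimal sketch of those upgrade steps, assuming a git checkout and the `huggingface-cli` tool (adjust paths and remotes to your setup):

```bash
# 1. update dependencies
pip install -r requirements.txt

# 2. pull the latest code
git pull

# 3. fetch the v2 pretrained models into GPT_SoVITS/pretrained_models/gsv-v2final-pretrained
huggingface-cli download lj1995/GPT-SoVITS \
  --include "gsv-v2final-pretrained/*" \
  --local-dir GPT_SoVITS/pretrained_models
```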
## Todo List

- [x] **High Priority:**

  - [x] Localization in Japanese and English.
  - [x] User guide.
  - [x] Japanese and English dataset fine-tune training.

- [ ] **Features:**
  - [x] Zero-shot voice conversion (5s) / few-shot voice conversion (1min).
  - [x] TTS speaking speed control.
  - [ ] ~~Enhanced TTS emotion control.~~
  - [ ] Experiment with changing SoVITS token inputs to probability distribution of GPT vocabs (transformer latent).
  - [x] Improve English and Japanese text frontend.
  - [ ] Develop tiny and larger-sized TTS models.
  - [x] Colab scripts.
  - [ ] Try expanding the training dataset (2k hours -> 10k hours).
  - [x] Better SoVITS base model (enhanced audio quality).
  - [ ] Model mix.
## (Additional) Method for running from the command line

Use the command line to open the WebUI for UVR5:

```bash
python tools/uvr5/webui.py "<infer_device>" <is_half> <webui_port_uvr5>
```
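For example, following the argument order above with illustrative values (an NVIDIA GPU, half precision enabled, and port 9873; use the device, precision flag, and port that match your own config, and note that some versions may expect additional arguments):

```bash
python tools/uvr5/webui.py "cuda" True 9873
```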
<!-- If you can't open a browser, follow the format below for UVR processing. This uses mdxnet for audio processing.

```
python mdxnet.py --model --input_root --output_vocal --output_ins --agg_level --format --device --is_half_precision
``` -->
This is how the audio segmentation of the dataset is done using the command line:

```bash
python audio_slicer.py \
    --input_path "<path_to_original_audio_file_or_directory>" \
    --output_root "<directory_where_subdivided_audio_clips_will_be_saved>" \
    --threshold <volume_threshold> \
    --min_length <minimum_duration_of_each_subclip> \
    --min_interval <shortest_time_gap_between_adjacent_subclips> \
    --hop_size <step_size_for_computing_volume_curve>
```
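For instance, with purely illustrative values (tune the threshold and durations to your material):

```bash
python audio_slicer.py \
    --input_path "./raw/speaker_a.wav" \
    --output_root "./output/slices" \
    --threshold -34 \
    --min_length 4000 \
    --min_interval 300 \
    --hop_size 10
```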
This is how dataset ASR processing is done using the command line (Chinese only):

```bash
python tools/asr/funasr_asr.py -i <input> -o <output>
```
ASR processing for languages other than Chinese is performed through Faster Whisper.

(No progress bars; GPU performance may cause delays.)

```bash
python ./tools/asr/fasterwhisper_asr.py -i <input> -o <output> -l <language> -p <precision>
```
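For example, with illustrative values (English audio transcribed at half precision; pick the language code and precision your GPU supports):

```bash
python ./tools/asr/fasterwhisper_asr.py -i ./output/slices -o ./output/asr -l en -p float16
```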
A custom .list save path can be specified.
## Credits

Special thanks to the following projects and contributors:

### Theoretical Research

- [ar-vits](https://github.com/innnky/ar-vits)
- [SoundStorm](https://github.com/yangdongchao/SoundStorm/tree/master/soundstorm/s1/AR)
- [vits](https://github.com/jaywalnut310/vits)
- [TransferTTS](https://github.com/hcy71o/TransferTTS/blob/master/models.py#L556)
- [contentvec](https://github.com/auspicious3000/contentvec/)
- [hifi-gan](https://github.com/jik876/hifi-gan)
- [fish-speech](https://github.com/fishaudio/fish-speech/blob/main/tools/llama/generate.py#L41)

### Pretrained Models

- [Chinese Speech Pretrain](https://github.com/TencentGameMate/chinese_speech_pretrain)
- [Chinese-Roberta-WWM-Ext-Large](https://huggingface.co/hfl/chinese-roberta-wwm-ext-large)

### Text Frontend for Inference

- [paddlespeech zh_normalization](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/paddlespeech/t2s/frontend/zh_normalization)
- [LangSegment](https://github.com/juntaosun/LangSegment)
- [g2pW](https://github.com/GitYCC/g2pW)
- [pypinyin-g2pW](https://github.com/mozillazg/pypinyin-g2pW)
- [paddlespeech g2pw](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/paddlespeech/t2s/frontend/g2pw)

### WebUI Tools

- [ultimatevocalremovergui](https://github.com/Anjok07/ultimatevocalremovergui)
- [audio-slicer](https://github.com/openvpi/audio-slicer)
- [SubFix](https://github.com/cronrpc/SubFix)
- [FFmpeg](https://github.com/FFmpeg/FFmpeg)
- [gradio](https://github.com/gradio-app/gradio)
- [faster-whisper](https://github.com/SYSTRAN/faster-whisper)
- [FunASR](https://github.com/alibaba-damo-academy/FunASR)
Thanks to @Naozumi520 for providing the Cantonese training set and for guidance on Cantonese-related knowledge.
## Thanks to all contributors for their efforts

<a href="https://github.com/RVC-Boss/GPT-SoVITS/graphs/contributors" target="_blank">
  <img src="https://contrib.rocks/image?repo=RVC-Boss/GPT-SoVITS" />
</a>