
GPT-SoVITS-WebUI

A Powerful Few-shot Voice Conversion and Text-to-Speech WebUI.

[![madewithlove](https://img.shields.io/badge/made_with-%E2%9D%A4-red?style=for-the-badge&labelColor=orange)](https://github.com/RVC-Boss/GPT-SoVITS)
[![Open In Colab](https://img.shields.io/badge/Colab-F9AB00?style=for-the-badge&logo=googlecolab&color=525252)](https://colab.research.google.com/github/RVC-Boss/GPT-SoVITS/blob/main/colab_webui.ipynb) [![Licence](https://img.shields.io/badge/LICENSE-MIT-green.svg?style=for-the-badge)](https://github.com/RVC-Boss/GPT-SoVITS/blob/main/LICENSE) [![Huggingface](https://img.shields.io/badge/🤗%20-Models%20Repo-yellow.svg?style=for-the-badge)](https://huggingface.co/lj1995/GPT-SoVITS/tree/main) [**English**](./README.md) | [**中文简体**](./docs/cn/README.md) | [**日本語**](./docs/ja/README.md) | [**한국어**](./docs/ko/README.md)

Check out our demo video here!

Unseen speakers few-shot fine-tuning demo:

https://github.com/RVC-Boss/GPT-SoVITS/assets/129054828/05bee1fa-bdd8-4d85-9350-80c060ab47fb

If you are in the China region, you can use AutoDL Cloud Docker to experience the full functionality online: https://www.codewithgpu.com/i/RVC-Boss/GPT-SoVITS/GPT-SoVITS-Official

Features:

  1. Zero-shot TTS: Input a 5-second vocal sample and experience instant text-to-speech conversion.

  2. Few-shot TTS: Fine-tune the model with just 1 minute of training data for improved voice similarity and realism.

  3. Cross-lingual Support: Inference in languages different from the training dataset, currently supporting English, Japanese, and Chinese.

  4. WebUI Tools: Integrated tools include voice accompaniment separation, automatic training set segmentation, Chinese ASR, and text labeling, assisting beginners in creating training datasets and GPT/SoVITS models.

Environment Preparation

If you are a Windows user (tested on Windows 10 and above), you can install directly via the prezip: just download it, unzip it, and double-click go-webui.bat to start GPT-SoVITS-WebUI.

Tested Environments

  • Python 3.9, PyTorch 2.0.1, CUDA 11
  • Python 3.10.13, PyTorch 2.1.2, CUDA 12.3
  • Python 3.9, PyTorch 2.3.0.dev20240122, macOS 14.3 (Apple silicon, GPU)

Note: numba==0.56.4 requires Python < 3.11.

Quick Install with Conda

conda create -n GPTSoVits python=3.9
conda activate GPTSoVits
bash install.sh

Install Manually

Pip Packages

pip install -r requirements.txt

FFmpeg

Conda Users

conda install ffmpeg

Ubuntu/Debian Users

sudo apt install ffmpeg
sudo apt install libsox-dev
conda install -c conda-forge 'ffmpeg<7'

MacOS Users

brew install ffmpeg
Windows Users

Download and place ffmpeg.exe and ffprobe.exe in the GPT-SoVITS root.

Pretrained Models

Download pretrained models from GPT-SoVITS Models and place them in GPT_SoVITS/pretrained_models.

For UVR5 (Vocals/Accompaniment Separation & Reverberation Removal, optional), download models from UVR5 Weights and place them in tools/uvr5/uvr5_weights.

Users in the China region can download these two models by visiting the links and clicking "Download a copy".

For Chinese ASR (also optional), download models from Damo ASR Model, Damo VAD Model, and Damo Punc Model and place them in tools/damo_asr/models.

For Mac Users

If you are a Mac user, make sure you meet the following conditions for training and inferencing with GPU:

  • Mac computers with Apple silicon or AMD GPUs
  • macOS 12.3 or later
  • Xcode command-line tools installed by running xcode-select --install

Other Macs can do inference with CPU only.

Then install by using the following commands:

Create Environment

conda create -n GPTSoVits python=3.9
conda activate GPTSoVits

Install Requirements

pip install -r requirements.txt
pip uninstall torch torchaudio
pip3 install --pre torch torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu

Using Docker

docker-compose.yaml configuration

  1. Regarding image tags: Due to rapid updates in the codebase and the slow process of packaging and testing images, please check Docker Hub for the currently packaged latest images and select one as appropriate, or build locally from the Dockerfile according to your own needs.
  2. Environment variables:
  • is_half: Controls half precision (fp16) vs. full precision (fp32). This is typically the cause if the content under the directories 4-cnhubert/5-wav32k is not generated correctly during the "SSL extracting" step. Set it to True or False according to your actual situation.
  3. Volumes configuration: The application's root directory inside the container is set to /workspace. The default docker-compose.yaml lists some practical examples for uploading/downloading content.
  4. shm_size: The default shared memory available to Docker Desktop on Windows is too small, which can cause operations to fail. Adjust it according to your own situation.
  5. Under the deploy section, GPU-related settings should be adjusted cautiously according to your system and actual circumstances.
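Putting the points above together, a minimal docker-compose.yaml might look like the sketch below. The image tag, host paths, and shm_size are placeholders; check Docker Hub for real tags and adjust volumes to your own directories:

```yaml
version: "3.8"
services:
  gpt-sovits:
    image: breakstring/gpt-sovits:latest   # placeholder tag; check Docker Hub
    environment:
      - is_half=False                      # half precision on/off (see above)
    volumes:
      - ./output:/workspace/output
      - ./logs:/workspace/logs
      - ./SoVITS_weights:/workspace/SoVITS_weights
    working_dir: /workspace
    ports:
      - "9880:9880"
      - "9871:9871"
      - "9872:9872"
      - "9873:9873"
      - "9874:9874"
    shm_size: "16G"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```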

Running with docker compose

docker compose -f "docker-compose.yaml" up -d

Running with docker command

As above, modify the corresponding parameters based on your actual situation, then run the following command:

docker run --rm -it --gpus=all --env=is_half=False --volume=G:\GPT-SoVITS-DockerTest\output:/workspace/output --volume=G:\GPT-SoVITS-DockerTest\logs:/workspace/logs --volume=G:\GPT-SoVITS-DockerTest\SoVITS_weights:/workspace/SoVITS_weights --workdir=/workspace -p 9880:9880 -p 9871:9871 -p 9872:9872 -p 9873:9873 -p 9874:9874 --shm-size="16G" -d breakstring/gpt-sovits:xxxxx

Dataset Format

The TTS annotation .list file format:

vocal_path|speaker_name|language|text

Language dictionary:

  • 'zh': Chinese
  • 'ja': Japanese
  • 'en': English

Example:

D:\GPT-SoVITS\xxx/xxx.wav|xxx|en|I like playing Genshin.
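As a quick sanity check on a dataset, a small script (a sketch, not part of the repo) can validate each annotation line against the vocal_path|speaker_name|language|text format:

```python
# Validate TTS annotation lines of the form:
#   vocal_path|speaker_name|language|text
VALID_LANGS = {"zh", "ja", "en"}

def parse_annotation(line: str) -> dict:
    """Split one .list line into its four fields; raise on malformed input."""
    parts = line.rstrip("\n").split("|", 3)  # the text itself may contain '|'
    if len(parts) != 4:
        raise ValueError(f"expected 4 fields, got {len(parts)}: {line!r}")
    vocal_path, speaker, lang, text = parts
    if lang not in VALID_LANGS:
        raise ValueError(f"unknown language code: {lang!r}")
    return {"vocal_path": vocal_path, "speaker": speaker, "lang": lang, "text": text}

record = parse_annotation("D:\\GPT-SoVITS\\xxx/xxx.wav|xxx|en|I like playing Genshin.")
print(record["lang"], record["text"])
```

Note that the split limit of 3 keeps any `|` characters inside the transcript text intact.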

Todo List

  • [ ] High Priority:

    • Localization in Japanese and English.
    • User guide.
    • Japanese and English dataset fine tune training.
  • [ ] Features:

    • Zero-shot voice conversion (5s) / few-shot voice conversion (1min).
    • TTS speaking speed control.
    • Enhanced TTS emotion control.
    • Experiment with changing SoVITS token inputs to probability distribution of vocabs.
    • Improve English and Japanese text frontend.
    • Develop tiny and larger-sized TTS models.
    • Colab scripts.
    • Try to expand the training dataset (2k hours -> 10k hours).
    • Better SoVITS base model (enhanced audio quality).
    • Model mixing.

(Optional) If needed, the command-line operations are described below.

Use the command line to open the WebUI for UVR5

python tools/uvr5/webui.py "<infer_device>" <is_half> <webui_port_uvr5>

If you can't open a browser, follow the format below for UVR processing; this uses mdxnet for audio processing.

python mdxnet.py --model --input_root --output_vocal --output_ins --agg_level --format --device --is_half_precision 

This is how the audio segmentation of the dataset is done using the command line

python audio_slicer.py \
    --input_path "<path_to_original_audio_file_or_directory>" \
    --output_root "<directory_where_subdivided_audio_clips_will_be_saved>" \
    --threshold <volume_threshold> \
    --min_length <minimum_duration_of_each_subclip> \
    --min_interval <shortest_time_gap_between_adjacent_subclips> \
    --hop_size <step_size_for_computing_volume_curve>
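To make the slicer parameters concrete, here is a toy sketch (not the repo's audio_slicer.py) of the underlying idea: compute a volume curve frame by frame with hop_size, then cut wherever the volume stays below threshold for at least min_interval frames:

```python
# Toy illustration of threshold-based slicing (hypothetical, simplified):
# a volume curve sampled every hop_size samples, cut at long silent runs.

def volume_curve(samples, hop_size):
    """Mean absolute amplitude per hop_size-sample frame."""
    return [
        sum(abs(s) for s in samples[i:i + hop_size]) / hop_size
        for i in range(0, len(samples) - hop_size + 1, hop_size)
    ]

def find_cuts(curve, threshold, min_interval):
    """Frame indices that start a silent run of at least min_interval frames."""
    cuts, run_start = [], None
    for i, v in enumerate(curve + [threshold + 1]):  # sentinel closes a trailing run
        if v < threshold:
            run_start = i if run_start is None else run_start
        else:
            if run_start is not None and i - run_start >= min_interval:
                cuts.append(run_start)
            run_start = None
    return cuts

# Loud / silent / loud signal: one cut expected at the silent stretch.
samples = [1.0] * 10 + [0.0] * 10 + [1.0] * 10
print(find_cuts(volume_curve(samples, 5), 0.5, 2))
```

The real tool additionally enforces min_length on the resulting clips and writes the audio out; this sketch only shows how threshold, min_interval, and hop_size interact.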

This is how dataset ASR processing is done using the command line (Chinese only):

python tools/damo_asr/cmd-asr.py "<Path to the directory containing input audio files>"

ASR processing for languages other than Chinese is performed through Faster_Whisper.

(No progress bars; GPU performance may cause delays.)

python ./tools/damo_asr/WhisperASR.py -i <input> -o <output> -f <file_name.list> -l <language>

A custom list save path is supported.

Credits

Special thanks to the following projects and contributors:

Thanks to all contributors for their efforts