---
license: cc-by-nc-4.0
language:
- de
base_model:
- HKUSTAudio/Llasa-1B-Multilingual
---


# Llasa-1B-Multilingual-German

> This model was trained on top of [HKUSTAudio/Llasa-1B-Multilingual](https://huggingface.co/HKUSTAudio/Llasa-1B-Multilingual).

## Model Overview

This text-to-speech (TTS) model was trained on a custom dataset of **7,000 hours** of high-quality German audio, consisting of permissively licensed podcasts, lectures, and other OER material.

## Training Details

- **Base Model:** HKUSTAudio/Llasa-1B-Multilingual
- **Dataset:** A custom dataset comprising **7,000 hours** of audio.
- **Compute Resources:** Training was performed on **4x L40S GPUs**.
- **Raw Training Time:** Approximately **20 hours**, not including data preprocessing with XCodec2 (training had to be restarted after three crashes).

Huge thanks to Hugging Face for their generous GPU grant! 🤗


## 👨‍💻 Installation
First install the following pip packages:
```bash
pip install xcodec2
pip install torch==2.6.0 torchaudio
```
Install them in the two separate steps shown above! If you get an error mentioning "flex attention", make sure `torch==2.6.0` is installed. If you get a torchaudio error, update torchaudio so that its version matches torch 2.6.0.
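
A quick way to verify the environment before running the examples below (a minimal sketch; the exact torchaudio version paired with torch 2.6.0 depends on your platform):

```python
import torch
import torchaudio

# torch should report 2.6.0; torchaudio must be the matching build
print(torch.__version__, torchaudio.__version__)
assert torch.cuda.is_available(), "the examples below expect a CUDA GPU"
```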

## 🛠️ Usage
### 🎲 Random voice
A basic example using Hugging Face Transformers:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
import soundfile as sf

llasa_1b_german = 'MultiLlasa/Llasa-1B-Multilingual-German'

# Load the TTS model
tokenizer = AutoTokenizer.from_pretrained(llasa_1b_german)
model = AutoModelForCausalLM.from_pretrained(llasa_1b_german)
model.to('cuda')

# Load the XCodec2 model that decodes speech tokens back to a waveform
from xcodec2.modeling_xcodec2 import XCodec2Model
model_path = "HKUST-Audio/xcodec2"
Codec_model = XCodec2Model.from_pretrained(model_path)
Codec_model.cuda()

input_text = "\"Weißt du was, Hoppi\", sagte der weise Uhu, \"manchmal ist es gar nicht so wichtig, das Ende des Regenbogens zu finden. Das Schönste ist doch, dass wir alle zusammen dieses Abenteuer erleben!\""


def extract_speech_ids(speech_tokens_str):
    """Convert tokens like '<|s_12345|>' into integer ids."""
    speech_ids = []
    for token_str in speech_tokens_str:
        if token_str.startswith('<|s_') and token_str.endswith('|>'):
            num_str = token_str[4:-2]
            num = int(num_str)
            speech_ids.append(num)
        else:
            print(f"Unexpected token: {token_str}")
    return speech_ids

with torch.no_grad():
    formatted_text = f"<|TEXT_UNDERSTANDING_START|>{input_text}<|TEXT_UNDERSTANDING_END|>"

    chat = [
        {"role": "user", "content": "Convert the text to speech:" + formatted_text},
        {"role": "assistant", "content": "<|SPEECH_GENERATION_START|>"}
    ]

    input_ids = tokenizer.apply_chat_template(
        chat,
        tokenize=True,
        return_tensors='pt',
        continue_final_message=True
    )
    input_ids = input_ids.to('cuda')
    speech_end_id = tokenizer.convert_tokens_to_ids('<|SPEECH_GENERATION_END|>')

    outputs = model.generate(
        input_ids,
        max_length=2048,
        eos_token_id=speech_end_id,
        do_sample=True,
        top_p=1,
        temperature=0.8,
    )

    # Strip the prompt and the trailing <|SPEECH_GENERATION_END|> token
    generated_ids = outputs[0][input_ids.shape[1]:-1]
    speech_tokens = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
    speech_tokens = extract_speech_ids(speech_tokens)
    speech_tokens = torch.tensor(speech_tokens).cuda().unsqueeze(0).unsqueeze(0)
    gen_wav = Codec_model.decode_code(speech_tokens)

# XCodec2 outputs 16 kHz audio
sf.write("generation.wav", gen_wav[0, 0, :].cpu().numpy(), 16000)
```

### 🎯 Using a specific speaker

An example with a speaker reference for voice cloning:
```python
import torch
import torchaudio
import soundfile as sf
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# Reference audio and, optionally, its transcription
sample_audio_path = "male.wav"
sample_audio_text = None  # Set to None to transcribe the reference with Whisper
# Target text to synthesize in the reference speaker's voice
target_text = "Und apropos Spannungen und Unfälle, in Stuttgart gibt es auch einige Schlagzeilen. Die Polizei sucht Zeugen, nachdem in der Stadt mehrere Autoscheiben eingeschlagen wurden. Und gestern kam es im Stuttgarter Osten zu einer Verfolgungsjagd mit einer jungen BMW-Fahrerin, die vor einer Polizeistreife geflüchtet ist."
output_filename = "speaker_example.wav"


#### Do not edit below ####
llasa_model_name = "MultiLlasa/Llasa-1B-Multilingual-German"
tokenizer = AutoTokenizer.from_pretrained(llasa_model_name)
model = AutoModelForCausalLM.from_pretrained(llasa_model_name)
model.to("cuda")

# XCodec2 decodes speech tokens back to a waveform
from xcodec2.modeling_xcodec2 import XCodec2Model
codec_model_path = "HKUST-Audio/xcodec2"
Codec_model = XCodec2Model.from_pretrained(codec_model_path)
Codec_model.cuda()

# Whisper is only needed when no transcription of the reference is given
whisper_turbo_pipe = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3-turbo",
    torch_dtype=torch.float16,
    device="cuda",
)

def extract_speech_ids(speech_tokens_str_list):
    """Convert tokens like '<|s_12345|>' into integer ids."""
    speech_ids = []
    for token_str in speech_tokens_str_list:
        if token_str.startswith("<|s_") and token_str.endswith("|>"):
            num_str = token_str[4:-2]
            try:
                speech_ids.append(int(num_str))
            except ValueError:
                print("Error converting token:", token_str)
        else:
            print(f"Unexpected token: {token_str}")
    return speech_ids


waveform, sample_rate = torchaudio.load(sample_audio_path)

# Use at most the first 15 seconds of the reference
max_secs = 15
if waveform.shape[1] / sample_rate > max_secs:
    print("Trimming audio to the first 15 seconds.")
    waveform = waveform[:, : sample_rate * max_secs]
    # Pad 0.5 s of silence at the end
    waveform = torch.nn.functional.pad(
        waveform, (0, int(sample_rate * 0.5)), "constant", 0
    )

# Downmix multi-channel audio to mono
if waveform.shape[0] > 1:
    waveform = waveform.mean(dim=0, keepdim=True)

# XCodec2 expects 16 kHz input
if sample_rate != 16000:
    resampler = torchaudio.transforms.Resample(orig_freq=sample_rate,
                                               new_freq=16000)
    waveform = resampler(waveform)
    sample_rate = 16000

if sample_audio_text is None:
    print("Transcribing audio...")
    transcription = whisper_turbo_pipe(waveform[0].numpy())["text"].strip()
else:
    transcription = sample_audio_text

print("Transcription:", transcription)

if len(target_text) == 0:
    raise ValueError("Target text must be provided!")
elif len(target_text) > 500:
    print("Text is too long; trimming to the first 500 characters.")
    target_text = target_text[:500]

# Prepend the reference transcription so the model continues in the same voice
input_text = transcription + " " + target_text

formatted_text = f"<|TEXT_UNDERSTANDING_START|>{input_text}<|TEXT_UNDERSTANDING_END|>"

chat = [
    {"role": "user", "content": "Convert the text to speech:" + formatted_text},
    {"role": "assistant", "content": "<|SPEECH_GENERATION_START|>"}
]

input_ids = tokenizer.apply_chat_template(
    chat, tokenize=True, return_tensors="pt", continue_final_message=True
)
input_ids = input_ids.to("cuda")
speech_end_id = tokenizer.convert_tokens_to_ids("<|SPEECH_GENERATION_END|>")

with torch.no_grad():
    outputs = model.generate(
        input_ids,
        max_length=2048,
        eos_token_id=speech_end_id,
        do_sample=True,
        top_p=1,
        temperature=0.8,
    )

# Strip the prompt and the trailing <|SPEECH_GENERATION_END|> token
generated_ids = outputs[0][input_ids.shape[1] : -1]

raw_speech_tokens = tokenizer.batch_decode(generated_ids,
                                           skip_special_tokens=True)
speech_ids = extract_speech_ids(raw_speech_tokens)

if len(speech_ids) == 0:
    raise ValueError("No valid speech tokens were generated!")

speech_tokens_tensor = torch.tensor(speech_ids)\
    .cuda().unsqueeze(0).unsqueeze(0)

gen_wav = Codec_model.decode_code(speech_tokens_tensor).cpu().squeeze()

sf.write(output_filename, gen_wav.numpy(), 16000)
```


## Tips
- When using a reference speaker, audio glitches can occur. Increasing the temperature often yields better results; see the sketch below.
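
For example, starting from the `model.generate` call used in the scripts above (a minimal sketch; the values are suggestions, not tuned defaults):

```python
# Hypothetical retry with a hotter sampler: a higher temperature (and a
# slightly tighter top_p) adds variation that can break up glitchy outputs.
outputs = model.generate(
    input_ids,
    max_length=2048,
    eos_token_id=speech_end_id,
    do_sample=True,
    top_p=0.95,       # down from 1.0
    temperature=1.0,  # up from 0.8
)
```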

## License

This project is licensed under the [CC-BY-NC-4.0 license](https://creativecommons.org/licenses/by-nc/4.0/).

## Acknowledgments

- **Hugging Face:** Thanks for the grant that made this project possible.