Update README.md
---
library_name: transformers
license: apache-2.0
datasets:
- intronhealth/afrispeech-200
language:
- en
base_model:
- HuggingFaceTB/SmolLM2-360M
pipeline_tag: text-to-speech
---

# YarnGPT

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6501bfe0493fd9c8c2e32402/vW3JDuZ1m8zyElUPFCcTI.png)

## Table of Contents

1. [Model Summary](#model-summary)
2. [Model Description](#model-description)
3. [Bias, Risks, and Limitations](#bias-risks-and-limitations)
   - [Recommendations](#recommendations)
4. [Speech Samples](#speech-samples)
5. [Training](#training)
6. [Future Improvements](#future-improvements)
7. [Citation](#citation)
8. [Credits & References](#credits--references)

## Model Summary

YarnGPT is a text-to-speech (TTS) model that synthesizes Nigerian-accented English using pure language modelling, without external adapters or complex architectures, offering high-quality, natural, and culturally relevant speech for diverse applications.

#### How to use (Colab)

The model can generate audio on its own, but it's better to prompt it with a voice. Twelve voices are supported by default (6 male and 6 female):

- zainab
- jude
- tayo
- remi
- idera
- regina
- chinenye
- umar
- osagie
- joke
- onye
- emma

(The names do not correlate to any tribe or accent.)

```python
# clone the YarnGPT repo to get access to the `audiotokenizer`
!git clone https://github.com/saheedniyi02/yarngpt.git

# install the necessary libraries
!pip install outetts==0.2.3 uroman

# import the required packages
import os
import re
import json
import torch
import inflect
import random
import uroman as ur
import numpy as np
import torchaudio
import IPython
from transformers import AutoModelForCausalLM, AutoTokenizer
from outetts.wav_tokenizer.decoder import WavTokenizer
from yarngpt.audiotokenizer import AudioTokenizer

# download the wavtokenizer weights and config (to encode and decode the audio)
!wget https://huggingface.co/novateur/WavTokenizer-medium-speech-75token/resolve/main/wavtokenizer_mediumdata_frame75_3s_nq1_code4096_dim512_kmeans200_attn.yaml
!wget https://huggingface.co/novateur/WavTokenizer-large-speech-75token/resolve/main/wavtokenizer_large_speech_320_24k.ckpt

# model path and wavtokenizer weight paths (these paths assume Google Colab;
# a different environment might save the weights to a different location)
hf_path = "saheedniyi/YarnGPT"
wav_tokenizer_config_path = "/content/wavtokenizer_mediumdata_frame75_3s_nq1_code4096_dim512_kmeans200_attn.yaml"
wav_tokenizer_model_path = "/content/wavtokenizer_large_speech_320_24k.ckpt"

# create the AudioTokenizer object
audio_tokenizer = AudioTokenizer(
    hf_path, wav_tokenizer_model_path, wav_tokenizer_config_path
)

# load the model weights
model = AutoModelForCausalLM.from_pretrained(hf_path, torch_dtype="auto").to(audio_tokenizer.device)

# your input text
text = "Uhm, so, what was the inspiration behind your latest project? Like, was there a specific moment where you were like, 'Yeah, this is it!' Or, you know, did it just kind of, uh, come together naturally over time?"

# create a prompt; the optional `speaker_name` parameter selects one of
# "idera", "emma", "onye", "jude", "osagie", "tayo", "zainab", "joke",
# "regina", "remi", "umar", "chinenye"; if no speaker is given, one is chosen at random
prompt = audio_tokenizer.create_prompt(text, "idera")

# tokenize the prompt
input_ids = audio_tokenizer.tokenize_prompt(prompt)

# generate output from the model; tune the `.generate` parameters as you wish
output = model.generate(
    input_ids=input_ids,
    temperature=0.1,
    repetition_penalty=1.1,
    max_length=4000,
)

# convert the output to "audio codes"
codes = audio_tokenizer.get_codes(output)

# convert the codes to audio
audio = audio_tokenizer.get_audio(codes)

# play the audio
IPython.display.Audio(audio, rate=24000)

# save the audio
torchaudio.save("audio.wav", audio, sample_rate=24000)
```
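
A minimal follow-up sketch, assuming the `model` and `audio_tokenizer` objects created above; the voice subset and output filenames here are illustrative, not part of the original example:

```python
# reuse `model` and `audio_tokenizer` from the snippet above to
# render the same sentence with a few of the default voices
for voice in ["idera", "jude", "zainab"]:
    prompt = audio_tokenizer.create_prompt("How are you today?", voice)
    input_ids = audio_tokenizer.tokenize_prompt(prompt)
    output = model.generate(
        input_ids=input_ids,
        temperature=0.1,
        repetition_penalty=1.1,
        max_length=4000,
    )
    codes = audio_tokenizer.get_codes(output)
    # decode the codes and save one file per voice
    audio = audio_tokenizer.get_audio(codes)
    torchaudio.save(f"audio_{voice}.wav", audio, sample_rate=24000)
```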

## Model Description

- **Developed by:** [Saheedniyi](https://linkedin.com/in/azeez-saheed)
- **Model type:** Text-to-Speech
- **Language(s) (NLP):** English → Nigerian-accented English
- **Finetuned from:** [HuggingFaceTB/SmolLM2-360M](https://huggingface.co/HuggingFaceTB/SmolLM2-360M)
- **Repository:** [YarnGPT GitHub repository](https://github.com/saheedniyi02/yarngpt)
- **Paper:** In progress.
- **Demo:** [Prompt YarnGPT notebook](#)

#### Uses

Generate Nigerian-accented English speech for experimental purposes.

#### Out-of-Scope Use

The model is not suitable for generating speech in languages other than English or in accents other than Nigerian English.

## Bias, Risks, and Limitations

The model may not capture the full diversity of Nigerian accents and could exhibit biases based on the training dataset. In addition, much of the text the model was trained on was automatically generated, which could affect performance.

#### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. Feedback and diverse training data contributions are encouraged.

## Speech Samples

Listen to samples generated by YarnGPT:

<div style="margin-top: 20px;">
  <table style="width: 100%; border-collapse: collapse;">
    <thead>
      <tr>
        <th style="border: 1px solid #ddd; padding: 8px; text-align: left; width: 40%;">Input</th>
        <th style="border: 1px solid #ddd; padding: 8px; text-align: left; width: 40%;">Audio</th>
        <th style="border: 1px solid #ddd; padding: 8px; text-align: left; width: 10%;">Notes</th>
      </tr>
    </thead>
    <tbody>
      <tr>
        <td style="border: 1px solid #ddd; padding: 8px;">Hello world! I am Saheed Azeez and I am excited to announce the release of his project, I have been gathering data and learning how to build Audio-based models over the last two months, but thanks to God, I have been able to come up with something</td>
        <td style="border: 1px solid #ddd; padding: 8px;">
          <audio controls style="width: 100%;">
            <source src="https://huggingface.co/saheedniyi/YarnGPT/resolve/main/audio/Sample_1.wav" type="audio/wav">
            Your browser does not support the audio element.
          </audio>
        </td>
        <td style="border: 1px solid #ddd; padding: 8px;">(temperature=0.1, repetition_penalty=1.1), voice: idera</td>
      </tr>
      <tr>
        <td style="border: 1px solid #ddd; padding: 8px;">Wizkid, Davido, Burna Boy perform at same event in Lagos. This event has sparked many reactions across social media, with fans and critics alike praising the artistes' performances and the rare opportunity to see the three music giants on the same stage.</td>
        <td style="border: 1px solid #ddd; padding: 8px;">
          <audio controls style="width: 100%;">
            <source src="https://huggingface.co/saheedniyi/YarnGPT/resolve/main/audio/Sample_2.wav" type="audio/wav">
            Your browser does not support the audio element.
          </audio>
        </td>
        <td style="border: 1px solid #ddd; padding: 8px;">(temperature=0.1, repetition_penalty=1.1), voice: jude</td>
      </tr>
      <tr>
        <td style="border: 1px solid #ddd; padding: 8px;">Since Nigeria became a republic in 1963, 14 individuals have served as head of state of Nigeria under different titles. The incumbent president Bola Tinubu is the nation's 16th head of state.</td>
        <td style="border: 1px solid #ddd; padding: 8px;">
          <audio controls style="width: 100%;">
            <source src="https://huggingface.co/saheedniyi/YarnGPT/resolve/main/audio/Sample_3.wav" type="audio/wav">
            Your browser does not support the audio element.
          </audio>
        </td>
        <td style="border: 1px solid #ddd; padding: 8px;">(temperature=0.1, repetition_penalty=1.1), voice: zainab; the model struggled to pronounce `in 1963`</td>
      </tr>
      <tr>
        <td style="border: 1px solid #ddd; padding: 8px;">I visited the President, who has shown great concern for the security of Plateau State, especially considering that just a year ago, our state was in mourning. The President’s commitment to addressing these challenges has been steadfast.</td>
        <td style="border: 1px solid #ddd; padding: 8px;">
          <audio controls style="width: 100%;">
            <source src="https://huggingface.co/saheedniyi/YarnGPT/resolve/main/audio/Sample_4.wav" type="audio/wav">
            Your browser does not support the audio element.
          </audio>
        </td>
        <td style="border: 1px solid #ddd; padding: 8px;">(temperature=0.1, repetition_penalty=1.1), voice: emma</td>
      </tr>
      <tr>
        <td style="border: 1px solid #ddd; padding: 8px;">Scientists have discovered a new planet that may be capable of supporting life!</td>
        <td style="border: 1px solid #ddd; padding: 8px;">
          <audio controls style="width: 100%;">
            <source src="https://huggingface.co/saheedniyi/YarnGPT/resolve/main/audio/Sample_5.wav" type="audio/wav">
            Your browser does not support the audio element.
          </audio>
        </td>
        <td style="border: 1px solid #ddd; padding: 8px;">(temperature=0.1, repetition_penalty=1.1), voice: onye</td>
      </tr>
    </tbody>
  </table>
</div>

## Training

#### Data

Trained on about 2,000 hours of Nigerian movies, podcasts, and open-source Nigerian audio, using the subtitle-audio pairs.

#### Preprocessing

Audio files were preprocessed and resampled to 24 kHz, then tokenized using [wavtokenizer](https://huggingface.co/novateur/WavTokenizer).
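
The exact preprocessing pipeline is not published; as a minimal sketch of the resampling step, assuming torchaudio and a hypothetical input file `clip.wav`:

```python
import torchaudio

# load a clip (hypothetical filename) and resample it to 24 kHz,
# the rate used by wavtokenizer and by the model's audio output
waveform, sample_rate = torchaudio.load("clip.wav")
waveform = torchaudio.functional.resample(waveform, orig_freq=sample_rate, new_freq=24000)
torchaudio.save("clip_24khz.wav", waveform, sample_rate=24000)
```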

#### Training Hyperparameters

- **Number of epochs:** 5
- **Batch size:** 4
- **Scheduler:** linear warmup for 4 epochs, then linear decay to zero over the last epoch
- **Optimizer:** AdamW (betas=(0.9, 0.95), weight_decay=0.01)
- **Learning rate:** 1e-3
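
As a rough sketch (not the released training code), these settings map onto standard PyTorch and `transformers` utilities roughly as follows; `model` and `steps_per_epoch` are assumed to come from the surrounding training setup:

```python
import torch
from transformers import get_linear_schedule_with_warmup

# optimizer with the hyperparameters listed above
optimizer = torch.optim.AdamW(
    model.parameters(), lr=1e-3, betas=(0.9, 0.95), weight_decay=0.01
)

# linear warmup over the first 4 of 5 epochs, then linear decay to zero
num_training_steps = steps_per_epoch * 5
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=steps_per_epoch * 4,
    num_training_steps=num_training_steps,
)
```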

#### Hardware

- **GPUs:** 1× A100 (Google Colab, ~50 hours)

#### Software

- **Training framework:** PyTorch

## Future Improvements

- Scaling up the model size and adding human-annotated/reviewed training data
- Wrapping the model in an API endpoint
- Adding support for local Nigerian languages
- Voice cloning
- Potential expansion into speech-to-speech assistant models

## Citation

#### BibTeX:

```bibtex
@misc{yarngpt2025,
  author = {Saheed Azeez},
  title = {YarnGPT: Nigerian-Accented English Text-to-Speech Model},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/saheedniyi/YarnGPT}
}
```

#### APA:

Saheed Azeez. (2025). YarnGPT: Nigerian-Accented English Text-to-Speech Model. Hugging Face. Available at: https://huggingface.co/saheedniyi/YarnGPT

## Credits & References

- [OuteAI/OuteTTS-0.2-500M](https://huggingface.co/OuteAI/OuteTTS-0.2-500M/)
- [WavTokenizer](https://github.com/jishengpeng/WavTokenizer)
- [CTC Forced Alignment](https://pytorch.org/audio/stable/tutorials/ctc_forced_alignment_api_tutorial.html)
- [Voicera](https://huggingface.co/Lwasinam/voicera)