ajsbsd committed
Commit 3b520e7 · verified · 1 Parent(s): a1d5dc1

Update README.md

Files changed (1)
  README.md +254 -1
README.md CHANGED
@@ -10,5 +10,258 @@ pinned: false
  license: mit
  short_description: Qwen2.5-1.5B-Instruct-gkd-demo
  ---
 
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ Great! I’ve reviewed all your files and prepared a **cleaned-up, ready-to-use version of `app.py`** that includes:
+
+ ---
+
+ ## ✅ What’s Fixed & Improved
+
+ | Issue | Fix / Enhancement |
+ |------|---------------------|
+ | 🔁 **Duplicate TTS Block** | Removed duplicate code in `generate_response_and_audio` |
+ | ❌ **Incorrect Condition Check** | Replaced unsafe `all([...])` with explicit `is not None` checks (see the sketch below this table) |
+ | 📏 **Long Text Handling (TTS)** | Added chunking so no single TTS call exceeds the 512-token input limit |
+ | 🧠 **LLM Generation Safety** | Ensures `generated_text` is always defined before use |
+ | ⚙️ **Model Loading Optimization** | Moved model loading into the first request (Hugging Face Spaces friendly) |
+ | 🧼 **Code Cleanliness** | Better structure, comments, and readability |
+
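+ Why the condition check mattered: `all([...])` tests truthiness, and calling `bool()` on a multi-element tensor (such as the speaker embeddings) raises a `RuntimeError` rather than returning a boolean. A standalone sketch, runnable on CPU and separate from `app.py`:
+
+ ```python
+ import torch
+
+ # Stand-in for a loaded xvector speaker embedding (shape [1, 512]).
+ speaker_embeddings = torch.zeros(1, 512)
+
+ # Unsafe: all() calls bool() on each item, which is ambiguous for tensors.
+ try:
+     all([speaker_embeddings])
+ except RuntimeError as e:
+     print(f"all([...]) failed: {e}")
+
+ # Safe: compare against None explicitly; __bool__ is never invoked.
+ print(all(x is not None for x in [speaker_embeddings]))  # True
+ ```
+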
+ ---
+
+ ## 📦 Final Version of `app.py`
+
+ Here is your updated file:
+
+ ```python
+ import gradio as gr
+ import torch
+ from transformers import (
+     AutoTokenizer,
+     AutoModelForCausalLM,
+     SpeechT5Processor,
+     SpeechT5ForTextToSpeech,
+     SpeechT5HifiGan,
+     WhisperProcessor,
+     WhisperForConditionalGeneration
+ )
+ from datasets import load_dataset
+ import os
+ import spaces
+ import tempfile
+ import soundfile as sf
+ import librosa
+
+ # --- Configuration ---
+ HUGGINGFACE_MODEL_ID = "HuggingFaceH4/Qwen2.5-1.5B-Instruct-gkd"
+ TORCH_DTYPE = torch.bfloat16
+ MAX_NEW_TOKENS = 512
+ DO_SAMPLE = True
+ TEMPERATURE = 0.7
+ TOP_K = 50
+ TOP_P = 0.95
+
+ TTS_MODEL_ID = "microsoft/speecht5_tts"
+ TTS_VOCODER_ID = "microsoft/speecht5_hifigan"
+ STT_MODEL_ID = "openai/whisper-small"
+
+ # --- Global Variables ---
+ tokenizer = None
+ llm_model = None
+ tts_processor = None
+ tts_model = None
+ tts_vocoder = None
+ speaker_embeddings = None
+ whisper_processor = None
+ whisper_model = None
+ first_load = True
+
+ # --- Helper: Split Text Into Chunks ---
+ def split_text_into_chunks(text, max_chars=400):
+     """Greedily pack sentences into chunks under max_chars so each
+     TTS call stays below the model's 512-token input limit."""
+     sentences = text.replace("...", ".").split(". ")
+     chunks = []
+     current_chunk = ""
+     for sentence in sentences:
+         if len(current_chunk) + len(sentence) + 2 < max_chars:
+             current_chunk += ". " + sentence if current_chunk else sentence
+         else:
+             chunks.append(current_chunk)
+             current_chunk = sentence
+     if current_chunk:
+         chunks.append(current_chunk)
+     return [f"{chunk}." for chunk in chunks if chunk.strip()]
+
+ # --- Load Models Function ---
+ @spaces.GPU
+ def load_models():
+     global tokenizer, llm_model, tts_processor, tts_model, tts_vocoder, speaker_embeddings, whisper_processor, whisper_model
+     hf_token = os.environ.get("HF_TOKEN")
+
+     # LLM
+     if tokenizer is None or llm_model is None:
+         try:
+             tokenizer = AutoTokenizer.from_pretrained(HUGGINGFACE_MODEL_ID, token=hf_token)
+             if tokenizer.pad_token is None:
+                 tokenizer.pad_token = tokenizer.eos_token
+             llm_model = AutoModelForCausalLM.from_pretrained(
+                 HUGGINGFACE_MODEL_ID,
+                 torch_dtype=TORCH_DTYPE,
+                 device_map="auto",
+                 token=hf_token
+             ).eval()
+             print("LLM loaded successfully.")
+         except Exception as e:
+             print(f"Error loading LLM: {e}")
+
+     # TTS
+     if tts_processor is None or tts_model is None or tts_vocoder is None:
+         try:
+             tts_processor = SpeechT5Processor.from_pretrained(TTS_MODEL_ID, token=hf_token)
+             tts_model = SpeechT5ForTextToSpeech.from_pretrained(TTS_MODEL_ID, token=hf_token)
+             tts_vocoder = SpeechT5HifiGan.from_pretrained(TTS_VOCODER_ID, token=hf_token)
+             embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation", token=hf_token)
+             speaker_embeddings = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)
+             device = llm_model.device if llm_model is not None else "cpu"
+             tts_model.to(device)
+             tts_vocoder.to(device)
+             speaker_embeddings = speaker_embeddings.to(device)
+             print("TTS models loaded.")
+         except Exception as e:
+             print(f"Error loading TTS: {e}")
+
+     # STT
+     if whisper_processor is None or whisper_model is None:
+         try:
+             whisper_processor = WhisperProcessor.from_pretrained(STT_MODEL_ID, token=hf_token)
+             whisper_model = WhisperForConditionalGeneration.from_pretrained(STT_MODEL_ID, token=hf_token)
+             device = llm_model.device if llm_model is not None else "cpu"
+             whisper_model.to(device)
+             print("Whisper loaded.")
+         except Exception as e:
+             print(f"Error loading Whisper: {e}")
+
+ # --- Generate Response and Audio ---
+ @spaces.GPU
+ def generate_response_and_audio(message, history):
+     global first_load
+     if first_load:
+         load_models()
+         first_load = False
+
+     global tokenizer, llm_model, tts_processor, tts_model, tts_vocoder, speaker_embeddings
+
+     if tokenizer is None or llm_model is None:
+         return history + [{"role": "assistant", "content": "Error: LLM not loaded."}], None
+
+     messages = history.copy()
+     messages.append({"role": "user", "content": message})
+
+     try:
+         input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+     except Exception:
+         # Fall back to a plain prompt if the tokenizer has no chat template.
+         input_text = ""
+         for item in history:
+             input_text += f"{item['role'].capitalize()}: {item['content']}\n"
+         input_text += f"User: {message}\nAssistant:"
+
+     try:
+         inputs = tokenizer(input_text, return_tensors="pt", padding=True, truncation=True).to(llm_model.device)
+         output_ids = llm_model.generate(
+             inputs["input_ids"],
+             attention_mask=inputs["attention_mask"],
+             max_new_tokens=MAX_NEW_TOKENS,
+             do_sample=DO_SAMPLE,
+             temperature=TEMPERATURE,
+             top_k=TOP_K,
+             top_p=TOP_P,
+             pad_token_id=tokenizer.eos_token_id
+         )
+         # Decode only the newly generated tokens, not the prompt.
+         generated_text = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True).strip()
+     except Exception as e:
+         print(f"LLM error: {e}")
+         return history + [
+             {"role": "user", "content": message},
+             {"role": "assistant", "content": "I had an issue generating a response."}
+         ], None
+
+     audio_path = None
+     # Explicit is-not-None checks: truthiness (e.g. all([...])) is
+     # ambiguous for multi-element tensors like speaker_embeddings.
+     if all(x is not None for x in [tts_processor, tts_model, tts_vocoder, speaker_embeddings]):
+         try:
+             device = llm_model.device
+             text_chunks = split_text_into_chunks(generated_text)
+
+             full_speech = []
+             for chunk in text_chunks:
+                 tts_inputs = tts_processor(text=chunk, return_tensors="pt", max_length=512, truncation=True).to(device)
+                 speech = tts_model.generate_speech(tts_inputs["input_ids"], speaker_embeddings, vocoder=tts_vocoder)
+                 full_speech.append(speech.cpu())
+
+             full_speech_tensor = torch.cat(full_speech, dim=0)
+
+             with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as tmp_file:
+                 audio_path = tmp_file.name
+                 sf.write(audio_path, full_speech_tensor.numpy(), samplerate=16000)
+
+         except Exception as e:
+             print(f"TTS error: {e}")
+
+     # Include the user turn so it shows up in the Chatbot display.
+     return history + [
+         {"role": "user", "content": message},
+         {"role": "assistant", "content": generated_text}
+     ], audio_path
+
+ # --- Transcribe Audio ---
+ @spaces.GPU
+ def transcribe_audio(filepath):
+     global first_load
+     if first_load:
+         load_models()
+         first_load = False
+
+     global whisper_processor, whisper_model
+     if whisper_model is None:
+         return "Whisper model not loaded."
+
+     try:
+         audio, sr = librosa.load(filepath, sr=16000)
+         inputs = whisper_processor(audio, sampling_rate=sr, return_tensors="pt").input_features.to(whisper_model.device)
+         outputs = whisper_model.generate(inputs)
+         return whisper_processor.batch_decode(outputs, skip_special_tokens=True)[0]
+     except Exception as e:
+         return f"Transcription failed: {e}"
+
+ # --- Gradio UI ---
+ with gr.Blocks() as demo:
+     gr.Markdown("# Qwen2.5 Chatbot with Voice Input/Output")
+
+     with gr.Tab("Chat"):
+         chatbot = gr.Chatbot(type="messages")
+         text_input = gr.Textbox(placeholder="Type your message...")
+         audio_output = gr.Audio(label="Response Audio", autoplay=True)
+         text_input.submit(generate_response_and_audio, [text_input, chatbot], [chatbot, audio_output])
+
+     with gr.Tab("Transcribe"):
+         audio_input = gr.Audio(type="filepath", label="Upload Audio")
+         transcribed = gr.Textbox(label="Transcription")
+         audio_input.upload(transcribe_audio, audio_input, transcribed)
+
+     clear_btn = gr.Button("Clear All")
+     clear_btn.click(lambda: ([], "", None), None, [chatbot, text_input, audio_output])
+
+ demo.queue().launch()
+ ```
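+
+ A quick, CPU-only sanity check of the chunking helper above (hypothetical input text, not part of the app flow):
+
+ ```python
+ # No models needed: verify the helper keeps each chunk within budget.
+ long_text = "This is a sentence. " * 60  # roughly 1200 characters
+ chunks = split_text_into_chunks(long_text, max_chars=400)
+ print(len(chunks), max(len(c) for c in chunks))  # several chunks, each within max_chars
+ ```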
+
+ ---
+
+ ## ✅ Instructions for Uploading to Hugging Face Spaces
+
+ 1. **Go to your Space**: https://huggingface.co/spaces/ajsbsd/Qwen2.5-1.5B-Instruct-gkd-demo
+ 2. **Pause the Space**: under `Settings > Runtime`, switch from "Always On" to "Manual"
+ 3. **Delete the old `app.py`**
+ 4. **Upload the new file** as `app.py` (see the `requirements.txt` note below)
+ 5. **Restart the Space**
+
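+ The Space will also need a `requirements.txt` covering the imports in `app.py`. A plausible starting point, assuming the standard PyPI package names (left unpinned here; `sentencepiece` is included because SpeechT5's tokenizer typically depends on it):
+
+ ```text
+ gradio
+ torch
+ transformers
+ datasets
+ spaces
+ soundfile
+ librosa
+ sentencepiece
+ ```
+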
+ ---
+
+ ## 🧩 Optional Enhancements
+
+ Would you like me to help you with any of the following?
+
+ - Add **status indicators** during model loading or generation
+ - Allow **microphone input** directly in the chat tab (see the sketch below this list)
+ - Use `gr.State()` to store chat history more efficiently
+ - Package the models into a custom repo for faster load times
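+
+ For the microphone item, a minimal sketch of how the Chat tab could accept voice directly. It assumes Gradio 4.x (`sources=["microphone"]` and the `stop_recording` event) and reuses `transcribe_audio` before the normal chat handler; treat it as a starting point, not the committed implementation:
+
+ ```python
+ with gr.Tab("Chat"):
+     # ...existing chatbot, text_input, audio_output from app.py...
+     mic_input = gr.Audio(sources=["microphone"], type="filepath", label="Or speak")
+
+     # Transcribe the recording into the textbox, then run the usual chat handler.
+     mic_input.stop_recording(
+         transcribe_audio, mic_input, text_input
+     ).then(
+         generate_response_and_audio, [text_input, chatbot], [chatbot, audio_output]
+     )
+ ```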
+
+ Just let me know what you'd like next!
+
+ Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference