Commit 43b9cba
Parent(s): 7eba3b4
use yi 6b, 6b-200k does not fit
app.py CHANGED
@@ -147,10 +147,10 @@ print("Downloading Mistral 7B Instruct")
 hf_hub_download(repo_id="TheBloke/Mistral-7B-Instruct-v0.1-GGUF", local_dir=".", filename="mistral-7b-instruct-v0.1.Q5_K_M.gguf")
 mistral_model_path="./mistral-7b-instruct-v0.1.Q5_K_M.gguf"
 
-print("Downloading Yi-6B
-#Yi-6B
-hf_hub_download(repo_id="TheBloke/Yi-6B-
-yi_model_path="./yi-6b
+print("Downloading Yi-6B")
+#Yi-6B
+hf_hub_download(repo_id="TheBloke/Yi-6B-GGUF", local_dir=".", filename="yi-6b.Q5_K_M.gguf")
+yi_model_path="./yi-6b.Q5_K_M.gguf"
 
 
 from llama_cpp import Llama
@@ -803,7 +803,7 @@ It relies on following models :
 Speech to Text : [Whisper-large-v2](https://sanchit-gandhi-whisper-large-v2.hf.space/) as an ASR model, to transcribe recorded audio to text. It is called through a [gradio client](https://www.gradio.app/docs/client).
 LLM Mistral : [Mistral-7b-instruct](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) as the chat model, GGUF Q5_K_M quantized version used locally via llama_cpp[huggingface_hub](TheBloke/Mistral-7B-Instruct-v0.1-GGUF).
 LLM Zephyr : [Zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) as the chat model. GGUF Q5_K_M quantized version used locally via llama_cpp from [huggingface.co/TheBloke](https://huggingface.co/TheBloke/zephyr-7B-beta-GGUF).
-LLM Yi : [Yi-6B
+LLM Yi : [Yi-6B](https://huggingface.co/01-ai/Yi-6B) as the chat model. GGUF Q5_K_M quantized version used locally via llama_cpp from [huggingface.co/TheBloke](https://huggingface.co/TheBloke/Yi-6B-GGUF).
 Text to Speech : [Coqui's XTTS V2](https://huggingface.co/spaces/coqui/xtts) as a Multilingual TTS model, to generate the chatbot answers. This time, the model is hosted locally.
 
 Note:
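
For context, the added lines follow the same download-then-load pattern app.py already uses for Mistral and Zephyr: fetch the Q5_K_M GGUF with hf_hub_download, keep the local path, and hand it to llama_cpp. The sketch below is a minimal illustration of that pattern; the n_ctx and n_gpu_layers values and the test prompt are assumptions for illustration, not values taken from app.py (the commit message suggests the 200K-context variant was dropped because it did not fit in memory).

# Minimal sketch of the download-and-load pattern; n_ctx, n_gpu_layers and the
# prompt below are illustrative assumptions, not values from app.py.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Same call as the added line in app.py: fetch the Q5_K_M quant of Yi-6B.
hf_hub_download(repo_id="TheBloke/Yi-6B-GGUF", local_dir=".", filename="yi-6b.Q5_K_M.gguf")
yi_model_path = "./yi-6b.Q5_K_M.gguf"

# Load the GGUF locally; a modest context window keeps memory use low
# (the 200K-context variant was too large to fit, per the commit message).
llm = Llama(model_path=yi_model_path, n_ctx=4096, n_gpu_layers=-1)

# Quick completion call to sanity-check the loaded model.
out = llm("Q: Name the four largest planets in the solar system. A:", max_tokens=64, stop=["Q:"])
print(out["choices"][0]["text"])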