Spaces: Running on CPU Upgrade
change prompt
utils/__pycache__/generator.cpython-310.pyc
CHANGED
Binary files a/utils/__pycache__/generator.cpython-310.pyc and b/utils/__pycache__/generator.cpython-310.pyc differ
utils/generator.py
CHANGED
@@ -191,7 +191,9 @@ async def _call_llm(messages: list) -> str:
     try:
         # Use async invoke for better performance
         response = await chat_model.ainvoke(messages)
-
+        print(response)
+        return response.content
+        #return response.content.strip()
     except Exception as e:
         logging.exception(f"LLM generation failed with provider '{PROVIDER}' and model '{MODEL}': {e}")
         raise
@@ -222,7 +224,7 @@ def build_messages(question: str, context: str) -> list:
 * Do not just summarize each passage one by one. Group your summaries to highlight the key parts in the explanation.
 * Use bullet points and lists when it makes sense to improve readability.
 * You do not need to use every passage. Only use the ones that help answer the question.
-- Format your response properly:
+- Format your response properly: Use markdown formatting (bullet points, numbered lists, headers) to make your response clear and easy to read. Example: <br> for linebreaks
 
 Input Format:
 - Query: {query}