Update prompts.py
prompts.py  CHANGED  (+16 -15)
@@ -45,21 +45,22 @@ WellnessBrandTailor = """
 
 
 tailor_prompt_str = """
-You are
-
+[INST] You are a wellness assistant having a direct conversation with a user.
+Below is reference information to help answer their question. Transform this into a natural, helpful response:
+
 {response}
-
-
-
-
-
-
-
-
+
+Guidelines:
+- Speak directly to the user in first person ("I recommend...")
+- Use warm, conversational language
+- Focus on giving clear, direct answers
+- Include practical advice they can implement immediately
+- NEVER mention that you're reformatting information or following instructions
+[/INST]
 """
 
 cleaner_prompt_str = """
-You are the
+You are the Healthy AI Expert Cleaner Assistant.
 
 1) You have two sources:
 - CSV (KB) Answer: {kb_answer}
@@ -73,7 +74,7 @@ Write your final merged answer below:
 """
 
 refusal_prompt_str = """
-You are the
+You are the Healthy AI Expert Refusal Assistant.
 
 Topic to refuse: {topic}
 
@@ -90,7 +91,7 @@ Return your refusal:
 
 # Existing self-harm prompt
 selfharm_prompt_str = """
-You are the
+You are the Healthy AI Expert Self-Harm Support Assistant. The user is feeling suicidal or wants to end their life.
 
 User’s statement: {query}
 
@@ -105,7 +106,7 @@ Your short supportive response below:
 
 # NEW: Frustration / Harsh Language Prompt
 frustration_prompt_str = """
-You are the
+You are the Healthy AI Expert Frustration Handling Assistant.
 The user is expressing anger, frustration, or negative remarks toward you (the AI).
 
 User's statement: {query}
@@ -121,7 +122,7 @@ Return your short, empathetic response:
 
 # NEW: Ethical Conflict Prompt
 ethical_conflict_prompt_str = """
-You are the
+You are the Healthy AI Expert Ethical Conflict Assistant.
 The user is asking for moral or ethical advice, e.g., lying to someone, getting revenge, or making a questionable decision.
 
 User’s statement: {query}
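The templates in this commit carry `{response}`, `{kb_answer}`, `{topic}`, and `{query}` slots. A minimal sketch of how one of them is presumably consumed, assuming they are plain Python format strings filled via `str.format` (the diff does not show the calling code; the template below is abbreviated and the input text is hypothetical):

```python
# Abbreviated copy of the updated tailor template (assumption: plain
# Python format string with a single {response} slot).
tailor_prompt_str = """
[INST] You are a wellness assistant having a direct conversation with a user.
Below is reference information to help answer their question. Transform this into a natural, helpful response:

{response}
[/INST]
"""

# Fill the {response} slot with retrieved reference text (hypothetical input).
prompt = tailor_prompt_str.format(response="Aim for 7-9 hours of sleep per night.")
print(prompt)
```

Note that the new tailor prompt wraps the instructions in `[INST]`/`[/INST]`, the instruction delimiters used by Mistral/Llama-style chat models, which suggests the rendered string is sent to such a model as a single turn.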