Update app1.py

app1.py CHANGED
@@ -157,6 +157,47 @@ if query:
     # Debugging: Check extracted context
     st.write("Extracted Context (page_content):", context)
     st.write("Number of Extracted Contexts:", len(context))
-
-
-
+
+    relevancy_prompt = """You are an expert judge tasked with evaluating whether EACH CONTEXT provided in the CONTEXT LIST is self-sufficient to answer the QUERY asked.
+Analyze the provided QUERY AND CONTEXT to determine if each content in the CONTEXT LIST contains relevant information to answer the QUERY.
+
+Guidelines:
+1. The content must not introduce new information beyond what's provided in the QUERY.
+2. Pay close attention to the subject of statements. Ensure that attributes, actions, or dates are correctly associated with the right entities (e.g., a person vs. a TV show they star in).
+3. Be vigilant for subtle misattributions or conflations of information, even if the date or other details are correct.
+4. Check that the content in the CONTEXT LIST doesn't oversimplify or generalize information in a way that changes the meaning of the QUERY.
+
+Analyze the text thoroughly and assign a relevancy score 0 or 1, where:
+- 0: The content has all the necessary information to answer the QUERY
+- 1: The content does not have the necessary information to answer the QUERY
+
+```
+EXAMPLE:
+INPUT (for context only, not to be used for faithfulness evaluation):
+What is the capital of France?
+
+CONTEXT:
+['France is a country in Western Europe. Its capital is Paris, which is known for landmarks like the Eiffel Tower.',
+'Mr. Naveen Patnaik has been the chief minister of Odisha for 5 consecutive terms']
+
+OUTPUT:
+The Context has sufficient information to answer the query.
+
+RESPONSE:
+{{"score":0}}
+```
+
+CONTEXT LIST:
+{context}
+
+QUERY:
+{retriever_query}
+Provide your verdict in JSON format with keys 'content', 'score', and 'Reasoning', and no preamble or explanation:
+[{{"content": 1, "score": <your score, either 0 or 1>, "Reasoning": <why you chose the score as 0 or 1>}},
+{{"content": 2, "score": <your score, either 0 or 1>, "Reasoning": <why you chose the score as 0 or 1>}},
+...]
+"""
+
+    context_relevancy_checker_prompt = PromptTemplate(input_variables=["retriever_query", "context"], template=relevancy_prompt)
+
+
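For context, a minimal sketch of how the new template might be invoked downstream. None of this is part of the commit: the `check_context_relevancy` helper, the `llm` object, and the `json.loads` parsing are assumptions for illustration; only `PromptTemplate.format` and the Runnable `.invoke` call are standard LangChain APIs. Note that the doubled braces in the template (e.g. `{{"score":0}}`) are LangChain's escape for literal braces, so only `{context}` and `{retriever_query}` are treated as input variables.

```python
# Illustrative sketch -- not part of this commit. Assumes `llm` is any
# LangChain chat or completion model configured elsewhere in app1.py.
import json

from langchain.prompts import PromptTemplate

def check_context_relevancy(llm, prompt: PromptTemplate, query: str, contexts: list) -> list:
    """Fill the template, call the model, and parse the JSON verdict list."""
    formatted = prompt.format(retriever_query=query, context=contexts)
    raw = llm.invoke(formatted)
    # Chat models return a message object; completion models return a plain string.
    text = raw.content if hasattr(raw, "content") else raw
    # The prompt requests e.g. [{"content": 1, "score": 0, "Reasoning": "..."}]
    return json.loads(text)
```

In the Streamlit flow this would look something like `verdicts = check_context_relevancy(llm, context_relevancy_checker_prompt, query, context)` followed by `st.write(verdicts)`.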
|