Nathan Brake committed on
Commit 5d76917 · unverified · 1 Parent(s): 908e451

Test Harness Taking Shape (#10)


* First test case.

* Format

* unit test fix

* Fix tracing

* Re-arrange eval code

* Formatting

* typing comes to test cases

* format

pyproject.toml CHANGED
@@ -12,7 +12,7 @@ dependencies = [
     "arize-phoenix>=8.12.1",
     "fire",
     "loguru",
-    "mcp>=1.3.0",
+    "mcp==1.3.0",
     "pydantic",
     "smolagents[litellm,mcp,telemetry]>=1.10.0",
 ]
@@ -37,6 +37,7 @@ docs = [
 tests = [
     "pytest>=8,<9",
     "pytest-sugar>=0.9.6",
+    "debugpy>=1.8.13",
 ]

 # TODO maybe we don't want to keep this, or we want to swap this to Lumigator SDK
@@ -56,7 +57,13 @@ namespaces = false

 [tool.setuptools_scm]

+[dependency-groups]
+dev = [
+    "pre-commit>=4.1.0",
+]
+
 [project.scripts]
 surf-spot-finder = "surf_spot_finder.cli:main"
+surf-spot-finder-evaluate = "surf_spot_finder.evaluation.evaluate:main"
 # TODO maybe this would be lumigator
 start-phoenix = "phoenix.server.main:main"
src/surf_spot_finder/agents/prompts/smolagents.py ADDED
@@ -0,0 +1,176 @@
+# Copied from https://github.com/huggingface/smolagents/blob/main/src/smolagents/prompts/code_agent.yaml
+SYSTEM_PROMPT = """
+You are an expert assistant who can solve any task using code blobs. You will be given a task to solve as best you can.
+To do so, you have been given access to a list of tools: these tools are basically Python functions which you can call with code.
+To solve the task, you must plan forward to proceed in a series of steps, in a cycle of 'Thought:', 'Code:', and 'Observation:' sequences.
+
+At each step, in the 'Thought:' sequence, you should first explain your reasoning towards solving the task and the tools that you want to use.
+Then in the 'Code:' sequence, you should write the code in simple Python. The code sequence must end with '<end_code>' sequence.
+During each intermediate step, you can use 'print()' to save whatever important information you will then need.
+These print outputs will then appear in the 'Observation:' field, which will be available as input for the next step.
+In the end you have to return a final answer using the `final_answer` tool.
+
+Here are a few examples using notional tools:
+---
+Task: "Generate an image of the oldest person in this document."
+
+Thought: I will proceed step by step and use the following tools: `document_qa` to find the oldest person in the document, then `image_generator` to generate an image according to the answer.
+Code:
+```py
+answer = document_qa(document=document, question="Who is the oldest person mentioned?")
+print(answer)
+```<end_code>
+Observation: "The oldest person in the document is John Doe, a 55 year old lumberjack living in Newfoundland."
+
+Thought: I will now generate an image showcasing the oldest person.
+Code:
+```py
+image = image_generator("A portrait of John Doe, a 55-year-old man living in Canada.")
+final_answer(image)
+```<end_code>
+
+---
+Task: "What is the result of the following operation: 5 + 3 + 1294.678?"
+
+Thought: I will use python code to compute the result of the operation and then return the final answer using the `final_answer` tool
+Code:
+```py
+result = 5 + 3 + 1294.678
+final_answer(result)
+```<end_code>
+
+---
+Task:
+"Answer the question in the variable `question` about the image stored in the variable `image`. The question is in French.
+You have been provided with these additional arguments, that you can access using the keys as variables in your python code:
+{'question': 'Quel est l'animal sur l'image?', 'image': 'path/to/image.jpg'}"
+
+Thought: I will use the following tools: `translator` to translate the question into English and then `image_qa` to answer the question on the input image.
+Code:
+```py
+translated_question = translator(question=question, src_lang="French", tgt_lang="English")
+print(f"The translated question is {translated_question}.")
+answer = image_qa(image=image, question=translated_question)
+final_answer(f"The answer is {answer}")
+```<end_code>
+
+---
+Task:
+In a 1979 interview, Stanislaus Ulam discusses with Martin Sherwin about other great physicists of his time, including Oppenheimer.
+What does he say was the consequence of Einstein learning too much math on his creativity, in one word?
+
+Thought: I need to find and read the 1979 interview of Stanislaus Ulam with Martin Sherwin.
+Code:
+```py
+pages = search(query="1979 interview Stanislaus Ulam Martin Sherwin physicists Einstein")
+print(pages)
+```<end_code>
+Observation:
+No result found for query "1979 interview Stanislaus Ulam Martin Sherwin physicists Einstein".
+
+Thought: The query was maybe too restrictive and did not find any results. Let's try again with a broader query.
+Code:
+```py
+pages = search(query="1979 interview Stanislaus Ulam")
+print(pages)
+```<end_code>
+Observation:
+Found 6 pages:
+[Stanislaus Ulam 1979 interview](https://ahf.nuclearmuseum.org/voices/oral-histories/stanislaus-ulams-interview-1979/)
+
+[Ulam discusses Manhattan Project](https://ahf.nuclearmuseum.org/manhattan-project/ulam-manhattan-project/)
+
+(truncated)
+
+Thought: I will read the first 2 pages to know more.
+Code:
+```py
+for url in ["https://ahf.nuclearmuseum.org/voices/oral-histories/stanislaus-ulams-interview-1979/", "https://ahf.nuclearmuseum.org/manhattan-project/ulam-manhattan-project/"]:
+    whole_page = visit_webpage(url)
+    print(whole_page)
+    print("\n" + "="*80 + "\n")  # Print separator between pages
+```<end_code>
+Observation:
+Manhattan Project Locations:
+Los Alamos, NM
+Stanislaus Ulam was a Polish-American mathematician. He worked on the Manhattan Project at Los Alamos and later helped design the hydrogen bomb. In this interview, he discusses his work at
+(truncated)
+
+Thought: I now have the final answer: from the webpages visited, Stanislaus Ulam says of Einstein: "He learned too much mathematics and sort of diminished, it seems to me personally, it seems to me his purely physics creativity." Let's answer in one word.
+Code:
+```py
+final_answer("diminished")
+```<end_code>
+
+---
+Task: "Which city has the highest population: Guangzhou or Shanghai?"
+
+Thought: I need to get the populations for both cities and compare them: I will use the tool `search` to get the population of both cities.
+Code:
+```py
+for city in ["Guangzhou", "Shanghai"]:
+    print(f"Population {city}:", search(f"{city} population")
+```<end_code>
+Observation:
+Population Guangzhou: ['Guangzhou has a population of 15 million inhabitants as of 2021.']
+Population Shanghai: '26 million (2019)'
+
+Thought: Now I know that Shanghai has the highest population.
+Code:
+```py
+final_answer("Shanghai")
+```<end_code>
+
+---
+Task: "What is the current age of the pope, raised to the power 0.36?"
+
+Thought: I will use the tool `wiki` to get the age of the pope, and confirm that with a web search.
+Code:
+```py
+pope_age_wiki = wiki(query="current pope age")
+print("Pope age as per wikipedia:", pope_age_wiki)
+pope_age_search = web_search(query="current pope age")
+print("Pope age as per google search:", pope_age_search)
+```<end_code>
+Observation:
+Pope age: "The pope Francis is currently 88 years old."
+
+Thought: I know that the pope is 88 years old. Let's compute the result using python code.
+Code:
+```py
+pope_current_age = 88 ** 0.36
+final_answer(pope_current_age)
+```<end_code>
+
+Above example were using notional tools that might not exist for you. On top of performing computations in the Python code snippets that you create, you only have access to these tools:
+{%- for tool in tools.values() %}
+- {{ tool.name }}: {{ tool.description }}
+    Takes inputs: {{tool.inputs}}
+    Returns an output of type: {{tool.output_type}}
+{%- endfor %}
+
+{%- if managed_agents and managed_agents.values() | list %}
+You can also give tasks to team members.
+Calling a team member works the same as for calling a tool: simply, the only argument you can give in the call is 'task', a long string explaining your task.
+Given that this team member is a real human, you should be very verbose in your task.
+Here is a list of the team members that you can call:
+{%- for agent in managed_agents.values() %}
+- {{ agent.name }}: {{ agent.description }}
+{%- endfor %}
+{%- else %}
+{%- endif %}
+
+Here are the rules you should always follow to solve your task:
+1. Always provide a 'Thought:' sequence, and a 'Code:\n```py' sequence ending with '```<end_code>' sequence, else you will fail.
+2. Use only variables that you have defined!
+3. Always use the right arguments for the tools. DO NOT pass the arguments as a dict as in 'answer = wiki({'query': "What is the place where James Bond lives?"})', but use the arguments directly as in 'answer = wiki(query="What is the place where James Bond lives?")'.
+4. Take care to not chain too many sequential tool calls in the same code block, especially when the output format is unpredictable. For instance, a call to search has an unpredictable return format, so do not have another tool call that depends on its output in the same block: rather output results with print() to use them in the next block.
+5. Call a tool only when needed, and never re-do a tool call that you previously did with the exact same parameters.
+6. Don't name any new variable with the same name as a tool: for instance don't name a variable 'final_answer'.
+7. Never create any notional variables in our code, as having these in your logs will derail you from the true variables.
+8. You can use imports in your code, but only from the following list of modules: {{authorized_imports}}
+9. The state persists between code executions: so if in one step you've created variables or imported modules, these will all persist.
+10. Don't give up! You're in charge of solving the task, not providing directions to solve it.
+
+Now Begin! If you solve the task correctly, you will receive a reward of $1,000,000.
+""".strip()
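The `{{ tool.name }}` and `{%- for %}` placeholders above are Jinja-style template fields that smolagents fills in from the agent's tool list at runtime. As a rough sketch of that substitution (not part of this commit; the `FakeTool` stand-in, the example field values, and the direct use of `jinja2` here are assumptions for illustration only):

```py
from jinja2 import Template

from surf_spot_finder.agents.prompts.smolagents import SYSTEM_PROMPT


class FakeTool:
    """Stand-in object whose attributes mirror the fields the template reads."""
    name = "fetch"
    description = "Fetch the contents of a URL."
    inputs = {"url": {"type": "string", "description": "The URL to fetch"}}
    output_type = "string"


# Render the same placeholders the agent framework would populate.
rendered = Template(SYSTEM_PROMPT).render(
    tools={"fetch": FakeTool()},
    managed_agents={},
    authorized_imports="json, re, datetime",
)
print(rendered[:300])
```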
src/surf_spot_finder/agents/smolagents.py CHANGED
@@ -39,6 +39,7 @@ def run_smolagent(
         ToolCollection,
     )
     from mcp import StdioServerParameters
+    from surf_spot_finder.agents.prompts.smolagents import SYSTEM_PROMPT

     model = LiteLLMModel(
         model_id=model_id,
@@ -61,6 +62,7 @@ def run_smolagent(
             *tool_collection.tools,
             DuckDuckGoSearchTool(),
         ],
+        prompt_templates={"system_prompt": SYSTEM_PROMPT},
         model=model,
         add_base_tools=False,  # Turn this on if you want to let it run python code as it sees fit
     )
src/surf_spot_finder/config.py CHANGED
@@ -9,8 +9,10 @@ DEFAULT_PROMPT = (
     ", in a {MAX_DRIVING_HOURS} hour driving radius"
     ", at {DATE}? it is currently "
     + CURRENT_DATE
-    + ". find me the best surf spot and the"
-    " up to date weather forecast for that day."
+    + ". find me the best surf spot and also report back"
+    " on the expected water temperature and wave height."
+    " Please remember that doing a google/duckduckgo search may be useful for finding which sites are relevant,"
+    " but the final answer should be based on information retrieved from https://www.surf-forecast.com."
 )

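For reference, the updated `DEFAULT_PROMPT` is rendered with `str.format` by the evaluation harness added below (`run_agent` passes `LOCATION`, `MAX_DRIVING_HOURS`, and `DATE`). A minimal sketch with hypothetical input values:

```py
from surf_spot_finder.config import DEFAULT_PROMPT

# Hypothetical inputs; the same .format() call appears in run_agent() below.
prompt = DEFAULT_PROMPT.format(
    LOCATION="Vigo",
    MAX_DRIVING_HOURS=3,
    DATE="2025-03-15 22:00",
)
# The rendered prompt asks for the best spot plus the expected water temperature
# and wave height, sourced from https://www.surf-forecast.com.
print(prompt)
```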
src/surf_spot_finder/evaluation/__init__.py ADDED
File without changes
src/surf_spot_finder/evaluation/evaluate.py ADDED
@@ -0,0 +1,153 @@
+import json
+from textwrap import dedent
+from typing import Any, Dict, List, Optional
+from loguru import logger
+from fire import Fire
+from surf_spot_finder.agents.smolagents import run_smolagent
+from surf_spot_finder.config import (
+    DEFAULT_PROMPT,
+    Config,
+)
+from surf_spot_finder.tracing import get_tracer_provider, setup_tracing
+from surf_spot_finder.evaluation.utils import (
+    extract_hypothesis_answer,
+    verify_checkpoints,
+    verify_hypothesis_answer,
+)
+from surf_spot_finder.evaluation.test_case import TestCase
+
+
+def run_agent(test_case: TestCase) -> str:
+    input_data = test_case.input
+    logger.info("Loading config")
+    config = Config(
+        location=input_data.location,
+        date=input_data.date,
+        max_driving_hours=input_data.max_driving_hours,
+        model_id=input_data.model_id,
+        api_key_var=input_data.api_key_var,
+        prompt=DEFAULT_PROMPT,
+        json_tracer=input_data.json_tracer,
+        api_base=input_data.api_base,
+        agent_type=input_data.agent_type,
+    )
+    # project_name is a name + uuid
+    project_name = "surf-spot-finder"
+
+    logger.info("Setting up tracing")
+    tracer_provider, telemetry_path = get_tracer_provider(
+        project_name=project_name, json_tracer=config.json_tracer
+    )
+    setup_tracing(tracer_provider, agent_type=config.agent_type)
+    logger.info("Running agent")
+    run_smolagent(
+        model_id=config.model_id,
+        api_key_var=config.api_key_var,
+        api_base=config.api_base,
+        prompt=config.prompt.format(
+            LOCATION=config.location,
+            MAX_DRIVING_HOURS=config.max_driving_hours,
+            DATE=config.date,
+        ),
+    )
+    return telemetry_path
+
+
+def evaluate_telemetry(test_case: TestCase, telemetry_path: str) -> bool:
+    # load the json file
+    with open(telemetry_path, "r") as f:
+        telemetry: List[Dict[str, Any]] = json.loads(f.read())
+    logger.info(f"Telemetry loaded from {telemetry_path}")
+
+    # Extract the final answer from the telemetry
+    hypothesis_answer = extract_hypothesis_answer(telemetry)
+    logger.info(
+        dedent(f"""
+        Hypothesis Final answer extracted:
+        - {hypothesis_answer}
+        """)
+    )
+    # Verify agent behavior against checkpoints using llm-as-a-judge
+    llm_judge = "openai/gpt-4o"
+    checkpoint_results = verify_checkpoints(
+        telemetry,
+        hypothesis_answer,
+        test_case.checkpoints,
+        test_case.ground_truth,
+        llm_judge,
+    )
+
+    hypothesis_answer_results = verify_hypothesis_answer(
+        hypothesis_answer,
+        test_case.ground_truth,
+        test_case.final_answer_criteria,
+        llm_judge,
+    )
+    # Summarize results
+
+    verification_results = checkpoint_results + hypothesis_answer_results
+    all_passed = all(result["passed"] for result in verification_results)
+    failed_checks = [r for r in verification_results if not r["passed"]]
+    passed_checks = [r for r in verification_results if r["passed"]]
+    if passed_checks:
+        logger.info(
+            f"Passed checkpoints: {len(passed_checks)}/{len(verification_results)}"
+        )
+        for check in passed_checks:
+            message = dedent(
+                f"""
+                Passed:
+                - {check["criteria"]}
+                - {check["reason"]}
+                """
+            )
+            logger.info(message)
+    if failed_checks:
+        logger.error(
+            f"Failed checkpoints: {len(failed_checks)}/{len(verification_results)}"
+        )
+        for check in failed_checks:
+            message = dedent(
+                f"""
+                Failed:
+                - {check["criteria"]}
+                - {check["reason"]}
+                """
+            )
+            logger.error(message)
+    else:
+        logger.info("All checkpoints passed!")
+
+    return all_passed
+
+
+def evaluate(test_case_path: str, telemetry_path: Optional[str] = None) -> None:
+    """
+    Evaluate agent performance using either a provided telemetry file or by running the agent.
+
+    Args:
+        telemetry_path: Optional path to an existing telemetry file. If not provided,
+            the agent will be run to generate one.
+    """
+    test_case = TestCase.from_yaml(test_case_path)
+
+    if telemetry_path is None:
+        logger.info(
+            "No telemetry path provided. Running agent to generate telemetry..."
+        )
+        telemetry_path = run_agent(test_case)
+    else:
+        logger.info(f"Using provided telemetry file: {telemetry_path}")
+        logger.info(
+            "For this to work, the telemetry file must align with the test case."
+        )
+
+    evaluate_telemetry(test_case, telemetry_path)
+
+
+def main():
+    Fire(evaluate)
+
+
+if __name__ == "__main__":
+    main()
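A sketch of how the new evaluation entry point would be used. The telemetry file name below is hypothetical; it only follows the `{project_name}-{timestamp}.json` pattern from `tracing.py`:

```py
from surf_spot_finder.evaluation.evaluate import evaluate

# Run the agent and then judge the generated telemetry:
evaluate("src/surf_spot_finder/evaluation/test_cases/alpha.yaml")

# Or re-score an existing telemetry file without re-running the agent
# (the file must come from a run that matches the test case):
evaluate(
    "src/surf_spot_finder/evaluation/test_cases/alpha.yaml",
    telemetry_path="telemetry_output/surf-spot-finder-2025-03-16.json",  # hypothetical name
)
```

Once installed, the same call is available via the new `[project.scripts]` entry as `surf-spot-finder-evaluate <test_case_path>`, since `main()` simply wraps `evaluate` with python-fire.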
src/surf_spot_finder/evaluation/test_case.py ADDED
@@ -0,0 +1,63 @@
+from typing import Dict, List, Optional, Any
+from pydantic import BaseModel, Field, ConfigDict
+import yaml
+
+
+class InputModel(BaseModel):
+    """Input configuration for the surf spot finder test case"""
+
+    model_config = ConfigDict(extra="forbid")
+    location: str
+    date: str
+    max_driving_hours: int
+    model_id: str
+    api_key_var: str
+    json_tracer: bool
+    api_base: Optional[str] = None
+    agent_type: str
+
+
+class CheckpointCriteria(BaseModel):
+    """Represents a checkpoint criteria with a value and description"""
+
+    model_config = ConfigDict(extra="forbid")
+    value: int
+    criteria: str
+
+
+class TestCase(BaseModel):
+    model_config = ConfigDict(extra="forbid")
+
+    input: InputModel
+    ground_truth: Dict[str, Any]
+    checkpoints: List[CheckpointCriteria] = Field(default_factory=list)
+    final_answer_criteria: List[CheckpointCriteria] = Field(default_factory=list)
+
+    @classmethod
+    def from_yaml(cls, case_path: str) -> "TestCase":
+        """Load a test case from a YAML file and process it"""
+        with open(case_path, "r") as f:
+            test_case_dict = yaml.safe_load(f)
+
+        # Generate final_answer_criteria if not explicitly provided
+        if "final_answer_criteria" not in test_case_dict:
+            final_answer_criteria = []
+
+            def add_gt_final_answer_criteria(ground_truth_dict, prefix=""):
+                """Recursively add checkpoints for each value in the ground_truth dictionary"""
+                for key, value in ground_truth_dict.items():
+                    path = f"{prefix}: {key}" if prefix else key
+                    if isinstance(value, dict):
+                        add_gt_final_answer_criteria(value, path)
+                    else:
+                        final_answer_criteria.append(
+                            {
+                                "value": 1,
+                                "criteria": f"Check if {path} is approximately '{value}'.",
+                            }
+                        )
+
+            add_gt_final_answer_criteria(test_case_dict["ground_truth"])
+            test_case_dict["final_answer_criteria"] = final_answer_criteria
+
+        return cls.model_validate(test_case_dict)
src/surf_spot_finder/evaluation/test_cases/alpha.yaml ADDED
@@ -0,0 +1,22 @@
+# Test case for surf spot finder
+input:
+  location: "Vigo"
+  date: "2025-03-15 22:00"
+  max_driving_hours: 3
+  model_id: "openai/gpt-4o"
+  api_key_var: "OPENAI_API_KEY"
+  json_tracer: true
+  api_base: null
+  agent_type: "smolagents"
+
+ground_truth:
+  "Surf location": "Playa de Patos"
+  "Water temperature": "about 14°C +-5°C"
+  "Wave height": "about 1 meter"
+
+# Base checkpoints for agent behavior
+checkpoints:
+  - value: 1
+    criteria: "Check if the agent consulted DuckDuckGoSearchTool for locations near Vigo."
+  - value: 1
+    criteria: "Check if the agent fetched a website for forecasting, not relying on text from a DuckDuckGo search."
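Because this file sets no `final_answer_criteria`, loading it through `TestCase.from_yaml` (defined above) auto-generates one criterion per `ground_truth` leaf. A small sketch of that behavior:

```py
from surf_spot_finder.evaluation.test_case import TestCase

case = TestCase.from_yaml("src/surf_spot_finder/evaluation/test_cases/alpha.yaml")
for criterion in case.final_answer_criteria:
    print(criterion.criteria)
# Expected, derived from the ground_truth keys above:
#   Check if Surf location is approximately 'Playa de Patos'.
#   Check if Water temperature is approximately 'about 14°C +-5°C'.
#   Check if Wave height is approximately 'about 1 meter'.
```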
src/surf_spot_finder/evaluation/utils.py ADDED
@@ -0,0 +1,209 @@
+import json
+from typing import Dict, List, Any, Optional
+import re
+
+from litellm import completion
+
+from surf_spot_finder.evaluation.test_case import CheckpointCriteria
+
+
+def extract_hypothesis_answer(telemetry: List[Dict[str, Any]]) -> str | None:
+    """Extract the hypothesis agent final answer from the telemetry data"""
+    for span in reversed(telemetry):
+        if span.get("attributes", {}).get("openinference.span.kind") == "AGENT":
+            hypo = span.get("attributes", {}).get("output.value")
+            return hypo
+    raise ValueError("Final answer not found in telemetry")
+
+
+def evaluate_criterion(
+    criteria: str,
+    value: int,
+    ground_truth_output: List[CheckpointCriteria] | Dict[str, Any],
+    hypothesis_final_answer: str,
+    model: str,
+    evidence: Optional[str] = None,
+) -> Dict[str, Any]:
+    """Evaluate a single criterion using LLM"""
+
+    prompt = f"""
+    Evaluate if the following {"checkpoint" if evidence else "criterion"} was met {"based on the provided evidence" if evidence else "in the agent's answer"}.
+
+    {"Checkpoint" if evidence else "Criterion"}: {criteria}
+    Value: {value}
+
+    Expected output: {json.dumps(ground_truth_output)}
+
+    Agent's answer: {hypothesis_final_answer}
+    """
+
+    if evidence:
+        prompt += f"""
+
+    Telemetry evidence:
+    {evidence}
+    """
+
+    prompt += f"""
+
+    Based on the {"evidence" if evidence else "comparison between the expected output and the actual final answer"},
+    was this {"checkpoint" if evidence else "criterion"} satisfied? Answer with:
+    1. "passed": true or false
+    2. "reason": Brief explanation for your decision
+    3. "score": A score from 0 to {value} indicating how well the {"checkpoint" if evidence else "criterion"} was met
+    """
+    prompt += """
+    Output valid JSON with these three fields only, in the format:
+    ```json
+    {
+        "passed": true,
+        "reason": "I have them",
+        "score": 1
+    }
+    ```
+    """
+
+    response = completion(model=model, messages=[{"role": "user", "content": prompt}])
+
+    content = response.choices[0].message.content
+    try:
+        # Extract JSON from the response - looks for patterns like ```json {...} ``` or just {...}
+        # Claude helped me with this one, regex is hard
+        json_match = re.search(
+            r"```(?:json)?\s*(\{.*?\})\s*```|(\{.*?\})", content, re.DOTALL
+        )
+
+        if json_match:
+            # Use the first matching group that captured content
+            json_str = next(group for group in json_match.groups() if group)
+            evaluation = json.loads(json_str)
+        else:
+            # Fallback: try parsing the whole content as JSON
+            evaluation = json.loads(content)
+
+        evaluation["criteria"] = criteria
+        evaluation["value"] = value
+        return evaluation
+    except (json.JSONDecodeError, AttributeError, StopIteration) as e:
+        return {
+            "passed": False,
+            "reason": f"Failed to evaluate due to parsing: {str(e)} \n Response: {content}",
+            "score": 0,
+            "criteria": criteria,
+            "value": value,
+        }
+
+
+def verify_checkpoints(
+    telemetry: List[Dict[str, Any]],
+    hypothesis_final_answer: str,
+    checkpoints: List[CheckpointCriteria],
+    ground_truth_checkpoints: List[CheckpointCriteria],
+    model: str,
+) -> List[Dict[str, Any]]:
+    """Verify each checkpoint against the telemetry data using LLM"""
+    results = []
+
+    for checkpoint in checkpoints:
+        criteria = checkpoint.criteria
+        value = checkpoint.value
+        evidence = extract_relevant_evidence(telemetry, criteria)
+
+        evaluation = evaluate_criterion(
+            criteria=criteria,
+            value=value,
+            ground_truth_output=ground_truth_checkpoints,
+            hypothesis_final_answer=hypothesis_final_answer,
+            model=model,
+            evidence=evidence,
+        )
+
+        results.append(evaluation)
+
+    return results
+
+
+def verify_hypothesis_answer(
+    hypothesis_final_answer: str,
+    ground_truth_answer_dict: Dict[str, Any],
+    ground_truth_checkpoints: List[CheckpointCriteria],
+    model: str,
+) -> List[Dict[str, Any]]:
+    """
+    Verify if the final answer meets all specified criteria
+    """
+    results = []
+
+    for criterion in ground_truth_checkpoints:
+        criteria = criterion.criteria
+        value = criterion.value
+
+        evaluation = evaluate_criterion(
+            criteria=criteria,
+            value=value,
+            ground_truth_output=ground_truth_answer_dict,
+            hypothesis_final_answer=hypothesis_final_answer,
+            model=model,
+        )
+
+        results.append(evaluation)
+
+    return results
+
+
+def extract_relevant_evidence(telemetry: List[Dict[str, Any]], criteria: str) -> str:
+    """Extract relevant telemetry evidence based on the checkpoint criteria
+    TODO this is not a very robust implementation, since it requires knowledge about which tools have been
+    implemented. We should abstract this so that it can dynamically figure out what tools may have been used
+    and check for them appropriately."""
+    evidence = ""
+
+    # Look for evidence of tool usage
+    if "DuckDuckGoSearchTool" in criteria:
+        search_spans = [
+            span for span in telemetry if span.get("name") == "DuckDuckGoSearchTool"
+        ]
+        evidence += f"Search tool was used {len(search_spans)} times.\n"
+        for i, span in enumerate(search_spans):  # Limit to first 3 searches
+            if "attributes" in span and "input.value" in span["attributes"]:
+                try:
+                    input_value = json.loads(span["attributes"]["input.value"])
+                    if "kwargs" in input_value and "query" in input_value["kwargs"]:
+                        evidence += (
+                            f"Search query {i + 1}: {input_value['kwargs']['query']}\n"
+                        )
+                except (json.JSONDecodeError, TypeError):
+                    pass
+
+    # Look for evidence of website fetching
+    if "fetched a website" in criteria:
+        fetch_spans = [
+            span
+            for span in telemetry
+            if span.get("attributes", {}).get("tool.name") == "fetch"
+        ]
+        evidence += f"Website fetch tool was used {len(fetch_spans)} times.\n"
+        for i, span in enumerate(fetch_spans):  # Limit to first 3 fetches
+            if "attributes" in span and "input.value" in span["attributes"]:
+                try:
+                    input_value = json.loads(span["attributes"]["input.value"])
+                    if "kwargs" in input_value and "url" in input_value["kwargs"]:
+                        evidence += (
+                            f"Fetched URL {i + 1}: {input_value['kwargs']['url']}\n"
+                        )
+                except (json.JSONDecodeError, TypeError):
+                    pass
+
+    # Add general evidence about all tool calls
+    tool_calls = {}
+    for span in telemetry:
+        if "name" in span and span["name"] not in tool_calls:
+            tool_calls[span["name"]] = 1
+        elif "name" in span:
+            tool_calls[span["name"]] += 1
+
+    evidence += "\nTool calls summary:\n"
+    for tool, count in tool_calls.items():
+        evidence += f"- {tool}: {count} call(s)\n"
+
+    return evidence
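A standalone sketch of the JSON-extraction step in `evaluate_criterion`: the judge model is asked to reply with a fenced JSON verdict, and the regex tolerates both fenced and bare objects. The sample `content` string is invented; the regex is the same one used above:

```py
import json
import re

# Invented judge reply, mimicking the fenced format requested in the prompt.
content = "Here is my verdict:\n```json\n{\n  \"passed\": true,\n  \"reason\": \"The answer names Playa de Patos.\",\n  \"score\": 1\n}\n```"

json_match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```|(\{.*?\})", content, re.DOTALL)
json_str = next(group for group in json_match.groups() if group)
evaluation = json.loads(json_str)
print(evaluation["passed"], evaluation["score"])  # True 1
```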
src/surf_spot_finder/tracing.py CHANGED
@@ -1,4 +1,5 @@
 import os
+import json
 from datetime import datetime

 from opentelemetry import trace
@@ -11,13 +12,33 @@ from phoenix.otel import register
 class JsonFileSpanExporter(SpanExporter):
     def __init__(self, file_name: str):
         self.file_name = file_name
+        # Initialize with an empty array if file doesn't exist
+        if not os.path.exists(self.file_name):
+            with open(self.file_name, "w") as f:
+                json.dump([], f)

     def export(self, spans) -> None:
-        with open(self.file_name, "a") as f:
-            for span in spans:
-                f.write(
-                    span.to_json() + "\n"
-                )  # Ensure to_json() method is properly implemented
+        # Read existing spans
+        try:
+            with open(self.file_name, "r") as f:
+                all_spans = json.load(f)
+        except (json.JSONDecodeError, FileNotFoundError):
+            all_spans = []
+
+        # Add new spans
+        for span in spans:
+            try:
+                # Try to parse the span data from to_json() if it returns a string
+                span_data = json.loads(span.to_json())
+            except (json.JSONDecodeError, TypeError, AttributeError):
+                # If span.to_json() doesn't return valid JSON string
+                span_data = {"error": "Could not serialize span", "span_str": str(span)}
+
+            all_spans.append(span_data)
+
+        # Write all spans back to the file as a proper JSON array
+        with open(self.file_name, "w") as f:
+            json.dump(all_spans, f, indent=2)

     def shutdown(self):
         pass
@@ -25,7 +46,7 @@ class JsonFileSpanExporter(SpanExporter):

 def get_tracer_provider(
     project_name: str, json_tracer: bool, output_dir: str = "telemetry_output"
-) -> TracerProvider:
+) -> tuple[TracerProvider, str | None]:
     """
     Create a tracer_provider based on the selected mode.

@@ -37,7 +58,8 @@ def get_tracer_provider(
            Defaults to "telemetry_output".

    Returns:
-        TracerProvider: The configured tracer provider
+        tracer_provider: The configured tracer provider
+        file_name: The name of the JSON file where telemetry will be stored
    """
    if json_tracer:
        if not os.path.exists(output_dir):
@@ -47,15 +69,17 @@ def get_tracer_provider(
         tracer_provider = TracerProvider()
         trace.set_tracer_provider(tracer_provider)

-        json_file_exporter = JsonFileSpanExporter(
-            file_name=f"{output_dir}/{project_name}-{timestamp}.json"
-        )
+        file_name = f"{output_dir}/{project_name}-{timestamp}.json"
+        json_file_exporter = JsonFileSpanExporter(file_name=file_name)
         span_processor = SimpleSpanProcessor(json_file_exporter)
         tracer_provider.add_span_processor(span_processor)
     else:
-        tracer_provider = register(project_name=project_name)
+        tracer_provider = register(
+            project_name=project_name, set_global_tracer_provider=True
+        )
+        file_name = None

-    return tracer_provider
+    return tracer_provider, file_name


 def setup_tracing(tracer_provider: TracerProvider, agent_type: str) -> None:
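A minimal sketch of how the changed signature is consumed; this mirrors `run_agent` in `evaluate.py`, and `file_name` comes back as `None` when the Phoenix backend is selected instead of the JSON tracer:

```py
import json

from surf_spot_finder.tracing import get_tracer_provider, setup_tracing

tracer_provider, telemetry_path = get_tracer_provider(
    project_name="surf-spot-finder", json_tracer=True
)
setup_tracing(tracer_provider, agent_type="smolagents")

# ... run the agent here ...

# The exporter now writes a single JSON array, so the telemetry can be read back directly:
with open(telemetry_path) as f:
    spans = json.load(f)
print(f"{len(spans)} spans recorded")
```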
tests/unit/test_unit_tracing.py CHANGED
@@ -27,7 +27,9 @@ def test_get_tracer_provider(tmp_path, json_tracer):
             mock_tracer_provider.return_value
         )
     else:
-        mock_register.assert_called_once_with(project_name="test_project")
+        mock_register.assert_called_once_with(
+            project_name="test_project", set_global_tracer_provider=True
+        )


 def test_invalid_agent_type():