KuberMehta committed on
Commit 0bfdd2a · verified · 1 Parent(s): 803dd23

Upload 3 files


Ported to temp HF space

Files changed (3)
  1. App.py +1065 -0
  2. README.md +71 -13
  3. requirements.txt +4 -0
App.py ADDED
@@ -0,0 +1,1065 @@
+ import os
+ import asyncio
+ import gradio as gr
+ import logging
+ from huggingface_hub import InferenceClient
+ import cohere
+ import google.generativeai as genai
+ try:
+     # Optional providers; not listed in requirements.txt because the
+     # hardcoded configuration below only uses Cohere, Hugging Face, and Gemini.
+     from anthropic import Anthropic
+     import openai
+ except ImportError:
+     Anthropic = None
+     openai = None
+ from typing import List, Dict, Any, Optional
+ 
+ # Configure logging
+ logging.basicConfig(level=logging.INFO)
+ logger = logging.getLogger(__name__)
+ 
+ # --- Agent Class ---
+ class PolyThinkAgent:
+     def __init__(self, model_name: str, model_path: str, role: str = "solver", api_provider: str = None):
+         self.model_name = model_name
+         self.model_path = model_path
+         self.role = role
+         self.api_provider = api_provider
+         self.clients = {}
+         self.hf_token = None
+         self.inference = None
+ 
+     def set_clients(self, clients: Dict[str, Any]):
+         """Set the API clients for this agent"""
+         self.clients = clients
+         if "huggingface" in clients:
+             self.hf_token = clients["huggingface"]
+             if self.hf_token:
+                 self.inference = InferenceClient(token=self.hf_token)
+ 
+     async def solve_problem(self, problem: str) -> Dict[str, Any]:
+         """Generate a solution to the given problem"""
+         try:
+             if self.api_provider == "cohere" and "cohere" in self.clients:
+                 response = self.clients["cohere"].chat(
+                     model=self.model_path,
+                     message=f"""
+                     PROBLEM: {problem}
+                     INSTRUCTIONS:
+                     - Provide a clear, concise solution in one sentence.
+                     - Include brief reasoning in one additional sentence.
+                     - Do not repeat the solution or add extraneous text.
+                     """
+                 )
+                 solution = response.text.strip()
+                 return {"solution": solution, "model_name": self.model_name}
+ 
+             elif self.api_provider == "anthropic" and "anthropic" in self.clients:
+                 response = self.clients["anthropic"].messages.create(
+                     model=self.model_path,
+                     max_tokens=100,  # required by the Anthropic Messages API; mirrors reprompt_with_context below
+                     messages=[{
+                         "role": "user",
+                         "content": f"""
+                         PROBLEM: {problem}
+                         INSTRUCTIONS:
+                         - Provide a clear, concise solution in one sentence.
+                         - Include brief reasoning in one additional sentence.
+                         - Do not repeat the solution or add extraneous text.
+                         """
+                     }]
+                 )
+                 solution = response.content[0].text.strip()
+                 return {"solution": solution, "model_name": self.model_name}
+ 
+             elif self.api_provider == "openai" and "openai" in self.clients:
+                 response = self.clients["openai"].chat.completions.create(
+                     model=self.model_path,
+                     max_tokens=100,
+                     messages=[{
+                         "role": "user",
+                         "content": f"""
+                         PROBLEM: {problem}
+                         INSTRUCTIONS:
+                         - Provide a clear, concise solution in one sentence.
+                         - Include brief reasoning in one additional sentence.
+                         - Do not repeat the solution or add extraneous text.
+                         - Keep the response under 100 characters.
+                         """
+                     }]
+                 )
+                 solution = response.choices[0].message.content.strip()
+                 return {"solution": solution, "model_name": self.model_name}
+ 
+             elif self.api_provider == "huggingface" and self.inference:
+                 prompt = f"""
+                 PROBLEM: {problem}
+                 INSTRUCTIONS:
+                 - Provide a clear, concise solution in one sentence.
+                 - Include brief reasoning in one additional sentence.
+                 - Do not repeat the solution or add extraneous text.
+                 - Keep the response under 100 characters.
+                 SOLUTION AND REASONING:
+                 """
+                 result = self.inference.text_generation(
+                     prompt, model=self.model_path, max_new_tokens=5000, temperature=0.5
+                 )
+                 solution = result if isinstance(result, str) else result.generated_text
+                 return {"solution": solution.strip(), "model_name": self.model_name}
+ 
+             elif self.api_provider == "gemini" and "gemini" in self.clients:
+                 model = self.clients["gemini"].GenerativeModel(self.model_path)
+                 try:
+                     response = model.generate_content(
+                         f"""
+                         PROBLEM: {problem}
+                         INSTRUCTIONS:
+                         - Provide a clear, concise solution in one sentence.
+                         - Include brief reasoning in one additional sentence.
+                         - Do not repeat the solution or add extraneous text.
+                         - Keep the response under 100 characters.
+                         """,
+                         generation_config=genai.types.GenerationConfig(
+                             temperature=0.5,
+                         )
+                     )
+                     # Check response validity and handle different response structures
+                     try:
+                         # First try to access text directly if available
+                         if hasattr(response, 'text'):
+                             solution = response.text.strip()
+                         # Otherwise check for candidates
+                         elif hasattr(response, 'candidates') and response.candidates:
+                             # Make sure we have candidates and parts before accessing
+                             if hasattr(response.candidates[0], 'content') and hasattr(response.candidates[0].content, 'parts'):
+                                 solution = response.candidates[0].content.parts[0].text.strip()
+                             else:
+                                 logger.warning(f"Gemini response has candidates but missing content structure: {response}")
+                                 solution = "Error parsing API response; incomplete response structure."
+                         else:
+                             # Fallback for when candidates is empty
+                             logger.warning(f"Gemini API returned no candidates: {response}")
+                             solution = "No solution generated; API returned empty response."
+                     except Exception as e:
+                         logger.error(f"Error extracting text from Gemini response: {e}, response: {response}")
+                         solution = "Error parsing API response."
+                 except Exception as e:
+                     logger.error(f"Gemini API call failed: {e}")
+                     solution = f"API error: {str(e)}"
+                 return {"solution": solution, "model_name": self.model_name}
+ 
+             else:
+                 return {"solution": f"Error: Missing API configuration for {self.api_provider}", "model_name": self.model_name}
+ 
+         except Exception as e:
+             logger.error(f"Error in {self.model_name}: {str(e)}")
+             return {"solution": f"Error: {str(e)}", "model_name": self.model_name}
+ 
+     async def evaluate_solutions(self, problem: str, solutions: List[Dict[str, Any]]) -> Dict[str, Any]:
+         """Evaluate solutions from solver agents"""
+         try:
+             prompt = f"""
+             PROBLEM: {problem}
+             SOLUTIONS:
+             1. {solutions[0]['model_name']}: {solutions[0]['solution']}
+             2. {solutions[1]['model_name']}: {solutions[1]['solution']}
+             INSTRUCTIONS:
+             - Extract the numerical final answer from each solution (e.g., 68 from '16 + 52 = 68').
+             - Extract the key reasoning steps from each solution.
+             - Apply strict evaluation criteria:
+               * Numerical answers must match EXACTLY (including units and precision).
+               * Key reasoning steps must align in approach and logic.
+             - Output exactly: 'AGREEMENT: YES' if BOTH the numerical answers AND reasoning align perfectly.
+             - Output 'AGREEMENT: NO' followed by a one-sentence explanation if either the answers or reasoning differ in ANY way.
+             - Be conservative in declaring agreement - when in doubt, declare disagreement.
+             - Do not add scoring, commentary, or extraneous text.
+             EVALUATION:
+             """
+ 
+             if self.api_provider == "gemini" and "gemini" in self.clients:
+                 # Instantiate the model for consistency and clarity
+                 model = self.clients["gemini"].GenerativeModel(self.model_path)
+                 # Use generate_content on the model instance
+                 response = model.generate_content(
+                     prompt,
+                     generation_config=genai.types.GenerationConfig(
+                         temperature=0.5,
+                     )
+                 )
+ 
+                 # Handle potential empty response or missing text attribute
+                 try:
+                     # First try to access text directly if available
+                     if hasattr(response, 'text'):
+                         judgment = response.text.strip()
+                     # Otherwise check for candidates
+                     elif hasattr(response, 'candidates') and response.candidates:
+                         # Make sure we have candidates and parts before accessing
+                         if hasattr(response.candidates[0], 'content') and hasattr(response.candidates[0].content, 'parts'):
+                             judgment = response.candidates[0].content.parts[0].text.strip()
+                         else:
+                             logger.warning(f"Gemini response has candidates but missing content structure: {response}")
+                             judgment = "AGREEMENT: NO - Unable to evaluate due to API response structure issue."
+                     else:
+                         # Fallback for when candidates is empty
+                         logger.warning(f"Empty response from Gemini API: {response}")
+                         judgment = "AGREEMENT: NO - Unable to evaluate due to API response issue."
+                 except Exception as e:
+                     logger.error(f"Error extracting text from Gemini response: {e}")
+                     judgment = "AGREEMENT: NO - Unable to evaluate due to API response issue."
+ 
+                 return {"judgment": judgment, "reprompt_needed": "AGREEMENT: NO" in judgment.upper()}
+ 
+             elif self.api_provider == "openai" and "openai" in self.clients:
+                 response = self.clients["openai"].chat.completions.create(
+                     model=self.model_path,
+                     max_tokens=200,
+                     messages=[{"role": "user", "content": prompt}]
+                 )
+                 judgment = response.choices[0].message.content.strip()
+                 return {"judgment": judgment, "reprompt_needed": "AGREEMENT: NO" in judgment.upper()}
+ 
+             elif self.api_provider == "huggingface" and self.inference:
+                 result = self.inference.text_generation(
+                     prompt, model=self.model_path, max_new_tokens=200, temperature=0.5
+                 )
+                 judgment = result if isinstance(result, str) else result.generated_text
+                 return {"judgment": judgment.strip(), "reprompt_needed": "AGREEMENT: NO" in judgment.upper()}
+ 
+             else:
+                 return {"judgment": f"Error: Missing API configuration for {self.api_provider}", "reprompt_needed": False}
+ 
+         except Exception as e:
+             logger.error(f"Error in judge: {str(e)}")
+             return {"judgment": f"Error: {str(e)}", "reprompt_needed": False}
+ 
+     async def reprompt_with_context(self, problem: str, solutions: List[Dict[str, Any]], judgment: str) -> Dict[str, Any]:
+         """Generate a revised solution based on previous solutions and judgment"""
+         try:
+             prompt = f"""
+             PROBLEM: {problem}
+             PREVIOUS SOLUTIONS:
+             1. {solutions[0]['model_name']}: {solutions[0]['solution']}
+             2. {solutions[1]['model_name']}: {solutions[1]['solution']}
+             JUDGE FEEDBACK: {judgment}
+             INSTRUCTIONS:
+             - Provide a revised, concise solution in one sentence.
+             - Include brief reasoning in one additional sentence.
+             - Address the judge's feedback.
+             """
+ 
+             if self.api_provider == "cohere" and "cohere" in self.clients:
+                 response = self.clients["cohere"].chat(
+                     model=self.model_path,
+                     message=prompt
+                 )
+                 solution = response.text.strip()
+                 return {"solution": solution, "model_name": self.model_name}
+ 
+             elif self.api_provider == "anthropic" and "anthropic" in self.clients:
+                 response = self.clients["anthropic"].messages.create(
+                     model=self.model_path,
+                     max_tokens=100,
+                     messages=[{"role": "user", "content": prompt}]
+                 )
+                 solution = response.content[0].text.strip()
+                 return {"solution": solution, "model_name": self.model_name}
+ 
+             elif self.api_provider == "openai" and "openai" in self.clients:
+                 response = self.clients["openai"].chat.completions.create(
+                     model=self.model_path,
+                     max_tokens=100,
+                     messages=[{"role": "user", "content": prompt}]
+                 )
+                 solution = response.choices[0].message.content.strip()
+                 return {"solution": solution, "model_name": self.model_name}
+ 
+             elif self.api_provider == "huggingface" and self.inference:
+                 prompt += "\nREVISED SOLUTION AND REASONING:"
+                 result = self.inference.text_generation(
+                     prompt, model=self.model_path, max_new_tokens=500, temperature=0.5
+                 )
+                 solution = result if isinstance(result, str) else result.generated_text
+                 return {"solution": solution.strip(), "model_name": self.model_name}
+ 
+             elif self.api_provider == "gemini" and "gemini" in self.clients:
+                 # Instantiate the model for consistency and clarity
+                 model = self.clients["gemini"].GenerativeModel(self.model_path)
+                 # Use generate_content
+                 response = model.generate_content(
+                     f"""
+                     PROBLEM: {problem}
+                     PREVIOUS SOLUTIONS:
+                     1. {solutions[0]['model_name']}: {solutions[0]['solution']}
+                     2. {solutions[1]['model_name']}: {solutions[1]['solution']}
+                     JUDGE FEEDBACK: {judgment}
+                     INSTRUCTIONS:
+                     - Provide a revised, concise solution in one sentence.
+                     - Include brief reasoning in one additional sentence.
+                     - Address the judge's feedback.
+                     """,
+                     generation_config=genai.types.GenerationConfig(
+                         temperature=0.5,
+                         max_output_tokens=100
+                     )
+                 )
+                 # Handle potential empty response or missing text attribute
+                 try:
+                     # First try to access text directly if available
+                     if hasattr(response, 'text'):
+                         solution = response.text.strip()
+                     # Otherwise check for candidates
+                     elif hasattr(response, 'candidates') and response.candidates:
+                         # Make sure we have candidates and parts before accessing
+                         if hasattr(response.candidates[0], 'content') and hasattr(response.candidates[0].content, 'parts'):
+                             solution = response.candidates[0].content.parts[0].text.strip()
+                         else:
+                             logger.warning(f"Gemini response has candidates but missing content structure: {response}")
+                             solution = "Unable to generate a solution due to API response structure issue."
+                     else:
+                         # Fallback for when candidates is empty
+                         logger.warning(f"Empty response from Gemini API: {response}")
+                         solution = "Unable to generate a solution due to API response issue."
+                 except Exception as e:
+                     logger.error(f"Error extracting text from Gemini response: {e}")
+                     solution = "Unable to generate a solution due to API response issue."
+ 
+                 return {"solution": solution, "model_name": self.model_name}
+             else:
+                 return {"solution": f"Error: Missing API configuration for {self.api_provider}", "model_name": self.model_name}
+ 
+         except Exception as e:
+             logger.error(f"Error in {self.model_name}: {str(e)}")
+             return {"solution": f"Error: {str(e)}", "model_name": self.model_name}
+ 
+ # --- Model Registry ---
+ class ModelRegistry:
+     @staticmethod
+     def get_available_models():
+         """Get the list of available models grouped by provider (original list)"""
+         return {
+             "Anthropic": [
+                 {"name": "Claude 3.5 Sonnet", "id": "claude-3-5-sonnet-20240620", "provider": "anthropic", "type": ["solver"], "icon": "📜"},
+                 {"name": "Claude 3.7 Sonnet", "id": "claude-3-7-sonnet-20250219", "provider": "anthropic", "type": ["solver"], "icon": "📜"},
+                 {"name": "Claude 3 Opus", "id": "claude-3-opus-20240229", "provider": "anthropic", "type": ["solver"], "icon": "📜"},
+                 {"name": "Claude 3 Haiku", "id": "claude-3-haiku-20240307", "provider": "anthropic", "type": ["solver"], "icon": "📜"}
+             ],
+             "OpenAI": [
+                 {"name": "GPT-4o", "id": "gpt-4o", "provider": "openai", "type": ["solver"], "icon": "🤖"},
+                 {"name": "GPT-4 Turbo", "id": "gpt-4-turbo", "provider": "openai", "type": ["solver"], "icon": "🤖"},
+                 {"name": "GPT-4", "id": "gpt-4", "provider": "openai", "type": ["solver"], "icon": "🤖"},
+                 {"name": "GPT-3.5 Turbo", "id": "gpt-3.5-turbo", "provider": "openai", "type": ["solver"], "icon": "🤖"},
+                 {"name": "OpenAI o1", "id": "o1", "provider": "openai", "type": ["solver", "judge"], "icon": "🤖"},
+                 {"name": "OpenAI o3", "id": "o3", "provider": "openai", "type": ["solver", "judge"], "icon": "🤖"}
+             ],
+             "Cohere": [
+                 {"name": "Cohere Command R", "id": "command-r-08-2024", "provider": "cohere", "type": ["solver"], "icon": "💬"},
+                 {"name": "Cohere Command R+", "id": "command-r-plus-08-2024", "provider": "cohere", "type": ["solver"], "icon": "💬"}
+             ],
+             "Google": [
+                 {"name": "Gemini 1.5 Pro", "id": "gemini-1.5-pro", "provider": "gemini", "type": ["solver"], "icon": "🌟"},
+                 {"name": "Gemini 2.0 Flash Thinking Experimental 01-21", "id": "gemini-2.0-flash-thinking-exp-01-21", "provider": "gemini", "type": ["solver", "judge"], "icon": "🌟"},
+                 {"name": "Gemini 2.5 Pro Experimental 03-25", "id": "gemini-2.5-pro-exp-03-25", "provider": "gemini", "type": ["solver", "judge"], "icon": "🌟"}
+             ],
+             "HuggingFace": [
+                 {"name": "Llama 3.3 70B Instruct", "id": "meta-llama/Llama-3.3-70B-Instruct", "provider": "huggingface", "type": ["solver"], "icon": "🔥"},
+                 {"name": "Llama 3.2 3B Instruct", "id": "meta-llama/Llama-3.2-3B-Instruct", "provider": "huggingface", "type": ["solver"], "icon": "🔥"},
+                 {"name": "Llama 3.1 70B Instruct", "id": "meta-llama/Llama-3.1-70B-Instruct", "provider": "huggingface", "type": ["solver"], "icon": "🔥"},
+                 {"name": "Mistral 7B Instruct v0.3", "id": "mistralai/Mistral-7B-Instruct-v0.3", "provider": "huggingface", "type": ["solver"], "icon": "🔥"},
+                 {"name": "DeepSeek R1 Distill Qwen 32B", "id": "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B", "provider": "huggingface", "type": ["solver", "judge"], "icon": "🔥"},
+                 {"name": "DeepSeek Coder V2 Instruct", "id": "deepseek-ai/DeepSeek-Coder-V2-Instruct", "provider": "huggingface", "type": ["solver"], "icon": "🔥"},
+                 {"name": "Qwen 2.5 72B Instruct", "id": "Qwen/Qwen2.5-72B-Instruct", "provider": "huggingface", "type": ["solver"], "icon": "🔥"},
+                 {"name": "Qwen 2.5 Coder 32B Instruct", "id": "Qwen/Qwen2.5-Coder-32B-Instruct", "provider": "huggingface", "type": ["solver"], "icon": "🔥"},
+                 {"name": "Qwen 2.5 Math 1.5B Instruct", "id": "Qwen/Qwen2.5-Math-1.5B-Instruct", "provider": "huggingface", "type": ["solver"], "icon": "🔥"},
+                 {"name": "Gemma 3 27B Instruct", "id": "google/gemma-3-27b-it", "provider": "huggingface", "type": ["solver"], "icon": "🔥"},
+                 {"name": "Phi-3 Mini 4K Instruct", "id": "microsoft/Phi-3-mini-4k-instruct", "provider": "huggingface", "type": ["solver"], "icon": "🔥"}
+             ]
+         }
+ 
+     @staticmethod
+     def get_solver_models():
+         """Get models suitable for solver role with provider grouping"""
+         all_models = ModelRegistry.get_available_models()
+         solver_models = {}
+ 
+         for provider, models in all_models.items():
+             provider_models = []
+             for model in models:
+                 if "solver" in model["type"]:
+                     provider_models.append({
+                         "name": f"{model['icon']} {model['name']} ({provider})",
+                         "id": model["id"],
+                         "provider": model["provider"]
+                     })
+             if provider_models:
+                 solver_models[provider] = provider_models
+ 
+         return solver_models
+ 
+     @staticmethod
+     def get_judge_models():
+         """Get only specific reasoning models suitable for judge role with provider grouping"""
+         all_models = ModelRegistry.get_available_models()
+         judge_models = {}
+         allowed_judge_models = [
+             "Gemini 2.0 Flash Thinking Experimental 01-21 (Google)",
+             "DeepSeek R1 Distill Qwen 32B (HuggingFace)",  # matches the registry's judge-capable DeepSeek entry above
+             "Gemini 2.5 Pro Experimental 03-25 (Google)",
+             "OpenAI o1 (OpenAI)",
+             "OpenAI o3 (OpenAI)"
+         ]
+ 
+         for provider, models in all_models.items():
+             provider_models = []
+             for model in models:
+                 full_name = f"{model['name']} ({provider})"
+                 if "judge" in model["type"] and full_name in allowed_judge_models:
+                     provider_models.append({
+                         "name": f"{model['icon']} {model['name']} ({provider})",
+                         "id": model["id"],
+                         "provider": model["provider"]
+                     })
+             if provider_models:
+                 judge_models[provider] = provider_models
+ 
+         return judge_models
+ 
+ # --- Orchestrator Class ---
+ def clean_model_name(display_name: str) -> str:
+     """Strip the optional leading emoji icon and trailing ' (Provider)' suffix
+     used by dropdown display names; plain hardcoded names pass through unchanged."""
+     base = display_name.rsplit(" (", 1)[0]
+     first, _, rest = base.partition(" ")
+     return rest if rest and not first.isascii() else base
+ 
+ class PolyThinkOrchestrator:
+     def __init__(self, solver1_config=None, solver2_config=None, judge_config=None, api_clients=None):
+         self.solvers = []
+         self.judge = None
+         self.api_clients = api_clients or {}
+ 
+         if solver1_config:
+             solver1 = PolyThinkAgent(
+                 model_name=clean_model_name(solver1_config["name"]),
+                 model_path=solver1_config["id"],
+                 api_provider=solver1_config["provider"]
+             )
+             solver1.set_clients(self.api_clients)
+             self.solvers.append(solver1)
+ 
+         if solver2_config:
+             solver2 = PolyThinkAgent(
+                 model_name=clean_model_name(solver2_config["name"]),
+                 model_path=solver2_config["id"],
+                 api_provider=solver2_config["provider"]
+             )
+             solver2.set_clients(self.api_clients)
+             self.solvers.append(solver2)
+ 
+         if judge_config:
+             self.judge = PolyThinkAgent(
+                 model_name=clean_model_name(judge_config["name"]),
+                 model_path=judge_config["id"],
+                 role="judge",
+                 api_provider=judge_config["provider"]
+             )
+             self.judge.set_clients(self.api_clients)
+ 
+     async def get_initial_solutions(self, problem: str) -> List[Dict[str, Any]]:
+         tasks = [solver.solve_problem(problem) for solver in self.solvers]
+         return await asyncio.gather(*tasks)
+ 
+     async def get_judgment(self, problem: str, solutions: List[Dict[str, Any]]) -> Dict[str, Any]:
+         if self.judge:
+             return await self.judge.evaluate_solutions(problem, solutions)
+         return {"judgment": "No judge configured", "reprompt_needed": False}
+ 
+     async def get_revised_solutions(self, problem: str, solutions: List[Dict[str, Any]], judgment: str) -> List[Dict[str, Any]]:
+         tasks = [solver.reprompt_with_context(problem, solutions, judgment) for solver in self.solvers]
+         return await asyncio.gather(*tasks)
+ 
+     def generate_final_report(self, problem: str, history: List[Dict[str, Any]]) -> str:
+         report = f"""
+         <div class="final-report-container">
+             <h2 class="final-report-title">🔍 Final Analysis Report</h2>
+             <div class="problem-container">
+                 <h3 class="problem-title">Problem Statement</h3>
+                 <div class="problem-content">{problem}</div>
+             </div>
+ 
+             <div class="timeline-container">
+         """
+ 
+         for i, step in enumerate(history, 1):
+             if "solutions" in step and i == 1:
+                 report += f"""
+                 <div class="timeline-item">
+                     <div class="timeline-marker">1</div>
+                     <div class="timeline-content">
+                         <h4>Initial Solutions</h4>
+                         <div class="solutions-container">
+                 """
+ 
+                 for sol in step["solutions"]:
+                     report += f"""
+                     <div class="solution-item">
+                         <div class="solution-header">{sol['model_name']}</div>
+                         <div class="solution-body">{sol['solution']}</div>
+                     </div>
+                     """
+ 
+                 report += """
+                         </div>
+                     </div>
+                 </div>
+                 """
+ 
+             elif "judgment" in step:
+                 is_agreement = "AGREEMENT: YES" in step["judgment"].upper()
+                 judgment_class = "agreement" if is_agreement else "disagreement"
+                 judgment_icon = "✅" if is_agreement else "❌"
+ 
+                 report += f"""
+                 <div class="timeline-item">
+                     <div class="timeline-marker">{i}</div>
+                     <div class="timeline-content">
+                         <h4>Evaluation {(i+1)//2}</h4>
+                         <div class="judgment-container {judgment_class}">
+                             <div class="judgment-icon">{judgment_icon}</div>
+                             <div class="judgment-text">{step["judgment"]}</div>
+                         </div>
+                     </div>
+                 </div>
+                 """
+ 
+             elif "solutions" in step and i > 1:
+                 round_num = (i+1)//2
+                 report += f"""
+                 <div class="timeline-item">
+                     <div class="timeline-marker">{i}</div>
+                     <div class="timeline-content">
+                         <h4>Revised Solutions (Round {round_num})</h4>
+                         <div class="solutions-container">
+                 """
+ 
+                 for sol in step["solutions"]:
+                     report += f"""
+                     <div class="solution-item">
+                         <div class="solution-header">{sol['model_name']}</div>
+                         <div class="solution-body">{sol['solution']}</div>
+                     </div>
+                     """
+ 
+                 report += """
+                         </div>
+                     </div>
+                 </div>
+                 """
+ 
+         last_judgment = next((step.get("judgment", "") for step in reversed(history) if "judgment" in step), "")
+         if "AGREEMENT: YES" in last_judgment.upper():
+             confidence = "100%" if len(history) == 2 else "80%"
+             report += f"""
+             <div class="conclusion-container agreement">
+                 <h3>Conclusion</h3>
+                 <div class="conclusion-content">
+                     <div class="conclusion-icon">✅</div>
+                     <div class="conclusion-text">
+                         <p>Models reached <strong>AGREEMENT</strong></p>
+                         <p>Confidence level: <strong>{confidence}</strong></p>
+                     </div>
+                 </div>
+             </div>
+             """
+         else:
+             report += f"""
+             <div class="conclusion-container disagreement">
+                 <h3>Conclusion</h3>
+                 <div class="conclusion-content">
+                     <div class="conclusion-icon">❓</div>
+                     <div class="conclusion-text">
+                         <p>Models could not reach agreement</p>
+                         <p>Review all solutions above for best answer</p>
+                     </div>
+                 </div>
+             </div>
+             """
+ 
+         report += """
+             </div>
+         </div>
+         """
+ 
+         return report
+ 
+ # --- Gradio Interface ---
+ def create_polythink_interface():
+     custom_css = """
+     /* Reverted to Original Black Theme */
+     body {
+         background: #000000;
+         color: #ffffff;
+         font-family: 'Arial', sans-serif;
+     }
+     .gradio-container {
+         background: #1a1a1a;
+         border-radius: 10px;
+         box-shadow: 0 4px 15px rgba(0, 0, 0, 0.5);
+         padding: 20px;
+     }
+     .gr-button {
+         background: linear-gradient(45deg, #666666, #999999);
+         color: #ffffff;
+         border: none;
+         padding: 10px 20px;
+         border-radius: 5px;
+         transition: all 0.3s ease;
+     }
+     .gr-button:hover {
+         background: linear-gradient(45deg, #555555, #888888);
+         transform: translateY(-2px);
+     }
+     .gr-textbox {
+         background: #333333;
+         color: #ffffff;
+         border: 1px solid #444444;
+         border-radius: 5px;
+         padding: 10px;
+     }
+     .gr-slider {
+         background: #333333;
+         border-radius: 5px;
+     }
+     .gr-slider .track-fill {
+         background: #cccccc;
+     }
+     .step-section {
+         background: #1a1a1a;
+         border-radius: 8px;
+         padding: 15px;
+         margin-bottom: 20px;
+         box-shadow: 0 2px 10px rgba(0, 0, 0, 0.3);
+     }
+     .step-section h3 {
+         color: #cccccc;
+         margin-top: 0;
+         font-size: 1.5em;
+     }
+     .step-section p {
+         color: #aaaaaa;
+         line-height: 1.6;
+     }
+     .step-section code {
+         background: #333333;
+         padding: 2px 6px;
+         border-radius: 3px;
+         color: #ff6b6b;
+     }
+     .step-section strong {
+         color: #ffffff;
+     }
+     .status-bar {
+         background: #1a1a1a;
+         padding: 10px;
+         border-radius: 5px;
+         font-size: 1.1em;
+         margin-bottom: 20px;
+         border-left: 4px solid #666666;
+     }
+ 
+     /* Agreement/Disagreement styling */
+     .agreement {
+         color: #4CAF50 !important;
+         border: 1px solid #4CAF50;
+         background-color: rgba(76, 175, 80, 0.1) !important;
+         padding: 10px;
+         border-radius: 5px;
+     }
+ 
+     .disagreement {
+         color: #F44336 !important;
+         border: 1px solid #F44336;
+         background-color: rgba(244, 67, 54, 0.1) !important;
+         padding: 10px;
+         border-radius: 5px;
+     }
+ 
+     /* Enhanced Final Report Styling */
+     .final-report {
+         background: #111111;
+         padding: 0;
+         border-radius: 8px;
+         box-shadow: 0 4px 15px rgba(0, 0, 0, 0.5);
+         margin-top: 20px;
+         overflow: hidden;
+     }
+ 
+     .final-report-container {
+         font-family: 'Arial', sans-serif;
+     }
+ 
+     .final-report-title {
+         background: linear-gradient(45deg, #333333, #444444);
+         color: #ffffff;
+         padding: 20px;
+         margin: 0;
+         border-bottom: 1px solid #555555;
+         font-size: 24px;
+         text-align: center;
+     }
+ 
+     .problem-container {
+         background: #222222;
+         padding: 15px 20px;
+         margin: 0;
+         border-bottom: 1px solid #333333;
+     }
+ 
+     .problem-title {
+         color: #bbbbbb;
+         margin: 0 0 10px 0;
+         font-size: 18px;
+     }
+ 
+     .problem-content {
+         background: #333333;
+         padding: 15px;
+         border-radius: 5px;
+         font-family: monospace;
+         font-size: 16px;
+         color: #ffffff;
+     }
+ 
+     .timeline-container {
+         padding: 20px;
+     }
+ 
+     .timeline-item {
+         display: flex;
+         margin-bottom: 25px;
+         position: relative;
+     }
+ 
+     .timeline-item:before {
+         content: '';
+         position: absolute;
+         left: 15px;
+         top: 30px;
+         bottom: -25px;
+         width: 2px;
+         background: #444444;
+         z-index: 0;
+     }
+ 
+     .timeline-item:last-child:before {
+         display: none;
+     }
+ 
+     .timeline-marker {
+         width: 34px;
+         height: 34px;
+         border-radius: 50%;
+         background: #333333;
+         display: flex;
+         align-items: center;
+         justify-content: center;
+         font-weight: bold;
+         position: relative;
+         z-index: 1;
+         border: 2px solid #555555;
+         margin-right: 15px;
+     }
+ 
+     .timeline-content {
+         flex: 1;
+         background: #1d1d1d;
+         border-radius: 5px;
+         padding: 15px;
+         border: 1px solid #333333;
+     }
+ 
+     .timeline-content h4 {
+         margin-top: 0;
+         margin-bottom: 15px;
+         color: #cccccc;
+         border-bottom: 1px solid #333333;
+         padding-bottom: 8px;
+     }
+ 
+     .solutions-container {
+         display: flex;
+         flex-wrap: wrap;
+         gap: 10px;
+     }
+ 
+     .solution-item {
+         flex: 1;
+         min-width: 250px;
+         background: #252525;
+         border-radius: 5px;
+         overflow: hidden;
+         border: 1px solid #383838;
+     }
+ 
+     .solution-header {
+         background: #333333;
+         padding: 8px 12px;
+         font-weight: bold;
+         color: #dddddd;
+         border-bottom: 1px solid #444444;
+     }
+ 
+     .solution-body {
+         padding: 12px;
+         color: #bbbbbb;
+     }
+ 
+     .judgment-container {
+         display: flex;
+         align-items: center;
+         padding: 10px;
+         border-radius: 5px;
+     }
+ 
+     .judgment-icon {
+         font-size: 24px;
+         margin-right: 15px;
+     }
+ 
+     .conclusion-container {
+         margin-top: 30px;
+         border-radius: 5px;
+         padding: 5px 15px 15px;
+     }
+ 
+     .conclusion-content {
+         display: flex;
+         align-items: center;
+     }
+ 
+     .conclusion-icon {
+         font-size: 36px;
+         margin-right: 20px;
+     }
+ 
+     .conclusion-text {
+         flex: 1;
+     }
+ 
+     .conclusion-text p {
+         margin: 5px 0;
+     }
+ 
+     /* Header styling */
+     .app-header {
+         background: linear-gradient(45deg, #222222, #333333);
+         padding: 20px;
+         border-radius: 10px;
+         margin-bottom: 20px;
+         box-shadow: 0 4px 10px rgba(0, 0, 0, 0.3);
+         border: 1px solid #444444;
+     }
+ 
+     .app-title {
+         font-size: 28px;
+         margin: 0 0 10px 0;
+         background: -webkit-linear-gradient(45deg, #cccccc, #ffffff);
+         -webkit-background-clip: text;
+         -webkit-text-fill-color: transparent;
+         display: inline-block;
+     }
+ 
+     .app-subtitle {
+         font-size: 16px;
+         color: #aaaaaa;
+         margin: 0;
+     }
+ 
+     /* Button style */
+     .primary-button {
+         background: linear-gradient(45deg, #555555, #777777) !important;
+         border: none !important;
+         color: white !important;
+         padding: 12px 24px !important;
+         font-weight: bold !important;
+         transition: all 0.3s ease !important;
+         box-shadow: 0 4px 10px rgba(0, 0, 0, 0.3) !important;
+     }
+ 
+     .primary-button:hover {
+         transform: translateY(-2px) !important;
+         box-shadow: 0 6px 15px rgba(0, 0, 0, 0.4) !important;
+         background: linear-gradient(45deg, #666666, #888888) !important;
+     }
+     """
+ 
+     # Hardcoded model configurations
+     solver1_config = {
+         "name": "Cohere Command R",
+         "id": "command-r-08-2024",
+         "provider": "cohere"
+     }
+ 
+     solver2_config = {
+         "name": "Llama 3.2 3B Instruct",
+         "id": "meta-llama/Llama-3.2-3B-Instruct",
+         "provider": "huggingface"
+     }
+ 
+     judge_config = {
+         "name": "Gemini 2.0 Flash Thinking Experimental 01-21",
+         "id": "gemini-2.0-flash-thinking-exp-01-21",
+         "provider": "gemini"
+     }
+ 
+     async def solve_problem(problem: str, max_rounds: int):
+         # Get API keys from environment variables
+         api_clients = {}
+ 
+         # Cohere client
+         cohere_key = os.getenv("COHERE_API_KEY")
+         if cohere_key:
+             api_clients["cohere"] = cohere.Client(cohere_key)
+ 
+         # Hugging Face client
+         hf_key = os.getenv("HF_API_KEY")
+         if hf_key:
+             api_clients["huggingface"] = hf_key
+ 
+         # Gemini client
+         gemini_key = os.getenv("GEMINI_API_KEY")
+         if gemini_key:
+             genai.configure(api_key=gemini_key)
+             api_clients["gemini"] = genai
+ 
+         # Check if all required API keys are present
+         required_providers = {solver1_config["provider"], solver2_config["provider"], judge_config["provider"]}
+         missing_keys = [p for p in required_providers if p not in api_clients]
+         if missing_keys:
+             yield [
+                 gr.update(value=f"Error: Missing API keys for {', '.join(missing_keys)}", visible=True),
+                 gr.update(visible=False),
+                 gr.update(visible=False),
+                 gr.update(visible=False),
+                 gr.update(visible=False),
+                 gr.update(visible=False),
+                 gr.update(visible=False),
+                 gr.update(visible=False),
+                 gr.update(value=f"### Status: ❌ Missing API keys for {', '.join(missing_keys)}", visible=True)
+             ]
+             return
+ 
+         orchestrator = PolyThinkOrchestrator(solver1_config, solver2_config, judge_config, api_clients)
+ 
+         initial_solutions = await orchestrator.get_initial_solutions(problem)
+         initial_content = f"## Initial Solutions\n**Problem:** `{problem}`\n\n**Solutions:**\n- **{initial_solutions[0]['model_name']}**: {initial_solutions[0]['solution']}\n- **{initial_solutions[1]['model_name']}**: {initial_solutions[1]['solution']}"
+         yield [
+             gr.update(value=initial_content, visible=True),
+             gr.update(value="", visible=False),
+             gr.update(value="", visible=False),
+             gr.update(value="", visible=False),
+             gr.update(value="", visible=False),
+             gr.update(value="", visible=False),
+             gr.update(value="", visible=False),
+             gr.update(value="", visible=False),
+             gr.update(value="### Status: 📋 Initial solutions generated", visible=True)
+         ]
+         await asyncio.sleep(1)
+ 
+         solutions = initial_solutions
+         history = [{"solutions": initial_solutions}]
+         max_outputs = max(int(max_rounds) * 2, 6)
+         # NOTE: the interface below exposes only 3 judgment + 3 revision panels
+         # (indices 0-5); rounds beyond the third are computed but not displayed.
+         round_outputs = [""] * max_outputs
+ 
+         for round_num in range(int(max_rounds)):
+             judgment = await orchestrator.get_judgment(problem, solutions)
+             history.append({"judgment": judgment["judgment"]})
+ 
+             is_agreement = "AGREEMENT: YES" in judgment["judgment"].upper()
+             agreement_class = "agreement" if is_agreement else "disagreement"
+             agreement_icon = "✅" if is_agreement else "❌"
+ 
+             judgment_content = f"## Round {round_num + 1} Judgment\n**Evaluation:** <div class='{agreement_class}'>{agreement_icon} {judgment['judgment']}</div>"
+             round_outputs[round_num * 2] = judgment_content
+ 
+             yield [
+                 gr.update(value=initial_content, visible=True),
+                 gr.update(value=round_outputs[0], visible=bool(round_outputs[0])),
+                 gr.update(value=round_outputs[1], visible=bool(round_outputs[1])),
+                 gr.update(value=round_outputs[2], visible=bool(round_outputs[2])),
+                 gr.update(value=round_outputs[3], visible=bool(round_outputs[3])),
+                 gr.update(value=round_outputs[4], visible=bool(round_outputs[4])),
+                 gr.update(value=round_outputs[5], visible=bool(round_outputs[5])),
+                 gr.update(value="", visible=False),
+                 gr.update(value=f"### Status: 🔍 Round {round_num + 1} judgment complete", visible=True)
+             ]
+             await asyncio.sleep(1)
+ 
+             if not judgment["reprompt_needed"]:
+                 break
+ 
+             revised_solutions = await orchestrator.get_revised_solutions(problem, solutions, judgment["judgment"])
+             history.append({"solutions": revised_solutions})
+             revision_content = f"## Round {round_num + 1} Revised Solutions\n**Revised Solutions:**\n- **{revised_solutions[0]['model_name']}**: {revised_solutions[0]['solution']}\n- **{revised_solutions[1]['model_name']}**: {revised_solutions[1]['solution']}"
+             round_outputs[round_num * 2 + 1] = revision_content
+             yield [
+                 gr.update(value=initial_content, visible=True),
+                 gr.update(value=round_outputs[0], visible=bool(round_outputs[0])),
+                 gr.update(value=round_outputs[1], visible=bool(round_outputs[1])),
+                 gr.update(value=round_outputs[2], visible=bool(round_outputs[2])),
+                 gr.update(value=round_outputs[3], visible=bool(round_outputs[3])),
+                 gr.update(value=round_outputs[4], visible=bool(round_outputs[4])),
+                 gr.update(value=round_outputs[5], visible=bool(round_outputs[5])),
+                 gr.update(value="", visible=False),
+                 gr.update(value=f"### Status: 🔄 Round {round_num + 1} revised solutions generated", visible=True)
+             ]
+             await asyncio.sleep(1)
+             solutions = revised_solutions
+ 
+         final_report_content = orchestrator.generate_final_report(problem, history)
+         yield [
+             gr.update(value=initial_content, visible=True),
+             gr.update(value=round_outputs[0], visible=True),
+             gr.update(value=round_outputs[1], visible=bool(round_outputs[1])),
+             gr.update(value=round_outputs[2], visible=bool(round_outputs[2])),
+             gr.update(value=round_outputs[3], visible=bool(round_outputs[3])),
+             gr.update(value=round_outputs[4], visible=bool(round_outputs[4])),
+             gr.update(value=round_outputs[5], visible=bool(round_outputs[5])),
+             gr.update(value=final_report_content, visible=True),
+             gr.update(value=f"### Status: ✨ Process complete! Completed {round_num + 1} round(s)", visible=True)
+         ]
+ 
+     with gr.Blocks(title="PolyThink Alpha", css=custom_css) as demo:
+         with gr.Column(elem_classes=["app-header"]):
+             gr.Markdown("<h1 class='app-title'>PolyThink Alpha</h1>", show_label=False)
+             gr.Markdown("<p class='app-subtitle'>Multi-Agent Problem Solving System</p>", show_label=False)
+ 
+         with gr.Row():
+             with gr.Column(scale=2):
+                 gr.Markdown("### Problem Input")
+                 problem_input = gr.Textbox(label="Problem", placeholder="e.g., What is 32 + 63?", lines=3)
+                 rounds_slider = gr.Slider(2, 6, value=2, step=1, label="Maximum Rounds")
+                 solve_button = gr.Button("Solve Problem", elem_classes=["primary-button"])
+ 
+         status_text = gr.Markdown("### Status: Ready", elem_classes=["status-bar"], visible=True)
+ 
+         with gr.Column():
+             initial_solutions = gr.Markdown(elem_classes=["step-section"], visible=False)
+             round_judgment_1 = gr.Markdown(elem_classes=["step-section"], visible=False)
+             revised_solutions_1 = gr.Markdown(elem_classes=["step-section"], visible=False)
+             round_judgment_2 = gr.Markdown(elem_classes=["step-section"], visible=False)
+             revised_solutions_2 = gr.Markdown(elem_classes=["step-section"], visible=False)
+             round_judgment_3 = gr.Markdown(elem_classes=["step-section"], visible=False)
+             revised_solutions_3 = gr.Markdown(elem_classes=["step-section"], visible=False)
+             final_report = gr.HTML(elem_classes=["final-report"], visible=False)
+ 
+         solve_button.click(
+             fn=solve_problem,
+             inputs=[
+                 problem_input,
+                 rounds_slider
+             ],
+             outputs=[
+                 initial_solutions,
+                 round_judgment_1,
+                 revised_solutions_1,
+                 round_judgment_2,
+                 revised_solutions_2,
+                 round_judgment_3,
+                 revised_solutions_3,
+                 final_report,
+                 status_text
+             ]
+         )
+ 
+     return demo.queue()
+ 
+ if __name__ == "__main__":
+     demo = create_polythink_interface()
+     demo.launch(share=True)
README.md CHANGED
@@ -1,13 +1,71 @@
- ---
- title: PolyThink Alpha
- emoji: 👀
- colorFrom: indigo
- colorTo: green
- sdk: gradio
- sdk_version: 5.23.3
- app_file: app.py
- pinned: false
- short_description: Multiple AI Models Fighting to Give You the Best Answer
- ---
- 
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ ---
+ title: PolyThink-YC
+ emoji: 💭
+ colorFrom: gray
+ colorTo: gray
+ sdk: gradio
+ sdk_version: "5.11.0"
+ app_file: App.py
+ pinned: true
+ ---
+ 
+ # PolyThink Multi-Agent Problem Solver
+ 
+ A multi-agent system that uses multiple AI models to solve problems collaboratively through a consensus-based approach.
+ 
+ ## Architecture
+ 
+ PolyThink uses a multi-agent architecture with three specialized AI models:
+ 
+ 1. **Solver Agents**:
+    - **Cohere Command R**: A powerful reasoning model that generates concise solutions
+    - **Llama 3.2 3B**: A Meta AI model that provides alternative perspectives
+ 
+ 2. **Judge Agent**:
+    - **Gemini 2.0 Flash Thinking**: Evaluates solutions from solver agents and determines if they agree
+ 
+ The system works through multiple rounds of solution refinement until consensus is reached or the maximum number of rounds is completed.
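+ 
+ In outline, the refinement loop implemented by `solve_problem` and `PolyThinkOrchestrator` in App.py looks roughly like this (a simplified sketch, not the verbatim code):
+ 
+ ```python
+ solutions = await orchestrator.get_initial_solutions(problem)
+ for round_num in range(max_rounds):
+     judgment = await orchestrator.get_judgment(problem, solutions)
+     if not judgment["reprompt_needed"]:  # judge answered "AGREEMENT: YES"
+         break
+     solutions = await orchestrator.get_revised_solutions(problem, solutions, judgment["judgment"])
+ ```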
+ 
+ ## Setup
+ 
+ 1. Clone this repository
+ 2. Install dependencies:
+    ```bash
+    pip install -r requirements.txt
+    ```
+ 3. Set up your API keys:
+    - Get your Hugging Face token from [Hugging Face](https://huggingface.co/settings/tokens)
+    - Get your Cohere API key from [Cohere](https://dashboard.cohere.com/api-keys)
+    - Get your Gemini API key from [Google AI Studio](https://makersuite.google.com/app/apikey)
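+ 
+    App.py reads these keys from the environment variables `COHERE_API_KEY`, `HF_API_KEY`, and `GEMINI_API_KEY`, for example (placeholder values):
+ 
+    ```bash
+    export COHERE_API_KEY="your-cohere-key"
+    export HF_API_KEY="your-hugging-face-token"
+    export GEMINI_API_KEY="your-gemini-key"
+    ```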
+ 
+ ## Usage
+ 
+ Run the application:
+ ```bash
+ python App.py
+ ```
+ 
+ The application will launch a Gradio interface where you can:
+ 1. Confirm your API keys are set as environment variables (see Setup above)
+ 2. Input a problem or question
+ 3. Choose the maximum number of rounds for solution refinement (2-6)
+ 4. Watch as multiple AI agents collaborate to solve the problem in real time
+ 
+ ## Process Flow
+ 
+ 1. Two solver agents generate initial solutions independently
+ 2. The judge agent evaluates whether the solutions agree
+ 3. If the solutions disagree, the solver agents refine their answers based on the judge's feedback
+ 4. The process repeats until agreement is reached or the maximum number of rounds is completed
+ 5. A final report is generated showing the problem-solving process
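+ 
+ Agreement is detected by a plain substring check on the judge's text (see `evaluate_solutions` in App.py), so a disagreement verdict looks like this (illustrative example):
+ 
+ ```
+ AGREEMENT: NO - The numerical answers differ between the two solutions.
+ ```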
+ 
+ ## Dependencies
+ 
+ - gradio: Web interface framework
+ - huggingface_hub: Access to Hugging Face models
+ - cohere: Access to Cohere models
+ - google-generativeai: Access to Google's Gemini models
+ 
+ ## Note
+ 
+ This application requires valid API keys for Hugging Face, Cohere, and Google Gemini. Make sure you have sufficient API credits for your usage.
requirements.txt ADDED
@@ -0,0 +1,4 @@
+ gradio
+ huggingface_hub
+ cohere
+ google-generativeai