Reja1 committed on
Commit 6bc6a63 · 1 Parent(s): 95c3f37

Refactor: Generalize benchmark for JEE and NEET exams


- Update parsing, LLM prompts, and evaluation for various question types (MCQ Single/Multiple, Integer).
- Implement specific scoring rules for NEET, JEE Main, and JEE Advanced (see the sketch below).
- Make benchmark runner and README more generic to support multiple exams.
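For orientation, a minimal sketch of what per-question marking under exam-specific rules can look like. The mark values and the JEE Advanced partial-marking formula here are illustrative assumptions, not the repository's canonical scheme; the implemented rules live in `src/evaluation.py`.

```python
# Illustrative sketch only: mark values and the partial-marking rule are assumptions.
from typing import List, Union

def score_question(exam_name: str, question_type: str,
                   predicted: Union[List[int], int, None],
                   truth: Union[List[int], int]) -> int:
    if predicted is None:                                   # skipped / unanswered
        return 0
    pred = predicted if isinstance(predicted, list) else [predicted]
    correct = truth if isinstance(truth, list) else [truth]
    if exam_name == "JEE_ADVANCED" and question_type == "MCQ_MULTIPLE_CORRECT":
        if any(p not in correct for p in pred):
            return -2                                       # any wrong option chosen
        return 4 if sorted(pred) == sorted(correct) else len(pred)  # full vs. partial credit
    # NEET / JEE Main MCQs: +4 correct, -1 incorrect; integer answers: +4 / 0 (assumed)
    if sorted(pred) == sorted(correct):
        return 4
    return 0 if question_type == "INTEGER" else -1
```

The real scorer also has to account for skipped answers and API/parse failures per exam, which is what the changes in `src/evaluation.py` generalize.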

Files changed (5)
  1. README.md +37 -29
  2. src/benchmark_runner.py +151 -241
  3. src/evaluation.py +225 -189
  4. src/llm_interface.py +124 -91
  5. src/utils.py +82 -66
README.md CHANGED
@@ -17,10 +17,11 @@ task_categories:
17
  - visual-question-answering
18
  - image-text-to-text
19
  - question-answering
 
20
  # task_ids: # More specific task IDs from https://hf.co/tasks
21
  # - visual-question-answering
22
  # Pretty name for the dataset.
23
- pretty_name: JEE/NEET LLM Benchmark
24
  # Dataset identifier from a recognized benchmark.
25
  # benchmark: # e.g., super_glue, anli
26
  # Date of the last update.
@@ -70,19 +71,20 @@ column_info:
70
  description: Subject (e.g., "Physics", "Chemistry", "Biology", "Mathematics").
71
  data_type: string
72
  question_type:
73
- description: Type of question (e.g., "MCQ", "Multiple Correct").
74
  data_type: string
75
  correct_answer:
76
- description: List containing the correct answer index/indices (e.g., [2], [1, 3]).
77
  data_type: list[int32] # or sequence of int32
78
 
79
  # More Information
80
  # ----------------
81
  # Add any other relevant information about the dataset.
82
  dataset_summary: |
83
- A benchmark dataset for evaluating Large Language Models (LLMs) on Joint Entrance Examination (JEE)
84
- and National Eligibility cum Entrance Test (NEET) questions from India. Questions are provided as
85
- images, and metadata includes exam details, subject, and correct answers.
 
86
  dataset_tags: # Tags to help users find your dataset
87
  - education
88
  - science
@@ -117,12 +119,16 @@ personal_sensitive_information: false # Does the dataset contain PII?
117
 
118
  ## Dataset Description
119
 
120
- This repository contains a benchmark dataset designed for evaluating the capabilities of Large Language Models (LLMs) on questions from the Joint Entrance Examination (JEE) and the National Eligibility cum Entrance Test (NEET) conducted in India. These are highly competitive entrance examinations for engineering and medical colleges, respectively.
 
 
121
 
122
- The questions are presented in image format (`.png`) as they appear in the original papers. The dataset includes metadata linking each image to its corresponding exam details, subject, question type, and correct answer(s).
123
 
124
- **Current Data:**
125
  * NEET 2024 (Code T3)
 
 
126
 
127
  ## How to Use
128
 
@@ -178,7 +184,7 @@ This repository contains scripts to run the benchmark evaluation directly:
178
  * **Important:** The `.gitignore` file is already configured to prevent committing the `.env` file. Never commit your API keys directly.
179
  4. **Configure Models:**
180
  * Edit the `configs/benchmark_config.yaml` file.
181
- * Modify the `openrouter_models` list to include the specific model identifiers (e.g., `"openai/gpt-4o"`, `"google/gemini-pro-vision"`) you want to evaluate. Ensure these models support vision input on OpenRouter.
182
  * You can also adjust other parameters like `max_tokens` and `request_timeout` if needed.
183
  5. **Run the benchmark:**
184
  * Execute the runner script from the root directory:
@@ -193,37 +199,40 @@ This repository contains scripts to run the benchmark evaluation directly:
193
  ```bash
194
  python src/benchmark_runner.py --config configs/benchmark_config.yaml --output_dir my_custom_results
195
  ```
196
- * To run the benchmark on a specific exam paper, use the `--exam_name` and `--exam_year` arguments. Both must be provided:
197
  ```bash
198
  # Example: Run only NEET 2024 questions
199
  python src/benchmark_runner.py --config configs/benchmark_config.yaml --exam_name NEET --exam_year 2024
200
 
201
- # Example: Run only NEET 2025 questions (assuming data exists)
202
- python src/benchmark_runner.py --config configs/benchmark_config.yaml --exam_name NEET --exam_year 2025
203
  ```
204
- Note: If using exam names with spaces, enclose them in quotes.
205
  6. **Check Results:**
206
  * Results for each model will be saved in subdirectories within the `results/` folder (or your custom output directory).
207
- * Each model's folder (e.g., `results/provider/modelname_YYYYMMDD_HHMMSS`) will contain:
208
- * `predictions.jsonl`: Detailed results for each question (prediction, ground truth, raw response).
209
- * `summary.json`: Overall accuracy and statistics for that model.
210
- * Sample benchmark results for some models can be found in the `results/` folder.
 
211
 
212
  ## Pros
213
 
214
  * **Multimodal Reasoning:** Uses images of questions directly, testing the multimodal reasoning capability of the model.
215
- * **Reattempt Mechanism:** Implements a reattempt mechanism to encourage the model to provide the final answer within `<answer>` tags.
 
 
216
  * **Reproducibility:** Easily reproducible with simple commands and an OpenRouter API key.
217
  * **Model Flexibility:** Allows testing of various models available through OpenRouter.
218
 
219
  ## Dataset Structure
220
 
221
- * **`data/metadata.jsonl`**: Contains metadata for each question image. Each line is a JSON object with fields like `image_path`, `question_id`, `exam_name`, `exam_year`, `exam_code`, `subject`, `question_type`, `correct_answer`.
222
- * **`images/`**: Contains subdirectories for each exam set (e.g., `images/NEET_2024_T3/`), holding the `.png` question images.
223
  * **`src/`**: Python source code for running the benchmark (data loading, LLM interaction, evaluation).
224
  * **`configs/`**: Configuration files for the benchmark.
225
  * **`results/`**: Directory where benchmark results (LLM outputs) will be stored.
226
- * **`jee_neet_benchmark_dataset.py`**: Hugging Face `datasets` loading script.
227
  * **`requirements.txt`**: Python dependencies.
228
  * **`README.md`**: This file.
229
 
@@ -233,17 +242,16 @@ The dataset contains the following fields (accessible via `datasets`):
233
 
234
  * `image`: The question image (`datasets.Image`).
235
  * `question_id`: Unique identifier for the question (string).
236
- * `exam_name`: Name of the exam (e.g., "NEET", "JEE Main") (string).
237
  * `exam_year`: Year of the exam (int).
238
- * `exam_code`: Specific paper code/session (e.g., "T3", "S1") (string).
239
- * `subject`: Subject (e.g., "Physics", "Chemistry", "Biology", "Mathematics") (string).
240
- * `question_type`: Type of question (e.g., "MCQ", "Multiple Correct") (string).
241
- * `correct_answer`: List containing the correct answer index/indices (e.g., `[2]`, `[1, 3]`) (list of int).
242
 
243
  ## Cons / Current Limitations
244
 
245
- * **Early Development:** The benchmark is still in its early stages of development.
246
- * **Limited Data:** Currently, only one exam question paper (NEET 2024 T3) is available. More exam papers are planned for future inclusion.
247
 
248
  ## Citation
249
 
 
17
  - visual-question-answering
18
  - image-text-to-text
19
  - question-answering
20
+ - multimodal-reasoning
21
  # task_ids: # More specific task IDs from https://hf.co/tasks
22
  # - visual-question-answering
23
  # Pretty name for the dataset.
24
+ pretty_name: Indian Competitive Exams (JEE/NEET) LLM Benchmark
25
  # Dataset identifier from a recognized benchmark.
26
  # benchmark: # e.g., super_glue, anli
27
  # Date of the last update.
 
71
  description: Subject (e.g., "Physics", "Chemistry", "Biology", "Mathematics").
72
  data_type: string
73
  question_type:
74
+ description: Type of question (e.g., "MCQ_SINGLE_CORRECT", "MCQ_MULTIPLE_CORRECT", "INTEGER").
75
  data_type: string
76
  correct_answer:
77
+ description: List containing the correct answer index/indices (e.g., [2], [1, 3]) or a single integer for INTEGER type.
78
  data_type: list[int32] # or sequence of int32
79
 
80
  # More Information
81
  # ----------------
82
  # Add any other relevant information about the dataset.
83
  dataset_summary: |
84
+ A benchmark dataset for evaluating Large Language Models (LLMs) on questions from major Indian competitive examinations:
85
+ Joint Entrance Examination (JEE Main & Advanced) for engineering and the National Eligibility cum Entrance Test (NEET) for medical fields.
86
+ Questions are provided as images, and metadata includes exam details (name, year, subject, question type) and correct answers.
87
+ The benchmark supports various question types including Single Correct MCQs, Multiple Correct MCQs (with partial marking for JEE Advanced), and Integer type questions.
88
  dataset_tags: # Tags to help users find your dataset
89
  - education
90
  - science
 
119
 
120
  ## Dataset Description
121
 
122
+ This repository contains a benchmark dataset designed for evaluating the capabilities of Large Language Models (LLMs) on questions from major Indian competitive examinations:
123
+ * **JEE (Main & Advanced):** Joint Entrance Examination for engineering.
124
+ * **NEET:** National Eligibility cum Entrance Test for medical fields.
125
 
126
+ The questions are presented in image format (`.png`) as they appear in the original papers. The dataset includes metadata linking each image to its corresponding exam details (name, year, subject, question type) and correct answer(s). The benchmark framework supports various question types, including Single Correct MCQs, Multiple Correct MCQs (with partial marking for JEE Advanced), and Integer type questions.
127
 
128
+ **Current Data (Examples):**
129
  * NEET 2024 (Code T3)
130
+ * NEET 2025 (Code 45)
131
+ * (Support for JEE Main & Advanced questions can be added by updating `data/metadata.jsonl` and the `images/` directory accordingly.)
132
 
133
  ## How to Use
134
 
 
184
  * **Important:** The `.gitignore` file is already configured to prevent committing the `.env` file. Never commit your API keys directly.
185
  4. **Configure Models:**
186
  * Edit the `configs/benchmark_config.yaml` file.
187
+ * Modify the `openrouter_models` list to include the specific model identifiers (e.g., `"openai/gpt-4o"`, `"google/gemini-2.5-pro-preview-03-25"`) you want to evaluate. Ensure these models support vision input on OpenRouter.
188
  * You can also adjust other parameters like `max_tokens` and `request_timeout` if needed.
189
  5. **Run the benchmark:**
190
  * Execute the runner script from the root directory:
 
199
  ```bash
200
  python src/benchmark_runner.py --config configs/benchmark_config.yaml --output_dir my_custom_results
201
  ```
202
+ * To run the benchmark on a specific exam paper, use the `--exam_name` and `--exam_year` arguments. Both must be provided. The `exam_name` should match the values in your `metadata.jsonl` (e.g., "NEET", "JEE_MAIN", "JEE_ADVANCED").
203
  ```bash
204
  # Example: Run only NEET 2024 questions
205
  python src/benchmark_runner.py --config configs/benchmark_config.yaml --exam_name NEET --exam_year 2024
206
 
207
+ # Example: Run only JEE_MAIN 2023 questions (assuming data exists)
208
+ python src/benchmark_runner.py --config configs/benchmark_config.yaml --exam_name JEE_MAIN --exam_year 2023
209
  ```
210
+ Note: If using exam names with spaces (though not recommended in metadata), enclose them in quotes.
211
  6. **Check Results:**
212
  * Results for each model will be saved in subdirectories within the `results/` folder (or your custom output directory).
213
+ * Each model's folder (e.g., `results/openai_gpt-4o_NEET_2024_YYYYMMDD_HHMMSS`) will contain:
214
+ * `predictions.jsonl`: Detailed results for each question (prediction, ground truth, raw response, evaluation status, marks awarded).
215
+ * `summary.json`: Overall scores and statistics for that model run.
216
+ * `summary.md`: A human-readable Markdown version of the summary.
217
+ * Sample benchmark results for some models can be found in the `results/` folder (these may be outdated).
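Because the final `predictions.jsonl` carries per-question `evaluation_status` and `marks_awarded`, a run can be re-tallied without re-querying any model. A small sketch, assuming the field names described above; the results path is a placeholder for an actual run directory:

```python
# Re-tally a finished run from its predictions file (the path below is a placeholder).
import json
from collections import Counter, defaultdict

scores_by_subject = defaultdict(int)
statuses = Counter()
with open("results/<model_run_dir>/predictions.jsonl") as f:
    for line in f:
        rec = json.loads(line)
        scores_by_subject[rec.get("subject", "Unknown")] += rec.get("marks_awarded", 0)
        statuses[rec.get("evaluation_status", "unknown")] += 1

print(dict(scores_by_subject))
print(statuses.most_common())
```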
218
 
219
  ## Pros
220
 
221
  * **Multimodal Reasoning:** Uses images of questions directly, testing the multimodal reasoning capability of the model.
222
+ * **Flexible Exam Support:** Designed to support multiple exams (NEET, JEE Main, JEE Advanced) and various question types (MCQ Single Correct, MCQ Multiple Correct, Integer).
223
+ * **Detailed Scoring:** Implements specific scoring rules for different exams and question types, including partial marking for JEE Advanced multiple correct questions.
224
+ * **Reattempt Mechanism:** Implements a reattempt mechanism to encourage the model to provide the final answer within `<answer>` tags, adapted for different question types.
225
  * **Reproducibility:** Easily reproducible with simple commands and an OpenRouter API key.
226
  * **Model Flexibility:** Allows testing of various models available through OpenRouter.
227
 
228
  ## Dataset Structure
229
 
230
+ * **`data/metadata.jsonl`**: Contains metadata for each question image. Each line is a JSON object with fields like `image_path`, `question_id`, `exam_name` (e.g., "NEET", "JEE_MAIN", "JEE_ADVANCED"), `exam_year`, `subject`, `question_type` (e.g., "MCQ_SINGLE_CORRECT", "MCQ_MULTIPLE_CORRECT", "INTEGER"), `correct_answer`.
231
+ * **`images/`**: Contains subdirectories for each exam set (e.g., `images/NEET_2024_T3/`, `images/JEE_MAIN_2023_Example/`), holding the `.png` question images.
232
  * **`src/`**: Python source code for running the benchmark (data loading, LLM interaction, evaluation).
233
  * **`configs/`**: Configuration files for the benchmark.
234
  * **`results/`**: Directory where benchmark results (LLM outputs) will be stored.
235
+ * **`jee_neet_benchmark_dataset.py`**: Hugging Face `datasets` loading script (defines how to load `metadata.jsonl` and images).
236
  * **`requirements.txt`**: Python dependencies.
237
  * **`README.md`**: This file.
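For reference, each line of `data/metadata.jsonl` is a standalone JSON object with the fields named above. A hypothetical record (the image filename and answer values are illustrative):

```python
# One hypothetical metadata.jsonl record; field names follow the list above,
# the image filename and answer are made up for illustration.
import json

record = {
    "image_path": "images/NEET_2024_T3/NEET_2024_T3_001.png",  # hypothetical filename
    "question_id": "NEET_2024_T3_001",
    "exam_name": "NEET",
    "exam_year": 2024,
    "subject": "Physics",
    "question_type": "MCQ_SINGLE_CORRECT",
    "correct_answer": [2],
}
print(json.dumps(record))
```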
238
 
 
242
 
243
  * `image`: The question image (`datasets.Image`).
244
  * `question_id`: Unique identifier for the question (string).
245
+ * `exam_name`: Name of the exam (e.g., "NEET", "JEE_MAIN", "JEE_ADVANCED") (string).
246
  * `exam_year`: Year of the exam (int).
247
+ * `subject`: Subject (e.g., "Physics", "Chemistry", "Botany", "Zoology", "Mathematics") (string).
248
+ * `question_type`: Type of question (e.g., "MCQ_SINGLE_CORRECT", "MCQ_MULTIPLE_CORRECT", "INTEGER") (string).
249
+ * `correct_answer`: List containing the correct answer index/indices (e.g., `[2]`, `[1, 3]`) or a single integer for INTEGER type questions (list of int, or int).
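A minimal loading sketch with `datasets`; the repository identifier and split name below are placeholders rather than the actual values:

```python
# Placeholder repo id and split: substitute the real Hub id or a local clone path.
from datasets import load_dataset

ds = load_dataset("<user>/jee_neet_benchmark", split="test")
example = ds[0]
print(example["question_id"], example["exam_name"], example["exam_year"],
      example["subject"], example["question_type"], example["correct_answer"])
print(example["image"].size)  # the question image is returned as a PIL image
```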
 
250
 
251
  ## Cons / Current Limitations
252
 
253
+ * **Data Expansion:** While the framework supports various exams and question types, the current `metadata.jsonl` primarily contains NEET data. More diverse data (especially for JEE Main and Advanced with varied question types) needs to be added to make the benchmark more comprehensive.
254
+ * **Max Score in Summary:** The overall maximum score in the generated Markdown summary is currently marked as "N/A (variable per question)" due to the complexity of calculating it accurately across mixed question types in a single run. Each question's max score depends on its type and exam.
255
 
256
  ## Citation
257
 
src/benchmark_runner.py CHANGED
@@ -12,8 +12,8 @@ from PIL import Image as PILImage # Import PIL for type hinting
12
  # Import local modules
13
  from utils import load_api_key
14
  from llm_interface import get_openrouter_prediction
15
- # Import both evaluation functions
16
- from evaluation import calculate_accuracy, calculate_neet_scores
17
 
18
  # Configure logging
19
  logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
@@ -76,63 +76,68 @@ def generate_markdown_summary(summary: Dict[str, Any], filepath: str):
76
  md_content.append("\n---\n")
77
 
78
  # Check if NEET results are present (or any dataset with overall_score and section_breakdown)
79
- if "overall_score" in summary and "section_breakdown" in summary:
80
  total_processed = summary.get("total_questions_processed", 0)
81
- # Dynamically calculate max_score based on questions in the dataset used for this run
82
- max_score = total_questions_in_dataset * 4 if total_questions_in_dataset > 0 else "N/A"
83
  overall_score = summary.get('overall_score', 'N/A')
84
- correct_count = summary.get('overall_correct', 'N/A')
85
- incorrect_count = summary.get('overall_incorrect', 'N/A')
 
86
  skipped_count = summary.get('overall_skipped', 'N/A')
87
  failures_count = summary.get('overall_api_parse_failures', 'N/A')
88
  unmapped_count = summary.get('unmapped_section_questions', 'N/A')
89
 
90
- md_content.append("## NEET Scoring Results")
91
- md_content.append(f"**Overall Score:** **{overall_score} / {max_score}**")
92
- md_content.append(f"- **Correct Answers (+4):** {correct_count}")
93
- md_content.append(f"- **Incorrect Answers (-1):** {incorrect_count}")
94
- md_content.append(f"- **Skipped Questions (0):** {skipped_count}")
95
- md_content.append(f"- **API/Parse Failures (-1):** {failures_count}")
 
 
96
  md_content.append(f"- **Total Questions Processed:** {total_processed}")
97
  if unmapped_count > 0:
98
- md_content.append(f"- **Unmapped Section Questions:** {unmapped_count} *(These were not included in section breakdown)*")
99
 
100
  md_content.append("\n### Section Breakdown")
101
- md_content.append("| Section | Score | Correct | Incorrect | Skipped |")
102
- md_content.append("|---------------|-------|---------|-----------|---------|")
103
  section_breakdown = summary.get("section_breakdown", {})
104
 
105
- # Dynamically get section names (subjects) from the breakdown, sorted for consistent order
106
  sorted_section_names = sorted(section_breakdown.keys())
107
- if not sorted_section_names and section_breakdown: # If keys exist but sorting failed (e.g. mixed types)
108
  logging.warning("Could not sort section names for Markdown summary; using unsorted.")
109
  sorted_section_names = list(section_breakdown.keys())
110
 
111
  for section_name in sorted_section_names:
112
- stats = section_breakdown.get(section_name, {}) # Should always find it
113
  score = stats.get('score', 'N/A')
114
- correct = stats.get('correct', 'N/A')
115
- incorrect = stats.get('incorrect', 'N/A')
116
- skipped = stats.get('skipped', 'N/A')
117
- display_section_name = section_name.replace('_', ' ') # Basic formatting
118
- md_content.append(f"| {display_section_name:<13} | {score:<5} | {correct:<7} | {incorrect:<9} | {skipped:<7} |")
 
 
119
  if not sorted_section_names:
120
- md_content.append("| No section data available | N/A | N/A | N/A | N/A |")
121
-
122
-
123
- # Fallback or alternative for simple accuracy
124
- elif "accuracy_on_parsed" in summary:
125
- md_content.append("## Simple Accuracy Results")
126
  md_content.append(f"- **Accuracy (on successfully parsed non-skipped):** {summary.get('accuracy_on_parsed', 'N/A'):.4f}")
127
  md_content.append(f"- **Total Processed Attempts:** {summary.get('total_processed_attempts', 'N/A')}")
128
- md_content.append(f"- **Successful API Calls:** {summary.get('successful_api_calls', 'N/A')}")
129
- md_content.append(f"- **Successful Parses:** {summary.get('successful_parses', 'N/A')}")
130
  else:
131
  md_content.append("## Summary")
132
- md_content.append("*(No specific NEET or Accuracy metrics found in summary)*")
133
- # Optionally print raw summary keys/values
134
- # for key, value in summary.items():
135
- # md_content.append(f"- **{key}:** {value}")
136
 
137
 
138
  with open(filepath, 'w') as f:
@@ -230,308 +235,213 @@ def run_benchmark(config: dict, api_key: str, models_override: list[str] | None
230
  # --- Initial Pass: Iterate through questions ---
231
  for example in tqdm(dataset, desc=f"Processing {model_id} (Initial Pass)", total=total_questions):
232
  question_id = example["question_id"]
233
- subject = example["subject"] # Get subject for evaluation
 
 
234
  image: PILImage.Image = example["image"]
235
  truth = example["correct_answer"]
236
 
237
  result_data = {
238
  "question_id": question_id,
239
  "subject": subject,
 
 
240
  "ground_truth": truth,
241
  "predicted_answer": None,
242
  "raw_response": None,
243
  "parse_successful": False,
244
- "api_call_successful": False, # Assume failure initially
245
  "error": None,
246
- "attempt": 1 # Mark as first attempt
247
  }
248
 
249
  try:
250
  # --- Initial API Call ---
 
251
  parsed_answer, raw_response = get_openrouter_prediction(
252
  model_identifier=model_id,
253
  api_key=api_key,
254
- image=image, # Pass image for initial call
255
- exam_name=exam_name_filter, # Pass exam_name
256
- exam_year=exam_year_filter, # Pass exam_year
 
257
  max_tokens=config.get("max_tokens", 100),
258
- request_timeout=config.get("request_timeout", 60)
259
- )
260
-
261
- # Initial attempt results
262
- api_success_attempt1 = True
263
- parse_success_attempt1 = parsed_answer is not None # Includes SKIP
264
  raw_response_attempt1 = raw_response
265
 
266
  # --- Re-prompt Logic ---
267
  if api_success_attempt1 and not parse_success_attempt1 and raw_response_attempt1 is not None:
268
  logging.warning(f"Question {question_id}: Initial parse failed. Attempting re-prompt.")
269
  try:
270
- # Second API call (re-prompt)
271
  parsed_answer_rp, raw_response_rp = get_openrouter_prediction(
272
  model_identifier=model_id,
273
  api_key=api_key,
274
- previous_raw_response=raw_response_attempt1, # Pass previous response
 
275
  max_tokens=config.get("max_tokens", 100),
276
  request_timeout=config.get("request_timeout", 60)
277
  )
278
- # Update results with re-prompt outcome
279
  result_data.update({
280
  "predicted_answer": parsed_answer_rp,
281
- "raw_response": raw_response_rp, # Store the re-prompt response
282
- "parse_successful": parsed_answer_rp is not None, # Includes SKIP
283
- "api_call_successful": True, # API call was successful
284
- "attempt": 2 # Mark as second attempt due to re-prompt
285
  })
286
  logging.info(f"Question {question_id}: Re-prompt {'succeeded' if result_data['parse_successful'] else 'failed to parse'}.")
287
-
288
  except Exception as e_rp:
289
- # Handle failure during the re-prompt API call itself
290
  logging.error(f"Re-prompt API call failed for question {question_id}: {e_rp}")
291
- # Keep initial attempt data, but mark as failed parse and add re-prompt error
292
  result_data.update({
293
- "predicted_answer": None, # Failed parse overall
294
- "raw_response": raw_response_attempt1, # Keep original raw response
295
  "parse_successful": False,
296
- "api_call_successful": True, # Initial API call succeeded
297
  "error": f"Initial parse failed. Re-prompt API call failed: {str(e_rp)}",
298
- "attempt": 1 # Revert attempt count as re-prompt failed
299
  })
300
  else:
301
- # Initial API call was successful and parsed correctly, or API failed initially,
302
- # or re-prompt was skipped due to empty initial response.
303
- current_error = result_data.get("error") # Preserve existing error if any from re-prompt failure
304
  api_actually_successful = api_success_attempt1
305
-
306
  if api_success_attempt1 and raw_response_attempt1 is None and parsed_answer is None:
307
- # This case specifically handles when initial call was "successful" (no exception)
308
- # but returned no content, and thus re-prompt was skipped.
309
  current_error = "Initial API call returned empty content. Re-prompt skipped."
310
- # Consider if api_call_successful should be False for scoring if empty means failure
311
- # For now, we keep it as True if no exception occurred during the API call itself.
312
-
313
  result_data.update({
314
  "predicted_answer": parsed_answer,
315
  "raw_response": raw_response_attempt1,
316
- "parse_successful": parse_success_attempt1, # This would be False if parsed_answer is None
317
  "api_call_successful": api_actually_successful,
318
  "error": current_error,
319
- "attempt": 1 # Remains attempt 1 if re-prompt didn't occur or failed
320
  })
321
-
322
- # Append final result (from initial success or re-prompt attempt)
323
  model_results.append(result_data)
324
  append_prediction(result_data, predictions_path)
325
 
326
- # Provide live feedback based on final result_data
327
  final_parsed_answer = result_data["predicted_answer"]
328
  if result_data["parse_successful"]:
329
  if final_parsed_answer == "SKIP":
330
  logging.info(f"Question {question_id}: Skipped (Attempt {result_data['attempt']})")
331
- else:
332
- is_correct = isinstance(final_parsed_answer, list) and sorted(final_parsed_answer) == sorted(truth)
333
- logging.info(f"Question {question_id}: {'Correct' if is_correct else 'Incorrect'} (Attempt {result_data['attempt']})")
334
- else: # Parse failed even after potential re-prompt
335
  logging.info(f"Question {question_id}: Failed to parse answer (Attempt {result_data['attempt']})")
336
 
337
-
338
  except Exception as e:
339
- # Catch potential failures from the *initial* get_openrouter_prediction call (after its internal retries)
340
  logging.error(f"Initial API call failed for question {question_id} (Attempt 1): {e}")
341
  result_data["error"] = str(e)
342
- result_data["api_call_successful"] = False # Explicitly mark API failure
343
- # Store data needed for the separate API retry pass
344
- failed_questions_data.append(example) # Store original example data
345
- # Do not append to model_results or predictions file yet for initial API failures
346
-
347
 
348
- # --- Retry Pass: Iterate through questions where the *initial API call* failed ---
349
  if failed_questions_data:
350
  logging.info(f"--- Retrying {len(failed_questions_data)} questions with initial API failures for model: {model_id} ---")
351
- for example in tqdm(failed_questions_data, desc=f"Processing {model_id} (API Retry Pass)", total=len(failed_questions_data)):
352
- question_id = example["question_id"]
353
- subject = example["subject"]
354
- image: PILImage.Image = example["image"]
355
- truth = example["correct_answer"]
356
-
357
- # Initialize result data for the API retry attempt
358
- result_data = {
359
- "question_id": question_id,
360
- "subject": subject, # Get subject for evaluation
361
- "ground_truth": truth,
 
 
 
362
  "predicted_answer": None,
363
  "raw_response": None,
364
  "parse_successful": False,
365
- "api_call_successful": False, # Assume failure initially
366
- "error": None,
367
- "attempt": 2 # Mark as second attempt (due to API retry)
368
  }
369
 
370
  try:
371
- # Retry getting prediction (initial call logic again)
372
- parsed_answer, raw_response = get_openrouter_prediction(
373
  model_identifier=model_id,
374
  api_key=api_key,
375
- image=image, # Pass image for this retry
376
- exam_name=exam_name_filter, # Pass exam_name
377
- exam_year=exam_year_filter, # Pass exam_year
 
378
  max_tokens=config.get("max_tokens", 100),
379
  request_timeout=config.get("request_timeout", 60)
380
  )
381
- # API Retry succeeded, now check parsing
382
  api_success_attempt2 = True
383
- parse_success_attempt2 = parsed_answer is not None # Includes SKIP
384
- raw_response_attempt2 = raw_response
385
 
386
- # --- Re-prompt Logic (within API Retry Pass) ---
387
  if api_success_attempt2 and not parse_success_attempt2 and raw_response_attempt2 is not None:
388
- logging.warning(f"Question {question_id}: API Retry succeeded, but parse failed. Attempting re-prompt.")
389
  try:
390
- # Third API call (re-prompt after API retry)
391
  parsed_answer_rp2, raw_response_rp2 = get_openrouter_prediction(
392
  model_identifier=model_id,
393
  api_key=api_key,
394
- previous_raw_response=raw_response_attempt2, # Pass API retry response
 
395
  max_tokens=config.get("max_tokens", 100),
396
  request_timeout=config.get("request_timeout", 60)
397
  )
398
- # Update results with re-prompt outcome
399
- result_data.update({
400
- "predicted_answer": parsed_answer_rp2,
401
- "raw_response": raw_response_rp2,
402
- "parse_successful": parsed_answer_rp2 is not None,
403
- "api_call_successful": True,
404
- "attempt": 3 # Mark as third attempt (API retry + re-prompt)
405
  })
406
- logging.info(f"Question {question_id}: API Retry + Re-prompt {'succeeded' if result_data['parse_successful'] else 'failed to parse'}.")
407
-
408
  except Exception as e_rp2:
409
- # Handle failure during the re-prompt API call itself
410
- logging.error(f"Re-prompt API call failed for question {question_id} after API retry: {e_rp2}")
411
- result_data.update({
412
- "predicted_answer": None, # Failed parse overall
413
- "raw_response": raw_response_attempt2, # Keep API retry raw response
414
- "parse_successful": False,
415
- "api_call_successful": True, # API retry call succeeded
416
- "error": f"API retry succeeded, but parse failed. Re-prompt failed: {str(e_rp2)}",
417
- "attempt": 2 # Revert attempt count
418
  })
419
  else:
420
- # API retry succeeded and parsed correctly, or re-prompt was skipped.
421
- current_error_retry = result_data.get("error") # Preserve error from re-prompt failure
422
- api_actually_successful_retry = api_success_attempt2
423
-
424
- if api_success_attempt2 and raw_response_attempt2 is None and parsed_answer is None:
425
  current_error_retry = "API retry call returned empty content. Re-prompt skipped."
426
-
427
- result_data.update({
428
- "predicted_answer": parsed_answer,
429
- "raw_response": raw_response_attempt2,
430
- "parse_successful": parse_success_attempt2, # False if parsed_answer is None
431
- "api_call_successful": api_actually_successful_retry,
432
- "error": current_error_retry,
433
- "attempt": 2 # Remains attempt 2 if re-prompt didn't occur or failed
434
  })
435
-
436
- except Exception as e:
437
- # Final API failure after the API retry pass
438
- logging.error(f"API call failed permanently for question {question_id} (Attempt 2 API Retry): {e}")
439
- result_data["error"] = str(e)
440
- result_data["api_call_successful"] = False # Remains False
441
- result_data["attempt"] = 2 # Still attempt 2
442
-
443
- # Append the final result (success or failure) from the API retry pass (including potential re-prompt)
444
- model_results.append(result_data)
445
- append_prediction(result_data, predictions_path)
446
-
447
 
448
  # --- Final Evaluation for the current model ---
449
  logging.info(f"--- Calculating final results for model: {model_id} ---")
450
-
451
- # Use the new NEET scoring function
452
- # Check if the dataset is NEET before applying NEET scoring
453
- # We infer this based on the presence of subjects like Botany/Zoology or question ID format
454
- is_neet_dataset = any(res.get("subject") in ["Botany", "Zoology"] for res in model_results) or \
455
- (model_results and model_results[0].get("question_id", "").startswith("NEET"))
456
-
457
- if is_neet_dataset:
458
- logging.info("Detected NEET dataset. Applying NEET scoring.")
459
- evaluation_summary = calculate_neet_scores(model_results) # model_results modified in-place
460
-
461
- # Determine exam name and year for summary
462
- summary_exam_name = exam_name_filter if exam_name_filter else "All_Exams"
463
- summary_exam_year = exam_year_filter if exam_year_filter else "All_Years"
464
-
465
- summary = {
466
- "model_name": model_id,
467
- "exam_name": summary_exam_name,
468
- "exam_year": summary_exam_year,
469
- "timestamp": timestamp,
470
- "total_questions_in_dataset": total_questions, # This is total in dataset *before* model processing
471
- **evaluation_summary # Merge NEET score details (includes total_questions_processed)
472
- }
473
- logging.info(f"NEET Score: {summary.get('overall_score')}")
474
- logging.info(f"Correct: {summary.get('overall_correct')}, Incorrect: {summary.get('overall_incorrect')}, Skipped: {summary.get('overall_skipped')}, API/Parse Failures: {summary.get('overall_api_parse_failures')}")
475
-
476
- else:
477
- # Fallback to simple accuracy for non-NEET datasets (like JEE)
478
- logging.info("Non-NEET dataset detected (or could not determine). Applying simple accuracy scoring.")
479
- # Extract predictions and truths for accuracy calculation
480
- predictions_for_acc = [res.get("predicted_answer") for res in model_results if isinstance(res.get("predicted_answer"), list)] # Only use list predictions
481
- truths_for_acc = [res.get("ground_truth") for res in model_results if isinstance(res.get("predicted_answer"), list)]
482
- # Handle cases where no valid list predictions exist
483
- accuracy = 0.0
484
- if predictions_for_acc:
485
- try:
486
- # Note: calculate_accuracy expects Optional[List[int]], but we filtered Nones/SKIPs
487
- # We need to align the lists properly if we want to use the original function
488
- # For simplicity here, let's recalculate based on the final model_results
489
- correct_count = 0
490
- valid_attempts = 0
491
- api_parse_failures = 0
492
- for res in model_results:
493
- pred = res.get("predicted_answer")
494
- truth = res.get("ground_truth")
495
- api_success = res.get("api_call_successful")
496
- parse_success = res.get("parse_successful")
497
-
498
- if not api_success or not parse_success:
499
- api_parse_failures += 1
500
- continue # Count as failure, not incorrect for accuracy
501
-
502
- valid_attempts += 1
503
- if isinstance(pred, list) and sorted(pred) == sorted(truth):
504
- correct_count += 1
505
-
506
- accuracy = correct_count / valid_attempts if valid_attempts > 0 else 0.0
507
-
508
- except ValueError as e:
509
- logging.error(f"Error calculating accuracy for model {model_id}: {e}")
510
- accuracy = 0.0
511
- else:
512
- logging.warning(f"No valid list predictions were generated for model {model_id} for accuracy calculation.")
513
-
514
-
515
- # Determine exam name and year for summary
516
- summary_exam_name = exam_name_filter if exam_name_filter else "All_Exams"
517
- summary_exam_year = exam_year_filter if exam_year_filter else "All_Years"
518
-
519
- summary = {
520
- "model_name": model_id,
521
- "exam_name": summary_exam_name,
522
- "exam_year": summary_exam_year,
523
- "timestamp": timestamp,
524
- "total_questions_in_dataset": total_questions, # This is total in dataset *before* model processing
525
- "total_processed_attempts": len(model_results), # Includes retries
526
- "successful_api_calls": sum(1 for res in model_results if res.get("api_call_successful")),
527
- "successful_parses": sum(1 for res in model_results if res.get("parse_successful")),
528
- "accuracy_on_parsed": accuracy # Accuracy based only on successfully parsed non-skipped answers
529
- # Note: total_questions_processed is part of evaluation_summary for NEET, handle for non-NEET if needed
530
- }
531
- logging.info(f"Accuracy (on successfully parsed non-skipped): {accuracy:.4f}")
532
-
533
  logging.info(f"--- Results Summary for model: {model_id} ---")
534
- logging.info(json.dumps(summary, indent=2))
535
  logging.info("-------------------------------------")
536
 
537
  # --- Overwrite predictions file with final evaluated results ---
 
12
  # Import local modules
13
  from utils import load_api_key
14
  from llm_interface import get_openrouter_prediction
15
+ # Import evaluation functions
16
+ from evaluation import calculate_accuracy, calculate_exam_scores
17
 
18
  # Configure logging
19
  logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
 
76
  md_content.append("\n---\n")
77
 
78
  # Check if NEET results are present (or any dataset with overall_score and section_breakdown)
79
+ if "overall_score" in summary and "section_breakdown" in summary: # Generic check for score-based summary
80
  total_processed = summary.get("total_questions_processed", 0)
81
+
82
+ # Max score calculation is complex due to varied scoring per question.
83
+ # For now, we'll omit max_score from the overall display or acknowledge its variability.
84
+ # A more accurate max_score would require iterating through the dataset items used in the run
85
+ # and summing their individual max possible scores based on exam_name and question_type.
86
+ # This is out of scope for the current summary generation simplicity.
87
+ max_score_display = "N/A (variable per question)" # Placeholder
88
+
89
  overall_score = summary.get('overall_score', 'N/A')
90
+ correct_full_count = summary.get('overall_correct_full', 'N/A')
91
+ partial_correct_count = summary.get('overall_partial_correct', 'N/A')
92
+ incorrect_choice_count = summary.get('overall_incorrect_choice', 'N/A')
93
  skipped_count = summary.get('overall_skipped', 'N/A')
94
  failures_count = summary.get('overall_api_parse_failures', 'N/A')
95
  unmapped_count = summary.get('unmapped_section_questions', 'N/A')
96
 
97
+ md_content.append("## Exam Scoring Results") # Changed from NEET
98
+ md_content.append(f"**Overall Score:** **{overall_score}** (Max score varies based on question types)")
99
+ md_content.append(f"- **Fully Correct Answers:** {correct_full_count}")
100
+ if partial_correct_count != 'N/A' and partial_correct_count > 0: # Only show if applicable
101
+ md_content.append(f"- **Partially Correct Answers:** {partial_correct_count}")
102
+ md_content.append(f"- **Incorrectly Answered (Choice Made):** {incorrect_choice_count}")
103
+ md_content.append(f"- **Skipped Questions:** {skipped_count}")
104
+ md_content.append(f"- **API/Parse Failures:** {failures_count}")
105
  md_content.append(f"- **Total Questions Processed:** {total_processed}")
106
  if unmapped_count > 0:
107
+ md_content.append(f"- **Unmapped Section Questions:** {unmapped_count} *(Not included in section breakdown)*")
108
 
109
  md_content.append("\n### Section Breakdown")
110
+ md_content.append("| Section | Score | Fully Correct | Partially Correct | Incorrect Choice | Skipped | API/Parse Failures |")
111
+ md_content.append("|---------------|-------|---------------|-------------------|------------------|---------|--------------------|")
112
  section_breakdown = summary.get("section_breakdown", {})
113
 
 
114
  sorted_section_names = sorted(section_breakdown.keys())
115
+ if not sorted_section_names and section_breakdown:
116
  logging.warning("Could not sort section names for Markdown summary; using unsorted.")
117
  sorted_section_names = list(section_breakdown.keys())
118
 
119
  for section_name in sorted_section_names:
120
+ stats = section_breakdown.get(section_name, {})
121
  score = stats.get('score', 'N/A')
122
+ s_correct = stats.get('correct', 'N/A') # This is full correct from new structure
123
+ s_partial = stats.get('partial_correct', 'N/A')
124
+ s_incorrect = stats.get('incorrect', 'N/A') # This is incorrect choice from new structure
125
+ s_skipped = stats.get('skipped', 'N/A')
126
+ s_failures = stats.get('api_parse_failures', 'N/A')
127
+ display_section_name = section_name.replace('_', ' ')
128
+ md_content.append(f"| {display_section_name:<13} | {score:<5} | {s_correct:<13} | {s_partial:<17} | {s_incorrect:<16} | {s_skipped:<7} | {s_failures:<18} |")
129
  if not sorted_section_names:
130
+ md_content.append("| No section data available | N/A | N/A | N/A | N/A | N/A | N/A |")
131
+
132
+ # Fallback for simple accuracy (if exam scoring wasn't applicable or failed)
133
+ elif "accuracy_on_parsed" in summary: # This branch might be less used if all datasets now have exam_name/type
134
+ md_content.append("## Simple Accuracy Results (Fallback)")
 
135
  md_content.append(f"- **Accuracy (on successfully parsed non-skipped):** {summary.get('accuracy_on_parsed', 'N/A'):.4f}")
136
  md_content.append(f"- **Total Processed Attempts:** {summary.get('total_processed_attempts', 'N/A')}")
137
+ # Add other relevant simple stats if available
 
138
  else:
139
  md_content.append("## Summary")
140
+ md_content.append("*(No specific Exam Scoring or Accuracy metrics found in summary)*")
 
 
 
141
 
142
 
143
  with open(filepath, 'w') as f:
 
235
  # --- Initial Pass: Iterate through questions ---
236
  for example in tqdm(dataset, desc=f"Processing {model_id} (Initial Pass)", total=total_questions):
237
  question_id = example["question_id"]
238
+ subject = example["subject"]
239
+ exam_name_from_data = example.get("exam_name", "UNKNOWN_EXAM") # Get exam_name from data
240
+ question_type_from_data = example.get("question_type", "MCQ_SINGLE_CORRECT") # Get question_type
241
  image: PILImage.Image = example["image"]
242
  truth = example["correct_answer"]
243
 
244
  result_data = {
245
  "question_id": question_id,
246
  "subject": subject,
247
+ "exam_name": exam_name_from_data, # Store for evaluation
248
+ "question_type": question_type_from_data, # Store for evaluation
249
  "ground_truth": truth,
250
  "predicted_answer": None,
251
  "raw_response": None,
252
  "parse_successful": False,
253
+ "api_call_successful": False,
254
  "error": None,
255
+ "attempt": 1
256
  }
257
 
258
  try:
259
  # --- Initial API Call ---
260
+ # Pass exam_name_from_data and question_type_from_data to get_openrouter_prediction
261
  parsed_answer, raw_response = get_openrouter_prediction(
262
  model_identifier=model_id,
263
  api_key=api_key,
264
+ image=image,
265
+ exam_name=exam_name_from_data, # Use exam_name from current data item
266
+ exam_year=str(example.get("exam_year", "UNKNOWN_YEAR")), # Use exam_year from data
267
+ question_type=question_type_from_data, # Pass question_type
268
  max_tokens=config.get("max_tokens", 100),
269
+ request_timeout=config.get("request_timeout", 60)
270
+ )
271
+
272
+ api_success_attempt1 = True # If no exception, API call itself was successful
273
+ parse_success_attempt1 = parsed_answer is not None
 
274
  raw_response_attempt1 = raw_response
275
 
276
  # --- Re-prompt Logic ---
277
  if api_success_attempt1 and not parse_success_attempt1 and raw_response_attempt1 is not None:
278
  logging.warning(f"Question {question_id}: Initial parse failed. Attempting re-prompt.")
279
  try:
 
280
  parsed_answer_rp, raw_response_rp = get_openrouter_prediction(
281
  model_identifier=model_id,
282
  api_key=api_key,
283
+ previous_raw_response=raw_response_attempt1,
284
+ question_type=question_type_from_data, # Pass question_type for re-prompt
285
  max_tokens=config.get("max_tokens", 100),
286
  request_timeout=config.get("request_timeout", 60)
287
  )
 
288
  result_data.update({
289
  "predicted_answer": parsed_answer_rp,
290
+ "raw_response": raw_response_rp,
291
+ "parse_successful": parsed_answer_rp is not None,
292
+ "api_call_successful": True,
293
+ "attempt": 2
294
  })
295
  logging.info(f"Question {question_id}: Re-prompt {'succeeded' if result_data['parse_successful'] else 'failed to parse'}.")
 
296
  except Exception as e_rp:
 
297
  logging.error(f"Re-prompt API call failed for question {question_id}: {e_rp}")
 
298
  result_data.update({
299
+ "predicted_answer": None,
300
+ "raw_response": raw_response_attempt1,
301
  "parse_successful": False,
302
+ "api_call_successful": True,
303
  "error": f"Initial parse failed. Re-prompt API call failed: {str(e_rp)}",
304
+ "attempt": 1
305
  })
306
  else:
307
+ current_error = result_data.get("error")
 
 
308
  api_actually_successful = api_success_attempt1
 
309
  if api_success_attempt1 and raw_response_attempt1 is None and parsed_answer is None:
 
 
310
  current_error = "Initial API call returned empty content. Re-prompt skipped."
311
+
 
 
312
  result_data.update({
313
  "predicted_answer": parsed_answer,
314
  "raw_response": raw_response_attempt1,
315
+ "parse_successful": parse_success_attempt1,
316
  "api_call_successful": api_actually_successful,
317
  "error": current_error,
318
+ "attempt": 1
319
  })
320
+
 
321
  model_results.append(result_data)
322
  append_prediction(result_data, predictions_path)
323
 
 
324
  final_parsed_answer = result_data["predicted_answer"]
325
  if result_data["parse_successful"]:
326
  if final_parsed_answer == "SKIP":
327
  logging.info(f"Question {question_id}: Skipped (Attempt {result_data['attempt']})")
328
+ else: # For logging, simple truth comparison
329
+ is_correct_log = isinstance(final_parsed_answer, list) and sorted(final_parsed_answer) == sorted(truth if isinstance(truth, list) else [truth])
330
+ logging.info(f"Question {question_id}: {'Correct (log)' if is_correct_log else 'Incorrect (log)'} (Attempt {result_data['attempt']})")
331
+ else:
332
  logging.info(f"Question {question_id}: Failed to parse answer (Attempt {result_data['attempt']})")
333
 
 
334
  except Exception as e:
 
335
  logging.error(f"Initial API call failed for question {question_id} (Attempt 1): {e}")
336
  result_data["error"] = str(e)
337
+ result_data["api_call_successful"] = False
338
+ failed_questions_data.append(example) # Store original example for retry pass
 
 
 
339
 
340
+ # --- Retry Pass for questions with initial API failures ---
341
  if failed_questions_data:
342
  logging.info(f"--- Retrying {len(failed_questions_data)} questions with initial API failures for model: {model_id} ---")
343
+ for example_retry in tqdm(failed_questions_data, desc=f"Processing {model_id} (API Retry Pass)", total=len(failed_questions_data)):
344
+ question_id_retry = example_retry["question_id"]
345
+ subject_retry = example_retry["subject"]
346
+ exam_name_retry = example_retry.get("exam_name", "UNKNOWN_EXAM")
347
+ question_type_retry = example_retry.get("question_type", "MCQ_SINGLE_CORRECT")
348
+ image_retry: PILImage.Image = example_retry["image"]
349
+ truth_retry = example_retry["correct_answer"]
350
+
351
+ result_data_retry = {
352
+ "question_id": question_id_retry,
353
+ "subject": subject_retry,
354
+ "exam_name": exam_name_retry,
355
+ "question_type": question_type_retry,
356
+ "ground_truth": truth_retry,
357
  "predicted_answer": None,
358
  "raw_response": None,
359
  "parse_successful": False,
360
+ "api_call_successful": False,
361
+ "error": "Initial API call failed.", # Pre-fill error
362
+ "attempt": 2
363
  }
364
 
365
  try:
366
+ parsed_answer_retry, raw_response_retry = get_openrouter_prediction(
 
367
  model_identifier=model_id,
368
  api_key=api_key,
369
+ image=image_retry,
370
+ exam_name=exam_name_retry,
371
+ exam_year=str(example_retry.get("exam_year", "UNKNOWN_YEAR")),
372
+ question_type=question_type_retry,
373
  max_tokens=config.get("max_tokens", 100),
374
  request_timeout=config.get("request_timeout", 60)
375
  )
 
376
  api_success_attempt2 = True
377
+ parse_success_attempt2 = parsed_answer_retry is not None
378
+ raw_response_attempt2 = raw_response_retry
379
 
 
380
  if api_success_attempt2 and not parse_success_attempt2 and raw_response_attempt2 is not None:
381
+ logging.warning(f"Question {question_id_retry}: API Retry succeeded, but parse failed. Attempting re-prompt.")
382
  try:
 
383
  parsed_answer_rp2, raw_response_rp2 = get_openrouter_prediction(
384
  model_identifier=model_id,
385
  api_key=api_key,
386
+ previous_raw_response=raw_response_attempt2,
387
+ question_type=question_type_retry,
388
  max_tokens=config.get("max_tokens", 100),
389
  request_timeout=config.get("request_timeout", 60)
390
  )
391
+ result_data_retry.update({
392
+ "predicted_answer": parsed_answer_rp2, "raw_response": raw_response_rp2,
393
+ "parse_successful": parsed_answer_rp2 is not None, "api_call_successful": True,
394
+ "error": None if parsed_answer_rp2 is not None else "Re-prompt after API retry failed to parse.",
395
+ "attempt": 3
 
 
396
  })
397
+ logging.info(f"Question {question_id_retry}: API Retry + Re-prompt {'succeeded' if result_data_retry['parse_successful'] else 'failed to parse'}.")
 
398
  except Exception as e_rp2:
399
+ logging.error(f"Re-prompt API call failed for question {question_id_retry} after API retry: {e_rp2}")
400
+ result_data_retry.update({
401
+ "error": f"API retry ok, parse failed. Re-prompt API call failed: {str(e_rp2)}",
402
+ "attempt": 2 # Stay at attempt 2 as re-prompt itself failed
 
 
 
 
 
403
  })
404
  else:
405
+ current_error_retry = result_data_retry.get("error")
406
+ if api_success_attempt2 and raw_response_attempt2 is None and parsed_answer_retry is None:
 
 
 
407
  current_error_retry = "API retry call returned empty content. Re-prompt skipped."
408
+
409
+ result_data_retry.update({
410
+ "predicted_answer": parsed_answer_retry, "raw_response": raw_response_attempt2,
411
+ "parse_successful": parse_success_attempt2, "api_call_successful": api_success_attempt2,
412
+ "error": None if parse_success_attempt2 else current_error_retry, # Clear initial error if parse now ok
413
+ "attempt": 2
 
 
414
  })
415
+ except Exception as e_retry_api:
416
+ logging.error(f"API call failed permanently for question {question_id_retry} (Attempt 2 API Retry): {e_retry_api}")
417
+ result_data_retry["error"] = f"Initial API fail. Retry API call also failed: {str(e_retry_api)}"
418
+ result_data_retry["api_call_successful"] = False
419
+
420
+ model_results.append(result_data_retry)
421
+ append_prediction(result_data_retry, predictions_path)
422
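Taken together, the initial pass, the single re-prompt, and this API retry pass form a small per-question attempt policy. Below is a condensed sketch of that flow, not the repository's code; `call_model` and `parse_answer` are hypothetical stand-ins for the two concerns handled by `get_openrouter_prediction`:

```python
# Condensed sketch of the attempt / re-prompt / API-retry policy used above.
from typing import Any, Callable, Optional

def answer_with_retries(call_model: Callable[..., str],
                        parse_answer: Callable[[str], Optional[Any]],
                        question: dict) -> dict:
    result = {"attempt": 1, "predicted_answer": None, "api_call_successful": False}
    try:
        raw = call_model(question, reprompt_of=None)        # attempt 1: full prompt with image
        result["api_call_successful"] = True
    except Exception:
        try:
            raw = call_model(question, reprompt_of=None)    # API retry pass -> attempt 2
            result.update(api_call_successful=True, attempt=2)
        except Exception as exc:                            # permanent API failure
            result["error"] = str(exc)
            return result
    parsed = parse_answer(raw)
    if parsed is None and raw:                              # parse failed: one re-prompt with the raw text
        raw = call_model(question, reprompt_of=raw)
        parsed = parse_answer(raw)
        result["attempt"] += 1
    result["predicted_answer"] = parsed
    return result
```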
 
423
  # --- Final Evaluation for the current model ---
424
  logging.info(f"--- Calculating final results for model: {model_id} ---")
425
+
426
+ # Always use calculate_exam_scores now
427
+ evaluation_summary = calculate_exam_scores(model_results) # model_results modified in-place
428
+
429
+ summary_exam_name = exam_name_filter if exam_name_filter else "All_Exams"
430
+ summary_exam_year = exam_year_filter if exam_year_filter else "All_Years"
431
+
432
+ summary = {
433
+ "model_name": model_id,
434
+ "exam_name": summary_exam_name, # This is the filter, not necessarily from data if no filter
435
+ "exam_year": summary_exam_year, # This is the filter
436
+ "timestamp": timestamp,
437
+ "total_questions_in_dataset": total_questions,
438
+ **evaluation_summary
439
+ }
440
+ logging.info(f"Exam Score: {summary.get('overall_score')}")
441
+ logging.info(f"Full Correct: {summary.get('overall_correct_full')}, Partial Correct: {summary.get('overall_partial_correct')}, Incorrect Choice: {summary.get('overall_incorrect_choice')}, Skipped: {summary.get('overall_skipped')}, API/Parse Failures: {summary.get('overall_api_parse_failures')}")
442
+
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
443
  logging.info(f"--- Results Summary for model: {model_id} ---")
444
+ logging.info(json.dumps(summary, indent=2, sort_keys=True))
445
  logging.info("-------------------------------------")
446
 
447
  # --- Overwrite predictions file with final evaluated results ---
src/evaluation.py CHANGED
@@ -56,160 +56,235 @@ def calculate_accuracy(predictions: List[Optional[List[int]]], ground_truths: Li
56
  return accuracy
57
 
58
 
59
- def get_neet_section(subject: str, question_num: int) -> Optional[str]: # question_num is effectively unused by this revised logic
60
  """
61
  Returns the subject name directly as the section identifier.
62
- This makes the section reporting dynamic based on the 'subject' field from metadata.
63
  """
64
- if subject and isinstance(subject, str) and subject.strip(): # Ensure subject is a non-empty string
65
- return subject.strip() # Return the subject name as is
66
  else:
67
- # Log if subject is missing or invalid, helps in debugging data issues.
68
- # question_num is included in log for context, though not used for determination.
69
- logging.warning(f"Invalid or missing subject ('{subject}') for question_num '{question_num}'. Cannot determine section.")
70
  return None
71
 
72
 
73
- def calculate_neet_scores(results: List[Dict[str, Any]]) -> Dict[str, Any]:
74
  """
75
- Calculates NEET scores (+4 / -1 / 0) and provides section-wise breakdown.
76
 
77
  Args:
78
  results (List[Dict[str, Any]]): A list of result dictionaries. Each dict must contain:
79
- 'question_id' (str): e.g., "NEET_2024_T3_045"
80
- 'subject' (str): e.g., "Physics"
81
- 'ground_truth' (List[int]): Correct answer(s)
82
- 'predicted_answer' (List[int] | str | None): Model's prediction ("SKIP", list, or None)
 
 
83
  'api_call_successful' (bool): Whether the API call succeeded.
84
  This list will be modified in-place to add 'evaluation_status' and 'marks_awarded'.
85
-
86
  Returns:
87
  Dict[str, Any]: A dictionary containing overall and section-wise scores and counts.
88
  """
89
  if not results:
90
  return {"error": "No results provided."}
91
 
92
- # Initialize overall and section counters
93
- overall_stats = {"score": 0, "correct": 0, "incorrect": 0, "skipped": 0, "api_parse_failures": 0}
94
 
95
- # Dynamically discover subjects from the results and initialize section_stats
96
- # Filter out None or empty subjects before creating the set
97
  valid_subjects_from_data = [r.get("subject") for r in results if r.get("subject") and isinstance(r.get("subject"), str) and r.get("subject").strip()]
98
- if not valid_subjects_from_data and results: # If results exist but no valid subjects found
99
- logging.warning("No valid subjects found in results data to initialize section_stats, though results were provided.")
100
 
101
  unique_subjects = sorted(list(set(s.strip() for s in valid_subjects_from_data)))
102
-
103
  section_stats = {
104
- subj: {"score": 0, "correct": 0, "incorrect": 0, "skipped": 0}
105
  for subj in unique_subjects
106
  }
 
 
107
 
108
- if not unique_subjects and results: # Log if results are present but no sections could be initialized
109
- logging.warning("section_stats is empty because no unique, valid subjects were found in the results.")
110
-
111
- unmapped_questions = 0
112
 
113
  for result in results:
114
  question_id = result.get("question_id")
115
- # Use the subject directly from the result item for section mapping.
116
- # This 'subject' variable will be passed to get_neet_section.
117
- subject = result.get("subject")
 
118
  pred = result.get("predicted_answer")
119
- truth = result.get("ground_truth")
120
- api_success = result.get("api_call_successful", False) # Default to False if missing
121
 
122
- # Determine section
123
  section = None
124
- question_num = -1
125
- if question_id and subject:
126
- match = re.search(r'_(\d+)$', question_id) # Extract number at the end
127
- if match:
128
  try:
129
- question_num = int(match.group(1))
130
- section = get_neet_section(subject, question_num)
131
  except ValueError:
132
- logging.warning(f"Could not parse number from question_id: {question_id}")
133
- else:
134
- logging.warning(f"Could not extract number from question_id format: {question_id}")
135
- else:
136
- logging.warning(f"Missing question_id or subject for a result: {result}")
137
-
138
  if section is None:
139
- unmapped_questions += 1
140
- logging.warning(f"Could not map question to NEET section: ID={question_id}, Subject={subject}, Num={question_num}")
141
- # Decide how to handle unmapped questions - here we just count them and don't score them section-wise
142
- # They will still contribute to overall failure counts if applicable
143
 
144
- # --- Scoring Logic ---
145
  current_score_change = 0
146
- is_correct = False
147
- is_incorrect = False
 
148
  is_skipped = False
149
- is_failure = False
150
- evaluation_status = "unknown" # Initialize status
151
-
152
- if not api_success or pred is None:
153
- # API call failed OR parsing failed (but wasn't a deliberate SKIP)
154
- evaluation_status = "failure" # API or Parse Failure
155
- is_incorrect = True
156
- is_failure = True
157
- current_score_change = -1
158
  elif pred == "SKIP":
159
  is_skipped = True
160
  current_score_change = 0
161
  evaluation_status = "skipped"
162
- elif isinstance(pred, list):
163
- # API and parsing succeeded, compare answers
164
- sorted_pred = sorted(pred)
165
- sorted_truth = sorted(truth)
166
- # Check if any of the predicted answers are in the ground truth
167
- if any(p_ans in sorted_truth for p_ans in sorted_pred):
168
- is_correct = True
169
- current_score_change = 4
170
- evaluation_status = "correct"
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
171
  else:
172
- is_incorrect = True
173
- current_score_change = -1
174
- evaluation_status = "incorrect" # Wrong answer
175
  else:
176
- # Should not happen if parsing logic is correct, but handle defensively
177
- logging.error(f"Unexpected prediction type for {question_id}: {pred}")
178
- is_incorrect = True
179
- is_failure = True # Treat unexpected type as failure
180
- current_score_change = -1
181
- evaluation_status = "failure" # Unexpected type
182
-
183
- # Add evaluation details to the result dictionary
184
  result['evaluation_status'] = evaluation_status
185
  result['marks_awarded'] = current_score_change
186
 
187
- # Update overall stats
188
  overall_stats["score"] += current_score_change
189
- if is_correct: overall_stats["correct"] += 1
190
- if is_incorrect: overall_stats["incorrect"] += 1
191
  if is_skipped: overall_stats["skipped"] += 1
192
- if is_failure: overall_stats["api_parse_failures"] += 1
 
193
 
194
- # Update section stats if section was determined
195
- if section:
196
  section_stats[section]["score"] += current_score_change
197
- if is_correct: section_stats[section]["correct"] += 1
198
- if is_incorrect: section_stats[section]["incorrect"] += 1
199
  if is_skipped: section_stats[section]["skipped"] += 1
200
-
201
- logging.info(f"NEET Score Calculation Complete. Overall Score: {overall_stats['score']}")
202
- if unmapped_questions > 0:
203
- logging.warning(f"{unmapped_questions} questions could not be mapped to a NEET section.")
 
 
204
 
205
  return {
206
  "overall_score": overall_stats["score"],
207
- "overall_correct": overall_stats["correct"],
208
- "overall_incorrect": overall_stats["incorrect"],
 
209
  "overall_skipped": overall_stats["skipped"],
210
  "overall_api_parse_failures": overall_stats["api_parse_failures"],
211
  "total_questions_processed": len(results),
212
- "unmapped_section_questions": unmapped_questions,
213
  "section_breakdown": section_stats
214
  }
215
 
@@ -218,107 +293,68 @@ def calculate_neet_scores(results: List[Dict[str, Any]]) -> Dict[str, Any]:
218
  if __name__ == '__main__':
219
  print("Running evaluation tests...")
220
 
221
- # --- Test calculate_accuracy ---
222
  print("\n--- Testing calculate_accuracy ---")
223
- # Test case 1: Perfect match
224
  preds1 = [[1], [2], [1, 3]]
225
  truths1 = [[1], [2], [3, 1]]
226
  acc1 = calculate_accuracy(preds1, truths1)
227
- print(f"Test Case 1: Preds={preds1}, Truths={truths1} -> Accuracy: {acc1} (Expected: 1.0)")
228
  assert acc1 == 1.0
229
-
230
- # Test case 2: One wrong
231
- preds2 = [[1], [4], [1, 3]]
232
- truths2 = [[1], [2], [3, 1]]
233
- acc2 = calculate_accuracy(preds2, truths2)
234
- print(f"Test Case 2: Preds={preds2}, Truths={truths2} -> Accuracy: {acc2} (Expected: ~0.6667)")
235
- assert abs(acc2 - 2/3) < 1e-6
236
-
237
- # Test case 3: Parsing failure (None)
238
- preds3 = [[1], None, [1, 3]]
239
- truths3 = [[1], [2], [3, 1]]
240
- acc3 = calculate_accuracy(preds3, truths3)
241
- print(f"Test Case 3: Preds={preds3}, Truths={truths3} -> Accuracy: {acc3} (Expected: ~0.6667)")
242
- assert abs(acc3 - 2/3) < 1e-6
243
-
244
- # Test case 4: Empty lists
245
- preds4 = []
246
- truths4 = []
247
- acc4 = calculate_accuracy(preds4, truths4)
248
- print(f"Test Case 4: Preds={preds4}, Truths={truths4} -> Accuracy: {acc4} (Expected: 0.0)")
249
- assert acc4 == 0.0
250
-
251
- # Test case 5: Length mismatch (should raise ValueError)
252
- preds5 = [[1]]
253
- truths5 = [[1], [2]]
254
- try:
255
- calculate_accuracy(preds5, truths5)
256
- print("Test Case 5: FAILED - ValueError not raised")
257
- except ValueError as e:
258
- print(f"Test Case 5: PASSED - Raised ValueError: {e}")
259
-
260
- # Test case 6: Partial match in multiple correct is still wrong
261
- preds6 = [[1], [2], [1]]
262
- truths6 = [[1], [2], [1, 3]]
263
- acc6 = calculate_accuracy(preds6, truths6)
264
- print(f"Test Case 6: Preds={preds6}, Truths={truths6} -> Accuracy: {acc6} (Expected: ~0.6667)")
265
- assert abs(acc6 - 2/3) < 1e-6
266
-
267
- # --- Test calculate_neet_scores ---
268
- print("\n--- Testing calculate_neet_scores ---")
269
- test_results = [
270
- # Physics A - Correct
271
- {"question_id": "NEET_2024_T3_001", "subject": "Physics", "ground_truth": [1], "predicted_answer": [1], "api_call_successful": True},
272
- # Physics B - Incorrect
273
- {"question_id": "NEET_2024_T3_040", "subject": "Physics", "ground_truth": [4], "predicted_answer": [2], "api_call_successful": True},
274
- # Chemistry A - Skipped
275
- {"question_id": "NEET_2024_T3_055", "subject": "Chemistry", "ground_truth": [4], "predicted_answer": "SKIP", "api_call_successful": True},
276
- # Chemistry B - API Fail
277
- {"question_id": "NEET_2024_T3_090", "subject": "Chemistry", "ground_truth": [3], "predicted_answer": None, "api_call_successful": False},
278
- # Botany A - Parse Fail (None)
279
- {"question_id": "NEET_2024_T3_110", "subject": "Botany", "ground_truth": [4], "predicted_answer": None, "api_call_successful": True},
280
- # Botany B - Correct (multi)
281
- {"question_id": "NEET_2024_T3_145", "subject": "Botany", "ground_truth": [2, 4], "predicted_answer": [4, 2], "api_call_successful": True}, # Assuming multi-correct allowed
282
- # Zoology A - Correct
283
- {"question_id": "NEET_2024_T3_160", "subject": "Zoology", "ground_truth": [4], "predicted_answer": [4], "api_call_successful": True},
284
- # Zoology B - Incorrect
285
- {"question_id": "NEET_2024_T3_190", "subject": "Zoology", "ground_truth": [2], "predicted_answer": [1], "api_call_successful": True},
286
- # Unmapped ID
287
- {"question_id": "JEE_2023_Q1", "subject": "Physics", "ground_truth": [1], "predicted_answer": [1], "api_call_successful": True},
288
- # Missing Subject
289
- {"question_id": "NEET_2024_T3_002", "subject": None, "ground_truth": [3], "predicted_answer": [3], "api_call_successful": True},
290
  ]
291
 
292
- neet_summary = calculate_neet_scores(test_results)
293
- print("\nNEET Score Summary:")
294
  import json
295
- print(json.dumps(neet_summary, indent=2))
296
-
297
- # Expected:
298
- # Overall: Score=4-1+0-1-1+4+4-1 = 8. Correct=3, Incorrect=4, Skipped=1, Failures=2, Unmapped=2
299
- # Phys A: Score=4, Correct=1, Incorrect=0, Skipped=0
300
- # Phys B: Score=-1, Correct=0, Incorrect=1, Skipped=0
301
- # Chem A: Score=0, Correct=0, Incorrect=0, Skipped=1
302
- # Chem B: Score=-1, Correct=0, Incorrect=1, Skipped=0 (API Fail counts as incorrect)
303
- # Bot A: Score=-1, Correct=0, Incorrect=1, Skipped=0 (Parse Fail counts as incorrect)
304
- # Bot B: Score=4, Correct=1, Incorrect=0, Skipped=0
305
- # Zoo A: Score=4, Correct=1, Incorrect=0, Skipped=0
306
- # Zoo B: Score=-1, Correct=0, Incorrect=1, Skipped=0
307
-
308
- assert neet_summary["overall_score"] == 8
309
- assert neet_summary["overall_correct"] == 3
310
- assert neet_summary["overall_incorrect"] == 4
311
- assert neet_summary["overall_skipped"] == 1
312
- assert neet_summary["overall_api_parse_failures"] == 2
313
- assert neet_summary["unmapped_section_questions"] == 2
314
- assert neet_summary["section_breakdown"]["Physics_A"]["score"] == 4
315
- assert neet_summary["section_breakdown"]["Physics_B"]["score"] == -1
316
- assert neet_summary["section_breakdown"]["Chemistry_A"]["score"] == 0
317
- assert neet_summary["section_breakdown"]["Chemistry_B"]["score"] == -1
318
- assert neet_summary["section_breakdown"]["Botany_A"]["score"] == -1
319
- assert neet_summary["section_breakdown"]["Botany_B"]["score"] == 4
320
- assert neet_summary["section_breakdown"]["Zoology_A"]["score"] == 4
321
- assert neet_summary["section_breakdown"]["Zoology_B"]["score"] == -1
322
-
323
 
324
  print("\nEvaluation tests completed.")
 
56
  return accuracy
57
 
58
 
59
+ def get_subject_as_section(subject: str, question_num_for_log: int) -> Optional[str]:
60
  """
61
  Returns the subject name directly as the section identifier.
62
+ question_num_for_log is only used for logging context if subject is invalid.
63
  """
64
+ if subject and isinstance(subject, str) and subject.strip():
65
+ return subject.strip()
66
  else:
67
+ logging.warning(f"Invalid or missing subject ('{subject}') for question_num '{question_num_for_log}'. Cannot determine section.")
 
 
68
  return None
69
 
70
 
71
+ def calculate_exam_scores(results: List[Dict[str, Any]]) -> Dict[str, Any]:
72
  """
73
+ Calculates exam scores based on exam_name and question_type, providing section-wise breakdown.
74
 
75
  Args:
76
  results (List[Dict[str, Any]]): A list of result dictionaries. Each dict must contain:
77
+ 'question_id' (str)
78
+ 'subject' (str)
79
+ 'exam_name' (str) e.g., "NEET", "JEE_MAIN", "JEE_ADVANCED"
80
+ 'question_type' (str) e.g., "MCQ_SINGLE_CORRECT", "MCQ_MULTIPLE_CORRECT", "INTEGER"
81
+ 'ground_truth' (List[int] | int): Correct answer(s). For INTEGER, it's a single int.
82
+ 'predicted_answer' (List[int] | str | None): Model's prediction.
83
  'api_call_successful' (bool): Whether the API call succeeded.
84
  This list will be modified in-place to add 'evaluation_status' and 'marks_awarded'.
 
85
  Returns:
86
  Dict[str, Any]: A dictionary containing overall and section-wise scores and counts.
87
  """
88
  if not results:
89
  return {"error": "No results provided."}
90
 
91
+ overall_stats = {"score": 0, "correct": 0, "incorrect": 0, "skipped": 0, "api_parse_failures": 0, "partial_correct": 0}
 
92
 
 
 
93
  valid_subjects_from_data = [r.get("subject") for r in results if r.get("subject") and isinstance(r.get("subject"), str) and r.get("subject").strip()]
94
+ if not valid_subjects_from_data and results:
95
+ logging.warning("No valid subjects found in results data to initialize section_stats.")
96
 
97
  unique_subjects = sorted(list(set(s.strip() for s in valid_subjects_from_data)))
 
98
  section_stats = {
99
+ subj: {"score": 0, "correct": 0, "incorrect": 0, "skipped": 0, "api_parse_failures": 0, "partial_correct": 0}
100
  for subj in unique_subjects
101
  }
102
+ if not unique_subjects and results:
103
+ logging.warning("section_stats is empty because no unique, valid subjects were found.")
104
 
105
+ unmapped_section_questions = 0
 
 
 
106
 
107
  for result in results:
108
  question_id = result.get("question_id")
109
+ subject = result.get("subject")
110
+ exam_name = result.get("exam_name", "").upper() # Default to empty string, then uppercase
111
+ question_type = result.get("question_type", "").upper() # Default to empty string, then uppercase
112
+
113
  pred = result.get("predicted_answer")
114
+ truth = result.get("ground_truth") # Can be list for MCQ, int for INTEGER
115
+ api_success = result.get("api_call_successful", False)
116
+
117
+ # Ensure truth is a list for consistent processing, even for single-answer INTEGER types
118
+ # For INTEGER, ground_truth might be a single int. Convert to list for set operations.
119
+ if isinstance(truth, int):
120
+ truth_set = {truth}
121
+ truth_list_for_comparison = [truth]
122
+ elif isinstance(truth, list):
123
+ truth_set = set(truth)
124
+ truth_list_for_comparison = sorted(truth) # For exact match comparison
125
+ else:
126
+ logging.error(f"Invalid ground_truth format for {question_id}: {truth}. Skipping scoring for this question.")
127
+ result['evaluation_status'] = "error_bad_ground_truth"
128
+ result['marks_awarded'] = 0
129
+ overall_stats["api_parse_failures"] +=1 # Count as a type of failure
130
+ if subject and subject in section_stats:
131
+ section_stats[subject]["api_parse_failures"] +=1
132
+ continue
133
+
134
 
 
135
  section = None
136
+ question_num_for_log = -1 # For logging in get_subject_as_section
137
+ if question_id:
138
+ match_num = re.search(r'_(\d+)$', question_id)
139
+ if match_num:
140
  try:
141
+ question_num_for_log = int(match_num.group(1))
 
142
  except ValueError:
143
+ logging.warning(f"Could not parse number from question_id for logging: {question_id}")
144
+
145
+ if subject:
146
+ section = get_subject_as_section(subject, question_num_for_log)
147
+
 
148
  if section is None:
149
+ unmapped_section_questions += 1
150
+ logging.warning(f"Could not map question to section: ID={question_id}, Subject={subject}")
 
 
151
 
 
152
  current_score_change = 0
153
+ evaluation_status = "unknown"
154
+ is_correct_full = False
155
+ is_incorrect_choice = False # Specifically for choices made that are wrong
156
  is_skipped = False
157
+ is_api_parse_failure = False # API or internal parsing failure
158
+ is_partial_correct = False
159
+
160
+
161
+ if not api_success or pred is None: # pred is None means our internal parsing failed
162
+ is_api_parse_failure = True
163
+ evaluation_status = "failure_api_or_parse"
164
+ # Default penalty for API/Parse failure, can be overridden by specific exam rules
165
+ current_score_change = -1
166
+ if exam_name == "JEE_MAIN" and question_type == "INTEGER":
167
+ current_score_change = 0
168
+ if exam_name == "JEE_ADVANCED" and question_type == "INTEGER":
169
+ current_score_change = 0
170
+
171
  elif pred == "SKIP":
172
  is_skipped = True
173
  current_score_change = 0
174
  evaluation_status = "skipped"
175
+ elif isinstance(pred, list): # LLM provided one or more answer choices
176
+ pred_set = set(pred)
177
+
178
+ # NEET Scoring
179
+ if exam_name == "NEET" and question_type == "MCQ_SINGLE_CORRECT":
180
+ if pred_set == truth_set and len(pred_set) == 1: # NEET is always single correct MCQ
181
+ is_correct_full = True; current_score_change = 4; evaluation_status = "correct"
182
+ else:
183
+ is_incorrect_choice = True; current_score_change = -1; evaluation_status = "incorrect"
184
+
185
+ # JEE Main Scoring
186
+ elif exam_name == "JEE_MAIN":
187
+ if question_type == "MCQ_SINGLE_CORRECT":
188
+ if pred_set == truth_set and len(pred_set) == 1:
189
+ is_correct_full = True; current_score_change = 4; evaluation_status = "correct"
190
+ else:
191
+ is_incorrect_choice = True; current_score_change = -1; evaluation_status = "incorrect"
192
+ elif question_type == "INTEGER":
193
+ # For INTEGER, pred should be a list with one number after parsing
194
+ if len(pred) == 1 and pred[0] in truth_set:  # truth_set contains the single correct integer
195
+ is_correct_full = True; current_score_change = 4; evaluation_status = "correct"
196
+ else:
197
+ is_incorrect_choice = True; current_score_change = 0; evaluation_status = "incorrect" # No negative for JEE Main Integer
198
+
199
+ # JEE Advanced Scoring
200
+ elif exam_name == "JEE_ADVANCED":
201
+ if question_type == "MCQ_SINGLE_CORRECT":
202
+ if pred_set == truth_set and len(pred_set) == 1:
203
+ is_correct_full = True; current_score_change = 3; evaluation_status = "correct"
204
+ else:
205
+ is_incorrect_choice = True; current_score_change = -1; evaluation_status = "incorrect"
206
+ elif question_type == "INTEGER":
207
+ if len(pred) == 1 and pred[0] in truth_set:
208
+ is_correct_full = True; current_score_change = 4; evaluation_status = "correct" # +4 per the JEE Advanced marking scheme for Integer questions
209
+ else:
210
+ is_incorrect_choice = True; current_score_change = 0; evaluation_status = "incorrect" # 0 marks; no negative marking for JEE Advanced Integer questions
211
+ elif question_type == "MCQ_MULTIPLE_CORRECT":
212
+ # JEE Advanced partial-marking scheme for multiple-correct MCQs
213
+ num_correct_options_in_truth = len(truth_set)
214
+ num_chosen_options = len(pred_set)
215
+
216
+ correct_chosen_options = pred_set.intersection(truth_set)
217
+ incorrect_chosen_options = pred_set.difference(truth_set)
218
+
219
+ num_correct_chosen = len(correct_chosen_options)
220
+ num_incorrect_chosen = len(incorrect_chosen_options)
221
+
222
+ if num_incorrect_chosen > 0:
223
+ current_score_change = -2
224
+ is_incorrect_choice = True
225
+ evaluation_status = "incorrect_negative"
226
+ elif num_correct_chosen == num_correct_options_in_truth and num_chosen_options == num_correct_options_in_truth: # All correct and only correct chosen
227
+ current_score_change = 4
228
+ is_correct_full = True
229
+ evaluation_status = "correct_full"
230
+ elif num_correct_options_in_truth == 4 and num_correct_chosen == 3 and num_chosen_options == 3: # All 4 are correct, 3 chosen
231
+ current_score_change = 3
232
+ is_partial_correct = True
233
+ evaluation_status = "partial_3_of_4"
234
+ elif num_correct_options_in_truth >= 3 and num_correct_chosen == 2 and num_chosen_options == 2: # 3 or more correct, 2 chosen (both correct)
235
+ current_score_change = 2
236
+ is_partial_correct = True
237
+ evaluation_status = "partial_2_of_3_plus"
238
+ elif num_correct_options_in_truth >= 2 and num_correct_chosen == 1 and num_chosen_options == 1: # 2 or more correct, 1 chosen (it's correct)
239
+ current_score_change = 1
240
+ is_partial_correct = True
241
+ evaluation_status = "partial_1_of_2_plus"
242
+ else: # Other cases not explicitly covered by positive partial, but no incorrect chosen
243
+ current_score_change = 0 # Default to 0 if no incorrect, but not matching positive partials
244
+ evaluation_status = "no_marks_no_penalty"
245
  else:
246
+ logging.warning(f"Unknown exam_name/question_type combination for scoring: {exam_name}/{question_type} for QID {question_id}. Assigning 0 marks.")
247
+ current_score_change = 0
248
+ evaluation_status = "unknown_exam_type"
249
  else:
250
+ # pred is not list and not SKIP and not None (should not happen with current parse_llm_answer)
251
+ logging.error(f"Unexpected prediction type for {question_id}: {pred}. Treating as API/Parse Failure.")
252
+ is_api_parse_failure = True
253
+ current_score_change = -1 # Default penalty
254
+ evaluation_status = "failure_unexpected_type"
255
+
256
+
 
257
  result['evaluation_status'] = evaluation_status
258
  result['marks_awarded'] = current_score_change
259
 
 
260
  overall_stats["score"] += current_score_change
261
+ if is_correct_full: overall_stats["correct"] += 1
262
+ if is_incorrect_choice: overall_stats["incorrect"] += 1 # Only count if a choice was made and it was wrong
263
  if is_skipped: overall_stats["skipped"] += 1
264
+ if is_api_parse_failure: overall_stats["api_parse_failures"] += 1
265
+ if is_partial_correct: overall_stats["partial_correct"] +=1
266
 
267
+ if section and section in section_stats:
 
268
  section_stats[section]["score"] += current_score_change
269
+ if is_correct_full: section_stats[section]["correct"] += 1
270
+ if is_incorrect_choice: section_stats[section]["incorrect"] += 1
271
  if is_skipped: section_stats[section]["skipped"] += 1
272
+ if is_api_parse_failure: section_stats[section]["api_parse_failures"] += 1
273
+ if is_partial_correct: section_stats[section]["partial_correct"] +=1
274
+
275
+ logging.info(f"Exam Score Calculation Complete. Overall Score: {overall_stats['score']}")
276
+ if unmapped_section_questions > 0:
277
+ logging.warning(f"{unmapped_section_questions} questions could not be mapped to a section.")
278
 
279
  return {
280
  "overall_score": overall_stats["score"],
281
+ "overall_correct_full": overall_stats["correct"],
282
+ "overall_partial_correct": overall_stats["partial_correct"],
283
+ "overall_incorrect_choice": overall_stats["incorrect"],
284
  "overall_skipped": overall_stats["skipped"],
285
  "overall_api_parse_failures": overall_stats["api_parse_failures"],
286
  "total_questions_processed": len(results),
287
+ "unmapped_section_questions": unmapped_section_questions,
288
  "section_breakdown": section_stats
289
  }
290
 
 
293
  if __name__ == '__main__':
294
  print("Running evaluation tests...")
295
 
296
+ # --- Test calculate_accuracy (existing tests can remain as they test general list comparison) ---
297
  print("\n--- Testing calculate_accuracy ---")
 
298
  preds1 = [[1], [2], [1, 3]]
299
  truths1 = [[1], [2], [3, 1]]
300
  acc1 = calculate_accuracy(preds1, truths1)
301
+ print(f"Test Case 1 (Accuracy): Preds={preds1}, Truths={truths1} -> Accuracy: {acc1} (Expected: 1.0)")
302
  assert acc1 == 1.0
303
+ # ... (other accuracy tests can be kept or adapted if needed)
304
+
305
+
306
+ # --- Test calculate_exam_scores ---
307
+ print("\n--- Testing calculate_exam_scores ---")
308
+ test_results_exam = [
309
+ # NEET
310
+ {"question_id": "N001", "subject": "Physics", "exam_name": "NEET", "question_type": "MCQ_SINGLE_CORRECT", "ground_truth": [1], "predicted_answer": [1], "api_call_successful": True}, # Correct +4
311
+ {"question_id": "N002", "subject": "Physics", "exam_name": "NEET", "question_type": "MCQ_SINGLE_CORRECT", "ground_truth": [4], "predicted_answer": [2], "api_call_successful": True}, # Incorrect -1
312
+ {"question_id": "N003", "subject": "Chemistry", "exam_name": "NEET", "question_type": "MCQ_SINGLE_CORRECT", "ground_truth": [4], "predicted_answer": "SKIP", "api_call_successful": True}, # Skipped 0
313
+ {"question_id": "N004", "subject": "Chemistry", "exam_name": "NEET", "question_type": "MCQ_SINGLE_CORRECT", "ground_truth": [3], "predicted_answer": None, "api_call_successful": False}, # API Fail -1
314
+ {"question_id": "N005", "subject": "Botany", "exam_name": "NEET", "question_type": "MCQ_SINGLE_CORRECT", "ground_truth": [4], "predicted_answer": None, "api_call_successful": True}, # Parse Fail -1
315
+
316
+ # JEE Main - MCQ
317
+ {"question_id": "JM001", "subject": "Maths", "exam_name": "JEE_MAIN", "question_type": "MCQ_SINGLE_CORRECT", "ground_truth": [2], "predicted_answer": [2], "api_call_successful": True}, # Correct +4
318
+ {"question_id": "JM002", "subject": "Maths", "exam_name": "JEE_MAIN", "question_type": "MCQ_SINGLE_CORRECT", "ground_truth": [3], "predicted_answer": [1], "api_call_successful": True}, # Incorrect -1
319
+ # JEE Main - Integer
320
+ {"question_id": "JM003", "subject": "Physics", "exam_name": "JEE_MAIN", "question_type": "INTEGER", "ground_truth": 5, "predicted_answer": [5], "api_call_successful": True}, # Correct +4
321
+ {"question_id": "JM004", "subject": "Physics", "exam_name": "JEE_MAIN", "question_type": "INTEGER", "ground_truth": 10, "predicted_answer": [8], "api_call_successful": True}, # Incorrect 0
322
+ {"question_id": "JM005", "subject": "Chemistry", "exam_name": "JEE_MAIN", "question_type": "INTEGER", "ground_truth": 7, "predicted_answer": None, "api_call_successful": True}, # Parse Fail 0
323
+
324
+ # JEE Advanced - MCQ Single Correct
325
+ {"question_id": "JA001", "subject": "Maths", "exam_name": "JEE_ADVANCED", "question_type": "MCQ_SINGLE_CORRECT", "ground_truth": [1], "predicted_answer": [1], "api_call_successful": True}, # Correct +3
326
+ {"question_id": "JA002", "subject": "Maths", "exam_name": "JEE_ADVANCED", "question_type": "MCQ_SINGLE_CORRECT", "ground_truth": [2], "predicted_answer": [3], "api_call_successful": True}, # Incorrect -1
327
+ # JEE Advanced - Integer
328
+ {"question_id": "JA003", "subject": "Physics", "exam_name": "JEE_ADVANCED", "question_type": "INTEGER", "ground_truth": 12, "predicted_answer": [12], "api_call_successful": True}, # Correct +4
329
+ {"question_id": "JA004", "subject": "Physics", "exam_name": "JEE_ADVANCED", "question_type": "INTEGER", "ground_truth": 0, "predicted_answer": [1], "api_call_successful": True}, # Incorrect 0
330
+ # JEE Advanced - MCQ Multiple Correct
331
+ {"question_id": "JA005", "subject": "Chemistry", "exam_name": "JEE_ADVANCED", "question_type": "MCQ_MULTIPLE_CORRECT", "ground_truth": [1, 3], "predicted_answer": [1, 3], "api_call_successful": True}, # All Correct +4
332
+ {"question_id": "JA006", "subject": "Chemistry", "exam_name": "JEE_ADVANCED", "question_type": "MCQ_MULTIPLE_CORRECT", "ground_truth": [1, 2, 3], "predicted_answer": [1, 2], "api_call_successful": True}, # Partial +2 (3 correct, 2 chosen)
333
+ {"question_id": "JA007", "subject": "Chemistry", "exam_name": "JEE_ADVANCED", "question_type": "MCQ_MULTIPLE_CORRECT", "ground_truth": [1, 2, 3, 4], "predicted_answer": [1, 2, 3], "api_call_successful": True}, # Partial +3 (4 correct, 3 chosen)
334
+ {"question_id": "JA008", "subject": "Chemistry", "exam_name": "JEE_ADVANCED", "question_type": "MCQ_MULTIPLE_CORRECT", "ground_truth": [1, 2], "predicted_answer": [1], "api_call_successful": True}, # Partial +1 (2 correct, 1 chosen)
335
+ {"question_id": "JA009", "subject": "Chemistry", "exam_name": "JEE_ADVANCED", "question_type": "MCQ_MULTIPLE_CORRECT", "ground_truth": [1, 3], "predicted_answer": [1, 2], "api_call_successful": True}, # Incorrect option chosen -2
336
+ {"question_id": "JA010", "subject": "Chemistry", "exam_name": "JEE_ADVANCED", "question_type": "MCQ_MULTIPLE_CORRECT", "ground_truth": [1, 3], "predicted_answer": [2, 4], "api_call_successful": True}, # All incorrect options chosen -2
337
+ {"question_id": "JA011", "subject": "Chemistry", "exam_name": "JEE_ADVANCED", "question_type": "MCQ_MULTIPLE_CORRECT", "ground_truth": [1, 3], "predicted_answer": "SKIP", "api_call_successful": True}, # Skipped 0
338
+ {"question_id": "JA012", "subject": "Maths", "exam_name": "JEE_ADVANCED", "question_type": "MCQ_MULTIPLE_CORRECT", "ground_truth": [1], "predicted_answer": [1], "api_call_successful": True}, # Single correct in multi-choice, full marks +4
339
+ {"question_id": "JA013", "subject": "Physics", "exam_name": "JEE_ADVANCED", "question_type": "MCQ_MULTIPLE_CORRECT", "ground_truth": [1,2,3], "predicted_answer": [1,4], "api_call_successful": True}, # One correct, one incorrect -> -2
 
340
  ]
341
 
342
+ exam_summary = calculate_exam_scores(test_results_exam)
343
+ print("\nExam Score Summary:")
344
  import json
345
+ print(json.dumps(exam_summary, indent=2, sort_keys=True))
346
+
347
+ # Basic assertions - can be expanded
348
+ assert exam_summary["overall_score"] == (4-1+0-1-1) + (4-1) + (4+0+0) + (3-1) + (4+0) + (4+2+3+1-2-2+0+4-2)
349
+ assert exam_summary["overall_correct_full"] == 8
350
+ assert exam_summary["overall_partial_correct"] == 3
351
+ assert exam_summary["overall_incorrect_choice"] == 7
352
+ assert exam_summary["overall_skipped"] == 2
353
+ assert exam_summary["overall_api_parse_failures"] == 3 # N004, N005, JM005
354
+
355
+ assert exam_summary["section_breakdown"]["Physics"]["score"] == (4-1) + (4+0) + (4+0) - 2 # N001,N002 + JM003,JM004 + JA003,JA004 + JA013
356
+ assert exam_summary["section_breakdown"]["Chemistry"]["score"] == (0-1) + (0) + (4+2+3+1-2-2+0) # N003,N004 + JM005 + JA005-JA011
357
+ assert exam_summary["section_breakdown"]["Botany"]["score"] == -1 # N005
358
+ assert exam_summary["section_breakdown"]["Maths"]["score"] == (4-1) + (3-1) + 4 # JM001,JM002 + JA001,JA002 + JA012
 
 
 
 
 
 
 
 
 
 
 
 
 
 
359
 
360
  print("\nEvaluation tests completed.")
src/llm_interface.py CHANGED
@@ -42,45 +42,69 @@ def encode_image_to_base64(image: Image.Image) -> str:
42
  img_str = base64.b64encode(buffered.getvalue()).decode('utf-8')
43
  return img_str
44
 
45
- def construct_reprompt_prompt(previous_raw_response: str) -> list:
46
- """Constructs the message list for a re-prompt API call."""
 
 
 
 
 
 
 
 
 
 
47
  prompt_text = f"""You previously provided the following response to an exam question:
48
  --- PREVIOUS RESPONSE START ---
49
  {previous_raw_response}
50
  --- PREVIOUS RESPONSE END ---
51
 
52
- Your previous response did not correctly format the final answer within <answer> tags, or it contained multiple answers.
53
-
54
- For NEET exam questions, only a single answer option is correct.
55
 
56
- Please re-examine your previous reasoning and provide ONLY the single integer corresponding to the correct answer choice, enclosed in <answer> tags.
57
 
58
- Example:
59
- - If the correct option is 2: <answer>2</answer>
60
- - If you are unsure or cannot determine the answer: <answer>SKIP</answer>
61
 
62
- It is crucial that your response contains ONLY the <answer> tag with the single correct integer number OR the word SKIP inside. Do not include any other text, explanation, or formatting."""
63
  messages = [{"role": "user", "content": prompt_text}]
64
  return messages
65
 
66
 
67
- def construct_initial_prompt(base64_image: str, exam_name: str, exam_year: str) -> list:
68
- """Constructs the initial message list with image for the OpenRouter API call."""
69
- # Updated prompt for the first attempt
70
- prompt_text = f"""You are an expert at analyzing exam questions from the {exam_name} {exam_year} exam and extracting the correct answer option.
71
- This exam uses positive marking for correct answers and may use negative marking for incorrect answers, so accuracy is crucial.
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
72
 
73
  Please think step-by-step to solve the problem.
74
- Examine the provided image of a multiple-choice question carefully.
75
- 1. Analyze the question and the provided options.
76
- 2. Reason through the problem to determine the single correct integer number corresponding to the correct option.
77
- 3. Format your final answer by enclosing ONLY the single integer number within <answer> tags.
78
 
79
  Examples:
80
- - If the correct option is 2: <answer>2</answer>
81
  - If you are unsure or cannot determine the answer: <answer>SKIP</answer>
82
 
83
- It is crucial that your response contains ONLY the <answer> tag with the single correct integer number OR the word SKIP inside. Do not include any other text, explanation, or formatting."""
84
 
85
  messages = [
86
  {
@@ -104,8 +128,9 @@ def get_openrouter_prediction(
104
  api_key: str,
105
  image: Image.Image | None = None, # Image is now optional
106
  previous_raw_response: str | None = None, # Added for re-prompting
107
- exam_name: str | None = None, # New parameter
108
- exam_year: str | None = None, # New parameter
 
109
  max_tokens: int = 100,
110
  request_timeout: int = 60
111
  ) -> tuple[list[int] | str | None, str | None]: # Allow predicted_answer to be "SKIP"
@@ -117,8 +142,9 @@ def get_openrouter_prediction(
117
  api_key (str): The OpenRouter API key.
118
  image (Image.Image | None): The question image (for initial prompt). Default None.
119
  previous_raw_response (str | None): The raw response from a previous failed parse attempt (for re-prompt). Default None.
120
- exam_name (str | None): The name of the exam (e.g., "NEET", "JEE"). Required if 'image' is provided.
121
- exam_year (str | None): The year of the exam. Required if 'image' is provided.
 
122
  max_tokens (int): Max tokens for the response.
123
  request_timeout (int): Timeout for the API request in seconds.
124
 
@@ -128,24 +154,28 @@ def get_openrouter_prediction(
128
  - The raw response text from the LLM (or None if API call failed).
129
 
130
  Raises:
131
- ValueError: If neither image nor previous_raw_response is provided.
132
  requests.exceptions.RequestException: If the API call fails after retries.
133
  """
134
- logging.info(f"Requesting prediction from model: {model_identifier}")
135
 
136
  if image is not None and previous_raw_response is None:
137
  # Initial prompt with image
138
- if not exam_name or not exam_year:
139
  raise ValueError("'exam_name' and 'exam_year' must be provided when 'image' is specified for an initial prompt.")
140
- logging.debug(f"Constructing initial prompt with image for {exam_name} {exam_year}.")
141
  base64_image = encode_image_to_base64(image)
142
- messages = construct_initial_prompt(base64_image, exam_name, exam_year)
143
  elif image is None and previous_raw_response is not None:
144
  # Re-prompt based on previous response
145
- logging.debug("Constructing re-prompt based on previous response.")
146
- messages = construct_reprompt_prompt(previous_raw_response)
147
  else:
148
- raise ValueError("Either 'image' (for initial call) or 'previous_raw_response' (for re-prompt) must be provided, but not both.")
 
 
 
 
149
 
150
  try:
151
  headers = {
@@ -166,91 +196,94 @@ def get_openrouter_prediction(
166
  timeout=request_timeout
167
  )
168
 
169
- # Handle specific status code retries (though tenacity handles exceptions)
170
  if response.status_code in RETRYABLE_STATUS_CODES:
171
- logging.warning(f"Received retryable status code {response.status_code} from {model_identifier}. Retrying might occur if configured.")
172
- # Raise an exception to trigger tenacity retry based on status code if needed,
173
- # or handle retry logic more explicitly here if preferred.
174
- # For simplicity, we rely on tenacity for exception-based retries.
175
- # If the request fails multiple times with these codes, it will eventually raise.
176
- response.raise_for_status() # Raise HTTPError for bad status codes after retries fail
177
-
178
- # Handle non-retryable client/server errors
179
  if not response.ok:
180
- logging.error(f"API Error for model {model_identifier}: Status {response.status_code} - {response.text}")
181
- return None, None # Failed API call
182
 
183
  response_json = response.json()
184
  raw_response_text = response_json.get("choices", [{}])[0].get("message", {}).get("content")
185
 
186
  if not raw_response_text:
187
- logging.warning(f"Empty response content received from model: {model_identifier}")
188
  return None, None
189
 
190
- logging.info(f"Raw response received from {model_identifier}: '{raw_response_text[:100]}...'")
191
- parsed_answer = parse_llm_answer(raw_response_text)
 
192
 
193
  if parsed_answer is None:
194
- logging.warning(f"Failed to parse answer from model {model_identifier}.") # Simplified log
195
 
196
  return parsed_answer, raw_response_text
197
 
198
  except requests.exceptions.Timeout as e:
199
- logging.error(f"Request timed out for model {model_identifier}: {e}")
200
- raise # Re-raise to allow tenacity to handle retry
201
  except requests.exceptions.RequestException as e:
202
- logging.error(f"Request failed for model {model_identifier}: {e}")
203
- raise # Re-raise to allow tenacity to handle retry
204
  except Exception as e:
205
- logging.error(f"An unexpected error occurred for model {model_identifier}: {e}")
206
- # Don't re-raise unexpected errors unless retry logic is designed for them
207
- return None, None # Failed due to unexpected error
208
 
209
  # Example Usage (requires a valid API key in .env and Pillow/requests/tenacity installed)
210
  if __name__ == '__main__':
211
  from src.utils import load_api_key
212
  try:
213
- # Create a dummy black image for testing
214
  dummy_image = Image.new('RGB', (60, 30), color = 'black')
215
  api_key = load_api_key()
216
- # Replace with a model you have access to via OpenRouter, e.g., "openai/gpt-4o"
217
- # Note: Free models might not support vision or follow instructions well.
218
- test_model = "anthropic/claude-3-haiku" # Example model
219
-
220
- print(f"\nTesting prediction with model: {test_model}")
221
- # Example with image, requiring exam_name and exam_year
222
- parsed_ans, raw_resp = get_openrouter_prediction(
223
- model_identifier=test_model,
224
- api_key=api_key,
225
- image=dummy_image,
226
- exam_name="DUMMY_EXAM",
227
- exam_year="2024"
228
  )
 
229
 
230
- print(f"Model: {test_model}")
231
- print(f"Parsed Answer (Initial): {parsed_ans}")
232
- print(f"Raw Response (Initial): {raw_resp}")
233
-
234
- # Example re-prompt (does not need image, exam_name, or exam_year)
235
- if raw_resp: # Only attempt re-prompt if there was an initial response
236
- print(f"\nTesting re-prompt for model: {test_model}")
237
- reprompt_ans, reprompt_raw_resp = get_openrouter_prediction(
238
- model_identifier=test_model,
239
- api_key=api_key,
240
- previous_raw_response="<answer>1 2</answer> This is some extra text." # Example bad response
241
- )
242
- print(f"Parsed Answer (Re-prompt): {reprompt_ans}")
243
- print(f"Raw Response (Re-prompt): {reprompt_raw_resp}")
244
- else:
245
- print("\nSkipping re-prompt test as initial response was empty.")
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
246
 
247
  except ValueError as e:
248
- print(f"Setup Error: {e}")
249
- # The following Exception catch was too broad and could mask the raw_resp not being defined
250
- # if the ValueError for setup occurred first.
251
- # It's better to catch a more general Exception for runtime issues after setup.
252
  except Exception as e:
253
- # Check if raw_resp was defined (e.g. if initial call succeeded but re-prompt failed)
254
- # This is a bit tricky as raw_resp might be from a successful first call even if a later part fails.
255
- # For simplicity in an example, just print the runtime error.
256
  print(f"Runtime Error during example execution: {e}")
 
42
  img_str = base64.b64encode(buffered.getvalue()).decode('utf-8')
43
  return img_str
44
 
45
+ def construct_reprompt_prompt(previous_raw_response: str, question_type: str) -> list:
46
+ """Constructs the message list for a re-prompt API call based on question_type."""
47
+ specific_instructions = ""
48
+ if question_type == "MCQ_SINGLE_CORRECT":
49
+ specific_instructions = "provide ONLY the single integer corresponding to the correct answer choice"
50
+ elif question_type == "INTEGER":
51
+ specific_instructions = "provide ONLY the single non-negative integer that is the answer"
52
+ elif question_type == "MCQ_MULTIPLE_CORRECT":
53
+ specific_instructions = "provide ALL correct integer option(s) separated by commas (e.g., <answer>1,3</answer> or <answer>2</answer> if only one is correct)"
54
+ else: # Default or unknown
55
+ specific_instructions = "provide the answer according to the question format"
56
+
57
  prompt_text = f"""You previously provided the following response to an exam question:
58
  --- PREVIOUS RESPONSE START ---
59
  {previous_raw_response}
60
  --- PREVIOUS RESPONSE END ---
61
 
62
+ Your previous response did not correctly format the final answer within <answer> tags, or it did not match the expected format for a '{question_type}' question.
 
 
63
 
64
+ Please re-examine your previous reasoning and {specific_instructions}, enclosed in <answer> tags.
65
 
66
+ Example for single correct integer: <answer>2</answer>
67
+ Example for multiple correct integers: <answer>1,4</answer>
68
+ If you are unsure or cannot determine the answer: <answer>SKIP</answer>
69
 
70
+ It is crucial that your response contains ONLY the <answer> tag with the correct integer(s) OR the word SKIP inside. Do not include any other text, explanation, or formatting."""
71
  messages = [{"role": "user", "content": prompt_text}]
72
  return messages
73
 
74
 
75
+ def construct_initial_prompt(base64_image: str, exam_name: str, exam_year: str, question_type: str) -> list:
76
+ """Constructs the initial message list with image for the OpenRouter API call, tailored by question_type."""
77
+
78
+ answer_format_instruction = ""
79
+ example_instruction = ""
80
+
81
+ if question_type == "MCQ_SINGLE_CORRECT":
82
+ answer_format_instruction = "determine the single correct integer number corresponding to the correct option."
83
+ example_instruction = "- If the correct option is 2: <answer>2</answer>"
84
+ elif question_type == "INTEGER":
85
+ answer_format_instruction = "determine the single non-negative integer that is the answer."
86
+ example_instruction = "- If the answer is 5: <answer>5</answer>"
87
+ elif question_type == "MCQ_MULTIPLE_CORRECT":
88
+ answer_format_instruction = "determine all correct integer option(s). If multiple options are correct, list them separated by commas."
89
+ example_instruction = "- If options 1 and 3 are correct: <answer>1,3</answer>\n- If only option 2 is correct: <answer>2</answer>"
90
+ else: # Default or unknown
91
+ answer_format_instruction = "determine the correct answer."
92
+ example_instruction = "- Example: <answer>Your Answer</answer>"
93
+
94
+ prompt_text = f"""You are an expert at analyzing exam questions from the {exam_name} {exam_year} exam ({question_type}) and extracting the correct answer option(s).
95
+ This exam uses specific marking schemes, so accuracy and correct formatting are crucial.
96
 
97
  Please think step-by-step to solve the problem.
98
+ Examine the provided image of the question carefully.
99
+ 1. Analyze the question and the provided options (if any).
100
+ 2. Reason through the problem to {answer_format_instruction}
101
+ 3. Format your final answer by enclosing ONLY the determined integer(s) within <answer> tags.
102
 
103
  Examples:
104
+ {example_instruction}
105
  - If you are unsure or cannot determine the answer: <answer>SKIP</answer>
106
 
107
+ It is crucial that your response contains ONLY the <answer> tag with the correct integer(s) OR the word SKIP inside. Do not include any other text, explanation, or formatting."""
108
 
109
  messages = [
110
  {
 
128
  api_key: str,
129
  image: Image.Image | None = None, # Image is now optional
130
  previous_raw_response: str | None = None, # Added for re-prompting
131
+ exam_name: str | None = None,
132
+ exam_year: str | None = None,
133
+ question_type: str = "MCQ_SINGLE_CORRECT", # New parameter with default
134
  max_tokens: int = 100,
135
  request_timeout: int = 60
136
  ) -> tuple[list[int] | str | None, str | None]: # Allow predicted_answer to be "SKIP"
 
142
  api_key (str): The OpenRouter API key.
143
  image (Image.Image | None): The question image (for initial prompt). Default None.
144
  previous_raw_response (str | None): The raw response from a previous failed parse attempt (for re-prompt). Default None.
145
+ exam_name (str | None): The name of the exam (e.g., "NEET", "JEE"). Required if 'image' is provided for initial prompt.
146
+ exam_year (str | None): The year of the exam. Required if 'image' is provided for initial prompt.
147
+ question_type (str): Type of question, e.g., "MCQ_SINGLE_CORRECT", "MCQ_MULTIPLE_CORRECT", "INTEGER".
148
  max_tokens (int): Max tokens for the response.
149
  request_timeout (int): Timeout for the API request in seconds.
150
 
 
154
  - The raw response text from the LLM (or None if API call failed).
155
 
156
  Raises:
157
+ ValueError: If arguments are inconsistent (e.g., image provided without exam details for initial prompt).
158
  requests.exceptions.RequestException: If the API call fails after retries.
159
  """
160
+ logging.info(f"Requesting prediction from model: {model_identifier} for question_type: {question_type}")
161
 
162
  if image is not None and previous_raw_response is None:
163
  # Initial prompt with image
164
+ if not exam_name or not exam_year: # exam_name and exam_year are crucial for initial prompt context
165
  raise ValueError("'exam_name' and 'exam_year' must be provided when 'image' is specified for an initial prompt.")
166
+ logging.debug(f"Constructing initial prompt with image for {exam_name} {exam_year}, type: {question_type}.")
167
  base64_image = encode_image_to_base64(image)
168
+ messages = construct_initial_prompt(base64_image, exam_name, exam_year, question_type)
169
  elif image is None and previous_raw_response is not None:
170
  # Re-prompt based on previous response
171
+ logging.debug(f"Constructing re-prompt based on previous response for type: {question_type}.")
172
+ messages = construct_reprompt_prompt(previous_raw_response, question_type)
173
  else:
174
+ # This condition means either both image and previous_raw_response are None, or both are provided.
175
+ # The latter (both provided) is ambiguous for which prompt to use.
176
+ # The former (both None) means no input to act on.
177
+ raise ValueError("Provide 'image' (with 'exam_name' and 'exam_year') for an initial call, OR 'previous_raw_response' for a re-prompt. Not neither or both.")
178
+
179
 
180
  try:
181
  headers = {
 
196
  timeout=request_timeout
197
  )
198
 
 
199
  if response.status_code in RETRYABLE_STATUS_CODES:
200
+ logging.warning(f"Received retryable status code {response.status_code} from {model_identifier} for {question_type}. Retrying might occur.")
201
+ response.raise_for_status()
202
+
 
 
 
 
 
203
  if not response.ok:
204
+ logging.error(f"API Error for model {model_identifier} ({question_type}): Status {response.status_code} - {response.text}")
205
+ return None, None
206
 
207
  response_json = response.json()
208
  raw_response_text = response_json.get("choices", [{}])[0].get("message", {}).get("content")
209
 
210
  if not raw_response_text:
211
+ logging.warning(f"Empty response content received from model: {model_identifier} for {question_type}")
212
  return None, None
213
 
214
+ logging.info(f"Raw response received from {model_identifier} ({question_type}): '{raw_response_text[:100]}...'")
215
+ # Pass question_type to parse_llm_answer
216
+ parsed_answer = parse_llm_answer(raw_response_text, question_type=question_type)
217
 
218
  if parsed_answer is None:
219
+ logging.warning(f"Failed to parse answer from model {model_identifier} for {question_type}.")
220
 
221
  return parsed_answer, raw_response_text
222
 
223
  except requests.exceptions.Timeout as e:
224
+ logging.error(f"Request timed out for model {model_identifier} ({question_type}): {e}")
225
+ raise
226
  except requests.exceptions.RequestException as e:
227
+ logging.error(f"Request failed for model {model_identifier} ({question_type}): {e}")
228
+ raise
229
  except Exception as e:
230
+ logging.error(f"An unexpected error occurred for model {model_identifier} ({question_type}): {e}")
231
+ return None, None
 
232
 
233
  # Example Usage (requires a valid API key in .env and Pillow/requests/tenacity installed)
234
  if __name__ == '__main__':
235
  from src.utils import load_api_key
236
  try:
 
237
  dummy_image = Image.new('RGB', (60, 30), color = 'black')
238
  api_key = load_api_key()
239
+ test_model = "anthropic/claude-3-haiku"
240
+
241
+ print(f"\n--- Testing with model: {test_model} ---")
242
+
243
+ # Test Case 1: Initial call - MCQ_SINGLE_CORRECT
244
+ print("\nTest Case 1: Initial - MCQ_SINGLE_CORRECT")
245
+ parsed_ans_1, raw_resp_1 = get_openrouter_prediction(
246
+ model_identifier=test_model, api_key=api_key, image=dummy_image,
247
+ exam_name="DUMMY_EXAM", exam_year="2024", question_type="MCQ_SINGLE_CORRECT"
 
 
 
248
  )
249
+ print(f"Parsed: {parsed_ans_1}, Raw: {raw_resp_1[:60] if raw_resp_1 else None}...")
250
 
251
+ # Test Case 2: Initial call - MCQ_MULTIPLE_CORRECT
252
+ print("\nTest Case 2: Initial - MCQ_MULTIPLE_CORRECT")
253
+ parsed_ans_2, raw_resp_2 = get_openrouter_prediction(
254
+ model_identifier=test_model, api_key=api_key, image=dummy_image,
255
+ exam_name="DUMMY_EXAM", exam_year="2024", question_type="MCQ_MULTIPLE_CORRECT"
256
+ )
257
+ print(f"Parsed: {parsed_ans_2}, Raw: {raw_resp_2[:60] if raw_resp_2 else None}...")
258
+
259
+ # Test Case 3: Initial call - INTEGER
260
+ print("\nTest Case 3: Initial - INTEGER")
261
+ parsed_ans_3, raw_resp_3 = get_openrouter_prediction(
262
+ model_identifier=test_model, api_key=api_key, image=dummy_image,
263
+ exam_name="DUMMY_EXAM", exam_year="2024", question_type="INTEGER"
264
+ )
265
+ print(f"Parsed: {parsed_ans_3}, Raw: {raw_resp_3[:60] if raw_resp_3 else None}...")
266
+
267
+
268
+ # Test Case 4: Re-prompt - MCQ_SINGLE_CORRECT (simulating bad initial response)
269
+ print("\nTest Case 4: Re-prompt - MCQ_SINGLE_CORRECT")
270
+ bad_initial_resp_mcq_single = "<answer>1 2</answer> This is some extra text."
271
+ reprompt_ans_4, reprompt_raw_4 = get_openrouter_prediction(
272
+ model_identifier=test_model, api_key=api_key,
273
+ previous_raw_response=bad_initial_resp_mcq_single, question_type="MCQ_SINGLE_CORRECT"
274
+ )
275
+ print(f"Parsed: {reprompt_ans_4}, Raw: {reprompt_raw_4[:60] if reprompt_raw_4 else None}...")
276
+
277
+ # Test Case 5: Re-prompt - MCQ_MULTIPLE_CORRECT (simulating bad initial response)
278
+ print("\nTest Case 5: Re-prompt - MCQ_MULTIPLE_CORRECT")
279
+ bad_initial_resp_mcq_multi = "The answer is <answer>option 1 and 4</answer> because reasons."
280
+ reprompt_ans_5, reprompt_raw_5 = get_openrouter_prediction(
281
+ model_identifier=test_model, api_key=api_key,
282
+ previous_raw_response=bad_initial_resp_mcq_multi, question_type="MCQ_MULTIPLE_CORRECT"
283
+ )
284
+ print(f"Parsed: {reprompt_ans_5}, Raw: {reprompt_raw_5[:60] if reprompt_raw_5 else None}...")
285
 
286
  except ValueError as e:
287
+ print(f"Setup or Argument Error: {e}")
 
 
 
288
  except Exception as e:
 
 
 
289
  print(f"Runtime Error during example execution: {e}")
src/utils.py CHANGED
@@ -28,103 +28,119 @@ def load_api_key(key_name="OPENROUTER_API_KEY"):
28
  return api_key
29
 
30
 
31
- def parse_llm_answer(response_text: str) -> list[int] | str | None:
32
  """
33
  Parses the LLM response text to extract answers within <answer> tags.
34
- Requires exactly one integer answer for NEET context.
35
 
36
  Handles:
37
- - Single integer answers.
 
38
  - The specific string "SKIP" for skipped questions.
39
  - Potential newlines within the tags.
40
 
41
  Args:
42
  response_text (str): The raw text response from the LLM.
 
 
 
43
 
44
  Returns:
45
  list[int] | str | None:
46
- - A list containing the single integer answer if found and valid.
 
47
  - The string "SKIP" if the response indicates a skip.
48
- - None if parsing fails (no tag, invalid content, multiple answers, etc.).
49
  """
50
  if not response_text:
51
  return None
52
 
53
  # Check for exact SKIP response first (case-insensitive)
54
- # Use strip() to handle potential leading/trailing whitespace around the tag itself
55
  if response_text.strip().upper() == "<ANSWER>SKIP</ANSWER>":
56
- logging.info("Parsed answer as SKIP.")
57
  return "SKIP"
58
 
59
- # Regex to find content within <answer>...</answer>, allowing for newlines (re.DOTALL)
60
- # It captures the content inside the tags.
61
  match = re.search(r"<answer>(.*?)</answer>", response_text, re.DOTALL | re.IGNORECASE)
62
 
63
- if match:
64
- extracted_content = match.group(1).strip()
65
- logging.debug(f"DEBUG_PARSE: Raw extracted content: {match.group(1)!r}, Stripped: {extracted_content!r}") # Changed to debug
66
- if not extracted_content:
67
- logging.warning(f"Found <answer> tag but content is empty.")
68
- return None # Empty content within tags
69
-
70
- # Split by comma and strip whitespace for each potential number
71
- potential_answers = [item.strip() for item in extracted_content.split(',')]
72
-
73
- parsed_answers = []
74
- valid = True
75
- for ans_str in potential_answers:
76
- if not ans_str: continue # Skip empty strings resulting from trailing commas etc.
77
- try:
78
- logging.debug(f"DEBUG_PARSE: Attempting int conversion on: {ans_str!r}") # Changed to debug
79
- # Attempt to convert to integer
80
- parsed_answers.append(int(ans_str))
81
- except ValueError:
82
- # Log only the problematic part, not the whole content
83
- logging.warning(f"Could not parse '{ans_str}' as an integer within <answer> tag.")
84
- valid = False
85
- break # Stop parsing this tag content if any part is invalid
86
-
87
- if valid and len(parsed_answers) == 1:
88
- # Return list with the single integer answer
89
- return parsed_answers # No need to sort a single-element list
90
- elif valid and len(parsed_answers) > 1:
91
- logging.warning(f"Found multiple answers ({parsed_answers}) within <answer> tag. Treating as parse failure for single-answer context.")
92
- return None # Treat multiple answers as failure
93
  else:
94
- # Parsing failed (invalid content) or resulted in empty list after validation
95
- # No need to log here, handled by the ValueError log above if applicable
96
  return None
 
 
 
 
97
  else:
98
- # Tag not found
99
- logging.warning(f"Could not find <answer> tag in response.") # Simplified log
100
- return None # Return None if tag is missing, fallback removed
101
 
102
  # Example Usage (for testing)
103
  if __name__ == '__main__':
104
- test_responses = [
105
- "Some text before <answer>2</answer> and after.",
106
- "Blah blah <answer> 1 </answer> blah",
107
- "<answer>1,3</answer>", # Should fail now
108
- "<answer> 4 , 2 </answer> end", # Should fail now
109
- "<answer>\n 3 \n</answer>",
110
- "<answer>\n 1,\n 4 \n</answer>", # Should fail now
111
- "No answer tag here.", # Should fail now
112
- "<answer></answer>",
113
- "<answer> </answer>",
114
- "<answer>abc</answer>",
115
- "<answer>1, abc</answer>",
116
- "<answer>1, </answer>", # Should fail now
117
- "<answer>,2</answer>", # Should fail now
118
- None,
119
- "",
120
- "<ANSWER>SKIP</ANSWER>",
121
- " <ANSWER>SKIP</ANSWER> "
 
 
 
 
 
 
 
 
 
 
 
 
122
  ]
123
 
124
- print("\n--- Testing parse_llm_answer (single answer required) ---")
125
- for resp in test_responses:
126
- parsed = parse_llm_answer(resp)
127
- print(f"Response: '{str(resp)[:50]}...' -> Parsed: {parsed}")
 
128
 
129
  # Test API key loading (will raise error if .env or env var not set)
130
  # try:
 
28
  return api_key
29
 
30
 
31
+ def parse_llm_answer(response_text: str, question_type: str = "MCQ_SINGLE_CORRECT") -> list[int] | str | None:
32
  """
33
  Parses the LLM response text to extract answers within <answer> tags.
34
+ The parsing logic adapts based on the question_type.
35
 
36
  Handles:
37
+ - Single integer answers (for MCQ_SINGLE_CORRECT, INTEGER).
38
+ - Multiple integer answers (comma-separated for MCQ_MULTIPLE_CORRECT).
39
  - The specific string "SKIP" for skipped questions.
40
  - Potential newlines within the tags.
41
 
42
  Args:
43
  response_text (str): The raw text response from the LLM.
44
+ question_type (str): The type of question, e.g., "MCQ_SINGLE_CORRECT",
45
+ "MCQ_MULTIPLE_CORRECT", "INTEGER".
46
+ Defaults to "MCQ_SINGLE_CORRECT".
47
 
48
  Returns:
49
  list[int] | str | None:
50
+ - A list containing integer answer(s) if found and valid.
51
+ (single element for MCQ_SINGLE_CORRECT/INTEGER, potentially multiple for MCQ_MULTIPLE_CORRECT)
52
  - The string "SKIP" if the response indicates a skip.
53
+ - None if parsing fails (no tag, invalid content, type mismatch, etc.).
54
  """
55
  if not response_text:
56
  return None
57
 
58
  # Check for exact SKIP response first (case-insensitive)
 
59
  if response_text.strip().upper() == "<ANSWER>SKIP</ANSWER>":
60
+ logging.info(f"Parsed answer as SKIP for question_type: {question_type}.")
61
  return "SKIP"
62
 
 
 
63
  match = re.search(r"<answer>(.*?)</answer>", response_text, re.DOTALL | re.IGNORECASE)
64
 
65
+ if not match:
66
+ logging.warning(f"Could not find <answer> tag in response for question_type: {question_type}.")
67
+ return None
68
+
69
+ extracted_content = match.group(1).strip()
70
+ if not extracted_content:
71
+ logging.warning(f"Found <answer> tag but content is empty for question_type: {question_type}.")
72
+ return None
73
+
74
+ potential_answers_str = [item.strip() for item in extracted_content.split(',')]
75
+ parsed_numbers = []
76
+ all_valid_numbers = True
77
+
78
+ for ans_str in potential_answers_str:
79
+ if not ans_str: continue # Skip empty strings (e.g., from "1," or ",2")
80
+ try:
81
+ parsed_numbers.append(int(ans_str))
82
+ except ValueError:
83
+ logging.warning(f"Could not parse '{ans_str}' as an integer within <answer> tag for question_type: {question_type}.")
84
+ all_valid_numbers = False
85
+ break
86
+
87
+ if not all_valid_numbers or not parsed_numbers: # If any part was not a number or if list is empty after parsing
88
+ return None
89
+
90
+ # Apply rules based on question_type
91
+ if question_type in ["MCQ_SINGLE_CORRECT", "INTEGER"]:
92
+ if len(parsed_numbers) == 1:
93
+ return parsed_numbers # Returns [integer]
 
94
  else:
95
+ logging.warning(f"Expected single answer for {question_type}, but found {len(parsed_numbers)} numbers: {parsed_numbers}. Treating as parse failure.")
 
96
  return None
97
+ elif question_type == "MCQ_MULTIPLE_CORRECT":
98
+ # For multiple correct, any number of valid integers is acceptable.
99
+ # Return them sorted and unique.
100
+ return sorted(list(set(parsed_numbers)))
101
  else:
102
+ logging.error(f"Unknown question_type '{question_type}' provided to parse_llm_answer.")
103
+ return None
 
104
 
105
  # Example Usage (for testing)
106
  if __name__ == '__main__':
107
+ test_cases = [
108
+ # MCQ_SINGLE_CORRECT / INTEGER
109
+ {"resp": "Some text before <answer>2</answer> and after.", "type": "MCQ_SINGLE_CORRECT", "expected": [2]},
110
+ {"resp": "Blah blah <answer> 1 </answer> blah", "type": "INTEGER", "expected": [1]},
111
+ {"resp": "<answer>\n 3 \n</answer>", "type": "MCQ_SINGLE_CORRECT", "expected": [3]},
112
+ {"resp": "<answer>1,3</answer>", "type": "MCQ_SINGLE_CORRECT", "expected": None}, # Fail: multiple for single
113
+ {"resp": "<answer>1,3</answer>", "type": "INTEGER", "expected": None}, # Fail: multiple for single
114
+ {"resp": "No answer tag here.", "type": "MCQ_SINGLE_CORRECT", "expected": None},
115
+ {"resp": "<answer></answer>", "type": "MCQ_SINGLE_CORRECT", "expected": None},
116
+ {"resp": "<answer> </answer>", "type": "MCQ_SINGLE_CORRECT", "expected": None},
117
+ {"resp": "<answer>abc</answer>", "type": "MCQ_SINGLE_CORRECT", "expected": None},
118
+ {"resp": "<answer>1, abc</answer>", "type": "MCQ_SINGLE_CORRECT", "expected": None},
119
+ {"resp": "<ANSWER>SKIP</ANSWER>", "type": "MCQ_SINGLE_CORRECT", "expected": "SKIP"},
120
+ {"resp": " <ANSWER>SKIP</ANSWER> ", "type": "INTEGER", "expected": "SKIP"},
121
+
122
+ # MCQ_MULTIPLE_CORRECT
123
+ {"resp": "<answer>1,3</answer>", "type": "MCQ_MULTIPLE_CORRECT", "expected": [1, 3]},
124
+ {"resp": "<answer> 4 , 2 </answer> end", "type": "MCQ_MULTIPLE_CORRECT", "expected": [2, 4]},
125
+ {"resp": "<answer>\n 1,\n 4 \n</answer>", "type": "MCQ_MULTIPLE_CORRECT", "expected": [1, 4]},
126
+ {"resp": "<answer>3</answer>", "type": "MCQ_MULTIPLE_CORRECT", "expected": [3]}, # Single is valid for multi
127
+ {"resp": "<answer>3,1,4,1</answer>", "type": "MCQ_MULTIPLE_CORRECT", "expected": [1, 3, 4]}, # Unique and sorted
128
+ {"resp": "<answer>1, </answer>", "type": "MCQ_MULTIPLE_CORRECT", "expected": [1]}, # Handles trailing comma
129
+ {"resp": "<answer>,2</answer>", "type": "MCQ_MULTIPLE_CORRECT", "expected": [2]}, # Handles leading comma
130
+ {"resp": "<answer>1,abc,3</answer>", "type": "MCQ_MULTIPLE_CORRECT", "expected": None}, # Invalid content
131
+ {"resp": "<ANSWER>SKIP</ANSWER>", "type": "MCQ_MULTIPLE_CORRECT", "expected": "SKIP"},
132
+
133
+ # General / Edge cases
134
+ {"resp": None, "type": "MCQ_SINGLE_CORRECT", "expected": None},
135
+ {"resp": "", "type": "MCQ_SINGLE_CORRECT", "expected": None},
136
+ {"resp": "<answer>5</answer>", "type": "UNKNOWN_TYPE", "expected": None}, # Unknown type
137
  ]
138
 
139
+ print("\n--- Testing parse_llm_answer (with question_type) ---")
140
+ for case in test_cases:
141
+ parsed = parse_llm_answer(case["resp"], case["type"])
142
+ print(f"Response: '{str(case['resp'])[:50]}...', Type: {case['type']} -> Parsed: {parsed} (Expected: {case['expected']})")
143
+ assert parsed == case["expected"], f"Mismatch for {case['resp']} with type {case['type']}"
144
 
145
  # Test API key loading (will raise error if .env or env var not set)
146
  # try:
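
The inline `test_cases` loop above can also be expressed as a parametrised pytest module, which tends to give clearer failure reports. A minimal sketch, assuming pytest is installed and the package is importable from the repository root; the expected values follow the cases in this diff.

```python
import pytest

from src.utils import parse_llm_answer


@pytest.mark.parametrize(
    "response, question_type, expected",
    [
        ("<answer>2</answer>", "MCQ_SINGLE_CORRECT", [2]),
        ("<answer>1,3</answer>", "MCQ_SINGLE_CORRECT", None),             # multiple answers rejected
        ("<answer>3,1,4,1</answer>", "MCQ_MULTIPLE_CORRECT", [1, 3, 4]),  # deduplicated and sorted
        ("<answer> 3 </answer>", "INTEGER", [3]),
        ("<ANSWER>SKIP</ANSWER>", "INTEGER", "SKIP"),
        ("No answer tag here.", "MCQ_SINGLE_CORRECT", None),
    ],
)
def test_parse_llm_answer(response, question_type, expected):
    assert parse_llm_answer(response, question_type) == expected
```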