Reja1 committed
Commit eb0a98a · 1 Parent(s): 9c21c5f

--question-ids working and force redownloading

Files changed (7)
  1. README.md +202 -73
  2. jee-neet-benchmark.py +7 -2
  3. requirements.txt +3 -0
  4. src/benchmark_runner.py +1 -1
  5. src/evaluation.py +45 -27
  6. src/prompts.py +19 -16
  7. src/utils.py +110 -56
README.md CHANGED
@@ -58,13 +58,13 @@ column_info:
58
  description: Unique identifier for the question.
59
  data_type: string
60
  exam_name:
61
- description: Name of the exam (e.g., "NEET", "JEE Main").
62
  data_type: string
63
  exam_year:
64
  description: Year of the exam.
65
  data_type: int32
66
  exam_code:
67
- description: Specific paper code/session (e.g., "T3", "S1").
68
  data_type: string
69
  subject:
70
  description: Subject (e.g., "Physics", "Chemistry", "Biology", "Mathematics").
@@ -73,8 +73,8 @@ column_info:
73
  description: Type of question (e.g., "MCQ_SINGLE_CORRECT", "MCQ_MULTIPLE_CORRECT", "INTEGER").
74
  data_type: string
75
  correct_answer:
76
- description: List containing the correct answer index/indices (e.g., [2], [1, 3]) or a single integer for INTEGER type.
77
- data_type: list[int32] # or sequence of int32
78
 
79
  # More Information
80
  # ----------------
@@ -113,7 +113,7 @@ personal_sensitive_information: false # Does the dataset contain PII?
113
  ---
114
  # JEE/NEET LLM Benchmark Dataset
115
 
116
- [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT) <!-- Choose your license -->
117
 
118
  ## Dataset Description
119
 
@@ -123,10 +123,20 @@ This repository contains a benchmark dataset designed for evaluating the capabil
123
 
124
  The questions are presented in image format (`.png`) as they appear in the original papers. The dataset includes metadata linking each image to its corresponding exam details (name, year, subject, question type), and correct answer(s). The benchmark framework supports various question types including Single Correct MCQs, Multiple Correct MCQs (with partial marking for JEE Advanced), and Integer type questions.
125
 
126
- **Current Data (Examples):**
127
- * NEET 2024 (Code T3)
128
- * NEET 2025 (Code 45)
129
- * (Support for JEE Main & Advanced questions can be added by updating `data/metadata.jsonl` and the `images/` directory accordingly.)
130
 
131
  ## How to Use
132
 
@@ -160,96 +170,215 @@ This repository contains scripts to run the benchmark evaluation directly:
160
 
161
  1. **Clone the repository:**
162
  ```bash
163
- # Replace with your actual Hugging Face repository URL
164
- git clone https://huggingface.co/datasets/Reja1/jee-neet-benchmark
165
- cd your-repo-name
166
  # Ensure Git LFS is installed and pull large files if necessary
167
  # git lfs pull
168
  ```
 
169
  2. **Install dependencies:**
170
  ```bash
171
  # It's recommended to use a virtual environment
172
  python -m venv venv
173
- # source venv/bin/activate # or .\venv\Scripts\activate on Windows
174
  pip install -r requirements.txt
175
  ```
 
176
  3. **Configure API Key:**
177
- * Create a file named `.env` in the root directory of the project (`your-repo-name/`).
178
  * Add your OpenRouter API key to this file:
179
  ```dotenv
180
- OPENROUTER_API_KEY=your_actual_openrouter_api_key
181
  ```
182
  * **Important:** The `.gitignore` file is already configured to prevent committing the `.env` file. Never commit your API keys directly.
 
183
  4. **Configure Models:**
184
  * Edit the `configs/benchmark_config.yaml` file.
185
- * Modify the `openrouter_models` list to include the specific model identifiers (e.g., `"openai/gpt-4o"`, `"google/gemini-2.5-pro-preview-03-25"`) you want to evaluate. Ensure these models support vision input on OpenRouter.
186
  * You can also adjust other parameters like `max_tokens` and `request_timeout` if needed.
 
187
  5. **Run the benchmark:**
188
- * Execute the runner script from the root directory:
189
- ```bash
190
- python src/benchmark_runner.py --config configs/benchmark_config.yaml
191
- ```
192
- * You can override the models list from the command line:
193
- ```bash
194
- python src/benchmark_runner.py --config configs/benchmark_config.yaml --models "openai/gpt-4o" "google/gemini-2.5-pro-preview-03-25"
195
- ```
196
- * You can specify a different output directory:
197
- ```bash
198
- python src/benchmark_runner.py --config configs/benchmark_config.yaml --output_dir my_custom_results
199
- ```
200
- * To run the benchmark on a specific exam paper, use the `--exam_name` and `--exam_year` arguments. Both must be provided. The `exam_name` should match the values in your `metadata.jsonl` (e.g., "NEET", "JEE_MAIN", "JEE_ADVANCED").
201
- ```bash
202
- # Example: Run only NEET 2024 questions
203
- python src/benchmark_runner.py --config configs/benchmark_config.yaml --exam_name NEET --exam_year 2024
204
-
205
- # Example: Run only JEE_MAIN 2023 questions (assuming data exists)
206
- python src/benchmark_runner.py --config configs/benchmark_config.yaml --exam_name JEE_MAIN --exam_year 2023
207
- ```
208
- Note: If using exam names with spaces (though not recommended in metadata), enclose them in quotes.
209
  6. **Check Results:**
210
- * Results for each model will be saved in subdirectories within the `results/` folder (or your custom output directory).
211
- * Each model's folder (e.g., `results/openai_gpt-4o_NEET_2024_YYYYMMDD_HHMMSS`) will contain:
212
- * `predictions.jsonl`: Detailed results for each question (prediction, ground truth, raw response, evaluation status, marks awarded).
213
- * `summary.json`: Overall scores and statistics for that model run.
214
- * `summary.md`: A human-readable Markdown version of the summary.
215
- * Sample benchmark results for some models can be found in the `results/` folder (these may be outdated).
216
-
217
- ## Pros
218
-
219
- * **Multimodal Reasoning:** Uses images of questions directly, testing the multimodal reasoning capability of the model.
220
- * **Flexible Exam Support:** Designed to support multiple exams (NEET, JEE Main, JEE Advanced) and various question types (MCQ Single Correct, MCQ Multiple Correct, Integer).
221
- * **Detailed Scoring:** Implements specific scoring rules for different exams and question types, including partial marking for JEE Advanced multiple correct questions.
222
- * **Reattempt Mechanism:** Implements a reattempt mechanism to encourage the model to provide the final answer within `<answer>` tags, adapted for different question types.
223
- * **Reproducibility:** Easily reproducible with simple commands and an OpenRouter API key.
224
- * **Model Flexibility:** Allows testing of various models available through OpenRouter.
225
 
226
  ## Dataset Structure
227
 
228
- * **`data/metadata.jsonl`**: Contains metadata for each question image. Each line is a JSON object with fields like `image_path`, `question_id`, `exam_name` (e.g., "NEET", "JEE_MAIN", "JEE_ADVANCED"), `exam_year`, `subject`, `question_type` (e.g., "MCQ_SINGLE_CORRECT", "MCQ_MULTIPLE_CORRECT", "INTEGER"), `correct_answer`.
229
- * **`images/`**: Contains subdirectories for each exam set (e.g., `images/NEET_2024_T3/`, `images/JEE_MAIN_2023_Example/`), holding the `.png` question images.
230
- * **`src/`**: Python source code for running the benchmark (data loading, LLM interaction, evaluation).
231
- * **`configs/`**: Configuration files for the benchmark.
232
- * **`results/`**: Directory where benchmark results (LLM outputs) will be stored.
233
- * **`jee_neet_benchmark_dataset.py`**: Hugging Face `datasets` loading script (defines how to load `metadata.jsonl` and images).
234
- * **`requirements.txt`**: Python dependencies.
235
- * **`README.md`**: This file.
 
236
 
237
- ## Data Fields
 
 
 
238
 
239
- The dataset contains the following fields (accessible via `datasets`):
240
 
241
- * `image`: The question image (`datasets.Image`).
242
- * `question_id`: Unique identifier for the question (string).
243
- * `exam_name`: Name of the exam (e.g., "NEET", "JEE_MAIN", "JEE_ADVANCED") (string).
244
- * `exam_year`: Year of the exam (int).
245
- * `subject`: Subject (e.g., "Physics", "Chemistry", "Botany", "Zoology", "Mathematics") (string).
246
- * `question_type`: Type of question (e.g., "MCQ_SINGLE_CORRECT", "MCQ_MULTIPLE_CORRECT", "INTEGER") (string).
247
- * `correct_answer`: List containing the correct answer index/indices (e.g., `[2]`, `[1, 3]`) or a single integer for INTEGER type questions (list of int, or int).
248
 
249
- ## Cons / Current Limitations
250
 
251
- * **Data Expansion:** While the framework supports various exams and question types, the current `metadata.jsonl` primarily contains NEET data. More diverse data (especially for JEE Main and Advanced with varied question types) needs to be added to make the benchmark more comprehensive.
252
- * **Max Score in Summary:** The overall maximum score in the generated Markdown summary is currently marked as "N/A (variable per question)" due to the complexity of calculating it accurately across mixed question types in a single run. Each question's max score depends on its type and exam.
253
 
254
  ## Citation
255
 
@@ -260,7 +389,7 @@ If you use this dataset or benchmark code, please cite:
260
  title={JEE/NEET LLM Benchmark},
261
  author={Md Rejaullah},
262
  year={2025},
263
- howpublished={\\url{https://huggingface.co/datasets/Reja1/jee-neet-benchmark}},
264
  }
265
  ```
266
 
 
58
  description: Unique identifier for the question.
59
  data_type: string
60
  exam_name:
61
+ description: Name of the exam (e.g., "NEET", "JEE_MAIN", "JEE_ADVANCED").
62
  data_type: string
63
  exam_year:
64
  description: Year of the exam.
65
  data_type: int32
66
  exam_code:
67
+ description: Specific paper code/session (e.g., "T3", "45").
68
  data_type: string
69
  subject:
70
  description: Subject (e.g., "Physics", "Chemistry", "Biology", "Mathematics").
 
73
  description: Type of question (e.g., "MCQ_SINGLE_CORRECT", "MCQ_MULTIPLE_CORRECT", "INTEGER").
74
  data_type: string
75
  correct_answer:
76
+ description: List of correct answer strings (e.g., ["A"], ["B", "C"]); for INTEGER questions the list contains a single numeric string (e.g., ["42"]).
77
+ data_type: list[string] # Updated to reflect string format
78
 
79
  # More Information
80
  # ----------------
 
113
  ---
114
  # JEE/NEET LLM Benchmark Dataset
115
 
116
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
117
 
118
  ## Dataset Description
119
 
 
123
 
124
  The questions are presented in image format (`.png`) as they appear in the original papers. The dataset includes metadata linking each image to its corresponding exam details (name, year, subject, question type), and correct answer(s). The benchmark framework supports various question types including Single Correct MCQs, Multiple Correct MCQs (with partial marking for JEE Advanced), and Integer type questions.
125
 
126
+ **Current Data:**
127
+ * **NEET 2024** (Code T3): 200 questions across Physics, Chemistry, Botany, and Zoology
128
+ * **NEET 2025** (Code 45): 180 questions across Physics, Chemistry, Botany, and Zoology
129
+ * **JEE Advanced 2024** (Paper 1 & 2): 102 questions across Physics, Chemistry, and Mathematics
130
+ * **Total:** 482 questions with comprehensive metadata
131
+
132
+ ## Key Features
133
+
134
+ * **🖼️ Multimodal Reasoning:** Uses images of questions directly, testing the multimodal reasoning capability of models
135
+ * **📊 Exam-Specific Scoring:** Implements authentic scoring rules for different exams and question types, including partial marking for JEE Advanced
136
+ * **🔄 Robust API Handling:** Built-in retry mechanism and re-prompting for failed API calls or parsing errors
137
+ * **🎯 Flexible Filtering:** Filter by exam name, year, or specific question IDs for targeted evaluation
138
+ * **📈 Comprehensive Results:** Generates detailed JSON and human-readable Markdown summaries with section-wise breakdowns
139
+ * **🔧 Easy Configuration:** Simple YAML-based configuration for models and parameters
140
 
141
  ## How to Use
142
 
 
170
 
171
  1. **Clone the repository:**
172
  ```bash
173
+ # Replace with your actual repository URL
174
+ git clone https://github.com/your-username/jee-neet-benchmark
175
+ cd jee-neet-benchmark
176
  # Ensure Git LFS is installed and pull large files if necessary
177
  # git lfs pull
178
  ```
179
+
180
  2. **Install dependencies:**
181
  ```bash
182
  # It's recommended to use a virtual environment
183
  python -m venv venv
184
+ source venv/bin/activate # On Windows: venv\Scripts\activate
185
  pip install -r requirements.txt
186
  ```
187
+
188
  3. **Configure API Key:**
189
+ * Create a file named `.env` in the root directory of the project.
190
  * Add your OpenRouter API key to this file:
191
  ```dotenv
192
+ OPENROUTER_API_KEY=your_actual_openrouter_api_key_here
193
  ```
194
  * **Important:** The `.gitignore` file is already configured to prevent committing the `.env` file. Never commit your API keys directly.
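
   For reference, the runner loads this key through `python-dotenv` (see `load_api_key` in `src/utils.py`); a minimal equivalent sketch:

   ```python
   # Minimal sketch of what load_api_key in src/utils.py does: read the key
   # from .env (or the environment) via python-dotenv.
   import os
   from dotenv import load_dotenv

   load_dotenv()  # picks up .env from the current working directory
   api_key = os.getenv("OPENROUTER_API_KEY")
   if not api_key:
       raise ValueError("OPENROUTER_API_KEY not found in environment or .env")
   ```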
195
+
196
  4. **Configure Models:**
197
  * Edit the `configs/benchmark_config.yaml` file.
198
+ * Modify the `openrouter_models` list to include the specific model identifiers you want to evaluate:
199
+ ```yaml
200
+ openrouter_models:
201
+ - "google/gemini-2.5-pro-preview-03-25"
202
+ - "openai/gpt-4o"
203
+ - "anthropic/claude-3-5-sonnet-20241022"
204
+ ```
205
+ * Ensure these models support vision input on OpenRouter.
206
  * You can also adjust other parameters like `max_tokens` and `request_timeout` if needed.
207
+
208
  5. **Run the benchmark:**
209
+
210
+ **Basic usage (run a single model on all questions):**
211
+ ```bash
212
+ python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "google/gemini-2.5-pro-preview-03-25"
213
+ ```
214
+
215
+ **Filter by exam and year:**
216
+ ```bash
217
+ # Run only NEET 2024 questions
218
+ python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "openai/gpt-4o" --exam_name NEET --exam_year 2024
219
+
220
+ # Run only JEE Advanced 2024 questions
221
+ python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "anthropic/claude-3-5-sonnet-20241022" --exam_name JEE_ADVANCED --exam_year 2024
222
+ ```
223
+
224
+ **Run specific questions:**
225
+ ```bash
226
+ # Run specific question IDs
227
+ python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "google/gemini-2.5-pro-preview-03-25" --question_ids "NEET_2024_T3_001,NEET_2024_T3_002,JEE_ADVANCE_2024_P1_MATH_01"
228
+ ```
229
+
230
+ **Custom output directory:**
231
+ ```bash
232
+ python src/benchmark_runner.py --config configs/benchmark_config.yaml --model "openai/gpt-4o" --output_dir my_custom_results
233
+ ```
234
+
235
+ **Available filtering options:**
236
+ - `--exam_name`: Choose from `NEET`, `JEE_MAIN`, `JEE_ADVANCED`, or `all` (default)
237
+ - `--exam_year`: Choose from available years (`2024`, `2025`, etc.) or `all` (default)
238
+ - `--question_ids`: Comma-separated list of specific question IDs to evaluate (e.g., "NEET_2024_T3_001,JEE_ADVANCE_2024_P1_MATH_01")
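
   As a rough illustration of how `--question_ids` maps onto the dataset (placeholder IDs; the runner's actual filtering in `src/benchmark_runner.py` may differ):

   ```python
   # Illustrative sketch of --question_ids style filtering on the dataset.
   # The IDs below are placeholders; the real logic lives in src/benchmark_runner.py.
   from datasets import load_dataset

   question_ids_arg = "NEET_2024_T3_001,NEET_2024_T3_002"  # as passed on the CLI
   requested_ids = {qid.strip() for qid in question_ids_arg.split(",") if qid.strip()}

   dataset = load_dataset("Reja1/jee-neet-benchmark", split="test", trust_remote_code=True)
   subset = dataset.filter(lambda row: row["question_id"] in requested_ids)
   print(f"Selected {len(subset)} of {len(dataset)} questions")
   ```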
239
+
240
  6. **Check Results:**
241
+ * Results for each model run will be saved in timestamped subdirectories within the `results/` folder.
242
+ * Each run's folder (e.g., `results/google_gemini-2.5-pro-preview-03-25_NEET_2024_20250524_141230/`) contains:
243
+ * **`predictions.jsonl`**: Detailed results for each question including:
244
+ - Model predictions and ground truth
245
+ - Raw LLM responses
246
+ - Evaluation status and marks awarded
247
+ - API call success/failure information
248
+ * **`summary.json`**: Overall scores and statistics in JSON format
249
+ * **`summary.md`**: Human-readable Markdown summary with:
250
+ - Overall exam scores
251
+ - Section-wise breakdown (by subject)
252
+ - Detailed statistics on correct/incorrect/skipped questions
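
   Because the outputs are plain JSON/JSONL, a finished run is easy to inspect programmatically. A quick-look sketch (the directory name reuses the example above; check `predictions.jsonl` for the exact per-question keys your runner version writes):

   ```python
   # Quick-look sketch for a finished run; directory name is illustrative.
   import json
   from pathlib import Path

   run_dir = Path("results/google_gemini-2.5-pro-preview-03-25_NEET_2024_20250524_141230")

   summary = json.loads((run_dir / "summary.json").read_text())
   print(json.dumps(summary, indent=2))

   with open(run_dir / "predictions.jsonl") as f:
       records = [json.loads(line) for line in f if line.strip()]
   print(f"{len(records)} questions evaluated")
   ```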
253
+
254
+ ## Scoring System
255
+
256
+ The benchmark implements authentic scoring systems for each exam type:
257
+
258
+ ### NEET Scoring
259
+ - **Single Correct MCQ**: +4 for correct, -1 for incorrect, 0 for skipped
260
+
261
+ ### JEE Main Scoring
262
+ - **Single Correct MCQ**: +4 for correct, -1 for incorrect, 0 for skipped
263
+ - **Integer Type**: +4 for correct, 0 for incorrect, 0 for skipped
264
+
265
+ ### JEE Advanced Scoring
266
+ - **Single Correct MCQ**: +3 for correct, -1 for incorrect, 0 for skipped
267
+ - **Multiple Correct MCQ**: Partial marking system (sketched in the snippet after this list):
268
+ - +4 for all correct options selected
269
+ - +3 for 3 out of 4 correct options (when 4 are correct)
270
+ - +2 for 2 out of 3+ correct options
271
+ - +1 for 1 out of 2+ correct options
272
+ - -2 for any incorrect option selected
273
+ - 0 for skipped
274
+ - **Integer Type**: +4 for correct, 0 for incorrect, 0 for skipped
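
A minimal sketch of this partial-marking rule, restating the bullets above; the authoritative implementation lives in `src/evaluation.py`:

```python
# Minimal restatement of the JEE Advanced partial-marking rule listed above.
# The authoritative logic lives in src/evaluation.py.
def jee_advanced_multi_correct_score(predicted: set[str], correct: set[str]) -> int:
    if not predicted:                      # skipped
        return 0
    if predicted - correct:                # any incorrect option selected
        return -2
    if predicted == correct:               # all correct options selected
        return 4
    hits = len(predicted & correct)
    if hits == 3 and len(correct) == 4:    # 3 of 4 correct options
        return 3
    if hits == 2 and len(correct) >= 3:    # 2 of 3+ correct options
        return 2
    if hits == 1 and len(correct) >= 2:    # 1 of 2+ correct options
        return 1
    return 0

print(jee_advanced_multi_correct_score({"A", "C"}, {"A", "C", "D"}))  # 2
```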
275
+
276
+ ## Advanced Features
277
+
278
+ ### Retry Mechanism
279
+ - Automatic retry for failed API calls (up to 3 attempts with exponential backoff)
280
+ - Separate retry pass for questions that failed initially
281
+ - Comprehensive error tracking and reporting
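
The retry behaviour builds on `tenacity` (already in `requirements.txt`). A self-contained illustration of the pattern, not the project's actual OpenRouter call (which lives in `src/llm_interface.py`):

```python
# Self-contained illustration of retry-with-exponential-backoff using tenacity.
# The real API call in src/llm_interface.py may use different limits.
import itertools
from tenacity import retry, stop_after_attempt, wait_exponential

_attempts = itertools.count(1)

@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, min=1, max=30))
def flaky_call() -> str:
    # Stand-in for an API request: fail twice, then succeed, so the retry kicks in.
    if next(_attempts) < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(flaky_call())  # succeeds on the third attempt
```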
282
+
283
+ ### Re-prompting System
284
+ - If initial response parsing fails, the system automatically re-prompts the model
285
+ - Uses the previous response to ask for properly formatted answers
286
+ - Adapts prompts based on question type (MCQ vs Integer)
287
+
288
+ ### Comprehensive Evaluation
289
+ - Tracks multiple metrics: correct answers, partial credit, skipped questions, API failures
290
+ - Section-wise breakdown by subject
291
+ - Detailed logging with color-coded progress indicators
292
 
293
  ## Dataset Structure
294
 
295
+ * **`data/metadata.jsonl`**: Contains metadata for each question image with fields:
296
+ - `image_path`: Path to the question image
297
+ - `question_id`: Unique identifier (e.g., "NEET_2024_T3_001")
298
+ - `exam_name`: Exam type ("NEET", "JEE_MAIN", "JEE_ADVANCED")
299
+ - `exam_year`: Year of the exam (integer)
300
+ - `exam_code`: Paper/session code (e.g., "T3", "P1")
301
+ - `subject`: Subject name (e.g., "Physics", "Chemistry", "Mathematics")
302
+ - `question_type`: Question format ("MCQ_SINGLE_CORRECT", "MCQ_MULTIPLE_CORRECT", "INTEGER")
303
+ - `correct_answer`: List of correct answer strings (e.g., ["A"], ["B", "C"], ["42"])
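
For orientation, a hypothetical `metadata.jsonl` record with these fields, plus a minimal reader (values shown are illustrative, not taken from the dataset):

```python
# Hypothetical metadata.jsonl record (values are illustrative) and a minimal reader.
import json

example_record = {
    "image_path": "images/NEET_2024_T3/NEET_2024_T3_001.png",  # illustrative path
    "question_id": "NEET_2024_T3_001",
    "exam_name": "NEET",
    "exam_year": 2024,
    "exam_code": "T3",
    "subject": "Physics",
    "question_type": "MCQ_SINGLE_CORRECT",
    "correct_answer": ["2"],
}
print(json.dumps(example_record))

with open("data/metadata.jsonl") as f:
    records = [json.loads(line) for line in f if line.strip()]
print(f"{len(records)} questions in metadata")
```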
304
 
305
+ * **`images/`**: Contains subdirectories for each exam set:
306
+ - `images/NEET_2024_T3/`: NEET 2024 question images
307
+ - `images/NEET_2025_45/`: NEET 2025 question images
308
+ - `images/JEE_ADVANCE_2024/`: JEE Advanced 2024 question images
309
 
310
+ * **`src/`**: Python source code for the benchmark system:
311
+ - `benchmark_runner.py`: Main benchmark execution script
312
+ - `llm_interface.py`: OpenRouter API interface with retry logic
313
+ - `evaluation.py`: Scoring and evaluation functions
314
+ - `prompts.py`: LLM prompts for different question types
315
+ - `utils.py`: Utility functions for parsing and configuration
316
+
317
+ * **`configs/`**: Configuration files:
318
+ - `benchmark_config.yaml`: Model selection and API parameters
319
 
320
+ * **`results/`**: Directory where benchmark results are stored (timestamped subdirectories)
321
 
322
+ * **`jee-neet-benchmark.py`**: Hugging Face `datasets` loading script
323
+
324
+ ## Data Fields
325
+
326
+ The dataset contains the following fields (accessible via `datasets`):
327
 
328
+ * `image`: The question image (`datasets.Image`)
329
+ * `question_id`: Unique identifier for the question (string)
330
+ * `exam_name`: Name of the exam (e.g., "NEET", "JEE_ADVANCED") (string)
331
+ * `exam_year`: Year of the exam (int)
332
+ * `exam_code`: Paper/session code (e.g., "T3", "P1") (string)
333
+ * `subject`: Subject (e.g., "Physics", "Chemistry", "Mathematics") (string)
334
+ * `question_type`: Type of question (e.g., "MCQ_SINGLE_CORRECT", "INTEGER") (string)
335
+ * `correct_answer`: List containing the correct answer strings.
336
+ - For MCQs, these are option identifiers (e.g., `["1"]`, `["A"]`, `["B", "C"]`). The LLM should output the identifier as it appears in the question.
337
+ - For INTEGER type, this is the numerical answer as a string (e.g., `["42"]`, `["12.75"]`). The LLM should output the number.
338
+ - For some `MCQ_SINGLE_CORRECT` questions, multiple answers in this list are considered correct if the LLM prediction matches any one of them.
339
+ (list of strings)
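
A minimal sketch of loading these fields through the local loading script, mirroring the `load_dataset` call in `src/benchmark_runner.py` (run from the repository root):

```python
# Sketch of loading the dataset via the local loading script, as the runner does.
from datasets import load_dataset
from datasets import Image as HFImage

ds = load_dataset(
    "jee-neet-benchmark.py",
    split="test",
    data_files={"test": "data/metadata.jsonl"},
    data_dir=".",
    trust_remote_code=True,
)
ds = ds.cast_column("image", HFImage(decode=True))  # images as PIL, as in the runner
row = ds[0]
print(row["question_id"], row["exam_name"], row["question_type"], row["correct_answer"])
```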
340
+
341
+ ## LLM Answer Format
342
+
343
+ The LLM is expected to return its answer enclosed in `<answer>` tags. For example:
344
+ - MCQ Single Correct (Option A): `<answer>A</answer>`
345
+ - MCQ Single Correct (Option 2): `<answer>2</answer>`
346
+ - MCQ Multiple Correct (Options B and D): `<answer>B,D</answer>`
347
+ - Integer Answer: `<answer>42</answer>`
348
+ - Decimal Answer: `<answer>12.75</answer>`
349
+ - Skipped Question: `<answer>SKIP</answer>`
350
+
351
+ The system parses these formats. Prompts are designed to guide the LLM accordingly.
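
A stripped-down sketch of the tag extraction; the full parser (`parse_llm_answer` in `src/utils.py`) additionally validates the content per question type:

```python
# Stripped-down sketch of the <answer> tag extraction; per-question-type
# validation is omitted here (see parse_llm_answer in src/utils.py).
import re

def extract_answer(response_text: str) -> list[str] | str | None:
    if re.search(r"<answer>\s*SKIP\s*</answer>", response_text, re.IGNORECASE):
        return "SKIP"
    m = re.search(r"<answer>(.*?)</answer>", response_text, re.DOTALL | re.IGNORECASE)
    if not m or not m.group(1).strip():
        return None
    return [part.strip().upper() for part in m.group(1).split(",") if part.strip()]

print(extract_answer("Reasoning...\n<answer>B,D</answer>"))  # ['B', 'D']
```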
352
+
353
+ ## Troubleshooting
354
+
355
+ ### Common Issues
356
+
357
+ **API Key Issues:**
358
+ - Ensure your `.env` file is in the root directory
359
+ - Verify your OpenRouter API key is valid and has sufficient credits
360
+ - Check that the key has access to vision-capable models
361
+
362
+ **Model Not Found:**
363
+ - Verify the model identifier exists on OpenRouter
364
+ - Ensure the model supports vision input
365
+ - Check your OpenRouter account has access to the specific model
366
+
367
+ **Memory Issues:**
368
+ - Reduce `max_tokens` in the config file
369
+ - Process smaller subsets using `--question_ids` filter
370
+ - Use models with smaller context windows
371
+
372
+ **Parsing Failures:**
373
+ - The system automatically attempts re-prompting for parsing failures
374
+ - Check the raw responses in `predictions.jsonl` to debug prompt issues
375
+ - Consider adjusting prompts in `src/prompts.py` for specific models
376
+
377
+ ## Current Limitations
378
+
379
+ * **Dataset Size:** While comprehensive, the dataset could benefit from more JEE Main questions and additional years
380
+ * **Language Support:** Currently only supports English questions
381
+ * **Model Dependencies:** Requires models with vision capabilities available through OpenRouter
382
 
383
  ## Citation
384
 
 
389
  title={JEE/NEET LLM Benchmark},
390
  author={Md Rejaullah},
391
  year={2025},
392
+ howpublished={\url{https://huggingface.co/datasets/Reja1/jee-neet-benchmark}},
393
  }
394
  ```
395
 
jee-neet-benchmark.py CHANGED
@@ -60,7 +60,7 @@ class JeeNeetBenchmark(datasets.GeneratorBasedBuilder):
60
  "question_id": datasets.Value("string"),
61
  "exam_name": datasets.Value("string"),
62
  "exam_year": datasets.Value("int32"),
63
- "exam_code": datasets.Value("string"),
64
  "subject": datasets.Value("string"),
65
  "question_type": datasets.Value("string"),
66
  "correct_answer": datasets.Sequence(datasets.Value("string")), # List of strings
@@ -81,6 +81,11 @@ class JeeNeetBenchmark(datasets.GeneratorBasedBuilder):
81
  repo_metadata_path = os.path.join("data", "metadata.jsonl")
82
  repo_images_archive_path = "images.tar.gz" # At the root of the repository
83
 
84
  try:
85
  # Download and extract metadata and images archive
86
  downloaded_files = dl_manager.download_and_extract({
@@ -155,7 +160,7 @@ class JeeNeetBenchmark(datasets.GeneratorBasedBuilder):
155
  "question_id": row.get("question_id", ""),
156
  "exam_name": row.get("exam_name", ""),
157
  "exam_year": row.get("exam_year", -1), # Use a default if missing
158
- "exam_code": row.get("exam_code", ""),
159
  "subject": row.get("subject", ""),
160
  "question_type": row.get("question_type", ""),
161
  "correct_answer": row.get("correct_answer", []),
 
60
  "question_id": datasets.Value("string"),
61
  "exam_name": datasets.Value("string"),
62
  "exam_year": datasets.Value("int32"),
63
+ "exam_code": datasets.Value("string"), # Will provide default if missing in source
64
  "subject": datasets.Value("string"),
65
  "question_type": datasets.Value("string"),
66
  "correct_answer": datasets.Sequence(datasets.Value("string")), # List of strings
 
81
  repo_metadata_path = os.path.join("data", "metadata.jsonl")
82
  repo_images_archive_path = "images.tar.gz" # At the root of the repository
83
 
84
+ # Ensure force download and extract for the current run
85
+ # dl_manager.download_config is an instance of datasets.DownloadConfig
86
+ dl_manager.download_config.force_download = True
87
+ dl_manager.download_config.force_extract = True # If redownloading, re-extraction is also desired
88
+
89
  try:
90
  # Download and extract metadata and images archive
91
  downloaded_files = dl_manager.download_and_extract({
 
160
  "question_id": row.get("question_id", ""),
161
  "exam_name": row.get("exam_name", ""),
162
  "exam_year": row.get("exam_year", -1), # Use a default if missing
163
+ "exam_code": row.get("exam_code", "N/A"), # Provide "N/A" if exam_code is missing
164
  "subject": row.get("subject", ""),
165
  "question_type": row.get("question_type", ""),
166
  "correct_answer": row.get("correct_answer", []),
requirements.txt CHANGED
@@ -19,4 +19,7 @@ python-dotenv>=0.19.0
19
  # For handling retries during API calls
20
  tenacity>=8.0.0
21
22
  # Add other dependencies as needed for your benchmark scripts (e.g., numpy, scikit-learn for evaluation)
 
19
  # For handling retries during API calls
20
  tenacity>=8.0.0
21
 
22
+ # For progress bars in benchmark execution
23
+ tqdm>=4.60.0
24
+
25
  # Add other dependencies as needed for your benchmark scripts (e.g., numpy, scikit-learn for evaluation)
src/benchmark_runner.py CHANGED
@@ -236,7 +236,7 @@ def run_benchmark(
236
  # Explicitly specify data_files and data_dir for local loading.
237
  # data_dir should be the project root ('.') when loading a local script,
238
  # as the script is copied to a cache and needs to know where the actual data is.
239
- dataset = load_dataset(dataset_path, split='test', data_files={'test': 'data/metadata.jsonl'}, data_dir='.', trust_remote_code=True)
240
  dataset = dataset.cast_column("image", HFImage(decode=True)) # Ensure images are loaded as PIL
241
  logging.info(f"Dataset loaded successfully from path: {dataset_path}. Original number of questions: {len(dataset)}")
242
  except Exception as e:
 
236
  # Explicitly specify data_files and data_dir for local loading.
237
  # data_dir should be the project root ('.') when loading a local script,
238
  # as the script is copied to a cache and needs to know where the actual data is.
239
+ dataset = load_dataset(dataset_path, split='test', data_files={'test': 'data/metadata.jsonl'}, data_dir=os.getcwd(), trust_remote_code=True)
240
  dataset = dataset.cast_column("image", HFImage(decode=True)) # Ensure images are loaded as PIL
241
  logging.info(f"Dataset loaded successfully from path: {dataset_path}. Original number of questions: {len(dataset)}")
242
  except Exception as e:
src/evaluation.py CHANGED
@@ -96,17 +96,23 @@ def calculate_single_question_score_details(result_item: Dict[str, Any]) -> Dict
96
  current_score_change = 0
97
  evaluation_status = "unknown"
98
 
99
- # Ensure truth is a set of uppercase strings for consistent processing
 
 
100
  truth_set: set
101
- if isinstance(truth, str): # e.g. for single integer answer like "14" or single option "A"
 
 
 
102
  truth_set = {truth.upper()}
103
- elif isinstance(truth, list) and all(isinstance(t, str) for t in truth): # e.g. ["A", "C"] or ["14"]
104
  truth_set = {s.upper() for s in truth}
105
- elif isinstance(truth, list) and all(isinstance(t, int) for t in truth): # Handle old integer list format if it slips through
106
- logging.warning(f"Ground truth for {question_id} is List[int]: {truth}. Converting to List[str].")
 
107
  truth_set = {str(s).upper() for s in truth}
108
- elif isinstance(truth, int): # Handle old integer format if it slips through
109
- logging.warning(f"Ground truth for {question_id} is int: {truth}. Converting to str.")
110
  truth_set = {str(truth).upper()}
111
  else:
112
  logging.error(f"Invalid ground_truth format for {question_id}: {truth} (type: {type(truth)}). Assigning 0 marks.")
@@ -124,29 +130,41 @@ def calculate_single_question_score_details(result_item: Dict[str, Any]) -> Dict
124
  evaluation_status = "skipped"
125
  elif isinstance(pred, list) and all(isinstance(p, str) for p in pred):
126
  pred_set = {p.upper() for p in pred} # Convert to uppercase strings
127
- if exam_name == "NEET" and question_type == "MCQ_SINGLE_CORRECT":
128
- if pred_set == truth_set and len(pred_set) == 1:
129
  current_score_change = 4; evaluation_status = "correct"
130
  else:
131
- current_score_change = -1; evaluation_status = "incorrect"
132
- elif exam_name == "JEE_MAIN":
133
- if question_type == "MCQ_SINGLE_CORRECT":
134
- if pred_set == truth_set and len(pred_set) == 1:
135
- current_score_change = 4; evaluation_status = "correct"
136
- else:
137
- current_score_change = -1; evaluation_status = "incorrect"
138
- elif question_type == "INTEGER": # Integer answers are now strings in a list e.g. ["14"]
139
- if len(pred_set) == 1 and list(pred_set)[0] in truth_set: # Compare the single string prediction
140
- current_score_change = 4; evaluation_status = "correct"
141
- else:
142
- current_score_change = 0; evaluation_status = "incorrect"
143
  elif exam_name == "JEE_ADVANCED":
144
- if question_type == "MCQ_SINGLE_CORRECT":
145
- if pred_set == truth_set and len(pred_set) == 1:
146
- current_score_change = 3; evaluation_status = "correct"
147
- else:
148
- current_score_change = -1; evaluation_status = "incorrect"
149
- elif question_type == "INTEGER": # Integer answers are now strings in a list e.g. ["12"]
150
  if len(pred_set) == 1 and list(pred_set)[0] in truth_set: # Compare the single string prediction
151
  current_score_change = 4; evaluation_status = "correct"
152
  else:
 
96
  current_score_change = 0
97
  evaluation_status = "unknown"
98
 
99
+ # Ensure truth is a set of uppercase strings for consistent processing.
100
+ # Ground truth from metadata.jsonl is expected to be a list of strings.
101
+ # e.g., ["1"], ["A"], ["12.75"], ["A", "C"]
102
  truth_set: set
103
+ if isinstance(truth, str):
104
+ # This case might occur if metadata had a single string instead of list for some reason,
105
+ # or if an old format slips through. Convert to a set of one uppercase string.
106
+ logging.warning(f"Ground truth for {question_id} is a single string: '{truth}'. Converting to set.")
107
  truth_set = {truth.upper()}
108
+ elif isinstance(truth, list) and all(isinstance(t, str) for t in truth):
109
  truth_set = {s.upper() for s in truth}
110
+ # Deprecated int/List[int] handling, as metadata should now be List[str]
111
+ elif isinstance(truth, list) and any(isinstance(t, int) for t in truth):
112
+ logging.warning(f"Ground truth for {question_id} contains integers: {truth}. Converting all to strings.")
113
  truth_set = {str(s).upper() for s in truth}
114
+ elif isinstance(truth, int):
115
+ logging.warning(f"Ground truth for {question_id} is int: {truth}. Converting to string set.")
116
  truth_set = {str(truth).upper()}
117
  else:
118
  logging.error(f"Invalid ground_truth format for {question_id}: {truth} (type: {type(truth)}). Assigning 0 marks.")
 
130
  evaluation_status = "skipped"
131
  elif isinstance(pred, list) and all(isinstance(p, str) for p in pred):
132
  pred_set = {p.upper() for p in pred} # Convert to uppercase strings
133
+
134
+ # Handle MCQ_SINGLE_CORRECT first, as it has special logic for multiple truths.
135
+ # The parser (`parse_llm_answer`) returns `pred` as `list[str]` with one element for valid single answers.
136
+ if question_type == "MCQ_SINGLE_CORRECT":
137
+ # A prediction is correct if its single element is present in the truth_set.
138
+ # This accommodates metadata entries where `correct_answer` for an MCQ_SINGLE_CORRECT
139
+ # might list multiple acceptable options (e.g., if a question had two official correct answers).
140
+ is_correct = False
141
+ if len(pred_set) == 1: # Ensure prediction is indeed a single option
142
+ single_pred_answer = list(pred_set)[0] # Get the single predicted option
143
+ if single_pred_answer in truth_set: # Check if this predicted option is in the set of true answers
144
+ is_correct = True
145
+
146
+ if is_correct:
147
+ evaluation_status = "correct"
148
+ if exam_name == "NEET": current_score_change = 4
149
+ elif exam_name == "JEE_MAIN": current_score_change = 4
150
+ elif exam_name == "JEE_ADVANCED": current_score_change = 3
151
+ else: current_score_change = 1 # Default positive score for unknown exam
152
+ else:
153
+ evaluation_status = "incorrect"
154
+ if exam_name == "NEET": current_score_change = -1
155
+ elif exam_name == "JEE_MAIN": current_score_change = -1
156
+ elif exam_name == "JEE_ADVANCED": current_score_change = -1
157
+ else: current_score_change = 0 # Default no penalty
158
+
159
+ elif exam_name == "JEE_MAIN" and question_type == "INTEGER": # Integer answers are now strings in a list e.g. ["14"]
160
+ if len(pred_set) == 1 and list(pred_set)[0] in truth_set: # Compare the single string prediction
161
  current_score_change = 4; evaluation_status = "correct"
162
  else:
163
+ current_score_change = 0; evaluation_status = "incorrect"
164
+
165
  elif exam_name == "JEE_ADVANCED":
166
+ # Note: MCQ_SINGLE_CORRECT for JEE_ADVANCED is handled by the common block above
167
+ if question_type == "INTEGER": # Integer answers are now strings in a list e.g. ["12"]
168
  if len(pred_set) == 1 and list(pred_set)[0] in truth_set: # Compare the single string prediction
169
  current_score_change = 4; evaluation_status = "correct"
170
  else:
src/prompts.py CHANGED
@@ -3,41 +3,41 @@
3
  # --- Initial Prompt Components ---
4
 
5
  ANSWER_FORMAT_INSTRUCTIONS = {
6
- "MCQ_SINGLE_CORRECT": "determine the single correct integer number corresponding to the correct option.",
7
- "INTEGER": "determine the single non-negative integer that is the answer.",
8
- "MCQ_MULTIPLE_CORRECT": "determine all correct integer option(s). If multiple options are correct, list them separated by commas.",
9
- "DEFAULT": "determine the correct answer."
10
  }
11
 
12
  EXAMPLE_INSTRUCTIONS = {
13
- "MCQ_SINGLE_CORRECT": "- If the correct option is 2: <answer>2</answer>",
14
- "INTEGER": "- If the answer is 5: <answer>5</answer>",
15
- "MCQ_MULTIPLE_CORRECT": "- If options A and B are correct: <answer>A,B</answer>\n- If only option B is correct: <answer>B</answer>",
16
  "DEFAULT": "- Example: <answer>Your Answer</answer>"
17
  }
18
 
19
- INITIAL_PROMPT_TEMPLATE = """You are an expert at analyzing exam questions from the {exam_name} {exam_year} exam ({question_type}) and extracting the correct answer option(s).
20
  This exam uses specific marking schemes, so accuracy and correct formatting are crucial.
21
 
22
  Please think step-by-step to solve the problem.
23
  Examine the provided image of the question carefully.
24
  1. Analyze the question and the provided options (if any).
25
  2. Reason through the problem to {answer_format_instruction}
26
- 3. Format your final answer by enclosing ONLY the determined integer(s) within <answer> tags.
27
 
28
  Examples:
29
  {example_instruction}
30
  - If you are unsure or cannot determine the answer: <answer>SKIP</answer>
31
 
32
- It is crucial that your response contains ONLY the <answer> tag with the correct integer(s) OR the word SKIP inside. Do not include any other text, explanation, or formatting."""
33
 
34
 
35
  # --- Reprompt Components ---
36
 
37
  SPECIFIC_INSTRUCTIONS_REPROMPT = {
38
- "MCQ_SINGLE_CORRECT": "provide ONLY the single integer or correct option(A, B, C, D) corresponding to the correct answer choice",
39
- "INTEGER": "provide ONLY the single non-negative integer that is the answer",
40
- "MCQ_MULTIPLE_CORRECT": "provide ALL correct integer option(s) separated by commas (e.g., <answer>A,B</answer> or <answer>B</answer> if only one is correct)",
41
  "DEFAULT": "provide the answer according to the question format"
42
  }
43
 
@@ -50,11 +50,14 @@ Your previous response did not correctly format the final answer within <answer>
50
 
51
  Please re-examine your previous reasoning and {specific_instructions}, enclosed in <answer> tags.
52
 
53
- Example for single correct integer: <answer>2</answer>
54
- Example for multiple correct integers: <answer>A,C</answer>
 
 
 
55
  If you are unsure or cannot determine the answer: <answer>SKIP</answer>
56
 
57
- It is crucial that your response contains ONLY the <answer> tag with the correct integer(s) OR the word SKIP inside. Do not include any other text, explanation, or formatting."""
58
 
59
  # --- Helper functions to get instructions ---
60
 
 
3
  # --- Initial Prompt Components ---
4
 
5
  ANSWER_FORMAT_INSTRUCTIONS = {
6
+ "MCQ_SINGLE_CORRECT": "determine the single correct option identifier (e.g., 1, 2, A, B) as it appears in the question.",
7
+ "INTEGER": "determine the single numerical answer. This can be an integer or a decimal value. Provide the number as accurately as possible.",
8
+ "MCQ_MULTIPLE_CORRECT": "determine all correct option identifier(s) (e.g., 1, 2, A, B) as they appear in the question. If multiple options are correct, list their identifiers separated by commas.",
9
+ "DEFAULT": "determine the correct answer based on the question's format."
10
  }
11
 
12
  EXAMPLE_INSTRUCTIONS = {
13
+ "MCQ_SINGLE_CORRECT": "- If the correct option is labeled '2' in the question: <answer>2</answer>\n- If the correct option is labeled 'A' in the question: <answer>A</answer>",
14
+ "INTEGER": "- If the answer is 5: <answer>5</answer>\n- If the answer is 12.75: <answer>12.75</answer>\n- If the answer is 0.5: <answer>0.5</answer>",
15
+ "MCQ_MULTIPLE_CORRECT": "- If options labeled 'A' and 'C' are correct: <answer>A,C</answer>\n- If options labeled '1' and '3' are correct: <answer>1,3</answer>\n- If only option 'B' is correct: <answer>B</answer>\n- If only option '2' is correct: <answer>2</answer>",
16
  "DEFAULT": "- Example: <answer>Your Answer</answer>"
17
  }
18
 
19
+ INITIAL_PROMPT_TEMPLATE = """You are an expert at analyzing exam questions from the {exam_name} {exam_year} exam ({question_type}) and extracting the correct answer option(s)/value.
20
  This exam uses specific marking schemes, so accuracy and correct formatting are crucial.
21
 
22
  Please think step-by-step to solve the problem.
23
  Examine the provided image of the question carefully.
24
  1. Analyze the question and the provided options (if any).
25
  2. Reason through the problem to {answer_format_instruction}
26
+ 3. Format your final answer by enclosing ONLY the determined identifier(s) or numerical value(s) within <answer> tags.
27
 
28
  Examples:
29
  {example_instruction}
30
  - If you are unsure or cannot determine the answer: <answer>SKIP</answer>
31
 
32
+ It is crucial that your response contains ONLY the <answer> tag with the correct option identifier(s), numerical value(s) OR the word SKIP inside. Do not include any other text, explanation, or formatting."""
33
 
34
 
35
  # --- Reprompt Components ---
36
 
37
  SPECIFIC_INSTRUCTIONS_REPROMPT = {
38
+ "MCQ_SINGLE_CORRECT": "provide ONLY the single correct option identifier (e.g., 1, A) as it appears in the question",
39
+ "INTEGER": "provide ONLY the single numerical answer (integer or decimal)",
40
+ "MCQ_MULTIPLE_CORRECT": "provide ALL correct option identifier(s) (e.g., A,C or 1,3) as they appear in the question, separated by commas if multiple. If only one is correct, provide just that one (e.g., <answer>B</answer> or <answer>2</answer>)",
41
  "DEFAULT": "provide the answer according to the question format"
42
  }
43
 
 
50
 
51
  Please re-examine your previous reasoning and {specific_instructions}, enclosed in <answer> tags.
52
 
53
+ Example for single correct MCQ option 'A': <answer>A</answer>
54
+ Example for single correct MCQ option '2': <answer>2</answer>
55
+ Example for multiple correct MCQ options 'A' and 'C': <answer>A,C</answer>
56
+ Example for integer answer: <answer>42</answer>
57
+ Example for decimal answer: <answer>12.75</answer>
58
  If you are unsure or cannot determine the answer: <answer>SKIP</answer>
59
 
60
+ It is crucial that your response contains ONLY the <answer> tag with the correct option identifier(s), numerical value(s) OR the word SKIP inside. Do not include any other text, explanation, or formatting."""
61
 
62
  # --- Helper functions to get instructions ---
63
 
src/utils.py CHANGED
@@ -28,16 +28,17 @@ def load_api_key(key_name="OPENROUTER_API_KEY"):
28
  return api_key
29
 
30
 
31
- def parse_llm_answer(response_text: str, question_type: str = "MCQ_SINGLE_CORRECT") -> list[int] | str | None:
32
  """
33
  Parses the LLM response text to extract answers within <answer> tags.
34
  The parsing logic adapts based on the question_type.
35
 
36
  Handles:
37
- - Single integer answers (for MCQ_SINGLE_CORRECT, INTEGER).
38
- - Multiple integer answers (comma-separated for MCQ_MULTIPLE_CORRECT).
39
- - The specific string "SKIP" for skipped questions.
40
- - Potential newlines within the tags.
 
41
 
42
  Args:
43
  response_text (str): The raw text response from the LLM.
@@ -46,24 +47,26 @@ def parse_llm_answer(response_text: str, question_type: str = "MCQ_SINGLE_CORREC
46
  Defaults to "MCQ_SINGLE_CORRECT".
47
 
48
  Returns:
49
- list[int] | str | None:
50
- - A list containing integer answer(s) if found and valid.
51
- (single element for MCQ_SINGLE_CORRECT/INTEGER, potentially multiple for MCQ_MULTIPLE_CORRECT)
52
  - The string "SKIP" if the response indicates a skip.
53
  - None if parsing fails (no tag, invalid content, type mismatch, etc.).
54
  """
55
  if not response_text:
56
  return None
57
 
58
- # Check for exact SKIP response first (case-insensitive)
59
- if response_text.strip().upper() == "<ANSWER>SKIP</ANSWER>":
 
 
60
  logging.info(f"Parsed answer as SKIP for question_type: {question_type}.")
61
  return "SKIP"
62
 
63
  match = re.search(r"<answer>(.*?)</answer>", response_text, re.DOTALL | re.IGNORECASE)
64
 
65
  if not match:
66
- logging.warning(f"Could not find <answer> tag in response for question_type: {question_type}.")
67
  return None
68
 
69
  extracted_content = match.group(1).strip()
@@ -71,76 +74,127 @@ def parse_llm_answer(response_text: str, question_type: str = "MCQ_SINGLE_CORREC
71
  logging.warning(f"Found <answer> tag but content is empty for question_type: {question_type}.")
72
  return None
73
 
74
- potential_answers_str = [item.strip() for item in extracted_content.split(',')]
75
- parsed_numbers = []
76
- all_valid_numbers = True
77
 
78
- for ans_str in potential_answers_str:
79
- if not ans_str: continue # Skip empty strings (e.g., from "1," or ",2")
80
- try:
81
- parsed_numbers.append(int(ans_str))
82
- except ValueError:
83
- logging.warning(f"Could not parse '{ans_str}' as an integer within <answer> tag for question_type: {question_type}.")
84
- all_valid_numbers = False
85
- break
86
 
87
- if not all_valid_numbers or not parsed_numbers: # If any part was not a number or if list is empty after parsing
 
88
  return None
89
 
90
- # Apply rules based on question_type
91
  if question_type in ["MCQ_SINGLE_CORRECT", "INTEGER"]:
92
- if len(parsed_numbers) == 1:
93
- return parsed_numbers # Returns [integer]
94
  else:
95
- logging.warning(f"Expected single answer for {question_type}, but found {len(parsed_numbers)} numbers: {parsed_numbers}. Treating as parse failure.")
96
  return None
97
  elif question_type == "MCQ_MULTIPLE_CORRECT":
98
- # For multiple correct, any number of valid integers is acceptable.
99
  # Return them sorted and unique.
100
- return sorted(list(set(parsed_numbers)))
101
  else:
102
- logging.error(f"Unknown question_type '{question_type}' provided to parse_llm_answer.")
 
103
  return None
104
 
105
  # Example Usage (for testing)
106
  if __name__ == '__main__':
107
  test_cases = [
108
- # MCQ_SINGLE_CORRECT / INTEGER
109
- {"resp": "Some text before <answer>2</answer> and after.", "type": "MCQ_SINGLE_CORRECT", "expected": [2]},
110
- {"resp": "Blah blah <answer> 1 </answer> blah", "type": "INTEGER", "expected": [1]},
111
- {"resp": "<answer>\n 3 \n</answer>", "type": "MCQ_SINGLE_CORRECT", "expected": [3]},
112
  {"resp": "<answer>1,3</answer>", "type": "MCQ_SINGLE_CORRECT", "expected": None}, # Fail: multiple for single
113
- {"resp": "<answer>1,3</answer>", "type": "INTEGER", "expected": None}, # Fail: multiple for single
114
  {"resp": "No answer tag here.", "type": "MCQ_SINGLE_CORRECT", "expected": None},
115
  {"resp": "<answer></answer>", "type": "MCQ_SINGLE_CORRECT", "expected": None},
116
  {"resp": "<answer> </answer>", "type": "MCQ_SINGLE_CORRECT", "expected": None},
117
- {"resp": "<answer>abc</answer>", "type": "MCQ_SINGLE_CORRECT", "expected": None},
118
- {"resp": "<answer>1, abc</answer>", "type": "MCQ_SINGLE_CORRECT", "expected": None},
119
- {"resp": "<ANSWER>SKIP</ANSWER>", "type": "MCQ_SINGLE_CORRECT", "expected": "SKIP"},
120
- {"resp": " <ANSWER>SKIP</ANSWER> ", "type": "INTEGER", "expected": "SKIP"},
121
-
122
- # MCQ_MULTIPLE_CORRECT
123
- {"resp": "<answer>1,3</answer>", "type": "MCQ_MULTIPLE_CORRECT", "expected": [1, 3]},
124
- {"resp": "<answer> 4 , 2 </answer> end", "type": "MCQ_MULTIPLE_CORRECT", "expected": [2, 4]},
125
- {"resp": "<answer>\n 1,\n 4 \n</answer>", "type": "MCQ_MULTIPLE_CORRECT", "expected": [1, 4]},
126
- {"resp": "<answer>3</answer>", "type": "MCQ_MULTIPLE_CORRECT", "expected": [3]}, # Single is valid for multi
127
- {"resp": "<answer>3,1,4,1</answer>", "type": "MCQ_MULTIPLE_CORRECT", "expected": [1, 3, 4]}, # Unique and sorted
128
- {"resp": "<answer>1, </answer>", "type": "MCQ_MULTIPLE_CORRECT", "expected": [1]}, # Handles trailing comma
129
- {"resp": "<answer>,2</answer>", "type": "MCQ_MULTIPLE_CORRECT", "expected": [2]}, # Handles leading comma
130
- {"resp": "<answer>1,abc,3</answer>", "type": "MCQ_MULTIPLE_CORRECT", "expected": None}, # Invalid content
131
- {"resp": "<ANSWER>SKIP</ANSWER>", "type": "MCQ_MULTIPLE_CORRECT", "expected": "SKIP"},
132
-
133
- # General / Edge cases
134
  {"resp": None, "type": "MCQ_SINGLE_CORRECT", "expected": None},
135
  {"resp": "", "type": "MCQ_SINGLE_CORRECT", "expected": None},
136
  {"resp": "<answer>5</answer>", "type": "UNKNOWN_TYPE", "expected": None}, # Unknown type
 
137
  ]
138
 
139
- print("\n--- Testing parse_llm_answer (with question_type) ---")
140
- for case in test_cases:
 
141
  parsed = parse_llm_answer(case["resp"], case["type"])
142
- print(f"Response: '{str(case['resp'])[:50]}...', Type: {case['type']} -> Parsed: {parsed} (Expected: {case['expected']})")
143
- assert parsed == case["expected"], f"Mismatch for {case['resp']} with type {case['type']}"
144
 
145
  # Test API key loading (will raise error if .env or env var not set)
146
  # try:
 
28
  return api_key
29
 
30
 
31
+ def parse_llm_answer(response_text: str, question_type: str = "MCQ_SINGLE_CORRECT") -> list[str] | str | None:
32
  """
33
  Parses the LLM response text to extract answers within <answer> tags.
34
  The parsing logic adapts based on the question_type.
35
 
36
  Handles:
37
+ - MCQ_SINGLE_CORRECT: Single option identifier (integer like "1", "2" or letter like "A", "B").
38
+ - INTEGER: Single numerical value, which can be an integer or a decimal (e.g., "5", "12.75", "0.5").
39
+ - MCQ_MULTIPLE_CORRECT: Multiple option identifiers (integers or letters), comma-separated.
40
+ - The specific string "SKIP" for skipped questions (case-insensitive content within tags).
41
+ - Potential newlines and varied spacing within the tags.
42
 
43
  Args:
44
  response_text (str): The raw text response from the LLM.
 
47
  Defaults to "MCQ_SINGLE_CORRECT".
48
 
49
  Returns:
50
+ list[str] | str | None:
51
+ - A list containing string answer(s) if found and valid.
52
+ (e.g., ["1"], ["A"], ["12.75"], ["A", "C"])
53
  - The string "SKIP" if the response indicates a skip.
54
  - None if parsing fails (no tag, invalid content, type mismatch, etc.).
55
  """
56
  if not response_text:
57
  return None
58
 
59
+ # Check for exact SKIP response first (case-insensitive for the tag and content)
60
+ # Using regex to be more flexible with whitespace around SKIP
61
+ skip_match = re.search(r"<answer>\s*SKIP\s*</answer>", response_text, re.IGNORECASE)
62
+ if skip_match:
63
  logging.info(f"Parsed answer as SKIP for question_type: {question_type}.")
64
  return "SKIP"
65
 
66
  match = re.search(r"<answer>(.*?)</answer>", response_text, re.DOTALL | re.IGNORECASE)
67
 
68
  if not match:
69
+ logging.warning(f"Could not find <answer> tag in response for question_type: {question_type} in response: '{response_text[:200]}...'")
70
  return None
71
 
72
  extracted_content = match.group(1).strip()
 
74
  logging.warning(f"Found <answer> tag but content is empty for question_type: {question_type}.")
75
  return None
76
 
77
+ potential_answers_str_list = [item.strip() for item in extracted_content.split(',')]
78
+ parsed_answers_final = []
79
+
80
+ for ans_str_raw in potential_answers_str_list:
81
+ ans_str = ans_str_raw.strip()
82
+ if not ans_str: # Skip empty strings that might result from "1, ,2" or trailing commas
83
+ continue
84
+
85
+ if question_type == "INTEGER":
86
+ try:
87
+ # Try to parse as float to validate it's a number (integer or decimal).
88
+ # The value is kept as a string to preserve original formatting (e.g., "0.50", "5.0")
89
+ # for exact string comparison with ground truth if needed, and for consistent type handling.
90
+ float(ans_str)
91
+ parsed_answers_final.append(ans_str)
92
+ except ValueError:
93
+ logging.warning(f"Could not parse '{ans_str}' as a valid number (integer or decimal) for INTEGER type. Full content: '{extracted_content}'")
94
+ return None # If any part is not a valid number for INTEGER type, fail parsing.
95
+
96
+ elif question_type in ["MCQ_SINGLE_CORRECT", "MCQ_MULTIPLE_CORRECT"]:
97
+ # For MCQs, the answer can be an integer (option number like 1, 2, 3, 4)
98
+ # or a letter (option identifier like A, B, C, D).
99
+ # The parser accepts these and returns them as strings.
100
+ # Numerical options are typically single digits 1-4.
101
+ # Letter options are single letters A-D (case-insensitive, converted to uppercase).
102
+ if re.fullmatch(r"[1-4]", ans_str): # Typical integer options
103
+ parsed_answers_final.append(ans_str)
104
+ elif re.fullmatch(r"[a-dA-D]", ans_str): # Typical letter options (A,B,C,D or a,b,c,d)
105
+ parsed_answers_final.append(ans_str.upper()) # Standardize to uppercase
106
+ elif re.fullmatch(r"[1-9]\d*", ans_str): # Accept other integers if questions use them as options
107
+ logging.debug(f"Accepting non-standard integer option '{ans_str}' for MCQ. Full content: '{extracted_content}'")
108
+ parsed_answers_final.append(ans_str)
109
+ elif re.fullmatch(r"[a-zA-Z]", ans_str): # Accept other single letters if questions use them
110
+ logging.debug(f"Accepting non-standard letter option '{ans_str}' for MCQ. Full content: '{extracted_content}'")
111
+ parsed_answers_final.append(ans_str.upper())
112
+ else:
113
+ logging.warning(f"Could not parse '{ans_str}' as a valid MCQ option (expected 1-4, A-D, or other single int/letter). Full content: '{extracted_content}'")
114
+ return None # If any part is not a valid MCQ option, fail parsing.
115
+ else: # Should not happen if question_type is validated before calling
116
+ logging.error(f"Unknown question_type '{question_type}' encountered in parse_llm_answer logic.")
117
+ return None
118
119
 
120
+ if not parsed_answers_final: # If list is empty after processing (e.g. content was just commas)
121
+ logging.warning(f"No valid answer items found after parsing content: '{extracted_content}' for question_type: {question_type}.")
122
  return None
123
 
124
+ # Apply rules based on question_type for number of answers
125
  if question_type in ["MCQ_SINGLE_CORRECT", "INTEGER"]:
126
+ if len(parsed_answers_final) == 1:
127
+ return parsed_answers_final # Returns list[str] with one element
128
  else:
129
+ logging.warning(f"Expected single answer for {question_type}, but found {len(parsed_answers_final)} items: {parsed_answers_final}. Content: '{extracted_content}'")
130
  return None
131
  elif question_type == "MCQ_MULTIPLE_CORRECT":
132
+ # For multiple correct, any number of valid items is acceptable.
133
  # Return them sorted and unique.
134
+ return sorted(list(set(parsed_answers_final)))
135
  else:
136
+ # This case should ideally be caught by earlier checks or input validation.
137
+ logging.error(f"Unknown question_type '{question_type}' provided to parse_llm_answer at final stage.")
138
  return None
139
 
140
  # Example Usage (for testing)
141
  if __name__ == '__main__':
142
  test_cases = [
143
+ # MCQ_SINGLE_CORRECT (can be number or letter)
144
+ {"resp": "<answer>2</answer>", "type": "MCQ_SINGLE_CORRECT", "expected": ["2"]},
145
+ {"resp": "<answer>B</answer>", "type": "MCQ_SINGLE_CORRECT", "expected": ["B"]},
146
+ {"resp": "<answer> c </answer>", "type": "MCQ_SINGLE_CORRECT", "expected": ["C"]},
147
  {"resp": "<answer>1,3</answer>", "type": "MCQ_SINGLE_CORRECT", "expected": None}, # Fail: multiple for single
148
+ {"resp": "<answer>A,C</answer>", "type": "MCQ_SINGLE_CORRECT", "expected": None}, # Fail: multiple for single
149
+ {"resp": "<answer>X</answer>", "type": "MCQ_SINGLE_CORRECT", "expected": None}, # Fail: invalid letter
150
+ {"resp": "<answer>5</answer>", "type": "MCQ_SINGLE_CORRECT", "expected": ["5"]}, # Allow other numbers for now
151
+
152
+ # INTEGER (can be int or decimal string)
153
+ {"resp": "<answer>42</answer>", "type": "INTEGER", "expected": ["42"]},
154
+ {"resp": "<answer>0</answer>", "type": "INTEGER", "expected": ["0"]},
155
+ {"resp": "<answer>12.75</answer>", "type": "INTEGER", "expected": ["12.75"]},
156
+ {"resp": "<answer>0.5</answer>", "type": "INTEGER", "expected": ["0.5"]},
157
+ {"resp": "<answer>-5</answer>", "type": "INTEGER", "expected": ["-5"]}, # Assuming negative integers are valid if problem allows
158
+ {"resp": "<answer>5.00</answer>", "type": "INTEGER", "expected": ["5.00"]},
159
+ {"resp": "<answer>abc</answer>", "type": "INTEGER", "expected": None}, # Fail: not a number
160
+ {"resp": "<answer>1,2</answer>", "type": "INTEGER", "expected": None}, # Fail: multiple for single int
161
+
162
+ # MCQ_MULTIPLE_CORRECT (can be numbers or letters, mixed is not typical but parser allows if items are valid individually)
163
+ {"resp": "<answer>1,3</answer>", "type": "MCQ_MULTIPLE_CORRECT", "expected": ["1", "3"]},
164
+ {"resp": "<answer>A,C</answer>", "type": "MCQ_MULTIPLE_CORRECT", "expected": ["A", "C"]},
165
+ {"resp": "<answer> b , d </answer>", "type": "MCQ_MULTIPLE_CORRECT", "expected": ["B", "D"]},
166
+ {"resp": "<answer>2</answer>", "type": "MCQ_MULTIPLE_CORRECT", "expected": ["2"]}, # Single is valid
167
+ {"resp": "<answer>D</answer>", "type": "MCQ_MULTIPLE_CORRECT", "expected": ["D"]}, # Single is valid
168
+ {"resp": "<answer>1,C,3</answer>", "type": "MCQ_MULTIPLE_CORRECT", "expected": ["1", "3", "C"]}, # Mixed, sorted
169
+ {"resp": "<answer>C,1,A,1</answer>", "type": "MCQ_MULTIPLE_CORRECT", "expected": ["1", "A", "C"]}, # Unique and sorted
170
+ {"resp": "<answer>1,X,3</answer>", "type": "MCQ_MULTIPLE_CORRECT", "expected": None}, # Invalid item "X"
171
+
172
+ # SKIP and general failures
173
+ {"resp": "<ANSWER>SKIP</ANSWER>", "type": "MCQ_SINGLE_CORRECT", "expected": "SKIP"},
174
+ {"resp": " <answer> SKIP </answer> ", "type": "INTEGER", "expected": "SKIP"},
175
  {"resp": "No answer tag here.", "type": "MCQ_SINGLE_CORRECT", "expected": None},
176
  {"resp": "<answer></answer>", "type": "MCQ_SINGLE_CORRECT", "expected": None},
177
  {"resp": "<answer> </answer>", "type": "MCQ_SINGLE_CORRECT", "expected": None},
178
  {"resp": None, "type": "MCQ_SINGLE_CORRECT", "expected": None},
179
  {"resp": "", "type": "MCQ_SINGLE_CORRECT", "expected": None},
180
  {"resp": "<answer>5</answer>", "type": "UNKNOWN_TYPE", "expected": None}, # Unknown type
181
+ {"resp": "<answer>1,,2</answer>", "type": "MCQ_MULTIPLE_CORRECT", "expected": ["1", "2"]}, # Handles empty item from double comma
182
  ]
183
 
184
+ print("\n--- Testing parse_llm_answer (revised) ---")
185
+ all_passed = True
186
+ for i, case in enumerate(test_cases):
187
  parsed = parse_llm_answer(case["resp"], case["type"])
188
+ if parsed == case["expected"]:
189
+ print(f"Test {i+1} PASSED: Response: '{str(case['resp'])[:50]}...', Type: {case['type']} -> Parsed: {parsed}")
190
+ else:
191
+ print(f"Test {i+1} FAILED: Response: '{str(case['resp'])[:50]}...', Type: {case['type']} -> Parsed: {parsed} (Expected: {case['expected']})")
192
+ all_passed = False
193
+
194
+ if all_passed:
195
+ print("\nAll revised parse_llm_answer tests passed!")
196
+ else:
197
+ print("\nSome revised parse_llm_answer tests FAILED.")
198
 
199
  # Test API key loading (will raise error if .env or env var not set)
200
  # try: