LG-Anonym committed
Commit 24f7a4f · verified · 1 Parent(s): 69fdef1

Upload 7 files

.gitignore ADDED
@@ -0,0 +1 @@
1
+ .idea
README.md CHANGED
@@ -1,12 +1,196 @@
1
  ---
2
  title: VerifiableRewardsForScalableLogicalReasoning
3
- emoji: 💻
4
- colorFrom: red
5
- colorTo: yellow
6
  sdk: gradio
7
- sdk_version: 5.38.2
8
  app_file: app.py
9
  pinned: false
10
  ---
11
 
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
1
  ---
2
  title: VerifiableRewardsForScalableLogicalReasoning
3
+ datasets: []
4
+ colorFrom: blue
5
+ colorTo: red
6
  sdk: gradio
7
+ sdk_version: 5.34.2
8
  app_file: app.py
9
  pinned: false
10
+ tags:
11
+ - evaluate
12
+ - reward
13
+ - reasoning
14
+ - metric
15
+ description: >-
16
+ VerifiableRewardsForScalableLogicalReasoning is a metric for evaluating logical reasoning in AI systems by providing verifiable rewards. It computes rewards through symbolic execution of candidate solutions against validation programs, enabling automatic, transparent, and reproducible evaluation.
17
  ---
18
 
19
+ # Metric Card for Symbolic Judge: Verifiable Rewards for Scalable Logical Reasoning
20
+
21
+ This metric is part of the SLR framework (LG-Anonym/SLR-Bench) and provides rewards for logical reasoning tasks.
22
+ The reward model is grounded in the ILP (Inductive Logic Programming) paradigm: it tests whether a given hypothesis (logic rule) solves a logical reasoning task.
23
+ To check for entailment, the logic rule is executed against a set of background knowledge and examples, ensuring automatic evaluation that is verifiable, transparent, and reproducible.
24
+
25
+
26
+ ### How it Works
27
+ - **Input:** The symbolic judge takes as input a candidate hypothesis (logic rule) and an executable validation program containing background knowledge and examples.
28
+ - **Execution:** The candidate rule is executed against the validation program using a Prolog interpreter.
29
+ - **Correctness Criteria:** The rule is considered correct if it entails all positive examples and rejects all negative examples.
30
+ - **Metrics:** The symbolic judge computes a range of evaluation metrics (detailed below).
31
+
32
+ **Note:** A local Prolog interpreter is required to execute validation programs.
33
+
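+ For intuition, here is a simplified sketch of the check the judge performs internally (the actual metric additionally validates syntax, computes partial scores, handles n-ary example predicates, and enforces timeouts). It assumes SWI-Prolog is installed and that the example predicates are unary, as in the train tasks:
+
+ ```python
+ import re
+ import subprocess
+ import tempfile
+
+ def entails_all_examples(rule, validation_program, pos="eastbound", neg="westbound", timeout=5):
+     """Sketch: does `rule` entail every positive example and reject every negative one?"""
+     # Rename the labelled example facts to pos/1 and neg/1 so that only the
+     # candidate rule decides class membership.
+     program = re.sub(rf"\b{pos}\b", "pos", validation_program)
+     program = re.sub(rf"\b{neg}\b", "neg", program)
+     judge = f"check_all :- forall(pos(T), {pos}(T)), forall(neg(T), \\+ {pos}(T)).\n"
+     with tempfile.NamedTemporaryFile("w", suffix=".pl", delete=False) as f:
+         f.write(program + "\n" + rule + "\n" + judge)
+         path = f.name
+     # Ask SWI-Prolog whether check_all succeeds and read the answer from stdout.
+     result = subprocess.run(
+         ["swipl", "-s", path, "-g", "(check_all -> writeln(pass) ; writeln(fail))", "-t", "halt"],
+         capture_output=True, text=True, timeout=timeout)
+     return result.stdout.strip() == "pass"
+ ```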
34
+ ---
35
+
36
+ ### Inputs
37
+ - **predictions** (`list` of `str`): Each prediction should be a Prolog rule like "eastbound(T) :- Body."
38
+ - **references** (`list` of `dict`): Each reference should contain:
39
+ - **validation_program** (`str`): Prolog program with background knowledge and examples
40
+ - **evaluation_config** (`dict`, optional): Configuration with:
41
+ - **positive_predicate** (`str`): Predicate identifying positive examples (default: "eastbound")
42
+ - **negative_predicate** (`str`): Predicate identifying negative examples (default: "westbound")
+ - **allow_multiple_rules** (`bool`, optional): Whether all matched rules in a prediction are kept; if `False` (default), only the last matching rule is evaluated
43
+
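+ For illustration, a minimal prediction/reference pair with the shapes described above looks like this (the `evaluation_config` may be omitted, in which case the defaults `eastbound`/`westbound` are used):
+
+ ```python
+ predictions = ["eastbound(T) :- has_car(T, C), car_color(C, white)."]
+
+ references = [{
+     "validation_program": """
+         eastbound(train0).
+         has_car(train0, car0_1). car_color(car0_1, white).
+         westbound(train1).
+         has_car(train1, car1_1). car_color(car1_1, yellow).
+     """,
+     "evaluation_config": {
+         "positive_predicate": "eastbound",
+         "negative_predicate": "westbound",
+     },
+ }]
+ ```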
44
+ ### Metrics & Output Values
45
+ - **accuracy** (`float`): Proportion of predictions that correctly classify all examples (0.0 to 1.0)
46
+ - **partial_score** (`float`): Average proportion of correctly classified examples (0.0 to 1.0)
47
+ - **syntax_score** (`float`): Proportion of rules with valid Prolog syntax (0.0 to 1.0)
48
+ - **detailed_results** (`list` of `dict`): Per-example results with:
49
+ - **is_correct** (`bool`): Whether the rule correctly classifies all examples
50
+ - **partial_score** (`float`): Proportion of correctly classified examples
51
+ - **syntax_valid** (`bool`): Whether the rule has valid syntax
52
+ - **error** (`str`, optional): Any error messages from Prolog evaluation
53
+ - **exec_time** (`float`, optional): Execution time for evaluation
54
+
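+ Because `detailed_results` is returned per prediction, the metric's output can be mapped directly to scalar rewards, e.g. for reinforcement-learning-style training. A small sketch (the reward shaping below, full credit for a correct rule and partial credit otherwise, is just one possible choice):
+
+ ```python
+ def rewards_from_results(results):
+     """Map the symbolic judge's output to one scalar reward in [0, 1] per prediction."""
+     rewards = []
+     for r in results["detailed_results"]:
+         if r["is_correct"]:
+             rewards.append(1.0)                 # rule classifies every example correctly
+         elif r["syntax_valid"]:
+             rewards.append(r["partial_score"])  # partial credit for valid but imperfect rules
+         else:
+             rewards.append(0.0)                 # invalid or unparseable rules get no reward
+     return rewards
+
+ # e.g. rewards = rewards_from_results(symbolic_judge.compute(predictions=..., references=...))
+ ```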
55
+ ---
56
+ ## How to Use with the datasets library
57
+ ```python
58
+ from evaluate import load
59
+ from datasets import load_dataset
60
+
61
+ # Load the symbolic judge for logical reasoning
62
+ symbolic_judge = load("LG-Anonym/VerifiableRewardsForScalableLogicalReasoning")
63
+
64
+ # load dataset LG-Anonym/SLR-Bench
65
+ ds = load_dataset('LG-Anonym/SLR-Bench', 'v1-All')
66
+ ds_test = ds['test'][:5]
67
+
68
+ # Prepare the predictions and references
69
+ rules = ds_test['ground-truth rule']
70
+ references = [{'validation_program': p,
71
+ 'evaluation_config': {
72
+ "positive_predicate": "eastbound",
73
+ "negative_predicate": "westbound"
74
+ }
75
+ } for p in ds_test['validation program']]
76
+ # Compute the evaluation
77
+ r2 = symbolic_judge.compute(predictions=rules, references=references)
78
+ r2
79
+ ```
80
+
81
+ ### Outputs
82
+
83
+ ```python
84
+ {'accuracy': 1.0,
85
+ 'partial_score': 1.0,
86
+ 'syntax_score': 1.0,
87
+ 'detailed_results': [{'is_correct': True,'partial_score': 1.0,'syntax_valid': True,'error': None,'exec_time': 0.014362812042236328},
88
+ {'is_correct': True,'partial_score': 1.0,'syntax_valid': True,'error': None,'exec_time': 0.012364625930786133},
89
+ {'is_correct': True,'partial_score': 1.0,'syntax_valid': True,'error': None,'exec_time': 0.012273550033569336},
90
+ {'is_correct': True,'partial_score': 1.0,'syntax_valid': True,'error': None,'exec_time': 0.012486696243286133},
91
+ {'is_correct': True,'partial_score': 1.0,'syntax_valid': True,'error': None,'exec_time': 0.012131929397583008}]}
92
+ ```
93
+
94
+
95
+ ---
96
+
97
+ ## Examples
98
+
99
+ ### Example 1: Evaluating a Single Rule
100
+
101
+ ```python
102
+ from evaluate import load
103
+
104
+ symbolic_judge = load("LG-Anonym/VerifiableRewardsForScalableLogicalReasoning")
105
+
106
+ validation_program = """
107
+ eastbound(train0).
108
+ has_car(train0, car0_1).
109
+ car_num(car0_1, 1).
110
+ car_color(car0_1, white).
111
+ car_len(car0_1, short).
112
+ has_wall(car0_1, full).
113
+
114
+ westbound(train1).
115
+ has_car(train1, car1_1).
116
+ car_num(car1_1, 1).
117
+ car_color(car1_1, yellow).
118
+ car_len(car1_1, short).
119
+ has_wall(car1_1, full).
120
+ """
121
+
122
+ predicted_rule = "eastbound(Train):- has_car(Train, Car1), car_color(Car1, white)."
123
+
124
+ results = symbolic_judge.compute(
125
+ predictions=[predicted_rule],
126
+ references=[{"validation_program": validation_program,
127
+ "evaluation_config": {
128
+ "positive_predicate": "eastbound",
129
+ "negative_predicate": "westbound"
130
+ }}]
131
+ )
132
+
133
+ print(results)
134
+ ```
135
+
136
+ ### Output Example 1
137
+
138
+ ```python
139
+ {'accuracy': 1.0,
140
+ 'partial_score': 1.0,
141
+ 'syntax_score': 1.0,
142
+ 'detailed_results': [
143
+ {'is_correct': True,
144
+ 'partial_score': 1.0,
145
+ 'syntax_valid': True,
146
+ 'error': None,
147
+ 'exec_time': 0.012056350708007812}]
148
+ }
149
+
150
+ ```
151
+
152
+ ### Example 2: Evaluating Multiple Rules
153
+
154
+ ```python
155
+ correct_rule = "eastbound(Train):- has_car(Train, Car1), car_color(Car1, white)."
156
+ incorrect_rule = "eastbound(Train):- has_car(Train, Car1), car_color(Car1, green)."
157
+
158
+ results = symbolic_judge.compute(
159
+ predictions=[correct_rule, incorrect_rule],
160
+ references=[
161
+ {"validation_program": validation_program},
162
+ {"validation_program": validation_program}
163
+ ]
164
+ )
165
+
166
+ print(results)
167
+ ```
168
+
169
+ ### Example 3: Custom Evaluation Configuration
170
+
171
+ ```python
172
+ validation_program = """
173
+ % Custom problem
174
+ parent(john, mary).
175
+ parent(john, bob).
176
+ parent(alice, bob).
177
+ parent(susan, alice).
178
+
179
+ % Examples
180
+ grandparent(susan, bob).
181
+ not_grandparent(john, alice).
182
+ """
183
+
184
+ rule = "grandparent(X, Y) :- parent(X, Z), parent(Z, Y)."
185
+
186
+ results = symbolic_judge.compute(
187
+ predictions=[rule],
188
+ references=[{
189
+ "validation_program": validation_program,
190
+ "evaluation_config": {
191
+ "positive_predicate": "grandparent",
192
+ "negative_predicate": "not_grandparent"
193
+ }
194
+ }]
195
+ )
196
+ ```
VerifiableRewardsForScalableLogicalReasoning.py ADDED
@@ -0,0 +1,371 @@
1
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+ """ Symbolic Judge: Verifiable Rewards for Logical Reasoning at Scale """
15
+
16
+ import os
17
+ import subprocess
18
+ import tempfile
19
+ import evaluate
20
+ import logging
21
+ import datasets
22
+ from tqdm import tqdm
23
+ import time
24
+ import multiprocessing as mp
25
+ import re
26
+
27
+ logger = logging.getLogger(__name__)
28
+
29
+ _CITATION = """\
30
+ @misc{anonymous2025slr,
31
+ author = {Anonymous},
32
+ title = {SLR: Anonymous},
33
+ year = {2025}
34
+ }
35
+ """
36
+
37
+ _DESCRIPTION = """\
38
+ Verifiable Rewards for Scalable Logical Reasoning (**SLR**) provides verifiable rewards via logic program execution.
39
+ It deterministically evaluates candidate hypotheses by executing them against the validation program and verifying that all positive examples ($E^+$) are entailed and all negative examples ($E^-$) are not entailed.
40
+ Evaluations performed are fully verifiable and grounded in formal logic, ensuring an automatic, transparent, and reproducible standard for evaluation and reward in both supervised and reinforcement learning settings.
41
+ How it Works:
42
+ - Input: A candidate hypothesis (logic rule) and an executable validation program containing background knowledge and examples.
43
+ - Execution: The candidate rule is executed against the validation program using a Prolog interpreter.
44
+ - Correctness Criteria: The rule is considered correct if it entails all positive examples and rejects all negative examples.
45
+ - Metrics: We provide a range of evaluation metrics (detailed below).
46
+ - Usage: see **Documentation tab** for details on how to use Verifiable Rewards for Scalable Logical Reasoning in your own projects.
47
+
48
+ Example usage:
49
+ from evaluate import load
50
+
51
+ symbolic_judge = load("LG-Anonym/VerifiableRewardsForScalableLogicalReasoning")
52
+
53
+ validation_program = \"\"\"
54
+ eastbound(train0).
55
+ has_car(train0, car0_1).
56
+ car_num(car0_1, 1).
57
+ car_color(car0_1, white).
58
+ car_len(car0_1, short).
59
+ has_wall(car0_1, full).
60
+
61
+ westbound(train1).
62
+ has_car(train1, car1_1).
63
+ car_num(car1_1, 1).
64
+ car_color(car1_1, yellow).
65
+ car_len(car1_1, short).
66
+ has_wall(car1_1, full).
67
+ \"\"\"
68
+
69
+ predicted_rule = "eastbound(Train):- has_car(Train, Car1), car_color(Car1, white)."
70
+
71
+ results = symbolic_judge.compute(
72
+ predictions=[predicted_rule],
73
+ references=[{"validation_program": validation_program,
74
+ "evaluation_config": {
75
+ "positive_predicate": "eastbound",
76
+ "negative_predicate": "westbound"
77
+ }}]
78
+ )
79
+
80
+ Note: A local Prolog interpreter is required to execute validation programs.
81
+ """
82
+
83
+ _KWARGS_DESCRIPTION = """
84
+ Args:
85
+ predictions (`list` of `str`): Each prediction should be a Prolog rule like "pred(T) :- Body."
86
+ references (`list` of `dict`): Each reference should contain:
87
+ - 'validation_program' (`str`): Background knowledge in Prolog syntax
88
+ - 'evaluation_config' (`dict`, optional): Configuration of predicates to use for evaluation.
89
+ Defines `positive_predicate` and `negative_predicate`; the positive predicate should match the head of the rule to evaluate.
90
+ Returns:
91
+ accuracy (`float`): The proportion of predictions that correctly solve all examples. Value is between 0 and 1.
92
+ partial_score (`float`): Average proportion of correctly classified examples across all predictions. Value is between 0 and 1.
93
+ syntax_score (`float`): Proportion of rules with valid syntax. Value is between 0 and 1.
94
+ detailed_results (`list` of `dict`): Per-example results including correctness, partial score, execution time, and any errors encountered.
95
+ """
96
+
97
+
98
+ def validate_rule_no_hardcoded_cars(prediction):
99
+ """Reject rules that hardcode specific car identifiers"""
100
+ import re
101
+
102
+ # Look for has_car with a constant (lowercase) in second position
103
+ hardcoded_pattern = r'has_car\([^,]+,\s*([a-z][a-z0-9_]*)\)'
104
+ matches = re.findall(hardcoded_pattern, prediction)
105
+
106
+ if matches:
107
+ return False, f"Cars must be variables: {matches[0]}"
108
+
109
+ return True, "Rule is valid"
110
+
111
+
112
+ def _evaluate_with_prolog(prediction, validation_program, eval_config, timeout=5):
113
+ """
114
+ Evaluates a predicted rule against the validation program using Prolog.
115
+ """
116
+
117
+
118
+ # Extract configuration
119
+ positive_pred = eval_config.get("positive_predicate", "eastbound")
120
+ negative_pred = eval_config.get("negative_predicate", "westbound")
121
+ allow_multiple_rules = eval_config.get("allow_multiple_rules", False)
122
+
123
+ # extract predicate from rule_to_evaluate
124
+ rule_to_evaluate = extract_ilp_from_text_v2(prediction, positive_pred, allow_multiple_rules)
125
+
126
+ is_valid, validation_msg = validate_rule_no_hardcoded_cars(rule_to_evaluate)
127
+ if not is_valid:
128
+ return {
129
+ "is_correct": False,
130
+ "partial_score": 0.0,
131
+ "syntax_valid": False,
132
+ "error": f"Rule validation failed: {validation_msg}"
133
+ }
134
+
135
+ if positive_pred not in rule_to_evaluate:
136
+ p = prediction.replace('\n', ' ')
137
+ return {
138
+ "is_correct": False,
139
+ "partial_score": 0.0,
140
+ "syntax_valid": False,
141
+ "error": f"Invalid Syntax: Logic Rule not found for symbol '{positive_pred}': {p}"
142
+ }
143
+
144
+ pos_examples = re.findall(rf'{positive_pred}\(([^)]+)\)', validation_program)
145
+ neg_examples = re.findall(rf'{negative_pred}\(([^)]+)\)', validation_program)
146
+
147
+ # Determine arity by counting commas in first example plus 1
148
+ arity = 1 # default to unary
149
+ if pos_examples:
150
+ arity = pos_examples[0].count(',') + 1
151
+ elif neg_examples:
152
+ arity = neg_examples[0].count(',') + 1
153
+
154
+ # Create variables based on arity
155
+ vars = ", ".join([f"X{i}" for i in range(1, arity + 1)])
156
+
157
+ symbolic_judge = f"""
158
+ % Dynamic evaluation predicates
159
+ check({vars}) :- pos({vars}), {positive_pred}({vars}). % positive covered
160
+ check({vars}) :- neg({vars}), \\+ {positive_pred}({vars}). % negative rejected
161
+
162
+ % Count successful checks
163
+ check_count(Count) :-
164
+ (setof(({vars}), ((pos({vars}); neg({vars})), check({vars})), CorrectExamples) ->
165
+ length(CorrectExamples, Count)
166
+ ;
167
+ Count = 0
168
+ ).
169
+
170
+ check_all :- forall((pos({vars});neg({vars})), check({vars})).
171
+ """
172
+ # Add the rule to evaluate
173
+ validation_program = re.sub(rf'\b{positive_pred}\b', 'pos', validation_program)
174
+ validation_program = re.sub(rf'\b{negative_pred}\b', 'neg', validation_program)
175
+
176
+ pos_negs = validation_program.count("pos(") + validation_program.count("neg(")
177
+ validation_program = '\n'.join(sorted(validation_program.splitlines()))
178
+ full_program = validation_program + "\n\n" + symbolic_judge + "\n\n" + rule_to_evaluate + "\n\n"
179
+
180
+ with tempfile.NamedTemporaryFile(suffix='.pl', mode='w', delete=False) as f:
181
+ f.write(full_program)
182
+ temp_file = f.name
183
+
184
+ try:
185
+ eval_start_time = time.time()
186
+ # Execute the Prolog program
187
+ cmd = ['swipl', '-s', temp_file, '-g', 'check_count(Count), writeln(Count)', '-t', 'halt']
188
+ result = subprocess.run(
189
+ cmd,
190
+ stdout=subprocess.PIPE,
191
+ stderr=subprocess.PIPE,
192
+ timeout=timeout,
193
+ text=True
194
+ )
195
+ partial_score = 0.0 if result.stdout.strip() == '' else int(result.stdout.strip())
196
+ # Extract partial score from output
197
+ partial_score = partial_score / pos_negs if pos_negs > 0 else 0.0
198
+
199
+ is_correct = True if partial_score == 1.0 else False
200
+
201
+ error = f'{result.stderr} -> Eval Rule "{rule_to_evaluate}"' if result.stderr else None
202
+ t1 = time.time()
203
+
204
+ return {
205
+ "is_correct": is_correct,
206
+ "partial_score": partial_score,
207
+ "syntax_valid": True,
208
+ "error": error,
209
+ "exec_time1": t1 - eval_start_time,
210
+ }
211
+
212
+ except subprocess.TimeoutExpired:
213
+ r = rule_to_evaluate.replace('\n', ' ')
214
+ return {"is_correct": False, "partial_score": 0.0, "syntax_valid": False,
215
+ "error": "Evaluation timed out after {timeout} seconds for rule: '{r}'"}
216
+ except Exception as e:
217
+ return {"is_correct": False, "partial_score": 0.0, "syntax_valid": False,
218
+ "error": f"Error evaluating rule '{rule_to_evaluate}' returns: '{result.stdout.strip() if result else 'No error message'}' with error: {e}"}
219
+ finally:
220
+ if os.path.exists(temp_file):
221
+ os.remove(temp_file)
222
+
223
+ def extract_ilp_from_text(text):
224
+ rule_patterns = [
225
+ # Pattern with body (full rule with implication)
226
+ r'([a-zA-Z_][a-zA-Z0-9_]*\([^)]*\)\s*:-[^.]*\.)',
227
+ # Pattern for facts (no body)
228
+ # r'([a-zA-Z_][a-zA-Z0-9_]*\([^)]*\)\s*\.)'
229
+ ]
230
+ p_code = ''
231
+ for pattern in rule_patterns:
232
+ matches = re.findall(pattern, text)
233
+ for match in matches:
234
+ # Ensure the rule ends with a period
235
+ statement = match.strip()
236
+ if not statement.endswith('.'):
237
+ statement += '.'
238
+ p_code += statement + '\n'
239
+ return p_code
240
+
241
+
242
+ def extract_ilp_from_text_v2(text, target_predicate=None, allow_multiple_rules=False):
243
+ text = re.sub(r'%.*?(?=\n|$)', '', text) # remove comments
244
+ # Pre-process: collapse code blocks to single lines
245
+ text = re.sub(r'\n\s*', ' ', text) # crude: flatten all to one line
246
+ # Rule pattern, across newlines
247
+ rule_pattern = re.compile(rf'({target_predicate}\([^()]*\)\s*:-.*?\.)')
248
+ rules = list(rule_pattern.findall(text))
249
+ if len(rules) > 1 and not allow_multiple_rules:
250
+ # logger.warning(f"Found multiple rules in text, but allow_multiple_rules is set to False. Using only the last match.")
251
+ rules = rules[-1:]
252
+ # Remove rules that are also captured as facts
253
+ p_code = ''
254
+ for rule in rules:
255
+ # Ensure the rule ends with a period
256
+ statement = rule.strip()
257
+ if not statement.endswith('.'):
258
+ statement += '.'
259
+ p_code += statement + '\n'
260
+ return p_code.strip() # Ensure no trailing whitespace
261
+
262
+
263
+ @evaluate.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
264
+ class VerifiableRewardsForScalableLogicalReasoning(evaluate.Metric):
265
+ def __init__(self, config_name=None, **kwargs):
266
+ """
267
+ Initializes the PrologEval metric.
268
+
269
+ Args:
270
+ config_name (str, optional): Name of the configuration to use.
271
+ **kwargs: Additional keyword arguments.
272
+ """
273
+ super().__init__(config_name=config_name, **kwargs)
274
+ self.config_name = config_name or "default"
275
+ self._info = self._info()
276
+ self._download_and_prepare(dl_manager=None)
277
+
278
+ def _info(self):
279
+ return evaluate.MetricInfo(
280
+ description=_DESCRIPTION,
281
+ citation=_CITATION,
282
+ inputs_description=_KWARGS_DESCRIPTION,
283
+ features=datasets.Features({
284
+ 'predictions': datasets.Value('string'),
285
+ 'references': {
286
+ 'validation_program': datasets.Value('string'),
287
+ 'evaluation_config': {
288
+ 'positive_predicate': datasets.Value('string'),
289
+ 'negative_predicate': datasets.Value('string')
290
+ }
291
+ },
292
+ }),
293
+ codebase_urls=["https://github.com/LG-Anonym/SLR-Bench"],
294
+ reference_urls=["https://huggingface.co/datasets/LG-Anonym/SLR-Bench"]
295
+ )
296
+
297
+ def _download_and_prepare(self, dl_manager):
298
+ """Checks if SWI-Prolog is installed or warns the user."""
299
+ try:
300
+ subprocess.run(
301
+ ["swipl", "--version"],
302
+ stdout=subprocess.PIPE,
303
+ stderr=subprocess.PIPE,
304
+ check=True
305
+ )
306
+ except (subprocess.CalledProcessError, FileNotFoundError):
307
+ logger.warning(
308
+ "SWI-Prolog not found. Please install it:\n"
309
+ "Ubuntu/Debian: sudo apt-get install swi-prolog\n"
310
+ "macOS: brew install swi-prolog\n"
311
+ "Windows: download from https://www.swi-prolog.org/download/stable"
312
+ )
313
+
314
+ def _compute(self, predictions: list, references: list):
315
+ """Calculates the accuracy of predictions using Prolog for evaluation with multiprocessing."""
316
+ if not isinstance(predictions, list):
317
+ predictions = [predictions]
318
+
319
+ if len(predictions) != len(references):
320
+ raise ValueError(
321
+ f"Number of predictions ({len(predictions)}) and references {len(references)}) don't match")
322
+
323
+ TIMEOUT = 10 if len(predictions) > 500 else 5
324
+ # Prepare evaluation inputs
325
+ eval_inputs = []
326
+ for i, (prediction, reference) in enumerate(zip(predictions, references)):
327
+ validation_program = reference.get("validation_program", reference.get("validation program"))
328
+
329
+ # Extract configuration parameters directly from reference
330
+ # Fall back to the default train-task predicates when no evaluation_config is provided
331
+ eval_config = reference.get("evaluation_config", {
332
+ "positive_predicate": "eastbound",
333
+ "negative_predicate": "westbound"
334
+ })
335
+
336
+ if not validation_program:
337
+ raise ValueError(f"Example {i} does not contain validation program field")
338
+
339
+ eval_inputs.append((prediction, validation_program, eval_config, TIMEOUT))
340
+
341
+ # if more than 500 predictions, use multiprocessing to speed up the evaluation
342
+ if len(eval_inputs) > 500:
343
+ # Process evaluations in parallel
344
+ num_cpus = max(1, mp.cpu_count() - 1) # Leave one CPU free
345
+ with mp.Pool(processes=num_cpus) as pool:
346
+ results = list(tqdm(
347
+ pool.starmap(_evaluate_with_prolog, eval_inputs),
348
+ total=len(eval_inputs),
349
+ desc=f"Evaluating rules (parallel processing with {num_cpus} CPUs)"
350
+ ))
351
+ else:
352
+ # Evaluate in the main thread (no multiprocessing)
353
+ results = []
354
+ for prediction, validation_program, eval_config, t in tqdm(eval_inputs, total=len(predictions), desc="Evaluating rules"):
355
+ results.append(_evaluate_with_prolog(prediction, validation_program, eval_config, timeout=t))
356
+
357
+ # Calculate metrics
358
+ partial_scores = [result["partial_score"] for result in results]
359
+ correct_count = sum(1 for result in results if result["is_correct"])
360
+ syntax_valid_count = sum(1 for result in results if result["syntax_valid"])
361
+
362
+ accuracy = correct_count / len(predictions) if predictions else 0
363
+ partial_score = sum(partial_scores) / len(predictions) if partial_scores else 0
364
+ syntax_score = syntax_valid_count / len(predictions) if predictions else 0
365
+
366
+ return {
367
+ "accuracy": accuracy,
368
+ "partial_score": partial_score,
369
+ "syntax_score": syntax_score,
370
+ "detailed_results": results
371
+ }
app.py ADDED
@@ -0,0 +1,269 @@
1
+ import evaluate
2
+ import gradio as gr
3
+ import os
4
+
5
+
6
+ # Create your own Gradio interface instead of using the built-in widget
7
+ def create_interface(module):
8
+ def evaluate_fn(prediction, references, pos_pred, neg_pred):
9
+ # Check if all required fields are filled
10
+ if not prediction or prediction.strip() == "":
11
+ return "", "", "", "Please provide a candidate hypothesis to evaluate."
12
+
13
+ if not references or references.strip() == "":
14
+ return "", "", "", "Please provide a validation program."
15
+
16
+ if not pos_pred or pos_pred.strip() == "":
17
+ return "", "", "", "Please specify the positive predicate name."
18
+
19
+ if not neg_pred or neg_pred.strip() == "":
20
+ return "", "", "", "Please specify the negative predicate name."
21
+
22
+ # Process a single prediction instead of multiple
23
+ pred = prediction.strip()
24
+
25
+ # Create reference with evaluation_config at the top level
26
+ ref = {
27
+ "validation_program": references.strip(),
28
+ "evaluation_config": {
29
+ "positive_predicate": pos_pred,
30
+ "negative_predicate": neg_pred
31
+ }
32
+ }
33
+
34
+ # Use a list with a single prediction and reference
35
+ results = module.compute(predictions=[pred], references=[ref])
36
+
37
+ # Extract the error message from detailed_results if it exists
38
+ error_msg = ""
39
+ if results["detailed_results"] and len(results["detailed_results"]) > 0:
40
+ error = results["detailed_results"][0].get("error")
41
+ if error:
42
+ error_msg = error
43
+
44
+ return (
45
+ f"Accuracy score: {results['accuracy']:.4f}",
46
+ f"Partial score: {results['partial_score']:.4f}",
47
+ f"Syntax score: {results['syntax_score']:.4f}",
48
+ error_msg
49
+ )
50
+
51
+ # Helper function to load example data
52
+ def load_example(example):
53
+ return (
54
+ example["rule"],
55
+ example["validation"],
56
+ example["pos_pred"],
57
+ example["neg_pred"]
58
+ )
59
+
60
+ # Read README.md content
61
+ readme_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "README.md")
62
+ with open(readme_path, 'r') as f:
63
+ readme_content = f.read()
64
+ readme_content = '# Metric Card ' + readme_content.split('# Metric Card ')[1]
65
+
66
+ # Define examples for quick use
67
+ example_train = {
68
+ "description": "Basic Train Problem",
69
+ "validation": """eastbound(train0).
70
+ has_car(train0, car0_1).
71
+ car_num(car0_1, 1).
72
+ car_color(car0_1, white).
73
+ car_len(car0_1, short).
74
+ has_wall(car0_1, full).
75
+
76
+ westbound(train1).
77
+ has_car(train1, car1_1).
78
+ car_num(car1_1, 1).
79
+ car_color(car1_1, yellow).
80
+ car_len(car1_1, short).
81
+ has_wall(car1_1, full).
82
+ """,
83
+ "rule": "eastbound(Train):- has_car(Train, Car1), car_color(Car1, white).",
84
+ "pos_pred": "eastbound",
85
+ "neg_pred": "westbound"
86
+ }
87
+
88
+ example_family = {
89
+ "description": "Family Relationships",
90
+ "validation": """% Custom problem
91
+ parent(john, mary).
92
+ parent(john, bob).
93
+ parent(alice, bob).
94
+ parent(susan, alice).
95
+
96
+ % Examples
97
+ grandparent(susan, bob).
98
+ not_grandparent(john, alice).""",
99
+ "rule": "grandparent(X, Y) :- parent(X, Z), parent(Z, Y).",
100
+ "pos_pred": "grandparent",
101
+ "neg_pred": "not_grandparent"
102
+ }
103
+
104
+ with gr.Blocks(title="Symbolic Judge") as demo:
105
+ with gr.Tab("Evaluation"):
106
+ gr.Markdown("# Symbolic Judge: Verifiable Rewards for Scalable Logical Reasoning")
107
+ gr.Markdown("""
108
+ Verifiable Rewards for Scalable Logical Reasoning (**SLR**) provides verifiable rewards via logic program execution.
109
+ It deterministically evaluates candidate hypotheses by executing them against the validation program and verifying that all positive examples ($E^+$) are entailed and all negative examples ($E^-$) are not entailed.
110
+ Evaluations performed are fully verifiable and grounded in formal logic, ensuring an automatic, transparent, and reproducible standard for evaluation and reward in both supervised and reinforcement learning settings.
111
+ How it Works:
112
+ - Input: A candidate hypothesis (logic rule) and an executable validation program containing background knowledge and examples.
113
+ - Execution: The candidate rule is executed against the validation program using a Prolog interpreter.
114
+ - Correctness Criteria: The rule is considered correct if it entails all positive examples and rejects all negative examples.
115
+ - Metrics: We provide a range of evaluation metrics (detailed below).
116
+ - Usage: see **Documentation tab** for details on how to use Verifiable Rewards for Scalable Logical Reasoning in your own projects.
117
+ **Note:** A local Prolog interpreter is required to execute validation programs.
118
+ """)
119
+
120
+ with gr.Row():
121
+ with gr.Column(scale=1):
122
+ with gr.Group():
123
+ gr.Markdown("### Model Output")
124
+ prediction_input = gr.Textbox(
125
+ label="Candidate Hypothesis to be evaluated(predicted rule by the model)",
126
+ placeholder="eastbound(T) :- has_car(T, C), short(C), open(C).",
127
+ lines=5
128
+ )
129
+
130
+ with gr.Group():
131
+ gr.Markdown("### Validation Program")
132
+
133
+ references_input = gr.Textbox(
134
+ label="The validation program contains background knowledge and examples for testing",
135
+ placeholder="% Background knowledge\ncar(car_1). car(car_2).\nlong(car_2). short(car_1).\nopen(car_1). closed(car_2).\n\n% Examples\neastbound(train_1).\nwestbound(train_2).\n\n% Train configurations\nhas_car(train_1, car_1).\nhas_car(train_2, car_2).",
136
+ lines=12
137
+ )
138
+
139
+ with gr.Row():
140
+ pos_pred_input = gr.Textbox(
141
+ label="Positive Validation Examples",
142
+ value="eastbound",
143
+ placeholder="eastbound",
144
+ info="The predicate name identifying positive examples in the validation program"
145
+ )
146
+ neg_pred_input = gr.Textbox(
147
+ label="Negative Validation Examples",
148
+ value="westbound",
149
+ placeholder="westbound",
150
+ info="The predicate name identifying negative examples in the validation program"
151
+ )
152
+
153
+ eval_button = gr.Button("Evaluate", variant="primary")
154
+
155
+ with gr.Column(scale=1):
156
+ with gr.Group():
157
+ gr.Markdown("### Evaluation Metrics")
158
+ with gr.Group():
159
+ accuracy_output = gr.Textbox(
160
+ label="Overall Accuracy",
161
+ info="Proportion of hypotheses that solve the tasks",
162
+ container=True
163
+ )
164
+ partial_score_output = gr.Textbox(
165
+ label="Partial Score",
166
+ info="Proportion of examples that are correctly classified in the tasks",
167
+ container=True
168
+ )
169
+ syntax_score_output = gr.Textbox(
170
+ label="Syntax Score",
171
+ info="Proportion of syntactically valid hypothesis",
172
+ container=True
173
+ )
174
+ error_output = gr.Textbox(
175
+ label="Syntax Details",
176
+ info="Error messages for syntax errors or execution failures",
177
+ container=True,
178
+ )
179
+ gr.Markdown("Note: This interface evaluates a single hypothesis at a time. Use Python API for batch processing")
180
+ # Add the examples section
181
+ # Example list
182
+ examples = [
183
+ ["Train Problem", example_train],
184
+ ["Family Relationships", example_family]
185
+ ]
186
+
187
+ with gr.Accordion("Example Logical Reasoning Tasks", open=True):
188
+ example_radio = gr.Radio([ex[0] for ex in examples], label="Select an example", value="Train Problem")
189
+ # Show preview of selected example
190
+ with gr.Row():
191
+ with gr.Column():
192
+ gr.Markdown("### Selected Example Preview")
193
+ example_description = gr.Markdown("**Description**: " + example_train["description"])
194
+ with gr.Row():
195
+ with gr.Column():
196
+ gr.Markdown("#### Candidate Hypothesis")
197
+ example_rule = gr.Code(value=example_train["rule"])
198
+ with gr.Column():
199
+ gr.Markdown("#### Validation Program")
200
+ example_validation = gr.Code(value=example_train["validation"])
201
+
202
+ with gr.Row():
203
+ with gr.Column():
204
+ gr.Markdown("#### Validation Examples")
205
+ example_predicates = gr.Markdown(f"""
206
+ **Positive Examples**: `{example_train["pos_pred"]}`
207
+ **Negative Examples**: `{example_train["neg_pred"]}`
208
+ """)
209
+
210
+ # Load button for the selected example
211
+ load_button = gr.Button("Load Selected Example", variant="secondary")
212
+ gr.Markdown("### Citation")
213
+
214
+ gr.Markdown("""
215
+ If you use Symbolic Judge in your work, please cite:
216
+ ```
217
+ @misc{anonymous2025slr,
218
+ title={Verifiable Rewards for Scalable Logical Reasoning},
219
+ author={Anonymous},
220
+ year={2025},
221
+ }
222
+ ```
223
+ """)
224
+
225
+ # Set up event handlers for the example selection
226
+ def update_example_preview(selection):
227
+ selected_example = next((ex[1] for ex in examples if ex[0] == selection), example_train)
228
+ return (
229
+ "**Description**: " + selected_example["description"],
230
+ selected_example["rule"],
231
+ selected_example["validation"],
232
+ f"""
233
+ **Positive Examples**: `{selected_example["pos_pred"]}`
234
+ **Negative Examples**: `{selected_example["neg_pred"]}`
235
+ """
236
+ )
237
+
238
+ example_radio.change(
239
+ fn=update_example_preview,
240
+ inputs=[example_radio],
241
+ outputs=[example_description, example_rule, example_validation, example_predicates]
242
+ )
243
+
244
+ # Event handler for the load button
245
+ def load_selected_example(selection):
246
+ selected_example = next((ex[1] for ex in examples if ex[0] == selection), example_train)
247
+ return load_example(selected_example)
248
+
249
+ load_button.click(
250
+ fn=load_selected_example,
251
+ inputs=[example_radio],
252
+ outputs=[prediction_input, references_input, pos_pred_input, neg_pred_input]
253
+ )
254
+
255
+ # Set up the evaluate button
256
+ eval_button.click(
257
+ fn=evaluate_fn,
258
+ inputs=[prediction_input, references_input, pos_pred_input, neg_pred_input],
259
+ outputs=[accuracy_output, partial_score_output, syntax_score_output, error_output]
260
+ )
261
+
262
+ with gr.Tab("Documentation"):
263
+ gr.Markdown(readme_content)
264
+
265
+ return demo
266
+
267
+ # Load the symbolic judge metric from the Hub and launch the Gradio interface
268
+ module = evaluate.load("LG-Anonym/VerifiableRewardsForScalableLogicalReasoning")
269
+ create_interface(module).launch()
packages.txt ADDED
@@ -0,0 +1 @@
1
+ swi-prolog
requirements.txt ADDED
@@ -0,0 +1,5 @@
1
+ git+https://github.com/huggingface/evaluate@main
2
+ evaluate
3
+ gradio>=5.34.2
4
+ datasets
5
+ tqdm