# 🚀 Qwen2.5-3B Fine-Tuned on BBH (Dyck Languages) - Model Card
## 📌 Model Overview

- **Model Name:** Qwen2.5-3B Fine-Tuned on BBH (Dyck Languages)
- **Base Model:** Qwen2.5-3B-Instruct
- **Fine-Tuning Dataset:** BIG-Bench Hard (BBH), Dyck Languages subset
- **Task:** Causal Language Modeling (CLM)
- **Fine-Tuning Objective:** Improve performance on Dyck-language sequence completion, i.e. correctly closing nested parentheses and brackets (see the usage sketch below)
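A minimal inference sketch is shown below. The repository id `your-username/qwen2.5-3b-bbh-dyck` is a placeholder, and the plain-prompt format simply mirrors the BBH task prompt; adjust both to match how the model was actually published and fine-tuned.

```python
# Minimal inference sketch; "your-username/qwen2.5-3b-bbh-dyck" is a placeholder
# repository id, not the actual model path.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/qwen2.5-3b-bbh-dyck"  # replace with the real repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = ("Complete the rest of the sequence, making sure that the parentheses "
          "are closed properly. Input: [ [")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16, do_sample=False)

# Decode only the newly generated tokens (the completion, e.g. "] ]").
completion = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(completion.strip())
```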
## 📌 Dataset Information

This model was fine-tuned on the Dyck Languages subset of the BIG-Bench Hard (BBH) dataset.

Dataset characteristics:

- **Task Type:** Sequence completion of balanced parentheses
- **Input Format:** A sequence of open parentheses, brackets, or braces with the closing elements missing
- **Target Labels:** The correct sequence of closing parentheses, brackets, or braces

**Example:**

- **Input:** `Complete the rest of the sequence, making sure that the parentheses are closed properly. Input: [ [`
- **Target:** `] ]`
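To inspect the data yourself, the sketch below loads a public mirror of the subset; the dataset id `lukaemon/bbh`, the `dyck_languages` config, and the `input`/`target` field names are assumptions about that mirror and may not match the exact copy used during fine-tuning.

```python
# Sketch for browsing the Dyck Languages split; the dataset id "lukaemon/bbh",
# the "dyck_languages" config, and the "input"/"target" fields are assumed from
# a public mirror, not confirmed against the copy used for fine-tuning.
from datasets import load_dataset

ds = load_dataset("lukaemon/bbh", "dyck_languages", split="test")
example = ds[0]
print(example["input"])   # prompt ending in an unclosed bracket sequence
print(example["target"])  # the expected closing brackets, e.g. "] ]"
```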

This dataset evaluates a model's ability to correctly complete structured sequences, a capability that underpins handling programming-language syntax, formal-language understanding, and symbolic reasoning.
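Because the correct completion is fully determined by the unclosed brackets in the prompt, model outputs can be scored programmatically. The helper below is an illustrative sketch (not part of this repository) that derives the expected closing sequence with a stack and compares it against a model completion.

```python
# Illustrative scoring helper (not part of this repository): recover the expected
# closing sequence for a Dyck-language prefix with a stack, then compare it to a
# model completion token by token.
PAIRS = {"(": ")", "[": "]", "{": "}", "<": ">"}

def expected_closing(prefix: str) -> str:
    """Return the space-separated closing brackets that balance `prefix`."""
    stack = []
    for token in prefix.split():
        if token in PAIRS:
            stack.append(token)
        else:
            stack.pop()  # assumes the prefix itself is well-formed so far
    return " ".join(PAIRS[tok] for tok in reversed(stack))

def is_correct(prefix: str, completion: str) -> bool:
    return completion.split() == expected_closing(prefix).split()

print(expected_closing("( [ { } ]"))   # ")"
print(is_correct("[ [", "] ]"))        # True
```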