Qwen2.5-3B Fine-Tuned on BBH (Dyck Languages) - Model Card

Model Overview

Model Name: Qwen2.5-3B Fine-Tuned on BBH (Dyck Languages)

Base Model: Qwen2.5-3B-Instruct

Fine-Tuned Dataset: BBH (BIG-Bench Hard) - Dyck Languages

Task: Causal Language Modeling (CLM)

Fine-Tuning Objective: Improve performance on Dyck language sequence completion (correctly closing nested parentheses, brackets, and braces)
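
A minimal usage sketch with the Hugging Face transformers library is shown below. The repository id is a placeholder rather than the actual model location, and the sketch assumes the fine-tuned checkpoint keeps the base model's chat template.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/qwen2.5-3b-bbh-dyck"  # placeholder repo id, not the real one

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = ("Complete the rest of the sequence, making sure that the parentheses "
          "are closed properly. Input: [ {")

# Qwen2.5-Instruct checkpoints expect a chat-formatted prompt.
messages = [{"role": "user", "content": prompt}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=32, do_sample=False)
completion = tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(completion)  # the gold closing sequence for this prompt would be "} ]"
```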

Dataset Information

This model was fine-tuned on the Dyck Languages subset of the BIG-Bench Hard (BBH) dataset.

Dataset characteristics:

Task Type: Sequence completion of balanced parentheses

Input Format: A sequence of opening parentheses, brackets, or braces whose closing elements are missing

Target Labels: The correct sequence of closing parentheses, brackets, or braces

Example:

Input: Complete the rest of the sequence, making sure that the parentheses are closed properly. Input: [ [

Target: ] ]
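
For reference, the gold closing sequence for any well-formed prefix can be derived with a simple stack. The sketch below is purely illustrative (it is not part of the dataset or the training code) and is convenient for sanity-checking model completions.

```python
# Derive the closing sequence for a space-separated Dyck-style prefix.
PAIRS = {"(": ")", "[": "]", "{": "}", "<": ">"}

def close_sequence(prefix: str) -> str:
    """Return the closing brackets needed to balance `prefix`."""
    stack = []
    for tok in prefix.split():
        if tok in PAIRS:
            stack.append(tok)
        else:
            # A closing token must match the most recently opened bracket.
            assert stack and PAIRS[stack.pop()] == tok, "malformed prefix"
    # Close the remaining openers in reverse order (innermost first).
    return " ".join(PAIRS[tok] for tok in reversed(stack))

print(close_sequence("[ ["))        # -> ] ]
print(close_sequence("( [ { } ]"))  # -> )
```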

This dataset evaluates a model's ability to correctly complete structured sequences, which is crucial for programming language syntax, formal language understanding, and symbolic reasoning.
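
To inspect the data directly, one commonly used Hugging Face Hub mirror is lukaemon/bbh; the repo id, the test split, and the input/target column names below are assumptions about that mirror rather than guarantees.

```python
from datasets import load_dataset

# "lukaemon/bbh" is a community mirror of BBH; "dyck_languages" selects this task.
ds = load_dataset("lukaemon/bbh", "dyck_languages", split="test")

example = ds[0]
print(example["input"])   # prompt containing an unbalanced bracket sequence
print(example["target"])  # gold closing sequence, e.g. "] ]"
```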