---
license: mit
datasets:
- flwrlabs/code-alpaca-20k
language:
- en
metrics:
- accuracy
base_model:
- HuggingFaceTB/SmolLM2-1.7B-Instruct
pipeline_tag: text-generation
library_name: peft
tags:
- text-generation-inference
- code
---
## Evaluation Results (Pass@1)

- HumanEval: 30.49%
- MBPP: 34.00%
- MultiPL-E (C++): 23.60%
- MultiPL-E (JS): 18.63%
- Average: 26.68%
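For reference, Pass@k scores like those above are commonly computed with the unbiased estimator from the HumanEval paper (Chen et al., 2021); for k=1 it reduces to the fraction of generated samples that pass the unit tests. A minimal sketch (not the exact evaluation harness used here):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one
    of k completions drawn from n generated samples (c of which
    pass the unit tests) is correct."""
    if n - c < k:
        return 1.0  # every draw of k must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# With n=10 samples per problem and c=3 passing, pass@1 is the
# per-problem success rate, here 3/10.
print(round(pass_at_k(10, 3, 1), 4))
```

The leaderboard score is this quantity averaged over all problems in each benchmark.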
## Model Details
This PEFT adapter was trained with Flower, a friendly federated AI framework.
The adapter and benchmark results have been submitted to the FlowerTune LLM Code Leaderboard.
Please check the following GitHub project for details on how to reproduce the training and evaluation steps: