Model Card for openai-gsm8k_meta-llama-Llama-3.2-3B_sft_lora

This model is a LoRA supervised fine-tune (SFT) of meta-llama/Llama-3.2-3B on the openai/gsm8k dataset. It has been trained using TRL.
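A minimal usage sketch, assuming the adapter weights are loaded with peft on top of the base model (the adapter repo id below is taken from this card, and the prompt template is an assumption; check the training script for the exact format used during SFT):

```python
def format_gsm8k_prompt(question: str) -> str:
    # Assumed prompt template -- verify against the SFT training script.
    return f"Question: {question}\nAnswer:"


if __name__ == "__main__":
    # Heavy imports kept inside the guard so the helper above is
    # importable without torch/peft installed.
    import torch
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    base_id = "meta-llama/Llama-3.2-3B"
    # Adapter repo id as listed on this card (assumed to contain the LoRA weights).
    adapter_id = "SidhaarthMurali/original_llama3.2-3b-gsm8k_lora"

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(
        base_id, torch_dtype=torch.bfloat16, device_map="auto"
    )
    # Attach the LoRA adapter on top of the frozen base weights.
    model = PeftModel.from_pretrained(base, adapter_id)

    prompt = format_gsm8k_prompt(
        "Natalia sold clips to 48 of her friends in April, and then she sold "
        "half as many clips in May. How many clips did Natalia sell altogether?"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    # Decode only the newly generated tokens.
    answer = tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    print(answer)
```

For greedy, reproducible evaluation on GSM8K-style questions, `do_sample=False` is used here; swap in sampling parameters if you want varied outputs.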

Framework versions

  • TRL: 0.12.2
  • Transformers: 4.46.3
  • PyTorch: 2.5.1
  • Datasets: 3.1.0
  • Tokenizers: 0.20.3
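An environment matching the versions above can be reproduced with pinned installs (a sketch; `peft` is required for the LoRA adapter but its version is not listed on this card, so it is left unpinned, and you may need a CUDA-specific PyTorch build for your system):

```shell
pip install trl==0.12.2 transformers==4.46.3 torch==2.5.1 datasets==3.1.0 tokenizers==0.20.3 peft
```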

Citations

Cite TRL as:

@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
