Uploaded model

  • Developed by: ShubhamSinghCodes
  • License: apache-2.0
  • Distilled from model: ShubhamSinghCodes/PyNanoLM-big

This Llama-architecture model was trained 2x faster with Unsloth and Hugging Face's TRL library. It is a fast, lightweight, not entirely stupid model that assists with Python programming: a very small, 3.5M-parameter model finetuned and distilled from SmolLM. (WIP)

  • Model size: 35M params (Safetensors)
  • Tensor type: F32
