A newer version of this model is available: ShubhamSinghCodes/PyNanoLm

Uploaded model

  • Developed by: ShubhamSinghCodes
  • License: apache-2.0
  • Finetuned from model: unsloth/smollm-135m-instruct-bnb-4bit

This Llama-architecture model was trained 2x faster with Unsloth and Hugging Face's TRL library. It is meant as a first step towards a fast, lightweight, not-entirely-stupid model that assists with Python programming. (Work in progress.)
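The fine-tune follows the usual Unsloth + TRL supervised fine-tuning recipe. The sketch below is a hedged reconstruction of that workflow rather than the exact training script: the dataset file, sequence length, and all hyperparameters are illustrative placeholders, and the exact `SFTTrainer` arguments differ between TRL versions.

```python
# Hedged reproduction sketch of the Unsloth + TRL fine-tuning flow.
# The dataset file and every hyperparameter below are placeholders, not the
# values actually used to train PyNanoLM-big.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit base model that PyNanoLM-big was finetuned from.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/smollm-135m-instruct-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# Hypothetical instruction dataset with a "text" column.
dataset = load_dataset("json", data_files="python_instructions.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # newer TRL versions take this via SFTConfig instead
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        max_steps=60,
        output_dir="outputs",
    ),
)
trainer.train()
```

For plain inference, the model should load with the standard transformers API like any other SmolLM-style chat model. The snippet below is a minimal usage sketch; it assumes the tokenizer keeps the SmolLM-instruct chat template, and the prompt and generation settings are only examples.

```python
# Minimal inference sketch using the standard transformers API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ShubhamSinghCodes/PyNanoLM-big"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

# SmolLM-instruct derivatives expect chat-formatted prompts.
messages = [{"role": "user", "content": "Write a Python function that checks if a string is a palindrome."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")

outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```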

  • Format: Safetensors
  • Model size: 135M params
  • Tensor type: FP16

Model tree for ShubhamSinghCodes/PyNanoLM-big

  • Finetunes: 1 model
  • Quantizations: 2 models
