---
library_name: transformers
tags:
- code
datasets:
- jtatman/python-code-dataset-500k
language:
- en
base_model:
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
pipeline_tag: text-generation
license: apache-2.0
---
- Developed by: [More Information Needed]
- Finetuned from model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
## Training Hyperparameters

The model was fine-tuned with the following command:

```bash
python examples/scripts/sft.py \
    --model_name TinyLlama/TinyLlama-1.1B-Chat-v1.0 \
    --dataset_name jtatman/python-code-dataset-500k \
    --load_in_4bit \
    --dataset_text_field text \
    --per_device_train_batch_size 2 \
    --per_device_eval_batch_size 8 \
    --gradient_accumulation_steps 1 \
    --learning_rate 2e-4 \
    --optim adamw_torch \
    --save_steps 2000 \
    --logging_steps 500 \
    --warmup_ratio 0 \
    --use_peft \
    --lora_r 64 \
    --lora_alpha 16 \
    --lora_dropout 0.1 \
    --report_to wandb \
    --num_train_epochs 1 \
    --output_dir TinyLlama-1.1B-Chat-v1.0-PCD250k_v0.1
```
Note that only 250K of the dataset's 500K examples were used for fine-tuning; of those, 70% were used for training and 30% for evaluation.
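The preprocessing script is not included in this card, but the 250K subset and the 70/30 split could be prepared along the following lines with the `datasets` library. This is only a minimal sketch: the shuffling, seed, and selection strategy are assumptions, not the exact procedure used.

```python
from datasets import load_dataset

# Load the full 500K-example dataset from the Hub.
dataset = load_dataset("jtatman/python-code-dataset-500k", split="train")

# Keep 250K examples (assumption: a simple shuffled subset).
subset = dataset.shuffle(seed=42).select(range(250_000))

# 70% train / 30% eval split, matching the ratio described above.
splits = subset.train_test_split(test_size=0.3, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]

print(len(train_ds), len(eval_ds))  # 175000 75000
```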
## Usage
```python
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="SSK-DNB/TinyLlama-1.1B-Chat-v1.0-PCD250k_v0.1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Example problem statement to translate into Python code.
text = '''Create a program that determines whether a given year is a leap year or not.
The input is an integer Y (1000 ≤ Y ≤ 2999) representing a year, provided in a single line.
Output "YES" if the given year is a leap year, otherwise output "NO" in a single line.
A leap year is determined according to the following rules:
Rule 1: A year divisible by 4 is a leap year.
Rule 2: A year divisible by 100 is not a leap year.
Rule 3: A year divisible by 400 is a leap year.
Rule 4: If none of the above rules (Rule 1-3) apply, the year is not a leap year.
If a year satisfies multiple rules, the rule with the higher number takes precedence.
'''

texts = f"Translate the following problem statement into Python code:\n{text}"

messages = [
    {"role": "system", "content": "You are a chatbot who can help code!"},
    {"role": "user", "content": texts},
]

# Build the prompt with the model's chat template and generate a completion.
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(
    prompt,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    repetition_penalty=1.0,
    top_k=50,
    top_p=1.0,
    min_p=0,
)
print(outputs[0]["generated_text"])
```
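For reference, the rules in the example problem correspond to the standard Gregorian leap-year logic, so the model's generated program can be checked against a straightforward solution like the one below (this is a reference implementation, not the model's output):

```python
# Reference solution for the leap-year problem above.
Y = int(input())
# Rule 3 (divisible by 400) overrides Rule 2 (divisible by 100),
# which overrides Rule 1 (divisible by 4).
if Y % 400 == 0 or (Y % 4 == 0 and Y % 100 != 0):
    print("YES")
else:
    print("NO")
```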