from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("heegyu/TinyMistral-248M-v2.5-Instruct-orpo")
model = AutoModelForCausalLM.from_pretrained("heegyu/TinyMistral-248M-v2.5-Instruct-orpo")

conv = [
  {
    'role': 'user',
    'content': 'What can I do with Large Language Model?'
  }
]
# Build the input ids from the chat template, then generate a response
prompt = tokenizer.apply_chat_template(conv, add_generation_prompt=True, return_tensors="pt")
output = model.generate(prompt, max_new_tokens=128)
print(tokenizer.decode(output[0]))
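The decode call above prints the full sequence, prompt included. To show only the model's reply, the generated ids can be sliced past the prompt length before decoding — a minimal sketch building on the snippet above (the `do_sample`/`temperature` settings are illustrative, not tuned values):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("heegyu/TinyMistral-248M-v2.5-Instruct-orpo")
model = AutoModelForCausalLM.from_pretrained("heegyu/TinyMistral-248M-v2.5-Instruct-orpo")

conv = [{"role": "user", "content": "What can I do with Large Language Model?"}]
prompt = tokenizer.apply_chat_template(conv, add_generation_prompt=True, return_tensors="pt")

# Sample a reply; sampling parameters here are illustrative assumptions.
output = model.generate(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)

# Slice off the prompt tokens so only the assistant's reply is decoded.
reply = tokenizer.decode(output[0][prompt.shape[-1]:], skip_special_tokens=True)
print(reply)
```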
Model size: 248M parameters (Safetensors) · Tensor type: BF16