```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("heegyu/TinyMistral-248M-v2.5-Instruct-orpo")
model = AutoModelForCausalLM.from_pretrained("heegyu/TinyMistral-248M-v2.5-Instruct-orpo")

conv = [
    {
        "role": "user",
        "content": "What can I do with Large Language Model?"
    }
]

# Format the conversation with the model's chat template and tokenize it
prompt = tokenizer.apply_chat_template(conv, add_generation_prompt=True, return_tensors="pt")

# Note: the keyword is `max_new_tokens` (plural), not `max_new_token`
output = model.generate(prompt, max_new_tokens=128)
print(tokenizer.decode(output[0]))
```
Downloads last month: 166
Model tree for heegyu/TinyMistral-248M-v2.5-Instruct-orpo
- Base model: Locutusque/TinyMistral-248M-v2.5