Project Introduction

  • GPU: T4
  • Model: Llama3-8B
  • Dataset: LooksJuicy/ruozhiba
  • Fine-tuning method: QLoRA (a configuration sketch follows this list)

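QLoRA quantizes the frozen base model to 4-bit and trains low-rank adapters on top of it. The sketch below shows a minimal setup with Transformers and PEFT; the base checkpoint name and all hyperparameters (rank, alpha, dropout, target modules) are assumptions, since this card does not publish the exact training settings.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the frozen base weights (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)

# Base checkpoint name is an assumption; the card only states "Llama3-8B".
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3-8B",
    quantization_config=bnb_config,
    device_map="auto",
)

# Hypothetical LoRA hyperparameters; adjust rank/alpha/target modules as needed.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
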
Usage

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "snowfly/Llama-3-8B-QLoRA-ruozhiba"

# device_map="auto" places the weights on the available GPU (e.g. a T4).
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Prompt in the "### Human: ... ### Assistant:" format used by this example ("How do I write code?").
text = "### Human:如何写代码? ### Assistant:"

# Tokenize on the same device as the model, then generate.
inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
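To stay within the T4's 16 GB of memory, the model can also be loaded with on-the-fly 4-bit quantization rather than full FP16 weights. The snippet below is a sketch assuming the checkpoint loads through the standard bitsandbytes integration in Transformers; the quantization settings are illustrative.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "snowfly/Llama-3-8B-QLoRA-ruozhiba"

# NF4 4-bit quantization with FP16 compute keeps the 8B model on a single T4.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)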
Model size: 4.65B params (Safetensors; tensor types: FP16 · F32 · U8)

Dataset used to train snowfly/Llama-3-8B-QLoRA-ruozhiba: LooksJuicy/ruozhiba