---
base_model: llm-jp/llm-jp-3-13b
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** ak0327
- **License:** apache-2.0
- **Finetuned from model:** llm-jp/llm-jp-3-13b

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

# How to use

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig


def load_model(model_name):
    # QLoRA config: load the model in 4-bit NF4 with bfloat16 compute
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=False,
    )

    # Load the quantized model, sharding it across available devices
    model = AutoModelForCausalLM.from_pretrained(
        model_name,
        quantization_config=bnb_config,
        device_map="auto",
        token=HF_TOKEN,  # Hugging Face access token, defined elsewhere
    )

    # Load the matching tokenizer
    tokenizer = AutoTokenizer.from_pretrained(
        model_name,
        trust_remote_code=True,
        token=HF_TOKEN,
    )

    return model, tokenizer


model_name = "ak0327/llm-jp-3-13b-ft-5"
model, tokenizer = load_model(model_name)

# load_test_datasets() and inference() are user-defined helpers (not shown here)
datasets = load_test_datasets()
results = inference(model_name, datasets, model, tokenizer)
```
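Since `load_test_datasets()` and `inference()` are user-defined helpers not shown above, here is a minimal sketch of running a single prompt through the loaded model. The prompt text and generation parameters are assumptions for illustration; adjust them to the prompt format this fine-tune was trained on.

```python
# Minimal single-prompt inference sketch, reusing the model and tokenizer
# returned by load_model() above. The prompt is a placeholder.
prompt = "Explain what instruction tuning is."

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=False,
    )

# Decode only the newly generated tokens, skipping the echoed prompt
generated = output_ids[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))
```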