Languages: Japanese, English

This model is a LoRA adapter for Llama-2-7b-chat-hf, fine-tuned on a Japanese instruction dataset.

It was fine-tuned through the joint efforts of Sparticle Inc. and A. I. Hakusan Inc.

The training set of this model contains:

  • A randomly sampled 5% of the llm-japanese-dataset by izumi-lab.
  • The japanese-alpaca-lora dataset, retrieved from https://github.com/masa3141/japanese-alpaca-lora/tree/main

For inference, please follow the instructions at https://github.com/tloen/alpaca-lora/.
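As a starting point, the sketch below shows one way to load the adapter with transformers and PEFT. The base-model hub id, the 8-bit loading flags, and the Alpaca-style prompt template are assumptions based on this card and the alpaca-lora repository, not verified training settings.

```python
# Minimal inference sketch (assumptions: base model id, 8-bit loading, Alpaca prompt format).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-chat-hf"               # assumed base model hub id
adapter_id = "Sparticle/llama-2-7b-chat-japanese-lora"   # this LoRA adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    load_in_8bit=True,        # matches the 8-bit config listed below
    torch_dtype=torch.float16,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

# Alpaca-style prompt, as used in tloen/alpaca-lora (assumed to match training).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n日本の首都はどこですか？\n\n### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```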

Training procedure

The following bitsandbytes quantization config was used during training:

  • load_in_8bit: True
  • llm_int8_threshold: 6.0
  • llm_int8_skip_modules: None
  • llm_int8_enable_fp32_cpu_offload: False
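
For reference, the settings above map directly onto a transformers BitsAndBytesConfig, as in the sketch below; this is a reconstruction from the listed values, not the original training script.

```python
# Sketch: the listed bitsandbytes settings expressed as a BitsAndBytesConfig.
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
)
# Pass via quantization_config=bnb_config when loading the base model.
```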

Framework versions

  • PEFT 0.5.0.dev0

You must agree to Meta's license terms when using this LoRA adapter with Llama-2.

