---
license: apache-2.0
datasets:
- graelo/wikipedia
- uonlp/CulturaX
- HuggingFaceH4/ultrachat_200k
language:
- ja
- en
---
# How to use
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
import torch

tokenizer = AutoTokenizer.from_pretrained("lightblue/karasu-7B")
model = AutoModelForCausalLM.from_pretrained("lightblue/karasu-7B", torch_dtype=torch.bfloat16, device_map="auto")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

messages = [{"role": "system", "content": "あなたはAIアシスタントです。"}]  # "You are an AI assistant."
messages.append({"role": "user", "content": "イギリスの首相は誰ですか?"})  # "Who is the Prime Minister of the UK?"

# Render the conversation as a single prompt string using the model's chat template
prompt = tokenizer.apply_chat_template(conversation=messages, add_generation_prompt=True, tokenize=False)

# Greedy decoding (do_sample=False), returning only the newly generated text
pipe(prompt, max_new_tokens=100, do_sample=False, return_full_text=False)
```
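The call returns a list of dicts; because `return_full_text=False` is set, each dict's `generated_text` field contains only the model's reply rather than the prompt plus the reply.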
# Base checkpoint
[augmxnt/shisa-7b-v1](https://huggingface.co/augmxnt/shisa-7b-v1)
- Mistral-7B base
- Pre-trained on 8B tokens of Japanese MADLAD-400 data
- Fine-tuned on Japanese instructions
- Highest-scoring 7B model on the Japanese MT-Bench conversation benchmark
# Training datasets (total ~7B tokens)
- Aozora Bunko
- Japanese Law Precedent Dataset
- Japanese Wikipedia
- Web scrapes of the .lg.jp, .go.jp, and .ac.jp domains from CulturaX (documents sharing the same first 25 characters were de-duplicated; see the sketch after this list)
- English Ultrachat200K-gen (so the model retains the English and chat abilities learned by the base checkpoint)
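To illustrate the domain filtering and prefix de-duplication described above, here is a minimal sketch. The exact pipeline is not published, so the `url` and `text` field names and both helper functions are illustrative assumptions; only the rule itself (drop any document whose first 25 characters have already been seen) comes from the description above.

```python
# Minimal sketch, NOT Lightblue's actual preprocessing code.
from urllib.parse import urlparse

def keep_gov_edu_domains(docs):
    # Keep only documents whose source URL is under .lg.jp, .go.jp, or .ac.jp
    suffixes = (".lg.jp", ".go.jp", ".ac.jp")
    for doc in docs:
        host = urlparse(doc["url"]).hostname or ""
        if host.endswith(suffixes):
            yield doc

def dedup_by_prefix(docs, prefix_len=25):
    # Drop any document whose first `prefix_len` characters were already seen
    seen = set()
    for doc in docs:
        prefix = doc["text"][:prefix_len]
        if prefix not in seen:
            seen.add(prefix)
            yield doc

# Example usage with hypothetical documents
docs = [
    {"url": "https://www.soumu.go.jp/a", "text": "総務省のページ..."},
    {"url": "https://example.com/b", "text": "unrelated page..."},
]
kept = list(dedup_by_prefix(keep_gov_edu_domains(docs)))
```

De-duplicating on a fixed-length prefix is a cheap heuristic: it catches boilerplate pages and mirrored documents without computing full-document hashes or similarity scores.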
# Developed by

![Lightblue technology logo](https://www.lightblue-tech.com/wp-content/uploads/2021/10/LBlogo-scaled.jpg)

## Engineers
Peter Devine

Sho Higuchi

## Advisors
Yuuki Yamanaka

Atom Sonoda

## Dataset evaluator
Renju Aoki