Uploaded model

  • Developed by: kky84176
  • License: apache-2.0
  • Finetuned from model: unsloth/gemma-2-9b-it-bnb-4bit

This Gemma 2 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
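For reference, here is a minimal sketch of what an Unsloth + TRL fine-tuning setup for this base model typically looks like. The toy dataset, LoRA rank, and training hyperparameters below are illustrative assumptions, not the exact recipe used for this model:

from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import Dataset

# Load the 4-bit base model through Unsloth.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-2-9b-it-bnb-4bit",
    max_seq_length=1024,  # assumption; choose to fit your data
    load_in_4bit=True,
)

# Attach LoRA adapters (rank/alpha values are illustrative).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Toy dataset in the same "### 指示 / ### 回答" prompt format the
# inference code below uses; replace with the real training data.
train_dataset = Dataset.from_list([
    {"text": "### 指示\n日本の首都はどこですか?\n### 回答\n東京です。"},
])

# Supervised fine-tuning with TRL's SFTTrainer (older-style signature,
# as used in the Unsloth example notebooks).
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,
    dataset_text_field="text",
    max_seq_length=1024,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()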

Sample Use

The following code generates answers for elyza-tasks-100-TV_0.jsonl.

from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
)
import torch
from tqdm import tqdm
import json

HF_TOKEN = "your-token"
model_name = "kky84176/gemma-2-9b-it-it01"

# Quantization configuration
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",  # NF4 is more accurate than plain INT4 and better matches the weight distribution of neural networks
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the model
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
    token=HF_TOKEN,
)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True, token=HF_TOKEN)

# Load the dataset (a record may span multiple lines, so accumulate until the closing brace)
datasets = []
with open("./elyza-tasks-100-TV_0.jsonl", "r") as f:
    item = ""
    for line in f:
        line = line.strip()
        item += line
        if item.endswith("}"):
            datasets.append(json.loads(item))
            item = ""

# Run inference with the model
results = []
for data in tqdm(datasets):
    input_text = data["input"]  # avoid shadowing the built-in input()

    # Prompt using the "### 指示" (instruction) / "### 回答" (answer) markers;
    # built flush-left so no stray leading spaces end up in the prompt.
    prompt = f"""### 指示
{input_text}
### 回答
"""

    tokenized_input = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt").to(model.device)
    attention_mask = torch.ones_like(tokenized_input)

    with torch.no_grad():
        outputs = model.generate(
            tokenized_input,
            attention_mask=attention_mask,
            max_new_tokens=512,
            do_sample=False,
            repetition_penalty=1.2,
            pad_token_id=tokenizer.eos_token_id,
        )[0]
    # Decode only the newly generated tokens, not the prompt.
    output = tokenizer.decode(outputs[tokenized_input.size(1):], skip_special_tokens=True)

    results.append({"task_id": data["task_id"], "input": input_text, "output": output})

# Write the results to JSONL
import re
new_model_id = "gemma-2-9b-it-it01"
jsonl_id = re.sub(".*/", "", new_model_id)  # strip any org prefix from the model id
with open(f"./{jsonl_id}-outputs.jsonl", 'w', encoding='utf-8') as f:
    for result in results:
        json.dump(result, f, ensure_ascii=False)  # ensure_ascii=False preserves non-ASCII characters
        f.write('\n')
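
To spot-check the generated file, the outputs can be read back and inspected. A minimal sketch; the file name follows from the script above:

import json

with open("./gemma-2-9b-it-it01-outputs.jsonl", "r", encoding="utf-8") as f:
    results = [json.loads(line) for line in f]
print(f"{len(results)} answers written")
print(results[0]["task_id"], results[0]["output"][:100])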