Overview

This model generates text resembling viewer comments on live streams such as YouTube Live. It was trained with LoRA on top of rinna/japanese-gpt-neox-3.6b-instruction-ppo; only the resulting adapter weights (adapter_model) are uploaded in this repository, so the base model must be loaded separately.
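
The expected input follows the rinna instruction format: a single ユーザー: utterance followed by システム: , joined with the <NL> newline placeholder, as shown in the usage example below. A minimal illustration (the helper function name is not part of this repository):

def build_prompt(user_utterance: str) -> str:
    # <NL> is the newline placeholder used by the rinna instruction-tuned models.
    return f"ユーザー: {user_utterance}<NL>システム: "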

How to use the model

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Load the base model and tokenizer (use_fast=False as recommended for rinna tokenizers).
tokenizer = AutoTokenizer.from_pretrained("rinna/japanese-gpt-neox-3.6b-instruction-ppo", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("rinna/japanese-gpt-neox-3.6b-instruction-ppo", torch_dtype=torch.float16, device_map="auto")

# Attach the LoRA adapter to the base model.
peft_model = PeftModel.from_pretrained(model, "oshizo/comment-generation-japanese-3.6b-lora", device_map="auto")


prompt = f"ユーザー: 今朝うちの小さな畑でトマトがね、いい感じに赤くなってたんだよね。そのまま通学路を歩いてたんだけどさ、一つちぎって弁当に入れておけば良かっな~と思って。トマト可愛くて好き。<NL>システム: "
token_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")

with torch.no_grad():
    # Sample several candidate comments for the same prompt through the LoRA-adapted model.
    output_ids = peft_model.generate(
        token_ids.to(peft_model.device),
        do_sample=True,
        max_new_tokens=32,
        num_return_sequences=4,
        pad_token_id=tokenizer.pad_token_id,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id
    )

# Decode only the newly generated tokens, dropping the prompt.
for output in output_ids.tolist():
    print(tokenizer.decode(output[token_ids.size(1):], skip_special_tokens=True))

# これから剥くの面倒くさいよ<NL>
# なんやその可愛い好きは<NL>
# 冷やしておくと美味しいよな<NL>
# 食レポ具体的に<NL>
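
If you prefer to serve the model without the peft dependency at inference time, the adapter can be folded into the base weights with PEFT's merge_and_unload. This is a minimal sketch; the output directory name is only an example:

# Merge the LoRA weights into the base model and save a standalone checkpoint.
merged_model = peft_model.merge_and_unload()
merged_model.save_pretrained("./comment-generation-japanese-3.6b-merged")
tokenizer.save_pretrained("./comment-generation-japanese-3.6b-merged")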