---
license: apache-2.0
base_model:
  - cyberagent/DeepSeek-R1-Distill-Qwen-14B-Japanese
datasets:
  - HuggingFaceH4/ultrachat_200k
---

# TinyDeepSeek-JP-1.5B

This model is a smaller version of cyberagent/DeepSeek-R1-Distill-Qwen-14B-Japanese, a compact distillation of DeepSeek-R1 further trained on Japanese, obtained by applying TAID, a new distillation method proposed by SakanaAI.

- Teacher model: cyberagent/DeepSeek-R1-Distill-Qwen-14B-Japanese
- Student model: SakanaAI/TinySwallow-1.5B-Instruct
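
For intuition, TAID trains the student against a time-dependent target distribution that interpolates between the student's own (detached) output distribution and the teacher's, shifting toward the teacher as training progresses. Below is a minimal PyTorch sketch of such an interpolated loss; the probability-space mixture and the fixed coefficient `t` are simplifying assumptions for illustration, not the exact TAID objective or the training code used for this model.

```python
import torch
import torch.nn.functional as F

def taid_style_loss(student_logits: torch.Tensor,
                    teacher_logits: torch.Tensor,
                    t: float) -> torch.Tensor:
    """Simplified interpolated-distillation loss (illustration only).

    The target is a probability-space mixture of the detached student
    distribution and the teacher distribution, weighted by a schedule
    t in [0, 1] that moves toward the teacher as training progresses.
    """
    student_log_probs = F.log_softmax(student_logits, dim=-1)
    with torch.no_grad():
        # Mixture target: starts near the student, ends at the teacher.
        teacher_probs = F.softmax(teacher_logits, dim=-1)
        student_probs = F.softmax(student_logits, dim=-1)
        target = (1.0 - t) * student_probs + t * teacher_probs
    # KL(target || student), summed over the vocabulary and averaged over the batch.
    return F.kl_div(student_log_probs, target, reduction="batchmean")

# Example with dummy logits: batch of 2 sequences, 8 tokens, vocab of 32000.
student_logits = torch.randn(2, 8, 32000, requires_grad=True)
teacher_logits = torch.randn(2, 8, 32000)
loss = taid_style_loss(student_logits, teacher_logits, t=0.3)
loss.backward()
```

In the actual method, the interpolation coefficient is adapted over the course of training rather than supplied as a fixed value.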

## Uses

Intended uses follow those of the original models.
This model is provided for research and development purposes only and should be considered an experimental prototype. It is not intended for commercial use or deployment in mission-critical environments. Use of this model is at the user's own risk, and its performance and outcomes are not guaranteed. EQUES Inc. shall not be liable for any direct, indirect, special, incidental, or consequential damages, or any loss arising from the use of this model, regardless of the results obtained. Users must fully understand the risks associated with the use of this model and use it at their own discretion.

## Output Examples

Example prompts:

- Give me a short introduction to large language model.
- 大規模言語モデルについて教えて。 (Tell me about large language models.)
- A regular hexagon can be divided into six equilateral triangles. If the perimeter of one of the triangles is 21 inches, what is the perimeter, in inches, of the regular hexagon?
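
(For reference on the hexagon question: each equilateral triangle shares a side with the hexagon, so its sides are 21 / 3 = 7 inches and the hexagon's perimeter is 6 × 7 = 42 inches.)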

## Sample Usage

```python
import os

# Restrict CUDA to a single GPU before importing transformers.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EQUES/TinyDeepSeek-JP-1.5B"

# Load the model and tokenizer; device_map="auto" places the model on the visible GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "大規模言語モデルについて教えて。"
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]

# Render the conversation with the model's chat template and append the generation prompt.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,
)

# Strip the prompt tokens so only the newly generated continuation is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

## License

Apache-2.0

## Acknowledgement

- SakanaAI & Swallow team: development and release of TinySwallow-1.5B
- SakanaAI: development of TAID
- CyberAgent: development of DeepSeek-R1-Distill-Qwen-14B-Japanese