R1 not outputting the full model response with the transformers pipeline

#21 · opened by rcgalbo

# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "Reason about the number 0?"},
]
pipe = pipeline("text-generation", model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B")
pipe(messages)

output:
----------
[{'generated_text': [{'role': 'user', 'content': 'Reason about the number 0?'},
   {'role': 'assistant',
    'content': 'To determine the value of 0, I first consider the definition of zero as the absence of any'}]}]
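The cutoff is most likely the pipeline's default generation length rather than the model itself: without an explicit limit, text-generation falls back to the stock GenerationConfig, which typically allows only around 20 new tokens. Below is a minimal sketch of the same call with an explicit max_new_tokens; the value 1024 is just an illustrative assumption and can be raised for long reasoning outputs.

from transformers import pipeline

messages = [
    {"role": "user", "content": "Reason about the number 0?"},
]
pipe = pipeline("text-generation", model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B")

# Generation kwargs can be passed straight through the pipeline call;
# max_new_tokens caps how many tokens the model may generate in its reply.
result = pipe(messages, max_new_tokens=1024)

# With chat-style input, generated_text is the message list; the last entry
# is the assistant reply.
print(result[0]["generated_text"][-1]["content"])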
