• finish_reason: TextGenerationStreamFinishReason
Generation finish reason
inference/src/tasks/nlp/textGenerationStream.ts:36
• generated_text: string
Generated text
inference/src/tasks/nlp/textGenerationStream.ts:34
• generated_tokens: number
Number of generated tokens
inference/src/tasks/nlp/textGenerationStream.ts:38
• prefill: TextGenerationStreamPrefillToken[]
Prompt tokens
inference/src/tasks/nlp/textGenerationStream.ts:42
• Optional seed: number
Sampling seed if sampling was activated
inference/src/tasks/nlp/textGenerationStream.ts:40
• tokens: TextGenerationStreamToken[]
Generated tokens
inference/src/tasks/nlp/textGenerationStream.ts:44
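Taken together, the properties above describe the details object attached to the final event of a text-generation stream. The sketch below reconstructs that shape as a self-contained TypeScript interface and shows how a consumer might summarize it; the finish-reason values and the per-token fields (`id`, `text`, `logprob`, `special`) are assumptions based on common text-generation-inference payloads, not confirmed by this page — check the library source for the exact types.

```typescript
// Reconstructed from the documented fields; token shapes and the
// finish-reason union are assumptions, not taken from this page.
type TextGenerationStreamFinishReason = "length" | "eos_token" | "stop_sequence";

interface TextGenerationStreamPrefillToken {
  id: number;      // token id (assumed field)
  text: string;    // decoded token text (assumed field)
  logprob: number; // log probability (assumed field)
}

interface TextGenerationStreamToken extends TextGenerationStreamPrefillToken {
  special: boolean; // whether this is a special token (assumed field)
}

interface TextGenerationStreamDetails {
  finish_reason: TextGenerationStreamFinishReason; // Generation finish reason
  generated_text: string;                          // Generated text
  generated_tokens: number;                        // Number of generated tokens
  prefill: TextGenerationStreamPrefillToken[];     // Prompt tokens
  seed?: number;                                   // Sampling seed, if sampling was activated
  tokens: TextGenerationStreamToken[];             // Generated tokens
}

// Example consumer: summarize a finished stream's details in one line.
function summarize(details: TextGenerationStreamDetails): string {
  const seed = details.seed !== undefined ? `, seed=${details.seed}` : "";
  return `${details.generated_tokens} tokens (${details.finish_reason}${seed})`;
}

const example: TextGenerationStreamDetails = {
  finish_reason: "eos_token",
  generated_text: "Hello world",
  generated_tokens: 2,
  prefill: [],
  seed: 42,
  tokens: [],
};

console.log(summarize(example)); // "2 tokens (eos_token, seed=42)"
```

In the real streaming API, `details` is only present on the last chunk of the stream, so a consumer should treat it as optional until the stream completes.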