• Optional best_of_sequences: TextGenerationStreamBestOfSequence[]
Additional sequences when using the best_of parameter
inference/src/tasks/nlp/textGenerationStream.ts:66
• finish_reason: TextGenerationStreamFinishReason
Generation finish reason
inference/src/tasks/nlp/textGenerationStream.ts:56
• generated_tokens: number
Number of generated tokens
inference/src/tasks/nlp/textGenerationStream.ts:58
• prefill: TextGenerationStreamPrefillToken[]
Prompt tokens
inference/src/tasks/nlp/textGenerationStream.ts:62
• Optional seed: number
Sampling seed if sampling was activated
inference/src/tasks/nlp/textGenerationStream.ts:60
• tokens: TextGenerationStreamToken[]
Generated tokens
inference/src/tasks/nlp/textGenerationStream.ts:64
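Taken together, these properties describe the `details` object attached to the final event of a text-generation stream. The sketch below mirrors the field shapes listed above; the token-level fields (`id`, `text`, `logprob`, `special`) and the finish-reason values are assumptions based on the surrounding type names, and the sample values are purely illustrative:

```typescript
// Shapes assumed from the property list above; not the library's exports.
interface TextGenerationStreamPrefillToken {
  id: number;
  text: string;
  logprob: number;
}

interface TextGenerationStreamToken {
  id: number;
  text: string;
  logprob: number;
  special: boolean;
}

// Assumed finish-reason variants.
type TextGenerationStreamFinishReason = "length" | "eos_token" | "stop_sequence";

interface TextGenerationStreamDetails {
  finish_reason: TextGenerationStreamFinishReason;
  generated_tokens: number;
  seed?: number; // present only when sampling was activated
  prefill: TextGenerationStreamPrefillToken[];
  tokens: TextGenerationStreamToken[];
  // best_of_sequences omitted; present only when best_of > 1
}

// Illustrative final-event details for a short sampled completion.
const details: TextGenerationStreamDetails = {
  finish_reason: "eos_token",
  generated_tokens: 2,
  seed: 42,
  prefill: [{ id: 101, text: "Hello", logprob: -0.1 }],
  tokens: [
    { id: 7, text: " world", logprob: -0.3, special: false },
    { id: 0, text: "</s>", logprob: -0.05, special: true },
  ],
};

// generated_tokens counts the entries in tokens, including special tokens.
console.log(details.generated_tokens === details.tokens.length);
```

Note that `generated_tokens` counts only generated tokens, not the prompt tokens in `prefill`.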