• id: number

Token ID from the model tokenizer.

Defined in inference/src/tasks/nlp/textGenerationStream.ts:23
• Optional logprob: number

Log probability of the token. Optional, since the logprob of the first token cannot be computed.

Defined in inference/src/tasks/nlp/textGenerationStream.ts:30
• text: string

Token text.

Defined in inference/src/tasks/nlp/textGenerationStream.ts:25
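As a rough sketch, the three properties above describe a token shape like the following. This is an illustrative reconstruction only, not the exact declaration from `textGenerationStream.ts`; the `joinTokens` helper is a hypothetical example of consuming such tokens.

```typescript
// Illustrative reconstruction of the documented token shape.
// The real type lives in inference/src/tasks/nlp/textGenerationStream.ts.
interface Token {
  /** Token ID from the model tokenizer */
  id: number;
  /** Log probability; absent for the first token, whose logprob cannot be computed */
  logprob?: number;
  /** Token text */
  text: string;
}

// Hypothetical helper: concatenate streamed tokens into the generated text.
function joinTokens(tokens: Token[]): string {
  return tokens.map((t) => t.text).join("");
}

const tokens: Token[] = [
  { id: 15496, text: "Hello" }, // first token: logprob omitted
  { id: 995, text: " world", logprob: -0.12 },
];

joinTokens(tokens); // "Hello world"
```

Note that `logprob` is a property each consumer should treat as possibly `undefined`, since at least the first streamed token omits it.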