• id: number
Token ID from the model tokenizer
inference/src/tasks/nlp/textGenerationStream.ts:9
• logprob: number
Log probability of the token
inference/src/tasks/nlp/textGenerationStream.ts:13
• special: boolean
Whether the token is a special token. Can be used to ignore such tokens when concatenating
inference/src/tasks/nlp/textGenerationStream.ts:18
• text: string
Token text
inference/src/tasks/nlp/textGenerationStream.ts:11