• finish_reason: TextGenerationStreamFinishReason
Generation finish reason
inference/src/tasks/nlp/textGenerationStream.ts:35
• generated_text: string
Generated text
inference/src/tasks/nlp/textGenerationStream.ts:33
• generated_tokens: number
Number of generated tokens
inference/src/tasks/nlp/textGenerationStream.ts:37
• prefill: TextGenerationStreamPrefillToken[]
Prompt tokens
inference/src/tasks/nlp/textGenerationStream.ts:41
• Optional seed: number
Sampling seed if sampling was activated
inference/src/tasks/nlp/textGenerationStream.ts:39
• tokens: TextGenerationStreamToken[]
Generated tokens
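
Taken together, these properties describe the details object attached to a streamed text-generation response. The sketch below collects them into a single TypeScript interface for reference; the interface name (TextGenerationStreamDetails) and the field shapes of the referenced token types are assumptions inferred from the property list above, not a verbatim copy of the library source.

```ts
// Assumed finish-reason values; the actual union may differ.
type TextGenerationStreamFinishReason = "length" | "eos_token" | "stop_sequence";

// Assumed shape of a prompt (prefill) token.
interface TextGenerationStreamPrefillToken {
  id: number;
  text: string;
  logprob?: number;
}

// Assumed shape of a generated token.
interface TextGenerationStreamToken {
  id: number;
  text: string;
  logprob: number;
  special: boolean;
}

// Reconstruction of the details object documented above (name assumed).
interface TextGenerationStreamDetails {
  /** Generation finish reason */
  finish_reason: TextGenerationStreamFinishReason;
  /** Generated text */
  generated_text: string;
  /** Number of generated tokens */
  generated_tokens: number;
  /** Prompt tokens */
  prefill: TextGenerationStreamPrefillToken[];
  /** Sampling seed if sampling was activated */
  seed?: number;
  /** Generated tokens */
  tokens: TextGenerationStreamToken[];
}
```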