Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking
Mistral-7B with continued pretraining via Quiet-STaR (https://arxiv.org/abs/2403.09629), generating 8 thought tokens before each output token.
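
As a rough illustration of the decoding scheme, the sketch below samples a fixed-length 8-token "thought" before predicting each visible token, then discards the thought so it never appears in the output. This is a conceptual simplification, not this checkpoint's actual interface: the paper additionally wraps thoughts in learned start/end-of-thought embeddings and blends post-thought and no-thought logits with a learned mixing head, and the base-model name below is a placeholder.

```python
# Conceptual sketch of Quiet-STaR-style decoding: before emitting each
# visible token, sample a hidden 8-token rationale ("thought"), condition
# the next-token prediction on it, then drop the thought from the context.
# Model name is a placeholder; this omits the paper's mixing head and
# learned thought-boundary embeddings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "mistralai/Mistral-7B-v0.1"  # placeholder base model
THOUGHT_LEN = 8                      # thought tokens per output token

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)
model.eval()

@torch.no_grad()
def generate_with_thoughts(prompt: str, max_new_tokens: int = 32) -> str:
    ids = tok(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        # 1) Sample a hidden rationale of THOUGHT_LEN tokens after the context.
        thought = model.generate(
            ids, max_new_tokens=THOUGHT_LEN, do_sample=True, top_p=0.9,
            pad_token_id=tok.eos_token_id,
        )
        # 2) Predict the next visible token conditioned on context + thought.
        logits = model(thought).logits[:, -1, :]
        next_id = logits.argmax(dim=-1, keepdim=True)
        # 3) Keep only the visible token; the thought stays "quiet" -- it
        #    shapes the prediction but is never added to the context.
        ids = torch.cat([ids, next_id], dim=-1)
    return tok.decode(ids[0], skip_special_tokens=True)

print(generate_with_thoughts("The capital of France is"))
```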