---
title: README
emoji: 🦀
colorFrom: green
colorTo: yellow
sdk: static
pinned: false
---
https://open-thoughts.ai

Curating the best open reasoning datasets. A [Bespoke Labs](https://bespokelabs.ai/) and [DataComp](https://www.datacomp.ai/) community effort.

Our first goal is to curate a reasoning dataset for training state-of-the-art small reasoning models that surpass [DeepSeek-R1-Distill-32B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) and [DeepSeek-R1-Distill-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) on math and code reasoning benchmarks.

## About us

We are a team of researchers and engineers from [Bespoke Labs](https://bespokelabs.ai/), Stanford, UC Berkeley, University of Washington, UT Austin, Juelich Supercomputing Centre (JSC), LAION, UCLA, UNC Chapel Hill, and Toyota Research Institute, united around building the best datasets (and thus the best models). See our previous work at [datacomp.ai](https://www.datacomp.ai/) and [mlfoundations](https://github.com/mlfoundations).

Open Thoughts is supported by [Bespoke Labs](https://www.bespokelabs.ai/), [Lambda Labs](https://lambdalabs.com/), [NSF IFML](https://www.ifml.institute/), the [Juelich Supercomputing Centre](https://www.fz-juelich.de/en/ias/jsc), the [UT Austin Machine Learning Lab](https://ml.utexas.edu/), and the [Toyota Research Institute](https://www.tri.global/).