Outdated Training Data
Hey, really great work on building this with GPT-OSS and Qwen 3 as the base. That said, the model doesn't seem fully up to date with recent data, and compared to some other open-source LLMs it doesn't feel as strong yet. Since it uses Qwen 3, I'm also a bit worried it might carry over some of the censorship issues we've seen in models like DeepSeek, while at the same time it doesn't have the friendly, conversational vibe that DeepSeek gives. Could you share more about how you merged GPT-OSS and Qwen 3? The project is really promising; it just needs fresher training data to take it to the next level.
Thank you for your interest in our work. Constructing pretraining data is extremely costly, covering multiple stages such as crawling, parsing, cleaning, and post-cleaning validation, so our last large-scale update was in June of last year. We also open-sourced part of the pretraining data we used (OmniCorpus), which contains data up to the end of 2023. Given the high cost and long cycle of refreshing this data, it is difficult for us to bring the model's knowledge cutoff forward in this release. We sincerely apologize for any inconvenience this causes and hope you can understand these constraints.
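To give a sense of why each refresh is so expensive, here is a minimal sketch of the kind of multi-stage pipeline described above (crawl, parse, clean, validate). Everything in it is hypothetical: the function names and filtering heuristics are illustrative only and do not correspond to the actual OmniCorpus tooling, which runs these stages at web scale with far more elaborate quality checks.

```python
# Hypothetical sketch of a pretraining-data refresh pipeline.
# None of these names correspond to real OmniCorpus tooling; the point
# is only to show why every stage adds cost and latency to a refresh.
from __future__ import annotations

import re
from dataclasses import dataclass


@dataclass
class Document:
    url: str
    text: str


def parse(raw_html: str, url: str) -> Document:
    """Stage 2: strip markup down to plain text (real parsers are far more careful)."""
    text = re.sub(r"<[^>]+>", " ", raw_html)
    return Document(url=url, text=re.sub(r"\s+", " ", text).strip())


def clean(doc: Document) -> Document | None:
    """Stage 3: drop documents that fail simple quality heuristics."""
    if len(doc.text) < 200:       # too short to be useful for pretraining
        return None
    if doc.text.count("{") > 50:  # likely leftover code/markup noise
        return None
    return doc


def validate(docs: list[Document]) -> list[Document]:
    """Stage 4: post-cleaning checks, e.g. deduplication by exact text match."""
    seen: set[str] = set()
    unique: list[Document] = []
    for d in docs:
        if d.text not in seen:
            seen.add(d.text)
            unique.append(d)
    return unique


def refresh_corpus(crawled: dict[str, str]) -> list[Document]:
    """End to end: parse -> clean -> validate over freshly crawled pages."""
    parsed = (parse(html, url) for url, html in crawled.items())
    cleaned = [d for d in map(clean, parsed) if d is not None]
    return validate(cleaned)
```

The point of the sketch is that every stage has to re-run end to end for each refresh, which is why the knowledge cutoff advances in large, infrequent steps rather than continuously.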