|
--- |
|
license: apache-2.0 |
|
datasets: |
|
- Azure99/blossom-v6-sft-stage1 |
|
- Azure99/blossom-v6-sft-stage2 |
|
language: |
|
- zh |
|
- en |
|
base_model: |
|
- Qwen/Qwen2.5-32B |
|
--- |
|
# **BLOSSOM-V6-32B** |
|
|
|
[💻Github](https://github.com/Azure99/BlossomLM) • [🚀Blossom Chat Demo](https://blossom-chat.com/) |
|
|
|
### Introduction |
|
|
|
Blossom is a powerful open-source conversational large language model with fully reproducible post-training data, aiming to give everyone an open, capable, and cost-effective general-purpose model that can run locally.
|
|
|
| Chat Model | Resource | Base Model | |
|
| ------------------------------------------------------------ | ------------------------------------------------------------ | ----------- | |
|
| [Blossom-V6-32B](https://huggingface.co/Azure99/Blossom-V6-32B) | [Demo](https://huggingface.co/spaces/Azure99/Blossom-V6-32B-AWQ-Demo) [AWQ](https://huggingface.co/Azure99/Blossom-V6-32B-AWQ) [GGUF](https://huggingface.co/Azure99/Blossom-V6-32B-GGUF) [Ollama](https://ollama.com/azure99/blossom-v6:32b) | Qwen2.5-32B | |
|
| [Blossom-V6-14B](https://huggingface.co/Azure99/Blossom-V6-14B) | [Demo](https://huggingface.co/spaces/Azure99/Blossom-V6-14B-Demo) [AWQ](https://huggingface.co/Azure99/Blossom-V6-14B-AWQ) [GGUF](https://huggingface.co/Azure99/Blossom-V6-14B-GGUF) [Ollama](https://ollama.com/azure99/blossom-v6:14b) | Qwen2.5-14B | |
|
| [Blossom-V6-7B](https://huggingface.co/Azure99/Blossom-V6-7B) | [Demo](https://huggingface.co/spaces/Azure99/Blossom-V6-7B-Demo) [AWQ](https://huggingface.co/Azure99/Blossom-V6-7B-AWQ) [GGUF](https://huggingface.co/Azure99/Blossom-V6-7B-GGUF) [Ollama](https://ollama.com/azure99/blossom-v6:7b) | Qwen2.5-7B | |
|
|
|
You can find the training data here: [Blossom-V6-SFT-Stage1](https://huggingface.co/datasets/Azure99/blossom-v6-sft-stage1) (1 epoch) and [Blossom-V6-SFT-Stage2](https://huggingface.co/datasets/Azure99/blossom-v6-sft-stage2) (3 epochs).
|
|
|
### Data Synthesis Workflow Overview
|
|
|
The workflow primarily employs three cost-effective models, Yi-Lightning, Deepseek-V2.5, and Doubao-Pro-32K (denoted A, B, and C), to regenerate responses, applying tailored synthesis strategies for different scenarios.
|
|
|
For example: |
|
|
|
- In objective scenarios like mathematics (where answers are unique), Model A first generates responses as a "teacher." If reference answers exist in the source data, Model B verifies the correctness of A's responses against them. If no reference answers exist, Model C generates a second response, and Model B checks consistency between A and C's outputs. Inconsistent responses are filtered out. |
|
- For subjective scenarios, the three models cross-evaluate each other. For instance, Models A and B generate responses to a question, and Model C evaluates which is better. The superior response may be retained as training data or used for preference data construction. To mitigate model bias, roles (respondent/evaluator) are randomly assigned to A, B, and C in each instance, as sketched below.
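The following is a minimal sketch of the routing behind these two strategies, not the actual BlossomData pipeline code. The helper functions `generate`, `verify_against_reference`, `check_consistency`, and `judge` are hypothetical stand-ins for API calls to the teacher models described above.

```python
import random

# Hypothetical stand-ins for calls to the teacher models (A/B/C);
# not part of any real API.
def generate(model, question): ...
def verify_against_reference(model, question, answer, reference): ...
def check_consistency(model, question, answer_a, answer_b): ...
def judge(model, question, answer_a, answer_b): ...


def synthesize_objective(question, reference=None):
    """Objective scenarios (e.g. math): A answers, B verifies, C provides a backup answer."""
    answer = generate("A", question)
    if reference is not None:
        # A reference answer exists: B checks A's answer against it.
        keep = verify_against_reference("B", question, answer, reference)
    else:
        # No reference: C produces a second answer and B checks consistency.
        second = generate("C", question)
        keep = check_consistency("B", question, answer, second)
    return {"question": question, "answer": answer} if keep else None


def synthesize_subjective(question):
    """Subjective scenarios: two models answer, the third judges which is better."""
    # Roles are assigned randomly to mitigate model bias.
    respondent_1, respondent_2, evaluator = random.sample(["A", "B", "C"], 3)
    answer_1 = generate(respondent_1, question)
    answer_2 = generate(respondent_2, question)
    better = judge(evaluator, question, answer_1, answer_2)
    chosen, rejected = (answer_1, answer_2) if better == 1 else (answer_2, answer_1)
    # The winner can be kept as SFT data; the (chosen, rejected) pair can form preference data.
    return {"question": question, "chosen": chosen, "rejected": rejected}
```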
|
|
|
Additional rule-based filtering is applied, such as: |
|
|
|
- N-gram filtering to remove data with excessive repetition (a minimal example follows this list).
|
- Discarding questions containing toxic content that triggers teacher model refusals. |
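Below is a minimal sketch of the first check, assuming simple whitespace tokenization; the n-gram size and threshold are illustrative placeholders, not the values used for Blossom.

```python
from collections import Counter


def ngram_repetition_ratio(text: str, n: int = 4) -> float:
    """Fraction of n-grams (over whitespace tokens) that occur more than once."""
    tokens = text.split()
    if len(tokens) < n:
        return 0.0
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    counts = Counter(ngrams)
    repeated = sum(count for count in counts.values() if count > 1)
    return repeated / len(ngrams)


def keep_response(response: str, max_repetition: float = 0.2) -> bool:
    """Discard responses whose repeated n-gram ratio exceeds the threshold."""
    return ngram_repetition_ratio(response) <= max_repetition
```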
|
|
|
Further technical details will be released in the future. The data is synthesized by the [🌸BlossomData](https://github.com/Azure99/BlossomData) framework. |
|
|
|
### Inference |
|
|
|
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Azure99/Blossom-V6-32B"

# Load the model in its native precision and spread it across available devices.
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL)

messages = [
    {"role": "user", "content": "北京有什么好吃的"}  # "What's good to eat in Beijing?"
]

# Apply the chat template and append the assistant generation prompt.
formatted_input = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
input_ids = tokenizer([formatted_input], return_tensors="pt").to(model.device).input_ids
generated_ids = model.generate(input_ids, max_new_tokens=512)

# Strip the prompt tokens so only the newly generated tokens remain.
generated_ids = [
    output_ids[len(prompt_ids):] for prompt_ids, output_ids in zip(input_ids, generated_ids)
]

print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```