rulixiang committed d288ead (parent: 4b5295b)

Update README.md

Files changed (3):
1. .gitattributes +1 -0
2. Ling-lite-1.5-2507-benchmarks.png +3 -0
3. README.md +92 -3
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Ling-lite-1.5-2507-benchmarks.png filter=lfs diff=lfs merge=lfs -text
Ling-lite-1.5-2507-benchmarks.png ADDED

Git LFS Details

  • SHA256: bc415269c4fc51e6aa3f6ea556ca80fbd7b0358bba241108762821fcb03d7e16
  • Pointer size: 131 Bytes
  • Size of remote file: 102 kB
README.md CHANGED
@@ -1,3 +1,92 @@
- ---
- license: mit
- ---
+ ---
+ license: mit
+ pipeline_tag: text-generation
+ library_name: transformers
+ ---
+
+ # Ling-lite-1.5-2507
+
+ <p align="center"><img src="https://huggingface.co/inclusionAI/Ling-lite/resolve/main/ant-bailing.png" width="100"/></p>
+
+ <p align="center">🤗 <a href="https://huggingface.co/inclusionAI/Ling-lite-1.5-2507">Hugging Face</a> | 🤖 <a href="https://www.modelscope.cn/models/inclusionAI/Ling-lite-1.5-2507">ModelScope</a></p>
+
+ ## Model Overview
+ We are excited to introduce **Ling-lite-1.5-2507**, the latest version of our highly capable Ling-lite-1.5 model.
+
+ Ling-lite-1.5-2507 has 16.8 billion parameters, of which 2.75 billion are activated, and demonstrates significant improvements over previous versions across professional knowledge assessments, logical reasoning evaluations, and coding benchmarks.
+
+ <p align="center">
+ <img width="80%" src="Ling-lite-1.5-2507-benchmarks.png">
+ </p>
+
+ ## Key Features
+ As the flagship model of our Lite series, Ling-lite-1.5-2507 features two major enhancements:
+
+ * **Smarter and More Efficient Reasoning**
+ For straightforward inquiries, the model generates concise, direct responses. When confronting complex challenges, it systematically decomposes the problem, applies a reflective mechanism, and produces detailed reasoning traces, reaching accurate solutions through an efficient, integrated reasoning process.
+
+ * **Enhanced Human-Aligned Subjectivity**
+ The model delivers well-structured, coherent responses with notable cognitive depth on subjective and open-ended tasks, aligning closely with human preferences for response organization and conceptual richness.
+
+ ## Quickstart
+ ### 🤗 Hugging Face Transformers
+
+ Here is a code snippet showing how to use the chat model with `transformers`:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "inclusionAI/Ling-lite-1.5-2507"
+
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     torch_dtype="auto",
+     device_map="auto"
+ )
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+
+ prompt = "Give me a short introduction to large language models."
+ messages = [
+     {"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
+     {"role": "user", "content": prompt}
+ ]
+ text = tokenizer.apply_chat_template(
+     messages,
+     tokenize=False,
+     add_generation_prompt=True
+ )
+ model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
+
+ generated_ids = model.generate(
+     **model_inputs,
+     max_new_tokens=512
+ )
+ generated_ids = [
+     output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+ ]
+
+ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+ ```
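In the snippet above, `model.generate` returns each prompt's tokens followed by the newly generated ones, so the list comprehension slices every output row by its prompt length to keep only the new tokens. A minimal sketch of that step, using made-up token ids rather than real vocabulary entries:

```python
# Toy illustration of the prompt-stripping step: generate-style output
# rows begin with the prompt ids, so slicing each row by the prompt
# length keeps only the newly generated tokens. Ids here are made up.
input_ids = [[101, 7592, 102]]                    # prompt ids (batch of 1)
generated = [[101, 7592, 102, 2023, 2003, 102]]   # prompt + new tokens

new_tokens = [out[len(inp):] for inp, out in zip(input_ids, generated)]
print(new_tokens)  # [[2023, 2003, 102]]
```

Decoding `new_tokens` with `skip_special_tokens=True` then yields only the model's reply, without echoing the prompt.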
+
+ ## Deployment
+
+ Please refer to the [GitHub repository](https://github.com/inclusionAI/Ling/blob/master/README.md) for deployment instructions.
+
+ ## License
+ This code repository is licensed under [the MIT License](https://huggingface.co/inclusionAI/Ling-lite/blob/main/LICENCE).
+
+ ## Citation
+
+ If you find our work helpful, please consider citing it:
+
+ ```bibtex
+ @article{ling,
+   title   = {Every FLOP Counts: Scaling a 300B Mixture-of-Experts LING LLM without Premium GPUs},
+   author  = {Ling Team},
+   journal = {arXiv preprint arXiv:2503.05139},
+   year    = {2025}
+ }
+ ```