Toby0830 committed
Commit d60e653 · verified · 1 parent: 03a9bc8

Upload InternLM2ForCausalLM
README.md ADDED
@@ -0,0 +1,199 @@
+ ---
+ library_name: transformers
+ tags: []
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
config.json ADDED
@@ -0,0 +1,61 @@
+ {
+   "_name_or_path": "internlm/internlm2_5-7b-chat-1m",
+   "architectures": [
+     "InternLM2ForCausalLM"
+   ],
+   "attn_implementation": "eager",
+   "auto_map": {
+     "AutoConfig": "configuration_internlm2.InternLM2Config",
+     "AutoModel": "internlm/internlm2_5-7b-chat-1m--modeling_internlm2.InternLM2ForCausalLM",
+     "AutoModelForCausalLM": "modeling_internlm2.InternLM2ForCausalLM"
+   },
+   "bias": false,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "hidden_act": "silu",
+   "hidden_size": 4096,
+   "initializer_range": 0.02,
+   "intermediate_size": 14336,
+   "max_position_embeddings": 262144,
+   "model_type": "internlm2",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 32,
+   "num_key_value_heads": 8,
+   "pad_token_id": 2,
+   "pretraining_tp": 1,
+   "quantization_config": {
+     "batch_size": 1,
+     "bits": 8,
+     "block_name_to_quantize": null,
+     "cache_block_outputs": true,
+     "damp_percent": 0.1,
+     "dataset": "c4",
+     "desc_act": false,
+     "exllama_config": {
+       "version": 1
+     },
+     "group_size": 128,
+     "max_input_length": null,
+     "model_seqlen": null,
+     "module_name_preceding_first_block": null,
+     "modules_in_block_to_quantize": null,
+     "pad_token_id": null,
+     "quant_method": "gptq",
+     "sym": true,
+     "tokenizer": null,
+     "true_sequential": true,
+     "use_cuda_fp16": false,
+     "use_exllama": true
+   },
+   "rms_norm_eps": 1e-05,
+   "rope_scaling": {
+     "factor": 2.5,
+     "type": "dynamic"
+   },
+   "rope_theta": 50000000,
+   "tie_word_embeddings": false,
+   "torch_dtype": "float16",
+   "transformers_version": "4.42.4",
+   "use_cache": true,
+   "vocab_size": 92544
+ }
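
The architecture numbers in the config above can be sanity-checked programmatically. A minimal sketch, with the field values copied from the config.json in this commit; the per-head and grouping arithmetic is the standard GQA bookkeeping, not something the config file itself computes:

```python
# Selected fields copied from the config.json above.
config = {
    "hidden_size": 4096,
    "num_attention_heads": 32,
    "num_key_value_heads": 8,
    "num_hidden_layers": 32,
    "max_position_embeddings": 262144,
}

# Per-head dimension: the hidden size split evenly across attention heads.
head_dim = config["hidden_size"] // config["num_attention_heads"]

# Grouped Query Attention: each key/value head is shared by this many query heads.
gqa_group_size = config["num_attention_heads"] // config["num_key_value_heads"]

print(head_dim, gqa_group_size)  # 128 4
```

With 32 query heads over 8 key/value heads, the `wqkv` projection packs 4 query heads plus one key and one value head per group, which is why the checkpoint stores a single fused `attention.wqkv` tensor per layer.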
configuration_internlm2.py ADDED
@@ -0,0 +1,180 @@
+ # coding=utf-8
+ # Copyright (c) The InternLM team and The HuggingFace Inc. team. All rights reserved.
+ #
+ # This code is based on transformers/src/transformers/models/llama/configuration_llama.py
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """InternLM2 model configuration"""
+
+ from transformers.configuration_utils import PretrainedConfig
+ from transformers.utils import logging
+
+ logger = logging.get_logger(__name__)
+
+ INTERNLM2_PRETRAINED_CONFIG_ARCHIVE_MAP = {}
+
+
+ # Modified from transformers.models.llama.configuration_llama.LlamaConfig
+ class InternLM2Config(PretrainedConfig):
+     r"""
+     This is the configuration class to store the configuration of a [`InternLM2Model`]. It is used to instantiate
+     an InternLM2 model according to the specified arguments, defining the model architecture. Instantiating a
+     configuration with the defaults will yield a similar configuration to that of the InternLM2-7B.
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+     Args:
+         vocab_size (`int`, *optional*, defaults to 103168):
+             Vocabulary size of the InternLM2 model. Defines the number of different tokens that can be represented
+             by the `inputs_ids` passed when calling [`InternLM2Model`].
+         hidden_size (`int`, *optional*, defaults to 4096):
+             Dimension of the hidden representations.
+         intermediate_size (`int`, *optional*, defaults to 11008):
+             Dimension of the MLP representations.
+         num_hidden_layers (`int`, *optional*, defaults to 32):
+             Number of hidden layers in the Transformer decoder.
+         num_attention_heads (`int`, *optional*, defaults to 32):
+             Number of attention heads for each attention layer in the Transformer decoder.
+         num_key_value_heads (`int`, *optional*):
+             This is the number of key_value heads that should be used to implement Grouped Query Attention. If
+             `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA); if
+             `num_key_value_heads=1`, the model will use Multi Query Attention (MQA); otherwise GQA is used. When
+             converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be
+             constructed by meanpooling all the original heads within that group. For more details, check out [this
+             paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
+             `num_attention_heads`.
+         hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
+             The non-linear activation function (function or string) in the decoder.
+         max_position_embeddings (`int`, *optional*, defaults to 2048):
+             The maximum sequence length that this model might ever be used with. InternLM2 supports up to 32768 tokens.
+         initializer_range (`float`, *optional*, defaults to 0.02):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         rms_norm_eps (`float`, *optional*, defaults to 1e-06):
+             The epsilon used by the rms normalization layers.
+         use_cache (`bool`, *optional*, defaults to `True`):
+             Whether or not the model should return the last key/values attentions (not used by all models). Only
+             relevant if `config.is_decoder=True`.
+         pad_token_id (`int`, *optional*):
+             Padding token id.
+         bos_token_id (`int`, *optional*, defaults to 1):
+             Beginning of stream token id.
+         eos_token_id (`int`, *optional*, defaults to 2):
+             End of stream token id.
+         pretraining_tp (`int`, *optional*, defaults to 1):
+             Experimental feature. Tensor parallelism rank used during pretraining. Please refer to [this
+             document](https://huggingface.co/docs/transformers/main/perf_train_gpu_many#tensor-parallelism)
+             to understand more about it. This value is necessary to ensure exact reproducibility
+             of the pretraining results. Please refer to [this
+             issue](https://github.com/pytorch/pytorch/issues/76232).
+         tie_word_embeddings (`bool`, *optional*, defaults to `False`):
+             Whether to tie weight embeddings.
+         rope_theta (`float`, *optional*, defaults to 10000.0):
+             The base period of the RoPE embeddings.
+         rope_scaling (`Dict`, *optional*):
+             Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
+             strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format
+             is `{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
+             `max_position_embeddings` to the expected new maximum. See the following thread for more information on
+             how these scaling strategies behave:
+             https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/. This is
+             an experimental feature, subject to breaking API changes in future versions.
+     """
+
+     _auto_class = "AutoConfig"
+     model_type = "internlm2"
+     keys_to_ignore_at_inference = ["past_key_values"]
+
+     def __init__(  # pylint: disable=W0102
+         self,
+         vocab_size=103168,
+         hidden_size=4096,
+         intermediate_size=11008,
+         num_hidden_layers=32,
+         num_attention_heads=32,
+         num_key_value_heads=None,
+         hidden_act="silu",
+         max_position_embeddings=2048,
+         initializer_range=0.02,
+         rms_norm_eps=1e-6,
+         use_cache=True,
+         pad_token_id=0,
+         bos_token_id=1,
+         eos_token_id=2,
+         pretraining_tp=1,
+         tie_word_embeddings=False,
+         bias=True,
+         rope_theta=10000,
+         rope_scaling=None,
+         attn_implementation=None,
+         **kwargs,
+     ):
+         self.vocab_size = vocab_size
+         self.max_position_embeddings = max_position_embeddings
+         self.hidden_size = hidden_size
+         self.intermediate_size = intermediate_size
+         self.num_hidden_layers = num_hidden_layers
+         self.num_attention_heads = num_attention_heads
+         self.bias = bias
+
+         if num_key_value_heads is None:
+             num_key_value_heads = num_attention_heads
+         self.num_key_value_heads = num_key_value_heads
+
+         self.hidden_act = hidden_act
+         self.initializer_range = initializer_range
+         self.rms_norm_eps = rms_norm_eps
+         self.pretraining_tp = pretraining_tp
+         self.use_cache = use_cache
+         self.rope_theta = rope_theta
+         self.rope_scaling = rope_scaling
+         self._rope_scaling_validation()
+         self.attn_implementation = attn_implementation
+         if self.attn_implementation is None:
+             self.attn_implementation = "eager"
+
+         super().__init__(
+             pad_token_id=pad_token_id,
+             bos_token_id=bos_token_id,
+             eos_token_id=eos_token_id,
+             tie_word_embeddings=tie_word_embeddings,
+             **kwargs,
+         )
+
+     def _rope_scaling_validation(self):
+         """
+         Validate the `rope_scaling` configuration.
+         """
+         if self.rope_scaling is None:
+             return
+
+         if not isinstance(self.rope_scaling, dict) or len(self.rope_scaling) != 2:
+             raise ValueError(
+                 "`rope_scaling` must be a dictionary with two fields, `type` and `factor`, "
+                 f"got {self.rope_scaling}"
+             )
+         rope_scaling_type = self.rope_scaling.get("type", None)
+         rope_scaling_factor = self.rope_scaling.get("factor", None)
+         if rope_scaling_type is None or rope_scaling_type not in ["linear", "dynamic"]:
+             raise ValueError(
+                 f"`rope_scaling`'s type field must be one of ['linear', 'dynamic'], got {rope_scaling_type}"
+             )
+         if (
+             rope_scaling_factor is None
+             or not isinstance(rope_scaling_factor, (float, int))
+             or rope_scaling_factor < 1.0
+         ):
+             raise ValueError(
+                 f"`rope_scaling`'s factor field must be a number >= 1, got {rope_scaling_factor} "
+                 f"of type {type(rope_scaling_factor)}"
+             )
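
The `_rope_scaling_validation` checks above are easy to exercise in isolation. A minimal sketch that re-implements the same logic as a standalone function (the name `validate_rope_scaling` is hypothetical, not part of the uploaded file), then checks it against the `rope_scaling` value from this commit's config.json:

```python
def validate_rope_scaling(rope_scaling):
    # Standalone re-implementation (for illustration) of
    # InternLM2Config._rope_scaling_validation above.
    if rope_scaling is None:
        return
    if not isinstance(rope_scaling, dict) or len(rope_scaling) != 2:
        raise ValueError(
            f"`rope_scaling` must be a dictionary with two fields, `type` and `factor`, got {rope_scaling}"
        )
    rope_type = rope_scaling.get("type")
    factor = rope_scaling.get("factor")
    if rope_type not in ("linear", "dynamic"):
        raise ValueError(
            f"`rope_scaling`'s type field must be one of ['linear', 'dynamic'], got {rope_type}"
        )
    if factor is None or not isinstance(factor, (float, int)) or factor < 1.0:
        raise ValueError(f"`rope_scaling`'s factor field must be a number >= 1, got {factor}")


# The config.json in this commit sets {"type": "dynamic", "factor": 2.5},
# which passes validation without raising.
validate_rope_scaling({"type": "dynamic", "factor": 2.5})
```

Note that the validation requires exactly two keys, so extra fields (e.g. an `original_max_position_embeddings` entry) would be rejected by this config class.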
generation_config.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "bos_token_id": 1,
+   "eos_token_id": [
+     2,
+     92542
+   ],
+   "pad_token_id": 2,
+   "transformers_version": "4.42.4"
+ }
model-00001-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:20c593ed12baa57fabb365ecedf174b00798d4d4e1be3ff141deedfb1736f818
+ size 4941780096
model-00002-of-00002.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:45bccf07c476ae05662ed75cd834ca282f052edc56d7eb39ddcb7434ea2ec7ca
+ size 3721904328
model.safetensors.index.json ADDED
@@ -0,0 +1,714 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ {
2
+ "metadata": {
3
+ "total_size": 8663605248
4
+ },
5
+ "weight_map": {
6
+ "model.layers.0.attention.wo.g_idx": "model-00001-of-00002.safetensors",
7
+ "model.layers.0.attention.wo.qweight": "model-00001-of-00002.safetensors",
8
+ "model.layers.0.attention.wo.qzeros": "model-00001-of-00002.safetensors",
9
+ "model.layers.0.attention.wo.scales": "model-00001-of-00002.safetensors",
10
+ "model.layers.0.attention.wqkv.g_idx": "model-00001-of-00002.safetensors",
11
+ "model.layers.0.attention.wqkv.qweight": "model-00001-of-00002.safetensors",
12
+ "model.layers.0.attention.wqkv.qzeros": "model-00001-of-00002.safetensors",
13
+ "model.layers.0.attention.wqkv.scales": "model-00001-of-00002.safetensors",
14
+ "model.layers.0.attention_norm.weight": "model-00001-of-00002.safetensors",
15
+ "model.layers.0.feed_forward.w1.g_idx": "model-00001-of-00002.safetensors",
16
+ "model.layers.0.feed_forward.w1.qweight": "model-00001-of-00002.safetensors",
17
+ "model.layers.0.feed_forward.w1.qzeros": "model-00001-of-00002.safetensors",
18
+ "model.layers.0.feed_forward.w1.scales": "model-00001-of-00002.safetensors",
19
+ "model.layers.0.feed_forward.w2.g_idx": "model-00001-of-00002.safetensors",
20
+ "model.layers.0.feed_forward.w2.qweight": "model-00001-of-00002.safetensors",
21
+ "model.layers.0.feed_forward.w2.qzeros": "model-00001-of-00002.safetensors",
22
+ "model.layers.0.feed_forward.w2.scales": "model-00001-of-00002.safetensors",
23
+ "model.layers.0.feed_forward.w3.g_idx": "model-00001-of-00002.safetensors",
24
+ "model.layers.0.feed_forward.w3.qweight": "model-00001-of-00002.safetensors",
25
+ "model.layers.0.feed_forward.w3.qzeros": "model-00001-of-00002.safetensors",
26
+ "model.layers.0.feed_forward.w3.scales": "model-00001-of-00002.safetensors",
27
+ "model.layers.0.ffn_norm.weight": "model-00001-of-00002.safetensors",
28
+ "model.layers.1.attention.wo.g_idx": "model-00001-of-00002.safetensors",
29
+ "model.layers.1.attention.wo.qweight": "model-00001-of-00002.safetensors",
30
+ "model.layers.1.attention.wo.qzeros": "model-00001-of-00002.safetensors",
31
+ "model.layers.1.attention.wo.scales": "model-00001-of-00002.safetensors",
32
+ "model.layers.1.attention.wqkv.g_idx": "model-00001-of-00002.safetensors",
33
+ "model.layers.1.attention.wqkv.qweight": "model-00001-of-00002.safetensors",
34
+ "model.layers.1.attention.wqkv.qzeros": "model-00001-of-00002.safetensors",
35
+ "model.layers.1.attention.wqkv.scales": "model-00001-of-00002.safetensors",
36
+ "model.layers.1.attention_norm.weight": "model-00001-of-00002.safetensors",
37
+ "model.layers.1.feed_forward.w1.g_idx": "model-00001-of-00002.safetensors",
38
+ "model.layers.1.feed_forward.w1.qweight": "model-00001-of-00002.safetensors",
39
+ "model.layers.1.feed_forward.w1.qzeros": "model-00001-of-00002.safetensors",
40
+ "model.layers.1.feed_forward.w1.scales": "model-00001-of-00002.safetensors",
41
+ "model.layers.1.feed_forward.w2.g_idx": "model-00001-of-00002.safetensors",
42
+ "model.layers.1.feed_forward.w2.qweight": "model-00001-of-00002.safetensors",
43
+ "model.layers.1.feed_forward.w2.qzeros": "model-00001-of-00002.safetensors",
44
+ "model.layers.1.feed_forward.w2.scales": "model-00001-of-00002.safetensors",
45
+ "model.layers.1.feed_forward.w3.g_idx": "model-00001-of-00002.safetensors",
46
+ "model.layers.1.feed_forward.w3.qweight": "model-00001-of-00002.safetensors",
47
+ "model.layers.1.feed_forward.w3.qzeros": "model-00001-of-00002.safetensors",
48
+ "model.layers.1.feed_forward.w3.scales": "model-00001-of-00002.safetensors",
49
+ "model.layers.1.ffn_norm.weight": "model-00001-of-00002.safetensors",
50
+ "model.layers.10.attention.wo.g_idx": "model-00001-of-00002.safetensors",
51
+ "model.layers.10.attention.wo.qweight": "model-00001-of-00002.safetensors",
52
+ "model.layers.10.attention.wo.qzeros": "model-00001-of-00002.safetensors",
53
+ "model.layers.10.attention.wo.scales": "model-00001-of-00002.safetensors",
54
+ "model.layers.10.attention.wqkv.g_idx": "model-00001-of-00002.safetensors",
55
+ "model.layers.10.attention.wqkv.qweight": "model-00001-of-00002.safetensors",
56
+ "model.layers.10.attention.wqkv.qzeros": "model-00001-of-00002.safetensors",
57
+ "model.layers.10.attention.wqkv.scales": "model-00001-of-00002.safetensors",
58
+ "model.layers.10.attention_norm.weight": "model-00001-of-00002.safetensors",
59
+ "model.layers.10.feed_forward.w1.g_idx": "model-00001-of-00002.safetensors",
60
+ "model.layers.10.feed_forward.w1.qweight": "model-00001-of-00002.safetensors",
61
+ "model.layers.10.feed_forward.w1.qzeros": "model-00001-of-00002.safetensors",
62
+ "model.layers.10.feed_forward.w1.scales": "model-00001-of-00002.safetensors",
63
+ "model.layers.10.feed_forward.w2.g_idx": "model-00001-of-00002.safetensors",
64
+ "model.layers.10.feed_forward.w2.qweight": "model-00001-of-00002.safetensors",
65
+ "model.layers.10.feed_forward.w2.qzeros": "model-00001-of-00002.safetensors",
66
+ "model.layers.10.feed_forward.w2.scales": "model-00001-of-00002.safetensors",
67
+ "model.layers.10.feed_forward.w3.g_idx": "model-00001-of-00002.safetensors",
68
+ "model.layers.10.feed_forward.w3.qweight": "model-00001-of-00002.safetensors",
69
+ "model.layers.10.feed_forward.w3.qzeros": "model-00001-of-00002.safetensors",
70
+ "model.layers.10.feed_forward.w3.scales": "model-00001-of-00002.safetensors",
71
+ "model.layers.10.ffn_norm.weight": "model-00001-of-00002.safetensors",
72
+ "model.layers.11.attention.wo.g_idx": "model-00001-of-00002.safetensors",
73
+ "model.layers.11.attention.wo.qweight": "model-00001-of-00002.safetensors",
74
+ "model.layers.11.attention.wo.qzeros": "model-00001-of-00002.safetensors",
75
+ "model.layers.11.attention.wo.scales": "model-00001-of-00002.safetensors",
76
+ "model.layers.11.attention.wqkv.g_idx": "model-00001-of-00002.safetensors",
77
+ "model.layers.11.attention.wqkv.qweight": "model-00001-of-00002.safetensors",
78
+ "model.layers.11.attention.wqkv.qzeros": "model-00001-of-00002.safetensors",
79
+ "model.layers.11.attention.wqkv.scales": "model-00001-of-00002.safetensors",
80
+ "model.layers.11.attention_norm.weight": "model-00001-of-00002.safetensors",
81
+ "model.layers.11.feed_forward.w1.g_idx": "model-00001-of-00002.safetensors",
82
+ "model.layers.11.feed_forward.w1.qweight": "model-00001-of-00002.safetensors",
83
+ "model.layers.11.feed_forward.w1.qzeros": "model-00001-of-00002.safetensors",
84
+ "model.layers.11.feed_forward.w1.scales": "model-00001-of-00002.safetensors",
85
+ "model.layers.11.feed_forward.w2.g_idx": "model-00001-of-00002.safetensors",
86
+ "model.layers.11.feed_forward.w2.qweight": "model-00001-of-00002.safetensors",
87
+ "model.layers.11.feed_forward.w2.qzeros": "model-00001-of-00002.safetensors",
88
+ "model.layers.11.feed_forward.w2.scales": "model-00001-of-00002.safetensors",
89
+ "model.layers.11.feed_forward.w3.g_idx": "model-00001-of-00002.safetensors",
90
+ "model.layers.11.feed_forward.w3.qweight": "model-00001-of-00002.safetensors",
91
+ "model.layers.11.feed_forward.w3.qzeros": "model-00001-of-00002.safetensors",
92
+ "model.layers.11.feed_forward.w3.scales": "model-00001-of-00002.safetensors",
93
+ "model.layers.11.ffn_norm.weight": "model-00001-of-00002.safetensors",
94
+ "model.layers.12.attention.wo.g_idx": "model-00001-of-00002.safetensors",
95
+ "model.layers.12.attention.wo.qweight": "model-00001-of-00002.safetensors",
96
+ "model.layers.12.attention.wo.qzeros": "model-00001-of-00002.safetensors",
97
+ "model.layers.12.attention.wo.scales": "model-00001-of-00002.safetensors",
98
+ "model.layers.12.attention.wqkv.g_idx": "model-00001-of-00002.safetensors",
99
+ "model.layers.12.attention.wqkv.qweight": "model-00001-of-00002.safetensors",
100
+ "model.layers.12.attention.wqkv.qzeros": "model-00001-of-00002.safetensors",
101
+ "model.layers.12.attention.wqkv.scales": "model-00001-of-00002.safetensors",
102
+ "model.layers.12.attention_norm.weight": "model-00001-of-00002.safetensors",
103
+ "model.layers.12.feed_forward.w1.g_idx": "model-00001-of-00002.safetensors",
104
+ "model.layers.12.feed_forward.w1.qweight": "model-00001-of-00002.safetensors",
105
+ "model.layers.12.feed_forward.w1.qzeros": "model-00001-of-00002.safetensors",
106
+ "model.layers.12.feed_forward.w1.scales": "model-00001-of-00002.safetensors",
107
+ "model.layers.12.feed_forward.w2.g_idx": "model-00001-of-00002.safetensors",
108
+ "model.layers.12.feed_forward.w2.qweight": "model-00001-of-00002.safetensors",
109
+ "model.layers.12.feed_forward.w2.qzeros": "model-00001-of-00002.safetensors",
110
+ "model.layers.12.feed_forward.w2.scales": "model-00001-of-00002.safetensors",
111
+ "model.layers.12.feed_forward.w3.g_idx": "model-00001-of-00002.safetensors",
112
+ "model.layers.12.feed_forward.w3.qweight": "model-00001-of-00002.safetensors",
113
+ "model.layers.12.feed_forward.w3.qzeros": "model-00001-of-00002.safetensors",
114
+ "model.layers.12.feed_forward.w3.scales": "model-00001-of-00002.safetensors",
115
+ "model.layers.12.ffn_norm.weight": "model-00001-of-00002.safetensors",
116
+ "model.layers.13.attention.wo.g_idx": "model-00001-of-00002.safetensors",
117
+ "model.layers.13.attention.wo.qweight": "model-00001-of-00002.safetensors",
118
+ "model.layers.13.attention.wo.qzeros": "model-00001-of-00002.safetensors",
119
+ "model.layers.13.attention.wo.scales": "model-00001-of-00002.safetensors",
120
+ "model.layers.13.attention.wqkv.g_idx": "model-00001-of-00002.safetensors",
121
+ "model.layers.13.attention.wqkv.qweight": "model-00001-of-00002.safetensors",
122
+ "model.layers.13.attention.wqkv.qzeros": "model-00001-of-00002.safetensors",
123
+ "model.layers.13.attention.wqkv.scales": "model-00001-of-00002.safetensors",
124
+ "model.layers.13.attention_norm.weight": "model-00001-of-00002.safetensors",
125
+ "model.layers.13.feed_forward.w1.g_idx": "model-00001-of-00002.safetensors",
126
+ "model.layers.13.feed_forward.w1.qweight": "model-00001-of-00002.safetensors",
127
+ "model.layers.13.feed_forward.w1.qzeros": "model-00001-of-00002.safetensors",
128
+ "model.layers.13.feed_forward.w1.scales": "model-00001-of-00002.safetensors",
129
+ "model.layers.13.feed_forward.w2.g_idx": "model-00001-of-00002.safetensors",
130
+ "model.layers.13.feed_forward.w2.qweight": "model-00001-of-00002.safetensors",
131
+ "model.layers.13.feed_forward.w2.qzeros": "model-00001-of-00002.safetensors",
132
+ "model.layers.13.feed_forward.w2.scales": "model-00001-of-00002.safetensors",
133
+ "model.layers.13.feed_forward.w3.g_idx": "model-00001-of-00002.safetensors",
134
+ "model.layers.13.feed_forward.w3.qweight": "model-00001-of-00002.safetensors",
135
+ "model.layers.13.feed_forward.w3.qzeros": "model-00001-of-00002.safetensors",
136
+ "model.layers.13.feed_forward.w3.scales": "model-00001-of-00002.safetensors",
137
+ "model.layers.13.ffn_norm.weight": "model-00001-of-00002.safetensors",
138
+ "model.layers.14.attention.wo.g_idx": "model-00001-of-00002.safetensors",
139
+ "model.layers.14.attention.wo.qweight": "model-00001-of-00002.safetensors",
140
+ "model.layers.14.attention.wo.qzeros": "model-00001-of-00002.safetensors",
141
+ "model.layers.14.attention.wo.scales": "model-00001-of-00002.safetensors",
142
+ "model.layers.14.attention.wqkv.g_idx": "model-00001-of-00002.safetensors",
143
+ "model.layers.14.attention.wqkv.qweight": "model-00001-of-00002.safetensors",
144
+ "model.layers.14.attention.wqkv.qzeros": "model-00001-of-00002.safetensors",
145
+ "model.layers.14.attention.wqkv.scales": "model-00001-of-00002.safetensors",
146
+ "model.layers.14.attention_norm.weight": "model-00001-of-00002.safetensors",
147
+ "model.layers.14.feed_forward.w1.g_idx": "model-00001-of-00002.safetensors",
148
+ "model.layers.14.feed_forward.w1.qweight": "model-00001-of-00002.safetensors",
149
+ "model.layers.14.feed_forward.w1.qzeros": "model-00001-of-00002.safetensors",
150
+ "model.layers.14.feed_forward.w1.scales": "model-00001-of-00002.safetensors",
151
+ "model.layers.14.feed_forward.w2.g_idx": "model-00001-of-00002.safetensors",
152
+ "model.layers.14.feed_forward.w2.qweight": "model-00001-of-00002.safetensors",
153
+ "model.layers.14.feed_forward.w2.qzeros": "model-00001-of-00002.safetensors",
154
+ "model.layers.14.feed_forward.w2.scales": "model-00001-of-00002.safetensors",
155
+ "model.layers.14.feed_forward.w3.g_idx": "model-00001-of-00002.safetensors",
156
+ "model.layers.14.feed_forward.w3.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.14.feed_forward.w3.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.14.feed_forward.w3.scales": "model-00001-of-00002.safetensors",
+ "model.layers.14.ffn_norm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.attention.wo.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.15.attention.wo.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.15.attention.wo.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.15.attention.wo.scales": "model-00001-of-00002.safetensors",
+ "model.layers.15.attention.wqkv.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.15.attention.wqkv.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.15.attention.wqkv.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.15.attention.wqkv.scales": "model-00001-of-00002.safetensors",
+ "model.layers.15.attention_norm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.15.feed_forward.w1.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.15.feed_forward.w1.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.15.feed_forward.w1.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.15.feed_forward.w1.scales": "model-00001-of-00002.safetensors",
+ "model.layers.15.feed_forward.w2.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.15.feed_forward.w2.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.15.feed_forward.w2.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.15.feed_forward.w2.scales": "model-00001-of-00002.safetensors",
+ "model.layers.15.feed_forward.w3.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.15.feed_forward.w3.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.15.feed_forward.w3.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.15.feed_forward.w3.scales": "model-00001-of-00002.safetensors",
+ "model.layers.15.ffn_norm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.attention.wo.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.16.attention.wo.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.16.attention.wo.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.16.attention.wo.scales": "model-00001-of-00002.safetensors",
+ "model.layers.16.attention.wqkv.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.16.attention.wqkv.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.16.attention.wqkv.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.16.attention.wqkv.scales": "model-00001-of-00002.safetensors",
+ "model.layers.16.attention_norm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.16.feed_forward.w1.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.16.feed_forward.w1.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.16.feed_forward.w1.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.16.feed_forward.w1.scales": "model-00001-of-00002.safetensors",
+ "model.layers.16.feed_forward.w2.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.16.feed_forward.w2.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.16.feed_forward.w2.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.16.feed_forward.w2.scales": "model-00001-of-00002.safetensors",
+ "model.layers.16.feed_forward.w3.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.16.feed_forward.w3.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.16.feed_forward.w3.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.16.feed_forward.w3.scales": "model-00001-of-00002.safetensors",
+ "model.layers.16.ffn_norm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.17.attention.wo.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.17.attention.wo.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.17.attention.wo.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.17.attention.wo.scales": "model-00001-of-00002.safetensors",
+ "model.layers.17.attention.wqkv.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.17.attention.wqkv.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.17.attention.wqkv.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.17.attention.wqkv.scales": "model-00001-of-00002.safetensors",
+ "model.layers.17.attention_norm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.17.feed_forward.w1.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.17.feed_forward.w1.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.17.feed_forward.w1.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.17.feed_forward.w1.scales": "model-00001-of-00002.safetensors",
+ "model.layers.17.feed_forward.w2.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.17.feed_forward.w2.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.17.feed_forward.w2.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.17.feed_forward.w2.scales": "model-00001-of-00002.safetensors",
+ "model.layers.17.feed_forward.w3.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.17.feed_forward.w3.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.17.feed_forward.w3.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.17.feed_forward.w3.scales": "model-00001-of-00002.safetensors",
+ "model.layers.17.ffn_norm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.18.attention.wo.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.18.attention.wo.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.18.attention.wo.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.18.attention.wo.scales": "model-00001-of-00002.safetensors",
+ "model.layers.18.attention.wqkv.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.18.attention.wqkv.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.18.attention.wqkv.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.18.attention.wqkv.scales": "model-00001-of-00002.safetensors",
+ "model.layers.18.attention_norm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.18.feed_forward.w1.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.18.feed_forward.w1.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.18.feed_forward.w1.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.18.feed_forward.w1.scales": "model-00001-of-00002.safetensors",
+ "model.layers.18.feed_forward.w2.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.18.feed_forward.w2.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.18.feed_forward.w2.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.18.feed_forward.w2.scales": "model-00001-of-00002.safetensors",
+ "model.layers.18.feed_forward.w3.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.18.feed_forward.w3.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.18.feed_forward.w3.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.18.feed_forward.w3.scales": "model-00002-of-00002.safetensors",
+ "model.layers.18.ffn_norm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.19.attention.wo.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.19.attention.wo.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.19.attention.wo.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.19.attention.wo.scales": "model-00002-of-00002.safetensors",
+ "model.layers.19.attention.wqkv.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.19.attention.wqkv.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.19.attention.wqkv.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.19.attention.wqkv.scales": "model-00002-of-00002.safetensors",
+ "model.layers.19.attention_norm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.19.feed_forward.w1.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.19.feed_forward.w1.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.19.feed_forward.w1.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.19.feed_forward.w1.scales": "model-00002-of-00002.safetensors",
+ "model.layers.19.feed_forward.w2.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.19.feed_forward.w2.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.19.feed_forward.w2.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.19.feed_forward.w2.scales": "model-00002-of-00002.safetensors",
+ "model.layers.19.feed_forward.w3.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.19.feed_forward.w3.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.19.feed_forward.w3.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.19.feed_forward.w3.scales": "model-00002-of-00002.safetensors",
+ "model.layers.19.ffn_norm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.2.attention.wo.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.2.attention.wo.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.2.attention.wo.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.2.attention.wo.scales": "model-00001-of-00002.safetensors",
+ "model.layers.2.attention.wqkv.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.2.attention.wqkv.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.2.attention.wqkv.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.2.attention.wqkv.scales": "model-00001-of-00002.safetensors",
+ "model.layers.2.attention_norm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.2.feed_forward.w1.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.2.feed_forward.w1.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.2.feed_forward.w1.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.2.feed_forward.w1.scales": "model-00001-of-00002.safetensors",
+ "model.layers.2.feed_forward.w2.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.2.feed_forward.w2.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.2.feed_forward.w2.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.2.feed_forward.w2.scales": "model-00001-of-00002.safetensors",
+ "model.layers.2.feed_forward.w3.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.2.feed_forward.w3.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.2.feed_forward.w3.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.2.feed_forward.w3.scales": "model-00001-of-00002.safetensors",
+ "model.layers.2.ffn_norm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.20.attention.wo.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.20.attention.wo.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.20.attention.wo.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.20.attention.wo.scales": "model-00002-of-00002.safetensors",
+ "model.layers.20.attention.wqkv.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.20.attention.wqkv.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.20.attention.wqkv.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.20.attention.wqkv.scales": "model-00002-of-00002.safetensors",
+ "model.layers.20.attention_norm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.20.feed_forward.w1.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.20.feed_forward.w1.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.20.feed_forward.w1.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.20.feed_forward.w1.scales": "model-00002-of-00002.safetensors",
+ "model.layers.20.feed_forward.w2.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.20.feed_forward.w2.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.20.feed_forward.w2.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.20.feed_forward.w2.scales": "model-00002-of-00002.safetensors",
+ "model.layers.20.feed_forward.w3.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.20.feed_forward.w3.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.20.feed_forward.w3.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.20.feed_forward.w3.scales": "model-00002-of-00002.safetensors",
+ "model.layers.20.ffn_norm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.21.attention.wo.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.21.attention.wo.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.21.attention.wo.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.21.attention.wo.scales": "model-00002-of-00002.safetensors",
+ "model.layers.21.attention.wqkv.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.21.attention.wqkv.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.21.attention.wqkv.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.21.attention.wqkv.scales": "model-00002-of-00002.safetensors",
+ "model.layers.21.attention_norm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.21.feed_forward.w1.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.21.feed_forward.w1.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.21.feed_forward.w1.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.21.feed_forward.w1.scales": "model-00002-of-00002.safetensors",
+ "model.layers.21.feed_forward.w2.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.21.feed_forward.w2.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.21.feed_forward.w2.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.21.feed_forward.w2.scales": "model-00002-of-00002.safetensors",
+ "model.layers.21.feed_forward.w3.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.21.feed_forward.w3.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.21.feed_forward.w3.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.21.feed_forward.w3.scales": "model-00002-of-00002.safetensors",
+ "model.layers.21.ffn_norm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.22.attention.wo.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.22.attention.wo.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.22.attention.wo.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.22.attention.wo.scales": "model-00002-of-00002.safetensors",
+ "model.layers.22.attention.wqkv.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.22.attention.wqkv.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.22.attention.wqkv.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.22.attention.wqkv.scales": "model-00002-of-00002.safetensors",
+ "model.layers.22.attention_norm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.22.feed_forward.w1.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.22.feed_forward.w1.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.22.feed_forward.w1.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.22.feed_forward.w1.scales": "model-00002-of-00002.safetensors",
+ "model.layers.22.feed_forward.w2.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.22.feed_forward.w2.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.22.feed_forward.w2.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.22.feed_forward.w2.scales": "model-00002-of-00002.safetensors",
+ "model.layers.22.feed_forward.w3.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.22.feed_forward.w3.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.22.feed_forward.w3.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.22.feed_forward.w3.scales": "model-00002-of-00002.safetensors",
+ "model.layers.22.ffn_norm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.23.attention.wo.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.23.attention.wo.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.23.attention.wo.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.23.attention.wo.scales": "model-00002-of-00002.safetensors",
+ "model.layers.23.attention.wqkv.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.23.attention.wqkv.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.23.attention.wqkv.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.23.attention.wqkv.scales": "model-00002-of-00002.safetensors",
+ "model.layers.23.attention_norm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.23.feed_forward.w1.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.23.feed_forward.w1.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.23.feed_forward.w1.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.23.feed_forward.w1.scales": "model-00002-of-00002.safetensors",
+ "model.layers.23.feed_forward.w2.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.23.feed_forward.w2.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.23.feed_forward.w2.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.23.feed_forward.w2.scales": "model-00002-of-00002.safetensors",
+ "model.layers.23.feed_forward.w3.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.23.feed_forward.w3.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.23.feed_forward.w3.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.23.feed_forward.w3.scales": "model-00002-of-00002.safetensors",
+ "model.layers.23.ffn_norm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.24.attention.wo.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.24.attention.wo.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.24.attention.wo.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.24.attention.wo.scales": "model-00002-of-00002.safetensors",
+ "model.layers.24.attention.wqkv.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.24.attention.wqkv.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.24.attention.wqkv.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.24.attention.wqkv.scales": "model-00002-of-00002.safetensors",
+ "model.layers.24.attention_norm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.24.feed_forward.w1.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.24.feed_forward.w1.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.24.feed_forward.w1.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.24.feed_forward.w1.scales": "model-00002-of-00002.safetensors",
+ "model.layers.24.feed_forward.w2.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.24.feed_forward.w2.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.24.feed_forward.w2.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.24.feed_forward.w2.scales": "model-00002-of-00002.safetensors",
+ "model.layers.24.feed_forward.w3.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.24.feed_forward.w3.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.24.feed_forward.w3.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.24.feed_forward.w3.scales": "model-00002-of-00002.safetensors",
+ "model.layers.24.ffn_norm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.25.attention.wo.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.25.attention.wo.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.25.attention.wo.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.25.attention.wo.scales": "model-00002-of-00002.safetensors",
+ "model.layers.25.attention.wqkv.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.25.attention.wqkv.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.25.attention.wqkv.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.25.attention.wqkv.scales": "model-00002-of-00002.safetensors",
+ "model.layers.25.attention_norm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.25.feed_forward.w1.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.25.feed_forward.w1.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.25.feed_forward.w1.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.25.feed_forward.w1.scales": "model-00002-of-00002.safetensors",
+ "model.layers.25.feed_forward.w2.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.25.feed_forward.w2.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.25.feed_forward.w2.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.25.feed_forward.w2.scales": "model-00002-of-00002.safetensors",
+ "model.layers.25.feed_forward.w3.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.25.feed_forward.w3.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.25.feed_forward.w3.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.25.feed_forward.w3.scales": "model-00002-of-00002.safetensors",
+ "model.layers.25.ffn_norm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.26.attention.wo.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.26.attention.wo.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.26.attention.wo.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.26.attention.wo.scales": "model-00002-of-00002.safetensors",
+ "model.layers.26.attention.wqkv.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.26.attention.wqkv.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.26.attention.wqkv.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.26.attention.wqkv.scales": "model-00002-of-00002.safetensors",
+ "model.layers.26.attention_norm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.26.feed_forward.w1.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.26.feed_forward.w1.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.26.feed_forward.w1.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.26.feed_forward.w1.scales": "model-00002-of-00002.safetensors",
+ "model.layers.26.feed_forward.w2.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.26.feed_forward.w2.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.26.feed_forward.w2.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.26.feed_forward.w2.scales": "model-00002-of-00002.safetensors",
+ "model.layers.26.feed_forward.w3.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.26.feed_forward.w3.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.26.feed_forward.w3.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.26.feed_forward.w3.scales": "model-00002-of-00002.safetensors",
+ "model.layers.26.ffn_norm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.27.attention.wo.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.27.attention.wo.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.27.attention.wo.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.27.attention.wo.scales": "model-00002-of-00002.safetensors",
+ "model.layers.27.attention.wqkv.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.27.attention.wqkv.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.27.attention.wqkv.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.27.attention.wqkv.scales": "model-00002-of-00002.safetensors",
+ "model.layers.27.attention_norm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.27.feed_forward.w1.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.27.feed_forward.w1.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.27.feed_forward.w1.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.27.feed_forward.w1.scales": "model-00002-of-00002.safetensors",
+ "model.layers.27.feed_forward.w2.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.27.feed_forward.w2.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.27.feed_forward.w2.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.27.feed_forward.w2.scales": "model-00002-of-00002.safetensors",
+ "model.layers.27.feed_forward.w3.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.27.feed_forward.w3.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.27.feed_forward.w3.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.27.feed_forward.w3.scales": "model-00002-of-00002.safetensors",
+ "model.layers.27.ffn_norm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.28.attention.wo.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.28.attention.wo.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.28.attention.wo.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.28.attention.wo.scales": "model-00002-of-00002.safetensors",
+ "model.layers.28.attention.wqkv.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.28.attention.wqkv.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.28.attention.wqkv.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.28.attention.wqkv.scales": "model-00002-of-00002.safetensors",
+ "model.layers.28.attention_norm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.28.feed_forward.w1.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.28.feed_forward.w1.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.28.feed_forward.w1.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.28.feed_forward.w1.scales": "model-00002-of-00002.safetensors",
+ "model.layers.28.feed_forward.w2.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.28.feed_forward.w2.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.28.feed_forward.w2.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.28.feed_forward.w2.scales": "model-00002-of-00002.safetensors",
+ "model.layers.28.feed_forward.w3.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.28.feed_forward.w3.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.28.feed_forward.w3.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.28.feed_forward.w3.scales": "model-00002-of-00002.safetensors",
+ "model.layers.28.ffn_norm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.29.attention.wo.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.29.attention.wo.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.29.attention.wo.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.29.attention.wo.scales": "model-00002-of-00002.safetensors",
+ "model.layers.29.attention.wqkv.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.29.attention.wqkv.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.29.attention.wqkv.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.29.attention.wqkv.scales": "model-00002-of-00002.safetensors",
+ "model.layers.29.attention_norm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.29.feed_forward.w1.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.29.feed_forward.w1.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.29.feed_forward.w1.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.29.feed_forward.w1.scales": "model-00002-of-00002.safetensors",
+ "model.layers.29.feed_forward.w2.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.29.feed_forward.w2.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.29.feed_forward.w2.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.29.feed_forward.w2.scales": "model-00002-of-00002.safetensors",
+ "model.layers.29.feed_forward.w3.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.29.feed_forward.w3.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.29.feed_forward.w3.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.29.feed_forward.w3.scales": "model-00002-of-00002.safetensors",
+ "model.layers.29.ffn_norm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.3.attention.wo.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.3.attention.wo.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.3.attention.wo.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.3.attention.wo.scales": "model-00001-of-00002.safetensors",
+ "model.layers.3.attention.wqkv.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.3.attention.wqkv.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.3.attention.wqkv.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.3.attention.wqkv.scales": "model-00001-of-00002.safetensors",
+ "model.layers.3.attention_norm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.3.feed_forward.w1.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.3.feed_forward.w1.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.3.feed_forward.w1.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.3.feed_forward.w1.scales": "model-00001-of-00002.safetensors",
+ "model.layers.3.feed_forward.w2.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.3.feed_forward.w2.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.3.feed_forward.w2.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.3.feed_forward.w2.scales": "model-00001-of-00002.safetensors",
+ "model.layers.3.feed_forward.w3.g_idx": "model-00001-of-00002.safetensors",
+ "model.layers.3.feed_forward.w3.qweight": "model-00001-of-00002.safetensors",
+ "model.layers.3.feed_forward.w3.qzeros": "model-00001-of-00002.safetensors",
+ "model.layers.3.feed_forward.w3.scales": "model-00001-of-00002.safetensors",
+ "model.layers.3.ffn_norm.weight": "model-00001-of-00002.safetensors",
+ "model.layers.30.attention.wo.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.30.attention.wo.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.30.attention.wo.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.30.attention.wo.scales": "model-00002-of-00002.safetensors",
+ "model.layers.30.attention.wqkv.g_idx": "model-00002-of-00002.safetensors",
+ "model.layers.30.attention.wqkv.qweight": "model-00002-of-00002.safetensors",
+ "model.layers.30.attention.wqkv.qzeros": "model-00002-of-00002.safetensors",
+ "model.layers.30.attention.wqkv.scales": "model-00002-of-00002.safetensors",
+ "model.layers.30.attention_norm.weight": "model-00002-of-00002.safetensors",
+ "model.layers.30.feed_forward.w1.g_idx": "model-00002-of-00002.safetensors",
544
+ "model.layers.30.feed_forward.w1.qweight": "model-00002-of-00002.safetensors",
545
+ "model.layers.30.feed_forward.w1.qzeros": "model-00002-of-00002.safetensors",
546
+ "model.layers.30.feed_forward.w1.scales": "model-00002-of-00002.safetensors",
547
+ "model.layers.30.feed_forward.w2.g_idx": "model-00002-of-00002.safetensors",
548
+ "model.layers.30.feed_forward.w2.qweight": "model-00002-of-00002.safetensors",
549
+ "model.layers.30.feed_forward.w2.qzeros": "model-00002-of-00002.safetensors",
550
+ "model.layers.30.feed_forward.w2.scales": "model-00002-of-00002.safetensors",
551
+ "model.layers.30.feed_forward.w3.g_idx": "model-00002-of-00002.safetensors",
552
+ "model.layers.30.feed_forward.w3.qweight": "model-00002-of-00002.safetensors",
553
+ "model.layers.30.feed_forward.w3.qzeros": "model-00002-of-00002.safetensors",
554
+ "model.layers.30.feed_forward.w3.scales": "model-00002-of-00002.safetensors",
555
+ "model.layers.30.ffn_norm.weight": "model-00002-of-00002.safetensors",
556
+ "model.layers.31.attention.wo.g_idx": "model-00002-of-00002.safetensors",
557
+ "model.layers.31.attention.wo.qweight": "model-00002-of-00002.safetensors",
558
+ "model.layers.31.attention.wo.qzeros": "model-00002-of-00002.safetensors",
559
+ "model.layers.31.attention.wo.scales": "model-00002-of-00002.safetensors",
560
+ "model.layers.31.attention.wqkv.g_idx": "model-00002-of-00002.safetensors",
561
+ "model.layers.31.attention.wqkv.qweight": "model-00002-of-00002.safetensors",
562
+ "model.layers.31.attention.wqkv.qzeros": "model-00002-of-00002.safetensors",
563
+ "model.layers.31.attention.wqkv.scales": "model-00002-of-00002.safetensors",
564
+ "model.layers.31.attention_norm.weight": "model-00002-of-00002.safetensors",
565
+ "model.layers.31.feed_forward.w1.g_idx": "model-00002-of-00002.safetensors",
566
+ "model.layers.31.feed_forward.w1.qweight": "model-00002-of-00002.safetensors",
567
+ "model.layers.31.feed_forward.w1.qzeros": "model-00002-of-00002.safetensors",
568
+ "model.layers.31.feed_forward.w1.scales": "model-00002-of-00002.safetensors",
569
+ "model.layers.31.feed_forward.w2.g_idx": "model-00002-of-00002.safetensors",
570
+ "model.layers.31.feed_forward.w2.qweight": "model-00002-of-00002.safetensors",
571
+ "model.layers.31.feed_forward.w2.qzeros": "model-00002-of-00002.safetensors",
572
+ "model.layers.31.feed_forward.w2.scales": "model-00002-of-00002.safetensors",
573
+ "model.layers.31.feed_forward.w3.g_idx": "model-00002-of-00002.safetensors",
574
+ "model.layers.31.feed_forward.w3.qweight": "model-00002-of-00002.safetensors",
575
+ "model.layers.31.feed_forward.w3.qzeros": "model-00002-of-00002.safetensors",
576
+ "model.layers.31.feed_forward.w3.scales": "model-00002-of-00002.safetensors",
577
+ "model.layers.31.ffn_norm.weight": "model-00002-of-00002.safetensors",
578
+ "model.layers.4.attention.wo.g_idx": "model-00001-of-00002.safetensors",
579
+ "model.layers.4.attention.wo.qweight": "model-00001-of-00002.safetensors",
580
+ "model.layers.4.attention.wo.qzeros": "model-00001-of-00002.safetensors",
581
+ "model.layers.4.attention.wo.scales": "model-00001-of-00002.safetensors",
582
+ "model.layers.4.attention.wqkv.g_idx": "model-00001-of-00002.safetensors",
583
+ "model.layers.4.attention.wqkv.qweight": "model-00001-of-00002.safetensors",
584
+ "model.layers.4.attention.wqkv.qzeros": "model-00001-of-00002.safetensors",
585
+ "model.layers.4.attention.wqkv.scales": "model-00001-of-00002.safetensors",
586
+ "model.layers.4.attention_norm.weight": "model-00001-of-00002.safetensors",
587
+ "model.layers.4.feed_forward.w1.g_idx": "model-00001-of-00002.safetensors",
588
+ "model.layers.4.feed_forward.w1.qweight": "model-00001-of-00002.safetensors",
589
+ "model.layers.4.feed_forward.w1.qzeros": "model-00001-of-00002.safetensors",
590
+ "model.layers.4.feed_forward.w1.scales": "model-00001-of-00002.safetensors",
591
+ "model.layers.4.feed_forward.w2.g_idx": "model-00001-of-00002.safetensors",
592
+ "model.layers.4.feed_forward.w2.qweight": "model-00001-of-00002.safetensors",
593
+ "model.layers.4.feed_forward.w2.qzeros": "model-00001-of-00002.safetensors",
594
+ "model.layers.4.feed_forward.w2.scales": "model-00001-of-00002.safetensors",
595
+ "model.layers.4.feed_forward.w3.g_idx": "model-00001-of-00002.safetensors",
596
+ "model.layers.4.feed_forward.w3.qweight": "model-00001-of-00002.safetensors",
597
+ "model.layers.4.feed_forward.w3.qzeros": "model-00001-of-00002.safetensors",
598
+ "model.layers.4.feed_forward.w3.scales": "model-00001-of-00002.safetensors",
599
+ "model.layers.4.ffn_norm.weight": "model-00001-of-00002.safetensors",
600
+ "model.layers.5.attention.wo.g_idx": "model-00001-of-00002.safetensors",
601
+ "model.layers.5.attention.wo.qweight": "model-00001-of-00002.safetensors",
602
+ "model.layers.5.attention.wo.qzeros": "model-00001-of-00002.safetensors",
603
+ "model.layers.5.attention.wo.scales": "model-00001-of-00002.safetensors",
604
+ "model.layers.5.attention.wqkv.g_idx": "model-00001-of-00002.safetensors",
605
+ "model.layers.5.attention.wqkv.qweight": "model-00001-of-00002.safetensors",
606
+ "model.layers.5.attention.wqkv.qzeros": "model-00001-of-00002.safetensors",
607
+ "model.layers.5.attention.wqkv.scales": "model-00001-of-00002.safetensors",
608
+ "model.layers.5.attention_norm.weight": "model-00001-of-00002.safetensors",
609
+ "model.layers.5.feed_forward.w1.g_idx": "model-00001-of-00002.safetensors",
610
+ "model.layers.5.feed_forward.w1.qweight": "model-00001-of-00002.safetensors",
611
+ "model.layers.5.feed_forward.w1.qzeros": "model-00001-of-00002.safetensors",
612
+ "model.layers.5.feed_forward.w1.scales": "model-00001-of-00002.safetensors",
613
+ "model.layers.5.feed_forward.w2.g_idx": "model-00001-of-00002.safetensors",
614
+ "model.layers.5.feed_forward.w2.qweight": "model-00001-of-00002.safetensors",
615
+ "model.layers.5.feed_forward.w2.qzeros": "model-00001-of-00002.safetensors",
616
+ "model.layers.5.feed_forward.w2.scales": "model-00001-of-00002.safetensors",
617
+ "model.layers.5.feed_forward.w3.g_idx": "model-00001-of-00002.safetensors",
618
+ "model.layers.5.feed_forward.w3.qweight": "model-00001-of-00002.safetensors",
619
+ "model.layers.5.feed_forward.w3.qzeros": "model-00001-of-00002.safetensors",
620
+ "model.layers.5.feed_forward.w3.scales": "model-00001-of-00002.safetensors",
621
+ "model.layers.5.ffn_norm.weight": "model-00001-of-00002.safetensors",
622
+ "model.layers.6.attention.wo.g_idx": "model-00001-of-00002.safetensors",
623
+ "model.layers.6.attention.wo.qweight": "model-00001-of-00002.safetensors",
624
+ "model.layers.6.attention.wo.qzeros": "model-00001-of-00002.safetensors",
625
+ "model.layers.6.attention.wo.scales": "model-00001-of-00002.safetensors",
626
+ "model.layers.6.attention.wqkv.g_idx": "model-00001-of-00002.safetensors",
627
+ "model.layers.6.attention.wqkv.qweight": "model-00001-of-00002.safetensors",
628
+ "model.layers.6.attention.wqkv.qzeros": "model-00001-of-00002.safetensors",
629
+ "model.layers.6.attention.wqkv.scales": "model-00001-of-00002.safetensors",
630
+ "model.layers.6.attention_norm.weight": "model-00001-of-00002.safetensors",
631
+ "model.layers.6.feed_forward.w1.g_idx": "model-00001-of-00002.safetensors",
632
+ "model.layers.6.feed_forward.w1.qweight": "model-00001-of-00002.safetensors",
633
+ "model.layers.6.feed_forward.w1.qzeros": "model-00001-of-00002.safetensors",
634
+ "model.layers.6.feed_forward.w1.scales": "model-00001-of-00002.safetensors",
635
+ "model.layers.6.feed_forward.w2.g_idx": "model-00001-of-00002.safetensors",
636
+ "model.layers.6.feed_forward.w2.qweight": "model-00001-of-00002.safetensors",
637
+ "model.layers.6.feed_forward.w2.qzeros": "model-00001-of-00002.safetensors",
638
+ "model.layers.6.feed_forward.w2.scales": "model-00001-of-00002.safetensors",
639
+ "model.layers.6.feed_forward.w3.g_idx": "model-00001-of-00002.safetensors",
640
+ "model.layers.6.feed_forward.w3.qweight": "model-00001-of-00002.safetensors",
641
+ "model.layers.6.feed_forward.w3.qzeros": "model-00001-of-00002.safetensors",
642
+ "model.layers.6.feed_forward.w3.scales": "model-00001-of-00002.safetensors",
643
+ "model.layers.6.ffn_norm.weight": "model-00001-of-00002.safetensors",
644
+ "model.layers.7.attention.wo.g_idx": "model-00001-of-00002.safetensors",
645
+ "model.layers.7.attention.wo.qweight": "model-00001-of-00002.safetensors",
646
+ "model.layers.7.attention.wo.qzeros": "model-00001-of-00002.safetensors",
647
+ "model.layers.7.attention.wo.scales": "model-00001-of-00002.safetensors",
648
+ "model.layers.7.attention.wqkv.g_idx": "model-00001-of-00002.safetensors",
649
+ "model.layers.7.attention.wqkv.qweight": "model-00001-of-00002.safetensors",
650
+ "model.layers.7.attention.wqkv.qzeros": "model-00001-of-00002.safetensors",
651
+ "model.layers.7.attention.wqkv.scales": "model-00001-of-00002.safetensors",
652
+ "model.layers.7.attention_norm.weight": "model-00001-of-00002.safetensors",
653
+ "model.layers.7.feed_forward.w1.g_idx": "model-00001-of-00002.safetensors",
654
+ "model.layers.7.feed_forward.w1.qweight": "model-00001-of-00002.safetensors",
655
+ "model.layers.7.feed_forward.w1.qzeros": "model-00001-of-00002.safetensors",
656
+ "model.layers.7.feed_forward.w1.scales": "model-00001-of-00002.safetensors",
657
+ "model.layers.7.feed_forward.w2.g_idx": "model-00001-of-00002.safetensors",
658
+ "model.layers.7.feed_forward.w2.qweight": "model-00001-of-00002.safetensors",
659
+ "model.layers.7.feed_forward.w2.qzeros": "model-00001-of-00002.safetensors",
660
+ "model.layers.7.feed_forward.w2.scales": "model-00001-of-00002.safetensors",
661
+ "model.layers.7.feed_forward.w3.g_idx": "model-00001-of-00002.safetensors",
662
+ "model.layers.7.feed_forward.w3.qweight": "model-00001-of-00002.safetensors",
663
+ "model.layers.7.feed_forward.w3.qzeros": "model-00001-of-00002.safetensors",
664
+ "model.layers.7.feed_forward.w3.scales": "model-00001-of-00002.safetensors",
665
+ "model.layers.7.ffn_norm.weight": "model-00001-of-00002.safetensors",
666
+ "model.layers.8.attention.wo.g_idx": "model-00001-of-00002.safetensors",
667
+ "model.layers.8.attention.wo.qweight": "model-00001-of-00002.safetensors",
668
+ "model.layers.8.attention.wo.qzeros": "model-00001-of-00002.safetensors",
669
+ "model.layers.8.attention.wo.scales": "model-00001-of-00002.safetensors",
670
+ "model.layers.8.attention.wqkv.g_idx": "model-00001-of-00002.safetensors",
671
+ "model.layers.8.attention.wqkv.qweight": "model-00001-of-00002.safetensors",
672
+ "model.layers.8.attention.wqkv.qzeros": "model-00001-of-00002.safetensors",
673
+ "model.layers.8.attention.wqkv.scales": "model-00001-of-00002.safetensors",
674
+ "model.layers.8.attention_norm.weight": "model-00001-of-00002.safetensors",
675
+ "model.layers.8.feed_forward.w1.g_idx": "model-00001-of-00002.safetensors",
676
+ "model.layers.8.feed_forward.w1.qweight": "model-00001-of-00002.safetensors",
677
+ "model.layers.8.feed_forward.w1.qzeros": "model-00001-of-00002.safetensors",
678
+ "model.layers.8.feed_forward.w1.scales": "model-00001-of-00002.safetensors",
679
+ "model.layers.8.feed_forward.w2.g_idx": "model-00001-of-00002.safetensors",
680
+ "model.layers.8.feed_forward.w2.qweight": "model-00001-of-00002.safetensors",
681
+ "model.layers.8.feed_forward.w2.qzeros": "model-00001-of-00002.safetensors",
682
+ "model.layers.8.feed_forward.w2.scales": "model-00001-of-00002.safetensors",
683
+ "model.layers.8.feed_forward.w3.g_idx": "model-00001-of-00002.safetensors",
684
+ "model.layers.8.feed_forward.w3.qweight": "model-00001-of-00002.safetensors",
685
+ "model.layers.8.feed_forward.w3.qzeros": "model-00001-of-00002.safetensors",
686
+ "model.layers.8.feed_forward.w3.scales": "model-00001-of-00002.safetensors",
687
+ "model.layers.8.ffn_norm.weight": "model-00001-of-00002.safetensors",
688
+ "model.layers.9.attention.wo.g_idx": "model-00001-of-00002.safetensors",
689
+ "model.layers.9.attention.wo.qweight": "model-00001-of-00002.safetensors",
690
+ "model.layers.9.attention.wo.qzeros": "model-00001-of-00002.safetensors",
691
+ "model.layers.9.attention.wo.scales": "model-00001-of-00002.safetensors",
692
+ "model.layers.9.attention.wqkv.g_idx": "model-00001-of-00002.safetensors",
693
+ "model.layers.9.attention.wqkv.qweight": "model-00001-of-00002.safetensors",
694
+ "model.layers.9.attention.wqkv.qzeros": "model-00001-of-00002.safetensors",
695
+ "model.layers.9.attention.wqkv.scales": "model-00001-of-00002.safetensors",
696
+ "model.layers.9.attention_norm.weight": "model-00001-of-00002.safetensors",
697
+ "model.layers.9.feed_forward.w1.g_idx": "model-00001-of-00002.safetensors",
698
+ "model.layers.9.feed_forward.w1.qweight": "model-00001-of-00002.safetensors",
699
+ "model.layers.9.feed_forward.w1.qzeros": "model-00001-of-00002.safetensors",
700
+ "model.layers.9.feed_forward.w1.scales": "model-00001-of-00002.safetensors",
701
+ "model.layers.9.feed_forward.w2.g_idx": "model-00001-of-00002.safetensors",
702
+ "model.layers.9.feed_forward.w2.qweight": "model-00001-of-00002.safetensors",
703
+ "model.layers.9.feed_forward.w2.qzeros": "model-00001-of-00002.safetensors",
704
+ "model.layers.9.feed_forward.w2.scales": "model-00001-of-00002.safetensors",
705
+ "model.layers.9.feed_forward.w3.g_idx": "model-00001-of-00002.safetensors",
706
+ "model.layers.9.feed_forward.w3.qweight": "model-00001-of-00002.safetensors",
707
+ "model.layers.9.feed_forward.w3.qzeros": "model-00001-of-00002.safetensors",
708
+ "model.layers.9.feed_forward.w3.scales": "model-00001-of-00002.safetensors",
709
+ "model.layers.9.ffn_norm.weight": "model-00001-of-00002.safetensors",
710
+ "model.norm.weight": "model-00002-of-00002.safetensors",
711
+ "model.tok_embeddings.weight": "model-00001-of-00002.safetensors",
712
+ "output.weight": "model-00002-of-00002.safetensors"
713
+ }
714
+ }
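The `weight_map` in the index above tells loaders which shard file stores each tensor: layers 3-9 and the token embeddings live in shard 00001, layers 30-31 plus the final norm and output head in shard 00002. As a minimal, self-contained sketch (the `shards_for` helper is hypothetical, and the map below is a shortened slice of the real index), grouping parameter names by shard looks like:

```python
from collections import defaultdict

def shards_for(weight_map):
    """Group parameter names by the shard file that stores them."""
    shards = defaultdict(list)
    for name, shard in weight_map.items():
        shards[shard].append(name)
    return dict(shards)

# Tiny synthetic slice of the index above (for illustration only).
weight_map = {
    "model.layers.3.ffn_norm.weight": "model-00001-of-00002.safetensors",
    "model.norm.weight": "model-00002-of-00002.safetensors",
    "output.weight": "model-00002-of-00002.safetensors",
}
grouped = shards_for(weight_map)
```

In practice `transformers` reads `model.safetensors.index.json` itself and opens each shard once, loading only the tensors that map to it.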
modeling_internlm2.py ADDED
@@ -0,0 +1,1947 @@
+ # Copyright (c) The InternLM team and The HuggingFace Inc. team. All rights reserved.
+ #
+ # This code is based on transformers/src/transformers/models/llama/modeling_llama.py
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """PyTorch InternLM2.5 model."""
+ import math
+ import os
+ import queue
+ import threading
+ import time
+ from concurrent.futures import ThreadPoolExecutor
+ from typing import List, Optional, Tuple, Union
+
+ import torch
+ import torch.distributed as dist
+ import torch.nn.functional as F
+ import torch.utils.checkpoint
+ from einops import rearrange
+ from torch import nn
+ from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
+ from transformers.activations import ACT2FN
+ from transformers.cache_utils import Cache, DynamicCache, StaticCache
+ from transformers.modeling_attn_mask_utils import AttentionMaskConverter
+ from transformers.modeling_outputs import (
+     BaseModelOutputWithPast,
+     CausalLMOutputWithPast,
+     QuestionAnsweringModelOutput,
+     SequenceClassifierOutputWithPast,
+     TokenClassifierOutput,
+ )
+ from transformers.modeling_utils import PreTrainedModel
+ from transformers.pytorch_utils import ALL_LAYERNORM_LAYERS
+ from transformers.utils import (
+     add_start_docstrings,
+     add_start_docstrings_to_model_forward,
+     is_flash_attn_greater_or_equal_2_10,
+     logging,
+     replace_return_docstrings,
+ )
+
+ try:
+     from transformers.generation.streamers import BaseStreamer
+ except Exception:
+     BaseStreamer = None
+
+ from .configuration_internlm2 import InternLM2Config
+
+
+ try:
+     from flash_attn import flash_attn_func, flash_attn_varlen_func
+     from flash_attn.bert_padding import index_first_axis, pad_input, unpad_input
+ except ImportError:
+     pass
+
+
+ logger = logging.get_logger(__name__)
+
+ _CONFIG_FOR_DOC = "InternLM2Config"
+
+ os.environ['CUDA_VISIBLE_DEVICES'] = '1'
+
+
+ def log_memory_usage(stage):
+     allocated = torch.cuda.memory_allocated() / (1024**3)
+     print(f"[{stage}] Allocated GPU memory: {allocated:.2f} GB")
+
+
+ def _get_unpad_data(attention_mask):
+     seqlens_in_batch = attention_mask.sum(dim=-1, dtype=torch.int32)
+     indices = torch.nonzero(attention_mask.flatten(), as_tuple=False).flatten()
+     max_seqlen_in_batch = seqlens_in_batch.max().item()
+     cu_seqlens = F.pad(torch.cumsum(seqlens_in_batch, dim=0, dtype=torch.int32), (1, 0))  # pylint: disable=E1102
+     return (
+         indices,
+         cu_seqlens,
+         max_seqlen_in_batch,
+     )
+
+
+ class InternLM2RMSNorm(nn.Module):
+     """InternLM2RMSNorm is equivalent to T5LayerNorm."""
+
+     def __init__(self, hidden_size, eps=1e-6):
+         super().__init__()
+         self.weight = nn.Parameter(torch.ones(hidden_size))
+         self.variance_epsilon = eps
+
+     def forward(self, hidden_states):
+         input_dtype = hidden_states.dtype
+         hidden_states = hidden_states.to(torch.float32)
+         variance = hidden_states.pow(2).mean(-1, keepdim=True)
+         hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
+         return self.weight * hidden_states.to(input_dtype)
+
+
+ ALL_LAYERNORM_LAYERS.append(InternLM2RMSNorm)
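`InternLM2RMSNorm` rescales each hidden vector by the reciprocal of its root mean square (no mean subtraction, unlike LayerNorm), then applies a learned per-channel gain. A pure-Python sketch of the same arithmetic for a single hidden vector (a standalone `rms_norm` function written for illustration, not the module itself):

```python
import math

def rms_norm(x, weight, eps=1e-6):
    """Pure-Python mirror of InternLM2RMSNorm for one hidden vector."""
    variance = sum(v * v for v in x) / len(x)      # mean of squares
    inv_rms = 1.0 / math.sqrt(variance + eps)      # torch.rsqrt(variance + eps)
    return [w * v * inv_rms for w, v in zip(weight, x)]

out = rms_norm([3.0, 4.0], [1.0, 1.0])
```

After normalization the mean square of the output is ~1 (up to `eps`), which is what keeps activations on a stable scale across layers.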
+
+
+ class InternLM2RotaryEmbedding(nn.Module):
+     """Rotary Position Embedding for the InternLM2 model. Credits to the Reddit user /u/lucidrains."""
+
+     def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None, scaling_factor=1.0):
+         super().__init__()
+         self.scaling_factor = scaling_factor
+         self.dim = dim
+         self.max_position_embeddings = max_position_embeddings
+         self.base = base
+         inv_freq = 1.0 / (self.base ** (torch.arange(0, self.dim, 2, dtype=torch.int64).float().to(device) / self.dim))
+         self.register_buffer("inv_freq", inv_freq, persistent=False)
+         # For BC we register cos and sin cached
+         self.max_seq_len_cached = max_position_embeddings
+         # log_memory_usage("Rotary Position Stage")
+
+     @torch.no_grad()
+     def forward(self, x, position_ids):
+         # x: [bs, num_attention_heads, seq_len, head_size]
+         inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1)
+         position_ids_expanded = position_ids[:, None, :].float()
+         # Force float32 since bfloat16 loses precision on long contexts
+         # See https://github.com/huggingface/transformers/pull/29285
+         device_type = x.device.type
+         device_type = device_type if isinstance(device_type, str) and device_type != "mps" else "cpu"
+         with torch.autocast(device_type=device_type, enabled=False):
+             freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
+             emb = torch.cat((freqs, freqs), dim=-1)
+             cos = emb.cos()
+             sin = emb.sin()
+         return cos.to(dtype=x.dtype), sin.to(dtype=x.dtype)
+         # log_memory_usage("Rotary Position Stage")
+
+
+ class InternLM2LinearScalingRotaryEmbedding(InternLM2RotaryEmbedding):
+     """InternLM2RotaryEmbedding extended with linear scaling. Credits to the Reddit user /u/kaiokendev"""
+
+     def forward(self, x, position_ids):
+         # difference to the original RoPE: a scaling factor is applied to the position ids
+         position_ids = position_ids.float() / self.scaling_factor
+         cos, sin = super().forward(x, position_ids)
+         return cos, sin
+
+
+ class InternLM2DynamicNTKScalingRotaryEmbedding(InternLM2RotaryEmbedding):
+     """InternLM2RotaryEmbedding extended with Dynamic NTK scaling.
+     Credits to the Reddit users /u/bloc97 and /u/emozilla"""
+
+     def forward(self, x, position_ids):
+         # difference to the original RoPE: inv_freq is recomputed when the sequence length > original length
+         seq_len = torch.max(position_ids) + 1
+         if seq_len > self.max_position_embeddings:
+             base = self.base * (
+                 (self.scaling_factor * seq_len / self.max_position_embeddings) - (self.scaling_factor - 1)
+             ) ** (self.dim / (self.dim - 2))
+             inv_freq = 1.0 / (base ** (torch.arange(0, self.dim, 2, dtype=torch.int64).float().to(x.device) / self.dim))
+             self.register_buffer("inv_freq", inv_freq, persistent=False)  # TODO joao: this may break with compilation
+
+         cos, sin = super().forward(x, position_ids)
+
+         return cos, sin
+
+
+ def rotate_half(x):
+     """Rotates half the hidden dims of the input."""
+     x1 = x[..., : x.shape[-1] // 2]
+     x2 = x[..., x.shape[-1] // 2 :]
+     return torch.cat((-x2, x1), dim=-1)
+
+
+ def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):  # pylint: disable=unused-argument
+     """Applies Rotary Position Embedding to the query and key tensors.
+
+     Args:
+         q (`torch.Tensor`): The query tensor.
+         k (`torch.Tensor`): The key tensor.
+         cos (`torch.Tensor`): The cosine part of the rotary embedding.
+         sin (`torch.Tensor`): The sine part of the rotary embedding.
+         position_ids (`torch.Tensor`, *optional*):
+             Deprecated and unused.
+         unsqueeze_dim (`int`, *optional*, defaults to 1):
+             The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and
+             sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note
+             that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and
+             k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes
+             cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have
+             the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2.
+     Returns:
+         `tuple(torch.Tensor)` comprising of the query and key tensors rotated using the Rotary Position Embedding.
+     """
+     cos = cos.unsqueeze(unsqueeze_dim)
+     sin = sin.unsqueeze(unsqueeze_dim)
+     q_embed = (q * cos) + (rotate_half(q) * sin)
+     k_embed = (k * cos) + (rotate_half(k) * sin)
+     return q_embed, k_embed
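`apply_rotary_pos_emb` rotates each `(i, i + head_dim/2)` coordinate pair of the query and key vectors by a position-dependent angle; `rotate_half` supplies the `(-x2, x1)` half of that rotation. A pure-Python sketch for a single coordinate pair (the `rope_pair` helper is written here for illustration, mirroring the `(q * cos) + (rotate_half(q) * sin)` formula above):

```python
import math

def rope_pair(x1, x2, theta):
    """Rotate the pair (x1, x2) by angle theta, as RoPE does per
    (i, i + head_dim/2) coordinate pair of a query/key vector."""
    c, s = math.cos(theta), math.sin(theta)
    # first half:  x1*cos + (-x2)*sin; second half: x2*cos + x1*sin
    return (x1 * c - x2 * s, x2 * c + x1 * s)

rotated = rope_pair(1.0, 0.0, math.pi / 2)
```

Because each pair undergoes a pure rotation, vector norms are preserved, and the dot product between a rotated query and key depends only on the *relative* distance between their positions.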
+
+
+ class InternLM2MLP(nn.Module):
+     """MLP for InternLM2 model."""
+
+     def __init__(self, config):
+         super().__init__()
+         self.config = config
+         self.hidden_size = config.hidden_size
+         self.intermediate_size = config.intermediate_size
+         self.w1 = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
+         self.w3 = nn.Linear(self.hidden_size, self.intermediate_size, bias=False)
+         self.w2 = nn.Linear(self.intermediate_size, self.hidden_size, bias=False)
+         self.act_fn = ACT2FN[config.hidden_act]
+
+     def forward(self, x):
+         down_proj = self.w2(self.act_fn(self.w1(x)) * self.w3(x))
+         return down_proj
+
+
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
+     """
+     This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
+     num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
+     """
+     batch, num_key_value_heads, slen, head_dim = hidden_states.shape
+     if n_rep == 1:
+         return hidden_states
+     hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
+     return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
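`repeat_kv` duplicates each key/value head `n_rep` times *consecutively* along the head dimension (expand followed by reshape, matching `torch.repeat_interleave` on dim 1), so that grouped-query attention can reuse one KV head across several query heads. A list-based sketch of that ordering (the `repeat_kv_heads` helper is illustrative only):

```python
def repeat_kv_heads(kv_heads, n_rep):
    """List-based mirror of repeat_kv: each KV head is duplicated
    n_rep times consecutively along the head dimension."""
    out = []
    for head in kv_heads:
        out.extend([head] * n_rep)
    return out

heads = repeat_kv_heads(["k0", "k1"], 2)
```

The consecutive ordering matters: query head `h` attends with KV head `h // n_rep`, which is exactly what the expand-then-reshape in `repeat_kv` produces.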
+
235
+
236
+ class InternLM2Attention(nn.Module):
237
+ """Multi-headed attention from 'Attention Is All You Need' paper"""
238
+
239
+ def __init__(self, config: InternLM2Config, layer_idx: Optional[int] = None):
240
+ super().__init__()
241
+ self.config = config
242
+ self.layer_idx = layer_idx
243
+ if layer_idx is None:
244
+ logger.warning_once(
245
+ f"Instantiating {self.__class__.__name__} without passing a `layer_idx` is not recommended and will "
246
+ "lead to errors during the forward call if caching is used. Please make sure to provide a `layer_idx` "
247
+ "when creating this class."
248
+ )
249
+
250
+ self.hidden_size = config.hidden_size
251
+ self.num_heads = config.num_attention_heads
252
+ self.head_dim = self.hidden_size // self.num_heads
253
+ self.num_key_value_heads = config.num_key_value_heads
254
+ self.num_key_value_groups = self.num_heads // self.num_key_value_heads
255
+ self.max_position_embeddings = config.max_position_embeddings
256
+ self.rope_theta = config.rope_theta
257
+ self.is_causal = True
258
+
259
+ if (self.head_dim * self.num_heads) != self.hidden_size:
260
+ raise ValueError(
261
+ f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}"
262
+ f" and `num_heads`: {self.num_heads})."
263
+ )
264
+
265
+ self.wqkv = nn.Linear(
266
+ self.hidden_size,
267
+ (self.num_heads + 2 * self.num_key_value_heads) * self.head_dim,
268
+ bias=config.bias,
269
+ )
270
+ self.wo = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=config.bias)
271
+
272
+ self._init_rope()
273
+
274
+ def _init_rope(self):
275
+ if self.config.rope_scaling is None:
276
+ self.rotary_emb = InternLM2RotaryEmbedding(
277
+ self.head_dim,
278
+ max_position_embeddings=self.max_position_embeddings,
279
+ base=self.rope_theta,
280
+ )
281
+ else:
282
+ scaling_type = self.config.rope_scaling["type"]
283
+ scaling_factor = self.config.rope_scaling["factor"]
284
+ if scaling_type == "linear":
285
+ self.rotary_emb = InternLM2LinearScalingRotaryEmbedding(
286
+ self.head_dim,
287
+ max_position_embeddings=self.max_position_embeddings,
288
+ scaling_factor=scaling_factor,
289
+ base=self.rope_theta,
290
+ )
291
+ elif scaling_type == "dynamic":
292
+ self.rotary_emb = InternLM2DynamicNTKScalingRotaryEmbedding(
293
+ self.head_dim,
294
+ max_position_embeddings=self.max_position_embeddings,
295
+ scaling_factor=scaling_factor,
296
+ base=self.rope_theta,
297
+ )
298
+ else:
299
+ raise ValueError(f"Unknown RoPE scaling type {scaling_type}")
300
+
301
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_value: Optional[Cache] = None,
+         output_attentions: bool = False,
+         use_cache: bool = False,  # pylint: disable=unused-argument
+         cache_position: Optional[torch.LongTensor] = None,
+     ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
+         bsz, q_len, _ = hidden_states.size()
+
+         if self.config.pretraining_tp > 1:
+             # split qkv_states by tp size
+             key_value_slicing = (self.num_key_value_heads * self.head_dim) // self.config.pretraining_tp
+             qkv_slices = self.wqkv.weight.split(key_value_slicing, dim=0)
+             qkv_states = torch.cat(
+                 [F.linear(hidden_states, qkv_slice) for qkv_slice in qkv_slices], dim=-1  # pylint: disable=E1102
+             )
+         else:
+             qkv_states = self.wqkv(hidden_states)
+
+         qkv_states = rearrange(
+             qkv_states,
+             "b q (h gs d) -> b q h gs d",
+             gs=2 + self.num_key_value_groups,
+             d=self.head_dim,
+         )
+
+         query_states = qkv_states[..., : self.num_key_value_groups, :]
+         query_states = rearrange(query_states, "b q h gs d -> b q (h gs) d").transpose(1, 2)
+         key_states = qkv_states[..., -2, :].transpose(1, 2)
+         value_states = qkv_states[..., -1, :].transpose(1, 2)
+
+         # log_memory_usage("Generate Q,K,V Stage")
+
+         cos, sin = self.rotary_emb(value_states, position_ids)
+         query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
+
+         # log_memory_usage("Rotation matrix on Q,K")
+
+         if past_key_value is not None:
+             # sin and cos are specific to RoPE models; cache_position needed for the static cache
+             cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
+             key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
+
+         attn_weights = None
+         attn_output = None
+
+         # if(query_states.size(2) == 1):
+         key_states = repeat_kv(key_states, self.num_key_value_groups)
+         value_states = repeat_kv(value_states, self.num_key_value_groups)
+         attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)
+
+         if attention_mask is not None:  # no matter the length, we just slice it
+             causal_mask = attention_mask[:, :, :, : key_states.shape[-2]]
+             attn_weights = attn_weights + causal_mask
+
+         # upcast attention to fp32
+         attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype)
+         attn_output = torch.matmul(attn_weights, value_states)
+
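+         # Shape sketch (illustrative sizes, not taken from any particular config): with
+         # num_heads=32, num_key_value_heads=8, head_dim=128 (so num_key_value_groups=4),
+         # repeat_kv above expands K/V from (bsz, 8, kv_len, 128) to (bsz, 32, kv_len, 128),
+         # and attn_weights has shape (bsz, 32, q_len, kv_len).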
+         # else:
+         #     key_states = repeat_kv(key_states, self.num_key_value_groups)
+         #     value_states = repeat_kv(value_states, self.num_key_value_groups)
+
+         #     # Setup devices
+         #     os.environ['CUDA_VISIBLE_DEVICES'] = '0,1,2,3'
+         #     devices = [torch.device(f'cuda:{i}') for i in range(4)]
+
+         #     # Split query_states and attention_mask according to ratios
+         #     ratios = [1, 2, 3, 3]
+         #     total_size = query_states.size(2)
+         #     total_ratio = sum(ratios)
+         #     split_sizes = [total_size * ratio // total_ratio for ratio in ratios]
+         #     split_sizes[-1] += total_size - sum(split_sizes)
+
+         #     query_states_split = torch.split(query_states, split_sizes, dim=2)
+         #     attention_mask_split = torch.split(attention_mask, split_sizes, dim=2)
+
+         #     # Function to perform computations for each device
+         #     def compute_attention(i, device, query_states_split, attention_mask_split, key_states, value_states, head_dim):
+         #         query_states_i = query_states_split[i].to(device)
+         #         attention_mask_i = attention_mask_split[i].to(device)
+         #         attn_weights_i = torch.matmul(query_states_i, key_states.transpose(2, 3).to(device)) / math.sqrt(head_dim)
+
+         #         if attention_mask is not None:  # no matter the length, we just slice it
+         #             causal_mask_i = attention_mask_i[:, :, :, : key_states.shape[-2]]
+         #             attn_weights_i = attn_weights_i + causal_mask_i
+
+         #         # Upcast attention to fp32 and apply softmax
+         #         attn_weights_i = F.softmax(attn_weights_i, dim=-1, dtype=torch.float32).to(query_states_i.dtype)
+         #         attn_output_i = torch.matmul(attn_weights_i, value_states.to(device))
+
+         #         return attn_output_i
+
+         #     # List to store the output
+         #     attn_output_list = []
+
+         #     # Measure time for parallel execution
+         #     start_time = time.time()
+
+         #     # Use ThreadPoolExecutor to run the loop in parallel
+         #     with ThreadPoolExecutor(max_workers=len(devices)) as executor:
+         #         futures = []
+         #         for i, device in enumerate(devices):
+         #             futures.append(executor.submit(compute_attention, i, device, query_states_split, attention_mask_split, key_states, value_states, self.head_dim))
+
+         #         for future in futures:
+         #             attn_output_list.append(future.result())
+
+         #     end_time = time.time()
+
+         #     # Measure and print time
+         #     print(f'Time for Calculation {end_time-start_time:0.4f}')
+
+         #     # Gather results from GPUs and concatenate
+         #     attn_output_list = [attn_output.to(devices[1]) for attn_output in attn_output_list]
+         #     attn_output = torch.cat(attn_output_list, dim=2).to(devices[1])
+         #     print(f'Final Attention Outputs Size: {attn_output.size()}')
+
+         if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
+             raise ValueError(
+                 f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
+                 f" {attn_output.size()}"
+             )
+
+         attn_output = attn_output.transpose(1, 2).contiguous()
+         attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)
+
+         if self.config.pretraining_tp > 1:
+             attn_output = attn_output.split(self.hidden_size // self.config.pretraining_tp, dim=2)
+             o_proj_slices = self.wo.weight.split(self.hidden_size // self.config.pretraining_tp, dim=1)
+             attn_output = sum(
+                 [
+                     F.linear(attn_output[i], o_proj_slices[i])  # pylint: disable=E1102
+                     for i in range(self.config.pretraining_tp)
+                 ]
+             )
+         else:
+             attn_output = self.wo(attn_output)
+
+         if not output_attentions:
+             attn_weights = None
+
+         return attn_output, attn_weights, past_key_value
+
+
+ class InternLM2FlashAttention2(InternLM2Attention):
+     """
+     InternLM2 flash attention module. This module inherits from `InternLM2Attention` as the weights of the module stay
+     untouched. The only required change is on the forward pass, where it needs to correctly call the public API of
+     flash attention and deal with padding tokens in case the input contains any of them.
+     """
+
+     def __init__(self, *args, **kwargs):
+         super().__init__(*args, **kwargs)
+
+         # TODO: Should be removed once Flash Attention for RoCm is bumped to 2.1.
+         # flash_attn<2.1 generates a top-left aligned causal mask, while what is needed here is bottom-right alignment,
+         # which was made the default for flash_attn>=2.1. This attribute is used to handle this difference.
+         # Reference: https://github.com/Dao-AILab/flash-attention/releases/tag/v2.1.0.
+         # Beware that with flash_attn<2.1, using q_seqlen != k_seqlen (except for the case q_seqlen == 1)
+         # produces a wrong mask (top-left).
+         self._flash_attn_uses_top_left_mask = not is_flash_attn_greater_or_equal_2_10()
+
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         attention_mask: Optional[torch.LongTensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_value: Optional[Cache] = None,
+         output_attentions: bool = False,
+         use_cache: bool = False,
+         cache_position: Optional[torch.LongTensor] = None,
+     ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
+         if isinstance(past_key_value, StaticCache):
+             raise ValueError(
+                 "`static` cache implementation is not compatible with `attn_implementation==flash_attention_2`; "
+                 "make sure to use `sdpa` in the meantime, and open an issue at "
+                 "https://github.com/huggingface/transformers"
+             )
+
+         # FlashAttention does not return attention weights
+         output_attentions = False
+
+         bsz, q_len, _ = hidden_states.size()
+
+         qkv_states = self.wqkv(hidden_states)
+
+         qkv_states = rearrange(
+             qkv_states,
+             "b q (h gs d) -> b q h gs d",
+             gs=2 + self.num_key_value_groups,
+             d=self.head_dim,
+         )
+
+         query_states = qkv_states[..., : self.num_key_value_groups, :]
+         query_states = rearrange(query_states, "b q h gs d -> b q (h gs) d")
+         key_states = qkv_states[..., -2, :]
+         value_states = qkv_states[..., -1, :]
+
+         query_states = query_states.transpose(1, 2)
+         key_states = key_states.transpose(1, 2)
+         value_states = value_states.transpose(1, 2)
+
+         cos, sin = self.rotary_emb(value_states, position_ids)
+         query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
+
+         if past_key_value is not None:
+             # sin and cos are specific to RoPE models; cache_position needed for the static cache
+             cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
+             key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
+
+         # TODO: These transposes are quite inefficient but Flash Attention requires the layout
+         # [batch_size, sequence_length, num_heads, head_dim]. We would need to refactor the KV cache
+         # to be able to avoid many of these transpose/reshape/view.
+         query_states = query_states.transpose(1, 2)
+         key_states = key_states.transpose(1, 2)
+         value_states = value_states.transpose(1, 2)
+
+         # dropout_rate = self.attention_dropout if self.training else 0.0
+         dropout_rate = 0.0
+
+         # In PEFT, we usually cast the layer norms in float32 for training stability reasons,
+         # therefore the input hidden states get silently cast in float32. Hence, we need to
+         # cast them back to the correct dtype just to be sure everything works as expected.
+         # This might slow down training & inference, so it is recommended not to cast the
+         # LayerNorms in fp32. (InternLM2RMSNorm handles it correctly)
+         input_dtype = query_states.dtype
+         if input_dtype == torch.float32:
+             if torch.is_autocast_enabled():
+                 target_dtype = torch.get_autocast_gpu_dtype()
+             # Handle the case where the model is quantized
+             elif hasattr(self.config, "_pre_quantization_dtype"):
+                 target_dtype = self.config._pre_quantization_dtype
+             else:
+                 target_dtype = self.wqkv.weight.dtype
+
+             logger.warning_once(
+                 f"The input hidden states seem to be silently cast in float32; this might be related to"
+                 f" the fact that you have upcast embedding or layer norm layers in float32. We will cast back the"
+                 f" input in {target_dtype}."
+             )
+
+             query_states = query_states.to(target_dtype)
+             key_states = key_states.to(target_dtype)
+             value_states = value_states.to(target_dtype)
+
+         attn_output = self._flash_attention_forward(
+             query_states, key_states, value_states, attention_mask, q_len, dropout=dropout_rate
+         )
+
+         attn_output = attn_output.reshape(bsz, q_len, self.hidden_size).contiguous()
+         attn_output = self.wo(attn_output)
+
+         if not output_attentions:
+             attn_weights = None
+
+         return attn_output, attn_weights, past_key_value  # pylint: disable=E0606
+
+     def _flash_attention_forward(
+         self, query_states, key_states, value_states, attention_mask, query_length, dropout=0.0, softmax_scale=None
+     ):
+         """
+         Calls the forward method of Flash Attention - if the input hidden states contain at least one padding token,
+         first unpads the input, then computes the attention scores, and finally pads the attention scores back.
+
+         Args:
+             query_states (`torch.Tensor`):
+                 Input query states to be passed to Flash Attention API
+             key_states (`torch.Tensor`):
+                 Input key states to be passed to Flash Attention API
+             value_states (`torch.Tensor`):
+                 Input value states to be passed to Flash Attention API
+             attention_mask (`torch.Tensor`):
+                 The padding mask - corresponds to a tensor of size `(batch_size, seq_len)` where 0 stands for the
+                 position of padding tokens and 1 for the position of non-padding tokens.
+             query_length (`int`):
+                 The length of the query sequence in the current forward pass.
+             dropout (`float`):
+                 Attention dropout
+             softmax_scale (`float`, *optional*):
+                 The scaling of QK^T before applying softmax. Defaults to 1 / sqrt(head_dim)
+         """
+         if not self._flash_attn_uses_top_left_mask:
+             causal = self.is_causal
+         else:
+             # TODO: Remove the `query_length != 1` check once Flash Attention for RoCm is bumped to 2.1.
+             # For details, please see the comment in InternLM2FlashAttention2 __init__.
+             causal = self.is_causal and query_length != 1
+
+         # Contains at least one padding token in the sequence
+         if attention_mask is not None:
+             batch_size = query_states.shape[0]
+             query_states, key_states, value_states, indices_q, cu_seq_lens, max_seq_lens = self._upad_input(
+                 query_states, key_states, value_states, attention_mask, query_length
+             )
+
+             cu_seqlens_q, cu_seqlens_k = cu_seq_lens
+             max_seqlen_in_batch_q, max_seqlen_in_batch_k = max_seq_lens
+
+             attn_output_unpad = flash_attn_varlen_func(  # pylint: disable=E0606
+                 query_states,
+                 key_states,
+                 value_states,
+                 cu_seqlens_q=cu_seqlens_q,
+                 cu_seqlens_k=cu_seqlens_k,
+                 max_seqlen_q=max_seqlen_in_batch_q,
+                 max_seqlen_k=max_seqlen_in_batch_k,
+                 dropout_p=dropout,
+                 softmax_scale=softmax_scale,
+                 causal=causal,
+             )
+
+             attn_output = pad_input(attn_output_unpad, indices_q, batch_size, query_length)  # pylint: disable=E0606
+         else:
+             attn_output = flash_attn_func(  # pylint: disable=E0606
+                 query_states, key_states, value_states, dropout, softmax_scale=softmax_scale, causal=causal
+             )
+
+         return attn_output
+
+     def _upad_input(self, query_layer, key_layer, value_layer, attention_mask, query_length):
+         indices_k, cu_seqlens_k, max_seqlen_in_batch_k = _get_unpad_data(attention_mask)
+         batch_size, kv_seq_len, num_key_value_heads, head_dim = key_layer.shape
+
+         key_layer = index_first_axis(  # pylint: disable=E0606
+             key_layer.reshape(batch_size * kv_seq_len, num_key_value_heads, head_dim), indices_k
+         )
+         value_layer = index_first_axis(  # pylint: disable=E0606
+             value_layer.reshape(batch_size * kv_seq_len, num_key_value_heads, head_dim), indices_k
+         )
+         if query_length == kv_seq_len:
+             query_layer = index_first_axis(  # pylint: disable=E0606
+                 query_layer.reshape(batch_size * kv_seq_len, self.num_heads, head_dim), indices_k
+             )
+             cu_seqlens_q = cu_seqlens_k
+             max_seqlen_in_batch_q = max_seqlen_in_batch_k
+             indices_q = indices_k
+         elif query_length == 1:
+             max_seqlen_in_batch_q = 1
+             cu_seqlens_q = torch.arange(
+                 batch_size + 1, dtype=torch.int32, device=query_layer.device
+             )  # There is a memcpy here, that is very bad.
+             indices_q = cu_seqlens_q[:-1]
+             query_layer = query_layer.squeeze(1)
+         else:
+             # The -q_len: slice assumes left padding.
+             attention_mask = attention_mask[:, -query_length:]
+             query_layer, indices_q, cu_seqlens_q, max_seqlen_in_batch_q = unpad_input(  # pylint: disable=E0606
+                 query_layer, attention_mask
+             )
+
+         return (
+             query_layer,
+             key_layer,
+             value_layer,
+             indices_q,
+             (cu_seqlens_q, cu_seqlens_k),
+             (max_seqlen_in_batch_q, max_seqlen_in_batch_k),
+         )
+
+
+ # Copied from transformers.models.llama.modeling_llama.LlamaSdpaAttention with Llama->InternLM2
+ class InternLM2SdpaAttention(InternLM2Attention):
+     """
+     InternLM2 attention module using torch.nn.functional.scaled_dot_product_attention. This module inherits from
+     `InternLM2Attention` as the weights of the module stay untouched. The only changes are on the forward pass
+     to adapt to the SDPA API.
+     """
+
+     # Adapted from InternLM2Attention.forward
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_value: Optional[Cache] = None,
+         output_attentions: bool = False,
+         use_cache: bool = False,
+         cache_position: Optional[torch.LongTensor] = None,
+     ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
+         if output_attentions:
+             # TODO: Improve this warning with e.g. `model.config.attn_implementation = "manual"`
+             # once this is implemented.
+             logger.warning_once(
+                 "InternLM2Model uses InternLM2SdpaAttention, but `torch.nn.functional.scaled_dot_product_attention` "
+                 "does not support `output_attentions=True`. Falling back to the manual attention implementation, "
+                 "but specifying the manual implementation will be required from Transformers version v5.0.0 onwards. "
+                 'This warning can be removed using the argument `attn_implementation="eager"` when loading the model.'
+             )
+             return super().forward(
+                 hidden_states=hidden_states,
+                 attention_mask=attention_mask,
+                 position_ids=position_ids,
+                 past_key_value=past_key_value,
+                 output_attentions=output_attentions,
+                 use_cache=use_cache,
+                 cache_position=cache_position,
+             )
+
+         bsz, q_len, _ = hidden_states.size()
+
+         qkv_states = self.wqkv(hidden_states)
+
+         qkv_states = rearrange(
+             qkv_states,
+             "b q (h gs d) -> b q h gs d",
+             gs=2 + self.num_key_value_groups,
+             d=self.head_dim,
+         )
+
+         query_states = qkv_states[..., : self.num_key_value_groups, :]
+         query_states = rearrange(query_states, "b q h gs d -> b q (h gs) d")
+         key_states = qkv_states[..., -2, :]
+         value_states = qkv_states[..., -1, :]
+
+         query_states = query_states.transpose(1, 2)
+         key_states = key_states.transpose(1, 2)
+         value_states = value_states.transpose(1, 2)
+
+         cos, sin = self.rotary_emb(value_states, position_ids)
+         query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
+
+         if past_key_value is not None:
+             # sin and cos are specific to RoPE models; cache_position needed for the static cache
+             cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
+             key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
+
+         key_states = repeat_kv(key_states, self.num_key_value_groups)
+         value_states = repeat_kv(value_states, self.num_key_value_groups)
+
+         causal_mask = attention_mask
+         if attention_mask is not None:
+             causal_mask = causal_mask[:, :, :, : key_states.shape[-2]]
+
+         # SDPA with memory-efficient backend is currently (torch==2.1.2) bugged with non-contiguous inputs and a
+         # custom attn_mask. Reference: https://github.com/pytorch/pytorch/issues/112577.
+         if query_states.device.type == "cuda" and causal_mask is not None:
+             query_states = query_states.contiguous()
+             key_states = key_states.contiguous()
+             value_states = value_states.contiguous()
+
+         # We dispatch to SDPA's Flash Attention or Efficient kernels via this `is_causal` if statement instead of
+         # an inline conditional assignment in SDPA to support both torch.compile's dynamic shapes and full graph
+         # options. An inline conditional prevents dynamic shapes from compiling.
+         is_causal = bool(causal_mask is None and q_len > 1)
+
+         attn_output = torch.nn.functional.scaled_dot_product_attention(  # pylint: disable=E1102
+             query_states,
+             key_states,
+             value_states,
+             attn_mask=causal_mask,
+             dropout_p=0.0,
+             is_causal=is_causal,
+         )
+
+         attn_output = attn_output.transpose(1, 2).contiguous()
+         attn_output = attn_output.view(bsz, q_len, self.hidden_size)
+
+         attn_output = self.wo(attn_output)
+
+         return attn_output, None, past_key_value
+
+
+ INTERNLM2_ATTENTION_CLASSES = {
+     "eager": InternLM2Attention,
+     "flash_attention_2": InternLM2FlashAttention2,
+     "sdpa": InternLM2SdpaAttention,
+ }
+
+
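+ # For example (illustrative, assuming a recent transformers release), loading with
+ # `AutoModelForCausalLM.from_pretrained(model_id, attn_implementation="sdpa", trust_remote_code=True)`
+ # sets `config.attn_implementation` so that each decoder layer below picks
+ # `InternLM2SdpaAttention` from this mapping.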
+ # Modified from transformers.models.llama.modeling_llama.LlamaDecoderLayer with Llama->InternLM2
+ class InternLM2DecoderLayer(nn.Module):
+     """InternLM2 Decoder Layer. This module is a single layer of the InternLM2 model."""
+
+     def __init__(self, config: InternLM2Config, layer_idx: int):
+         super().__init__()
+         self.hidden_size = config.hidden_size
+         self.layer_idx = layer_idx
+
+         self.attention = INTERNLM2_ATTENTION_CLASSES[config.attn_implementation](config=config, layer_idx=layer_idx)
+
+         self.feed_forward = InternLM2MLP(config)
+         self.attention_norm = InternLM2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+         self.ffn_norm = InternLM2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+
+     def forward(
+         self,
+         hidden_states: torch.Tensor,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_value: Optional[Cache] = None,
+         output_attentions: Optional[bool] = False,
+         use_cache: Optional[bool] = False,
+         cache_position: Optional[torch.LongTensor] = None,
+     ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
+         """
+         Args:
+             hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)`
+             attention_mask (`torch.FloatTensor`, *optional*):
+                 attention mask of size `(batch_size, sequence_length)` if flash attention is used or `(batch_size, 1,
+                 query_sequence_length, key_sequence_length)` if default attention is used.
+             output_attentions (`bool`, *optional*):
+                 Whether or not to return the attentions tensors of all attention layers. See `attentions` under
+                 returned tensors for more detail.
+             use_cache (`bool`, *optional*):
+                 If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
+                 (see `past_key_values`).
+             past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
+         """
+         residual = hidden_states
+
+         hidden_states = self.attention_norm(hidden_states)
+
+         # Self Attention
+         hidden_states, self_attn_weights, present_key_value = self.attention(
+             hidden_states=hidden_states,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_value=past_key_value,
+             output_attentions=output_attentions,
+             use_cache=use_cache,
+             cache_position=cache_position,
+         )
+         hidden_states = residual + hidden_states
+
+         # Fully Connected
+         residual = hidden_states
+         hidden_states = self.ffn_norm(hidden_states)
+         hidden_states = self.feed_forward(hidden_states)
+         hidden_states = residual + hidden_states
+
+         outputs = (hidden_states,)
+
+         if output_attentions:
+             outputs += (self_attn_weights,)
+
+         if use_cache:
+             outputs += (present_key_value,)
+
+         return outputs
+
+
+ InternLM2_START_DOCSTRING = r"""
+     This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
+     library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning
+     heads, etc.)
+
+     This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
+     Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
+     and behavior.
+
+     Parameters:
+         config ([`InternLM2Config`]):
+             Model configuration class with all the parameters of the model. Initializing with a config file does not
+             load the weights associated with the model, only the configuration. Check out the
+             [`~PreTrainedModel.from_pretrained`] method to load the model weights.
+ """
+
+
+ # Copied from transformers.models.llama.modeling_llama.LlamaPreTrainedModel with Llama->InternLM2
+ @add_start_docstrings(
+     "The bare InternLM2 Model outputting raw hidden-states without any specific head on top.",
+     InternLM2_START_DOCSTRING,
+ )
+ class InternLM2PreTrainedModel(PreTrainedModel):
+     """
+     InternLM2 pretrained model's base class.
+     """
+
+     config_class = InternLM2Config
+     base_model_prefix = "model"
+     supports_gradient_checkpointing = True
+     _no_split_modules = ["InternLM2DecoderLayer"]
+     _skip_keys_device_placement = ["past_key_values"]
+     _supports_flash_attn_2 = True
+     _supports_sdpa = True
+     _supports_cache_class = True
+     _supports_quantized_cache = True
+     _supports_static_cache = True
+
+     def _init_weights(self, module):
+         std = self.config.initializer_range
+         if isinstance(module, nn.Linear):
+             module.weight.data.normal_(mean=0.0, std=std)
+             if module.bias is not None:
+                 module.bias.data.zero_()
+         elif isinstance(module, nn.Embedding):
+             module.weight.data.normal_(mean=0.0, std=std)
+             if module.padding_idx is not None:
+                 module.weight.data[module.padding_idx].zero_()
+
+
+ InternLM2_INPUTS_DOCSTRING = r"""
+     Args:
+         input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
+             Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
+             it.
+
+             Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
+             [`PreTrainedTokenizer.__call__`] for details.
+
+             [What are input IDs?](../glossary#input-ids)
+         attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
+             Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
+
+             - 1 for tokens that are **not masked**,
+             - 0 for tokens that are **masked**.
+
+             [What are attention masks?](../glossary#attention-mask)
+
+             Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
+             [`PreTrainedTokenizer.__call__`] for details.
+
+             If `past_key_values` is used, optionally only the last `input_ids` have to be input (see
+             `past_key_values`).
+
+             If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
+             and modify it to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
+             information on the default strategy.
+
+             - 1 indicates the head is **not masked**,
+             - 0 indicates the head is **masked**.
+         position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
+             Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0,
+             config.n_positions - 1]`.
+
+             [What are position IDs?](../glossary#position-ids)
+         past_key_values (`Cache` or `tuple(tuple(torch.FloatTensor))`, *optional*):
+             Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
+             blocks) that can be used to speed up sequential decoding. This typically consists of the `past_key_values`
+             returned by the model at a previous stage of decoding, when `use_cache=True` or `config.use_cache=True`.
+
+             Two formats are allowed:
+             - a [`~cache_utils.Cache`] instance;
+             - Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of
+               shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`. This is also known as the legacy
+               cache format.
+
+             The model will output the same cache format that is fed as input. If no `past_key_values` are passed, the
+             legacy cache format will be returned.
+
+             If `past_key_values` are used, the user can optionally input only the last `input_ids` (those that don't
+             have their past key value states given to this model) of shape `(batch_size, 1)` instead of all
+             `input_ids` of shape `(batch_size, sequence_length)`.
+         inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
+             Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation.
+             This is useful if you want more control over how to convert `input_ids` indices into associated vectors
+             than the model's internal embedding lookup matrix.
+         use_cache (`bool`, *optional*):
+             If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
+             (see `past_key_values`).
+         output_attentions (`bool`, *optional*):
+             Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
+             tensors for more detail.
+         output_hidden_states (`bool`, *optional*):
+             Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
+             more detail.
+         return_dict (`bool`, *optional*):
+             Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
+         cache_position (`torch.LongTensor` of shape `(sequence_length)`, *optional*):
+             Indices depicting the position of the input sequence tokens in the sequence. Contrary to `position_ids`,
+             this tensor is not affected by padding. It is used to update the cache in the correct position and to
+             infer the complete sequence length.
+ """
+
+
+ # Modified from transformers.models.llama.modeling_llama.LlamaModel with Llama->InternLM2
1038
+ @add_start_docstrings(
1039
+ "The bare InternLM2 Model outputting raw hidden-states without any specific head on top.",
1040
+ InternLM2_START_DOCSTRING,
1041
+ )
1042
+ class InternLM2Model(InternLM2PreTrainedModel):
+     """
+     Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`InternLM2DecoderLayer`]
+
+     Args:
+         config: InternLM2Config
+     """
+
+     _auto_class = "AutoModel"
+
+     def __init__(self, config: InternLM2Config):
+         super().__init__(config)
+         self.padding_idx = config.pad_token_id
+         self.vocab_size = config.vocab_size
+         self.config = config
+
+         self.tok_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
+
+         self.layers = nn.ModuleList(
+             [InternLM2DecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
+         )
+         self.norm = InternLM2RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
+
+         self.gradient_checkpointing = False
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     def get_input_embeddings(self):
+         return self.tok_embeddings
+
+     def set_input_embeddings(self, value):
+         self.tok_embeddings = value
+
+     @add_start_docstrings_to_model_forward(InternLM2_INPUTS_DOCSTRING)
+     def forward(
+         self,
+         input_ids: torch.LongTensor = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+     ) -> Union[Tuple, BaseModelOutputWithPast]:
+         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+         output_hidden_states = (
+             output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+         )
+         use_cache = use_cache if use_cache is not None else self.config.use_cache
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         if (input_ids is None) ^ (inputs_embeds is not None):
+             raise ValueError(
+                 "You cannot specify both input_ids and inputs_embeds at the same time, and must specify either one"
+             )
+
+         if self.gradient_checkpointing and self.training and use_cache:
+             logger.warning_once(
+                 "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`."
+             )
+             use_cache = False
+
+         if inputs_embeds is None:
+             inputs_embeds = self.tok_embeddings(input_ids)
+
+         return_legacy_cache = False
+         if use_cache and not isinstance(past_key_values, Cache):  # kept for BC (non `Cache` `past_key_values` inputs)
+             return_legacy_cache = True
+             past_key_values = DynamicCache.from_legacy_cache(past_key_values)
+
+         if cache_position is None:
+             past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
+             cache_position = torch.arange(
+                 past_seen_tokens, past_seen_tokens + inputs_embeds.shape[1], device=inputs_embeds.device
+             )
+         if position_ids is None:
+             position_ids = cache_position.unsqueeze(0)
+
+         causal_mask = self._update_causal_mask(
+             attention_mask, inputs_embeds, cache_position, past_key_values, output_attentions
+         )
+
+         # embed positions
+         hidden_states = inputs_embeds
+
+         # decoder layers
+         all_hidden_states = () if output_hidden_states else None
+         all_self_attns = () if output_attentions else None
+         next_decoder_cache = None
+
+         for decoder_layer in self.layers:
+             if output_hidden_states:
+                 all_hidden_states += (hidden_states,)
+
+             if self.gradient_checkpointing and self.training:
+                 layer_outputs = self._gradient_checkpointing_func(
+                     decoder_layer.__call__,
+                     hidden_states,
+                     causal_mask,
+                     position_ids,
+                     past_key_values,
+                     output_attentions,
+                     use_cache,
+                     cache_position,
+                 )
+             else:
+                 layer_outputs = decoder_layer(
+                     hidden_states,
+                     attention_mask=causal_mask,
+                     position_ids=position_ids,
+                     past_key_value=past_key_values,
+                     output_attentions=output_attentions,
+                     use_cache=use_cache,
+                     cache_position=cache_position,
+                 )
+
+             hidden_states = layer_outputs[0]
+
+             if use_cache:
+                 next_decoder_cache = layer_outputs[2 if output_attentions else 1]
+
+             if output_attentions:
+                 all_self_attns += (layer_outputs[1],)
+
+         hidden_states = self.norm(hidden_states)
+
+         # add hidden states from the last decoder layer
+         if output_hidden_states:
+             all_hidden_states += (hidden_states,)
+
+         next_cache = next_decoder_cache if use_cache else None
+         if return_legacy_cache:
+             next_cache = next_cache.to_legacy_cache()
+
+         if not return_dict:
+             return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None)
+         return BaseModelOutputWithPast(
+             last_hidden_state=hidden_states,
+             past_key_values=next_cache,
+             hidden_states=all_hidden_states,
+             attentions=all_self_attns,
+         )
+
+     def _update_causal_mask(
+         self,
+         attention_mask: torch.Tensor,
+         input_tensor: torch.Tensor,
+         cache_position: torch.Tensor,
+         past_key_values: Cache,
+         output_attentions: bool,
+     ):
+         # TODO: As of torch==2.2.0, the `attention_mask` passed to the model in `generate` is 2D and of dynamic length
+         # even when the static KV cache is used. This is an issue for torch.compile which then recaptures cudagraphs at
+         # each decode step due to the dynamic shapes. (`recording cudagraph tree for symint key 13`, etc.), which is
+         # VERY slow. A workaround is `@torch.compiler.disable`, but this prevents using `fullgraph=True`.
+         # See more context in https://github.com/huggingface/transformers/pull/29114
+
+         if self.config.attn_implementation == "flash_attention_2":
+             if attention_mask is not None and 0.0 in attention_mask:
+                 return attention_mask
+             return None
+
+         # For SDPA, when possible, we will rely on its `is_causal` argument instead of its `attn_mask` argument, in
+         # order to dispatch on Flash Attention 2. This feature is not compatible with static cache, as SDPA will fail
+         # to infer the attention mask.
+         past_seen_tokens = past_key_values.get_seq_length() if past_key_values is not None else 0
+         using_static_cache = isinstance(past_key_values, StaticCache)
+
+         # When output_attentions is True, the sdpa implementation's forward method calls the eager implementation's forward
+         if self.config.attn_implementation == "sdpa" and not using_static_cache and not output_attentions:
+             if AttentionMaskConverter._ignore_causal_mask_sdpa(
+                 attention_mask,
+                 inputs_embeds=input_tensor,
+                 past_key_values_length=past_seen_tokens,
+                 is_training=self.training,
+             ):
+                 return None
+
+         dtype, device = input_tensor.dtype, input_tensor.device
+         min_dtype = torch.finfo(dtype).min
+         sequence_length = input_tensor.shape[1]
+         if using_static_cache:
+             target_length = past_key_values.get_max_length()
+         else:
+             target_length = (
+                 attention_mask.shape[-1]
+                 if isinstance(attention_mask, torch.Tensor)
+                 else past_seen_tokens + sequence_length + 1
+             )
+
+         if attention_mask is not None and attention_mask.dim() == 4:
+             # in this case we assume that the mask comes already in inverted form and requires no inversion or slicing
+             if attention_mask.max() != 0:
+                 raise ValueError("Custom 4D attention mask should be passed in inverted form with max==0")
+             causal_mask = attention_mask
+         else:
+             causal_mask = torch.full((sequence_length, target_length), fill_value=min_dtype, dtype=dtype, device=device)
+             if sequence_length != 1:
+                 causal_mask = torch.triu(causal_mask, diagonal=1)
+             causal_mask *= torch.arange(target_length, device=device) > cache_position.reshape(-1, 1)
+             causal_mask = causal_mask[None, None, :, :].expand(input_tensor.shape[0], 1, -1, -1)
+             if attention_mask is not None:
+                 causal_mask = causal_mask.clone()  # copy to contiguous memory for in-place edit
+                 mask_length = attention_mask.shape[-1]
+                 padding_mask = causal_mask[:, :, :, :mask_length] + attention_mask[:, None, None, :]
+                 padding_mask = padding_mask == 0
+                 causal_mask[:, :, :, :mask_length] = causal_mask[:, :, :, :mask_length].masked_fill(
+                     padding_mask, min_dtype
+                 )
+         if (
+             self.config.attn_implementation == "sdpa"
+             and attention_mask is not None
+             and attention_mask.device.type == "cuda"
+             and not output_attentions
+         ):
+             # Attend to all tokens in fully masked rows in the causal_mask, for example the relevant first rows when
+             # using left padding. This is required by F.scaled_dot_product_attention memory-efficient attention path.
+             # Details: https://github.com/pytorch/pytorch/issues/110213
+             causal_mask = AttentionMaskConverter._unmask_unattended(causal_mask, min_dtype)  # pylint: disable=E1120
+
+         return causal_mask
+
+
+ # Modified from transformers.models.llama.modeling_llama.LlamaForCausalLM
+ class InternLM2ForCausalLM(InternLM2PreTrainedModel):
+     """Causal language model (CLM) for InternLM2."""
+
+     _auto_class = "AutoModelForCausalLM"
+     _tied_weights_keys = ["output.weight"]
+
+     def __init__(self, config):
+         super().__init__(config)
+         self.model = InternLM2Model(config)
+         self.vocab_size = config.vocab_size
+         self.output = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     def get_input_embeddings(self):
+         return self.model.tok_embeddings
+
+     def set_input_embeddings(self, value):
+         self.model.tok_embeddings = value
+
+     def get_output_embeddings(self):
+         return self.output
+
+     def set_output_embeddings(self, new_embeddings):
+         self.output = new_embeddings
+
+     def set_decoder(self, decoder):
+         self.model = decoder
+
+     def get_decoder(self):
+         return self.model
+
+     @add_start_docstrings_to_model_forward(InternLM2_INPUTS_DOCSTRING)
+     @replace_return_docstrings(output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
+     def forward(
+         self,
+         input_ids: torch.LongTensor = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         labels: Optional[torch.LongTensor] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+         cache_position: Optional[torch.LongTensor] = None,
+     ) -> Union[Tuple, CausalLMOutputWithPast]:
+         r"""
+         Args:
+             labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
+                 Labels for computing the masked language modeling loss. Indices should either be in `[0, ...,
+                 config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
+                 (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
+
+         Returns:
+
+         Example:
+
+         ```python
+         >>> from transformers import AutoTokenizer, InternLM2ForCausalLM
+
+         >>> model = InternLM2ForCausalLM.from_pretrained("internlm/internlm2-7b")
+         >>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-7b")
+
+         >>> prompt = "Hey, are you conscious? Can you talk to me?"
+         >>> inputs = tokenizer(prompt, return_tensors="pt")
+
+         >>> # Generate
+         >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
+         >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
+         "Hey, are you conscious? Can you talk to me?\nI'm not conscious, but I can talk to you."
+         ```"""
+
+         output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
+         output_hidden_states = (
+             output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
+         )
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         # decoder outputs consist of (dec_features, layer_state, dec_hidden, dec_attn)
+         outputs = self.model(
+             input_ids=input_ids,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_values=past_key_values,
+             inputs_embeds=inputs_embeds,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+             cache_position=cache_position,
+         )
+
+         hidden_states = outputs[0]
+         if self.config.pretraining_tp > 1:
+             output_slices = self.output.weight.split(self.vocab_size // self.config.pretraining_tp, dim=0)
+             logits = [
+                 F.linear(hidden_states, output_slices[i])  # pylint: disable=not-callable
+                 for i in range(self.config.pretraining_tp)
+             ]
+             logits = torch.cat(logits, dim=-1)
+         else:
+             logits = self.output(hidden_states)
+         logits = logits.float()
+
+         loss = None
+         if labels is not None:
+             # Shift so that tokens < n predict n
+             shift_logits = logits[..., :-1, :].contiguous()
+             shift_labels = labels[..., 1:].contiguous()
+             # Flatten the tokens
+             loss_fct = CrossEntropyLoss()
+             shift_logits = shift_logits.view(-1, self.config.vocab_size)
+             shift_labels = shift_labels.view(-1)
+             # Enable model parallelism
+             shift_labels = shift_labels.to(shift_logits.device)
+             loss = loss_fct(shift_logits, shift_labels)
+
+         if not return_dict:
+             output = (logits,) + outputs[1:]
+             return (loss,) + output if loss is not None else output
+
+         return CausalLMOutputWithPast(
+             loss=loss,
+             logits=logits,
+             past_key_values=outputs.past_key_values,
+             hidden_states=outputs.hidden_states,
+             attentions=outputs.attentions,
+         )
+
+     def prepare_inputs_for_generation(
+         self,
+         input_ids,
+         past_key_values=None,
+         attention_mask=None,
+         inputs_embeds=None,
+         cache_position=None,
+         use_cache=True,
+         **kwargs,
+     ):
+         past_length = 0
+         if past_key_values is not None:
+             if isinstance(past_key_values, Cache):
+                 past_length = cache_position[0] if cache_position is not None else past_key_values.get_seq_length()
+                 max_cache_length = (
+                     torch.tensor(past_key_values.get_max_length(), device=input_ids.device)
+                     if past_key_values.get_max_length() is not None
+                     else None
+                 )
+                 cache_length = past_length if max_cache_length is None else torch.min(max_cache_length, past_length)
+             # TODO joao: remove this `else` after `generate` prioritizes `Cache` objects
+             else:
+                 cache_length = past_length = past_key_values[0][0].shape[2]
+                 max_cache_length = None
+
+             # Keep only the unprocessed tokens:
+             # 1 - If the length of the attention_mask exceeds the length of input_ids, then we are in a setting where
+             # some of the inputs are exclusively passed as part of the cache (e.g. when passing input_embeds as input)
+             if attention_mask is not None and attention_mask.shape[1] > input_ids.shape[1]:
+                 input_ids = input_ids[:, -(attention_mask.shape[1] - past_length) :]
+             # 2 - If the past_length is smaller than input_ids', then input_ids holds all input tokens. We can discard
+             # input_ids based on the past_length.
+             elif past_length < input_ids.shape[1]:
+                 input_ids = input_ids[:, past_length:]
+             # 3 - Otherwise (past_length >= input_ids.shape[1]), let's assume input_ids only has unprocessed tokens.
+
+             # If we are about to go beyond the maximum cache length, we need to crop the input attention mask.
+             if (
+                 max_cache_length is not None
+                 and attention_mask is not None
+                 and cache_length + input_ids.shape[1] > max_cache_length
+             ):
+                 attention_mask = attention_mask[:, -max_cache_length:]  # pylint: disable=E1130
+
+         position_ids = kwargs.get("position_ids", None)
+         if attention_mask is not None and position_ids is None:
+             # create position_ids on the fly for batch generation
+             position_ids = attention_mask.long().cumsum(-1) - 1
+             position_ids.masked_fill_(attention_mask == 0, 1)
+             if past_key_values:
+                 position_ids = position_ids[:, -input_ids.shape[1] :]
+
+         # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
+         if inputs_embeds is not None and past_key_values is None:
+             model_inputs = {"inputs_embeds": inputs_embeds}
+         else:
+             # The `contiguous()` here is necessary to have a static stride during decoding. torchdynamo otherwise
+             # recompiles graphs as the stride of the inputs is a guard.
+             # Ref: https://github.com/huggingface/transformers/pull/29114
+             # TODO: use `next_tokens` directly instead.
+             model_inputs = {"input_ids": input_ids.contiguous()}
+
+         input_length = position_ids.shape[-1] if position_ids is not None else input_ids.shape[-1]
+         if cache_position is None:
+             cache_position = torch.arange(past_length, past_length + input_length, device=input_ids.device)
+         elif use_cache:
+             cache_position = cache_position[-input_length:]
+
+         model_inputs.update(
+             {
+                 "position_ids": position_ids,
+                 "cache_position": cache_position,
+                 "past_key_values": past_key_values,
+                 "use_cache": use_cache,
+                 "attention_mask": attention_mask,
+             }
+         )
+         return model_inputs
+
+     @staticmethod
+     def _reorder_cache(past_key_values, beam_idx):
+         reordered_past = ()
+         for layer_past in past_key_values:
+             reordered_past += (
+                 tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past),
+             )
+         return reordered_past
+
+     def build_inputs(self, tokenizer, query: str, history: Optional[List[Tuple[str, str]]] = None, meta_instruction=""):
+         if history is None:
+             history = []
+         if tokenizer.add_bos_token:
+             prompt = ""
+         else:
+             prompt = tokenizer.bos_token
+         if meta_instruction:
+             prompt += f"""<|im_start|>system\n{meta_instruction}<|im_end|>\n"""
+         for record in history:
+             prompt += f"""<|im_start|>user\n{record[0]}<|im_end|>\n<|im_start|>assistant\n{record[1]}<|im_end|>\n"""
+         prompt += f"""<|im_start|>user\n{query}<|im_end|>\n<|im_start|>assistant\n"""
+         return tokenizer([prompt], return_tensors="pt")
+
+     @torch.no_grad()
+     def chat(
+         self,
+         tokenizer,
+         query: str,
+         history: Optional[List[Tuple[str, str]]] = None,
+         streamer: Optional[BaseStreamer] = None,
+         max_new_tokens: int = 1024,
+         do_sample: bool = True,
+         temperature: float = 0.8,
+         top_p: float = 0.8,
+         meta_instruction: str = "You are an AI assistant whose name is InternLM (书生·浦语).\n"
+         "- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory "
+         "(上海人工智能实验室). It is designed to be helpful, honest, and harmless.\n"
+         "- InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such "
+         "as English and 中文.",
+         **kwargs,
+     ):
+         if history is None:
+             history = []
+         inputs = self.build_inputs(tokenizer, query, history, meta_instruction)
+         inputs = {k: v.to(self.device) for k, v in inputs.items() if torch.is_tensor(v)}
+         # also add the end-of-assistant token to the eos token ids to avoid unnecessary generation
+         eos_token_id = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids(["<|im_end|>"])[0]]
+         outputs = self.generate(
+             **inputs,
+             streamer=streamer,
+             max_new_tokens=max_new_tokens,
+             do_sample=do_sample,
+             temperature=temperature,
+             top_p=top_p,
+             eos_token_id=eos_token_id,
+             **kwargs,
+         )
+         outputs = outputs[0].cpu().tolist()[len(inputs["input_ids"][0]) :]
+         response = tokenizer.decode(outputs, skip_special_tokens=True)
+         response = response.split("<|im_end|>")[0]
+         history = history + [(query, response)]
+         return response, history
+
+     @torch.no_grad()
+     def stream_chat(
+         self,
+         tokenizer,
+         query: str,
+         history: Optional[List[Tuple[str, str]]] = None,
+         max_new_tokens: int = 1024,
+         do_sample: bool = True,
+         temperature: float = 0.8,
+         top_p: float = 0.8,
+         **kwargs,
+     ):
+         """
+         Return a generator in format: (response, history)
+         E.g.
+         ('你好,有什么可以帮助您的吗', [('你好', '你好,有什么可以帮助您的吗')])
+         ('你好,有什么可以帮助您的吗?', [('你好', '你好,有什么可以帮助您的吗?')])
+         """
+         if history is None:
+             history = []
+         if BaseStreamer is None:
+             raise ModuleNotFoundError(
+                 "The version of `transformers` is too low. Please make sure "
+                 "that you have installed `transformers>=4.28.0`."
+             )
+
+         response_queue = queue.Queue(maxsize=20)
+
+         class ChatStreamer(BaseStreamer):
+             """
+             Streamer used in generate to print words one by one.
+             """
+
+             def __init__(self, tokenizer) -> None:
+                 super().__init__()
+                 self.tokenizer = tokenizer
+                 self.queue = response_queue
+                 self.query = query
+                 self.history = history
+                 self.response = ""
+                 self.cache = []
+                 self.received_inputs = False
+                 self.queue.put((self.response, history + [(self.query, self.response)]))
+
+             def put(self, value):
+                 if len(value.shape) > 1 and value.shape[0] > 1:
+                     raise ValueError("ChatStreamer only supports batch size 1")
+                 elif len(value.shape) > 1:
+                     value = value[0]
+
+                 if not self.received_inputs:
+                     # The first received value is input_ids, ignore here
+                     self.received_inputs = True
+                     return
+
+                 self.cache.extend(value.tolist())
+                 token = self.tokenizer.decode(self.cache, skip_special_tokens=True)
+                 if token.strip() != "<|im_end|>":
+                     self.response = self.response + token
+                     history = self.history + [(self.query, self.response)]
+                     self.queue.put((self.response, history))
+                     self.cache = []
+                 else:
+                     self.end()
+
+             def end(self):
+                 self.queue.put(None)
+
+         def stream_producer():
+             return self.chat(
+                 tokenizer=tokenizer,
+                 query=query,
+                 streamer=ChatStreamer(tokenizer=tokenizer),
+                 history=history,
+                 max_new_tokens=max_new_tokens,
+                 do_sample=do_sample,
+                 temperature=temperature,
+                 top_p=top_p,
+                 **kwargs,
+             )
+
+         def consumer():
+             producer = threading.Thread(target=stream_producer)
+             producer.start()
+             while True:
+                 res = response_queue.get()
+                 if res is None:
+                     return
+                 yield res
+
+         return consumer()
+
+
+ # Copied from transformers.models.llama.modeling_llama.LlamaForSequenceClassification with Llama->InternLM2
+ @add_start_docstrings(
+     """
+     The InternLM2 Model transformer with a sequence classification head on top (linear layer).
+
+     [`InternLM2ForSequenceClassification`] uses the last token in order to do the classification, as other causal
+     models (e.g. GPT-2) do.
+
+     Since it does classification on the last token, it requires to know the position of the last token. If a
+     `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
+     no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
+     padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
+     each row of the batch).
+     """,
+     InternLM2_START_DOCSTRING,
+ )
+ class InternLM2ForSequenceClassification(InternLM2PreTrainedModel):
+     """Sequence classification head for the InternLM2 model."""
+
+     def __init__(self, config):
+         super().__init__(config)
+         self.num_labels = config.num_labels
+         self.model = InternLM2Model(config)
+         self.score = nn.Linear(config.hidden_size, self.num_labels, bias=False)
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     def get_input_embeddings(self):
+         return self.model.tok_embeddings
+
+     def set_input_embeddings(self, value):
+         self.model.tok_embeddings = value
+
+     @add_start_docstrings_to_model_forward(InternLM2_INPUTS_DOCSTRING)
+     def forward(
+         self,
+         input_ids: torch.LongTensor = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         labels: Optional[torch.LongTensor] = None,
+         use_cache: Optional[bool] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+     ) -> Union[Tuple, SequenceClassifierOutputWithPast]:
+         r"""
+         labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
+             Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
+             config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss). If
+             `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
+         """
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         transformer_outputs = self.model(
+             input_ids,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_values=past_key_values,
+             inputs_embeds=inputs_embeds,
+             use_cache=use_cache,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+         )
+         hidden_states = transformer_outputs[0]
+         logits = self.score(hidden_states)
+
+         if input_ids is not None:
+             batch_size = input_ids.shape[0]
+         else:
+             batch_size = inputs_embeds.shape[0]
+
+         if self.config.pad_token_id is None and batch_size != 1:
+             raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.")
+         if self.config.pad_token_id is None:
+             sequence_lengths = -1
+         else:
+             if input_ids is not None:
+                 # if no pad token found, use modulo instead of reverse indexing for ONNX compatibility
+                 sequence_lengths = torch.eq(input_ids, self.config.pad_token_id).int().argmax(-1) - 1
+                 sequence_lengths = sequence_lengths % input_ids.shape[-1]
+                 sequence_lengths = sequence_lengths.to(logits.device)
+             else:
+                 sequence_lengths = -1
+
+         pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths]
+
+         loss = None
+         if labels is not None:
+             labels = labels.to(logits.device)
+             if self.config.problem_type is None:
+                 if self.num_labels == 1:
+                     self.config.problem_type = "regression"
+                 elif self.num_labels > 1 and (labels.dtype in (torch.long, torch.int)):
+                     self.config.problem_type = "single_label_classification"
+                 else:
+                     self.config.problem_type = "multi_label_classification"
+
+             if self.config.problem_type == "regression":
+                 loss_fct = MSELoss()
+                 if self.num_labels == 1:
+                     loss = loss_fct(pooled_logits.squeeze(), labels.squeeze())
+                 else:
+                     loss = loss_fct(pooled_logits, labels)
+             elif self.config.problem_type == "single_label_classification":
+                 loss_fct = CrossEntropyLoss()
+                 loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1))
+             elif self.config.problem_type == "multi_label_classification":
+                 loss_fct = BCEWithLogitsLoss()
+                 loss = loss_fct(pooled_logits, labels)
+         if not return_dict:
+             output = (pooled_logits,) + transformer_outputs[1:]
+             return ((loss,) + output) if loss is not None else output
+
+         return SequenceClassifierOutputWithPast(
+             loss=loss,
+             logits=pooled_logits,
+             past_key_values=transformer_outputs.past_key_values,
+             hidden_states=transformer_outputs.hidden_states,
+             attentions=transformer_outputs.attentions,
+         )
+
+
+ # Copied from transformers.models.llama.modeling_llama.LlamaForQuestionAnswering with Llama->InternLM2
+ @add_start_docstrings(
+     """
+     The InternLM2 Model transformer with a span classification head on top for extractive question-answering tasks like
+     SQuAD (a linear layer on top of the hidden-states output to compute `span start logits` and `span end logits`).
+     """,
+     InternLM2_START_DOCSTRING,
+ )
+ class InternLM2ForQuestionAnswering(InternLM2PreTrainedModel):
+     """Question answering model for InternLM2."""
+
+     base_model_prefix = "transformer"
+
+     def __init__(self, config):
+         super().__init__(config)
+         self.transformer = InternLM2Model(config)
+         self.qa_outputs = nn.Linear(config.hidden_size, 2)
+
+         # Initialize weights and apply final processing
+         self.post_init()
+
+     def get_input_embeddings(self):
+         return self.transformer.tok_embeddings
+
+     def set_input_embeddings(self, value):
+         self.transformer.tok_embeddings = value
+
+     @add_start_docstrings_to_model_forward(InternLM2_INPUTS_DOCSTRING)
+     def forward(
+         self,
+         input_ids: Optional[torch.LongTensor] = None,
+         attention_mask: Optional[torch.FloatTensor] = None,
+         position_ids: Optional[torch.LongTensor] = None,
+         past_key_values: Optional[Union[Cache, List[torch.FloatTensor]]] = None,
+         inputs_embeds: Optional[torch.FloatTensor] = None,
+         start_positions: Optional[torch.LongTensor] = None,
+         end_positions: Optional[torch.LongTensor] = None,
+         output_attentions: Optional[bool] = None,
+         output_hidden_states: Optional[bool] = None,
+         return_dict: Optional[bool] = None,
+     ) -> Union[Tuple, QuestionAnsweringModelOutput]:
+         r"""
+         start_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
+             Labels for the position (index) of the start of the labelled span for computing the token classification
+             loss. Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the
+             sequence are not taken into account for computing the loss.
+         end_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
+             Labels for the position (index) of the end of the labelled span for computing the token classification
+             loss. Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the
+             sequence are not taken into account for computing the loss.
+         """
+         return_dict = return_dict if return_dict is not None else self.config.use_return_dict
+
+         outputs = self.transformer(
+             input_ids,
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_values=past_key_values,
+             inputs_embeds=inputs_embeds,
+             output_attentions=output_attentions,
+             output_hidden_states=output_hidden_states,
+             return_dict=return_dict,
+         )
+
+         sequence_output = outputs[0]
+
+         logits = self.qa_outputs(sequence_output)
+         start_logits, end_logits = logits.split(1, dim=-1)
+         start_logits = start_logits.squeeze(-1).contiguous()
+         end_logits = end_logits.squeeze(-1).contiguous()
+
+         total_loss = None
+         if start_positions is not None and end_positions is not None:
+             # If we are on multi-GPU, split add a dimension
+             if len(start_positions.size()) > 1:
+                 start_positions = start_positions.squeeze(-1).to(start_logits.device)
+             if len(end_positions.size()) > 1:
+                 end_positions = end_positions.squeeze(-1).to(end_logits.device)
+             # sometimes the start/end positions are outside our model inputs, we ignore these terms
+             ignored_index = start_logits.size(1)
+             start_positions = start_positions.clamp(0, ignored_index)
+             end_positions = end_positions.clamp(0, ignored_index)
+
+             loss_fct = CrossEntropyLoss(ignore_index=ignored_index)
+             start_loss = loss_fct(start_logits, start_positions)
+             end_loss = loss_fct(end_logits, end_positions)
+             total_loss = (start_loss + end_loss) / 2
+
+         if not return_dict:
+             output = (start_logits, end_logits) + outputs[2:]
+             return ((total_loss,) + output) if total_loss is not None else output
+
+         return QuestionAnsweringModelOutput(
+             loss=total_loss,
+             start_logits=start_logits,
+             end_logits=end_logits,
+             hidden_states=outputs.hidden_states,
1859
+ attentions=outputs.attentions,
1860
+ )
1861
+
1862
+
1863
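The span-extraction loss computed in `InternLM2ForQuestionAnswering.forward` can be sketched in isolation with plain torch tensors. This is a hypothetical illustration, not part of the InternLM2 API: the sizes, the random hidden states, and the example positions are all invented. It shows how the 2-unit head is split into start/end logits, how out-of-range positions are clamped to `sequence_length`, and how that same index is then passed to `CrossEntropyLoss` as `ignore_index` so those samples drop out of the loss.

```python
import torch
from torch import nn
from torch.nn import CrossEntropyLoss

batch, seq_len, hidden = 2, 8, 16
hidden_states = torch.randn(batch, seq_len, hidden)  # stand-in for transformer output

# The QA head projects each token to 2 scores: one for "start", one for "end".
qa_outputs = nn.Linear(hidden, 2)
logits = qa_outputs(hidden_states)                   # (batch, seq_len, 2)
start_logits, end_logits = logits.split(1, dim=-1)   # each (batch, seq_len, 1)
start_logits = start_logits.squeeze(-1).contiguous() # (batch, seq_len)
end_logits = end_logits.squeeze(-1).contiguous()

# Gold span positions; the second sample's positions are out of range on purpose.
start_positions = torch.tensor([1, 100])
end_positions = torch.tensor([3, 100])

# Clamp out-of-range positions to seq_len, then ignore that index in the loss,
# so samples whose answer lies outside the model inputs contribute nothing.
ignored_index = start_logits.size(1)
start_positions = start_positions.clamp(0, ignored_index)
end_positions = end_positions.clamp(0, ignored_index)

loss_fct = CrossEntropyLoss(ignore_index=ignored_index)
start_loss = loss_fct(start_logits, start_positions)
end_loss = loss_fct(end_logits, end_positions)
total_loss = (start_loss + end_loss) / 2  # scalar; only sample 0 contributes
```

Averaging the start and end cross-entropies is the standard SQuAD-style objective; the clamp-then-ignore trick avoids indexing errors without masking the batch by hand.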
# Copied from transformers.models.llama.modeling_llama.LlamaForTokenClassification with Llama->InternLM2
@add_start_docstrings(
    """
    The InternLM2 Model transformer with a token classification head on top (a linear layer on top of the hidden-states
    output) e.g. for Named-Entity-Recognition (NER) tasks.
    """,
    InternLM2_START_DOCSTRING,
)
class InternLM2ForTokenClassification(InternLM2PreTrainedModel):
    """Token classification model for InternLM2."""

    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.model = InternLM2Model(config)
        if getattr(config, "classifier_dropout", None) is not None:
            classifier_dropout = config.classifier_dropout
        elif getattr(config, "hidden_dropout", None) is not None:
            classifier_dropout = config.hidden_dropout
        else:
            classifier_dropout = 0.1
        self.dropout = nn.Dropout(classifier_dropout)
        self.score = nn.Linear(config.hidden_size, config.num_labels)

        # Initialize weights and apply final processing
        self.post_init()

    def get_input_embeddings(self):
        return self.model.tok_embeddings

    def set_input_embeddings(self, value):
        self.model.tok_embeddings = value

    @add_start_docstrings_to_model_forward(InternLM2_INPUTS_DOCSTRING)
    def forward(
        self,
        input_ids: Optional[torch.LongTensor] = None,
        attention_mask: Optional[torch.Tensor] = None,
        position_ids: Optional[torch.LongTensor] = None,
        past_key_values: Optional[List[torch.FloatTensor]] = None,
        inputs_embeds: Optional[torch.FloatTensor] = None,
        labels: Optional[torch.LongTensor] = None,
        use_cache: Optional[bool] = None,
        output_attentions: Optional[bool] = None,
        output_hidden_states: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, TokenClassifierOutput]:
        r"""
        labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
            Labels for computing the token classification loss. Indices should be in `[0, ...,
            config.num_labels - 1]`.
        """
        return_dict = return_dict if return_dict is not None else self.config.use_return_dict

        outputs = self.model(
            input_ids,
            attention_mask=attention_mask,
            position_ids=position_ids,
            past_key_values=past_key_values,
            inputs_embeds=inputs_embeds,
            use_cache=use_cache,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
        sequence_output = outputs[0]
        sequence_output = self.dropout(sequence_output)
        logits = self.score(sequence_output)

        loss = None
        if labels is not None:
            loss_fct = CrossEntropyLoss()
            loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))

        if not return_dict:
            output = (logits,) + outputs[2:]
            return ((loss,) + output) if loss is not None else output

        return TokenClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=outputs.hidden_states,
            attentions=outputs.attentions,
        )
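The per-token classification head above (dropout, a linear projection to `num_labels`, then cross-entropy over the flattened tokens) can likewise be sketched standalone. All sizes and inputs below are illustrative assumptions, not values from the InternLM2 config.

```python
import torch
from torch import nn
from torch.nn import CrossEntropyLoss

batch, seq_len, hidden, num_labels = 2, 6, 16, 5
sequence_output = torch.randn(batch, seq_len, hidden)  # stand-in for model output

dropout = nn.Dropout(0.1)                 # classifier_dropout fallback value
score = nn.Linear(hidden, num_labels)     # one label score per token

logits = score(dropout(sequence_output))  # (batch, seq_len, num_labels)

# Token-level labels; the loss flattens (batch, seq_len) into one long axis
# so every token is an independent classification example.
labels = torch.randint(0, num_labels, (batch, seq_len))
loss = CrossEntropyLoss()(logits.view(-1, num_labels), labels.view(-1))
```

Flattening with `view(-1, num_labels)` is why the labels docstring expects shape `(batch_size, sequence_length)`: each of the `batch * seq_len` tokens gets its own cross-entropy term.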