sharpenb committed
Commit b814a7a · verified · 1 Parent(s): 240e8ea

Upload folder using huggingface_hub (#1)


- 91f722bcd367d044d4dae4ece524ef18ab6778913ea60f43ae7a6c0b291cd2fb (49cac93723008631c9b65560dca60f2980ed0caf)
- 024743729744e2b6ba52881d4b7f42a632d10ee0c2f446c17b139ed2d114651f (156656e33dc0658dffa701342807f9e768002e0f)

Files changed (5)
  1. README.md +89 -0
  2. config.json +70 -0
  3. configuration_chatglm.py +58 -0
  4. qmodel.pt +3 -0
  5. smash_config.json +19 -0
README.md ADDED
@@ -0,0 +1,89 @@
+ ---
+ thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
+ base_model: ORIGINAL_REPO_NAME
+ metrics:
+ - memory_disk
+ - memory_inference
+ - inference_latency
+ - inference_throughput
+ - inference_CO2_emissions
+ - inference_energy_consumption
+ tags:
+ - pruna-ai
+ ---
+ <!-- header start -->
+ <!-- 200823 -->
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
+ <img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </a>
+ </div>
+ <!-- header end -->
+ 
+ [![Twitter](https://img.shields.io/twitter/follow/PrunaAI?style=social)](https://twitter.com/PrunaAI)
+ [![GitHub](https://img.shields.io/github/followers/PrunaAI?label=Follow%20%40PrunaAI&style=social)](https://github.com/PrunaAI)
+ [![LinkedIn](https://img.shields.io/badge/LinkedIn-Connect-blue)](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
+ [![Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?style=social&logo=discord)](https://discord.gg/rskEr4BZJx)
+ 
+ # Simply make AI models cheaper, smaller, faster, and greener!
+ 
+ - Give a thumbs up if you like this model!
+ - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
+ - Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
+ - Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/).
+ - Join the Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback and suggestions or to get help.
+ 
+ ## Results
+ 
+ ![image info](./plots.png)
+ 
+ **Frequently Asked Questions**
+ - ***How does the compression work?*** The model is compressed with hqq.
+ - ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
+ - ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json`, after a hardware warmup. The smashed model is compared directly to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the benchmarks directly in your use-case conditions to find out whether the smashed model benefits you.
+ - ***What is the model format?*** We use safetensors.
+ - ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
+ - ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption that is less than 90% of the original base model's.
+ - ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
+ - ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than subsequent runs due to CUDA overheads.
+ - ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stopping the measurement once all of them have executed. "Async" metrics are obtained without syncing all GPU processes, stopping when the model output can be used by the CPU. We provide both metrics since either could be relevant depending on the use-case. We recommend testing the efficiency gains directly in your use-case.
+ 
+ ## Setup
+ 
+ You can run the smashed model with these steps:
+ 
+ 0. Check that the requirements of the original repo ORIGINAL_REPO_NAME are installed. In particular, check the python, cuda, and transformers versions.
+ 1. Make sure that you have installed the quantization-related packages.
+ ```bash
+ pip install hqq
+ ```
+ 2. Load & run the model.
+ ```python
+ from transformers import AutoTokenizer
+ from hqq.engine.hf import HQQModelForCausalLM
+ from hqq.models.hf.base import AutoHQQHFModel
+ 
+ try:
+     model = HQQModelForCausalLM.from_quantized("PrunaAI/THUDM-glm-4-9b-HQQ-4bit-smashed", device_map='auto')
+ except Exception:
+     # Fall back to the generic HQQ loader if the causal-LM wrapper cannot handle this architecture.
+     model = AutoHQQHFModel.from_quantized("PrunaAI/THUDM-glm-4-9b-HQQ-4bit-smashed")
+ tokenizer = AutoTokenizer.from_pretrained("ORIGINAL_REPO_NAME")
+ 
+ input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
+ 
+ outputs = model.generate(input_ids, max_new_tokens=216)
+ tokenizer.decode(outputs[0])
+ ```
+ 
+ ## Configurations
+ 
+ The configuration info is in `smash_config.json`.
+ 
+ ## Credits & License
+ 
+ The license of the smashed model follows the license of the original model. Please check the license of the original model ORIGINAL_REPO_NAME, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on PyPI.
+ 
+ ## Want to compress other models?
+ 
+ - Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
+ - Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
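For context on the FAQ answer above ("the model is compressed with hqq"): below is a minimal sketch of how a comparable 4-bit HQQ quantization could be produced with the `hqq` library, using the same `nbits=4` / `group_size=64` settings that appear in `config.json` further down. This is an illustrative reconstruction following hqq's documented API, not the exact Pruna smashing pipeline; argument names reflect recent hqq releases.

```python
import torch
from hqq.engine.hf import HQQModelForCausalLM
from hqq.core.quantize import BaseQuantizeConfig

# Load the original (unquantized) base model; ORIGINAL_REPO_NAME is the
# placeholder used throughout this card for the base repo.
model = HQQModelForCausalLM.from_pretrained("ORIGINAL_REPO_NAME", torch_dtype=torch.bfloat16)

# 4-bit weights with group size 64, matching weight_quant_params in config.json.
quant_config = BaseQuantizeConfig(nbits=4, group_size=64)

# Quantize in place; lm_head is skipped, as in the shipped quantization_config.
model.quantize_model(quant_config=quant_config, compute_dtype=torch.bfloat16, device="cuda")

# Persist the quantized weights (this is what produces files like qmodel.pt).
model.save_quantized("smashed-model")
```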
config.json ADDED
@@ -0,0 +1,70 @@
+ {
+   "_attn_implementation_autoset": true,
+   "_name_or_path": "/tmp/models/tmp8zp09c9c/tmp67u8lm72",
+   "add_bias_linear": false,
+   "add_qkv_bias": true,
+   "apply_query_key_layer_scaling": true,
+   "apply_residual_connection_post_layernorm": false,
+   "architectures": [
+     "ChatGLMForConditionalGeneration"
+   ],
+   "attention_dropout": 0.0,
+   "attention_softmax_in_fp32": true,
+   "auto_map": {
+     "AutoConfig": "configuration_chatglm.ChatGLMConfig",
+     "AutoModel": "THUDM/glm-4-9b--modeling_chatglm.ChatGLMForConditionalGeneration",
+     "AutoModelForCausalLM": "THUDM/glm-4-9b--modeling_chatglm.ChatGLMForConditionalGeneration",
+     "AutoModelForSeq2SeqLM": "THUDM/glm-4-9b--modeling_chatglm.ChatGLMForConditionalGeneration",
+     "AutoModelForSequenceClassification": "THUDM/glm-4-9b--modeling_chatglm.ChatGLMForSequenceClassification"
+   },
+   "bias_dropout_fusion": true,
+   "classifier_dropout": null,
+   "eos_token_id": [
+     151329,
+     151336,
+     151338
+   ],
+   "ffn_hidden_size": 13696,
+   "fp32_residual_connection": false,
+   "hidden_dropout": 0.0,
+   "hidden_size": 4096,
+   "kv_channels": 128,
+   "layernorm_epsilon": 1.5625e-07,
+   "model_type": "chatglm",
+   "multi_query_attention": true,
+   "multi_query_group_num": 2,
+   "num_attention_heads": 32,
+   "num_layers": 40,
+   "original_rope": true,
+   "pad_token_id": 151329,
+   "padded_vocab_size": 151552,
+   "post_layer_norm": true,
+   "quantization_config": {
+     "quant_config": {
+       "offload_meta": false,
+       "scale_quant_params": null,
+       "weight_quant_params": {
+         "axis": 1,
+         "channel_wise": true,
+         "group_size": 64,
+         "nbits": 4,
+         "optimize": true,
+         "round_zero": true,
+         "view_as_float": false
+       },
+       "zero_quant_params": null
+     },
+     "quant_method": "hqq",
+     "skip_modules": [
+       "lm_head"
+     ]
+   },
+   "rmsnorm": true,
+   "rope_ratio": 1,
+   "seq_length": 8192,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.48.2",
+   "use_cache": true,
+   "vocab_size": 151552
+ }
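To sanity-check the quantization settings above without downloading the full checkpoint, the config file can be fetched and read on its own. A small sketch (the repo id is the one used in the Setup section):

```python
import json
from huggingface_hub import hf_hub_download

# Fetch only config.json from the smashed repo and inspect the HQQ settings.
path = hf_hub_download("PrunaAI/THUDM-glm-4-9b-HQQ-4bit-smashed", "config.json")
with open(path) as f:
    config = json.load(f)

wq = config["quantization_config"]["quant_config"]["weight_quant_params"]
print(config["quantization_config"]["quant_method"])  # hqq
print(wq["nbits"], wq["group_size"])                  # 4 64
```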
configuration_chatglm.py ADDED
@@ -0,0 +1,58 @@
+ from transformers import PretrainedConfig
+ 
+ 
+ class ChatGLMConfig(PretrainedConfig):
+     model_type = "chatglm"
+ 
+     def __init__(
+         self,
+         num_layers=28,
+         padded_vocab_size=65024,
+         hidden_size=4096,
+         ffn_hidden_size=13696,
+         kv_channels=128,
+         num_attention_heads=32,
+         seq_length=2048,
+         hidden_dropout=0.0,
+         classifier_dropout=None,
+         attention_dropout=0.0,
+         layernorm_epsilon=1e-5,
+         rmsnorm=True,
+         apply_residual_connection_post_layernorm=False,
+         post_layer_norm=True,
+         add_bias_linear=False,
+         add_qkv_bias=False,
+         bias_dropout_fusion=True,
+         multi_query_attention=False,
+         multi_query_group_num=1,
+         rope_ratio=1,
+         apply_query_key_layer_scaling=True,
+         attention_softmax_in_fp32=True,
+         fp32_residual_connection=False,
+         **kwargs
+     ):
+         self.num_layers = num_layers
+         self.vocab_size = padded_vocab_size
+         self.padded_vocab_size = padded_vocab_size
+         self.hidden_size = hidden_size
+         self.ffn_hidden_size = ffn_hidden_size
+         self.kv_channels = kv_channels
+         self.num_attention_heads = num_attention_heads
+         self.seq_length = seq_length
+         self.hidden_dropout = hidden_dropout
+         self.classifier_dropout = classifier_dropout
+         self.attention_dropout = attention_dropout
+         self.layernorm_epsilon = layernorm_epsilon
+         self.rmsnorm = rmsnorm
+         self.apply_residual_connection_post_layernorm = apply_residual_connection_post_layernorm
+         self.post_layer_norm = post_layer_norm
+         self.add_bias_linear = add_bias_linear
+         self.add_qkv_bias = add_qkv_bias
+         self.bias_dropout_fusion = bias_dropout_fusion
+         self.multi_query_attention = multi_query_attention
+         self.multi_query_group_num = multi_query_group_num
+         self.rope_ratio = rope_ratio
+         self.apply_query_key_layer_scaling = apply_query_key_layer_scaling
+         self.attention_softmax_in_fp32 = attention_softmax_in_fp32
+         self.fp32_residual_connection = fp32_residual_connection
+         super().__init__(**kwargs)
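The `auto_map` entry in `config.json` routes `AutoConfig` to this `ChatGLMConfig` class, which lives in the repo rather than in `transformers`, so `trust_remote_code=True` is required. A quick sketch; the printed values come from `config.json` above, overriding the defaults in `__init__`:

```python
from transformers import AutoConfig

# Resolves to configuration_chatglm.ChatGLMConfig via the auto_map in config.json.
config = AutoConfig.from_pretrained(
    "PrunaAI/THUDM-glm-4-9b-HQQ-4bit-smashed", trust_remote_code=True
)
print(type(config).__name__)         # ChatGLMConfig
print(config.num_layers)             # 40 (config.json overrides the default of 28)
print(config.multi_query_group_num)  # 2 -> grouped-query attention with 2 KV groups
```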
qmodel.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:660a0b7915d1cb9cdef5ad3d890e5fc5ae33639638d8fafad1be33b305ea57ca
+ size 6729419204
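The file above is a Git LFS pointer, not the weights themselves: the real ~6.7 GB `qmodel.pt` is stored in LFS and identified by its SHA-256. A small sketch to verify a downloaded copy against the pointer (size and hash are taken verbatim from the pointer above):

```python
import hashlib
import os

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in 1 MiB chunks so the 6.7 GB checkpoint never sits in memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

path = "qmodel.pt"
assert os.path.getsize(path) == 6729419204
assert sha256_of(path) == "660a0b7915d1cb9cdef5ad3d890e5fc5ae33639638d8fafad1be33b305ea57ca"
print("qmodel.pt matches its LFS pointer")
```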
smash_config.json ADDED
@@ -0,0 +1,19 @@
+ {
+   "batchers": null,
+   "cachers": null,
+   "compilers": null,
+   "distillers": null,
+   "pruners": null,
+   "quantizers": "hqq",
+   "recoverers": null,
+   "quant_hqq_backend": "torchao_int4",
+   "quant_hqq_group_size": 64,
+   "quant_hqq_weight_bits": 4,
+   "max_batch_size": 1,
+   "device": "cuda",
+   "cache_dir": "/tmp/models/tmp8zp09c9c",
+   "task": "",
+   "save_load_fn": "hqq",
+   "save_load_fn_args": {},
+   "api_key": null
+ }
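The `"quant_hqq_backend": "torchao_int4"` entry names the low-level kernel used at inference time. A hedged sketch of how such a backend is typically enabled with the hqq library after loading; `prepare_for_inference` and its `backend` argument follow hqq's documented utilities, but exact availability depends on your hqq version:

```python
from hqq.models.hf.base import AutoHQQHFModel
from hqq.utils.patching import prepare_for_inference

model = AutoHQQHFModel.from_quantized("PrunaAI/THUDM-glm-4-9b-HQQ-4bit-smashed")

# Patch the HQQ linear layers to use the int4 torchao kernels,
# mirroring the "quant_hqq_backend": "torchao_int4" setting above.
prepare_for_inference(model, backend="torchao_int4")
```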