---
language: en
license: apache-2.0
---

# Shears Model Card: shears-llama-7b-50-math-super-adapter

This is the super-adapter fine-tuned on a sparsified LLaMA-7B with math reasoning datasets using Shears.

The super-network is released so that users can apply their own search algorithms and evaluation metrics to extract subnetworks suited to their specific needs.

## Model Details

### Information

- **Model name:** shears-llama-7b-50-math-super-adapter
- **Base model:** [IntelLabs/shears-llama-7b-50-base](https://huggingface.co/IntelLabs/shears-llama-7b-50-base)
- **Sparsity:** 50%
- **Domain:** Math
- **Subnetwork version:** Super
- **NNCF Configuration:** [nncf_shears_llama.json](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/Shears/nncf_config/nncf_shears_llama.json)

### Adapter Configuration

- **LoRA rank:** 32
- **LoRA alpha:** 64
- **LoRA target modules:** q_proj, k_proj, v_proj, up_proj, down_proj
- **LoRA rank search space:** [32, 24, 16] (for each LoRA module; see the illustrative sketch below)
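
For orientation only, the fixed hyperparameters above map onto a standard PEFT `LoraConfig` roughly as in the sketch below. This is not the Shears training setup: Shears trains an elastic LoRA super-adapter through NNCF, so the rank search space itself is not expressed in this config.

```python
# Illustrative only: how the fixed adapter hyperparameters above would look as a
# plain PEFT LoraConfig. The elastic rank search space ([32, 24, 16]) is handled
# by Shears/NNCF during training and is NOT captured here.
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,             # maximal LoRA rank of the super-adapter
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```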

### Training Hyperparameters

- **Batch size:** 16
- **Learning rate:** 3e-4
- **Epochs:** 3

### Training Data

Unified math reasoning dataset: [math_10k.json](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/ft-training_set/math_10k.json) (collected from the training sets of GSM8K, MAWPS, and AQuA).

### Evaluation Data

[GSM8K](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/gsm8k/test.json), [AQuA](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/AQuA/test.json), [MAWPS](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/mawps/test.json), [SVAMP](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/SVAMP/test.json)

## How to use

Refer to the illustrative example in [load_and_explore_supernet.ipynb](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/Shears/search/load_and_explore_supernet.ipynb), which shows how to load a Shears super-network directly and extract diverse subnetworks from it. This lets users apply their own search algorithms and evaluation metrics to extract subnetworks tailored to their specific requirements.
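
As a purely illustrative skeleton of such a custom search (not the Shears or NNCF API), the loop below randomly samples rank assignments from the search space listed above and keeps the best-scoring candidate. `activate_subnetwork` and `evaluate_subnetwork` are hypothetical placeholders: replace them with the activation mechanism demonstrated in the notebook and with your own validation metric, and assume `model` is a loaded super-network as in the snippet further below.

```python
# Hypothetical random-search skeleton over the LoRA rank search space.
# Both helper functions are placeholders, not real Shears/NNCF calls.
import random

RANK_CHOICES = [32, 24, 16]
TARGET_MODULES = ["q_proj", "k_proj", "v_proj", "up_proj", "down_proj"]

def sample_rank_config(num_layers=32):
    # Pick one rank per (layer, module) pair uniformly from the search space.
    return {(layer, module): random.choice(RANK_CHOICES)
            for layer in range(num_layers)
            for module in TARGET_MODULES}

def activate_subnetwork(model, rank_config):
    # Placeholder: apply the subnetwork-activation mechanism shown in the notebook.
    raise NotImplementedError

def evaluate_subnetwork(model):
    # Placeholder: plug in your own validation metric (e.g. held-out accuracy).
    raise NotImplementedError

best_score, best_config = float("-inf"), None
for _ in range(20):  # small random-search budget
    config = sample_rank_config()
    activate_subnetwork(model, config)
    score = evaluate_subnetwork(model)
    if score > best_score:
        best_score, best_config = score, config
print(f"Best validation score: {best_score}")
```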

Moreover, the super-network is essentially the maximal subnetwork, and it can also be loaded directly:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate_prompt(instruction):
    return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
"""

# Load the sparsified base model and attach the Shears super-adapter on top.
base_model = AutoModelForCausalLM.from_pretrained("IntelLabs/shears-llama-7b-50-base")
model = PeftModel.from_pretrained(base_model, "IntelLabs/shears-llama-7b-50-math-super-adapter")
model.eval()

# Count non-zero parameters (the base weights are ~50% sparse).
non_zero_params = sum([(param.data != 0).sum().item() for _, param in model.named_parameters()])
print(f"Number of all non-zero parameters: {non_zero_params}")

tokenizer = AutoTokenizer.from_pretrained("IntelLabs/shears-llama-7b-50-base")

instruction = "Edgar eats 18 pretzels a day. If his brother eats 1/2 as many, how many does his brother eat in a week?"
prompt = generate_prompt(instruction)
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].to(model.device)

# Generate a response with beam search.
with torch.no_grad():
    generation_output = model.generate(
        input_ids=input_ids,
        return_dict_in_generate=True,
        output_scores=True,
        max_new_tokens=256,
        use_cache=True,
        num_beams=4,
    )
s = generation_output.sequences[0]
output = tokenizer.decode(s)
print(output)
```
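
As a rough sanity check on the 50% sparsity of the base weights, you can compare the non-zero count above against the total parameter count. This is a small sketch reusing `model` and `non_zero_params` from the snippet above; note that the adapter and embedding weights are dense, so the overall density will sit somewhat above 50%.

```python
# Rough density check, reusing `model` and `non_zero_params` from above.
total_params = sum(param.numel() for _, param in model.named_parameters())
print(f"Total parameters: {total_params}")
print(f"Overall density: {non_zero_params / total_params:.2%}")
```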

## Evaluation Results

Results of the heuristic subnetwork discovered from the super-network:

| Model | Sparsity | GSM8K | AQuA | MAWPS | SVAMP | Average |
|-----------------------|---------|-------|-------|-------|-------|---------|
| LLaMA-7B-LoRA | - | 37.5 | 18.9 | 79.0 | 52.1 | 46.9 |
| [**LLaMA-7B-Shears**](https://huggingface.co/IntelLabs/shears-llama-7b-50-math-heuristic-adapter) | **50%** | 36.1 | 22.0 | 78.6 | 44.5 | 45.3 |
| LLaMA-13B-LoRA | - | 47.5 | 18.5 | 83.6 | 54.6 | 51.1 |
| [**LLaMA-13B-Shears**](https://huggingface.co/IntelLabs/shears-llama-13b-50-math-heuristic-adapter) | **50%** | 45.1 | 22.0 | 83.2 | 53.3 | 50.9 |

## Model Sources

- **Repository:** [https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/Shears](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/Shears)
- **Paper:** [Shears: Unstructured Sparsity with Neural Low-rank Adapter Search](https://arxiv.org/abs/2404.10934)

## Citation

```bibtex
@article{munoz2024shears,
  title={Shears: Unstructured Sparsity with Neural Low-rank Adapter Search},
  author={J. Pablo Munoz and Jinjie Yuan and Nilesh Jain},
  journal={The 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-2024)},
  year={2024}
}
```

## License

Apache-2.0