PEFT
Safetensors
English
jinjieyuan committed on
Commit f794152 · 1 Parent(s): 601c1e5

Create README.md


Signed-off-by: jinjieyuan <[email protected]>

Files changed (1)
  1. README.md +123 -0
README.md ADDED
@@ -0,0 +1,123 @@
---
language: en
license: apache-2.0
---

# Shears Model Card: shears-llama-7b-50-math-heuristic

The heuristic subnetwork discovered from the [super-network](https://huggingface.co/IntelLabs/shears-llama-7b-50-math-super), which was fine-tuned from LLaMA-7B on math reasoning datasets using Shears.

## Model Details

### Information

- **Model name:** shears-llama-7b-50-math-heuristic
- **Base model:** [LLaMA-7B](https://huggingface.co/yahma/llama-7b-hf)
- **Sparsity:** 50%
- **Domain:** Math
- **Subnetwork version:** Heuristic
- **NNCF configuration:** [nncf_shears_llama_7b_sparsity50.json](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/Shears/nncf_config/unified_math/nncf_shears_llama_7b_sparsity50.json)

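The sparsity figure above can be sanity-checked by measuring the fraction of exactly-zero weights in a locally downloaded copy of the base model. The sketch below is illustrative only; the `base_model` path follows the layout used in the usage example later in this card.

```python
# Sketch: measure the fraction of exactly-zero weights in the sparsified base model.
# The path is a local copy following the layout used in the usage example below.
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("shears-llama-7b-50-math-heuristic/base_model")

zeros = sum((p == 0).sum().item() for p in base_model.parameters())
total = sum(p.numel() for p in base_model.parameters())
# Expected to be roughly 50%; sparsity targets the large projection matrices,
# so the overall fraction may come out slightly lower.
print(f"Zero-weight fraction: {zeros / total:.2%}")
```
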
### Adapter Configuration

- **LoRA rank:** 32 (24 in the heuristic subnetwork)
- **LoRA alpha:** 64
- **LoRA target modules:** q_proj, k_proj, v_proj, up_proj, down_proj
- **LoRA rank search space:** [32, 24, 16] (for each LoRA module)

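For orientation only, the adapter hyperparameters above correspond roughly to the plain PEFT `LoraConfig` sketched below. The released Shears adapter additionally carries the elastic rank search space through the NNCF configuration, and `lora_dropout` is an assumption not stated in this card, so treat this as an illustration rather than the exact training configuration.

```python
# Illustrative sketch of the adapter hyperparameters as a plain PEFT LoraConfig.
# The actual Shears setup also encodes the [32, 24, 16] rank search space via NNCF.
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,                                # maximum LoRA rank in the search space
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "up_proj", "down_proj"],
    lora_dropout=0.0,                    # assumption; not specified in this card
    task_type="CAUSAL_LM",
)
```
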
### Training Hyperparameters

- **Batch size:** 16
- **Learning rate:** 3e-4
- **Epochs:** 3

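As a rough reference only, these hyperparameters map onto a standard `transformers.TrainingArguments` as sketched below. The actual fine-tuning is driven by the Shears training scripts and NNCF configuration in the repository, and the card does not state whether the batch size of 16 is per device or effective.

```python
# Rough, illustrative mapping of the hyperparameters above onto TrainingArguments.
# output_dir is a placeholder; the real Shears runs use the scripts in the repository.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./shears-llama-7b-50-math",  # placeholder
    per_device_train_batch_size=16,          # "batch size: 16" (per-device vs. effective not specified)
    learning_rate=3e-4,
    num_train_epochs=3,
)
```
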
### Training Data

Unified math reasoning dataset: [math_10k.json](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/ft-training_set/math_10k.json) (collected from the training sets of GSM8K, MAWPS, and AQuA).

### Evaluation Data

[GSM8K](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/gsm8k/test.json), [AQuA](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/AQuA/test.json), [MAWPS](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/mawps/test.json), [SVAMP](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/SVAMP/test.json)

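A minimal sketch for inspecting local copies of these files is shown below. The file paths are placeholders for wherever the LLM-Adapters files were downloaded, and the snippet only reports the number of records and their fields rather than assuming a particular schema.

```python
# Sketch: inspect local copies of the training / evaluation JSON files.
# File paths are placeholders for wherever the LLM-Adapters files were downloaded.
import json

with open("math_10k.json") as f:
    train_records = json.load(f)
with open("gsm8k_test.json") as f:
    gsm8k_test = json.load(f)

print(f"Training records: {len(train_records)}")
print(f"GSM8K test records: {len(gsm8k_test)}")
print("Fields in the first training record:", sorted(train_records[0].keys()))
```
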
## How to use

Use our modified PEFT library (apply this [patch](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/Shears/patches/peft-modifications-for-shears-inference-usage.patch)):

```bash
git clone https://github.com/huggingface/peft.git
pushd peft && git checkout v0.5.0 && git apply --ignore-space-change --ignore-whitespace peft-modifications-for-shears-inference-usage.patch && pip install -e . && popd
```

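After installing the patched library, a quick way to confirm that the local editable install of PEFT v0.5.0 is the one being imported:

```python
# Confirm that the patched, locally installed PEFT v0.5.0 is the one on the path.
import peft

print(peft.__version__)  # expected: 0.5.0
print(peft.__file__)     # should point into the local `peft` checkout
```
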
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM
from transformers import AutoTokenizer


def generate_prompt(instruction):
    return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
"""


# Load the sparsified base model and attach the heuristic-subnetwork adapter.
base_model_path = "shears-llama-7b-50-math-heuristic/base_model"
adapter_model_path = "shears-llama-7b-50-math-heuristic/adapter_model"
base_model = AutoModelForCausalLM.from_pretrained(base_model_path)
model = PeftModel.from_pretrained(base_model, adapter_model_path)
model.eval()

# Count the non-zero parameters (the base model weights are ~50% sparse).
non_zero_params = sum([(param.data != 0).sum().item() for _, param in model.named_parameters()])
print(f"Number of all non-zero parameters: {non_zero_params}")

tokenizer = AutoTokenizer.from_pretrained(base_model_path)
tokenizer.pad_token_id = 0

instruction = "Edgar eats 18 pretzels a day. If his brother eats 1/2 as many, how many does his brother eat in a week?"
prompt = generate_prompt(instruction)
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].to(model.device)

# Generate an answer with beam search.
with torch.no_grad():
    generation_output = model.generate(
        input_ids=input_ids,
        return_dict_in_generate=True,
        output_scores=True,
        max_new_tokens=256,
        use_cache=True,
        num_beams=4,
    )
s = generation_output.sequences[0]
output = tokenizer.decode(s)
print(output)
```

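The decoded string contains the full prompt followed by the generated answer. Assuming the prompt template defined above, a small post-processing step keeps only the text after the `### Response:` marker:

```python
# Keep only the generated answer, assuming the "### Response:" prompt template above.
response = output.split("### Response:")[-1].strip()
print(response)
```
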
## Evaluation Results

| Model | Sparsity | GSM8K | AQuA | MAWPS | SVAMP | Average |
|-----------------------|---------|-------|------|-------|-------|---------|
| LLaMA-7B-LoRA | - | 37.5 | 18.9 | 79.0 | 52.1 | 46.9 |
| [**LLaMA-7B-Shears**](https://huggingface.co/IntelLabs/shears-llama-7b-50-math-heuristic) | **50%** | 36.1 | 22.0 | 78.6 | 44.5 | 45.3 |
| LLaMA-13B-LoRA | - | 47.5 | 18.5 | 83.6 | 54.6 | 51.1 |
| [**LLaMA-13B-Shears**](https://huggingface.co/IntelLabs/shears-llama-13b-50-math-heuristic) | **50%** | 45.1 | 22.0 | 83.2 | 53.3 | 50.9 |

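As a quick sanity check, the Average column is the unweighted mean of the four task accuracies; the snippet below recomputes it from the per-task scores in the table.

```python
# Recompute the Average column as the unweighted mean of the four task accuracies.
rows = {
    "LLaMA-7B-LoRA":    [37.5, 18.9, 79.0, 52.1],
    "LLaMA-7B-Shears":  [36.1, 22.0, 78.6, 44.5],
    "LLaMA-13B-LoRA":   [47.5, 18.5, 83.6, 54.6],
    "LLaMA-13B-Shears": [45.1, 22.0, 83.2, 53.3],
}
for name, scores in rows.items():
    print(f"{name}: {sum(scores) / len(scores):.2f}")  # rounds to the Average column
```
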
## Model Sources

- **Repository:** [https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/Shears](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/Shears)
- **Paper:** [Shears: Unstructured Sparsity with Neural Low-rank Adapter Search]()

## Citation

```bibtex
@article{munoz2024shears,
  title   = {Shears: Unstructured Sparsity with Neural Low-rank Adapter Search},
  author  = {J. Pablo Munoz and Jinjie Yuan and Nilesh Jain},
  journal = {},
  year    = {2024}
}
```

## License

Apache-2.0