---
language: en
license: apache-2.0
---

# Shears Model Card: shears-llama-7b-50-commonsense-heuristic

The heuristic subnetwork discovered from the Shears super-network fine-tuned on LLaMA-7B with commonsense reasoning datasets.

## Model Details

### Information

- **Model name:** shears-llama-7b-50-commonsense-heuristic
- **Base model:** [LLaMA-7B](https://huggingface.co/yahma/llama-7b-hf)
- **Sparsity:** 50%
- **Domain:** Commonsense
- **Subnetwork version:** Heuristic
- **NNCF Configuration:** [nncf_shears_llama_7b_sparsity50.json](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/Shears/nncf_config/unified_commonsense/nncf_shears_llama_7b_sparsity50.json)

### Adapter Configuration

- **LoRA rank:** 32
- **LoRA alpha:** 64
- **LoRA target modules:** q_proj, k_proj, v_proj, up_proj, gate_proj, down_proj
- **LoRA rank search space:** [32, 24, 16] (per module)

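The fixed adapter settings above correspond to a standard `peft` `LoraConfig`; a minimal sketch, assuming stock `peft` (the elastic rank search space is handled by Shears/NNCF, not by `peft` itself, so only the fixed values appear here):

```python
from peft import LoraConfig

# Fixed adapter hyperparameters from the list above; the [32, 24, 16]
# rank search space is explored by Shears during training, not here.
lora_config = LoraConfig(
    r=32,            # LoRA rank
    lora_alpha=64,   # LoRA alpha
    target_modules=["q_proj", "k_proj", "v_proj", "up_proj", "gate_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```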
### Training Hyperparameters

- **Batch size:** 16
- **Learning rate:** 3e-4
- **Epochs:** 3

### Training Data

Unified commonsense reasoning dataset: [commonsense_170k.json](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/ft-training_set/commonsense_170k.json).

### Evaluation Data

[BoolQ](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/boolq/test.json), [PIQA](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/piqa/test.json), [SIQA](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/social_i_qa/test.json), [HellaSwag](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/hellaswag/test.json), [WinoGrande](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/winogrande/test.json), [ARC-e](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/ARC-Easy/test.json), [ARC-c](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/ARC-Challenge/test.json), [OBQA](https://github.com/AGI-Edgerunners/LLM-Adapters/blob/main/dataset/openbookqa/test.json).

## How to use

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate_prompt(instruction):
    return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
"""

base_model_path = "shears-llama-7b-50-commonsense-heuristic/base_model"
adapter_model_path = "shears-llama-7b-50-commonsense-heuristic/adapter_model"
base_model = AutoModelForCausalLM.from_pretrained(base_model_path)
model = PeftModel.from_pretrained(base_model, adapter_model_path)
model.eval()

# Count the parameters that survive pruning (non-zero entries).
non_zero_params = sum((param.data != 0).sum().item() for _, param in model.named_parameters())
print(f"Number of all non-zero parameters: {non_zero_params}")

tokenizer = AutoTokenizer.from_pretrained(base_model_path)
tokenizer.pad_token_id = 0

instruction = (
    "Please choose the correct answer to the question: A cactus stem is used to store\n\n"
    "Answer1: fruit Answer2: liquid Answer3: food Answer4: spines\n\n"
    "Answer format: answer1/answer2/answer3/answer4"
)
prompt = generate_prompt(instruction)
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"].to(model.device)
with torch.no_grad():
    generation_output = model.generate(
        input_ids=input_ids,
        return_dict_in_generate=True,
        output_scores=True,
        max_new_tokens=256,
        use_cache=True,
        num_beams=4,
    )
s = generation_output.sequences[0]
output = tokenizer.decode(s)
print(output)
```
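The non-zero parameter count printed above is how the sparsity level is measured: the fraction of exactly-zero weights. A dependency-free sketch of the same idea (the `sparsity` helper is illustrative, not part of the Shears code):

```python
def sparsity(weights):
    """Fraction of exactly-zero entries in a flat list of weights."""
    zeros = sum(1 for w in weights if w == 0)
    return zeros / len(weights)

# Two of four entries are zero, so this toy tensor is 50% sparse.
print(sparsity([0.0, 1.2, 0.0, -0.7]))  # 0.5
```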

## Evaluation Results

| Model | Sparsity | BoolQ | PIQA | SIQA | HellaSwag | WinoG | ARC-e | ARC-c | OBQA | Average |
|-------|----------|-------|------|------|-----------|-------|-------|-------|------|---------|
| ChatGPT | - | 73.1 | 85.4 | 68.5 | 78.5 | 66.1 | 89.8 | 79.9 | 74.8 | 77.0 |
| LLaMA-7B-LoRA | - | 68.9 | 80.7 | 77.4 | 78.1 | 78.8 | 77.8 | 61.3 | 74.8 | 74.7 |
| [**LLaMA-7B-Shears**](https://huggingface.co/IntelLabs/shears-llama-7b-50-commonsense-heuristic) | **50%** | 67.3 | 79.1 | 77.5 | 73.3 | 77.7 | 74.4 | 57.9 | 72.8 | 72.5 |

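As a quick arithmetic check, the Average column is the unweighted mean of the eight task accuracies; for the Shears row:

```python
# Accuracies for LLaMA-7B-Shears, copied from the table above.
shears_scores = [67.3, 79.1, 77.5, 73.3, 77.7, 74.4, 57.9, 72.8]

average = sum(shears_scores) / len(shears_scores)
print(round(average, 1))  # 72.5, matching the reported Average
```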
## Model Sources

- **Repository:** [https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/Shears](https://github.com/IntelLabs/Hardware-Aware-Automated-Machine-Learning/tree/main/Shears)
- **Paper:** [Shears: Unstructured Sparsity with Neural Low-rank Adapter Search]()

## Citation

```bibtex
@article{munoz2024shears,
  title={Shears: Unstructured Sparsity with Neural Low-rank Adapter Search},
  author={J. Pablo Munoz and Jinjie Yuan and Nilesh Jain},
  journal={},
  year={2024}
}
```

## License

Apache-2.0