oopere committed (verified)
Commit 020a425 · 1 Parent(s): 93b8166

Update README.md

Files changed (1):
  1. README.md +3 -0
README.md CHANGED
@@ -26,6 +26,9 @@ This model is not intended to be used directly, but rather to be fine-tuned for
 - **License:** Same as original model
 - **Developed by:** [Pere Martra](https://huggingface.co/oopere)
 
+These models are part of the study "[Exploring GLU Expansion Ratios: Structured Pruning in Llama-3.2 Models](https://doi.org/10.31219/osf.io/qgxea)". They explore structured pruning in GLU-based architectures using Llama-3.2 (1B and 3B variants). The pruning experiments target optimal expansion ratios to balance performance, computational efficiency, and environmental sustainability. The models were evaluated across multiple benchmarks, including BoolQ, ARC-Easy, and MUSR, and demonstrate significant efficiency gains while maintaining robust task performance.
+
+
 ### Performance on Standard Benchmarks
 
 | Benchmark | Original Model | Pruned Model | Relative Change |
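The added paragraph describes structured pruning of the GLU expansion in Llama-3.2's MLP blocks, i.e. shrinking the intermediate dimension of the paired gate/up projections (and the matching rows of the down projection). The sketch below illustrates the general idea on toy NumPy weight matrices; the function name `prune_glu` and the simple L1-norm importance score are illustrative assumptions, not the exact criterion used in the study.

```python
import numpy as np

def prune_glu(gate_w, up_w, down_w, keep_ratio):
    """Structurally prune a GLU block by keeping the most important
    intermediate neurons.

    gate_w, up_w: (hidden, inter); down_w: (inter, hidden).
    Importance here is a plain L1 norm over the paired gate/up columns
    (a stand-in for whatever ranking criterion a real study would use).
    """
    inter = gate_w.shape[1]
    keep = max(1, int(inter * keep_ratio))
    # Score each intermediate neuron by the magnitude of its gate+up weights.
    importance = np.abs(gate_w).sum(axis=0) + np.abs(up_w).sum(axis=0)
    # Keep the top-k neurons, preserving their original order.
    idx = np.sort(np.argsort(importance)[-keep:])
    return gate_w[:, idx], up_w[:, idx], down_w[idx, :]

# Example: shrink a toy GLU block from a 4x to a 2x expansion ratio.
rng = np.random.default_rng(0)
hidden, inter = 8, 32
g = rng.normal(size=(hidden, inter))
u = rng.normal(size=(hidden, inter))
d = rng.normal(size=(inter, hidden))
g2, u2, d2 = prune_glu(g, u, d, keep_ratio=0.5)
print(g2.shape, u2.shape, d2.shape)  # (8, 16) (8, 16) (16, 8)
```

Because the same neuron indices are removed from all three projections, the pruned block is a drop-in replacement with a smaller intermediate size, which is what makes this pruning "structured" rather than sparse masking.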