AmelieSchreiber committed

Commit 2f8efbb · 1 Parent(s): 49868ca

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -36,7 +36,7 @@ trainable params: 23682 || all params: 4075265 || trainable%: 0.5811155838945443
 
 It was shown in the QLoRA paper that to obtain performance comparable to or better than full finetuning, the most important hyperparameter
 that can be adjusted is which weight matrices the LoRA adapters are applied to, with more being better. The rank and other hyperparameters
-such as rank and the scaling factor alpha did not seem to matter. So, an important thing to investigate next would be to check whether this
+such as the scaling factor alpha did not seem to matter. So, an important thing to investigate next would be to check whether this
 transfers to protein language models as well.
 
 ## Testing for Overfitting
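
To make the QLoRA finding concrete, here is a minimal sketch of how one might vary which weight matrices receive LoRA adapters when finetuning a protein language model with Hugging Face PEFT. The checkpoint, task head, and hyperparameter values below are illustrative assumptions, not taken from this commit; the module names `query`, `key`, and `value` match ESM-2's attention layers in `transformers`.

```python
# Sketch (assumptions noted above): test whether "adapting more weight
# matrices matters more than rank/alpha" transfers to protein LMs.
from transformers import AutoModelForTokenClassification
from peft import LoraConfig, get_peft_model

# Hypothetical base model and head; swap in the checkpoint you are studying.
model = AutoModelForTokenClassification.from_pretrained(
    "facebook/esm2_t6_8M_UR50D", num_labels=2
)

lora_config = LoraConfig(
    r=8,            # rank; per QLoRA, less important than matrix coverage
    lora_alpha=16,  # scaling factor alpha; likewise reported as secondary
    # Widen or narrow this list to vary how many weight matrices get adapters:
    target_modules=["query", "key", "value"],
    lora_dropout=0.05,
    bias="none",
)

model = get_peft_model(model, lora_config)
# Prints a line like the one quoted in the diff above:
# trainable params: ... || all params: ... || trainable%: ...
model.print_trainable_parameters()
```

Comparing runs that differ only in `target_modules` (e.g. `["query"]` vs. all attention projections), with rank and alpha held fixed, would directly probe whether the QLoRA result carries over.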