Anthonyg5005 committed
Commit 372f78a · verified · 1 Parent(s): bbf16fb

Update README.md

Files changed (1)
  1. README.md +5 -3
README.md CHANGED
@@ -19,6 +19,8 @@ Feel free to send in pull requests or use this code however you'd like.\
 
  - [EXL2 Private Quant V3](https://colab.research.google.com/drive/1Vc7d6JU3Z35OVHmtuMuhT830THJnzNfS?usp=sharing) **(COLAB)**
 
+ - [Upload folder to repo](https://huggingface.co/Anthonyg5005/hf-scripts/blob/main/upload%20folder%20to%20repo.py)
+
  ## work in progress/not tested ([unfinished](https://huggingface.co/Anthonyg5005/hf-scripts/tree/unfinished) branch)
 
  - EXL2 Private Quant V4
@@ -29,9 +31,6 @@ Feel free to send in pull requests or use this code however you'd like.\
  - Windows/Linux support (don't have mac)
  - Colab version will use this with markdown parameters
 
- - [Upload folder](https://huggingface.co/Anthonyg5005/hf-scripts/blob/unfinished/upload%20to%20hub.py)
- - Uploads user specified folder to specified repo, can create private repos too
-
  ## other recommended files
 
  - [Download models (download HF Hub models) [Oobabooga]](https://github.com/oobabooga/text-generation-webui/blob/main/download-model.py)
@@ -43,6 +42,9 @@ Feel free to send in pull requests or use this code however you'd like.\
 
  - EXL2 Private Quant
  - Allows you to quantize to exl2 using Colab. This version creates an exl2 quant to upload to a private repo. Should work on any Linux JupyterLab server with CUDA; ROCm should be supported by exl2 but is not tested.
+
+ - Upload folder to repo
+ - Uploads a user-specified folder to a specified repo and can create private repos too. Not the same as git commit and push; instead it uploads any additional files.
 
  - Download models
  - Make sure you have [requests](https://pypi.org/project/requests/) and [tqdm](https://pypi.org/project/tqdm/) installed. You can install them with '`pip install requests tqdm`'. To use the script, open a terminal and run '`python download-model.py USER/MODEL:BRANCH`'. There's also a '`--help`' flag to show the available arguments. To download from private repositories, make sure to log in using '`huggingface-cli login`' or (not recommended) the `HF_TOKEN` environment variable.
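For readers who want a rough idea of what the new "Upload folder to repo" script does, here is a minimal sketch using the `huggingface_hub` library. It is not the actual contents of `upload folder to repo.py`; the repo ID, folder path, and privacy setting are placeholders.

```python
# Minimal sketch (not the actual "upload folder to repo.py") of uploading a
# local folder to a Hugging Face repo, creating the (optionally private) repo
# first if it doesn't exist.
from huggingface_hub import HfApi

api = HfApi()  # uses the token saved by `huggingface-cli login`

repo_id = "YourUser/your-repo"    # placeholder target repo
folder = "path/to/local/folder"   # placeholder folder to upload

# Create the target repo if needed; private=True makes it a private repo.
api.create_repo(repo_id=repo_id, repo_type="model", private=True, exist_ok=True)

# Upload every file in the folder as one commit over the HTTP API; no local
# git clone is involved, which is the "not the same as git commit and push"
# part of the description above.
api.upload_folder(
    folder_path=folder,
    repo_id=repo_id,
    repo_type="model",
    commit_message="Upload folder contents",
)
```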
 
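The EXL2 Private Quant notebooks themselves are not shown in this diff. As a rough idea of the core step such a notebook presumably wraps, this hedged sketch shells out to exllamav2's convert.py; the script path, directories, bitrate, and the assumption that the notebook calls convert.py at all are illustrative, not taken from the notebook.

```python
# Hedged sketch of the quantization step an "EXL2 Private Quant" notebook
# presumably wraps: calling exllamav2's convert.py. All paths and the bitrate
# are placeholders, and the location of convert.py is an assumption.
import subprocess

model_dir = "models/source-fp16"   # placeholder: downloaded fp16 HF model
work_dir = "exl2-work"             # placeholder: working dir for intermediates
out_dir = "models/output-exl2"     # placeholder: finished exl2 quant
bpw = "4.0"                        # placeholder: target bits per weight

subprocess.run(
    [
        "python", "exllamav2/convert.py",
        "-i", model_dir,   # input model directory
        "-o", work_dir,    # working directory
        "-cf", out_dir,    # compile the finished quant into this directory
        "-b", bpw,         # target bits per weight
    ],
    check=True,
)
```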
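The "Download models" entry points at Oobabooga's download-model.py rather than code in this repo. As an illustration of the requests + tqdm approach it describes, here is a hedged single-file sketch; the repo ID, branch, and filename are placeholders, and the real script additionally handles multiple files, resuming, and CLI arguments.

```python
# Illustrative single-file sketch of the requests + tqdm download approach
# described above; placeholders only, not Oobabooga's download-model.py.
import os
import requests
from tqdm import tqdm

repo = "USER/MODEL"        # placeholder repo id
branch = "main"            # placeholder branch
filename = "config.json"   # placeholder file within the repo

# Private repos need a token (from `huggingface-cli login` or HF_TOKEN).
token = os.environ.get("HF_TOKEN")
headers = {"Authorization": f"Bearer {token}"} if token else {}

url = f"https://huggingface.co/{repo}/resolve/{branch}/{filename}"
with requests.get(url, headers=headers, stream=True, timeout=30) as r:
    r.raise_for_status()
    total = int(r.headers.get("content-length", 0))
    with open(filename, "wb") as f, tqdm(total=total, unit="B", unit_scale=True) as bar:
        for chunk in r.iter_content(chunk_size=1024 * 1024):
            f.write(chunk)
            bar.update(len(chunk))
```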