Tags: Transformers · English · ctranslate2 · int8 · float16 · Inference Endpoints
michaelfeil committed in commit d3657a8 (1 parent: 7b61394)

Upload togethercomputer/RedPajama-INCITE-Chat-7B-v0.1 ctranslate fp16 weights

Files changed (1): README.md (+5 −5)
README.md CHANGED

@@ -24,18 +24,18 @@ inference:
   max_new_tokens: 128
 ---
 # # Fast-Inference with Ctranslate2
-Speedup inference by 2x-8x using int8 inference in C++
+Speedup inference while reducing memory by 2x-4x using int8 inference in C++ on CPU or GPU.
 
 quantized version of [togethercomputer/RedPajama-INCITE-Chat-7B-v0.1](https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-7B-v0.1)
 ```bash
-pip install hf-hub-ctranslate2>=2.0.6 ctranslate2>=3.13.0
+pip install hf-hub-ctranslate2>=2.0.6
 ```
 Converted on 2023-05-19 using
 ```
-ct2-transformers-converter --model togethercomputer/RedPajama-INCITE-Chat-7B-v0.1 --output_dir /home/michael/tmp-ct2fast-RedPajama-INCITE-Chat-7B-v0.1 --force --copy_files tokenizer.json README.md tokenizer_config.json generation_config.json special_tokens_map.json .gitattributes --quantization float16
+ct2-transformers-converter --model togethercomputer/RedPajama-INCITE-Chat-7B-v0.1 --output_dir /home/feil_m/tmp-ct2fast-RedPajama-INCITE-Chat-7B-v0.1 --force --copy_files tokenizer.json README.md tokenizer_config.json generation_config.json special_tokens_map.json .gitattributes --quantization float16
 ```
 
-Checkpoint compatible to [ctranslate2](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
+Checkpoint compatible to [ctranslate2>=3.13.0](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2>=2.0.6](https://github.com/michaelfeil/hf-hub-ctranslate2)
 - `compute_type=int8_float16` for `device="cuda"`
 - `compute_type=int8` for `device="cpu"`
 
@@ -53,7 +53,7 @@ model = GeneratorCT2fromHfHub(
   tokenizer=AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-Chat-7B-v0.1")
 )
 outputs = model.generate(
-    text=["How do you call a fast Flan-ingo?", "User: How are you doing?"],
+    text=["How do you call a fast Flan-ingo?", "User: How are you doing? Bot:"],
 )
 print(outputs)
 ```
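
The `int8` compute types referenced in the diff work by storing weights as 8-bit integers together with a floating-point scale, which is what cuts memory roughly 2x-4x versus fp32/fp16. A minimal pure-Python sketch of symmetric per-tensor int8 quantization — an illustration of the idea only, not CTranslate2's actual kernel:

```python
def quantize_int8(weights):
    """Map float weights to the int8 range [-127, 127] with a per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale


def dequantize_int8(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [x * scale for x in q]


weights = [0.5, -1.27, 0.003, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize_int8(q, scale)
print(q)       # integers in [-127, 127]
print(approx)  # close to the original weights, within one scale step
```

The maximum round-trip error is bounded by the scale (here `1.27 / 127 = 0.01`), which is why int8 inference trades a small accuracy loss for the memory and speed gains described above.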