maddes8cht committed · c27922a
Parent(s): a0e0411
Update README.md

README.md CHANGED

@@ -1,56 +1,46 @@
 ---
-inference: false
-license: apache-2.0
-model_creator: tiiuae
-model_link: https://huggingface.co/tiiuae/falcon-7b
-model_name: Falcon 7B
-model_type: falcon
-pipeline_tag: text-generation
-quantized_by: maddes8cht
 datasets:
 - tiiuae/falcon-refinedweb
 language:
 - en
-
-
 ---
 [![banner](https://maddes8cht.github.io/assets/buttons/Huggingface-banner.jpg)]()
-## I am still building the structure of these descriptions.
-These will carry increasingly more content to help find the best models for a purpose.
-Tiiuae-Falcon 7B is the original foundational Falcon model from Tiiuae, converted to gguf format.

-# Falcon 7B - gguf

-<summary> Table of contents

-- [Original Model Card](#original-model-card-by-tiiuae)

-# Summary
-- Model creator: [tiiuae](https://huggingface.co/tiiuae)
-- Original model: [Falcon 7b](https://huggingface.co/tiiuae/falcon-7b)

-# Original Model Card by tiiuae
-[***Link to original model card***](https://huggingface.co/tiiuae/falcon-7b)

 # 🚀 Falcon-7B

 **Falcon-7B is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) and trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. It is made available under the Apache 2.0 license.**
@@ -113,7 +103,7 @@ You will need **at least 16GB of memory** to swiftly run inference with Falcon-7

 - **Developed by:** [https://www.tii.ae](https://www.tii.ae);
 - **Model type:** Causal decoder-only;
-- **Language(s) (NLP):** English and
 - **License:** Apache 2.0.

 ### Model Source
@@ -278,6 +268,11 @@ Falcon-7B is made available under the Apache 2.0 license.

 ## Contact

-</details>
 ---
 datasets:
 - tiiuae/falcon-refinedweb
 language:
 - en
+inference: false
+license: apache-2.0
 ---
 [![banner](https://maddes8cht.github.io/assets/buttons/Huggingface-banner.jpg)]()
+## I am still building the structure of these descriptions.

+These will contain more and more content to help you find the best models for your purposes.

+# falcon-7b - GGUF
+- Model creator: [tiiuae](https://huggingface.co/tiiuae)
+- Original model: [falcon-7b](https://huggingface.co/tiiuae/falcon-7b)

+These are GGUF-quantized models of the original Falcon 7B model by tiiuae.
+Falcon is a foundational large language model that comes in two sizes: 7B and 40B.
+# About GGUF format

+`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
+A growing list of software is using it and can therefore use this model.
+The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov.
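As a quick illustration of the container format (not part of the original card): every GGUF file starts with the 4-byte magic `GGUF`, so a downloaded file can be sanity-checked by inspecting its first bytes. A minimal sketch:

```python
import struct

def is_gguf(header: bytes) -> bool:
    """Return True if the given bytes start with the GGUF magic."""
    return header[:4] == b"GGUF"

# A GGUF header begins with the magic followed by a uint32 format
# version (the version number 3 here is purely illustrative).
sample = struct.pack("<4sI", b"GGUF", 3)
print(is_gguf(sample))       # True
print(is_gguf(b"ggml-old"))  # False
```

In practice you would feed it the start of a real file, e.g. `is_gguf(open(path, "rb").read(4))`.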
25 |
|
26 |
+
# Quantization variants
|
|
|
27 |
|
28 |
+
There is a bunch of quantized files available. How to choose the best for you:
|
29 |
|
30 |
+
# legacy quants
|
|
|
|
|
|
|
|
|
31 |
|
32 |
+
Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
|
33 |
+
Nevertheless, they are fully supported, as there are several circumstances that cause certain model not to be compatible with the modern K-quants.
|
34 |
+
Falcon 7B models cannot be quantized to K-quants.
|
35 |
|
36 |
+
# K-quants
|
37 |
|
38 |
+
K-quants are based on the idea that the quantization of certain parts affects the quality in different ways. If you quantize certain parts more and others less, you get a more powerful model with the same file size, or a smaller file size and lower memory load with comparable performance.
|
39 |
+
So, if possible, use K-quants.
|
40 |
+
With a Q6_K you should find it really hard to find a quality difference to the original model - ask your model two times the same question and you may encounter bigger quality differences.
|
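To make the size trade-off concrete, here is a rough back-of-the-envelope estimate (not from the original card; the bits-per-weight figures are assumed averages that ignore per-block scale overhead, so actual file sizes vary slightly):

```python
# Assumed average bits per weight for a few GGUF quantization types.
BITS_PER_WEIGHT = {
    "F16":  16.0,  # unquantized half precision
    "Q8_0":  8.5,  # legacy 8-bit
    "Q5_0":  5.5,  # legacy 5-bit
    "Q4_0":  4.5,  # legacy 4-bit
    "Q6_K":  6.6,  # K-quant 6-bit (not available for Falcon 7B)
}

def estimated_size_gb(n_params: float, quant: str) -> float:
    """Rough GGUF file size in GB: parameters * bits-per-weight / 8."""
    return n_params * BITS_PER_WEIGHT[quant] / 8 / 1e9

# Estimates for a 7B-parameter model such as Falcon 7B.
for quant in BITS_PER_WEIGHT:
    print(f"{quant}: ~{estimated_size_gb(7e9, quant):.1f} GB")
```

This shows why a Q4_0 file loads on machines where the F16 original would not fit in memory.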
41 |
|
|
|
|
|
42 |
|
43 |
+
# Original Model Card:
|
44 |
# ๐ Falcon-7B
|
45 |
|
46 |
**Falcon-7B is a 7B parameters causal decoder-only model built by [TII](https://www.tii.ae) and trained on 1,500B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. It is made available under the Apache 2.0 license.**
|
 - **Developed by:** [https://www.tii.ae](https://www.tii.ae);
 - **Model type:** Causal decoder-only;
+- **Language(s) (NLP):** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish);
 - **License:** Apache 2.0.

 ### Model Source
 ## Contact

+<center>
+<a href="https://maddes8cht.github.com"><img src="/assets/buttons/maddes8cht-github-io.jpg" alt="GitHub" /></a>
+<a href="https://stackexchange.com/users/26485911"><img src="https://stackexchange.com/users/flair/26485911.png" width="208" height="58" alt="profile for maddes8cht on Stack Exchange, a network of free, community-driven Q&A sites" title="profile for maddes8cht on Stack Exchange, a network of free, community-driven Q&A sites"></a>
+<a href="https://github.com/maddes8cht"><img src="/assets/buttons/github-button.jpg" alt="GitHub" /></a>
+<a href="https://huggingface.co/maddes8cht"><img src="/assets/buttons/huggingface-button.jpg" alt="HuggingFace" /></a>
+<a href="https://twitter.com/maddes1966"><img src="/assets/buttons/twitter-button.jpg" alt="Twitter" /></a>
+</center>