Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
# aya-expanse-32b-ungated - GGUF
- Model creator: https://huggingface.co/adamo1139/
- Original model: https://huggingface.co/adamo1139/aya-expanse-32b-ungated/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [aya-expanse-32b-ungated.Q2_K.gguf](https://huggingface.co/RichardErkhov/adamo1139_-_aya-expanse-32b-ungated-gguf/blob/main/aya-expanse-32b-ungated.Q2_K.gguf) | Q2_K | 11.93GB |
| [aya-expanse-32b-ungated.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/adamo1139_-_aya-expanse-32b-ungated-gguf/blob/main/aya-expanse-32b-ungated.Q3_K_S.gguf) | Q3_K_S | 13.7GB |
| [aya-expanse-32b-ungated.Q3_K.gguf](https://huggingface.co/RichardErkhov/adamo1139_-_aya-expanse-32b-ungated-gguf/blob/main/aya-expanse-32b-ungated.Q3_K.gguf) | Q3_K | 15.12GB |
| [aya-expanse-32b-ungated.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/adamo1139_-_aya-expanse-32b-ungated-gguf/blob/main/aya-expanse-32b-ungated.Q3_K_M.gguf) | Q3_K_M | 15.12GB |
| [aya-expanse-32b-ungated.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/adamo1139_-_aya-expanse-32b-ungated-gguf/blob/main/aya-expanse-32b-ungated.Q3_K_L.gguf) | Q3_K_L | 16.36GB |
| [aya-expanse-32b-ungated.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/adamo1139_-_aya-expanse-32b-ungated-gguf/blob/main/aya-expanse-32b-ungated.IQ4_XS.gguf) | IQ4_XS | 16.75GB |
| [aya-expanse-32b-ungated.Q4_0.gguf](https://huggingface.co/RichardErkhov/adamo1139_-_aya-expanse-32b-ungated-gguf/blob/main/aya-expanse-32b-ungated.Q4_0.gguf) | Q4_0 | 17.43GB |
| [aya-expanse-32b-ungated.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/adamo1139_-_aya-expanse-32b-ungated-gguf/blob/main/aya-expanse-32b-ungated.IQ4_NL.gguf) | IQ4_NL | 17.59GB |
| [aya-expanse-32b-ungated.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/adamo1139_-_aya-expanse-32b-ungated-gguf/blob/main/aya-expanse-32b-ungated.Q4_K_S.gguf) | Q4_K_S | 17.55GB |
| [aya-expanse-32b-ungated.Q4_K.gguf](https://huggingface.co/RichardErkhov/adamo1139_-_aya-expanse-32b-ungated-gguf/blob/main/aya-expanse-32b-ungated.Q4_K.gguf) | Q4_K | 18.44GB |
| [aya-expanse-32b-ungated.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/adamo1139_-_aya-expanse-32b-ungated-gguf/blob/main/aya-expanse-32b-ungated.Q4_K_M.gguf) | Q4_K_M | 18.44GB |
| [aya-expanse-32b-ungated.Q4_1.gguf](https://huggingface.co/RichardErkhov/adamo1139_-_aya-expanse-32b-ungated-gguf/blob/main/aya-expanse-32b-ungated.Q4_1.gguf) | Q4_1 | 19.19GB |
| [aya-expanse-32b-ungated.Q5_0.gguf](https://huggingface.co/RichardErkhov/adamo1139_-_aya-expanse-32b-ungated-gguf/blob/main/aya-expanse-32b-ungated.Q5_0.gguf) | Q5_0 | 20.95GB |
| [aya-expanse-32b-ungated.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/adamo1139_-_aya-expanse-32b-ungated-gguf/blob/main/aya-expanse-32b-ungated.Q5_K_S.gguf) | Q5_K_S | 20.95GB |
| [aya-expanse-32b-ungated.Q5_K.gguf](https://huggingface.co/RichardErkhov/adamo1139_-_aya-expanse-32b-ungated-gguf/blob/main/aya-expanse-32b-ungated.Q5_K.gguf) | Q5_K | 21.47GB |
| [aya-expanse-32b-ungated.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/adamo1139_-_aya-expanse-32b-ungated-gguf/blob/main/aya-expanse-32b-ungated.Q5_K_M.gguf) | Q5_K_M | 21.47GB |
| [aya-expanse-32b-ungated.Q5_1.gguf](https://huggingface.co/RichardErkhov/adamo1139_-_aya-expanse-32b-ungated-gguf/blob/main/aya-expanse-32b-ungated.Q5_1.gguf) | Q5_1 | 22.71GB |
| [aya-expanse-32b-ungated.Q6_K.gguf](https://huggingface.co/RichardErkhov/adamo1139_-_aya-expanse-32b-ungated-gguf/blob/main/aya-expanse-32b-ungated.Q6_K.gguf) | Q6_K | 24.68GB |
| [aya-expanse-32b-ungated.Q8_0.gguf](https://huggingface.co/RichardErkhov/adamo1139_-_aya-expanse-32b-ungated-gguf/blob/main/aya-expanse-32b-ungated.Q8_0.gguf) | Q8_0 | 31.97GB |
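Any of the files above can be downloaded individually and run with a GGUF-compatible runtime such as llama.cpp. As a minimal sketch (assuming the `llama-cpp-python` and `huggingface_hub` packages are installed, and using the Q4_K_M filename from the table above; adjust the quant to your hardware), loading and chatting with the model could look like this:

```python
# Minimal sketch: download one quant from this repo and run it with llama-cpp-python.
# Assumes `pip install llama-cpp-python huggingface_hub`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

repo_id = "RichardErkhov/adamo1139_-_aya-expanse-32b-ungated-gguf"
filename = "aya-expanse-32b-ungated.Q4_K_M.gguf"  # ~18.4 GB, see table above

# Download (or reuse a cached copy of) the GGUF file
model_path = hf_hub_download(repo_id=repo_id, filename=filename)

# Load the model; n_gpu_layers=-1 offloads all layers to the GPU if one is available
llm = Llama(model_path=model_path, n_ctx=8192, n_gpu_layers=-1)

# llama-cpp-python applies the chat template stored in the GGUF metadata
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short letter to my mother telling her how much I love her."}],
    max_tokens=200,
    temperature=0.3,
)
print(out["choices"][0]["message"]["content"])
```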
Original model description:
---
inference: false
library_name: transformers
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
- el
- fa
- pl
- id
- cs
- he
- hi
- nl
- ro
- ru
- tr
- uk
- vi
license: cc-by-nc-4.0
---
# Model Card for Aya-Expanse-32B Ungated
Aya-Expanse 32B, but not gated!
<img src="aya-expanse-32B.png" width="650" style="margin-left: auto; margin-right: auto; display: block;"/>
**Aya Expanse 32B** is an open-weight research release of a model with highly advanced multilingual capabilities. It focuses on pairing a highly performant pre-trained [Command family](https://huggingface.co/CohereForAI/c4ai-command-r-plus) of models with the result of a year’s dedicated research from [Cohere For AI](https://cohere.for.ai/), including [data arbitrage](https://arxiv.org/pdf/2408.14960), [multilingual preference training](https://arxiv.org/abs/2407.02552), [safety tuning](https://arxiv.org/abs/2406.18682), and [model merging](https://arxiv.org/abs/2410.10801). The result is a powerful multilingual large language model serving 23 languages.
This model card corresponds to the 32-billion version of the Aya Expanse model. We also released an 8-billion version which you can find [here](https://huggingface.co/CohereForAI/aya-expanse-8B).
- Developed by: [Cohere For AI](https://cohere.for.ai/)
- Point of Contact: Cohere For AI: [cohere.for.ai](https://cohere.for.ai/)
- License: [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license); also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy)
- Model: Aya Expanse 32B
- Model Size: 32 billion parameters
### Supported Languages
We cover 23 languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese.
### Try it: Aya Expanse in Action
Use the [Cohere playground](https://dashboard.cohere.com/playground/chat) or our [Hugging Face Space](https://huggingface.co/spaces/CohereForAI/aya_expanse) for interactive exploration.
### How to Use Aya Expanse
Install the transformers library and load Aya Expanse 32B as follows:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereForAI/aya-expanse-32b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the chat template
messages = [{"role": "user", "content": "Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz"}]
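# (Turkish: "Write a letter to my mother telling her how much I love her")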
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
## <BOS_TOKEN><|START_OF_TURN_TOKEN|><|USER_TOKEN|>Anneme onu ne kadar sevdiğimi anlatan bir mektup yaz<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>
gen_tokens = model.generate(
input_ids,
max_new_tokens=100,
do_sample=True,
temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
```
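The snippet above decodes the full sequence, including the prompt tokens. If you only want the model's reply, one option (a small variation on the code above, not part of the original card) is to slice off the prompt before decoding:

```python
# Decode only the newly generated tokens, dropping the prompt and special tokens
reply = tokenizer.decode(gen_tokens[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(reply)
```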
### Example Notebooks
**Fine-Tuning:**
- [Detailed Fine-Tuning Notebook](https://colab.research.google.com/drive/1ryPYXzqb7oIn2fchMLdCNSIH5KfyEtv4).
**Community-Contributed Use Cases:**
The following notebooks contributed by *Cohere For AI Community* members show how Aya Expanse can be used for different use cases:
- [Multilingual Writing Assistant](https://colab.research.google.com/drive/1SRLWQ0HdYN_NbRMVVUHTDXb-LSMZWF60)
- [AyaMCooking](https://colab.research.google.com/drive/1-cnn4LXYoZ4ARBpnsjQM3sU7egOL_fLB?usp=sharing)
- [Multilingual Question-Answering System](https://colab.research.google.com/drive/1bbB8hzyzCJbfMVjsZPeh4yNEALJFGNQy?usp=sharing)
## Model Details
**Input**: The model takes text as input only.
**Output**: The model generates text only.
**Model Architecture**: Aya Expanse 32B is an auto-regressive language model that uses an optimized transformer architecture. Post-training includes supervised finetuning, preference training, and model merging.
**Languages covered**: The model is particularly optimized for multilinguality and supports the following languages: Arabic, Chinese (simplified & traditional), Czech, Dutch, English, French, German, Greek, Hebrew, Hindi, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Spanish, Turkish, Ukrainian, and Vietnamese
**Context length**: 128K
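At 32 billion parameters, the unquantized checkpoint is large (roughly 64 GB in 16-bit precision), so in practice you will usually want to load it in half precision and let Accelerate shard it across available devices. A hedged sketch follows (assuming `torch` and `accelerate` are installed; the `from_pretrained` arguments used here are standard Transformers options, not specific to this model):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CohereForAI/aya-expanse-32b"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Load in bfloat16 and spread the weights across available GPUs/CPU via Accelerate
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```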
### Evaluation
We evaluated Aya Expanse 8B against Gemma 2 9B, Llama 3.1 8B, Ministral 8B, and Qwen 2.5 7B using m-ArenaHard, a dataset based on the [Arena-Hard-Auto dataset](https://huggingface.co/datasets/lmarena-ai/arena-hard-auto-v0.1) and translated into the 23 languages supported by Aya Expanse. Win-rates were determined using gpt-4o-2024-08-06 as a judge. For a conservative benchmark, we report results from gpt-4o-2024-08-06, though gpt-4o-mini scores showed even stronger performance.
The m-ArenaHard dataset, used to evaluate Aya Expanse’s capabilities, is publicly available [here](https://huggingface.co/datasets/CohereForAI/m-ArenaHard).
<img src="winrates_marenahard_complete.png" width="650" style="margin-left: auto; margin-right: auto; display: block;"/>
### Model Card Contact
For errors or additional questions about details in this model card, contact [email protected].
### Terms of Use
We hope that the release of this model will make community-based research efforts more accessible, by releasing the weights of a highly performant multilingual model to researchers all over the world. This model is governed by a [CC-BY-NC](https://cohere.com/c4ai-cc-by-nc-license) License with an acceptable use addendum, and also requires adhering to [C4AI's Acceptable Use Policy](https://docs.cohere.com/docs/c4ai-acceptable-use-policy).