---
license: apache-2.0
datasets:
- nicholasKluge/fine-tuning-instruct-aira
- Dahoas/synthetic-instruct-gptj-pairwise
- databricks/databricks-dolly-15k
- HuggingFaceH4/instruction-dataset
language:
- en
metrics:
- bleu
library_name: transformers
tags:
- alignment
- instruction tuned
- text generation
- conversation
- assistant
pipeline_tag: text-generation
widget:
- text: <|startoftext|>Hello! What is your name?<|endoftext|>
  example_title: Greetings
- text: <|startoftext|>Can you explain what is Machine Learning?<|endoftext|>
  example_title: Machine Learning
- text: <|startoftext|>Do you know anything about virtue ethics?<|endoftext|>
  example_title: Ethics
- text: <|startoftext|>How can I make my girlfriend happy?<|endoftext|>
  example_title: Advice
inference:
  parameters:
    temperature: 0.2
    top_k: 50
    max_length: 200
---
# Aira-Instruct-1.5B
`Aira-Instruct-1.5B` is an instruction-tuned GPT-style model based on [GPT-2](https://huggingface.co/gpt2). The model was trained on a dataset of `prompt`, `completion` pairs generated via the [Self-Instruct](https://github.com/yizhongw/self-instruct) framework. Instruction tuning for `Aira-Instruct-1.5B` was achieved via conditional text generation.
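As a rough illustration only (an assumption about the data layout, not the actual training notebook), conditional text generation here amounts to concatenating each prompt and completion with the model's special tokens and training the model to predict the next token over the full sequence:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('nicholasKluge/Aira-Instruct-1.5B')

# Hypothetical prompt/completion pair; real rows come from the datasets listed below
prompt = "What is a language model?"
completion = "A language model is a probability distribution over a vocabulary."

# The user turn is wrapped in <|startoftext|> ... <|endoftext|>, and the completion
# is closed with a final <|endoftext|>, matching the format shown in the Usage section
text = tokenizer.bos_token + prompt + tokenizer.eos_token + completion + tokenizer.eos_token
input_ids = tokenizer(text, return_tensors="pt").input_ids

# For causal language modeling, the labels are the input ids themselves
# (the shift by one position happens inside the model)
labels = input_ids.clone()
```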
The dataset used to train this model combines the following sources of data: the [`synthetic-instruct-gptj-pairwise`](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise) dataset, the [`databricks_dolly_15k`](https://huggingface.co/datasets/HuggingFaceH4/databricks_dolly_15k) dataset, the [`instruction-dataset`](https://huggingface.co/datasets/HuggingFaceH4/instruction-dataset) dataset, and a subset of [Aira's](https://github.com/Nkluge-correa/Aira-EXPERT) fine-tuning dataset focused on Q&A related to Ethics, AI, AI safety, and other related topics. The dataset is available in both Portuguese and English.

Check out our Gradio demo in [Spaces](https://huggingface.co/spaces/nicholasKluge/Aira-Demo).
## Details
- **Size:** 1,557,614,400 total parameters
- **Dataset:** [Instruct-Aira Dataset](https://huggingface.co/datasets/nicholasKluge/fine-tuning-instruct-aira)
- **Language:** English
- **Number of Epochs:** 2
- **Batch size:** 4
- **Optimizer:** `torch.optim.AdamW` (warmup_steps = 1e2, learning_rate = 5e-4, epsilon = 1e-8); a configuration sketch is given after the loss table below
- **GPU:** 1 NVIDIA A100-SXM4-40GB
- **Emissions:** 0.88 KgCO2 (United States of America)
- **Total Energy Consumption:** 1.94 kWh
| Epoch | Training Loss | Validation Loss |
|---|---|---|
| 1 | 0.723266 | 0.636748 |
| 2 | 0.488247 | 0.594000 |
This repository contains the notebook used to train this model.
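That notebook is the authoritative reference for the training loop; as a minimal sketch only, the hyperparameters above correspond to an AdamW setup along the lines below. The linear warmup scheduler and the step count are assumptions for illustration, not values taken from the notebook.

```python
import torch
from transformers import AutoModelForCausalLM, get_linear_schedule_with_warmup

model = AutoModelForCausalLM.from_pretrained('nicholasKluge/Aira-Instruct-1.5B')

# Values from the Details list above; num_training_steps is a placeholder that
# in practice depends on the dataset size, the batch size of 4, and the 2 epochs
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4, eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=100, num_training_steps=10_000
)
```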
## Usage
Two special tokens are used to mark the user side of the interaction and the model's response:
`<|startoftext|>`What is a language model?`<|endoftext|>`A language model is a probability distribution over a vocabulary.`<|endoftext|>`
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the tokenizer and the model, then move the model to the available device
tokenizer = AutoTokenizer.from_pretrained('nicholasKluge/Aira-Instruct-1.5B')
aira = AutoModelForCausalLM.from_pretrained('nicholasKluge/Aira-Instruct-1.5B')

aira.eval()
aira.to(device)

question = input("Enter your question: ")

# Wrap the question in the special tokens described above
inputs = tokenizer(tokenizer.bos_token + question + tokenizer.eos_token, return_tensors="pt").to(device)

# Sample two candidate responses
responses = aira.generate(**inputs,
                          bos_token_id=tokenizer.bos_token_id,
                          pad_token_id=tokenizer.pad_token_id,
                          eos_token_id=tokenizer.eos_token_id,
                          do_sample=True,
                          top_k=50,
                          max_length=200,
                          top_p=0.95,
                          temperature=0.7,
                          num_return_sequences=2)

print(f"Question: 👤 {question}\n")

# Decode each response and strip the echoed question
for i, response in enumerate(responses):
    print(f'Response {i+1}: 🤖 {tokenizer.decode(response, skip_special_tokens=True).replace(question, "")}')
```
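The call above samples two candidate responses using top-k (50) and nucleus (top_p = 0.95) filtering at temperature 0.7; lowering the temperature or top_p trades diversity for more deterministic output.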
The model will output something like:
```markdown
>>> Question: 👤 Hello! What is your name?
>>> Response 1: 🤖 Hi there! I am Aira, a chatbot designed to answer questions about AI ethics and AI safety. If you need assistance navigating our conversation, please feel free to ask!
>>> Response 2: 🤖 Hi there! My name is Aira, and I'm a chatbot designed to answer questions related to AI ethics and AI Safety. If you need assistance, feel free to ask, and I'll be happy to help you out.
```
## Limitations
🤥 Generative models can perpetuate the generation of pseudo-informative content, that is, false information that may appear truthful. For example, multi-modal generative models can be used to create images with untruthful content, while language models for text generation can automate the generation of misinformation.
🤬 In certain types of tasks, generative models can generate toxic and discriminatory content inspired by historical stereotypes against sensitive attributes (for example, gender, race, and religion). Unfiltered public datasets may also contain inappropriate content, such as pornography, racist images, and social stereotypes, which can contribute to unethical biases in generative models. Furthermore, when prompted with non-English languages, some generative models may perform poorly.
## Cite as 🤗
```latex
@misc{nicholas22aira,
  doi = {10.5281/zenodo.6989727},
  url = {https://huggingface.co/nicholasKluge/Aira-Instruct-1.5B},
  author = {Nicholas Kluge Corrêa and Carolina Del Pino},
  title = {Aira},
  year = {2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
}
```
## License
`Aira-Instruct-1.5B` is licensed under the Apache License, Version 2.0. See the [LICENSE](LICENSE) file for more details.