IsmailH committed · verified · Commit 45666c8 · 1 Parent(s): 74c014c

Update README.md

Files changed (1): README.md (+55 -53)
README.md CHANGED
@@ -1,6 +1,6 @@
  ---
  language:
- - fr
  license: cc-by-nc-sa-4.0
  pipeline_tag: text-generation
  base_model: tiiuae/falcon-7b
@@ -9,23 +9,23 @@ tags:
  - conversational
  widget:
  - text: |-
- - Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
- - Bonjour Camille,
  example_title: Request for a recipe
  group: Dash
  - text: |-
- [Intervenant 1:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
- [Intervenant 2:] Bonjour Camille,
  example_title: Request for a recipe
  group: Intervenant
  - text: |-
- [Camille:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
- [Dominique:] Bonjour Camille,
  example_title: Request for a recipe
  group: FirstName
  - text: |-
- [Camille Durand:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
- [Dominique Petit:] Bonjour Camille,
  example_title: Request for a recipe
  group: Named
  inference:
@@ -35,21 +35,21 @@ inference:
  top_k: 10
  ---

- # Claire-7B-0.1

  **Claire-7B-0.1 is a 7B parameter causal decoder-only model built by [LINAGORA](https://labs.linagora.com/) and [OpenLLM-France](https://github.com/OpenLLM-France)**
- **adapted from [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on French conversational data.**

- Quantized versions in GGUF format can be found in [TheBloke/Claire-7B-0.1-GGUF](https://huggingface.co/TheBloke/Claire-7B-0.1-GGUF).

- Claire-7B-0.1 is a pretrained language model designed to be attuned to the dynamics of linguistic interactions in dialogue. Without further training, its expected use is to generate continuations of dialogues. Its main purpose is to serve as a base model for fine-tuning on dialogue generation (e.g., chat) and dialogue understanding (e.g., meeting summarization) tasks. Please note that due to its training, the model is prone to generate dialogues with disfluencies and other constructions common to spoken language.

  * [Typical usage](#typical-usage)
  * [Typical prompts](#typical-prompts)
  * [Training Details](#training-details)
  * [Training Data](#training-data)
  * [Training Procedure](#training-procedure)
- * [Evaluation](#evaluation)
  * [License](#license)
  * [Acknowledgements](#acknowledgements)
  * [Contact](#contact)
@@ -61,7 +61,7 @@ Claire-7B-0.1 is a pretrained language model designed to be attuned to the dynam
  import transformers
  import torch

- model_name = "OpenLLM-France/Claire-7B-0.1"

  tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
  model = transformers.AutoModelForCausalLM.from_pretrained(model_name,
@@ -80,8 +80,8 @@ generation_kwargs = dict(
  )

  prompt = """\
- - Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
- - Bonjour Camille,\
  """
  completions = pipeline(prompt, **generation_kwargs)
  for completion in completions:
@@ -89,13 +89,15 @@ for completion in completions:
  ```
  This will print something like:
  ```
- - Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
- - Bonjour Camille, […] je vous prépare un plat de saison, une daube provençale.
- - Ah je ne connais pas cette recette.
- - C'est très facile à préparer, vous n'avez qu'à mettre de l'eau dans une marmite, y mettre de l'oignon émincé, des carottes coupées en petits morceaux, et vous allez mettre votre viande de bœuf coupé en petits morceaux également.
- - Je n'ai jamais cuisiné de viande de bœuf, mais c'est vrai que ça a l'air bien facile.
- - Vous n'avez plus qu'à laisser mijoter, et ensuite il sera temps de servir les clients.
- - Très bien.
  ```

  You will need at least 6GB of VRAM to run inference using 4bit quantization (16GB of VRAM without 4bit quantization).
@@ -104,26 +106,26 @@ If you have trouble running this code, make sure you have recent versions of `to

  ### Typical prompts

- Claire-7B-0.1 was trained on diarized French conversations. During training, the dialogues were normalized in several formats. The possible formats for expected prompts are as follows:

  A monologue can be specified as a single line prompt (though keep in mind that Claire might still return a dialogue because of its training):
  ```python
- prompt = "Mesdames et messieurs les députés, chers collègues, bonsoir. Vous l'aurez peut-être remarqué, je cite rarement"
  ```

  A dialogue between two speakers can be specified with one line per speech turn starting with a dash:
  ```python
  prompt = """\
- - Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
  - Bonjour Camille,\
  """
  ```

- A dialogue or multilogue (with two or more speakers) can be specified with lines that start with `[Intervenant X:]` where `X` is a number:
  ```python
  prompt = """\
- [Intervenant 1:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
- [Intervenant 2:] Bonjour Camille,\
  """
  ```
@@ -131,8 +133,8 @@ A dialogue or multilogue with named speakers can be specified with lines that st
  where `SpeakerName` can be a first name, a first and a last name, a nickname, a title…
  ```python
  prompt = """\
- [Mme Camille Durand:] Bonjour Dominique, qu'allez-vous nous cuisiner aujourd'hui ?
- [Mr. Dominique Petit:] Bonjour Camille,\
  """
  ```
@@ -140,40 +142,40 @@ prompt = """\

  ### Training Data

- The training dataset is available at [OpenLLM-France/Claire-Dialogue-French-0.1](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-French-0.1)
- and described in ["The Claire French Dialogue Dataset" (2023)](https://arxiv.org/abs/2311.16840).

- Claire-7B-0.1 was tuned from Falcon-7b on the following data distribution:

  | **Data type** | **Words** | **Training Sampling Weight** | **Sources** |
  |-------------------------------|------------|------------------------------|-----------------------------------------------------|
- | Parliamentary Proceedings | 135M | 35% | Assemblée Nationale |
- | Theatre | 16M | 18% | Théâtre Classique, Théâtre Gratuit |
- | Interviews | 6.4M | 29% | TCOF, CFPP, CFPB (ORFEO), ACSYNT, PFC, Valibel (ORFEO), ESLO |
- | Free Conversations | 2.2M | 10% | CRFP (ORFEO), OFROM (ORFEO), CID, Rhapsodie, ParisStories, PFC, CLAPI, C-ORAL-ROM (ORFEO), LinTO, ESLO |
- | Meetings | 1.2M | 5% | SUMM-RE, LinTO, Réunions de travail (ORFEO) |
- | Debates | 402k | <2% | FREDSum, ESLO |
- | Assistance | 159k | <1% | Fleuron (ORFEO), Accueil UBS, OTG, ESLO |
- | Presentation, Formal Address | 86k | <0.5% | Valibel (ORFEO), LinTO, ESLO |

  Training data was augmented with the following techniques:
  * varying the format used to indicate speech turns (dashes or [XXX:])
- * substituting [Intervenant X:] for [SpeakerName:] or vice versa, where [SpeakerName:] might be a real name or a randomly generated name
  * removing punctuation marks and/or casing (to prepare the model for transcripts produced by some Automatic Speech Recognition systems)

  Long conversations were truncated at a maximum of 2048 tokens. Where possible, they were split between speaker turns.

- While the model has been trained and evaluated only on French dialogues, it may be able to generate conversations in other languages from the original Falcon-7b training data.


  ### Training Procedure

  The training code is available at [https://github.com/OpenLLM-France/Lit-Claire](https://github.com/OpenLLM-France/Lit-Claire).

- Claire-7B-0.1 is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
  See [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b) for more details.

- Claire-7B-0.1 was trained on 1 A100 80GB GPU for about 50 GPU hours.

  Hyperparameters were the following:
  | **Hyperparameter** | **Value** |
@@ -188,9 +190,10 @@ Hyperparameters were the following:
  | Dropout | 0.05 |
  | gradient clipping | 1 |

  ## Evaluation

- To evaluate Claire-7B-0.1’s ability to generate natural sounding, French conversations, we compared its responses to a variety of prompts with those of three other models:
  * [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b),
  * [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
  * [Claire-Mistral-7B-0.1](https://huggingface.co/OpenLLM-France/Claire-Mistral-7B-0.1) (a version of Mistral-7B-v0.1 adapted in the same fashion as Claire-7B-0.1)
@@ -205,7 +208,6 @@ Our results confirm that continual pre-training of Falcon-7b and Mistral-7B-v0.1

  Ranking results also reveal a clear subjective preference for Claire-7B-0.1,
  as shown in the following table:
- <!--| | **Claire-Falcon** | **Claire-Mistral** | **Falcon** | **Mistral** | -->
  | | <span style="font-weight: normal">... over</span><br /> **Claire-Falcon** | <span style="font-weight: normal">... over</span><br /> **Claire-Mistral** | <span style="font-weight: normal">... over</span><br /> **Falcon** | <span style="font-weight: normal">... over</span><br /> **Mistral** |
  |--------------------------------------|----------------------|-----------------------|---------------|---------------------|
  | prefer<br /> **Claire-Falcon** ... | | **62.2%** | **63.9%** | **83.8%** |
@@ -220,20 +222,20 @@ as shown in the following table:
  and "Claire-Mistral", for [Claire-Mistral-7B-0.1](https://huggingface.co/OpenLLM-France/Claire-Mistral-7B-0.1).)

  Please note that the model can generate disfluencies and humorous responses as a result of its training on spoken and theatrical text.
-

  ## License

  Given that some of the corpora used for training are only available under CC-BY-NC-SA licenses,
- Claire-7B-0.1 is made available under the [CC-BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/).

- You can find a variant of this model published under the Apache 2.0 license at [OpenLLM-France/Claire-7B-Apache-0.1](https://huggingface.co/OpenLLM-France/Claire-7B-Apache-0.1).

  ## Acknowledgements

  This work was performed using HPC resources from GENCI–IDRIS (Grant 2023-AD011014561).

- Claire-7B-0.1 was created by members of [LINAGORA](https://labs.linagora.com/) (in alphabetical order): Ismaïl Harrando, Julie Hunter, Jean-Pierre Lorré, Jérôme Louradour, Michel-Marie Maudet, Virgile Rennard, Guokan Shang.

  Special thanks to partners from the OpenLLM-France community, especially Christophe Cerisara (LORIA), Pierre-Carl Langlais and Anastasia Stasenko (OpSci), and Pierre Colombo, for valuable advice.
 
  ---
  language:
+ - en
  license: cc-by-nc-sa-4.0
  pipeline_tag: text-generation
  base_model: tiiuae/falcon-7b
 
  - conversational
  widget:
  - text: |-
+ - Hello Alice, what are you cooking for us today?
+ - Hello Bob,
  example_title: Request for a recipe
  group: Dash
  - text: |-
+ [Intervenant 1:] Hello Alice, what are you cooking for us today?
+ [Intervenant 2:] Hello Bob,
  example_title: Request for a recipe
  group: Intervenant
  - text: |-
+ [Bob:] Hello Alice, what are you cooking for us today?
+ [Alice:] Hello Bob,
  example_title: Request for a recipe
  group: FirstName
  - text: |-
+ [Bob Brown:] Hello Alice, what are you cooking for us today?
+ [Alice Green:] Hello Bob,
  example_title: Request for a recipe
  group: Named
  inference:
 
  top_k: 10
  ---

+ # Claire-7B-EN-0.1

  **Claire-7B-EN-0.1 is a 7B parameter causal decoder-only model built by [LINAGORA](https://labs.linagora.com/) and [OpenLLM-France](https://github.com/OpenLLM-France)**
+ **adapted from [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on English conversational data.**

+ <!-- Quantized versions in GGUF format can be found in [TheBloke/Claire-7B-0.1-GGUF](https://huggingface.co/TheBloke/Claire-7B-0.1-GGUF). -->

+ Claire-7B-EN-0.1 is a pretrained language model designed to be attuned to the dynamics of linguistic interactions in dialogue. Without further training, its expected use is to generate continuations of dialogues. Its main purpose is to serve as a base model for fine-tuning on dialogue generation (e.g., chat) and dialogue understanding (e.g., meeting summarization) tasks. Please note that due to its training, the model is prone to generate dialogues with disfluencies and other constructions common to spoken language.

  * [Typical usage](#typical-usage)
  * [Typical prompts](#typical-prompts)
  * [Training Details](#training-details)
  * [Training Data](#training-data)
  * [Training Procedure](#training-procedure)
+ <!-- * [Evaluation](#evaluation) -->
  * [License](#license)
  * [Acknowledgements](#acknowledgements)
  * [Contact](#contact)
 
  import transformers
  import torch

+ model_name = "OpenLLM-France/Claire-7B-EN-0.1"

  tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
  model = transformers.AutoModelForCausalLM.from_pretrained(model_name,
 
  )

  prompt = """\
+ - Hello Alice, what are you cooking for us today?
+ - Hello Bob,\
  """
  completions = pipeline(prompt, **generation_kwargs)
  for completion in completions:
 
  ```
  This will print something like:
  ```
+ - Hello Alice, what are you cooking for us today?
+ - Hello Bob, […] I'm going to make beef and vegetables.
+ - That sounds great. What type of vegetables are you going to make?
+ - I'm thinking of making a broccoli salad and steamed potatoes.
+ - I love broccoli and potatoes, especially together. Do you plan to make a dressing or a mayo for the broccoli?
+ - Yes, I have to make a dressing. How about some mayo for the potatoes?
+ - I don't know if I like the sound of that, but go for it. You're the chef! I'll try some.
+ - I'm sure you will.
+ - I'll try some.
  ```

  You will need at least 6GB of VRAM to run inference using 4bit quantization (16GB of VRAM without 4bit quantization).
 

  ### Typical prompts

+ Claire-7B-EN-0.1 was trained on English conversations. During training, the dialogues were normalized in several formats. The possible formats for expected prompts are as follows:

  A monologue can be specified as a single line prompt (though keep in mind that Claire might still return a dialogue because of its training):
  ```python
+ prompt = "Ladies and gentlemen, welcome aboard the S.S. Anne! We will be leaving in"
  ```

  A dialogue between two speakers can be specified with one line per speech turn starting with a dash:
  ```python
  prompt = """\
+ - Hello Alice, what are you cooking for us today?
  - Hello Bob,\
  """
  ```

+ A dialogue or multilogue (with two or more speakers) can be specified with lines that start with `[Speaker X:]` where `X` is a number:
  ```python
  prompt = """\
+ [Speaker 1:] Hello Alice, what are you cooking for us today?
+ [Speaker 2:] Hello Bob,\
  """
  ```

  where `SpeakerName` can be a first name, a first and a last name, a nickname, a title…
  ```python
  prompt = """\
+ [Bob:] Hello Alice, what are you cooking for us today?
+ [Alice:] Hello Bob,\
  """
  ```
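All of the prompt formats above can be produced from a single list of speech turns. Below is a minimal sketch of such a conversion; the `format_prompt` helper and its style names are illustrative, not part of the Claire codebase:

```python
def format_prompt(turns, style="dash"):
    """Render (speaker, text) pairs in one of the prompt formats above.

    style: "dash" for "- text", "speaker" for "[Speaker N:] text",
    or "name" for "[SpeakerName:] text".
    """
    speaker_ids = {}  # stable speaker -> number mapping for the "speaker" style
    lines = []
    for speaker, text in turns:
        if style == "dash":
            lines.append(f"- {text}")
        elif style == "speaker":
            n = speaker_ids.setdefault(speaker, len(speaker_ids) + 1)
            lines.append(f"[Speaker {n}:] {text}")
        else:
            lines.append(f"[{speaker}:] {text}")
    return "\n".join(lines)

turns = [("Bob", "Hello Alice, what are you cooking for us today?"),
         ("Alice", "Hello Bob,")]
print(format_prompt(turns, "speaker"))
# [Speaker 1:] Hello Alice, what are you cooking for us today?
# [Speaker 2:] Hello Bob,
```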

  ### Training Data

+ The training dataset is available at [OpenLLM-France/Claire-Dialogue-English-0.1](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-English-0.1).
+ <!-- and described in ["The Claire French Dialogue Dataset" (2023)](https://arxiv.org/abs/2311.16840). -->

+ Claire-7B-EN-0.1 was tuned from Falcon-7b on the following data distribution:

  | **Data type** | **Words** | **Training Sampling Weight** | **Sources** |
  |-------------------------------|------------|------------------------------|-----------------------------------------------------|
+ | Broadcast | 720M | 43% | MediaSum |
+ | Parliamentary proceedings | 56M | 27% | Europarl |
+ | Assistance | 53M | 13% | ReDial, OpenDialKG, ABCD, AirDialog, MULTIWOZ2_2, MulDoGO |
+ | Misc | 10M | 10% | British National Corpus (BNC) |
+ | Spoken dialogue | 4.7M | 4.6% | Charlotte, Switchboard |
+ | Meetings | 1.5M | <2% | AMI, ICSI |
+ | Free Chat | 3.6M | <1% | Chit-Chat, Daily Dialog |

  Training data was augmented with the following techniques:
  * varying the format used to indicate speech turns (dashes or [XXX:])
+ * substituting [Speaker X:] for [SpeakerName:] or vice versa, where [SpeakerName:] might be a real name or a randomly generated name
  * removing punctuation marks and/or casing (to prepare the model for transcripts produced by some Automatic Speech Recognition systems)
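As an illustration of the last augmentation, stripping punctuation and/or casing from a speech turn might look like the following sketch (illustrative only; the `asr_style` helper is not the actual Lit-Claire preprocessing):

```python
import string

def asr_style(text, drop_punct=True, drop_case=True):
    """Roughly mimic raw ASR transcripts by removing punctuation and/or casing."""
    if drop_punct:
        # delete every ASCII punctuation character
        text = text.translate(str.maketrans("", "", string.punctuation))
    if drop_case:
        text = text.lower()
    return text

print(asr_style("Hello Bob, what's cooking?"))  # hello bob whats cooking
```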

  Long conversations were truncated at a maximum of 2048 tokens. Where possible, they were split between speaker turns.
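The truncation step can be sketched as follows, using a whitespace word count as a stand-in for the Falcon tokenizer (the `split_conversation` helper and the counting heuristic are illustrative assumptions, not the actual training code):

```python
def split_conversation(turns, max_tokens=2048, count=lambda t: len(t.split())):
    """Greedily pack speech turns into chunks of at most max_tokens,
    splitting only at turn boundaries; a single over-long turn is kept whole."""
    chunks, current, size = [], [], 0
    for turn in turns:
        n = count(turn)
        if current and size + n > max_tokens:
            chunks.append(current)   # close the current chunk at a turn boundary
            current, size = [], 0
        current.append(turn)
        size += n
    if current:
        chunks.append(current)
    return chunks

turns = ["- one two three", "- four five six", "- seven eight"]
print(split_conversation(turns, max_tokens=8))
# [['- one two three', '- four five six'], ['- seven eight']]
```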

+ While the model has been trained and evaluated only on English dialogues, it may be able to generate conversations in other languages from the original Falcon-7b training data.


  ### Training Procedure

  The training code is available at [https://github.com/OpenLLM-France/Lit-Claire](https://github.com/OpenLLM-France/Lit-Claire).

+ Claire-7B-EN-0.1 is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
  See [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b) for more details.

+ Claire-7B-EN-0.1 was trained on 1 A100 80GB GPU for about 50 GPU hours.

  Hyperparameters were the following:
  | **Hyperparameter** | **Value** |
 
  | Dropout | 0.05 |
  | gradient clipping | 1 |

+ <!--
  ## Evaluation

+ To evaluate Claire-7B-EN-0.1’s ability to generate natural-sounding French conversations, we compared its responses to a variety of prompts with those of three other models:
  * [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b),
  * [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
  * [Claire-Mistral-7B-0.1](https://huggingface.co/OpenLLM-France/Claire-Mistral-7B-0.1) (a version of Mistral-7B-v0.1 adapted in the same fashion as Claire-7B-0.1)

  Ranking results also reveal a clear subjective preference for Claire-7B-0.1,
  as shown in the following table:
  | | <span style="font-weight: normal">... over</span><br /> **Claire-Falcon** | <span style="font-weight: normal">... over</span><br /> **Claire-Mistral** | <span style="font-weight: normal">... over</span><br /> **Falcon** | <span style="font-weight: normal">... over</span><br /> **Mistral** |
  |--------------------------------------|----------------------|-----------------------|---------------|---------------------|
  | prefer<br /> **Claire-Falcon** ... | | **62.2%** | **63.9%** | **83.8%** |
 
  and "Claire-Mistral", for [Claire-Mistral-7B-0.1](https://huggingface.co/OpenLLM-France/Claire-Mistral-7B-0.1).)

  Please note that the model can generate disfluencies and humorous responses as a result of its training on spoken and theatrical text.
+ -->

  ## License

  Given that some of the corpora used for training are only available under CC-BY-NC-SA licenses,
+ Claire-7B-EN-0.1 is made available under the [CC-BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/).

+ <!-- You can find a variant of this model published under the Apache 2.0 license at [OpenLLM-France/Claire-7B-Apache-0.1](https://huggingface.co/OpenLLM-France/Claire-7B-Apache-0.1). -->

  ## Acknowledgements

  This work was performed using HPC resources from GENCI–IDRIS (Grant 2023-AD011014561).

+ Claire-7B-EN-0.1 was created by members of [LINAGORA](https://labs.linagora.com/) (in alphabetical order): Ismaïl Harrando, Julie Hunter, Jean-Pierre Lorré, Jérôme Louradour, Michel-Marie Maudet, Virgile Rennard, Guokan Shang.

  Special thanks to partners from the OpenLLM-France community, especially Christophe Cerisara (LORIA), Pierre-Carl Langlais and Anastasia Stasenko (OpSci), and Pierre Colombo, for valuable advice.