julianrisch committed on
Commit
6a7bbfd
verified
1 Parent(s): e072efb

Update README.md

Files changed (1)
  1. README.md +74 -14
README.md CHANGED
@@ -29,17 +29,19 @@ model-index:
  verified: true
verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzAxMDk1YzI5ZjA2N2ZmMzAxNjgxYzJiNzAzYmI1ZWU5ZDRmYWY3OWJmMjlmNDcyMGE0YWY5NjNhZTk4YWY5ZSIsInZlcnNpb24iOjF9.rF3raNGUSYv5D2xzWLZztD99vwDKvWb22LG32RomrDGP6XKTbCVqZzAw5UFw93jKb0VoLApbQQ-AOGxLj3U_Cg
---
+ # tinybert for Extractive QA

## Overview
**Language model:** deepset/tinybert-6L-768D-squad2
**Language:** English
**Training data:** SQuAD 2.0 training set x 20 augmented + SQuAD 2.0 training set without augmentation (see the augmentation sketch below)
- **Eval data:** SQuAD 2.0 dev set
+ **Eval data:** SQuAD 2.0 dev set
+ **Code:** See [an example extractive QA pipeline built with Haystack](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline)
**Infrastructure**: 1x V100 GPU
**Published**: Dec 8th, 2021
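The "x 20 augmented" training set above refers to TinyBERT-style data augmentation. As an illustration only, an invocation along the lines of Haystack's `augment_squad.py` script could look like this (the paths are placeholders, not the undocumented original command; the factor matches the "x 20" above):

```
# Illustrative only: produce a 20x augmented copy of the SQuAD 2.0 train set
python augment_squad.py \
    --squad_path train-v2.0.json \
    --output_path augmented_train.json \
    --multiplication_factor 20
```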
 
## Details
- - haystack's intermediate layer and prediction layer distillation features were used for training (based on [TinyBERT](https://arxiv.org/pdf/1909.10351.pdf)). deepset/bert-base-uncased-squad2 was used as the teacher model and huawei-noah/TinyBERT_General_6L_768D was used as the student model.
+ - Haystack's intermediate layer and prediction layer distillation features were used for training (based on [TinyBERT](https://arxiv.org/pdf/1909.10351.pdf)). deepset/bert-base-uncased-squad2 was used as the teacher model and huawei-noah/TinyBERT_General_6L_768D was used as the student model. A sketch of this setup follows below.
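A minimal, illustrative sketch of that two-stage setup with the `FARMReader` distillation API from Haystack v1 (`farm-haystack`); the data paths, filenames, and save directory are placeholders, not the recorded training configuration:

```python
# Illustrative sketch: TinyBERT-style two-stage distillation in Haystack v1.
from haystack.nodes import FARMReader

student = FARMReader(model_name_or_path="huawei-noah/TinyBERT_General_6L_768D")
teacher = FARMReader(model_name_or_path="deepset/bert-base-uncased-squad2")

# Stage 1: intermediate layer distillation aligns the student's hidden states
# and attentions with the teacher's on the augmented SQuAD 2.0 data.
student.distil_intermediate_layers_from(
    teacher_model=teacher,
    data_dir="data/squad2",                 # placeholder path
    train_filename="augmented_train.json",  # placeholder filename
)

# Stage 2: prediction layer distillation trains against the teacher's
# start/end logits, using the hyperparameters listed below.
student.distil_prediction_layer_from(
    teacher_model=teacher,
    data_dir="data/squad2",                 # placeholder path
    train_filename="train-v2.0.json",
    temperature=1.0,
    distillation_loss_weight=1.0,
)

student.save(directory="my-tinybert-6l-768d-squad2")  # placeholder directory
```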

## Hyperparameters
### Intermediate layer distillation
@@ -63,6 +65,51 @@ embeds_dropout_prob = 0.1
  temperature = 1
distillation_loss_weight = 1.0
```
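For intuition: in this kind of prediction layer distillation, `temperature` softens the start/end logit distributions and `distillation_loss_weight` blends the teacher-matching loss with the ordinary hard-label loss. A hedged PyTorch sketch of that combination (illustrative, not code from this repository):

```python
import torch
import torch.nn.functional as F

def distillation_loss(
    student_logits: torch.Tensor,  # (batch, seq_len) start or end logits
    teacher_logits: torch.Tensor,  # same shape, from the teacher model
    labels: torch.Tensor,          # (batch,) gold start or end positions
    temperature: float = 1.0,
    distillation_loss_weight: float = 1.0,
) -> torch.Tensor:
    # Soft part: KL divergence between temperature-scaled distributions,
    # rescaled by T^2 to keep gradient magnitudes comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2
    # Hard part: standard cross-entropy against the gold answer positions.
    hard = F.cross_entropy(student_logits, labels)
    # With distillation_loss_weight = 1.0, only the teacher signal is used.
    return distillation_loss_weight * soft + (1.0 - distillation_loss_weight) * hard
```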
+
+ ## Usage
+
+ ### In Haystack
+ Haystack is an AI orchestration framework for building customizable, production-ready LLM applications. You can use this model in Haystack for extractive question answering on documents.
+ To load and run the model with [Haystack](https://github.com/deepset-ai/haystack/):
+ ```python
+ # After running pip install haystack-ai "transformers[torch,sentencepiece]"
+
+ from haystack import Document
+ from haystack.components.readers import ExtractiveReader
+
+ docs = [
+     Document(content="Python is a popular programming language"),
+     Document(content="python ist eine beliebte Programmiersprache"),
+ ]
+
+ reader = ExtractiveReader(model="deepset/tinybert-6L-768D-squad2")
+ reader.warm_up()
+
+ question = "What is a popular programming language?"
+ result = reader.run(query=question, documents=docs)
+ # {'answers': [ExtractedAnswer(query='What is a popular programming language?', score=..., data='python', document=Document(id=..., content: '...'), context=None, document_offset=ExtractedAnswer.Span(start=0, end=6),...)]}
+ ```
+ For a complete example with an extractive question answering pipeline that scales over many documents, check out the [corresponding Haystack tutorial](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline).
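A rough sketch of such a pipeline, combining this model with the in-memory document store and BM25 retriever that ship with `haystack-ai` (documents and component names here are illustrative):

```python
from haystack import Document, Pipeline
from haystack.components.readers import ExtractiveReader
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.document_stores.in_memory import InMemoryDocumentStore

# Index a few example documents into an in-memory store.
document_store = InMemoryDocumentStore()
document_store.write_documents([
    Document(content="Python is a popular programming language"),
    Document(content="Rust is valued for memory safety"),
])

# The retriever narrows down candidates; the reader extracts answer spans.
pipeline = Pipeline()
pipeline.add_component("retriever", InMemoryBM25Retriever(document_store=document_store))
pipeline.add_component("reader", ExtractiveReader(model="deepset/tinybert-6L-768D-squad2"))
pipeline.connect("retriever.documents", "reader.documents")

question = "What is a popular programming language?"
result = pipeline.run({"retriever": {"query": question}, "reader": {"query": question}})
print(result["reader"]["answers"])
```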
+
+ ### In Transformers
+ ```python
+ from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
+
+ model_name = "deepset/tinybert-6L-768D-squad2"
+
+ # a) Get predictions
+ nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
+ QA_input = {
+     'question': 'Why is model conversion important?',
+     'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
+ }
+ res = nlp(QA_input)
+
+ # b) Load model & tokenizer
+ model = AutoModelForQuestionAnswering.from_pretrained(model_name)
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ ```
+
## Performance
```
"exact": 71.87736882001179
@@ -74,18 +121,31 @@ distillation_loss_weight = 1.0
  - Julian Risch: `julian.risch [at] deepset.ai`
- Malte Pietsch: `malte.pietsch [at] deepset.ai`
- Michel Bartels: `michel.bartels [at] deepset.ai`
+
## About us
- ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo)
- We bring NLP to the industry via open source!
- Our focus: Industry specific language models & large scale QA systems.
-
- Some of our work:
- - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- - [FARM](https://github.com/deepset-ai/FARM)
- - [Haystack](https://github.com/deepset-ai/haystack/)
-
- Get in touch:
- [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
+
+ <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
+ <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
+ <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
+ </div>
+ <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
+ <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
+ </div>
+ </div>
+
+ [deepset](http://deepset.ai/) is the company behind the production-ready open-source AI framework [Haystack](https://haystack.deepset.ai/).
+
+ Some of our other work:
+ - [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
+ - [German BERT](https://deepset.ai/german-bert), [GermanQuAD and GermanDPR](https://deepset.ai/germanquad), [German embedding model](https://huggingface.co/mixedbread-ai/deepset-mxbai-embed-de-large-v1)
+ - [deepset Cloud](https://www.deepset.ai/deepset-cloud-product), [deepset Studio](https://www.deepset.ai/deepset-studio)
+
+ ## Get in touch and join the Haystack community
+
+ <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
+
+ We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
+
+ [Twitter](https://twitter.com/Haystack_AI) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://haystack.deepset.ai/) | [YouTube](https://www.youtube.com/@deepset_ai)
  By the way: [we're hiring!](http://www.deepset.ai/jobs)