NoYo25 committed
Commit 93e897b · 1 Parent(s): 737b68b

Update README.md

Files changed (1)
  1. README.md +8 -8
README.md CHANGED
@@ -1,11 +1,13 @@
-## **Model description**
+# BiodivBERT
+
+## Model description
 * BiodivBERT is a domain-specific, cased, BERT-based model for the biodiversity literature.
 * It uses the tokenizer from the BERT base cased model.
 * BiodivBERT is pre-trained on abstracts and full text from the biodiversity literature.
 * BiodivBERT is fine-tuned on two downstream tasks, Named Entity Recognition and Relation Extraction, in the biodiversity domain.
 * Please visit our [GitHub Repo](https://github.com/fusion-jena/BiodivBERT) for more details.
 
-**How to use**
+## How to use
 * You can use BiodivBERT via the Hugging Face `transformers` library as follows:
 
 ````
@@ -16,16 +18,14 @@ tokenizer = AutoTokenizer.from_pretrained("NoYo25/BiodivBERT")
 model = AutoModelForMaskedLM.from_pretrained("NoYo25/BiodivBERT")
 ````
 
-**Training data**
+## Training data
 
 * BiodivBERT is pre-trained on abstracts and full text from biodiversity domain-related publications.
 * We used both the Elsevier and Springer APIs to crawl such data.
 * We covered publications from 1990 to 2020.
 
-**Evaluation results**
+## Evaluation results
 BiodivBERT outperformed ``BERT_base_cased``, ``biobert_v1.1``, and a ``BiLSTM`` baseline on the downstream tasks.
 
-
----
-license: cc-by-nc-4.0
----
+## License
+license: cc-by-nc-4.0
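To sanity-check the checkpoint loaded by the snippet above, a minimal Python fill-mask sketch (the example sentence and its predictions are illustrative only, not taken from the repo):

````
# Query BiodivBERT's masked-language-model head via the fill-mask pipeline.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="NoYo25/BiodivBERT")

# "[MASK]" is the mask token used by BERT-style cased tokenizers.
for prediction in fill_mask("Deforestation is a major driver of [MASK] loss."):
    print(prediction["token_str"], round(prediction["score"], 3))
````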
 
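Since the card says the model was fine-tuned for Named Entity Recognition, here is a hedged sketch of attaching a token-classification head to the pre-trained encoder; `num_labels=5` is a placeholder, not the label set from the BiodivBERT experiments:

````
# Load the pre-trained encoder with a freshly initialized token-classification head.
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("NoYo25/BiodivBERT")
model = AutoModelForTokenClassification.from_pretrained(
    "NoYo25/BiodivBERT",
    num_labels=5,  # placeholder label count for illustration only
)
````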