---
tags:
- flair
- entity-mention-linker
---

## biosyn-sapbert-bc5cdr-chemical-no-ab3p

Biomedical entity mention linking for chemicals:

- Model: [dmis-lab/biosyn-sapbert-bc5cdr-chemical](https://huggingface.co/dmis-lab/biosyn-sapbert-bc5cdr-chemical)
- Dictionary: [CTD Chemicals](https://ctdbase.org/help/chemDetailHelp.jsp) (see [License](https://ctdbase.org/about/legal.jsp))

NOTE: This model variant does not perform abbreviation resolution via [Ab3P](https://github.com/ncbi-nlp/Ab3P).

### Demo: How to use in Flair

Requires:

- **[Flair](https://github.com/flairNLP/flair/)>=0.14.0** (`pip install flair` or `pip install git+https://github.com/flairNLP/flair.git`)

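The demo below tokenizes with `SciSpacyTokenizer`, which is backed by a scispacy pipeline that is not bundled with Flair. If it is missing, installation along these lines should work (the exact package versions to pin are an assumption; check the scispacy README for the matching model release):

```shell
# SciSpacyTokenizer relies on the scispacy package plus its small
# English model (en_core_sci_sm); the model itself is installed from
# the release URL listed in the scispacy README for your version.
pip install scispacy
```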
```python
from flair.data import Sentence
from flair.models import Classifier, EntityMentionLinker
from flair.tokenization import SciSpacyTokenizer

sentence = Sentence(
    "The mutation in the ABCD1 gene causes X-linked adrenoleukodystrophy, "
    "a neurodegenerative disease, which is exacerbated by exposure to high "
    "levels of mercury in dolphin populations.",
    use_tokenizer=SciSpacyTokenizer(),
)

# Load HunFlair to detect the chemical entity mentions we want to link
tagger = Classifier.load("hunflair-chemical")
tagger.predict(sentence)

# Load the linker and its dictionary
linker = EntityMentionLinker.load("chemical-linker")
linker.predict(sentence)

# Print the linked identifier for each detected entity mention
for span in sentence.get_spans(tagger.label_type):
    for link in span.get_labels(linker.label_type):
        print(f"{span.text} -> {link.value}")
```
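The linked identifiers come from the CTD Chemicals dictionary and are MeSH-style accessions such as `MESH:D008628`. A small helper can split such a value into database and accession for downstream use (the `MESH:...` format and the helper name are illustrative assumptions; the exact `link.value` layout may differ between Flair versions):

```python
def split_identifier(identifier: str) -> tuple[str, str]:
    """Split an identifier like 'MESH:D008628' into (database, accession).

    The 'MESH:D008628' format is an assumption about the linker output;
    adjust the separator if your Flair version formats values differently.
    """
    database, _, accession = identifier.partition(":")
    return database, accession


print(split_identifier("MESH:D008628"))  # -> ('MESH', 'D008628')
```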

As an alternative to downloading the precomputed model (which requires significant storage), you can build the model and compute the embeddings for the dictionary yourself:

```python
linker = EntityMentionLinker.build("dmis-lab/biosyn-sapbert-bc5cdr-chemical", dictionary_name_or_path="ctd-chemicals", hybrid_search=True)
```

This reduces the download size at the cost of computation time.