Xtaiyang committed · commit 3bd8199 · 1 parent: 4aa1db7
demo

Files changed:
- README.md +161 -10
- .gitattributes → model/.gitattributes +13 -27
- model/1_Pooling/config.json +7 -0
- model/README.md +164 -0
- model/config.json +24 -0
- model/config_sentence_transformers.json +7 -0
- model/model.safetensors +3 -0
- model/modules.json +14 -0
- model/onnx/model.onnx +3 -0
- model/onnx/model_O1.onnx +3 -0
- model/onnx/model_O2.onnx +3 -0
- model/onnx/model_O3.onnx +3 -0
- model/onnx/model_O4.onnx +3 -0
- model/onnx/model_qint8_arm64.onnx +3 -0
- model/onnx/model_qint8_avx512.onnx +3 -0
- model/onnx/model_qint8_avx512_vnni.onnx +3 -0
- model/onnx/model_quint8_avx2.onnx +3 -0
- model/openvino/openvino_model.bin +3 -0
- model/openvino/openvino_model.xml +0 -0
- model/openvino/openvino_model_qint8_quantized.bin +3 -0
- model/openvino/openvino_model_qint8_quantized.xml +0 -0
- model/pytorch_model.bin +3 -0
- model/sentence_bert_config.json +4 -0
- model/sentencepiece.bpe.model +3 -0
- model/special_tokens_map.json +1 -0
- model/tf_model.h5 +3 -0
- model/tokenizer.json +3 -0
- model/tokenizer_config.json +1 -0
- model/unigram.json +3 -0
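
This commit vendors the entire embedding model into model/ instead of downloading it from the Hub at runtime. Below is a minimal sketch of how a Space's app code could load it from that local directory; the "model" path and the demo sentence are assumptions, and only the SentenceTransformer API itself comes from the model card in the diff that follows:

```python
from sentence_transformers import SentenceTransformer

# Assumption: the working directory contains the committed model/ folder,
# so loading happens from disk rather than from the Hugging Face Hub.
model = SentenceTransformer("model")

embeddings = model.encode(["an example query"])
print(embeddings.shape)  # (1, 384): the card below documents 384-d vectors
```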
README.md
CHANGED
@@ -1,13 +1,164 @@
 ---
-
-
-
-
-
-
-
-
-
+language:
+- multilingual
+- ar
+- bg
+- ca
+- cs
+- da
+- de
+- el
+- en
+- es
+- et
+- fa
+- fi
+- fr
+- gl
+- gu
+- he
+- hi
+- hr
+- hu
+- hy
+- id
+- it
+- ja
+- ka
+- ko
+- ku
+- lt
+- lv
+- mk
+- mn
+- mr
+- ms
+- my
+- nb
+- nl
+- pl
+- pt
+- ro
+- ru
+- sk
+- sl
+- sq
+- sr
+- sv
+- th
+- tr
+- uk
+- ur
+- vi
+license: apache-2.0
+library_name: sentence-transformers
+tags:
+- sentence-transformers
+- feature-extraction
+- sentence-similarity
+- transformers
+language_bcp47:
+- fr-ca
+- pt-br
+- zh-cn
+- zh-tw
+pipeline_tag: sentence-similarity
 ---
 
-
+# sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
+
+This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
+
+
+
+## Usage (Sentence-Transformers)
+
+This model is easy to use once you have [sentence-transformers](https://www.SBERT.net) installed:
+
+```
+pip install -U sentence-transformers
+```
+
+Then you can use the model like this:
+
+```python
+from sentence_transformers import SentenceTransformer
+sentences = ["This is an example sentence", "Each sentence is converted"]
+
+model = SentenceTransformer('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')
+embeddings = model.encode(sentences)
+print(embeddings)
+```
+
+
+
+## Usage (HuggingFace Transformers)
+Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
+
+```python
+from transformers import AutoTokenizer, AutoModel
+import torch
+
+
+# Mean pooling: take the attention mask into account for correct averaging
+def mean_pooling(model_output, attention_mask):
+    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
+    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
+    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
+
+
+# Sentences we want sentence embeddings for
+sentences = ['This is an example sentence', 'Each sentence is converted']
+
+# Load model from HuggingFace Hub
+tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')
+model = AutoModel.from_pretrained('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')
+
+# Tokenize sentences
+encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
+
+# Compute token embeddings
+with torch.no_grad():
+    model_output = model(**encoded_input)
+
+# Perform pooling. In this case, mean pooling.
+sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
+
+print("Sentence embeddings:")
+print(sentence_embeddings)
+```
+
+
+
+## Evaluation Results
+
+
+
+For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2)
+
+
+
+## Full Model Architecture
+```
+SentenceTransformer(
+  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
+  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
+)
+```
+
+## Citing & Authors
+
+This model was trained by [sentence-transformers](https://www.sbert.net/).
+
+If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
+```bibtex
+@inproceedings{reimers-2019-sentence-bert,
+    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+    author = "Reimers, Nils and Gurevych, Iryna",
+    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+    month = "11",
+    year = "2019",
+    publisher = "Association for Computational Linguistics",
+    url = "http://arxiv.org/abs/1908.10084",
+}
+```
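The card's examples stop at printing raw embeddings. Since the repository is tagged sentence-similarity, a short follow-on sketch that actually scores a pair may help; util.cos_sim is part of the public sentence-transformers API, while the sentence pair itself is illustrative:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2')

# The model is multilingual: an English/French paraphrase pair should land
# close together in the shared 384-dimensional vector space.
embeddings = model.encode(
    ["This is an example sentence", "Ceci est une phrase d'exemple"],
    convert_to_tensor=True,
)
print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine score in [-1, 1]
```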
.gitattributes → model/.gitattributes
RENAMED
@@ -1,35 +1,21 @@
-*.
-*.
+*.bin.* filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
 *.bin filter=lfs diff=lfs merge=lfs -text
-*.bz2 filter=lfs diff=lfs merge=lfs -text
-*.ckpt filter=lfs diff=lfs merge=lfs -text
-*.ftz filter=lfs diff=lfs merge=lfs -text
-*.gz filter=lfs diff=lfs merge=lfs -text
 *.h5 filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tar.gz filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
 *.joblib filter=lfs diff=lfs merge=lfs -text
-*.lfs.* filter=lfs diff=lfs merge=lfs -text
-*.mlmodel filter=lfs diff=lfs merge=lfs -text
 *.model filter=lfs diff=lfs merge=lfs -text
 *.msgpack filter=lfs diff=lfs merge=lfs -text
-*.npy filter=lfs diff=lfs merge=lfs -text
-*.npz filter=lfs diff=lfs merge=lfs -text
-*.onnx filter=lfs diff=lfs merge=lfs -text
-*.ot filter=lfs diff=lfs merge=lfs -text
-*.parquet filter=lfs diff=lfs merge=lfs -text
 *.pb filter=lfs diff=lfs merge=lfs -text
-*.pickle filter=lfs diff=lfs merge=lfs -text
-*.pkl filter=lfs diff=lfs merge=lfs -text
 *.pt filter=lfs diff=lfs merge=lfs -text
 *.pth filter=lfs diff=lfs merge=lfs -text
-
-
-
-
-
-*.tflite filter=lfs diff=lfs merge=lfs -text
-*.tgz filter=lfs diff=lfs merge=lfs -text
-*.wasm filter=lfs diff=lfs merge=lfs -text
-*.xz filter=lfs diff=lfs merge=lfs -text
-*.zip filter=lfs diff=lfs merge=lfs -text
-*.zst filter=lfs diff=lfs merge=lfs -text
-*tfevents* filter=lfs diff=lfs merge=lfs -text
+pytorch_model.bin filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
+unigram.json filter=lfs diff=lfs merge=lfs -text
+.git/lfs/objects/8a/01/8a016203ad4fe42aaad6e9329f70e4ea2ea19d4e14e43f1a36ec140233e604ef filter=lfs diff=lfs merge=lfs -text
+model.safetensors filter=lfs diff=lfs merge=lfs -text
model/1_Pooling/config.json
ADDED
@@ -0,0 +1,7 @@
+{
+    "word_embedding_dimension": 384,
+    "pooling_mode_cls_token": false,
+    "pooling_mode_mean_tokens": true,
+    "pooling_mode_max_tokens": false,
+    "pooling_mode_mean_sqrt_len_tokens": false
+}
model/README.md
ADDED
@@ -0,0 +1,164 @@
(content identical to the new README.md shown above)
model/config.json
ADDED
@@ -0,0 +1,24 @@
+{
+  "_name_or_path": "old_models/paraphrase-multilingual-MiniLM-L12-v2/0_Transformer",
+  "architectures": [
+    "BertModel"
+  ],
+  "attention_probs_dropout_prob": 0.1,
+  "gradient_checkpointing": false,
+  "hidden_act": "gelu",
+  "hidden_dropout_prob": 0.1,
+  "hidden_size": 384,
+  "initializer_range": 0.02,
+  "intermediate_size": 1536,
+  "layer_norm_eps": 1e-12,
+  "max_position_embeddings": 512,
+  "model_type": "bert",
+  "num_attention_heads": 12,
+  "num_hidden_layers": 12,
+  "pad_token_id": 0,
+  "position_embedding_type": "absolute",
+  "transformers_version": "4.7.0",
+  "type_vocab_size": 2,
+  "use_cache": true,
+  "vocab_size": 250037
+}

model/config_sentence_transformers.json
ADDED
@@ -0,0 +1,7 @@
+{
+  "__version__": {
+    "sentence_transformers": "2.0.0",
+    "transformers": "4.7.0",
+    "pytorch": "1.9.0+cu102"
+  }
+}

model/model.safetensors
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eaa086f0ffee582aeb45b36e34cdd1fe2d6de2bef61f8a559a1bbc9bd955917b
+size 470641600

model/modules.json
ADDED
@@ -0,0 +1,14 @@
+[
+  {
+    "idx": 0,
+    "name": "0",
+    "path": "",
+    "type": "sentence_transformers.models.Transformer"
+  },
+  {
+    "idx": 1,
+    "name": "1",
+    "path": "1_Pooling",
+    "type": "sentence_transformers.models.Pooling"
+  }
+]
model/onnx/model.onnx
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:10f7a088420252b26caf819236ca2c9d2987afd0fc06fec7553b542a5655a05a
+size 470301610

model/onnx/model_O1.onnx
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9ae4b831e992807334f18a91557661e94715f502a5c7248fb81675b08391e30f
+size 470212363

model/onnx/model_O2.onnx
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:338ef03c2838d5a659d36e1ce5b7a1dc2d2a66a430a9e6f499de6dc39f663850
+size 470145917

model/onnx/model_O3.onnx
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2426785672da5afa2ca3ac1efae652499c72a3198e8e203d06f6d6e6c569d419
+size 470145772

model/onnx/model_O4.onnx
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:307bba13f9f5708461169c9f2d633c76c5572919bcc998606b6e5aea46f05db4
+size 235166264

model/onnx/model_qint8_arm64.onnx
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:783fea82d71a58179b830a4dbd2d58447e640609e98eedf9ffa12622d375a672
+size 118412398

model/onnx/model_qint8_avx512.onnx
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:783fea82d71a58179b830a4dbd2d58447e640609e98eedf9ffa12622d375a672
+size 118412398

model/onnx/model_qint8_avx512_vnni.onnx
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:783fea82d71a58179b830a4dbd2d58447e640609e98eedf9ffa12622d375a672
+size 118412398

model/onnx/model_quint8_avx2.onnx
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:98a01d88b7de996cdea58c32ca71208c09968d143798814b2ea09d3439dc334f
+size 118453870

model/openvino/openvino_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:04e8cc4ceedb65316f374f798b9428b491281e064b4cb6076e6abf0221256ac1
+size 470027920

model/openvino/openvino_model.xml
ADDED
(diff too large to render; see the raw file)

model/openvino/openvino_model_qint8_quantized.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:24acd56a5f5ae4ba5b39c9593997ebb6d5da44a6439ef1d0757a70c70aadb7e3
+size 118989868

model/openvino/openvino_model_qint8_quantized.xml
ADDED
(diff too large to render; see the raw file)
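The commit ships graph-optimized (O1 to O4) and quantized ONNX exports plus OpenVINO variants alongside the PyTorch weights. Recent sentence-transformers releases (3.2 and later) can select these through the backend argument; a hedged sketch, assuming such a version is installed and reusing file names from the paths added above:

```python
from sentence_transformers import SentenceTransformer

# ONNX backend with one of the committed quantized graphs; needs the
# optional extras: pip install "sentence-transformers[onnx]"
onnx_model = SentenceTransformer(
    "model",  # the local directory committed in this change
    backend="onnx",
    model_kwargs={"file_name": "onnx/model_qint8_avx512.onnx"},
)

# OpenVINO backend picks up the committed openvino_model.xml/.bin pair.
ov_model = SentenceTransformer("model", backend="openvino")

print(onnx_model.encode(["hello world"]).shape)  # still (1, 384)
```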
model/pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:16cc9e54df6e083272378abec2d75dc34d7a48b5276db3ccc050d18de672ac59
+size 470693617

model/sentence_bert_config.json
ADDED
@@ -0,0 +1,4 @@
+{
+    "max_seq_length": 128,
+    "do_lower_case": false
+}
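sentence_bert_config.json is where the max_seq_length of 128 in the card's architecture dump comes from: inputs longer than 128 tokens are truncated at encode time, so long paragraphs should be split before embedding. A small sketch, again assuming the local "model" path from earlier:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("model")
print(model.max_seq_length)  # 128, read from sentence_bert_config.json

# The limit is a plain attribute; lowering it truncates sooner and
# speeds up encoding at the cost of dropped context.
model.max_seq_length = 64
```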
model/sentencepiece.bpe.model
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cfc8146abe2a0488e9e2a0c56de7952f7c11ab059eca145a0a727afce0db2865
+size 5069051

model/special_tokens_map.json
ADDED
@@ -0,0 +1 @@
+{"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": false}}

model/tf_model.h5
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:22150b6ba00e477c7f816f1988d028fff924e2b52e14540889690c72c5add40e
+size 470899176

model/tokenizer.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2c3387be76557bd40970cec13153b3bbf80407865484b209e655e5e4729076b8
+size 9081518

model/tokenizer_config.json
ADDED
@@ -0,0 +1 @@
+{"do_lower_case": true, "unk_token": "<unk>", "sep_token": "</s>", "pad_token": "<pad>", "cls_token": "<s>", "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "tokenize_chinese_chars": true, "strip_accents": null, "bos_token": "<s>", "eos_token": "</s>", "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "old_models/paraphrase-multilingual-MiniLM-L12-v2/0_Transformer"}

model/unigram.json
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:71b44701d7efd054205115acfa6ef126c5d2f84bd3affe0c59e48163674d19a6
+size 14763234