Sentence Similarity · Safetensors · Japanese · bert · feature-extraction

hpprc committed · Commit a73b950 (verified) · 1 parent: 8d8012a

Update README.md

Files changed (1): README.md (+56 −77)

README.md CHANGED
@@ -12,36 +12,8 @@ pipeline_tag: sentence-similarity
  license: apache-2.0
  ---
 
- # SentenceTransformer based on cl-nagoya/ruri-large-pt
-
- This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [cl-nagoya/ruri-large-pt](https://huggingface.co/cl-nagoya/ruri-large-pt). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
-
- ## Model Details
-
- ### Model Description
- - **Model Type:** Sentence Transformer
- - **Base model:** [cl-nagoya/ruri-large-pt](https://huggingface.co/cl-nagoya/ruri-large-pt) <!-- at revision b87e00f95f09502aaac8449867f3618ca5908ce8 -->
- - **Maximum Sequence Length:** 512 tokens
- - **Output Dimensionality:** 1024 tokens
- - **Similarity Function:** Cosine Similarity
- <!-- - **Training Dataset:** Unknown -->
- <!-- - **Language:** Unknown -->
- <!-- - **License:** Unknown -->
-
- ### Model Sources
-
- - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
-
- ### Full Model Architecture
-
- ```
- MySentenceTransformer(
-   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
-   (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
- )
- ```
 
  ## Usage
 
@@ -55,11 +27,12 @@ pip install -U sentence-transformers
 
  Then you can load this model and run inference.
  ```python
  from sentence_transformers import SentenceTransformer
 
  # Download from the 🤗 Hub
- model = SentenceTransformer("cl-nagoya/ruri-large-55-alpha0.0-0")
- # Run inference
  sentences = [
      'The weather is lovely today.',
      "It's so sunny outside!",
@@ -69,50 +42,70 @@ embeddings = model.encode(sentences)
  print(embeddings.shape)
  # [3, 1024]
 
- # Get the similarity scores for the embeddings
- similarities = model.similarity(embeddings, embeddings)
  print(similarities.shape)
  # [3, 3]
  ```
 
- <!--
- ### Direct Usage (Transformers)
-
- <details><summary>Click to see the direct usage in Transformers</summary>
-
- </details>
- -->
 
- <!--
- ### Downstream Usage (Sentence Transformers)
-
- You can finetune this model on your own dataset.
-
- <details><summary>Click to expand</summary>
-
- </details>
- -->
 
- <!--
- ### Out-of-Scope Use
-
- *List how the model may foreseeably be misused and address what users ought not to do with the model.*
- -->
 
- <!--
- ## Bias, Risks and Limitations
-
- *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
- -->
 
- <!--
- ### Recommendations
-
- *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
- -->
 
  ## Training Details
 
  ### Framework Versions
  - Python: 3.10.13
  - Sentence Transformers: 3.0.0
@@ -122,24 +115,10 @@ You can finetune this model on your own dataset.
  - Datasets: 2.19.1
  - Tokenizers: 0.19.1
 
- ## Citation
 
  ### BibTeX
 
- <!--
- ## Glossary
-
- *Clearly define terms in order to be accessible across audiences.*
- -->
-
- <!--
- ## Model Card Authors
-
- *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
- -->
-
- <!--
- ## Model Card Contact
-
- *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
- -->
  license: apache-2.0
  ---
 
+ # Ruri: Japanese General Text Embeddings
 
  ## Usage
 
  Then you can load this model and run inference.
  ```python
+ import torch.nn.functional as F
  from sentence_transformers import SentenceTransformer
 
  # Download from the 🤗 Hub
+ model = SentenceTransformer("cl-nagoya/ruri-large")
+
  sentences = [
      'The weather is lovely today.',
      "It's so sunny outside!",
      'He drove to the stadium.',
  ]
  embeddings = model.encode(sentences)
  print(embeddings.shape)
  # [3, 1024]
 
+ similarities = F.cosine_similarity(embeddings.unsqueeze(0), embeddings.unsqueeze(1), dim=2)
  print(similarities.shape)
  # [3, 3]
  ```
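
Two details in the snippet above are easy to trip over: `model.encode` returns a NumPy array unless `convert_to_tensor=True` is passed, and `F.cosine_similarity` reduces along `dim=1` by default, so the embedding axis (`dim=2` after the two `unsqueeze` calls) has to be named explicitly to obtain the [3, 3] matrix. Since Sentence Transformers 3.0 (the version pinned under Framework Versions below), the same matrix is also available through the built-in helper that the previous revision of this card used; a minimal equivalent sketch:

```python
# Equivalent similarity computation with the built-in helper from
# Sentence Transformers >= 3.0 (the API the earlier revision of this
# card used). It accepts NumPy arrays or tensors and applies the
# model's configured similarity function (cosine for this model).
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```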
 
+ ## Benchmarks
+
+ ### JMTEB
+ Evaluated with [JMTEB](https://github.com/sbintuitions/JMTEB).
+
+ |Model|#Param.|Retrieval|STS|Classification|Reranking|Clustering|PairClassification|Avg.|
+ |:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
+ |[cl-nagoya/sup-simcse-ja-base](https://huggingface.co/cl-nagoya/sup-simcse-ja-base)|111M|49.64|82.05|73.47|91.83|51.79|62.57|68.56|
+ |[cl-nagoya/sup-simcse-ja-large](https://huggingface.co/cl-nagoya/sup-simcse-ja-large)|337M|37.62|83.18|73.73|91.48|50.56|62.51|66.51|
+ |[cl-nagoya/unsup-simcse-ja-base](https://huggingface.co/cl-nagoya/unsup-simcse-ja-base)|111M|40.23|78.72|73.07|91.16|44.77|62.44|65.07|
+ |[cl-nagoya/unsup-simcse-ja-large](https://huggingface.co/cl-nagoya/unsup-simcse-ja-large)|337M|40.53|80.56|74.66|90.95|48.41|62.49|66.27|
+ |[pkshatech/GLuCoSE-base-ja](https://huggingface.co/pkshatech/GLuCoSE-base-ja)|133M|59.02|78.71|76.82|91.90|49.78|66.39|70.44|
+ ||||||||||
+ |[sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE)|472M|40.12|76.56|72.66|91.63|44.88|62.33|64.70|
+ |[intfloat/multilingual-e5-small](https://huggingface.co/intfloat/multilingual-e5-small)|118M|67.27|80.07|67.62|93.03|46.91|62.19|69.52|
+ |[intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base)|278M|68.21|79.84|69.30|92.85|48.26|62.26|70.12|
+ |[intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large)|560M|70.98|79.70|72.89|92.96|51.24|62.15|71.65|
+ ||||||||||
+ |OpenAI/text-embedding-ada-002|-|64.38|79.02|69.75|93.04|48.30|62.40|69.48|
+ |OpenAI/text-embedding-3-small|-|66.39|79.46|73.06|92.92|51.06|62.27|70.86|
+ |OpenAI/text-embedding-3-large|-|74.48|82.52|77.58|93.58|53.32|62.35|73.97|
+ ||||||||||
+ |[Ruri-Small](https://huggingface.co/cl-nagoya/ruri-small)|68M|69.41|82.79|76.22|93.00|51.19|62.11|71.53|
+ |[Ruri-Base](https://huggingface.co/cl-nagoya/ruri-base)|111M|69.82|82.87|75.58|92.91|54.16|62.38|71.91|
+ |[Ruri-Large](https://huggingface.co/cl-nagoya/ruri-large)|337M|73.02|83.13|77.43|92.99|51.82|62.29|73.31|
 
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [cl-nagoya/ruri-large-pt](https://huggingface.co/cl-nagoya/ruri-large-pt)
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 1024
+ - **Similarity Function:** Cosine Similarity
+ - **Language:** Japanese
+ - **License:** Apache 2.0
+ <!-- - **Training Dataset:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ MySentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
+   (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ )
+ ```
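
For reference, the mean pooling spelled out in this module dump can be reproduced without Sentence Transformers. Below is a minimal sketch using the plain `transformers` API; it assumes only what the configuration above states (a `BertModel` encoder, attention-mask-weighted mean over token embeddings, 512-token inputs) and skips anything else the full pipeline may apply:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("cl-nagoya/ruri-large")
model = AutoModel.from_pretrained("cl-nagoya/ruri-large")  # BertModel

sentences = ["The weather is lovely today.", "It's so sunny outside!"]
batch = tokenizer(sentences, padding=True, truncation=True,
                  max_length=512, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state  # [batch, seq_len, 1024]

# Mean pooling (pooling_mode_mean_tokens=True): average the token
# embeddings, masking out padding positions via the attention mask.
mask = batch["attention_mask"].unsqueeze(-1).to(token_embeddings.dtype)
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
print(embeddings.shape)  # [2, 1024]
```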
 
  ## Training Details
 
+
  ### Framework Versions
  - Python: 3.10.13
  - Sentence Transformers: 3.0.0
  - Datasets: 2.19.1
  - Tokenizers: 0.19.1
 
+ <!-- ## Citation
 
  ### BibTeX
+ -->
 
+ ## License
+ This model is published under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).