Sentence Similarity
Safetensors
Japanese
bert
feature-extraction
hpprc committed · verified
Commit 83a1650 · 1 Parent(s): bfde2b7

Update README.md

Files changed (1): README.md (+4 −4)
README.md CHANGED
@@ -43,7 +43,7 @@ sentences = [
 
 embeddings = model.encode(sentences, convert_to_tensor=True)
 print(embeddings.size())
-# [4, 1024]
+# [4, 768]
 
 similarities = F.cosine_similarity(embeddings.unsqueeze(0), embeddings.unsqueeze(1), dim=2)
 print(similarities)
@@ -87,7 +87,7 @@ Evaluated with [JMTEB](https://github.com/sbintuitions/JMTEB).
 - **Model Type:** Sentence Transformer
 - **Base model:** [cl-nagoya/ruri-pt-base](https://huggingface.co/cl-nagoya/ruri-pt-base)
 - **Maximum Sequence Length:** 512 tokens
-- **Output Dimensionality:** 1024
+- **Output Dimensionality:** 768
 - **Similarity Function:** Cosine Similarity
 - **Language:** Japanese
 - **License:** Apache 2.0
@@ -97,9 +97,9 @@ Evaluated with [JMTEB](https://github.com/sbintuitions/JMTEB).
 ### Full Model Architecture
 
 ```
-MySentenceTransformer(
+SentenceTransformer(
   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
-  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
 )
 ```
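The `unsqueeze`/`cosine_similarity` pattern in the snippet above produces an n × n matrix of similarities between every pair of encoded sentences. A minimal dependency-free sketch of the same computation (toy 4-dimensional vectors stand in for the model's 768-dimensional embeddings; all names here are illustrative, not from the model card):

```python
import math

# Toy stand-ins for model.encode(...) output: 4 "sentences", 4 dims each.
embeddings = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [1.0, 1.0, 0.0, 0.0],
    [2.0, 0.0, 0.0, 0.0],
]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Equivalent of F.cosine_similarity(emb.unsqueeze(0), emb.unsqueeze(1), dim=2):
# an n x n matrix comparing every embedding against every other.
similarities = [[cosine(u, v) for v in embeddings] for u in embeddings]

print(len(similarities), len(similarities[0]))  # 4 4
```

Each diagonal entry is 1.0 (every vector is parallel to itself), and rows 0 and 3 also score 1.0 against each other since cosine similarity ignores magnitude.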