Paul Rock committed
Commit aacea09 · 1 Parent(s): 46f4337

Description updated

Files changed (1)
  1. README.md +10 -9
README.md CHANGED
@@ -1,12 +1,11 @@
 ---
-#library_name: sentence-transformers
 pipeline_tag: feature-extraction
 tags:
-- pytorch
-- sentence-transformers
-- feature-extraction
-- sentence-similarity
-- transformers
+- pytorch
+- sentence-transformers
+- feature-extraction
+- sentence-similarity
+- transformers
 language:
 - ru
 - en
@@ -38,6 +37,8 @@ trained over 20 epochs on the following datasets:
 The goal of this model is to generate identical or very similar embeddings regardless of whether the text is written in
 English or Russian.
 
+[Enbeddrus GGUF](https://ollama.com/evilfreelancer/enbeddrus) version available via Ollama.
+
 ## Usage (Sentence-Transformers)
 
 Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
@@ -62,7 +63,7 @@ sentences = [
     "Machine learning helps to create intelligent systems.",
 ]
 
-model = SentenceTransformer('evilfreelancer/enbeddrus')
+model = SentenceTransformer('evilfreelancer/enbeddrus-v0.1')
 embeddings = model.encode(sentences)
 print(embeddings)
 ```
@@ -98,8 +99,8 @@ sentences = [
 ]
 
 # Load model from HuggingFace Hub
-tokenizer = AutoTokenizer.from_pretrained('evilfreelancer/enbeddrus')
-model = AutoModel.from_pretrained('evilfreelancer/enbeddrus')
+tokenizer = AutoTokenizer.from_pretrained('evilfreelancer/enbeddrus-v0.1')
+model = AutoModel.from_pretrained('evilfreelancer/enbeddrus-v0.1')
 
 # Tokenize sentences
 encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
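The README states the model's goal: near-identical embeddings for English and Russian text. A common way to check that is cosine similarity between the two vectors returned by `model.encode(...)`. The sketch below is a minimal illustration of that check, assuming NumPy; the hard-coded vectors are dummies standing in for real model output, since running the actual model requires downloading it from the Hub.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two 1-D vectors (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Dummy embeddings standing in for model.encode() output on an
# English sentence and its Russian translation (hypothetical values).
emb_en = np.array([0.10, 0.90, 0.30])
emb_ru = np.array([0.12, 0.88, 0.31])

score = cosine_similarity(emb_en, emb_ru)
print(f"cosine similarity: {score:.4f}")
```

For a bilingual embedder like this one, scores close to 1.0 between a sentence and its translation are the desired behavior.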