---
base_model:
- ssmits/Falcon2-5.5B-multilingual
library_name: sentence-transformers
tags:
- ssmits/Falcon2-5.5B-multilingual
license: apache-2.0
language:
- es
- fr
- de
- 'no'
- sv
- da
- nl
- pt
- pl
- ro
- it
- cs
pipeline_tag: text-classification
---

## Usage

This is an embeddings version of the base model of [ssmits/Falcon2-5.5B-multilingual](https://huggingface.co/ssmits/Falcon2-5.5B-multilingual).
The `lm_head` layer of this model has been removed, so it can be used to produce embeddings. It will not perform well out of the box; it needs further fine-tuning, as demonstrated by [intfloat/e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct).
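
Since the checkpoint is published without a language-modeling head, a minimal sketch for extracting embeddings is to load it with `AutoModel` and pool the last hidden state. The repository id below is a placeholder for this model's actual id, and mean pooling is one common choice rather than a prescribed one:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder id: replace with this repository's actual model id.
model_id = "ssmits/Falcon2-5.5B-multilingual-embeddings"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()

# Falcon tokenizers often lack a pad token; reuse the EOS token for padding.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

texts = ["Hola, ¿cómo estás?", "Bonjour, comment ça va ?"]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, 4096)

# Mean-pool over non-padding tokens to get one 4096-dim vector per text.
mask = batch["attention_mask"].unsqueeze(-1).to(hidden.dtype)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # torch.Size([2, 4096])
```
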
Additionally, instead of a normalization layer, the hidden layers are followed by both a weight and a bias, each a 1-dimensional array of 4096 values.
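
One hedged way to verify this is to inspect the trailing parameters of the loaded model; the exact parameter names depend on the checkpoint, so this only prints whatever the last few entries happen to be:

```python
# Assumes `model` from the snippet above; parameter names vary by checkpoint.
for name, param in list(model.named_parameters())[-4:]:
    print(name, tuple(param.shape))
# The final two entries should be the 1-dimensional weight and bias of
# 4096 values described above.
```
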
Further research is needed to determine whether this architecture will fully function when a classification head is added on top while using the transformers library.