---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:400
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: What types of objectives are mentioned as not being specific to
AI systems in the context?
sentences:
- The notion of ‘biometric identification’ referred to in this Regulation should
be defined as the automated recognition of physical, physiological and behavioural
human features such as the face, eye movement, body shape, voice, prosody, gait,
posture, heart rate, blood pressure, odour, keystrokes characteristics, for the
purpose of establishing an individual’s identity by comparing biometric data of
that individual to stored biometric data of individuals in a reference database,
irrespective of whether the individual has given its consent or not. This excludes
AI systems intended to be used for biometric verification, which includes authentication,
whose sole purpose is to confirm that a specific natural person is the person
he or she
- are not specific to AI systems and pursue other legitimate public interest objectives,
should not be affected by this Regulation.
- for supervision of the law enforcement and judicial authorities under this Regulation
should assess whether those frameworks for cooperation or international agreements
include adequate safeguards with respect to the protection of fundamental rights
and freedoms of individuals. Recipient national authorities and Union institutions,
bodies, offices and agencies making use of such outputs in the Union remain accountable
to ensure their use complies with Union law. When those international agreements
are revised or new ones are concluded in the future, the contracting parties should
make utmost efforts to align those agreements with the requirements of this Regulation.
- source_sentence: How does the context relate to the concept of 49?
sentences:
- (49)
- (56)
- (25)
- source_sentence: How does a serious disruption of critical infrastructure relate
to the threat to life or physical safety of individuals?
sentences:
- or otherwise, for example, public roads and squares, parks, forests, playgrounds.
A space should also be classified as being publicly accessible if, regardless
of potential capacity or security restrictions, access is subject to certain predetermined
conditions which can be fulfilled by an undetermined number of persons, such as
the purchase of a ticket or title of transport, prior registration or having a certain
age. In contrast, a space should not be considered to be publicly accessible if
access is limited to specific and defined natural persons through either Union
or national law directly related to public safety or security or through the clear
manifestation of will by the person having the relevant authority over the space.
The
- to highly varying degrees for the practical pursuit of the localisation or identification
of a perpetrator or suspect of the different criminal offences listed and having
regard to the likely differences in the seriousness, probability and scale of
the harm or possible negative consequences. An imminent threat to life or the
physical safety of natural persons could also result from a serious disruption
of critical infrastructure, as defined in Article 2, point (4) of Directive (EU)
2022/2557 of the European Parliament and of the Council (19), where the disruption
or destruction of such critical infrastructure would result in an imminent threat
to life or the physical safety of a person, including through serious harm to
the provision of
- As regards high-risk AI systems that are safety components of products or systems,
or which are themselves products or systems falling within the scope of Regulation
(EC) No 300/2008 of the European Parliament and of the Council (24), Regulation
(EU) No 167/2013 of the European Parliament and of the Council (25), Regulation
(EU) No 168/2013 of the European Parliament and of the Council (26), Directive
2014/90/EU of the European Parliament and of the Council (27), Directive (EU)
2016/797 of the European Parliament and of the Council (28), Regulation (EU) 2018/858
of the European Parliament and of the Council (29), Regulation (EU) 2018/1139
of the European Parliament and of the Council (30), and Regulation (EU) 2019/2144
of the European
- source_sentence: What specific rights of children are highlighted in Article 24
of the Charter and the United Nations Convention on the Rights of the Child?
sentences:
- it is important to highlight the fact that children have specific rights as enshrined
in Article 24 of the Charter and in the United Nations Convention on the Rights
of the Child, further developed in the UNCRC General Comment No 25 as regards
the digital environment, both of which require consideration of the children’s
vulnerabilities and provision of such protection and care as necessary for their
well-being. The fundamental right to a high level of environmental protection
enshrined in the Charter and implemented in Union policies should also be considered
when assessing the severity of the harm that an AI system can cause, including
in relation to the health and safety of persons.
- of AI systems that are high-risk and use cases that are not.
- As regards high-risk AI systems that are safety components of products or systems,
or which are themselves products or systems falling within the scope of Regulation
(EC) No 300/2008 of the European Parliament and of the Council (24), Regulation
(EU) No 167/2013 of the European Parliament and of the Council (25), Regulation
(EU) No 168/2013 of the European Parliament and of the Council (26), Directive
2014/90/EU of the European Parliament and of the Council (27), Directive (EU)
2016/797 of the European Parliament and of the Council (28), Regulation (EU) 2018/858
of the European Parliament and of the Council (29), Regulation (EU) 2018/1139
of the European Parliament and of the Council (30), and Regulation (EU) 2019/2144
of the European
- source_sentence: What is the significance of the number 4 in the provided context?
sentences:
- are intended to be used solely for the purpose of enabling cybersecurity and personal
data protection measures should not be considered to be high-risk AI systems.
- (4)
- '(5)
At the same time, depending on the circumstances regarding its specific application,
use, and level of technological development, AI may generate risks and cause harm
to public interests and fundamental rights that are protected by Union law. Such
harm might be material or immaterial, including physical, psychological, societal
or economic harm.
(6)'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.9166666666666666
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1.0
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1.0
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1.0
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.9166666666666666
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.19999999999999998
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.09999999999999999
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.9166666666666666
name: Cosine Recall@1
- type: cosine_recall@3
value: 1.0
name: Cosine Recall@3
- type: cosine_recall@5
value: 1.0
name: Cosine Recall@5
- type: cosine_recall@10
value: 1.0
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9665164429315495
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.954861111111111
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9548611111111112
name: Cosine Map@100
---
# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
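Because the trailing `Normalize()` module makes every embedding unit-length, a plain dot product between two embeddings already equals their cosine similarity. A quick check of this property (a sketch, assuming the model is loaded as shown in the Usage section below):
```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Mdean77/legal-ft-2")
emb = model.encode(["publicly accessible spaces", "critical infrastructure"])

print(np.linalg.norm(emb, axis=1))  # ~[1. 1.], unit norms thanks to Normalize()
print(emb[0] @ emb[1])              # dot product equals cosine similarity here
```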
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Mdean77/legal-ft-2")
# Run inference
sentences = [
'What is the significance of the number 4 in the provided context?',
'(4)',
'are intended to be used solely for the purpose of enabling cybersecurity and personal data protection measures should not be considered to be high-risk AI systems.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
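Because this model was trained with `MatryoshkaLoss` (see Training Details below), its embeddings can be truncated to one of the trained dimensions (768, 512, 256, 128, or 64) for cheaper storage and search, at a modest quality cost. A minimal sketch, assuming the `truncate_dim` argument available in recent sentence-transformers releases:
```python
from sentence_transformers import SentenceTransformer

# Load the model so that encode() returns 256-dimensional embeddings
model = SentenceTransformer("Mdean77/legal-ft-2", truncate_dim=256)
embeddings = model.encode(["What rights of children does Article 24 of the Charter cover?"])
print(embeddings.shape)
# (1, 256)
```
Comparisons on truncated embeddings should use cosine similarity (the default for `model.similarity`), since truncation breaks the unit normalization of the full-length vectors.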
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Information Retrieval
* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
| Metric | Value |
|:--------------------|:-----------|
| cosine_accuracy@1 | 0.9167 |
| cosine_accuracy@3 | 1.0 |
| cosine_accuracy@5 | 1.0 |
| cosine_accuracy@10 | 1.0 |
| cosine_precision@1 | 0.9167 |
| cosine_precision@3 | 0.3333 |
| cosine_precision@5 | 0.2 |
| cosine_precision@10 | 0.1 |
| cosine_recall@1 | 0.9167 |
| cosine_recall@3 | 1.0 |
| cosine_recall@5 | 1.0 |
| cosine_recall@10 | 1.0 |
| **cosine_ndcg@10** | **0.9665** |
| cosine_mrr@10 | 0.9549 |
| cosine_map@100 | 0.9549 |
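The evaluation split itself is not published with this card. To reproduce this style of evaluation on your own query/corpus pairs, a hedged sketch with toy data (the ids and texts below are illustrative, not the actual eval set):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("Mdean77/legal-ft-2")

# Toy stand-ins; the real evaluation queries and corpus are not included in this card
queries = {"q1": "How are children's rights addressed?"}
corpus = {
    "d1": "children have specific rights as enshrined in Article 24 of the Charter",
    "d2": "(50)",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
results = evaluator(model)
print(results)  # accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100
```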
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 400 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 400 samples:
| | sentence_0 | sentence_1 |
|:--------|:-----------------------------------------------------------------------------------|:-----------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 10 tokens</li><li>mean: 20.43 tokens</li><li>max: 35 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 93.01 tokens</li><li>max: 186 tokens</li></ul> |
* Samples:
| sentence_0 | sentence_1 |
|:-----------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>What is the significance of the number 50 in the given context?</code> | <code>(50)</code> |
| <code>How does the context relate to the concept of fifty?</code> | <code>(50)</code> |
| <code>What are the ethical principles mentioned in the context for developing voluntary best practices and standards?</code> | <code>encouraged to take into account, as appropriate, the ethical principles for the development of voluntary best practices and standards.</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters (a construction sketch follows the JSON):
```json
{
"loss": "MultipleNegativesRankingLoss",
"matryoshka_dims": [
768,
512,
256,
128,
64
],
"matryoshka_weights": [
1,
1,
1,
1,
1
],
"n_dims_per_step": -1
}
```
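Equivalently, in code: a sketch of how this loss is typically constructed with sentence-transformers (the card confirms the loss types and dimensions; the variable names are illustrative):
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Inner loss: each (sentence_0, sentence_1) pair uses the other in-batch
# sentence_1 entries as negatives
inner_loss = MultipleNegativesRankingLoss(model)

# Wrapper: apply the same ranking objective at several truncated embedding sizes
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```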
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin
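These non-default values map directly onto `SentenceTransformerTrainingArguments`. A hedged training sketch (the toy dataset below stands in for the unpublished 400-pair training set; the output path and eval split are assumptions):
```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")
loss = MatryoshkaLoss(model, MultipleNegativesRankingLoss(model),
                      matryoshka_dims=[768, 512, 256, 128, 64])

# Toy stand-in for the 400 (sentence_0, sentence_1) pairs; not the real data
train_dataset = Dataset.from_dict({
    "sentence_0": ["What is the significance of the number 50 in the given context?"],
    "sentence_1": ["(50)"],
})

args = SentenceTransformerTrainingArguments(
    output_dir="legal-ft-2",            # assumed output path
    num_train_epochs=10,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    eval_strategy="steps",
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,         # stand-in; use a held-out split in practice
    loss=loss,
)
trainer.train()
```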
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin
</details>
### Training Logs
| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| 1.0 | 40 | 0.9506 |
| 1.25 | 50 | 0.9621 |
| 2.0 | 80 | 0.9492 |
| 2.5 | 100 | 0.9478 |
| 3.0 | 120 | 0.9519 |
| 3.75 | 150 | 0.9611 |
| 4.0 | 160 | 0.9596 |
| 5.0 | 200 | 0.9715 |
| 6.0 | 240 | 0.9742 |
| 6.25 | 250 | 0.9665 |
| 7.0 | 280 | 0.9588 |
| 7.5 | 300 | 0.9665 |
| 8.0 | 320 | 0.9665 |
| 8.75 | 350 | 0.9638 |
| 9.0 | 360 | 0.9638 |
| 10.0 | 400 | 0.9665 |
### Framework Versions
- Python: 3.13.0
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.6.0
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->