bert-tiny finetuned on pair_similarity_new_1231

This is a sentence-transformers model finetuned from prajjwal1/bert-tiny on the pair_similarity_new_1231 dataset. It maps sentences & paragraphs to a 128-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: prajjwal1/bert-tiny
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 128 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset: pair_similarity_new_1231
  • Language: en
  • License: apache-2.0

Model Sources

  • Documentation: https://sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers
  • Hugging Face: https://huggingface.co/models?library=sentence-transformers

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 128, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
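
The Pooling module above mean-pools token embeddings into the 128-dimensional sentence vector. For illustration, a minimal sketch of the equivalent computation with the plain transformers API; the mask-weighted average is the standard mean-pooling recipe rather than the library's exact internals, and loading the Hub repo with AutoModel is an assumption (Sentence Transformers checkpoints usually store the transformer weights at the repo root):

import torch
from transformers import AutoTokenizer, AutoModel

# Assumption: the finetuned transformer weights load directly via AutoModel.
tokenizer = AutoTokenizer.from_pretrained("Tien09/tiny_bert_ft_sim_score_1231")
model = AutoModel.from_pretrained("Tien09/tiny_bert_ft_sim_score_1231")

def mean_pool(last_hidden_state, attention_mask):
    # Zero out padding positions, then average the remaining token embeddings.
    mask = attention_mask.unsqueeze(-1).float()
    return (last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

batch = tokenizer(["example card text"], padding=True, truncation=True,
                  max_length=512, return_tensors="pt")
with torch.no_grad():
    output = model(**batch)
embedding = mean_pool(output.last_hidden_state, batch["attention_mask"])
print(embedding.shape)  # torch.Size([1, 128])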

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("Tien09/tiny_bert_ft_sim_score_1231")
# Run inference
sentences = [
    'You can target 1 face-up card you control; destroy it, and if you do, Special Summon 1 "Zoodiac" monster from your Deck. You can only use this effect of "Zoodiac Barrage" once per turn. If this card is destroyed by a card effect and sent to the GY: You can target 1 "Zoodiac" Xyz Monster you control; attach this card from your GY to that Xyz Monster as Xyz Material.',
    'Destroy as many Normal Monsters on the field as possible, and if you do, Special Summon Level 4 or lower Dinosaur-Type monsters from your Deck, up to the number destroyed, but destroy them during the End Phase. You can banish this card from your GY, then target 1 Dinosaur-Type monster you control and 1 card your opponent controls; destroy them.',
    'Target 1 "Raidraptor" monster you control; Special Summon 1 monster with the same name as that monster on the field from your hand or Deck in Defense Position. You can only activate 1 "Raidraptor - Call" per turn. You cannot Special Summon monsters during the turn you activate this card, except "Raidraptor" monsters.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 128]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
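
Since model.similarity computes cosine similarity between embedding sets, the same pattern covers a small semantic search; a sketch reusing the variables from the snippet above, with an illustrative query string:

# Rank the three card texts against a new query (the query string is illustrative).
query_embedding = model.encode(["Special Summon 1 monster from your Deck"])
scores = model.similarity(query_embedding, embeddings)  # cosine similarity, shape [1, 3]
print(sentences[scores.argmax().item()])  # closest card text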

Training Details

Training Dataset

pair_similarity_new_1231

  • Dataset: pair_similarity_new_1231 at e141fec
  • Size: 8,959 training samples
  • Columns: effect_text, score, and effect_text2
  • Approximate statistics based on the first 1000 samples:
    • effect_text: string; min 9 tokens, mean 73.57 tokens, max 204 tokens
    • score: float; min 0.0, mean 0.41, max 1.0
    • effect_text2: string; min 4 tokens, mean 73.18 tokens, max 203 tokens
  • Samples:
    • Sample 1 (score: 0.0)
      effect_text: When your opponent's monster attacks a face-up Level 4 or lower Toon Monster on your side of the field, you can make the attack a direct attack to your Life Points.
      effect_text2: During either player's Main Phase: Special Summon this card as a Normal Monster (Reptile-Type/EARTH/Level 4/ATK 1600/DEF 1800). (This card is also still a Trap Card.)
    • Sample 2 (score: 1.0)
      effect_text: When your opponent Special Summons a monster, you can discard 1 card to Special Summon this card from your hand. Your opponent cannot remove cards from play.
      effect_text2: Activate this card by discarding 1 monster, then target 1 monster in your GY whose Level is lower than the discarded monster's original Level; Special Summon it and equip it with this card. The equipped monster has its effects negated. You can only activate 1 "Overdone Burial" per turn.
    • Sample 3 (score: 0.0)
      effect_text: "Mystical Elf" + "Curtain of the Dark Ones"
      effect_text2: This card is used to Ritual Summon "Legendary Flame Lord". You must also Tribute monsters whose total Levels equal 7 or more from the field or your hand.
  • Loss: CoSENTLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "pairwise_cos_sim"
    }
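
For reference, a minimal sketch of reproducing this setup with the Sentence Transformers v3 trainer. The dataset repo id is a guess based on the name above, and the listed parameters (scale 20.0, pairwise_cos_sim) are CoSENTLoss's defaults:

from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import CoSENTLoss

# Assumption: the dataset lives on the Hub under this guessed repo id with the
# columns shown above. CoSENTLoss reads the two text columns in order and uses
# the "score" column as the float label.
train_dataset = load_dataset("Tien09/pair_similarity_new_1231", split="train")
train_dataset = train_dataset.select_columns(["effect_text", "effect_text2", "score"])

model = SentenceTransformer("prajjwal1/bert-tiny")  # default mean pooling, 128-dim output
loss = CoSENTLoss(model, scale=20.0)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()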
    

Evaluation Dataset

pair_similarity_new_1231

  • Dataset: pair_similarity_new_1231 at e141fec
  • Size: 1,920 evaluation samples
  • Columns: effect_text, score, and effect_text2
  • Approximate statistics based on the first 1000 samples:
    • effect_text: string; min 6 tokens, mean 72.29 tokens, max 190 tokens
    • score: float; min 0.0, mean 0.43, max 1.0
    • effect_text2: string; min 5 tokens, mean 73.92 tokens, max 206 tokens
  • Samples:
    • Sample 1 (score: 1.0)
      effect_text: 2+ Level 4 monsters This Xyz Summoned card gains 500 ATK x the total Link Rating of Link Monsters linked to this card. You can detach 2 materials from this card, then target 1 Link-4 Cyberse Link Monster in your GY; Special Summon it to your field so it points to this card, also you cannot Special Summon other monsters or attack directly for the rest of this turn.
      effect_text2: 3 Level 4 monsters Once per turn, you can also Xyz Summon "Zoodiac Tigermortar" by using 1 "Zoodiac" monster you control with a different name as Xyz Material. (If you used an Xyz Monster, any Xyz Materials attached to it also become Xyz Materials on this card.) This card gains ATK and DEF equal to the ATK and DEF of all "Zoodiac" monsters attached to it as Materials. Once per turn: You can detach 1 Xyz Material from this card, then target 1 Xyz Monster you control and 1 "Zoodiac" monster in your GY; attach that "Zoodiac" monster to that Xyz Monster as Xyz Material.
    • Sample 2 (score: 0.5)
      effect_text: 1 Tuner + 1 or more non-Tuner Pendulum Monsters Once per turn: You can target 1 Pendulum Monster on the field or 1 card in the Pendulum Zone; destroy it, and if you do, shuffle 1 card on the field into the Deck. Once per turn: You can Special Summon 1 "Dracoslayer" monster from your Deck in Defense Position, but it cannot be used as a Synchro Material for a Summon.
      effect_text2: You can Ritual Summon this card with a "Recipe" card. If this card is Special Summoned: You can target 1 Spell/Trap on the field; destroy it. When a card or effect is activated that targets this card on the field, or when this card is targeted for an attack (Quick Effect): You can Tribute this card and 1 Attack Position monster on either field, and if you do, Special Summon 1 Level 3 or 4 "Nouvelles" Ritual Monster from your hand or Deck. You can only use each effect of "Confiras de Nouvelles" once per turn.
    • Sample 3 (score: 1.0)
      effect_text: If you control an Illusion or Spellcaster monster: Add 1 "White Forest" monster from your Deck to your hand. If this card is sent to the GY to activate a monster effect: You can Set this card. You can only use each effect of "Tales of the White Forest" once per turn.
      effect_text2: If you control no monsters, you can Special Summon this card (from your hand). You can only use each of the following effects of "Kashtira Fenrir" once per turn. During your Main Phase: You can add 1 "Kashtira" monster from your Deck to your hand. When this card declares an attack, or if your opponent activates a monster effect (except during the Damage Step): You can target 1 face-up card your opponent controls; banish it, face-down.
  • Loss: CoSENTLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "pairwise_cos_sim"
    }
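
A sketch of scoring this split with the library's EmbeddingSimilarityEvaluator, which reports Pearson and Spearman correlation between predicted cosine similarities and the gold scores; the dataset repo id and split name are assumptions:

from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator, SimilarityFunction

model = SentenceTransformer("Tien09/tiny_bert_ft_sim_score_1231")
# Assumption: the evaluation split lives in the same guessed dataset repo.
eval_dataset = load_dataset("Tien09/pair_similarity_new_1231", split="test")

evaluator = EmbeddingSimilarityEvaluator(
    sentences1=eval_dataset["effect_text"],
    sentences2=eval_dataset["effect_text2"],
    scores=eval_dataset["score"],
    main_similarity=SimilarityFunction.COSINE,
    name="pair_similarity_eval",
)
print(evaluator(model))  # Pearson/Spearman correlations between cosine scores and labels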
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • learning_rate: 1e-05
  • num_train_epochs: 10
  • warmup_ratio: 0.1
  • fp16: True
  • batch_sampler: no_duplicates
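
These values map one-to-one onto SentenceTransformerTrainingArguments; a minimal sketch (output_dir is illustrative):

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="tiny_bert_ft_sim_score",  # illustrative output path
    eval_strategy="steps",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=1e-5,
    num_train_epochs=10,
    warmup_ratio=0.1,
    fp16=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate texts within a batch
)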

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 1e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss Validation Loss
0.1786 100 4.54 4.3773
0.3571 200 4.4771 4.3503
0.5357 300 4.4594 4.3149
0.7143 400 4.373 4.2905
0.8929 500 4.3383 4.2789
1.0714 600 4.3392 4.2705
1.25 700 4.3188 4.2627
1.4286 800 4.3372 4.2556
1.6071 900 4.3194 4.2487
1.7857 1000 4.2892 4.2420
1.9643 1100 4.2675 4.2375
2.1429 1200 4.3025 4.2312
2.3214 1300 4.2919 4.2246
2.5 1400 4.2853 4.2183
2.6786 1500 4.2463 4.2140
2.8571 1600 4.2486 4.2090
3.0357 1700 4.2376 4.2051
3.2143 1800 4.2365 4.1996
3.3929 1900 4.2223 4.1946
3.5714 2000 4.222 4.1900
3.75 2100 4.2057 4.1852
3.9286 2200 4.1935 4.1842
4.1071 2300 4.2001 4.1771
4.2857 2400 4.2012 4.1725
4.4643 2500 4.1961 4.1677
4.6429 2600 4.1919 4.1626
4.8214 2700 4.1542 4.1619
5.0 2800 4.1713 4.1594
5.1786 2900 4.1628 4.1530
5.3571 3000 4.1611 4.1504
5.5357 3100 4.1659 4.1480
5.7143 3200 4.131 4.1451
5.8929 3300 4.1191 4.1432
6.0714 3400 4.1521 4.1400
6.25 3500 4.1056 4.1374
6.4286 3600 4.1645 4.1340
6.6071 3700 4.1439 4.1305
6.7857 3800 4.0812 4.1298
6.9643 3900 4.0665 4.1323
7.1429 4000 4.1448 4.1261
7.3214 4100 4.1252 4.1236
7.5 4200 4.08 4.1234
7.6786 4300 4.0815 4.1188
7.8571 4400 4.085 4.1192
8.0357 4500 4.1015 4.1191
8.2143 4600 4.0912 4.1176
8.3929 4700 4.0822 4.1160
8.5714 4800 4.0744 4.1155
8.75 4900 4.1033 4.1135
8.9286 5000 4.0588 4.1135
9.1071 5100 4.0848 4.1125
9.2857 5200 4.0666 4.1125
9.4643 5300 4.0766 4.1125
9.6429 5400 4.0796 4.1116
9.8214 5500 4.0875 4.1117
10.0 5600 4.0929 4.1118

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.3.1
  • Transformers: 4.47.1
  • PyTorch: 2.5.1+cu121
  • Accelerate: 1.2.1
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

CoSENTLoss

@online{kexuefm-8847,
    title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
    author={Su Jianlin},
    year={2022},
    month={Jan},
    url={https://kexue.fm/archives/8847},
}