SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 350 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: 22.7M parameters (F32, Safetensors)

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
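
The mean-pooling plus Normalize() pipeline above produces unit-length embeddings, so the dot product of two embeddings equals their cosine similarity. A minimal sanity-check sketch, assuming the model is loaded as in the Usage section below:

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("zihoo/all-MiniLM-L6-v2-WMGPL")
embeddings = model.encode(["what is mindfulness?", "mindfulness at work"])

print(model.max_seq_length)                  # 350; longer inputs are truncated
print(np.linalg.norm(embeddings, axis=1))    # ~[1. 1.] because of the Normalize() module
print(float(embeddings[0] @ embeddings[1]))  # dot product equals cosine similarity here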

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("zihoo/all-MiniLM-L6-v2-WMGPL")
# Run inference
sentences = [
    'what is mindfulness?',
    'Workplace Mindfulness Mindfulness is also defined as a state (e.g., Bishop et al.,  2004; Good et al., 2016; Lau et al., 2006; Tanay & Bernstein,  2013) of being aware of and attentive to what is taking place  internally and externally at that moment (Good et al., 2016;  Lau et al., 2006; Tanay & Bernstein, 2013). For example, Lau  et al., (2006, p. 1447) described mindfulness as “a mode, or  state-like quality that is maintained only when attention to  experience is intentionally cultivated with an open, nonjudg mental orientation to experience.” More recently, Good et al.,  (2016, p. 117) defined mindfulness as “receptive attention to  and awareness of present events and experience.”',
    'Workplace Mindfulness Brown and Ryan (2003) further propose that, despite their  intertwined nature, distinctions exist between attention and  awareness—the insights gained by sustained awareness can  only be translated into specific actions by paying focused  attention to our behaviors or the tasks at hand (Martin,  1997). Hence, heightened attention to and awareness of  experiences and events should capture two different aspects  of mindfulness. Recent research has also emphasized that  attention and awareness should be distinguished from each  other because attention reflects an ever-changing factor of  consciousness, whereas awareness refers to a specific and  stable state of consciousness (Selart et al., in press). In the  past, attention and awareness have proved important to the  study of mindfulness-promoting practices (Brown & Ryan,  2004), as some of these practices highlight focused attention  whereas others emphasize awareness (Bishop et al., 2004).  Notably, research has yielded empirical support confirming  these distinctions (Feldman et al., 2007).',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
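
For semantic search, the same similarity matrix ranks the passages against the query. A short follow-on sketch that reuses the variables from the snippet above:

# Rank the two passages (indices 1 and 2) against the query (index 0)
query_to_docs = similarities[0, 1:]
best = int(query_to_docs.argmax()) + 1
print(float(query_to_docs.max()))  # highest cosine similarity to the query
print(sentences[best][:80])        # beginning of the most similar passage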

Training Details

Training Dataset

Unnamed Dataset

  • Size: 160,000 training samples
  • Columns: sentence_0, sentence_1, sentence_2, and label
  • Approximate statistics based on the first 1000 samples:
    • sentence_0: string; min 5 tokens, mean 9.0 tokens, max 25 tokens
    • sentence_1: string; min 94 tokens, mean 254.31 tokens, max 350 tokens
    • sentence_2: string; min 94 tokens, mean 253.05 tokens, max 350 tokens
    • label: float; min -9.79, mean 3.84, max 20.17
  • Samples:
    Sample 1
    • sentence_0: why is mindfulness used at work
    • sentence_1: Assessing Facets of Doing so, the present endeavor makes three contributions to the literature. First, using the multidimensional scale developed in the present work, we will provide first insights into the differential validities of subfacets of mind fulness for key work outcomes. This will foster a refined understanding of the mechanisms of action inherent to mindfulness and help understand why mindfulness matters for which work outcome (Bishop, Lau, Shapiro, Carlson, Anderson, Carmody, Segal, Abbey, Speca, Velting, & Devins, 2004; Shapiro, Carlson, Astin, & Freedman, 2006). Second, although prominent mindfulness theories and scholarly work on mindfulness in the clinical area suggest that mindfulness consists of multiple subfacets (Baer et al., 2006; Bishop et al., 2004), research on mindfulness in the context of work has almost exclusively operationalized mindfulness by assessing the awareness component (for an exception see Liang et al., 2017). This bears the risk of con...
    • sentence_2: Assessing Facets of Over the last 7 years, research into mindfulness in the context of work has been gaining momentum and there is a growing body of research pro viding initial evidence on the benefits of mindfulness for core workplace outcomes. Especially health and well-being-related outcomes have been at the center of research attention, but also interpersonal relationships, lead ership and performance outcomes (for reviews and meta-analyses see Eby, Allen, Conley, Williamson, Henderson, & Mancini, 2019; Good, Lyddy, Glomb, Bono, Brown, Duffy, Baer, Brewer, & Lazar, 2016; Mesmer-Magnus, Manapragada, Viswesvaran, & Allen, 2017). Also, practitioners have become increasingly interested in mindfulness and its applications in the context of work. Organizations including Google, AETNA, IBM, or SAP, have started offering mindfulness trainings to their workforce (Hyland, Lee, & Mills, 2015). With the first empirical studies appearing in the scientific IO literature 8 years ago (H...
    • label: -1.3994250297546387
    Sample 2
    • sentence_0: who developed mindfulness scales
    • sentence_1: MAAS FMI A variety of measures of mindfulness have been constructed such as the MAAS (Brown & Ryan, 2003), the FMI (Buchheld et al., 2001), the Toronto Mindfulness Scale (TMS) (Lau et al., 2006), the Kentucky Inventory of Mindfulness (KIMS) (Baer, Smith & Allen, 2004), the Cognitive and Affective Mindfulness Scale (Feldman, Hayes, Kumar, Greeson & Laurenceau, 2007) and the Southampton Mindfulness Questionnaire (Chadwick, Hember, Symes, Peters, Kuipers, & Dagnan, 2008). These scales differ because some measure mindfulness as a unidimensional construct versus a multi-faceted construct (Baer et al., 2006), while others measure mindfulness as a trait-like or state-like construct (Dane, 2011). Some consider only the mental state, whereas others include bodily sensations and experience (Grossman, 2008). Furthermore, some measures (e.g. KIMS) include the reported ability to verbally describe experience (e.g. ‘I am good at finding the words to describe my feelings’), while othe...
    • sentence_2: Workplace Mindfulness Mindfulness is widely considered as “paying attention in a par ticular way: on purpose, in the present moment, and nonjudg mentally” (Kabat-Zinn, 1994, p. 4). However, scholars have not reached a consensus on the essential features of mindful ness, with various conceptualizations such as a set of skills, a state, a trait, and a cognitive process. In what follows, we sum marize the prevailing views of mindfulness in the literature.
    • label: 8.103286743164062
    Sample 3
    • sentence_0: what measures mindfulness
    • sentence_1: Workplace Mindfulness Scholars have developed several measures of mindfulness (Table 1). These measures help us understand the construct of mindfulness, but they are very different in terms of con ceptualization, factor structure, scoring, reliability, and validity. For example, the Freiburg Mindfulness Inventory (FMI; Buchheld et al., 2001) and Toronto Mindfulness Scale (TMS; Lau et al., 2006) were developed to measure states of mindfulness. The Mindfulness Attention and Awareness Scale (MAAS; Brown & Ryan, 2003), Cognitive and Affec tive Mindfulness Scale—Revised (CAMS-R; Feldman et al., 2007), and Philadelphia Mindfulness Questionnaire (PMQ; Cardaciotto et al., 2008) have been employed to measure mindfulness as a trait. The Five Facet Mindfulness Question naire (FFMQ; Baer et al., 2006), Experiences Questionnaire (EQ; Fresco et al., 2007), and Kentucky Inventory of Mind fulness Skills (KIMS; Baer et al., 2004) seek to measure mindfulness skills. The Southampton Mindfulne...
    • sentence_2: Workplace Mindfulness Given this background, our conceptualization is expected to be appropriate and valuable in the workplace because compared with the general mindfulness scales, the Work place Mindfulness Scale can measure individual mindfulness in the work context more accurately and relevantly. Practi cally speaking, adopting a skill perspective emphasizing the variability of mindfulness provides useful guidance to employees and organizations, as they aim to improve indi viduals’ mindfulness by implementing interventions. The skill view also assumes a degree of stability for mindful ness—that is, this construct is influenced by contextual fac tors but remains steady over a period of time.
    • label: 1.8723740577697754
  • Loss: gpl.toolkit.loss.MarginDistillationLoss
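
In GPL-style training, the label column above is a teacher margin: a cross-encoder's score for (query, positive passage) minus its score for (query, negative passage). MarginDistillationLoss trains the bi-encoder so that its own score margin regresses onto this teacher margin. Below is a minimal sketch of that objective using sentence_transformers.losses.MarginMSELoss, which implements essentially the same margin-MSE formulation; the column names follow the dataset above, but the row values are illustrative and the exact GPL training script may differ:

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MarginMSELoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# One illustrative row: query, positive passage, hard negative, teacher margin.
train_dataset = Dataset.from_dict({
    "sentence_0": ["what measures mindfulness"],
    "sentence_1": ["Scholars have developed several measures of mindfulness."],
    "sentence_2": ["Attention and awareness capture two aspects of mindfulness."],
    "label": [1.87],
})

# The student is optimized so that score(query, positive) - score(query, negative)
# matches the teacher margin in the label column (mean squared error).
loss = MarginMSELoss(model)
trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()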

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 1
  • max_steps: 10000
  • multi_dataset_batch_sampler: round_robin
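
Expressed as training arguments, the overrides above map onto SentenceTransformerTrainingArguments roughly as follows (a sketch; the output directory name is an assumption):

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="all-MiniLM-L6-v2-WMGPL",        # assumed output directory
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=1,
    max_steps=10000,                            # takes precedence over num_train_epochs
    multi_dataset_batch_sampler="round_robin",
)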

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 1
  • max_steps: 10000
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step Training Loss
0.05 500 32.954
0.1 1000 29.8033
0.15 1500 29.0685
0.2 2000 29.799
0.25 2500 28.8365
0.3 3000 28.9655
0.35 3500 29.0616
0.4 4000 29.378
0.45 4500 29.0731
0.5 5000 27.8961
0.55 5500 28.9225
0.6 6000 29.1866
0.65 6500 28.4707
0.7 7000 28.291
0.75 7500 28.4819
0.8 8000 28.5333
0.85 8500 27.9674
0.9 9000 29.8078
0.95 9500 27.0718
1.0 10000 29.6496

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 3.3.1
  • Transformers: 4.47.1
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.2.1
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}