SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: sentence-transformers/all-MiniLM-L6-v2
- Maximum Sequence Length: 350 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
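In other words, the BERT token embeddings are mean-pooled and then L2-normalized. As a minimal sketch of the equivalent computation using `transformers` directly (assuming the checkpoint loads via `AutoModel`, as Sentence Transformers checkpoints generally do):

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("zihoo/all-MiniLM-L6-v2-WMGPL")
bert = AutoModel.from_pretrained("zihoo/all-MiniLM-L6-v2-WMGPL")

def encode(sentences):
    batch = tokenizer(sentences, padding=True, truncation=True,
                      max_length=350, return_tensors="pt")
    with torch.no_grad():
        token_embeddings = bert(**batch).last_hidden_state  # (batch, seq_len, 384)
    # Pooling module: mean over token embeddings, ignoring padding positions.
    mask = batch["attention_mask"].unsqueeze(-1).float()
    pooled = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
    # Normalize module: unit-length vectors, so dot product equals cosine similarity.
    return F.normalize(pooled, p=2, dim=1)
```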
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("zihoo/all-MiniLM-L6-v2-WMGPL")

# Run inference
sentences = [
    'what is mindfulness?',
    'Workplace Mindfulness Mindfulness is also defined as a state (e.g., Bishop et al., 2004; Good et al., 2016; Lau et al., 2006; Tanay & Bernstein, 2013) of being aware of and attentive to what is taking place internally and externally at that moment (Good et al., 2016; Lau et al., 2006; Tanay & Bernstein, 2013). For example, Lau et al., (2006, p. 1447) described mindfulness as “a mode, or state-like quality that is maintained only when attention to experience is intentionally cultivated with an open, nonjudgmental orientation to experience.” More recently, Good et al., (2016, p. 117) defined mindfulness as “receptive attention to and awareness of present events and experience.”',
    'Workplace Mindfulness Brown and Ryan (2003) further propose that, despite their intertwined nature, distinctions exist between attention and awareness—the insights gained by sustained awareness can only be translated into specific actions by paying focused attention to our behaviors or the tasks at hand (Martin, 1997). Hence, heightened attention to and awareness of experiences and events should capture two different aspects of mindfulness. Recent research has also emphasized that attention and awareness should be distinguished from each other because attention reflects an ever-changing factor of consciousness, whereas awareness refers to a specific and stable state of consciousness (Selart et al., in press). In the past, attention and awareness have proved important to the study of mindfulness-promoting practices (Brown & Ryan, 2004), as some of these practices highlight focused attention whereas others emphasize awareness (Bishop et al., 2004). Notably, research has yielded empirical support confirming these distinctions (Feldman et al., 2007).',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
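Because the final Normalize() module produces unit-length embeddings, cosine similarity reduces to a dot product. Continuing the snippet above, an illustrative semantic-search use that ranks the two passages against a query:

```python
# Continues the snippet above: rank the two passages against a query.
query_emb = model.encode("what is mindfulness?")    # shape (384,)
passage_embs = model.encode(sentences[1:])          # shape (2, 384)
scores = model.similarity(query_emb, passage_embs)  # cosine scores, shape (1, 2)
best = int(scores.argmax())
print(best, float(scores[0, best]))
```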
Training Details
Training Dataset
Unnamed Dataset
- Size: 160,000 training samples
- Columns: sentence_0, sentence_1, sentence_2, and label
- Approximate statistics based on the first 1000 samples:
|  | sentence_0 | sentence_1 | sentence_2 | label |
|---|---|---|---|---|
| type | string | string | string | float |
| details | min: 5 tokens, mean: 9.0 tokens, max: 25 tokens | min: 94 tokens, mean: 254.31 tokens, max: 350 tokens | min: 94 tokens, mean: 253.05 tokens, max: 350 tokens | min: -9.79, mean: 3.84, max: 20.17 |
- Samples:
| sentence_0 | sentence_1 | sentence_2 | label |
|---|---|---|---|
| why is mindfulness used at work | Assessing Facets of Doing so, the present endeavor makes three contributions to the literature. First, using the multidimensional scale developed in the present work, we will provide first insights into the differential validities of subfacets of mindfulness for key work outcomes. This will foster a refined understanding of the mechanisms of action inherent to mindfulness and help understand why mindfulness matters for which work outcome (Bishop, Lau, Shapiro, Carlson, Anderson, Carmody, Segal, Abbey, Speca, Velting, & Devins, 2004; Shapiro, Carlson, Astin, & Freedman, 2006). Second, although prominent mindfulness theories and scholarly work on mindfulness in the clinical area suggest that mindfulness consists of multiple subfacets (Baer et al., 2006; Bishop et al., 2004), research on mindfulness in the context of work has almost exclusively operationalized mindfulness by assessing the awareness component (for an exception see Liang et al., 2017). This bears the risk of con... | Assessing Facets of Over the last 7 years, research into mindfulness in the context of work has been gaining momentum and there is a growing body of research providing initial evidence on the benefits of mindfulness for core workplace outcomes. Especially health and well-being-related outcomes have been at the center of research attention, but also interpersonal relationships, leadership and performance outcomes (for reviews and meta-analyses see Eby, Allen, Conley, Williamson, Henderson, & Mancini, 2019; Good, Lyddy, Glomb, Bono, Brown, Duffy, Baer, Brewer, & Lazar, 2016; Mesmer-Magnus, Manapragada, Viswesvaran, & Allen, 2017). Also, practitioners have become increasingly interested in mindfulness and its applications in the context of work. Organizations including Google, AETNA, IBM, or SAP, have started offering mindfulness trainings to their workforce (Hyland, Lee, & Mills, 2015). With the first empirical studies appearing in the scientific IO literature 8 years ago (H... | -1.3994250297546387 |
| who developed mindfulness scales | MAAS FMI A variety of measures of mindfulness have been constructed such as the MAAS (Brown & Ryan, 2003), the FMI (Buchheld et al., 2001), the Toronto Mindfulness Scale (TMS) (Lau et al., 2006), the Kentucky Inventory of Mindfulness (KIMS) (Baer, Smith & Allen, 2004), the Cognitive and Affective Mindfulness Scale (Feldman, Hayes, Kumar, Greeson & Laurenceau, 2007) and the Southampton Mindfulness Questionnaire (Chadwick, Hember, Symes, Peters, Kuipers, & Dagnan, 2008). These scales differ because some measure mindfulness as a unidimensional construct versus a multi-faceted construct (Baer et al., 2006), while others measure mindfulness as a trait-like or state-like construct (Dane, 2011). Some consider only the mental state, whereas others include bodily sensations and experience (Grossman, 2008). Furthermore, some measures (e.g. KIMS) include the reported ability to verbally describe experience (e.g. ‘I am good at finding the words to describe my feelings’), while othe... | Workplace Mindfulness Mindfulness is widely considered as “paying attention in a particular way: on purpose, in the present moment, and nonjudgmentally” (Kabat-Zinn, 1994, p. 4). However, scholars have not reached a consensus on the essential features of mindfulness, with various conceptualizations such as a set of skills, a state, a trait, and a cognitive process. In what follows, we summarize the prevailing views of mindfulness in the literature. | 8.103286743164062 |
| what measures mindfulness | Workplace Mindfulness Scholars have developed several measures of mindfulness (Table 1). These measures help us understand the construct of mindfulness, but they are very different in terms of conceptualization, factor structure, scoring, reliability, and validity. For example, the Freiburg Mindfulness Inventory (FMI; Buchheld et al., 2001) and Toronto Mindfulness Scale (TMS; Lau et al., 2006) were developed to measure states of mindfulness. The Mindfulness Attention and Awareness Scale (MAAS; Brown & Ryan, 2003), Cognitive and Affective Mindfulness Scale—Revised (CAMS-R; Feldman et al., 2007), and Philadelphia Mindfulness Questionnaire (PMQ; Cardaciotto et al., 2008) have been employed to measure mindfulness as a trait. The Five Facet Mindfulness Questionnaire (FFMQ; Baer et al., 2006), Experiences Questionnaire (EQ; Fresco et al., 2007), and Kentucky Inventory of Mindfulness Skills (KIMS; Baer et al., 2004) seek to measure mindfulness skills. The Southampton Mindfulne... | Workplace Mindfulness Given this background, our conceptualization is expected to be appropriate and valuable in the workplace because compared with the general mindfulness scales, the Workplace Mindfulness Scale can measure individual mindfulness in the work context more accurately and relevantly. Practically speaking, adopting a skill perspective emphasizing the variability of mindfulness provides useful guidance to employees and organizations, as they aim to improve individuals’ mindfulness by implementing interventions. The skill view also assumes a degree of stability for mindfulness—that is, this construct is influenced by contextual factors but remains steady over a period of time. | 1.8723740577697754 |
- Loss: gpl.toolkit.loss.MarginDistillationLoss
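MarginDistillationLoss is GPL's MarginMSE-style objective: the float labels above are cross-encoder score margins, CE(query, positive) minus CE(query, negative), and the bi-encoder is trained so that its own score margin matches the teacher's. A hedged sketch of the idea (the gpl.toolkit internals may differ, and the teacher model named below is an assumption, not taken from this card):

```python
import torch
import torch.nn.functional as F
from sentence_transformers import CrossEncoder, SentenceTransformer

query = "who developed mindfulness scales"
pos = "MAAS FMI A variety of measures of mindfulness have been constructed ..."
neg = "Workplace Mindfulness Mindfulness is widely considered as paying attention ..."

# Step 1 (pseudo-labeling): a cross-encoder teacher scores (query, passage) pairs;
# the training label is the score margin. The teacher choice here is an assumption.
teacher = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
s_pos, s_neg = teacher.predict([(query, pos), (query, neg)])
teacher_margin = torch.tensor([float(s_pos - s_neg)])  # e.g. 8.10 or -1.40 above

# Step 2 (distillation): train the bi-encoder so its score margin matches.
# Illustrates the objective only; real training backpropagates through the encoder.
student = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
q, p, n = student.encode([query, pos, neg], convert_to_tensor=True)
student_margin = ((q * p).sum() - (q * n).sum()).unsqueeze(0)
loss = F.mse_loss(student_margin, teacher_margin)  # MarginMSE
```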
Training Hyperparameters
Non-Default Hyperparameters
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- num_train_epochs: 1
- max_steps: 10000
- multi_dataset_batch_sampler: round_robin
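If reproducing this setup with the Sentence Transformers trainer, the non-default values above map onto the training arguments roughly as follows (a sketch under that assumption; `output_dir` is illustrative):

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # illustrative path
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=1,
    max_steps=10000,  # when set, max_steps takes precedence over num_train_epochs
    multi_dataset_batch_sampler="round_robin",
)
```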
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: no
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 1
- max_steps: 10000
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
Training Logs
Epoch | Step | Training Loss |
---|---|---|
0.05 | 500 | 32.954 |
0.1 | 1000 | 29.8033 |
0.15 | 1500 | 29.0685 |
0.2 | 2000 | 29.799 |
0.25 | 2500 | 28.8365 |
0.3 | 3000 | 28.9655 |
0.35 | 3500 | 29.0616 |
0.4 | 4000 | 29.378 |
0.45 | 4500 | 29.0731 |
0.5 | 5000 | 27.8961 |
0.55 | 5500 | 28.9225 |
0.6 | 6000 | 29.1866 |
0.65 | 6500 | 28.4707 |
0.7 | 7000 | 28.291 |
0.75 | 7500 | 28.4819 |
0.8 | 8000 | 28.5333 |
0.85 | 8500 | 27.9674 |
0.9 | 9000 | 29.8078 |
0.95 | 9500 | 27.0718 |
1.0 | 10000 | 29.6496 |
Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```