Nomic v1.5 Chatbot Matryoshka
This is a sentence-transformers model finetuned from nomic-ai/nomic-embed-text-v1.5. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: nomic-ai/nomic-embed-text-v1.5
- Maximum Sequence Length: 8192 tokens
- Output Dimensionality: 768 dimensions
- Similarity Function: Cosine Similarity
- Language: en
- License: apache-2.0
Model Sources
- Documentation: Sentence Transformers Documentation (https://www.sbert.net)
- Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
- Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NomicBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("MANMEET75/nomic-embed-text-v1.5-Chatbot-matryoshka")
# Run inference
sentences = [
"I can understand and respond in multiple Indian regional languages. Feel free to communicate with me in the language you're most comfortable with.",
'Bharti, what languages can you understand and respond to?',
'Bharti, can you provide tips for effective online communication?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
Evaluation
Metrics
Information Retrieval
- Dataset: dim_768
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.907 |
cosine_accuracy@3 | 0.9767 |
cosine_accuracy@5 | 0.9767 |
cosine_accuracy@10 | 0.9767 |
cosine_precision@1 | 0.907 |
cosine_precision@3 | 0.3256 |
cosine_precision@5 | 0.1953 |
cosine_precision@10 | 0.0977 |
cosine_recall@1 | 0.907 |
cosine_recall@3 | 0.9767 |
cosine_recall@5 | 0.9767 |
cosine_recall@10 | 0.9767 |
cosine_ndcg@10 | 0.951 |
cosine_mrr@10 | 0.9419 |
cosine_map@100 | 0.9428 |
Information Retrieval
- Dataset: dim_512
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.907 |
cosine_accuracy@3 | 0.9767 |
cosine_accuracy@5 | 0.9767 |
cosine_accuracy@10 | 0.9767 |
cosine_precision@1 | 0.907 |
cosine_precision@3 | 0.3256 |
cosine_precision@5 | 0.1953 |
cosine_precision@10 | 0.0977 |
cosine_recall@1 | 0.907 |
cosine_recall@3 | 0.9767 |
cosine_recall@5 | 0.9767 |
cosine_recall@10 | 0.9767 |
cosine_ndcg@10 | 0.951 |
cosine_mrr@10 | 0.9419 |
cosine_map@100 | 0.9426 |
Information Retrieval
- Dataset: dim_256
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.8837 |
cosine_accuracy@3 | 0.9535 |
cosine_accuracy@5 | 0.9767 |
cosine_accuracy@10 | 0.9767 |
cosine_precision@1 | 0.8837 |
cosine_precision@3 | 0.3178 |
cosine_precision@5 | 0.1953 |
cosine_precision@10 | 0.0977 |
cosine_recall@1 | 0.8837 |
cosine_recall@3 | 0.9535 |
cosine_recall@5 | 0.9767 |
cosine_recall@10 | 0.9767 |
cosine_ndcg@10 | 0.9378 |
cosine_mrr@10 | 0.9244 |
cosine_map@100 | 0.9247 |
Information Retrieval
- Dataset: dim_128
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.8837 |
cosine_accuracy@3 | 0.9767 |
cosine_accuracy@5 | 0.9767 |
cosine_accuracy@10 | 0.9767 |
cosine_precision@1 | 0.8837 |
cosine_precision@3 | 0.3256 |
cosine_precision@5 | 0.1953 |
cosine_precision@10 | 0.0977 |
cosine_recall@1 | 0.8837 |
cosine_recall@3 | 0.9767 |
cosine_recall@5 | 0.9767 |
cosine_recall@10 | 0.9767 |
cosine_ndcg@10 | 0.9394 |
cosine_mrr@10 | 0.9264 |
cosine_map@100 | 0.9264 |
Information Retrieval
- Dataset: dim_64
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.9302 |
cosine_accuracy@3 | 0.9767 |
cosine_accuracy@5 | 0.9767 |
cosine_accuracy@10 | 0.9767 |
cosine_precision@1 | 0.9302 |
cosine_precision@3 | 0.3256 |
cosine_precision@5 | 0.1953 |
cosine_precision@10 | 0.0977 |
cosine_recall@1 | 0.9302 |
cosine_recall@3 | 0.9767 |
cosine_recall@5 | 0.9767 |
cosine_recall@10 | 0.9767 |
cosine_ndcg@10 | 0.9596 |
cosine_mrr@10 | 0.9535 |
cosine_map@100 | 0.9538 |
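Each table above was produced with InformationRetrievalEvaluator at the named embedding dimension. A minimal sketch of running the same evaluator on your own data follows; the queries, corpus, and IDs below are hypothetical placeholders, not the actual evaluation split:
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator
model = SentenceTransformer(
    "MANMEET75/nomic-embed-text-v1.5-Chatbot-matryoshka",
    trust_remote_code=True,  # assumption, as in the usage sketch above
)
# Hypothetical data: query id -> text, doc id -> text, query id -> relevant doc ids
queries = {"q1": "Bharti, what languages can you understand and respond to?"}
corpus = {"d1": "I can understand and respond in multiple Indian regional languages."}
relevant_docs = {"q1": {"d1"}}
evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dim_768",
)
results = evaluator(model)
print(results)  # accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100
To reproduce the smaller-dimension rows, reload the model with truncate_dim set to 512, 256, 128, or 64.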
Training Details
Training Dataset
Unnamed Dataset
- Size: 530 training samples
- Columns: positive and anchor
- Approximate statistics based on the first 1000 samples:

positive | anchor |
---|---|
type: string | type: string |
min: 11 tokens | min: 7 tokens |
mean: 35.33 tokens | mean: 17.3 tokens |
max: 99 tokens | max: 29 tokens |
- Samples:

positive | anchor |
---|---|
BharatPe Speaker comes with the following benefits: - Helps you avoid payment fraud - Lightweight & Easy installation process - Compatible with SIM & GPRS connectivity - Comes with a battery, no hassle of constant charging - Available in 10 Languages - Cashback Offers - Free replacement To Know more and place an order, tap below http://bharatpe.in/speaker. | What are the benefits of the BharatPe speaker? |
BharatPe Speaker comes with the following benefits: - Helps you avoid payment fraud - Lightweight & Easy installation process - Compatible with SIM & GPRS connectivity - Comes with a battery, no hassle of constant charging - Available in 10 Languages - Cashback Offers - Free replacement To Know more and place an order, tap below http://bharatpe.in/speaker. | What advantages does the BharatPe speaker offer? |
BharatPe Speaker comes with the following benefits: - Helps you avoid payment fraud - Lightweight & Easy installation process - Compatible with SIM & GPRS connectivity - Comes with a battery, no hassle of constant charging - Available in 10 Languages - Cashback Offers - Free replacement To Know more and place an order, tap below http://bharatpe.in/speaker. | Can you outline the benefits of using the BharatPe speaker? |
- Loss: MatryoshkaLoss with these parameters:

{
    "loss": "MultipleNegativesRankingLoss",
    "matryoshka_dims": [768, 512, 256, 128, 64],
    "matryoshka_weights": [1, 1, 1, 1, 1],
    "n_dims_per_step": -1
}
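As a sketch, and assuming the standard Sentence Transformers 3.x losses API, this configuration corresponds to wrapping MultipleNegativesRankingLoss in MatryoshkaLoss:
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)
# Inner loss: in-batch negatives ranking over (anchor, positive) pairs
inner_loss = MultipleNegativesRankingLoss(model)
# Outer loss: apply the inner loss at each truncated embedding size, equally weighted
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,  # use every dimension at each training step
)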
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: epoch
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- gradient_accumulation_steps: 16
- learning_rate: 2e-05
- num_train_epochs: 10
- lr_scheduler_type: cosine
- warmup_ratio: 0.1
- tf32: False
- load_best_model_at_end: True
- optim: adamw_torch_fused
- batch_sampler: no_duplicates
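As a sketch, these non-default values map onto SentenceTransformerTrainingArguments roughly as follows; output_dir is a placeholder, and save_strategy="epoch" is an assumption (saving must align with epoch-level evaluation for load_best_model_at_end to work):
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers
args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder
    eval_strategy="epoch",
    save_strategy="epoch",  # assumption: must match eval_strategy when load_best_model_at_end=True
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    num_train_epochs=10,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    tf32=False,
    load_best_model_at_end=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)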
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: epoch
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 16
- eval_accumulation_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 10
- max_steps: -1
- lr_scheduler_type: cosine
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: False
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch_fused
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
Training Logs
Epoch | Step | Training Loss | dim_128_cosine_map@100 | dim_256_cosine_map@100 | dim_512_cosine_map@100 | dim_64_cosine_map@100 | dim_768_cosine_map@100 |
---|---|---|---|---|---|---|---|
0.9412 | 1 | - | 0.7883 | 0.8148 | 0.8134 | 0.7657 | 0.8234 |
1.8824 | 2 | - | 0.8953 | 0.8956 | 0.8859 | 0.8273 | 0.8855 |
2.8235 | 3 | - | 0.9167 | 0.9150 | 0.9310 | 0.8926 | 0.9292 |
3.7647 | 4 | - | 0.9205 | 0.9208 | 0.9348 | 0.9073 | 0.9349 |
4.7059 | 5 | - | 0.9244 | 0.9247 | 0.9348 | 0.9151 | 0.9388 |
5.6471 | 6 | - | 0.9244 | 0.9247 | 0.9387 | 0.9189 | 0.9389 |
6.5882 | 7 | - | 0.9244 | 0.9247 | 0.9387 | 0.9189 | 0.9389 |
7.5294 | 8 | - | 0.9244 | 0.9247 | 0.9388 | 0.9538 | 0.9428 |
8.4706 | 9 | - | 0.9264 | 0.9247 | 0.9426 | 0.9538 | 0.9428 |
9.4118 | 10 | 1.9538 | 0.9264 | 0.9247 | 0.9426 | 0.9538 | 0.9428 |
- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.0.1
- Transformers: 4.41.2
- PyTorch: 2.1.2+cu121
- Accelerate: 0.32.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}