metadata
base_model: BAAI/bge-base-en-v1.5
language:
  - en
library_name: sentence-transformers
license: apache-2.0
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:396
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
widget:
  - source_sentence: >-
      How can technographics contribute to predicting consumer behavior in
      digital marketing?
    sentences:
      - >-
        Data analysis is essential in predicting consumer behavior in digital
        marketing. Analysis of data related to consumer behavior, preferences
        and needs can reveal patterns and trends that can be used to forecast
        future behavior and refine marketing strategies.
      - >-
        Technographics enables businesses to understand the technological habits
        and preferences of their customers. By analyzing this data, companies
        can predict how these users are likely to interact with their digital
        products or services, and tailor their marketing strategies accordingly.
      - >-
        The key components include data collection (gathering data from various
        sources), data analysis (using algorithms and models to analyze data),
        and predictive modelling (predicting future customer behavior based on
        analyzed data).
  - source_sentence: >-
      How is technographic data collected for understanding cross-channel
      behavior?
    sentences:
      - >-
        Technographic data for understanding cross-channel behavior is collected
        through various data analytics tools that track the customer's
        interactions across different digital channels. These tools can monitor
        website usage, mobile app activity, social media engagements, and email
        click-through rates.
      - >-
        Adobe's "Real-Time Marketing Insights" is a notable case. They utilized
        technographic data to identify their customer's most-used digital tools,
        leading to significant enhancements in their personalized marketing
        strategies and an increase in customer engagement.
      - >-
        Technology stack analysis helps identify the tools and platforms a
        company uses for digital marketing. It enables marketers to understand
        the infrastructure that supports their strategies and spot opportunities
        for innovation or consolidation.
  - source_sentence: >-
      How does consumer behavior pattern analysis contribute to the
      effectiveness of digital marketing campaigns?
    sentences:
      - >-
        Machine learning can identify patterns and trends in technographic data
        that may not be obvious to humans. It can also predict future behavior
        based on these patterns, allowing businesses to anticipate consumer
        needs and adjust their strategies accordingly.
      - >-
        Consumer behavior pattern analysis provides insights into how, when, and
        why consumers interact with digital marketing content. These insights
        can be used to refine campaign strategies, enhance personalization, and
        ultimately improve conversion rates and customer loyalty.
      - >-
        Predictive analytics can forecast what content will resonate best with
        certain audience segments based on past engagement. It can guide topics,
        formats, and delivery channels, enabling marketers to create content
        that is more likely to attract and engage their target audience.
  - source_sentence: What are technographics in the context of digital marketing?
    sentences:
      - >-
        Technographics is a market research analysis method that investigates
        the technology-related behaviors and preferences of consumers. This
        includes their usage, adoption and purchase of technology, which is
        crucial in forming a comprehensive understanding of your target
        audience's digital landscape.
      - >-
        Technographics data can be collected through surveys, social media
        mining, or purchased from data providers. The data is analyzed using
        statistical techniques or machine learning algorithms to identify
        patterns and insights related to consumer behavior.
      - >-
        Technographics can help businesses understand what platforms or
        technologies their competitors' customers use, providing insights into
        competitor tech strengths and weaknesses. This can guide businesses in
        differentiating their offers and positioning themselves more effectively
        in the market.
  - source_sentence: How important is it to update technographic data frequently?
    sentences:
      - >-
        By analyzing a competitor's technology stack, marketers can gain
        insights into their strategies, tools, and platforms. This knowledge can
        help them identify gaps in their own stack, adopt superior technologies,
        or find ways to differentiate their approach.
      - >-
        It is crucial. Technology trends and usage patterns evolve quickly.
        Keeping your technographic data up-to-date ensures that your marketing
        strategies remain relevant and effective.
      - >-
        Technographics is a research methodology that provides data about
        consumers based on their technology use, preferences and behavior. This
        method helps businesses understand which technologies their audience is
        using, and how they use them, thereby, informing the development of more
        effective and personalized marketing strategies.
model-index:
  - name: Technographics Marketing Matryoshka
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 768
          type: dim_768
        metrics:
          - type: cosine_accuracy@1
            value: 0.37373737373737376
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.5050505050505051
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.5757575757575758
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.7575757575757576
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.37373737373737376
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.16835016835016833
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.11515151515151514
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.07575757575757575
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.37373737373737376
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.5050505050505051
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.5757575757575758
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.7575757575757576
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.5323267552745661
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.46469456469456494
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.47723382714423296
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 512
          type: dim_512
        metrics:
          - type: cosine_accuracy@1
            value: 0.37373737373737376
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.5151515151515151
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.5757575757575758
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.7272727272727273
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.37373737373737376
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.17171717171717168
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.11515151515151514
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.0727272727272727
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.37373737373737376
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.5151515151515151
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.5757575757575758
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.7272727272727273
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.5279877868900206
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.46727994227994235
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.4818097786730832
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 256
          type: dim_256
        metrics:
          - type: cosine_accuracy@1
            value: 0.35353535353535354
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.48484848484848486
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.5858585858585859
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.7070707070707071
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.35353535353535354
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.16161616161616163
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.11717171717171715
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.07070707070707069
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.35353535353535354
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.48484848484848486
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.5858585858585859
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.7070707070707071
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.5102400942328595
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.44968734968734975
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.4654526924283992
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 128
          type: dim_128
        metrics:
          - type: cosine_accuracy@1
            value: 0.37373737373737376
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.47474747474747475
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.5757575757575758
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.6868686868686869
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.37373737373737376
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.15824915824915825
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.11515151515151512
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.06868686868686867
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.37373737373737376
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.47474747474747475
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.5757575757575758
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.6868686868686869
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.5096813265364254
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.45540724707391383
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.4713790516617994
            name: Cosine Map@100
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: dim 64
          type: dim_64
        metrics:
          - type: cosine_accuracy@1
            value: 0.3434343434343434
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.48484848484848486
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.5353535353535354
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.6868686868686869
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.3434343434343434
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.16161616161616163
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.10707070707070705
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.06868686868686867
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.3434343434343434
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.48484848484848486
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.5353535353535354
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.6868686868686869
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.4979120019313254
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.44037197370530706
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.4556726424225123
            name: Cosine Map@100

Technographics Marketing Matryoshka

This is a sentence-transformers model finetuned from BAAI/bge-base-en-v1.5 on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-base-en-v1.5
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity
  • Training Dataset:
    • json
  • Language: en
  • License: apache-2.0

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
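The three modules can be illustrated on dummy tensors: the Pooling module with pooling_mode_cls_token=True keeps the embedding of the first ([CLS]) token, and Normalize() rescales each embedding to unit L2 norm. A minimal numpy sketch (shapes are real, the data is synthetic, not from the actual BertModel):

```python
import numpy as np

# Dummy transformer output: (batch, seq_len, hidden) = (2, 5, 768);
# real inputs would come from the BertModel in module (0).
rng = np.random.default_rng(0)
token_embeddings = rng.normal(size=(2, 5, 768))

# Module (1): Pooling with pooling_mode_cls_token=True keeps the
# embedding of the first ([CLS]) token for each sentence.
sentence_embeddings = token_embeddings[:, 0, :]  # shape (2, 768)

# Module (2): Normalize() rescales each embedding to unit L2 norm,
# so cosine similarity reduces to a plain dot product.
norms = np.linalg.norm(sentence_embeddings, axis=1, keepdims=True)
sentence_embeddings = sentence_embeddings / norms

print(sentence_embeddings.shape)  # (2, 768)
```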

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("arad1367/technographics-marketing-matryoshka")
# Run inference
sentences = [
    'How important is it to update technographic data frequently?',
    'It is crucial. Technology trends and usage patterns evolve quickly. Keeping your technographic data up-to-date ensures that your marketing strategies remain relevant and effective.',
    "By analyzing a competitor's technology stack, marketers can gain insights into their strategies, tools, and platforms. This knowledge can help them identify gaps in their own stack, adopt superior technologies, or find ways to differentiate their approach.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
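Because the model was trained with MatryoshkaLoss over dims 768/512/256/128/64, its embeddings can also be truncated to a shorter prefix for cheaper storage and search; recent sentence-transformers versions expose this via `SentenceTransformer(model_id, truncate_dim=256)`. The underlying operation, keep the first k dimensions and re-normalize, can be sketched in numpy (illustrative vectors, not real embeddings):

```python
import numpy as np

# Illustrative unit embeddings; in practice these would be the
# (already normalized) output of model.encode(...).
rng = np.random.default_rng(1)
emb = rng.normal(size=(2, 768))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)

# Matryoshka truncation: keep the first k dimensions, then re-normalize
# so that cosine similarity is again a dot product of unit vectors.
k = 256
truncated = emb[:, :k]
truncated = truncated / np.linalg.norm(truncated, axis=1, keepdims=True)

sim = truncated @ truncated.T
print(sim.shape)  # (2, 2)
```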

Evaluation

Metrics

Information Retrieval

Metric dim_768 dim_512 dim_256 dim_128 dim_64
cosine_accuracy@1 0.3737 0.3737 0.3535 0.3737 0.3434
cosine_accuracy@3 0.5051 0.5152 0.4848 0.4747 0.4848
cosine_accuracy@5 0.5758 0.5758 0.5859 0.5758 0.5354
cosine_accuracy@10 0.7576 0.7273 0.7071 0.6869 0.6869
cosine_precision@1 0.3737 0.3737 0.3535 0.3737 0.3434
cosine_precision@3 0.1684 0.1717 0.1616 0.1582 0.1616
cosine_precision@5 0.1152 0.1152 0.1172 0.1152 0.1071
cosine_precision@10 0.0758 0.0727 0.0707 0.0687 0.0687
cosine_recall@1 0.3737 0.3737 0.3535 0.3737 0.3434
cosine_recall@3 0.5051 0.5152 0.4848 0.4747 0.4848
cosine_recall@5 0.5758 0.5758 0.5859 0.5758 0.5354
cosine_recall@10 0.7576 0.7273 0.7071 0.6869 0.6869
cosine_ndcg@10 0.5323 0.528 0.5102 0.5097 0.4979
cosine_mrr@10 0.4647 0.4673 0.4497 0.4554 0.4404
cosine_map@100 0.4772 0.4818 0.4655 0.4714 0.4557
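Note that accuracy@k equals recall@k throughout this table, which indicates exactly one relevant passage per query; in that setting precision@k is simply accuracy@k / k, MRR averages 1/rank, and nDCG@10 reduces to 1/log2(rank + 1) averaged over queries. A small pure-Python sketch of these reductions (an illustration of the metric definitions, not the InformationRetrievalEvaluator actually used):

```python
import math

def metrics_single_relevant(ranks, k=10):
    """IR metrics when each query has exactly one relevant document.

    ranks: 1-based rank of the relevant document per query
           (None when it falls outside the retrieved list).
    """
    n = len(ranks)
    hits = [r for r in ranks if r is not None and r <= k]
    accuracy = len(hits) / n                            # == recall@k here
    precision = len(hits) / (k * n)                     # accuracy@k / k
    mrr = sum(1 / r for r in hits) / n
    ndcg = sum(1 / math.log2(r + 1) for r in hits) / n  # ideal DCG is 1
    return accuracy, precision, mrr, ndcg

# Toy example: relevant doc found at ranks 1 and 3, once not in the top 10
print(metrics_single_relevant([1, 3, None], k=10))
```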

Training Details

Training Dataset

json

  • Dataset: json
  • Size: 396 training samples
  • Columns: anchor and positive
  • Approximate statistics based on the first 396 samples:
    anchor:   string; min: 8 tokens, mean: 15.71 tokens, max: 28 tokens
    positive: string; min: 29 tokens, mean: 48.68 tokens, max: 82 tokens
  • Samples:
    anchor: What role does customer segmentation play in predictive analytics?
    positive: Customer segmentation within predictive analytics allows marketers to group customers based on similar characteristics. This helps in creating more targeted marketing strategies and predicting behavior patterns for each segment, improving overall campaign effectiveness.
    anchor: How has technographics evolved over the years to accommodate the digital space?
    positive: Initially focused on hardware and software usage, technographics has evolved to consider digital platforms and tools. It now investigates consumer behavior across different channels, devices, and even social media platforms to provide a more comprehensive consumer profile.
    anchor: Can you name some common methods of collecting technographic data?
    positive: Some common methods include surveys, interviews, online browsing behavior tracking, and direct observation. In addition, databases can be bought from vendors specializing in technographic data collection.
  • Loss: MatryoshkaLoss with these parameters:
    {
        "loss": "MultipleNegativesRankingLoss",
        "matryoshka_dims": [
            768,
            512,
            256,
            128,
            64
        ],
        "matryoshka_weights": [
            1,
            1,
            1,
            1,
            1
        ],
        "n_dims_per_step": -1
    }
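The configuration above reads as follows: the inner MultipleNegativesRankingLoss treats the i-th positive as the correct answer for the i-th anchor and every other positive in the batch as a negative, and MatryoshkaLoss sums that loss over truncated prefixes of the embeddings (uniform weights here). A rough numpy sketch of the math, not the library implementation; scale=20 mirrors what I believe is the library default, and the toy embeddings are synthetic:

```python
import numpy as np

def mnrl(anchors, positives, scale=20.0):
    """MultipleNegativesRankingLoss: in-batch softmax cross-entropy where
    the i-th positive is the label for the i-th anchor and all other
    positives in the batch serve as negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    scores = scale * (a @ p.T)                  # scaled cosine similarities
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))         # -log-softmax of the diagonal

def matryoshka_mnrl(anchors, positives, dims=(768, 512, 256, 128, 64)):
    """MatryoshkaLoss with uniform weights: apply the inner loss to each
    truncated prefix of the embeddings and sum the results."""
    return sum(mnrl(anchors[:, :d], positives[:, :d]) for d in dims)

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 768))                   # toy anchor embeddings
P = A + 0.1 * rng.normal(size=(8, 768))         # positives close to anchors
print(float(matryoshka_mnrl(A, P)))             # small: positives rank first
```

Because every prefix of the embedding is trained with the same ranking objective, truncated embeddings remain useful for retrieval, which is what the per-dimension evaluation tables above measure.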
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: epoch
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • gradient_accumulation_steps: 16
  • learning_rate: 2e-05
  • num_train_epochs: 10
  • lr_scheduler_type: cosine
  • warmup_ratio: 0.1
  • bf16: True
  • tf32: True
  • load_best_model_at_end: True
  • optim: adamw_torch_fused
  • batch_sampler: no_duplicates

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: epoch
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 16
  • eval_accumulation_steps: None
  • learning_rate: 2e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 10
  • max_steps: -1
  • lr_scheduler_type: cosine
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: True
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch_fused
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss dim_768_cosine_ndcg@10 dim_512_cosine_ndcg@10 dim_256_cosine_ndcg@10 dim_128_cosine_ndcg@10 dim_64_cosine_ndcg@10
1.0 1 - 0.4650 0.4667 0.4712 0.4371 0.4151
2.0 3 - 0.5316 0.5307 0.5051 0.4810 0.4407
3.0 5 - 0.5256 0.5222 0.5136 0.5104 0.4742
4.0 7 - 0.5316 0.5269 0.5120 0.5083 0.4790
5.0 9 - 0.5337 0.5280 0.5102 0.5101 0.4983
6.0 10 2.9453 0.5323 0.5280 0.5102 0.5097 0.4979
  • The bold row denotes the saved checkpoint.

Framework Versions

  • Python: 3.11.4
  • Sentence Transformers: 3.4.1
  • Transformers: 4.41.2
  • PyTorch: 2.1.2+cu121
  • Accelerate: 0.34.2
  • Datasets: 2.19.1
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MatryoshkaLoss

@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}