---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:6552
  - loss:MultipleNegativesRankingLoss
base_model: BAAI/bge-small-en-v1.5
widget:
  - source_sentence: >-
      What problem can reconfigurable intelligent surfaces mitigate in light
      fidelity systems?
    sentences:
      - >-
        The document mentions that blind channel estimation requires a large
        number of data symbols to improve accuracy, which may not be feasible in
        practice.
      - >-
        Empirical evidence suggests that the power decay can even be exponential
        with distance.
      - >-
        Reconfigurable intelligent surface-enabled environments can enhance
        light fidelity coverage by mitigating the dead-zone problem for users at
        the edge of the cell, improving link quality.
  - source_sentence: >-
      What is the advantage of conformal arrays in UAV (Unmanned Aerial Vehicle)
      communication systems?
    sentences:
      - >-
        Overfitting occurs when a model fits the training data too well and
        fails to generalize to unseen data, while underfitting occurs when a
        model does not fit the training data well enough to capture the
        underlying patterns.
      - >-
        A point-to-multipoint service is a service type in which data is sent to
        all service subscribers or a pre-defined subset of all subscribers
        within an area defined by the Service Requester.
      - >-
        Conformal arrays offer good aerodynamic performance, enable full-space
        beam scanning, and provide more DoFs for geometry design.
  - source_sentence: What is a Virtual Home Environment?
    sentences:
      - >-
        Compressive spectrum sensing utilizes the sparsity property of signals
        to enable sub-Nyquist sampling.
      - >-
        A Virtual Home Environment is a concept that allows for the portability
        of personal service environments across network boundaries and between
        terminals.
      - >-
        In the Client Server model, a Client application waits passively on
        contact while a Server starts the communication actively.
  - source_sentence: What is multi-agent RL (Reinforcement learning) concerned with?
    sentences:
      - >-
        Data centers account for about 1% of global electricity demand, as
        stated in the document.
      - >-
        Fog Computing and Communication in the Frugal 5G network architecture
        brings intelligence to the edge and enables more efficient communication
        with reduced resource usage.
      - >-
        Multi-agent RL is concerned with learning in presence of multiple agents
        and encompasses unique problem formulation that draws from game
        theoretical concepts.
  - source_sentence: >-
      What is the trade-off between privacy and convergence performance when
      using artificial noise obscuring in federated learning?
    sentences:
      - >-
        The 'decrypt_error' alert indicates a handshake cryptographic operation
        failed, including being unable to verify a signature, decrypt a key
        exchange, or validate a finished message.
      - >-
        The trade-off between privacy and convergence performance when using
        artificial noise obscuring in federated learning is that increasing the
        noise variance improves privacy but degrades convergence.
      - >-
        The design rules for sub-carrier allocations to users in cellular
        systems are to allocate the sub-carriers as spread out as possible and
        hop the sub-carriers every OFDM symbol time.
datasets:
  - dinho1597/Telecom-QA-MultipleChoice
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_recall@1
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
model-index:
  - name: SentenceTransformer based on BAAI/bge-small-en-v1.5
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: telecom ir eval
          type: telecom-ir-eval
        metrics:
          - type: cosine_accuracy@1
            value: 0.9679633867276888
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.9916094584286804
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.9916094584286804
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.992372234935164
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.9679633867276888
            name: Cosine Precision@1
          - type: cosine_recall@1
            value: 0.9679633867276888
            name: Cosine Recall@1
          - type: cosine_ndcg@10
            value: 0.9823240649953693
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.9788647342995168
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.9791402442094453
            name: Cosine Map@100
---

# SentenceTransformer based on BAAI/bge-small-en-v1.5

This is a sentence-transformers model finetuned from BAAI/bge-small-en-v1.5 on the telecom-qa-multiple_choice dataset. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** BAAI/bge-small-en-v1.5
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:** dinho1597/Telecom-QA-MultipleChoice

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://www.sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
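
The Pooling module keeps only the CLS token (`pooling_mode_cls_token: True`) and the final Normalize module L2-normalizes it, so cosine similarity between embeddings reduces to a dot product. A minimal sketch of the same three stages done by hand with 🤗 Transformers, using the base checkpoint to stand in for the fine-tuned weights (illustrative only):

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative: the base checkpoint stands in for the fine-tuned weights.
tokenizer = AutoTokenizer.from_pretrained("BAAI/bge-small-en-v1.5")
encoder = AutoModel.from_pretrained("BAAI/bge-small-en-v1.5")

batch = tokenizer(
    ["What is a Virtual Home Environment?"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state          # (0) Transformer
cls_embedding = token_embeddings[:, 0]                             # (1) Pooling: CLS token
embedding = torch.nn.functional.normalize(cls_embedding, p=2, dim=-1)  # (2) Normalize
print(embedding.shape)  # torch.Size([1, 384])
```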

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("dinho1597/phi-2-telecom-ft")
# Run inference
sentences = [
    'What is the trade-off between privacy and convergence performance when using artificial noise obscuring in federated learning?',
    'The trade-off between privacy and convergence performance when using artificial noise obscuring in federated learning is that increasing the noise variance improves privacy but degrades convergence.',
    "The 'decrypt_error' alert indicates a handshake cryptographic operation failed, including being unable to verify a signature, decrypt a key exchange, or validate a finished message.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
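
For retrieval, you typically embed a question and rank candidate passages by cosine similarity. A minimal sketch reusing the `model` loaded above; the corpus strings are illustrative, taken from the widget examples:

```python
# Rank a small corpus of telecom snippets against a query (illustrative data).
query = "What is the advantage of conformal arrays in UAV communication systems?"
corpus = [
    "Conformal arrays offer good aerodynamic performance and full-space beam scanning.",
    "Compressive spectrum sensing utilizes the sparsity property of signals.",
    "RSA is commonly used for digital signatures in S/MIME.",
]

query_embedding = model.encode([query])
corpus_embeddings = model.encode(corpus)

# model.similarity returns cosine similarities here (see the Normalize module above).
scores = model.similarity(query_embedding, corpus_embeddings)[0]
for idx in scores.argsort(descending=True).tolist():
    print(f"{float(scores[idx]):.4f}  {corpus[idx]}")
```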

## Evaluation

### Metrics

#### Information Retrieval

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.968  |
| cosine_accuracy@3   | 0.9916 |
| cosine_accuracy@5   | 0.9916 |
| cosine_accuracy@10  | 0.9924 |
| cosine_precision@1  | 0.968  |
| cosine_recall@1     | 0.968  |
| cosine_ndcg@10      | 0.9823 |
| cosine_mrr@10       | 0.9789 |
| cosine_map@100      | 0.9791 |
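
The metric names match the output of sentence-transformers' `InformationRetrievalEvaluator`. A toy sketch of how such an evaluation is wired up, reusing `model` from the usage snippet; the query, corpus, and ids below are illustrative, not the actual eval split:

```python
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Illustrative toy data; the real eval split is the telecom-qa-multiple_choice
# evaluation set described under "Training Details" below.
queries = {"q1": "What is a Virtual Home Environment?"}
corpus = {
    "d1": "A Virtual Home Environment allows personal service environments to move across network boundaries.",
    "d2": "Compressive spectrum sensing utilizes the sparsity property of signals.",
}
relevant_docs = {"q1": {"d1"}}  # which corpus ids are relevant to each query

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="telecom-ir-eval")
results = evaluator(model)  # dict of metrics, e.g. results["telecom-ir-eval_cosine_ndcg@10"]
print(results)
```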

## Training Details

### Training Dataset

#### telecom-qa-multiple_choice

- **Dataset:** telecom-qa-multiple_choice at revision `73aebbb`
- **Size:** 6,552 training samples
- **Columns:** `anchor` and `positive`
- **Approximate statistics** (based on the first 1,000 samples):

  |             | anchor | positive |
  |:------------|:-------|:---------|
  | type        | string | string   |
  | min tokens  | 4      | 8        |
  | mean tokens | 18.8   | 29.27    |
  | max tokens  | 48     | 92       |

- **Samples:**

  | anchor | positive |
  |:-------|:---------|
  | What is multi-user multiple input, multiple output (MU-MIMO) in IEEE 802.11-2020? | MU-MIMO is a technique by which multiple stations (STAs) either simultaneously transmit to a single STA or simultaneously receive from a single STA independent data streams over the same radio frequencies. |
  | What is the purpose of wireless network virtualization? | The purpose of wireless network virtualization is to improve resource utilization, support diverse services/use cases, and be cost-effective and flexible for new services. |
  | What is the E2E (end-to-end) latency requirement for factory automation applications? | Factory automation applications require an E2E latency of 0.25-10 ms. |

- **Loss:** `MultipleNegativesRankingLoss` (see the note after this list) with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```
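
For reference: given a batch of B (anchor, positive) pairs, `MultipleNegativesRankingLoss` treats each anchor's own positive as the target and every other positive in the batch as a negative, so with scale s = 20 and cosine similarity it is the cross-entropy

$$
\mathcal{L} = -\frac{1}{B}\sum_{i=1}^{B}\log\frac{\exp\big(s\cdot\cos(a_i, p_i)\big)}{\sum_{j=1}^{B}\exp\big(s\cdot\cos(a_i, p_j)\big)}
$$

This is why the large batch size (256) and the `no_duplicates` batch sampler listed under the training hyperparameters matter: each batch supplies 255 in-batch negatives per anchor, provided no anchor is duplicated.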
    

### Evaluation Dataset

#### telecom-qa-multiple_choice

- **Dataset:** telecom-qa-multiple_choice at revision `73aebbb`
- **Size:** 6,552 evaluation samples
- **Columns:** `anchor` and `positive`
- **Approximate statistics** (based on the first 1,000 samples):

  |             | anchor | positive |
  |:------------|:-------|:---------|
  | type        | string | string   |
  | min tokens  | 4      | 9        |
  | mean tokens | 18.5   | 28.83    |
  | max tokens  | 52     | 85       |

- **Samples:**

  | anchor | positive |
  |:-------|:---------|
  | Which standard enables building Digital Twins of different Physical Twins using combinations of XML (eXtensible Markup Language) and C codes? | The functional mockup interface (FMI) is a standard that enables building Digital Twins of different Physical Twins using combinations of XML and C codes. |
  | What algorithm is commonly used for digital signatures in S/MIME? | RSA is commonly used for digital signatures in S/MIME. |
  | What are the three modes of operation based on the communication range and the SA (subarray) separation? | The three modes of operation based on the communication range and the SA separation are: (1) a mode where the channel paths are independent and the channel is always well-conditioned, (2) a mode where the channel is ill-conditioned, and (3) a mode where the channel is highly correlated. |

- **Loss:** `MultipleNegativesRankingLoss` with these parameters:

  ```json
  {
      "scale": 20.0,
      "similarity_fct": "cos_sim"
  }
  ```
    

### Training Hyperparameters

#### Non-Default Hyperparameters

- eval_strategy: steps
- per_device_train_batch_size: 256
- per_device_eval_batch_size: 256
- weight_decay: 0.01
- num_train_epochs: 10
- lr_scheduler_type: cosine_with_restarts
- warmup_ratio: 0.1
- fp16: True
- load_best_model_at_end: True
- batch_sampler: no_duplicates
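
Assuming the Hub dataset `dinho1597/Telecom-QA-MultipleChoice` exposes the `anchor`/`positive` columns described above, a minimal sketch of a training script that applies these settings with the sentence-transformers v3 trainer; the output directory and the 90/10 split are illustrative, not taken from the card:

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("BAAI/bge-small-en-v1.5")

# anchor/positive pairs as described above; the 90/10 split is illustrative.
dataset = load_dataset("dinho1597/Telecom-QA-MultipleChoice", split="train")
dataset = dataset.train_test_split(test_size=0.1, seed=42)

# scale=20.0 with the default cosine similarity, as listed above.
loss = MultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="bge-small-telecom-ft",  # illustrative path
    num_train_epochs=10,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    weight_decay=0.01,
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.1,
    fp16=True,
    eval_strategy="steps",
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # no duplicate anchors per batch
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    loss=loss,
)
trainer.train()
```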

#### All Hyperparameters

<details><summary>Click to expand</summary>

- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 256
- per_device_eval_batch_size: 256
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.01
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 10
- max_steps: -1
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional

</details>

### Training Logs

| Epoch  | Step | Training Loss | Validation Loss | telecom-ir-eval_cosine_ndcg@10 |
|:------:|:----:|:-------------:|:---------------:|:------------------------------:|
| 0.7143 | 15   | 0.824         | 0.1333          | 0.9701                         |
| 1.3810 | 30   | 0.1731        | 0.0759          | 0.9776                         |
| 2.0476 | 45   | 0.0917        | 0.0657          | 0.9807                         |
| 2.7619 | 60   | 0.0676        | 0.0609          | 0.9813                         |
| 3.4286 | 75   | 0.0435        | 0.0596          | 0.9818                         |
| 4.0952 | 90   | 0.038         | 0.0606          | 0.9814                         |
| 4.8095 | 105  | 0.0332        | 0.0594          | 0.9820                         |
| 5.4762 | 120  | 0.0269        | 0.0607          | 0.9817                         |
| 6.1429 | 135  | 0.0219        | 0.0600          | 0.9819                         |
| 6.8571 | 150  | 0.0244        | 0.0599          | 0.9823                         |

### Framework Versions

- Python: 3.10.12
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu121
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```