# BGE base Financial Matryoshka

This is a sentence-transformers model fine-tuned from BAAI/bge-base-en-v1.5 on the json dataset. It maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** BAAI/bge-base-en-v1.5
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:** json
- **Language:** en
- **License:** apache-2.0
### Model Sources

- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
### Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
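The Pooling module above uses the `[CLS]` token (`pooling_mode_cls_token: True`) rather than mean pooling, and `Normalize()` L2-normalizes the result so that a dot product between two sentence vectors equals their cosine similarity. As an illustration only (the helper name below is hypothetical, not part of the library), the last two modules amount to this NumPy operation:

```python
import numpy as np

def cls_pool_and_normalize(token_embeddings: np.ndarray) -> np.ndarray:
    """Mimic the Pooling (CLS) + Normalize modules above.

    token_embeddings: (seq_len, 768) output of the Transformer module.
    Returns a unit-length 768-dimensional sentence vector.
    """
    cls = token_embeddings[0]          # pooling_mode_cls_token: keep the [CLS] vector
    return cls / np.linalg.norm(cls)   # Normalize(): L2-normalize so dot == cosine

# Toy check with random values standing in for real token embeddings
rng = np.random.default_rng(0)
emb = cls_pool_and_normalize(rng.normal(size=(12, 768)))
print(emb.shape)  # (768,)
```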
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

`pip install -U sentence-transformers`

Then you can load this model and run inference:
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("shivamsharma1967/_bge-base-financial-matryoshka_")
# Run inference
sentences = [
'Pension and postretirement health care and life insurance benefits earned during the year, as well as interest on projected benefit obligations, \nare accrued.\nfor assets and liabilities. We record these translation adjustments in Accumulated other comprehensive loss, a separate component of Equity, \nin our consolidated balance sheets. We record exchange gains and losses resulting from the conversion of transaction currency to functional \ncurrency as a component of Other income (expense), net. \nEmployee Benefit Plans \nPension and postretirement health care and life insurance benefits earned during the year, as well as interest on projected benefit obligations, \nare accrued. Prior service costs and credits resulting from changes in plan benefits are generally amortized over the average remaining service \nperiod of the employees expected to receive benefits. Expected return on plan assets is determined by applying the return on assets \nassumption to the actual fair value of plan assets. Actuarial gains and losses are recognized in Other income (expense), net in the year in \nwhich they occur. These gains and losses are measured annually as of December 31 or upon a remeasurement event. Verizon management \nemployees no longer earn pension benefits or earn service towards the Company retiree medical subsidy. See Note 11 for additional \ninformation. \nWe recognize a pension or a postretirement plans funded status as either an asset or liability in the consolidated balance sheets. Also, we \nmeasure any unrecognized prior service costs and credits that arise during the period as a component of Accumulated other comprehensive \nincome, net of applicable income tax. \nDerivative Instruments \nWe enter into derivative transactions primarily to manage our exposure to fluctuations in foreign currency exchange rates and interest rates. 
\nWe employ risk management strategies, which may include the use of a variety of derivatives including cross currency swaps, forward \nstarting interest rate swaps, interest rate swaps, treasury rate locks, interest rate caps and foreign exchange forwards. We do not hold \nderivatives for trading purposes. \nWe measure all derivatives at fair value and recognize them as either assets or liabilities in our consolidated balance sheets. Our derivative \ninstruments are valued primarily using models based on readily observable market parameters for all substantial terms of our derivative \ncontracts and thus are classified as Level 2. Changes in the fair values of derivative instruments applied as economic hedges are recognized in \nearnings in the current period. For fair value hedges, the change in the fair value of the derivative instruments is recognized in earnings, along \nwith the change in the fair value of the hedged item. For cash flow hedges, the change in the fair value of the derivative instruments is \nreported in Other comprehensive income (loss) and recognized in earnings when the hedged item is recognized in earnings. For net \ninvestment hedges of certain of our foreign operations, the change in the fair value of the hedging instruments is reported in Other \ncomprehensive income (loss) as part of the cumulative translation adjustment and partially offsets the impact of foreign currency changes on \nthe value of our net investment. \nCash flows from derivatives, which are designated as accounting hedges or applied as economic hedges, are presented consistently with the \ncash flow classification of the related hedged items. See Note 9 for additional information. 
\nVariable Interest Entities \nVIEs are entities that lack sufficient equity to permit the entity to finance its activities without additional subordinated financial support from \nother parties, have equity investors that do not have the ability to make significant decisions relating to the entitys operations through voting \nrights, do not have the obligation to absorb the expected losses, or do not have the right to receive the residual returns of the entity. We \nconsolidate the assets and liabilities of VIEs when we are deemed to be the primary beneficiary. The primary beneficiary is the party that has \nthe power to make the decisions that most significantly affect the economic performance of the VIE and has the obligation to absorb losses or \nthe right to receive benefits that could potentially be significant to the VIE.\n63\nVerizon 2021 Annual Report on Form 10-K\n\nEstimated Future Benefit Payments \nThe benefit payments to retirees are expected to be paid as follows: \n(dollars in millions) \nYear\nPension Benefits \nHealth Care and Life \n2022\n$ \n2,049 \n$ \n906 \n2023\n1,648 \n883 \n2024\n1,097 \n862 \n2025\n1,066 \n850 \n2026\n1,034 \n840 \n2027 to 2031\n5,097 \n4,139\nfair value is measured using the NAV per share as a practical expedient are not leveled within the fair value hierarchy but are included in total \ninvestments. \nEmployer Contributions \nIn 2021, we made no discretionary contribution to our qualified pension plans, $58 million of contributions to our nonqualified pension plans \nand $885 million of contributions to our other postretirement benefit plans. No qualified pension plans contributions are expected to be made \nin 2022. Nonqualified pension plans contributions are estimated to be approximately $60 million and contributions to our other postretirement \nbenefit plans are estimated to be approximately $860 million in 2022. 
\nEstimated Future Benefit Payments \nThe benefit payments to retirees are expected to be paid as follows: \n(dollars in millions) \nYear\nPension Benefits \nHealth Care and Life \n2022\n$ \n2,049 \n$ \n906 \n2023\n1,648 \n883 \n2024\n1,097 \n862 \n2025\n1,066 \n850 \n2026\n1,034 \n840 \n2027 to 2031\n5,097 \n4,139 \nSavings Plan and Employee Stock Ownership Plans \nWe maintain four leveraged employee stock ownership plans (ESOP). We match a certain percentage of eligible employee contributions to \ncertain savings plans with shares of our common stock from this ESOP. At December 31, 2021, the number of allocated shares of common \nstock in this ESOP was 44 million. There were no unallocated shares of common stock in this ESOP at December 31, 2021. All leveraged \nESOP shares are included in earnings per share computations. \nTotal savings plan costs were $690 million in 2021, $730 million in 2020 and $897 million in 2019. \nSeverance Benefits \nThe following table provides an analysis of our severance liability: \n(dollars in millions) \nYear \nBeginning of \nYear \nCharged to \nExpense\nPayments\nOther\nEnd of Year \n2019\n$ \n2,156 \n$ \n260 \n$ \n(1,847) $ \n(4) $\n565 \n2020\n565 \n309 \n(248)\n(24)\n602 \n2021\n602 \n233 \n(258)\n(29)\n548 \nSeverance, Pension and Benefits (Credits) Charges \nDuring 2021, in accordance with our accounting policy to recognize actuarial gains and losses in the period in which they occur, we recorded \nnet pre-tax pension and benefits credits of $2.4 billion in our pension and postretirement benefit plans. 
The credits were recorded in Other \nincome (expense), net in our consolidated statement of income and were primarily driven by a credit of $1.1 billion due to an increase in our \ndiscount rate assumption used to determine the current year liabilities of our pension plans and postretirement benefit plans from a weighted-\naverage of 2.6% at December 31, 2020 to a weighted-average of 2.9% at December 31, 2021, a credit of $847 million due to the difference \nbetween our estimated and our actual return on assets and a credit of $453 million due to other actuarial assumption adjustments. During \n2021, we also recorded net pre-tax severance charges of $233 million in Selling, general and administrative expense in our consolidated \nstatements of income. \nDuring 2020, we recorded net pre-tax pension and benefits charges of $1.6 billion in our pension and postretirement benefit plans. The \ncharges were recorded in Other income (expense), net in our consolidated statement of income and were primarily driven by a charge of \n$3.2 billion due to a decrease in our discount rate assumption used to determine the current year liabilities of our pension plans and \npostretirement benefit plans from a weighted-average of 3.3% at December 31, 2019 to a weighted-average of 2.6% at December 31, 2020, \npartially offset by a credit of $1.6 billion due to the difference between our estimated and our actual return on assets. During 2020, we also \nrecorded net pre-tax severance charges of $309 million in Selling, general and administrative expense in our consolidated statements of \nincome. \nDuring 2019, we recorded net pre-tax pension and benefits charges of $126 million in our pension and postretirement benefit plans. 
The \ncharges were recorded in Other income (expense), net in our consolidated statement of income and were primarily driven by a charge of \n$4.3 billion due to a decrease in our discount rate assumption used to determine the current year liabilities of our pension plans and \npostretirement benefits plans from a weighted-average of 4.4% at December 31, 2018 to a weighted-average of 3.3% at December 31, 2019, \npartially offset by a credit of $2.3 billion due to the difference between our estimated return on assets and our actual return on assets and a \n94\nVerizon 2021 Annual Report on Form 10-K',
'As of FY 2021, how much did Verizon expect to pay for its retirees in 2024?',
"What was the largest liability in American Express's Balance Sheet in 2022?",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
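Because this model was trained with MatryoshkaLoss at dimensions 768, 512, 256, 128 and 64, its embeddings can be truncated to one of those prefix lengths and re-normalized with only a modest loss of retrieval quality. Recent sentence-transformers releases expose this via the `truncate_dim` argument of `SentenceTransformer`; the hypothetical helper below sketches the same operation in plain NumPy on already-computed embeddings:

```python
import numpy as np

def truncate_embeddings(embeddings: np.ndarray, dim: int) -> np.ndarray:
    """Keep only the first `dim` components of each embedding and
    re-normalize, so a dot product is once again a cosine similarity."""
    cut = embeddings[:, :dim]
    return cut / np.linalg.norm(cut, axis=1, keepdims=True)

# Stand-in for model.encode(...) output: random unit vectors
rng = np.random.default_rng(0)
full = rng.normal(size=(3, 768))
full /= np.linalg.norm(full, axis=1, keepdims=True)

small = truncate_embeddings(full, 256)
print(small.shape)               # (3, 256)
print((small @ small.T).shape)   # (3, 3) cosine-similarity matrix
```

An index built from 256-dimensional vectors is a third the size of the full 768-dimensional one, which is the main practical payoff of Matryoshka training.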
## Evaluation

### Metrics

#### Information Retrieval

- Datasets: `dim_768`, `dim_512`, `dim_256`, `dim_128` and `dim_64`
- Evaluated with `InformationRetrievalEvaluator`
Metric | dim_768 | dim_512 | dim_256 | dim_128 | dim_64 |
---|---|---|---|---|---|
cosine_accuracy@1 | 0.4667 | 0.4667 | 0.4667 | 0.4667 | 0.4667 |
cosine_accuracy@3 | 0.7333 | 0.7333 | 0.8 | 0.8 | 0.8667 |
cosine_accuracy@5 | 0.7333 | 0.8 | 0.8 | 0.8 | 0.8667 |
cosine_accuracy@10 | 0.8667 | 0.8667 | 0.8667 | 0.9333 | 0.8667 |
cosine_precision@1 | 0.4667 | 0.4667 | 0.4667 | 0.4667 | 0.4667 |
cosine_precision@3 | 0.2444 | 0.2444 | 0.2667 | 0.2667 | 0.2889 |
cosine_precision@5 | 0.1467 | 0.16 | 0.16 | 0.16 | 0.1733 |
cosine_precision@10 | 0.0867 | 0.0867 | 0.0867 | 0.0933 | 0.0867 |
cosine_recall@1 | 0.4667 | 0.4667 | 0.4667 | 0.4667 | 0.4667 |
cosine_recall@3 | 0.7333 | 0.7333 | 0.8 | 0.8 | 0.8667 |
cosine_recall@5 | 0.7333 | 0.8 | 0.8 | 0.8 | 0.8667 |
cosine_recall@10 | 0.8667 | 0.8667 | 0.8667 | 0.9333 | 0.8667 |
cosine_ndcg@10 | 0.6568 | 0.6654 | 0.6875 | 0.7113 | 0.6754 |
cosine_mrr@10 | 0.5919 | 0.6011 | 0.6289 | 0.64 | 0.6111 |
cosine_map@100 | 0.5969 | 0.6069 | 0.6367 | 0.6424 | 0.6186 |
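Each anchor in the evaluation set has exactly one relevant positive, which explains the regularities in the table: accuracy@k equals recall@k, and precision@k is simply accuracy@k divided by k (for example, 0.7333 / 3 ≈ 0.2444 at dim_768). The helper below is a hypothetical sketch, not the `InformationRetrievalEvaluator` implementation, of how all of these metrics fall out of the 1-based rank of the single relevant document per query:

```python
import numpy as np

def ir_metrics_single_relevant(ranks, k=10):
    """IR metrics when every query has exactly one relevant document.

    `ranks` holds the 1-based rank of that document per query; use any
    rank > k for queries where it falls outside the top-k.
    """
    r = np.asarray(ranks, dtype=float)
    hit = r <= k
    accuracy = hit.mean()                                    # == recall@k here
    precision = accuracy / k                                 # one relevant doc
    mrr = np.where(hit, 1.0 / r, 0.0).mean()
    ndcg = np.where(hit, 1.0 / np.log2(r + 1), 0.0).mean()   # ideal DCG == 1
    return accuracy, precision, mrr, ndcg

acc, prec, mrr, ndcg = ir_metrics_single_relevant([1, 2, 3, 11], k=10)
print(acc, prec)  # 0.75 0.075
```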
## Training Details

### Training Dataset

#### json

- Dataset: json
- Size: 135 training samples
- Columns: `positive` and `anchor`
- Approximate statistics based on the first 135 samples:

| | positive | anchor |
|---|---|---|
| type | string | string |
| details | min: 359 tokens, mean: 507.28 tokens, max: 512 tokens | min: 11 tokens, mean: 39.07 tokens, max: 175 tokens |
- Samples:

| positive | anchor |
|---|---|
| Walmart Inc. Consolidated Statements of Income, Fiscal Years Ended January 31 (amounts in millions, except per share data), 2020 / 2019 / 2018 — Revenues: Net sales $519,926 / $510,329 / $495,761; Membership and other income 4,038 / 4,076 / 4,582; Total revenues 523,964 / 514,405 / 500,343; Costs and expenses: Cost of sales 394,605 / 385,301 / 373,396; Operating, selling, general and administrative expenses 108,791 / 107,147 / 106,510; Operating income 20,568 / 21,957 / 20,437; Interest: Debt 2,262 / 1,975 / 1,978; Finance, capital lease and financing obligations 337 / 371 / 352; Interest income (189) / (217) / (152); Interest, net 2,410 / 2,129 / 2,178; Loss on extinguishment of debt 3,136; Other (gains) and losses (1,958) / 8,368; Income before income taxes 20,116 / 11,460 / 15,123; Provision for income taxes 4,915 / 4,281 / 4,600; Consolidated net income 15,201 / 7,179 / 10,523; Consolidated net income attributable to noncontrolling interest (320) / (509... | What is the FY2018 - FY2020 3 year average unadjusted EBITDA % margin for Walmart? Define unadjusted EBITDA as unadjusted operating income + depreciation and amortization from the cash flow statement. Answer in units of percents and round to one decimal place. Calculate what was asked by utilizing the line items clearly shown in the P&L statement and the cash flow statement. |
| Analysis of Consolidated Earnings Before Provision for Taxes on Income — Consolidated earnings before provision for taxes on income was $21.7 billion and $22.8 billion for the years 2022 and 2021, respectively. As a percent to sales, consolidated earnings before provision for taxes on income was 22.9% and 24.3%, in 2022 and 2021, respectively. (Dollars in billions. Percentages in chart are as a percent to total sales.) Cost of Products Sold and Selling, Marketing and Administrative Expenses: Cost of products sold increased as a percent to sales driven by: one-time COVID-19 vaccine manufacturing exit related costs; currency impacts in the Pharmaceutical segment; commodity inflation in the MedTech and Consumer Health segments; partially offset by supply chain benefits in the Consumer Health segment. The intangible asset amortization expense included in cost of products sold was $4.3 billion and $4.7 billion for the ... | What drove gross margin change as of FY2022 for JnJ? If gross margin is not a useful metric for a company like this, then please state that and explain why. |
| (Millions) United States / EMEA / APAC / LACC / Other Unallocated / Consolidated — 2022: Total revenues net of interest expense $41,396 / $4,871 / $3,835 / $2,917 / $(157) / $52,862; Pretax income (loss) from continuing operations 10,383 / 550 / 376 / 500 / (2,224) / 9,585. 2021: Total revenues net of interest expense $33,103 / $3,643 / $3,418 / $2,238 / $(22) / $42,380; Pretax income (loss) from continuing operations 10,325 / 460 / 420 / 494 / (1,010) / 10,689. 2020: Total revenues net of interest expense $28,263 / $3,087 / $3,271 / $2,019 / $(553) / $36,087; Pretax income (loss) from continuing operations 5,422 / 187 / 328 / 273 / (1,914) / 4,296. GEOGRAPHIC OPERATIONS — The following table presents our total revenues net of interest expense and pretax income (loss) from continuing operations in different geographic regions based, in part, upon internal allocations, which necessarily involve management's judgment. Effective for the first quarter of 2022, we changed the way in which we allocate certain ... | What are the geographies that American Express primarily operates in as of 2022? |
- Loss: `MatryoshkaLoss` with these parameters: `{ "loss": "MultipleNegativesRankingLoss", "matryoshka_dims": [768, 512, 256, 128, 64], "matryoshka_weights": [1, 1, 1, 1, 1], "n_dims_per_step": -1 }`
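MatryoshkaLoss wraps a base loss (here MultipleNegativesRankingLoss) and applies it to truncated prefixes of the embeddings, summing the per-dimension losses with the given weights. The following is a rough NumPy sketch of the idea, not the library implementation; `mnrl` is a simplified in-batch ranking loss, and the scale of 20 mirrors the library's default:

```python
import numpy as np

def mnrl(anchors: np.ndarray, positives: np.ndarray, scale: float = 20.0) -> float:
    """Simplified in-batch MultipleNegativesRankingLoss: cross-entropy over
    the scaled cosine-similarity matrix, where row i's positive is column i
    and every other column serves as an in-batch negative."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = scale * (a @ p.T)                       # (batch, batch)
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

def matryoshka_loss(anchors, positives,
                    dims=(768, 512, 256, 128, 64),
                    weights=(1, 1, 1, 1, 1)) -> float:
    """Sum the weighted base loss over truncated prefixes of the embeddings."""
    return sum(w * mnrl(anchors[:, :d], positives[:, :d])
               for d, w in zip(dims, weights))

rng = np.random.default_rng(0)
a, p = rng.normal(size=(8, 768)), rng.normal(size=(8, 768))
print(matryoshka_loss(a, p) > matryoshka_loss(a, a))  # True: matched pairs score lower
```

Training every prefix this way is what makes the truncated embeddings discussed in the Usage section remain useful on their own.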
### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 4
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: False
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates
#### All Hyperparameters

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 4
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: False
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
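With `lr_scheduler_type: cosine` and `warmup_ratio: 0.1`, the learning rate climbs linearly over the first ~10% of optimizer steps and then decays along a cosine curve to zero. The sketch below mirrors the shape of the cosine-with-warmup schedule in the `transformers` library (it is not the library code; the 36 total steps are taken from the training logs):

```python
import math

def lr_at_step(step: int, total_steps: int = 36,
               base_lr: float = 5e-05, warmup_ratio: float = 0.1) -> float:
    """Linear warmup followed by cosine decay to zero."""
    warmup_steps = int(total_steps * warmup_ratio)  # 3 steps here
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(lr_at_step(3))   # 5e-05: peak LR right after warmup
print(lr_at_step(36))  # 0.0: fully decayed at the end of training
```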
### Training Logs
Epoch | Step | Training Loss | dim_768_cosine_ndcg@10 | dim_512_cosine_ndcg@10 | dim_256_cosine_ndcg@10 | dim_128_cosine_ndcg@10 | dim_64_cosine_ndcg@10 |
---|---|---|---|---|---|---|---|
0 | 0 | - | 0.6632 | 0.6120 | 0.5673 | 0.5358 | 0.4391 |
1.0 | 9 | - | 0.6499 | 0.6759 | 0.6894 | 0.6436 | 0.5923 |
1.1111 | 10 | 5.3139 | - | - | - | - | - |
2.0 | 18 | - | 0.6462 | 0.6730 | 0.7133 | 0.6561 | 0.6601 |
2.2222 | 20 | 1.6581 | - | - | - | - | - |
3.0 | 27 | - | 0.6612 | 0.6930 | 0.7113 | 0.7162 | 0.7075 |
3.3333 | 30 | 1.1123 | - | - | - | - | - |
4.0 | 36 | - | 0.6658 | 0.6930 | 0.7133 | 0.7162 | 0.7075 |
1.0 | 9 | - | 0.6814 | 0.6590 | 0.7121 | 0.7068 | 0.6836 |
1.1111 | 10 | 0.577 | - | - | - | - | - |
2.0 | 18 | - | 0.6322 | 0.6625 | 0.7068 | 0.6788 | 0.6749 |
2.2222 | 20 | 0.3614 | - | - | - | - | - |
3.0 | 27 | - | 0.6322 | 0.6654 | 0.6875 | 0.7113 | 0.6708 |
3.3333 | 30 | 0.395 | - | - | - | - | - |
**4.0** | **36** | **-** | **0.6568** | **0.6654** | **0.6875** | **0.7113** | **0.6754** |
- The bold row denotes the saved checkpoint.
## Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
## Citation

### BibTeX

#### Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
#### MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
#### MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}