SentenceTransformer based on TaylorAI/bge-micro-v2

This is a sentence-transformers model finetuned from TaylorAI/bge-micro-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: TaylorAI/bge-micro-v2
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
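
The same architecture can be assembled explicitly from its two modules. A minimal sketch using the sentence_transformers.models API (loading the published checkpoint, as shown under Usage below, is the normal route):

from sentence_transformers import SentenceTransformer, models

# Token encoder: the BERT-based TaylorAI/bge-micro-v2 backbone, truncating at 512 tokens
word_embedding_model = models.Transformer("TaylorAI/bge-micro-v2", max_seq_length=512)

# Mean pooling over token embeddings produces the 384-dimensional sentence vector
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 384
    pooling_mode="mean",
)

model = SentenceTransformer(modules=[word_embedding_model, pooling_model])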

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("pedro-thenewsroom/climate-misinfo-embed")
# Run inference
sentences = [
    'Founded by some 30 leaders of the Christian Right, the Alliance Defending Freedom is a legal advocacy and training group that has supported the recriminalization of sexual acts between consenting LGBTQ adults in the U.S. and criminalization abroad; has defended state-sanctioned sterilization of trans people abroad; has contended that LGBTQ people are more likely to engage in pedophilia; and claims that a ‘homosexual agenda’ will destroy Christianity and society. ADF also works to develop “religious liberty” legislation and case law that will allow the denial of goods and services to LGBTQ people on the basis of religion. Since the election of President Trump, ADF has become one of the most influential groups informing the administration’s attack on LGBTQ rights.',
    'Fossil fuels have powered centuries of progress, lifted billions out of poverty, and remain the backbone of global energy, while alternatives, though promising, cannot yet match their scale, reliability, or affordability.',
    'Climate change is nothing more than a fabricated agenda pushed by corrupt elites, politicians, and scientists to control the masses, gain wealth, and suppress freedom.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
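
Judging from the training data below, the model is tuned to score how closely a passage matches climate-misinformation claim templates, so one natural use is ranking candidate claims against a passage. A minimal sketch (the passage is illustrative; the claim strings are taken from the training samples below):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("pedro-thenewsroom/climate-misinfo-embed")

passage = "There have been times on earth when it has been much warmer than today."
claims = [
    "The Earth's climate has always changed due to natural cycles and external factors, and the role of human activity or CO2 emissions in driving these changes is negligible or unsupported by evidence.",
    "The so-called consensus on climate change relies on flawed models, manipulated data, and a refusal to address legitimate scientific uncertainties, all to serve a predetermined political narrative.",
]

# Rank the candidate claims by cosine similarity to the passage
passage_embedding = model.encode([passage])
claim_embeddings = model.encode(claims)
scores = model.similarity(passage_embedding, claim_embeddings)  # shape [1, 2]
best = scores.argmax().item()
print(claims[best], scores[0, best].item())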

Training Details

Training Dataset

Unnamed Dataset

  • Size: 53,963 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 1000 samples:
    • sentence_0 (string): min 7 tokens, mean 64.1 tokens, max 512 tokens
    • sentence_1 (string): min 32 tokens, mean 38.4 tokens, max 44 tokens
    • label (float): min 0.0, mean 0.09, max 1.0
  • Samples:
    • sentence_0: To that end, we have been working on the Murdoch press of late, with good initial results.
      sentence_1: The so-called consensus on climate change relies on flawed models, manipulated data, and a refusal to address legitimate scientific uncertainties, all to serve a predetermined political narrative.
      label: 0.0
    • sentence_0: Scientists who dare question the almost religious belief in climate change, and yes, they do exist, are ignored or undermined in news reports as are policy makers and pundits who take similar views.
      sentence_1: The Earth's climate has always changed due to natural cycles and external factors, and the role of human activity or CO2 emissions in driving these changes is negligible or unsupported by evidence.
      label: 0.0
    • sentence_0: What about ‘global warming?’ What matters is the degree and rate of change. There have been times on earth when it has been much warmer than today, and times when it’s been much colder. The latter are called ice ages. One of the former is called ‘The Climate Optimum.’ It was a time of higher average global temperature and high CO2.
      sentence_1: The Earth's climate has always changed due to natural cycles and external factors, and the role of human activity or CO2 emissions in driving these changes is negligible or unsupported by evidence.
      label: 1.0
  • Loss: CosineSimilarityLoss with these parameters:
    {
        "loss_fct": "torch.nn.modules.loss.MSELoss"
    }
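
A minimal sketch of how a dataset in this format and the loss above could be wired together (the data rows here are placeholders; the column names match the dataset description above):

from datasets import Dataset
from torch import nn
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("TaylorAI/bge-micro-v2")

# Placeholder rows in the same (sentence_0, sentence_1, label) format as the training data
train_dataset = Dataset.from_dict({
    "sentence_0": ["A passage of text ..."],
    "sentence_1": ["A claim template ..."],
    "label": [0.0],
})

# CosineSimilarityLoss pushes the cosine similarity of the two embeddings
# toward the float label, here with MSE as the underlying loss_fct
train_loss = losses.CosineSimilarityLoss(model, loss_fct=nn.MSELoss())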
    

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 20
  • multi_dataset_batch_sampler: round_robin
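
A minimal sketch of how these non-default values map onto the Sentence Transformers v3 training API, reusing model, train_dataset, and train_loss from the loss sketch above (output_dir is a placeholder):

from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output/climate-misinfo-embed",  # placeholder path
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=20,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=train_loss,
)
trainer.train()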

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 20
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step Training Loss
0.1482 500 0.2358
0.2965 1000 0.0696
0.4447 1500 0.0618
0.5929 2000 0.0597
0.7412 2500 0.0586
0.8894 3000 0.0549
1.0377 3500 0.0587
1.1859 4000 0.0549
1.3341 4500 0.0521
1.4824 5000 0.0504
1.6306 5500 0.0501
1.7788 6000 0.0489
1.9271 6500 0.0493
2.0753 7000 0.0456
2.2235 7500 0.0398
2.3718 8000 0.0416
2.5200 8500 0.0411
2.6682 9000 0.0396
2.8165 9500 0.0373
2.9647 10000 0.04
3.1130 10500 0.0319
3.2612 11000 0.0325
3.4094 11500 0.0284
3.5577 12000 0.0292
3.7059 12500 0.0302
3.8541 13000 0.0287
4.0024 13500 0.0287
4.1506 14000 0.0205
4.2988 14500 0.0204
4.4471 15000 0.023
4.5953 15500 0.0223
4.7436 16000 0.0214
4.8918 16500 0.0208
5.0400 17000 0.0186
5.1883 17500 0.0133
5.3365 18000 0.0148
5.4847 18500 0.0131
5.6330 19000 0.0151
5.7812 19500 0.0135
5.9294 20000 0.0151
6.0777 20500 0.0108
6.2259 21000 0.0095
6.3741 21500 0.0088
6.5224 22000 0.01
6.6706 22500 0.0113
6.8189 23000 0.0122
6.9671 23500 0.0091
7.1153 24000 0.007
7.2636 24500 0.0076
7.4118 25000 0.0072
7.5600 25500 0.007
7.7083 26000 0.0079
7.8565 26500 0.0064
8.0047 27000 0.0078
8.1530 27500 0.0053
8.3012 28000 0.0054
8.4495 28500 0.0046
8.5977 29000 0.0046
8.7459 29500 0.0055
8.8942 30000 0.0046
9.0424 30500 0.0039
9.1906 31000 0.0043
9.3389 31500 0.0036
9.4871 32000 0.004
9.6353 32500 0.0034
9.7836 33000 0.0034
9.9318 33500 0.0036
10.0800 34000 0.0033
10.2283 34500 0.0024
10.3765 35000 0.0023
10.5248 35500 0.0031
10.6730 36000 0.0033
10.8212 36500 0.0031
10.9695 37000 0.0033
11.1177 37500 0.0021
11.2659 38000 0.002
11.4142 38500 0.0021
11.5624 39000 0.0024
11.7106 39500 0.0023
11.8589 40000 0.0018
12.0071 40500 0.0034
12.1554 41000 0.0019
12.3036 41500 0.0016
12.4518 42000 0.0017
12.6001 42500 0.0016
12.7483 43000 0.0015
12.8965 43500 0.0018
13.0448 44000 0.0017
13.1930 44500 0.0013
13.3412 45000 0.0016
13.4895 45500 0.0012
13.6377 46000 0.0016
13.7859 46500 0.0019
13.9342 47000 0.0018
14.0824 47500 0.0014
14.2307 48000 0.0019
14.3789 48500 0.0017
14.5271 49000 0.0009
14.6754 49500 0.0009
14.8236 50000 0.0009
14.9718 50500 0.0018
15.1201 51000 0.0014
15.2683 51500 0.0012
15.4165 52000 0.0012
15.5648 52500 0.001
15.7130 53000 0.0014
15.8613 53500 0.0018
16.0095 54000 0.0014
16.1577 54500 0.0011
16.3060 55000 0.001
16.4542 55500 0.0009
16.6024 56000 0.0013
16.7507 56500 0.0015
16.8989 57000 0.0011
17.0471 57500 0.0007
17.1954 58000 0.0007
17.3436 58500 0.001
17.4918 59000 0.0011
17.6401 59500 0.0011
17.7883 60000 0.001
17.9366 60500 0.0012
18.0848 61000 0.001
18.2330 61500 0.0007
18.3813 62000 0.0009
18.5295 62500 0.001
18.6777 63000 0.0009
18.8260 63500 0.0011
18.9742 64000 0.0007
19.1224 64500 0.0012
19.2707 65000 0.0005
19.4189 65500 0.0008
19.5672 66000 0.001
19.7154 66500 0.0009
19.8636 67000 0.001

Framework Versions

  • Python: 3.9.6
  • Sentence Transformers: 3.4.1
  • Transformers: 4.48.2
  • PyTorch: 2.7.0.dev20250131
  • Accelerate: 1.3.0
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}