---
language:
  - en
tags:
  - sentence-transformers
  - cross-encoder
  - text-classification
  - generated_from_trainer
  - dataset_size:942069
  - loss:CrossEntropyLoss
base_model: distilbert/distilroberta-base
datasets:
  - sentence-transformers/all-nli
pipeline_tag: text-classification
library_name: sentence-transformers
metrics:
  - f1_macro
  - f1_micro
  - f1_weighted
co2_eq_emissions:
  emissions: 5.804161792857238
  energy_consumed: 0.01493216343846247
  source: codecarbon
  training_type: fine-tuning
  on_cloud: false
  cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
  ram_total_size: 31.777088165283203
  hours_used: 0.058
  hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
  - name: CrossEncoder based on distilbert/distilroberta-base
    results:
      - task:
          type: cross-encoder-classification
          name: Cross Encoder Classification
        dataset:
          name: AllNLI dev
          type: AllNLI-dev
        metrics:
          - type: f1_macro
            value: 0.8495346395196971
            name: F1 Macro
          - type: f1_micro
            value: 0.851
            name: F1 Micro
          - type: f1_weighted
            value: 0.8494545162410544
            name: F1 Weighted
      - task:
          type: cross-encoder-classification
          name: Cross Encoder Classification
        dataset:
          name: AllNLI test
          type: AllNLI-test
        metrics:
          - type: f1_macro
            value: 0.7574494684363943
            name: F1 Macro
          - type: f1_micro
            value: 0.7575803825803826
            name: F1 Micro
          - type: f1_weighted
            value: 0.7582587136974347
            name: F1 Weighted
---

# CrossEncoder based on distilbert/distilroberta-base

This is a Cross Encoder model finetuned from distilbert/distilroberta-base on the all-nli dataset using the sentence-transformers library. It computes scores for pairs of texts, which can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

  • Model Type: Cross Encoder
  • Base model: [distilbert/distilroberta-base](https://huggingface.co/distilbert/distilroberta-base)
  • Training Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli)
  • Language: en
  • Number of Output Labels: 3 (the all-nli entailment / neutral / contradiction classes)

### Model Sources

  • Documentation: [Sentence Transformers Documentation](https://sbert.net)
  • Repository: [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.

```python
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("tomaarsen/reranker-distilroberta-base-nli")
# Get scores for pairs...
pairs = [
    ['Two women are embracing while holding to go packages.', 'The sisters are hugging goodbye while holding to go packages after just eating lunch.'],
    ['Two women are embracing while holding to go packages.', 'Two woman are holding packages.'],
    ['Two women are embracing while holding to go packages.', 'The men are fighting outside a deli.'],
    ['Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.', 'Two kids in numbered jerseys wash their hands.'],
    ['Two young children in blue jerseys, one with the number 9 and one with the number 2 are standing on wooden steps in a bathroom and washing their hands in a sink.', 'Two kids at a ballgame wash their hands.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5, 3)

# ... or rank different texts based on similarity to a single text
ranks = model.rank(
    'Two women are embracing while holding to go packages.',
    [
        'The sisters are hugging goodbye while holding to go packages after just eating lunch.',
        'Two woman are holding packages.',
        'The men are fighting outside a deli.',
        'Two kids in numbered jerseys wash their hands.',
        'Two kids at a ballgame wash their hands.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
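
Because this is a three-class NLI model, `model.predict` returns one logit per class rather than a single relevance score. Below is a minimal sketch for mapping those logits to label names; the `label_names` ordering is an assumption based on the usual all-nli convention, which matches the 0/1/2 samples shown under Training Details below:

```python
import numpy as np
from sentence_transformers import CrossEncoder

model = CrossEncoder("tomaarsen/reranker-distilroberta-base-nli")

# Assumed label order (all-nli convention): 0 = entailment, 1 = neutral, 2 = contradiction
label_names = ["entailment", "neutral", "contradiction"]

pairs = [
    ["Two women are embracing while holding to go packages.", "Two woman are holding packages."],
    ["Two women are embracing while holding to go packages.", "The men are fighting outside a deli."],
]
logits = model.predict(pairs)  # shape: (num_pairs, 3)
# Softmax over the class axis converts the logits into probabilities
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
for (premise, hypothesis), prob in zip(pairs, probs):
    print(f"{label_names[prob.argmax()]} ({prob.max():.1%}): {hypothesis!r}")
```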

## Evaluation

### Metrics

#### Cross Encoder Classification

| Metric      | AllNLI-dev | AllNLI-test |
|:------------|:-----------|:------------|
| f1_macro    | 0.8495     | 0.7574      |
| f1_micro    | 0.851      | 0.7576      |
| f1_weighted | 0.8495     | 0.7583      |
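
Both columns are computed on the corresponding all-nli splits. As a hedged sketch of how the dev figures could be reproduced (assuming the `CrossEncoderClassificationEvaluator` shipped with recent sentence-transformers releases):

```python
from datasets import load_dataset
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderClassificationEvaluator

model = CrossEncoder("tomaarsen/reranker-distilroberta-base-nli")

# "pair-class" is the all-nli configuration with premise / hypothesis / label columns
eval_dataset = load_dataset("sentence-transformers/all-nli", "pair-class", split="dev")
evaluator = CrossEncoderClassificationEvaluator(
    sentence_pairs=list(zip(eval_dataset["premise"], eval_dataset["hypothesis"])),
    labels=eval_dataset["label"],
    name="AllNLI-dev",
)
print(evaluator(model))  # expected to include an 'AllNLI-dev_f1_macro' entry near 0.8495
```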

## Training Details

### Training Dataset

#### all-nli

  • Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at d482672
  • Size: 942,069 training samples
  • Columns: premise, hypothesis, and label
  • Approximate statistics based on the first 1000 samples:

    |         | premise                                                         | hypothesis                                                      | label                              |
    |:--------|:----------------------------------------------------------------|:----------------------------------------------------------------|:-----------------------------------|
    | type    | string                                                          | string                                                          | int                                |
    | details | min: 23 characters, mean: 69.54 characters, max: 227 characters | min: 11 characters, mean: 38.26 characters, max: 131 characters | 0: ~33.40%, 1: ~33.30%, 2: ~33.30% |

  • Samples:

    | premise                                                | hypothesis                                        | label |
    |:-------------------------------------------------------|:---------------------------------------------------|:------|
    | A person on a horse jumps over a broken down airplane. | A person is training his horse for a competition. | 1     |
    | A person on a horse jumps over a broken down airplane. | A person is at a diner, ordering an omelette.     | 2     |
    | A person on a horse jumps over a broken down airplane. | A person is outdoors, on a horse.                 | 0     |

  • Loss: CrossEntropyLoss
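
The training split can be loaded directly with 🤗 Datasets; a minimal sketch (using the current dataset revision rather than pinning d482672):

```python
from datasets import load_dataset

train_dataset = load_dataset("sentence-transformers/all-nli", "pair-class", split="train")
print(len(train_dataset))  # 942069
print(train_dataset[0])
# {'premise': 'A person on a horse jumps over a broken down airplane.',
#  'hypothesis': 'A person is training his horse for a competition.',
#  'label': 1}
```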

### Evaluation Dataset

#### all-nli

  • Dataset: [all-nli](https://huggingface.co/datasets/sentence-transformers/all-nli) at d482672
  • Size: 19,657 evaluation samples
  • Columns: premise, hypothesis, and label
  • Approximate statistics based on the first 1000 samples:

    |         | premise                                                         | hypothesis                                                      | label                              |
    |:--------|:----------------------------------------------------------------|:----------------------------------------------------------------|:-----------------------------------|
    | type    | string                                                          | string                                                          | int                                |
    | details | min: 16 characters, mean: 75.01 characters, max: 229 characters | min: 11 characters, mean: 37.66 characters, max: 116 characters | 0: ~33.10%, 1: ~33.30%, 2: ~33.60% |

  • Samples:

    | premise                                                | hypothesis                                                                             | label |
    |:-------------------------------------------------------|:----------------------------------------------------------------------------------------|:------|
    | Two women are embracing while holding to go packages. | The sisters are hugging goodbye while holding to go packages after just eating lunch. | 1     |
    | Two women are embracing while holding to go packages. | Two woman are holding packages.                                                       | 0     |
    | Two women are embracing while holding to go packages. | The men are fighting outside a deli.                                                  | 2     |

  • Loss: CrossEntropyLoss

### Training Hyperparameters

#### Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • num_train_epochs: 1
  • warmup_ratio: 0.1
  • bf16: True
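
These non-default values map onto the cross-encoder training API. Below is a minimal, hedged training sketch (assuming the `CrossEncoderTrainer`, `CrossEncoderTrainingArguments`, and cross-encoder `CrossEntropyLoss` classes from the sentence-transformers trainer refactor; the `output_dir` is illustrative):

```python
from datasets import load_dataset
from sentence_transformers.cross_encoder import (
    CrossEncoder,
    CrossEncoderTrainer,
    CrossEncoderTrainingArguments,
)
from sentence_transformers.cross_encoder.losses import CrossEntropyLoss

# Three output labels: entailment / neutral / contradiction
model = CrossEncoder("distilbert/distilroberta-base", num_labels=3)
dataset = load_dataset("sentence-transformers/all-nli", "pair-class")
loss = CrossEntropyLoss(model)

args = CrossEncoderTrainingArguments(
    output_dir="reranker-distilroberta-base-nli",  # illustrative
    num_train_epochs=1,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    warmup_ratio=0.1,
    bf16=True,
    eval_strategy="steps",
)

trainer = CrossEncoderTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["dev"],
    loss=loss,
)
trainer.train()
```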

#### All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 64
  • per_device_eval_batch_size: 64
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: True
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

### Training Logs

| Epoch  | Step | Training Loss | Validation Loss | AllNLI-dev_f1_macro | AllNLI-test_f1_macro |
|:------:|:----:|:-------------:|:---------------:|:-------------------:|:--------------------:|
| -1     | -1   | -             | -               | 0.1677              | -                    |
| 0.0640 | 100  | 1.0454        | -               | -                   | -                    |
| 0.1280 | 200  | 0.7193        | -               | -                   | -                    |
| 0.1919 | 300  | 0.6247        | -               | -                   | -                    |
| 0.2559 | 400  | 0.5907        | -               | -                   | -                    |
| 0.3199 | 500  | 0.5671        | 0.4578          | 0.8206              | -                    |
| 0.3839 | 600  | 0.5384        | -               | -                   | -                    |
| 0.4479 | 700  | 0.5492        | -               | -                   | -                    |
| 0.5118 | 800  | 0.5281        | -               | -                   | -                    |
| 0.5758 | 900  | 0.5043        | -               | -                   | -                    |
| 0.6398 | 1000 | 0.5243        | 0.4012          | 0.8415              | -                    |
| 0.7038 | 1100 | 0.4906        | -               | -                   | -                    |
| 0.7678 | 1200 | 0.4877        | -               | -                   | -                    |
| 0.8317 | 1300 | 0.4506        | -               | -                   | -                    |
| 0.8957 | 1400 | 0.4728        | -               | -                   | -                    |
| 0.9597 | 1500 | 0.4602        | 0.3731          | 0.8495              | -                    |
| -1     | -1   | -             | -               | -                   | 0.7574               |

## Environmental Impact

Carbon emissions were measured using CodeCarbon.

  • Energy Consumed: 0.015 kWh
  • Carbon Emitted: 0.006 kg of CO2
  • Hours Used: 0.058 hours
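
As a rough sketch of how CodeCarbon collects such figures (the tracker placement is illustrative, not the exact configuration used for this run):

```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()  # samples CPU, GPU, and RAM power draw
tracker.start()
# ... run training here ...
emissions_kg = tracker.stop()  # total emissions in kg CO2-eq
print(f"{emissions_kg:.6f} kg CO2-eq")
```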

### Training Hardware

  • On Cloud: No
  • GPU Model: 1 x NVIDIA GeForce RTX 3090
  • CPU Model: 13th Gen Intel(R) Core(TM) i7-13700K
  • RAM Size: 31.78 GB

## Framework Versions

  • Python: 3.11.6
  • Sentence Transformers: 3.5.0.dev0
  • Transformers: 4.49.0.dev0
  • PyTorch: 2.5.0+cu121
  • Accelerate: 1.3.0
  • Datasets: 2.20.0
  • Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```