metadata
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:19089
  - loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
model-index:
  - name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: val ir eval
          type: val-ir-eval
        metrics:
          - type: cosine_accuracy@1
            value: 0.37753510140405616
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.5858034321372855
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 0.6809672386895476
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 0.7753510140405616
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.37753510140405616
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.19552782111284447
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.13634945397815915
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.07769110764430577
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.37675507020280813
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.5854134165366615
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 0.6801872074882995
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 0.7739209568382736
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.5691636886714377
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.5045034420424439
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.512220040783084
            name: Cosine Map@100

SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
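
These properties can be checked directly on the loaded model; a minimal sketch using standard sentence-transformers attributes (nothing model-specific is assumed beyond the configuration listed above):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("tjohn327/scion-minilm-v2")
print(model.max_seq_length)                      # 256, longer inputs are truncated
print(model.get_sentence_embedding_dimension())  # 384
print(model.similarity_fn_name)                  # "cosine"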

Model Sources

  • Documentation: https://www.sbert.net
  • Repository: https://github.com/UKPLab/sentence-transformers
  • Hugging Face: https://huggingface.co/models?library=sentence-transformers

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
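
The Pooling and Normalize modules amount to attention-mask-aware mean pooling over token embeddings followed by L2 normalization. As a sketch, the same computation with plain transformers (assuming the repository's weights load through AutoModel, as is usual for sentence-transformers checkpoints):

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("tjohn327/scion-minilm-v2")
model = AutoModel.from_pretrained("tjohn327/scion-minilm-v2")

encoded = tokenizer(["What is SCION path authorization?"],
                    padding=True, truncation=True, max_length=256, return_tensors="pt")
with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # (batch, tokens, 384)

# Mean pooling: average token embeddings, ignoring padding via the attention mask
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
# L2-normalize so that dot product equals cosine similarity
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([1, 384])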

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("tjohn327/scion-minilm-v2")
# Run inference
sentences = [
    "What role does the function 'target' play in modeling network topology?",
    'The document introduces a formal framework for verifying secure forwarding protocols within the SCION Internet architecture, specifically focusing on the ICING data plane protocol. It employs event systems as labeled transition systems, defining an event system E as a tuple comprising a set of states S, an initial state s0, a set of events E, and a transition relation e−→. The framework formalizes the concepts of reachability and invariants, establishing that a state property P is an invariant if the reachable states from the initial state are contained within P. The refinement of abstract event systems to concrete systems is articulated through mappings that preserve invariants. The document emphasizes parametrization, allowing models to incorporate assumptions on parameters, which are highlighted in gray. An abstract model is defined for a path-aware network architecture, excluding cryptographic elements, and is proven to satisfy path authorization and detectability. The network topology is modeled as a multigraph, with nodes representing Autonomous Systems (ASes) and edges representing links, characterized by a partial bijective function target that facilitates multiple links and forwarding policies. Paths in the network are defined as finite sequences of hop fields, encapsulating local routing information.',
    "The document chunk presents a testbed architecture for evaluating Secure In-Band Network Telemetry (ID-INT) within the SCION Internet Architecture, utilizing a Tofino 2 switch as the ID-INT enabled border routers for two SCION Autonomous Systems (ASes). The Dynamic Multi-Path Transport Protocol (DMTP) is adapted to send probe packets and retrieve telemetry data, focusing on instantaneous queue length at the egress of the border router. The experiment assesses DMTP's ability to adjust sending rates on Path 2 based on ID-INT telemetry, with initial path capacities set at 100 Mbps, later reduced to 75 Mbps. Results indicate that DMTP with ID-INT-enabled congestion control aligns sending rates more closely with available link capacity, achieving a 2% gain in goodput despite ID-INT overhead. The adaptation speed of DMTP using ID-INT telemetry is 35% faster than traditional congestion window-based control, demonstrating improved bandwidth utilization and congestion prevention. Related work includes a software implementation of the SCION reference border router in Go, and a hardware implementation on a NetFPGA SUME card capable of 10 Gbps throughput, highlighting the need for efficient high-bandwidth traffic handling in SCION.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
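
For retrieval-style use, which matches the evaluation below, a query can be ranked against candidate passages with the same similarity call. A small sketch, reusing the model loaded above (the passages are illustrative placeholders paraphrased from the example texts, not the actual corpus):

query = "How does SCION model network topology?"
passages = [
    "The network topology is modeled as a multigraph, with nodes representing Autonomous Systems (ASes) and edges representing links.",
    "DMTP adjusts its sending rate on Path 2 based on ID-INT telemetry retrieved from the border router.",
]

query_emb = model.encode([query])
passage_embs = model.encode(passages)

scores = model.similarity(query_emb, passage_embs)[0]  # cosine similarities, shape [2]
ranking = scores.argsort(descending=True)
for idx in ranking.tolist():
    print(float(scores[idx]), passages[idx][:60])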

Evaluation

Metrics

Information Retrieval

Metric               Value
cosine_accuracy@1    0.3775
cosine_accuracy@3    0.5858
cosine_accuracy@5    0.681
cosine_accuracy@10   0.7754
cosine_precision@1   0.3775
cosine_precision@3   0.1955
cosine_precision@5   0.1363
cosine_precision@10  0.0777
cosine_recall@1      0.3768
cosine_recall@3      0.5854
cosine_recall@5      0.6802
cosine_recall@10     0.7739
cosine_ndcg@10       0.5692
cosine_mrr@10        0.5045
cosine_map@100       0.5122
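
These figures follow the metric naming of sentence-transformers' InformationRetrievalEvaluator. A sketch of running the same kind of evaluation on your own query/passage data (the queries, corpus, and relevance judgments below are hypothetical placeholders, and model is the instance loaded in the usage example above):

from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Hypothetical placeholder data; the reported figures used the held-out validation pairs
queries = {"q1": "What role does the function 'target' play in modeling network topology?"}
corpus = {
    "d1": "The network topology is modeled as a multigraph ... characterized by a partial bijective function target ...",
    "d2": "The testbed uses a Tofino 2 switch as the ID-INT enabled border router ...",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="val-ir-eval")
results = evaluator(model)
print(results["val-ir-eval_cosine_ndcg@10"])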

Training Details

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 192
  • per_device_eval_batch_size: 192
  • fp16: True
  • multi_dataset_batch_sampler: round_robin

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 192
  • per_device_eval_batch_size: 192
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 3
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin
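
Taken together, a training run with these non-default settings roughly corresponds to the sketch below. The anchor/positive pairs are placeholders standing in for the actual 19,089-pair dataset, and the evaluator paired with eval_strategy: steps is omitted:

from datasets import Dataset
from sentence_transformers import (SentenceTransformer, SentenceTransformerTrainer,
                                   SentenceTransformerTrainingArguments)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Placeholder (question, passage) pairs; the real run used 19,089 such pairs
train_dataset = Dataset.from_dict({
    "anchor": ["What is a hop field in SCION?"],
    "positive": ["Paths are defined as finite sequences of hop fields ..."],
})

loss = MultipleNegativesRankingLoss(model)  # in-batch negatives, hence the large batch size

args = SentenceTransformerTrainingArguments(
    output_dir="scion-minilm-v2",
    num_train_epochs=3,
    per_device_train_batch_size=192,
    per_device_eval_batch_size=192,
    fp16=True,
    # eval_strategy="steps" was also set, paired with a validation evaluator (omitted here)
)

trainer = SentenceTransformerTrainer(model=model, args=args,
                                     train_dataset=train_dataset, loss=loss)
trainer.train()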

Training Logs

Epoch  Step  val-ir-eval_cosine_ndcg@10
1.0    50    0.5375
2.0    100   0.5626
3.0    150   0.5692

Framework Versions

  • Python: 3.12.3
  • Sentence Transformers: 3.4.1
  • Transformers: 4.49.0
  • PyTorch: 2.6.0+cu124
  • Accelerate: 1.4.0
  • Datasets: 3.3.2
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}