SentenceTransformer based on BAAI/bge-m3

This is a sentence-transformers model finetuned from BAAI/bge-m3. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: BAAI/bge-m3
  • Maximum Sequence Length: 8192 tokens
  • Output Dimensionality: 1024 dimensions
  • Similarity Function: Cosine Similarity
  • Model Size: 568M parameters (F32 safetensors)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel 
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
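
Because the final Normalize() module L2-normalizes the CLS-pooled output, every embedding has unit length, so the dot product of two embeddings equals their cosine similarity. A quick check (a minimal sketch; the encoded string is illustrative):

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("YoungjaeDev/dacon-bge-m3-finetuned-embedding-model")
emb = model.encode(["철골 세우기 작업 계획"])  # illustrative Korean input
print(np.linalg.norm(emb[0]))  # ~1.0: unit-length output from the Normalize() module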

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("YoungjaeDev/dacon-bge-m3-finetuned-embedding-model")
# Run inference
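# (The Korean example strings below are construction-safety guideline queries and passages.)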
sentences = [
    '재료 적치장소와 통로 계획 시 고려해야 할 사항들은 무엇인지 열거하세요.',
    '(4) 달대비계에는 최대 적재하중과 안전 표지판을 설치한다.\n\n(5) 달대비계는 적절한 양중장비를 사용하여 설치장소까지 운반하고, 안전대를 착용하는 등 안전한 작업방법으로 설치하여 추락재해를 예방하여야 한다.\n\n7.2 재료 적치장소와 통로\n\n(1) 철골 세우기의 진행에 따라 공사용 재료, 공구, 용접기 등이 쌓여놓는 장소와 통로를 설치하여야 하며, 구체공사에도 이용될 수 있도록 계획하여야 한다.\n\n(2) 철골근콘크리트조의 경우 작업장을 통상 연면적 1,000 m² 에 1개소를 설치하고, 그 면적은 50 m² 이상이어야 한다. 또한 2개소 이상 설치할 경우에는 작업장 간 상호 연락통로를 설치하여야 한다.\n\n(3) 작업장 설치위치는 크레인의 선회범위 내에서 수평운반거리가 가장 짧게 되도록 계획하여야 한다.\n\n(4) 계획상 최대적재하중과 작업내용, 공정 등을 검토하여 작업장에 적재되는 자재의 수량, 배치방법 등의 제한요령을 명확히 정하여 안전수칙을 부착하여야 한다.',
    '안전보건기술지침의 개요\n\n○ 제정자 : 한국산업안전보건공단 광주지역본부 김 경 순\n○ 개정자 : (사)한국건설안전협회 최순주\n\n○ 제정경과\n  - 2010년 10월 건설안전분야 제정위원회 심의(제정)\n  - 2012년 7월 건설안전분야 제정위원회 심의(개정)\n  - 2020년 11월 건설안전분야 표준제정위원회 심의(개정,',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
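
For semantic search, encode the query and the candidate passages separately and rank by cosine similarity. A minimal sketch reusing the model and sentences loaded above (the query string is illustrative):

# Treat the two passages above as a tiny corpus and rank them against a query
query_emb = model.encode(["달대비계 설치 시 안전조치는 무엇인가?"])  # illustrative query
corpus = sentences[1:]
corpus_emb = model.encode(corpus)
scores = model.similarity(query_emb, corpus_emb)  # cosine similarity, shape [1, 2]
best = scores[0].argmax().item()
print(best, scores[0][best].item())  # index and score of the best-matching passage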

Evaluation

Metrics

Information Retrieval

Metric                Value
cosine_accuracy@1     0.9269
cosine_accuracy@3     0.9677
cosine_accuracy@5     0.9755
cosine_accuracy@10    0.9855
cosine_precision@1    0.9269
cosine_precision@3    0.3226
cosine_precision@5    0.1951
cosine_precision@10   0.0986
cosine_recall@1       0.9269
cosine_recall@3       0.9677
cosine_recall@5       0.9755
cosine_recall@10      0.9855
cosine_ndcg@10        0.9576
cosine_mrr@10         0.9485
cosine_map@100        0.9491
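
These are the standard outputs of the Sentence Transformers InformationRetrievalEvaluator. A hedged sketch of how such numbers are produced; the queries, corpus, and relevant_docs dicts below are placeholders, not the actual evaluation split:

from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Placeholder data: query ID -> text, doc ID -> text, query ID -> set of relevant doc IDs
queries = {"q1": "재료 적치장소와 통로 계획 시 고려해야 할 사항들은 무엇인지 열거하세요."}
corpus = {"d1": "7.2 재료 적치장소와 통로 (1) 철골 세우기의 진행에 따라 ..."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="dev")
results = evaluator(model)  # dict of metrics such as dev_cosine_ndcg@10
print(results)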

Training Details

Training Dataset

Unnamed Dataset

  • Size: 9,933 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:

                sentence_0           sentence_1
    type        string               string
    details     min: 10 tokens       min: 13 tokens
                mean: 25.89 tokens   mean: 190.18 tokens
                max: 64 tokens       max: 420 tokens
  • Samples (all three pairs share the same truncated document-header passage as sentence_1):

    sentence_0: 아스팔트콘크리트 포장공사 안전보건작업 지침이 언제 발행되었는가?
    sentence_1: 아스팔트콘크리트 포장공사 안전보건작업 지침 / 2012. 8. / 한국 산업안전보건공단

    sentence_0: 이 지침의 발행 주체는 어떤 기관인가?
    sentence_1: 아스팔트콘크리트 포장공사 안전보건작업 지침 / 2012. 8. / 한국 산업안전보건공단

    sentence_0: 2012년에 발행된 아스팔트콘크리트 포장공사 안전보건작업 지침의 목적이 무엇인가?
    sentence_1: 아스팔트콘크리트 포장공사 안전보건작업 지침 / 2012. 8. / 한국 산업안전보건공단
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
    
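These parameters map directly onto the loss constructor; util.cos_sim is the default similarity function and matches "cos_sim" above (a minimal sketch, assuming the model loaded earlier):

from sentence_transformers import util
from sentence_transformers.losses import MultipleNegativesRankingLoss

# In-batch negatives: for each (sentence_0, sentence_1) pair, the other
# sentence_1 entries in the batch act as negatives.
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)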

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • num_train_epochs: 2
  • multi_dataset_batch_sampler: round_robin
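
These values correspond to SentenceTransformerTrainingArguments fields. A hedged sketch of a matching training setup; output_dir and train_dataset are placeholders, and model and loss are assumed from the sketches above:

from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="output",  # placeholder
    eval_strategy="steps",
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=2,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # placeholder: the 9,933-pair dataset described above
    loss=loss,
)
trainer.train()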

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 10
  • per_device_eval_batch_size: 10
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 2
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch    Step   cosine_ndcg@10
0.2008   50     0.9420
0.4016   100    0.9482
0.6024   150    0.9528
0.8032   200    0.9545
1.0      249    0.9576

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 3.4.1
  • Transformers: 4.49.0
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.4.0
  • Datasets: 3.3.2
  • Tokenizers: 0.21.0
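
To reproduce this environment, the listed versions can be pinned directly (a convenience command, not an official requirements file; the CUDA-specific PyTorch build may need the appropriate index URL):

pip install sentence-transformers==3.4.1 transformers==4.49.0 torch==2.5.1 accelerate==1.4.0 datasets==3.3.2 tokenizers==0.21.0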

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}