SentenceTransformer based on BAAI/bge-m3
This is a sentence-transformers model finetuned from BAAI/bge-m3. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: BAAI/bge-m3
- Maximum Sequence Length: 8192 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
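In plain terms: the model runs an XLM-RoBERTa encoder, keeps the embedding of the leading CLS token (pooling_mode_cls_token: True), and L2-normalizes it. Below is a minimal sketch of that same computation using the transformers library directly, assuming the fine-tuned transformer weights load via AutoModel from the same repository; in practice you would simply use SentenceTransformer as shown in the Usage section.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "YoungjaeDev/dacon-bge-m3-finetuned-embedding-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModel.from_pretrained(model_id)  # XLMRobertaModel

batch = tokenizer(["예시 문장"], padding=True, truncation=True,
                  max_length=8192, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 1024)

# Module (1): CLS-token pooling -- keep only the first token's embedding.
cls_embedding = token_embeddings[:, 0]
# Module (2): Normalize() -- L2-normalize so cosine similarity reduces to a dot product.
embedding = torch.nn.functional.normalize(cls_embedding, p=2, dim=1)
print(embedding.shape)  # torch.Size([1, 1024])
```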
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
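# The three sentences below are Korean: a safety-guideline query followed by two candidate passages.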
sentences = [
'재료 적치장소와 통로 계획 시 고려해야 할 사항들은 무엇인지 열거하세요.',
'(4) 달대비계에는 최대 적재하중과 안전 표지판을 설치한다.\n\n(5) 달대비계는 적절한 양중장비를 사용하여 설치장소까지 운반하고, 안전대를 착용하는 등 안전한 작업방법으로 설치하여 추락재해를 예방하여야 한다.\n\n7.2 재료 적치장소와 통로\n\n(1) 철골 세우기의 진행에 따라 공사용 재료, 공구, 용접기 등이 쌓여놓는 장소와 통로를 설치하여야 하며, 구체공사에도 이용될 수 있도록 계획하여야 한다.\n\n(2) 철골근콘크리트조의 경우 작업장을 통상 연면적 1,000 m² 에 1개소를 설치하고, 그 면적은 50 m² 이상이어야 한다. 또한 2개소 이상 설치할 경우에는 작업장 간 상호 연락통로를 설치하여야 한다.\n\n(3) 작업장 설치위치는 크레인의 선회범위 내에서 수평운반거리가 가장 짧게 되도록 계획하여야 한다.\n\n(4) 계획상 최대적재하중과 작업내용, 공정 등을 검토하여 작업장에 적재되는 자재의 수량, 배치방법 등의 제한요령을 명확히 정하여 안전수칙을 부착하여야 한다.',
'안전보건기술지침의 개요\n\n○ 제정자 : 한국산업안전보건공단 광주지역본부 김 경 순\n○ 개정자 : (사)한국건설안전협회 최순주\n\n○ 제정경과\n - 2010년 10월 건설안전분야 제정위원회 심의(제정)\n - 2012년 7월 건설안전분야 제정위원회 심의(개정)\n - 2020년 11월 건설안전분야 표준제정위원회 심의(개정,',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
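Because the embeddings are unit-normalized, they can be used directly for retrieval. Continuing from the snippet above (reusing its model, sentences, and embeddings variables), this ranks the two passages against the query:

```python
# sentences[0] is a query; sentences[1:] are candidate passages.
query_embedding = embeddings[0]
passage_embeddings = embeddings[1:]

# model.similarity defaults to cosine similarity (the model's similarity function).
scores = model.similarity(query_embedding, passage_embeddings)  # shape [1, 2]
best = int(scores.argmax())
print(f"Best passage: sentences[{best + 1}] (score {scores[0, best].item():.4f})")
```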
Evaluation
Metrics
Information Retrieval
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.9269 |
cosine_accuracy@3 | 0.9677 |
cosine_accuracy@5 | 0.9755 |
cosine_accuracy@10 | 0.9855 |
cosine_precision@1 | 0.9269 |
cosine_precision@3 | 0.3226 |
cosine_precision@5 | 0.1951 |
cosine_precision@10 | 0.0986 |
cosine_recall@1 | 0.9269 |
cosine_recall@3 | 0.9677 |
cosine_recall@5 | 0.9755 |
cosine_recall@10 | 0.9855 |
cosine_ndcg@10 | 0.9576 |
cosine_mrr@10 | 0.9485 |
cosine_map@100 | 0.9491 |
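These numbers were produced with InformationRetrievalEvaluator. The exact query/corpus split is not published with this card, so the following is only a sketch of how such an evaluation is run, with hypothetical IDs and a toy corpus:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("YoungjaeDev/dacon-bge-m3-finetuned-embedding-model")

# Hypothetical toy data; the real evaluation set is not included in this card.
queries = {"q1": "재료 적치장소와 통로 계획 시 고려해야 할 사항들은 무엇인지 열거하세요."}
corpus = {
    "d1": "7.2 재료 적치장소와 통로에 관한 지침 본문",
    "d2": "안전보건기술지침의 개요",
}
relevant_docs = {"q1": {"d1"}}  # query id -> set of relevant corpus ids

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
results = evaluator(model)
print(results["cosine_ndcg@10"])
```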
Training Details
Training Dataset
Unnamed Dataset
- Size: 9,933 training samples
- Columns: sentence_0 and sentence_1
- Approximate statistics based on the first 1000 samples:

| | sentence_0 | sentence_1 |
|---|---|---|
| type | string | string |
| details | min: 10 tokens, mean: 25.89 tokens, max: 64 tokens | min: 13 tokens, mean: 190.18 tokens, max: 420 tokens |
- Samples (Korean question-passage pairs drawn from occupational safety and health guidelines):

| sentence_0 | sentence_1 |
|---|---|
| 아스팔트콘크리트 포장공사 안전보건작업 지침이 언제 발행되었는가? | 아스팔트콘크리트 포장공사 안전보건작업 지침 2012. 8. 한국 산업안전보건공단 |
| 이 지침의 발행 주체는 어떤 기관인가? | 아스팔트콘크리트 포장공사 안전보건작업 지침 2012. 8. 한국 산업안전보건공단 |
| 2012년에 발행된 아스팔트콘크리트 포장공사 안전보건작업 지침의 목적이 무엇인가? | 아스팔트콘크리트 포장공사 안전보건작업 지침 2012. 8. 한국 산업안전보건공단 |

- Loss: MultipleNegativesRankingLoss with these parameters:
  { "scale": 20.0, "similarity_fct": "cos_sim" }
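MultipleNegativesRankingLoss trains with in-batch negatives: for each (sentence_0, sentence_1) pair, the paired passage is the positive and every other passage in the batch serves as a negative. Below is a minimal sketch of the objective under the parameters above (scale 20.0, cosine similarity); this is an illustration, not the library's internal implementation.

```python
import torch
import torch.nn.functional as F

def mnr_loss(query_emb: torch.Tensor, passage_emb: torch.Tensor,
             scale: float = 20.0) -> torch.Tensor:
    # Pairwise cosine scores between every query and every passage in the batch.
    scores = F.cosine_similarity(query_emb.unsqueeze(1), passage_emb.unsqueeze(0), dim=-1)
    # Row i's positive is passage i; all other passages act as in-batch negatives.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores * scale, labels)

# With per_device_train_batch_size = 10, each question is contrasted against
# its 1 positive passage and 9 in-batch negatives.
```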
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 10
- per_device_eval_batch_size: 10
- num_train_epochs: 2
- multi_dataset_batch_sampler: round_robin
All Hyperparameters
Click to expand
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 10
- per_device_eval_batch_size: 10
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 2
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
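Putting the pieces together, here is a hedged reconstruction of the training run using the non-default hyperparameters above. The dataset rows and output directory are placeholders; the real 9,933-pair dataset is not published with this card.

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("BAAI/bge-m3")

# Placeholder rows standing in for the 9,933 (sentence_0, sentence_1) pairs.
train_dataset = Dataset.from_dict({
    "sentence_0": ["아스팔트콘크리트 포장공사 안전보건작업 지침이 언제 발행되었는가?"],
    "sentence_1": ["아스팔트콘크리트 포장공사 안전보건작업 지침 2012. 8. 한국 산업안전보건공단"],
})

args = SentenceTransformerTrainingArguments(
    output_dir="bge-m3-finetuned",  # hypothetical path
    num_train_epochs=2,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    multi_dataset_batch_sampler="round_robin",
    # The original run also set eval_strategy="steps", evaluating with the
    # InformationRetrievalEvaluator sketched in the Evaluation section.
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=MultipleNegativesRankingLoss(model),
)
trainer.train()
```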
Training Logs
Epoch | Step | cosine_ndcg@10 |
---|---|---|
0.2008 | 50 | 0.9420 |
0.4016 | 100 | 0.9482 |
0.6024 | 150 | 0.9528 |
0.8032 | 200 | 0.9545 |
1.0 | 249 | 0.9576 |
Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.49.0
- PyTorch: 2.5.1+cu124
- Accelerate: 1.4.0
- Datasets: 3.3.2
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}