SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
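
For reference, here is a minimal sketch of what the Pooling and Normalize modules amount to when the underlying BertModel is driven through the plain transformers API: mean pooling over token embeddings (ignoring padding) followed by L2 normalization. The example sentence is illustrative.

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

model_id = "dawn78/minilm6_perfumerecommender_v4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
bert = AutoModel.from_pretrained(model_id)

def mean_pooling(token_embeddings, attention_mask):
    # Average token embeddings, ignoring padding positions (pooling_mode_mean_tokens=True)
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

encoded = tokenizer(["a fresh citrus fragrance"], padding=True, truncation=True,
                    max_length=256, return_tensors="pt")
with torch.no_grad():
    token_embeddings = bert(**encoded).last_hidden_state
embeddings = mean_pooling(token_embeddings, encoded["attention_mask"])
embeddings = F.normalize(embeddings, p=2, dim=1)  # the Normalize() module
print(embeddings.shape)  # torch.Size([1, 384])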

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("dawn78/minilm6_perfumerecommender_v4")
# Run inference
sentences = [
    'scentini citrus chill by avon invites you into a vibrant sunsoaked escape with its exuberant blend of fruity and floral notes that perfectly capture the essence of a tropical paradise users describe this fragrance as refreshingly lively with a juicy brightness that invigorates the senses and uplifts the spirit its playful heart reveals a delicate floral charm which balances the effervescent citrus opening infusing the scent with a lighthearted and carefree vibe ideal for warm weather and casual outings this fragrance has garnered mixed reviews where many appreciate its refreshing quality and the delightful burst of sweetness it offers while some find its longevity to be moderate others revel in its cheerful presence that brings forth a feeling of joy and celebration overall scentini citrus chill is a delightful choice for those seeking a versatile easygoing fragrance that evokes the blissful feeling of a sunny day',
    'frangipani',
    'coriander seed',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
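
Since the training pairs couple perfume descriptions with individual note names (see Training Details below), a natural use is ranking candidate notes against a free-text description. A hedged sketch, with illustrative candidate notes:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("dawn78/minilm6_perfumerecommender_v4")

description = "a bright, sparkling citrus scent for warm summer days"
candidate_notes = ["frangipani", "coriander seed", "bergamot", "vanilla"]

# Embed the query description and the candidate notes, then rank by cosine similarity
query_embedding = model.encode([description])
note_embeddings = model.encode(candidate_notes)
scores = model.similarity(query_embedding, note_embeddings)[0]

for note, score in sorted(zip(candidate_notes, scores.tolist()), key=lambda x: -x[1]):
    print(f"{note}: {score:.4f}")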

Evaluation

Metrics

Semantic Similarity

Metric Value
pearson_cosine 0.3664
spearman_cosine 0.2002
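
These metrics measure how well cosine similarities between evaluation pairs track the gold labels, via Pearson and Spearman correlation. A minimal sketch of that computation follows, with hypothetical evaluation triples (the actual evaluation split is not included in this card):

import numpy as np
from scipy.stats import pearsonr, spearmanr
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("dawn78/minilm6_perfumerecommender_v4")

# Hypothetical (description, note, gold label) evaluation triples
pairs = [
    ("a creamy sweet gourmand scent with warm vanilla", "vanilla", 1.0),
    ("a creamy sweet gourmand scent with warm vanilla", "coriander seed", 0.0),
    ("a sparkling citrus cologne for summer", "bergamot", 1.0),
    ("a sparkling citrus cologne for summer", "oud", 0.0),
]
emb_a = model.encode([a for a, _, _ in pairs])
emb_b = model.encode([b for _, b, _ in pairs])
labels = [label for _, _, label in pairs]

# Cosine similarity of each pair (the embeddings are already L2-normalized)
cosine_scores = [float(np.dot(a, b)) for a, b in zip(emb_a, emb_b)]
print("pearson_cosine:", pearsonr(cosine_scores, labels)[0])
print("spearman_cosine:", spearmanr(cosine_scores, labels)[0])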

Training Details

Training Dataset

Unnamed Dataset

  • Size: 116,121 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 1000 samples:
    Column       Type     Statistics
    sentence_0   string   min: 12 tokens, mean: 181.42 tokens, max: 256 tokens
    sentence_1   string   min: 3 tokens, mean: 4.26 tokens, max: 8 tokens
    label        float    min: 0.0, mean: 0.03, max: 1.0
  • Samples:
    Sample 1
      sentence_0: rose hubris by ex nihilo is an enchanting unisex fragrance that beautifully marries the essence of lush florals with earthy undertones this scent released in 2014 exudes an inviting warmth and sophistication making it a perfect choice for those who appreciate depth in their fragrance users have noted its elegant balance between sweetness and earthiness with a prominent emphasis on a decadent floral heart that captivates the senses the mood of rose hubris is often described as both luxurious and introspective ideal for evening wear or special occasions reviewers highlight its complexity noting that it evolves gracefully on the skin revealing its musky character and rich woody base as time passes while some cherish its remarkable longevity others find its presence to be a touch introspective adding an air of mystery without being overwhelming in essence rose hubris stands out as a signature scent for those who seek a fragrance that is both beautifully floral and ruggedly grounded embodyi...
      sentence_1: baies rose
      label: 0.0
    Sample 2
      sentence_0: l a glow by jennifer lopez is an enchanting fragrance that captures a playful and vibrant essence with its luscious blend of fruity sweetness and delicate floral notes this scent evokes a sense of effortless femininity and youthful exuberance the initial burst of succulent berries and cherries creates an inviting and radiant atmosphere while hints of soft flowers bring a romantic touch to the heart of the fragrance users have described l a glow as a delightful and uplifting scent perfect for everyday wear many appreciate its joyful character and the way it captures attention without overwhelming the musky undertones add a warm depth leaving a lingering impression that balances lightness and sophistication with a solid rating from a diverse audience this fragrance is celebrated for its versatility and longlasting wear making it a perfect companion for both casual outings and special occasions
      sentence_1: cypriol
      label: 0.0
    Sample 3
      sentence_0: eternal magic by avon is an enchanting fragrance designed for the modern woman evoking a sense of elegant allure and mystique released in 2010 this captivating scent weaves together a tapestry of soft florals and warm vanilla presenting a beautifully balanced olfactory experience users frequently describe it as delicate yet assertive with powdery nuances that wrap around the senses like a gentle embrace the fragrance exudes a charming freshness making it suitable for both everyday wear and special occasions many appreciate its romantic character often highlighting the sophisticated interplay of floral delicacies intertwined with rich woody undertones despite its lightness it has garnered attention for its longevity with wearers relishing how the scent evolves throughout the day a frequent sentiment among users is the feeling of wearing a personal aura that captivates those around leaving a soft yet unforgettable impression eternal magic is not just a scent its a celebration of feminini...
      sentence_1: cranberry
      label: 0.0
  • Loss: CosineSimilarityLoss with these parameters:
    {
        "loss_fct": "torch.nn.modules.loss.MSELoss"
    }
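
That is, the loss computes the cosine similarity between the two embeddings of each pair and regresses it onto the float label with mean squared error. A minimal sketch of the idea, not the library's exact implementation:

import torch
import torch.nn.functional as F

def cosine_similarity_mse_loss(emb_0, emb_1, labels):
    # Predicted score: cosine similarity between paired embeddings
    predicted = F.cosine_similarity(emb_0, emb_1)
    # Loss: mean squared error against the float label in [0, 1]
    return F.mse_loss(predicted, labels)

emb_0 = torch.randn(32, 384)  # batch of sentence_0 (description) embeddings
emb_1 = torch.randn(32, 384)  # batch of sentence_1 (note) embeddings
labels = torch.rand(32)       # gold similarity labels
print(cosine_similarity_mse_loss(emb_0, emb_1, labels))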
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • num_train_epochs: 1
  • multi_dataset_batch_sampler: round_robin
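
A hedged sketch of setting up a comparable run with the SentenceTransformerTrainer API and the non-default hyperparameters above; the in-memory dataset, output directory, and eval split are placeholders, since the actual training data is an unnamed dataset of (sentence_0, sentence_1, label) triples:

from datasets import Dataset
from sentence_transformers import (SentenceTransformer, SentenceTransformerTrainer,
                                   SentenceTransformerTrainingArguments)
from sentence_transformers.losses import CosineSimilarityLoss

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Placeholder data in the same (sentence_0, sentence_1, label) format
train_dataset = Dataset.from_dict({
    "sentence_0": ["a bright sparkling citrus fragrance for summer"],
    "sentence_1": ["bergamot"],
    "label": [1.0],
})

args = SentenceTransformerTrainingArguments(
    output_dir="minilm6_perfumerecommender",  # placeholder path
    eval_strategy="steps",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=1,
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # placeholder eval split for the sketch
    loss=CosineSimilarityLoss(model),
)
trainer.train()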

All Hyperparameters

Click to expand
  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step Training Loss spearman_cosine
0.0276 100 - 0.0722
0.0551 200 - 0.1077
0.0827 300 - 0.1314
0.1102 400 - 0.1352
0.1378 500 0.0285 0.1434
0.1653 600 - 0.1604
0.1929 700 - 0.1678
0.2204 800 - 0.1695
0.2480 900 - 0.1709
0.2756 1000 0.0253 0.1690
0.3031 1100 - 0.1709
0.3307 1200 - 0.1786
0.3582 1300 - 0.1794
0.3858 1400 - 0.1733
0.4133 1500 0.0252 0.1799
0.4409 1600 - 0.1795
0.4684 1700 - 0.1847
0.4960 1800 - 0.1871
0.5236 1900 - 0.1876
0.5511 2000 0.024 0.1848
0.5787 2100 - 0.1897
0.6062 2200 - 0.1929
0.6338 2300 - 0.1943
0.6613 2400 - 0.1938
0.6889 2500 0.023 0.1938
0.7165 2600 - 0.1963
0.7440 2700 - 0.1969
0.7716 2800 - 0.1946
0.7991 2900 - 0.1961
0.8267 3000 0.0209 0.1968
0.8542 3100 - 0.1971
0.8818 3200 - 0.1979
0.9093 3300 - 0.1988
0.9369 3400 - 0.1996
0.9645 3500 0.0237 0.1999
0.9920 3600 - 0.2002
1.0 3629 - 0.2002

Framework Versions

  • Python: 3.11.11
  • Sentence Transformers: 3.3.1
  • Transformers: 4.47.1
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.2.1
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}