---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:753444
  - loss:CoSENTLoss
base_model: facebook/esm2_t6_8M_UR50D
widget:
  - source_sentence: >-
      A Q T V T Q S Q P E M S V Q E A E T V T L S C T Y D T S E S D Y Y L F W Y
      K Q P P S R Q M I L V I R Q E A Y K Q Q N A T E N R F S V N F Q K A A K S
      F S L K I S D S Q L G D A A M Y F C C A Y R S M S N Y Q L I W W G A G T K
      L I I K P D
    sentences:
      - >-
        A Q T V T Q S Q P E M S V Q E A E T V T L S C T Y D T S E N N Y Y L F W
        Y K Q P P S R Q M I L V I R Q E A Y K Q Q N A T E N R F S V N F Q K A A
        K S F S L K I S D S Q L G D T A M Y F C C A F V A N A G G T S Y G K L T
        F F G Q G T I L T V H P N
      - >-
        A Q T V T Q S Q P E M S V Q E A E T V T L S C T Y D T S E S D Y Y L F W
        Y K Q P P S R Q M I L V I R Q E A Y K Q Q N A T E N R F S V N F Q K A A
        K S F S L K I S D S Q L G D A A M Y F C C A Y R S P N Y G G S Q G N L I
        F F G K G T K L S V K P N
      - >-
        A Q S V A Q P E D Q V N V A E G N P L T V K C T Y S V S G N P Y L F W Y
        V Q Y P N R G L Q F L L K Y I T G D N L V K G S Y G F E A E F N K S Q T
        S F H L K K P S A L V S D S A L Y F C A L D Q A G T A L I F G K G T T L
        S V S S N
  - source_sentence: >-
      L A K T T Q P I S M D S Y E G Q E V N I T C S H N N I A T N D Y I T W Y Q
      Q F P S Q G P R F I I Q G Y K T K V T N E V A S L F I P A D R K S S T L S
      L P R V S L S D T A V Y Y C C L P S G M N Y G G S Q G N L I F F G K G T K
      L S V K P N
    sentences:
      - >-
        I L N V E Q S P Q S L H V Q E G D S T N F T C S F P S S N F Y A L H W Y
        R W E T A K S P E A L F V M T L N G D E K K K G R I S A T L N T K E G Y
        S Y L Y I K G S Q P E D S A T Y L C A F I T G N Q F Y F G T G T S L T V
        I P N
      - >-
        A Q K I T Q T Q P G M F V Q E K E A V T L D C T Y D T S D P S Y G L F W
        Y K Q P S S G E M I F L I Y Q G S Y D Q Q N A T E G R Y S L N F Q K A R
        K S A N L V I S A S Q L G D S A M Y F C C A M R G D A G G T S Y G K L T
        F F G Q G T I L T V H P N
      - >-
        Q K E V E Q D P G P L S V P E G A I V S L N C T Y S N S A F Q Y F M W Y
        R Q Y S R K G P E L L M Y T Y S S G N K E D G R F T A Q V D K S S K Y I
        S L F I R D S Q P S D S A T Y L C C A M R V I G S D D K I I F F G K G T
        R L H I L P N
  - source_sentence: >-
      T Q L L E Q S P Q F L S I Q E G E N L T V Y C N S S S V F S S L Q W Y R Q
      E P G E G P V L L V T V V T G G E V K K L K R L T F Q F G D A R K D S S L
      H I T A A Q P G D T G L Y L C C A G V P Y N N N D M R F F G A G T R L T V
      K P N
    sentences:
      - >-
        T Q L L E Q S P Q F L S I Q E G E N L T V Y C N S S S V F S S L Q W Y R
        Q E P G E G P V L L V T V V T G G E V K K L K R L T F Q F G D A R K D S
        S L H I T A A Q P G D T G L Y L C C A G A A H P L N Y G G S Q G N L I F
        F G K G T K L S V K P N
      - >-
        G N S V T Q M E G P V T L S E E A F L T I N C T Y T A T G Y P S L F W Y
        V Q Y P G E G L Q L L L K A T K A D D K G S N K G F E A T Y R K E T T S
        F H L E K G S V Q V S D S A V Y F C C A F N D Y K L S F F G A G T T V T
        V R A N
      - >-
        D A K T T Q P P S M D C A E G R A A N L P C N H S T I S G N E Y V Y W Y
        R Q I H S Q G P Q Y I I H G L K N N E T N E M A S L I I T E D R K S S T
        L I L P H A T L R D T A V Y Y C C I V R A G G G G W S G G G A D G L T F
        F G K G T H L I I Q P Y
  - source_sentence: >-
      L A K T T Q P I S M D S Y E G Q E V N I T C S H N N I A T N D Y I T W Y Q
      Q F P S Q G P R F I I Q G Y K T K V T N E V A S L F I P A D R K S S T L S
      L P R V S L S D T A V Y Y C C L V G E G P S G G Y Q K V T F F G I G T K L
      Q V I P N
    sentences:
      - >-
        A Q K V T Q A Q T E I S V V E K E D V T L D C V Y E T R D T T Y Y L F W
        Y K Q P P S G E L V F L I R R N S F D E Q N E I S G R Y S W N F Q K S T
        S S F N F T I T A S Q V V D S A V Y F C C A L S D A Y N F N K F Y F F G
        S G T K L N V K P N
      - >-
        A Q R V T Q P E K L L S V F K G A P V E L K C N Y S Y S G S P E L F W Y
        V Q Y S R Q R L Q L L L R H I S R E S I K G F T A D L N K G E T S F H L
        K K P F A Q E E D S A M Y Y C A L R A R G S T L G R L Y F G R G T Q L T
        V W P D
      - >-
        Q K E V E Q D P G P L S V P E G A I V S L N C T Y S N S A F Q Y F M W Y
        R Q Y S R K G P E L L M Y T Y S S G N K E D G R F T A Q V D K S S K Y I
        S L F I R D S Q P S D S A T Y L C C A M R G Y Q K V T F F G I G T K L Q
        V I P N
  - source_sentence: >-
      A Q K V T Q A Q T E I S V V E K E D V T L D C V Y E T R D T T Y Y L F W Y
      K Q P P S G E L V F L I R R N S F D E Q N E I S G R Y S W N F Q K S T S S
      F N F T I T A S Q V V D S A V Y F C C A L L Y N N N D M R F F G A G T R L
      T V K P N
    sentences:
      - >-
        A Q K V T Q A Q T E I S V V E K E D V T L D C V Y E T R D T T Y Y L F W
        Y K Q P P S G E L V F L I R R N S F D E Q N E I S G R Y S W N F Q K S T
        S S F N F T I T A S Q V V D S A V Y F C C A L S E T P R G G G T S Y G K
        L T F F G Q G T I L T V H P N
      - >-
        Q K E V E Q N S G P L S V P E G A I A S L N C T Y S D R G S Q S F F W Y
        R Q Y S G K S P E L I M F I Y S N G D K E D G R F T A Q L N K A S Q Y V
        S L L I R D S Q P S D S A T Y L C C A V A D D K I I F F G K G T R L H I
        L P N
      - >-
        G Q S L E Q P S E V T A V E G A I V Q I N C T Y Q T S G F Y G L S W Y Q
        Q H D G G A P T F L S Y N A L D G L E E T G R F S S F L S R S D S Y G Y
        L L L Q E L Q M K D S A S Y F C A V S P Y G Q N F V F G P G T R L S V L
        P Y
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - pearson_cosine
  - spearman_cosine
model-index:
  - name: SentenceTransformer based on facebook/esm2_t6_8M_UR50D
    results:
      - task:
          type: semantic-similarity
          name: Semantic Similarity
        dataset:
          name: all dev
          type: all-dev
        metrics:
          - type: pearson_cosine
            value: 0.8253873350708476
            name: Pearson Cosine
          - type: spearman_cosine
            value: 0.8706098612115536
            name: Spearman Cosine
---

SentenceTransformer based on facebook/esm2_t6_8M_UR50D

This is a sentence-transformers model finetuned from facebook/esm2_t6_8M_UR50D. It maps sentences and paragraphs (here, space-separated amino-acid sequences such as the TCR sequences in the examples above) to a 320-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: facebook/esm2_t6_8M_UR50D
  • Maximum Sequence Length: 1026 tokens
  • Output Dimensionality: 320 dimensions
  • Similarity Function: Cosine Similarity
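
Both values can be read directly off the loaded model; a quick sanity-check sketch:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("HassanCS/TCRa_HLA_peptide_ESM")
print(model.max_seq_length)                      # 1026
print(model.get_sentence_embedding_dimension())  # 320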

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 1026, 'do_lower_case': False}) with Transformer model: EsmModel 
  (1): Pooling({'word_embedding_dimension': 320, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
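
The Pooling module performs mean pooling (pooling_mode_mean_tokens: True): the sequence embedding is the average of the token embeddings. A minimal sketch reproducing it by hand, using a shortened illustrative fragment as input:

import torch
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("HassanCS/TCRa_HLA_peptide_ESM")
seq = "A Q T V T Q S Q P E M S V Q E A E T V T L S C T Y D T S E S D"

# Per-token embeddings, padding stripped (special tokens still included)
token_embs = model.encode(seq, output_value="token_embeddings")
manual = token_embs.mean(dim=0)

pooled = torch.from_numpy(model.encode(seq))  # the module's own mean pooling
print(torch.allclose(manual.cpu(), pooled, atol=1e-5))  # True, up to float error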

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("HassanCS/TCRa_HLA_peptide_ESM")
# Run inference
sentences = [
    'A Q K V T Q A Q T E I S V V E K E D V T L D C V Y E T R D T T Y Y L F W Y K Q P P S G E L V F L I R R N S F D E Q N E I S G R Y S W N F Q K S T S S F N F T I T A S Q V V D S A V Y F C C A L L Y N N N D M R F F G A G T R L T V K P N',
    'A Q K V T Q A Q T E I S V V E K E D V T L D C V Y E T R D T T Y Y L F W Y K Q P P S G E L V F L I R R N S F D E Q N E I S G R Y S W N F Q K S T S S F N F T I T A S Q V V D S A V Y F C C A L S E T P R G G G T S Y G K L T F F G Q G T I L T V H P N',
    'Q K E V E Q N S G P L S V P E G A I A S L N C T Y S D R G S Q S F F W Y R Q Y S G K S P E L I M F I Y S N G D K E D G R F T A Q L N K A S Q Y V S L L I R D S Q P S D S A T Y L C C A V A D D K I I F F G K G T R L H I L P N',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 320]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
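
The similarity matrix can also be used to rank candidates against a query. Continuing the snippet above:

# Rank the remaining sequences against the first by cosine similarity
scores = model.similarity(embeddings[0:1], embeddings[1:])[0]
for idx in scores.argsort(descending=True):
    print(f"{scores[idx]:.4f}  {sentences[1:][int(idx)][:60]}...")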

Evaluation

Metrics

Semantic Similarity

Metric            Value
pearson_cosine    0.8254
spearman_cosine   0.8706
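
These scores come from a cosine-similarity evaluator over scored sequence pairs. A minimal sketch of how such metrics can be reproduced (the pairs and gold scores below are illustrative placeholders; the reported numbers use the full dev set):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator, SimilarityFunction

model = SentenceTransformer("HassanCS/TCRa_HLA_peptide_ESM")

evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["A Q T V T Q S Q P E M S", "L A K T T Q P I S M D S", "G Q S L E Q P S E V T A"],
    sentences2=["A Q S V A Q P E D Q V N", "A Q K I T Q T Q P G M F", "K N Q V E Q S P Q S L I"],
    scores=[0.1, 0.5, 0.9],  # gold similarities in [0, 1]
    main_similarity=SimilarityFunction.COSINE,
    name="all-dev",
)
print(evaluator(model))  # includes all-dev_pearson_cosine and all-dev_spearman_cosine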

Training Details

Training Dataset

Unnamed Dataset

  • Size: 753,444 training samples
  • Columns: sentence1, sentence2, and score
  • Approximate statistics based on the first 1000 samples:
              sentence1           sentence2            score
    type      string              string               float
    details   min: 108 tokens     min: 107 tokens      min: 0.0
              mean: 116.0 tokens  mean: 116.16 tokens  mean: 0.38
              max: 126 tokens     max: 126 tokens      max: 0.97
  • Samples:
      • sentence1: T Q L L E Q S P Q F L S I Q E G E N L T V Y C N S S S V F S S L Q W Y R Q E P G E G P V L L V T V V T G G E V K K L K R L T F Q F G D A R K D S S L H I T A A Q P G D T G L Y L C C A G A G G G S Q G N L I F F G K G T K L S V K P N
        sentence2: T Q L L E Q S P Q F L S I Q E G E N L T V Y C N S S S V F S S L Q W Y R Q E P G E G P V L L V T V V T G G E V K K L K R L T F Q F G D A R K D S S L H I T A A Q P G D T G L Y L C C A G G N G G S Q G N L I F F G K G T K L S V K P N
        score: 0.8347107438016529
      • sentence1: A Q T V T Q S Q P E M S V Q E A E T V T L S C T Y D T S E N N Y Y L F W Y K Q P P S R Q M I L V I R Q E A Y K Q Q N A T E N R F S V N F Q K A A K S F S L K I S D S Q L G D T A M Y F C A F A E Y G N K L V F G A G T I L R V K S Y
        sentence2: A Q T V T Q S Q P E M S V Q E A E T V T L S C T Y D T S E S D Y Y L F W Y K Q P P S R Q M I L V I R Q E A Y K Q Q N A T E N R F S V N F Q K A A K S F S L K I S D S Q L G D A A M Y F C A L F S G S R L T F G E G T Q L T V N P D
        score: 0.0
      • sentence1: A Q K V T Q A Q T E I S V V E K E D V T L D C V Y E T R D T T Y Y L F W Y K Q P P S G E L V F L I R R N S F D E Q N E I S G R Y S W N F Q K S T S S F N F T I T A S Q V V D S A V Y F C C A L L I F S G G Y N K L I F F G A G T R L A V H P Y
        sentence2: A Q K V T Q A Q T E I S V V E K E D V T L D C V Y E T R D T T Y Y L F W Y K Q P P S G E L V F L I R R N S F D E Q N E I S G R Y S W N F Q K S T S S F N F T I T A S Q V V D S A V Y F C C A L S E A G S G Y S T L T F F G K G T M L L V S P D
        score: 0.4008264462809917
  • Loss: CoSENTLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "pairwise_cos_sim"
    }
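
In code, this corresponds to the following construction; a minimal sketch (util.pairwise_cos_sim is also the default similarity_fct, spelled out here to mirror the parameters above):

from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("facebook/esm2_t6_8M_UR50D")
loss = losses.CoSENTLoss(model=model, scale=20.0, similarity_fct=util.pairwise_cos_sim)

CoSENTLoss consumes (sentence1, sentence2) pairs with a float similarity label, matching the columns described above.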
    

Evaluation Dataset

Unnamed Dataset

  • Size: 83,716 evaluation samples
  • Columns: sentence1, sentence2, and score
  • Approximate statistics based on the first 1000 samples:
              sentence1            sentence2            score
    type      string               string               float
    details   min: 106 tokens      min: 109 tokens      min: 0.0
              mean: 116.08 tokens  mean: 116.05 tokens  mean: 0.39
              max: 126 tokens      max: 125 tokens      max: 0.97
  • Samples:
      • sentence1: G E N V E Q H P S T L S V Q E G D S A V I K C T Y S D S A S N Y F P W Y K Q E L G K G P Q L I I D I R S N V G E K K D Q R I A V T L N K T A K H F S L H I T E T Q P E D S A V Y F C A A S M N N Y G Q N F V F G P G T R L S V L P Y
        sentence2: G E D V E Q S L F L S V R E G D S S V I N C T Y T D S S S T Y L Y W Y K Q E P G A G L Q L L T Y I F S N M D M K Q D Q R L T V L L N K K D K H L S L R I A D T Q T G D S A I Y F C A E R A G A N N L F F G T G T R L T V I P Y
        score: 0.09297520661157023
      • sentence1: A Q T V T Q S Q P E M S V Q E A E T V T L S C T Y D T S E N N Y Y L F W Y K Q P P S R Q M I L V I R Q E A Y K Q Q N A T E N R F S V N F Q K A A K S F S L K I S D S Q L G D T A M Y F C C A S H M N N A R L M F F G D G T Q L V V K P N
        sentence2: A Q T V T Q S Q P E M S V Q E A E T V T L S C T Y D T S E N N Y Y L F W Y K Q P P S R Q M I L V I R Q E A Y K Q Q N A T E N R F S V N F Q K A A K S F S L K I S D S Q L G D T A M Y F C C S S G G G A D G L T F F G K G T H L I I Q P Y
        score: 0.00826446280991735
      • sentence1: G Q S L E Q P S E V T A V E G A I V Q I N C T Y Q T S G F Y G L S W Y Q Q H D G G A P T F L S Y N A L D G L E E T G R F S S F L S R S D S Y G Y L L L Q E L Q M K D S A S Y F C C A L A G G G N K L T F F G T G T Q L K V E L N
        sentence2: K N Q V E Q S P Q S L I I L E G K N C T L Q C N Y T V S P F S N L R W Y K Q D T G R G P V S L T I M T F S E N T K S N G R Y T A T L D A D T K Q S S L H I T A S Q L S D S A S Y I C C V V S S Y S S A S K I I F F G S G T R L S I R P N
        score: 0.9690082644628099
  • Loss: CoSENTLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "pairwise_cos_sim"
    }
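
Both datasets therefore share the same layout. A hedged sketch of that layout (the sequences and scores below are illustrative placeholders, not rows from the actual dataset):

from datasets import Dataset

eval_dataset = Dataset.from_dict({
    "sentence1": ["G E N V E Q H P S T L S", "A Q T V T Q S Q P E M S"],
    "sentence2": ["G E D V E Q S L F L S V", "A Q T V T Q S Q P E M S"],
    "score": [0.09, 0.95],
})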
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • learning_rate: 0.001
  • weight_decay: 0.0001
  • num_train_epochs: 2
  • fp16: True
  • load_best_model_at_end: True
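
A sketch of how these non-default values map onto SentenceTransformerTrainingArguments (output_dir is an illustrative assumption; everything else mirrors the list above):

from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="models/TCRa_HLA_peptide_ESM",  # assumed path, not from the original run
    eval_strategy="steps",
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    learning_rate=1e-3,
    weight_decay=1e-4,
    num_train_epochs=2,
    fp16=True,
    load_best_model_at_end=True,
)

These arguments are then passed to a SentenceTransformerTrainer together with the model, the train/eval datasets, the loss, and the evaluator.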

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 128
  • per_device_eval_batch_size: 128
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 0.001
  • weight_decay: 0.0001
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 2
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: True
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch    Step     Training Loss    Validation Loss    all-dev_spearman_cosine
0.3397   2000     8.8932           8.8505             0.5332
0.6795   4000     8.8096           8.7699             0.6565
1.0192   6000     8.7188           8.6631             0.7476
1.3589   8000     8.592            8.5352             0.8242
1.6987   10000    8.4614           8.4169             0.8706 *
  • The row marked * denotes the saved checkpoint (its spearman matches the reported 0.8706).

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.3.1
  • Transformers: 4.47.0
  • PyTorch: 2.5.1+cu121
  • Accelerate: 1.2.1
  • Datasets: 3.3.1
  • Tokenizers: 0.21.0

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

CoSENTLoss

@online{kexuefm-8847,
    title={CoSENT: A more efficient sentence vector scheme than Sentence-BERT},
    author={Su Jianlin},
    year={2022},
    month={Jan},
    url={https://kexue.fm/archives/8847},
}