tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:184
  - loss:MultipleNegativesRankingLoss
base_model: sentence-transformers/all-MiniLM-L12-v2
widget:
  - source_sentence: Onde tirar dúvidas sobre o SIASS?
    sentences:
      - Envie um e-mail para [email protected]
      - Envie um e-mail para [email protected].
      - >-
        Envie um e-mail para [email protected] solicitando a
        alteração dos dados bancários.
  - source_sentence: Como acionar a manutenção de um bem em garantia?
    sentences:
      - >-
        Preencha o formulário em https://administrativo.ufes.br e envie com 15
        dias de antecedência.
      - >-
        Acesse
        https://compras.ufes.br/inclusao-de-produto-no-catalogo-de-materiais.
      - Entre em contato com o fornecedor.
  - source_sentence: Computador não abre sistema operacional
    sentences:
      - >-
        Faça login no gmail.com com o usuário único @ufes.br e siga as
        instruções em https://senha.ufes.br/site/ativaGmail.
      - Clique no link https://chat.google.com/room/AAAAHqHLj6c?cls=4.
      - >-
        Se o sistema operacional não inicia, pode ser um problema no disco ou
        sistema. Contate o suporte de TI para suporte e diagnóstico.
  - source_sentence: Como acessar os dados acadêmicos e administrativos?
    sentences:
      - >-
        Siga as orientações disponíveis em
        https://progep.ufes.br/exames-periodicos.
      - Acesse o Portal Administrativo em https://administrativo.ufes.br.
      - Acesse https://senha.ufes.br/site/recuperaCredenciais.
  - source_sentence: >-
      Como cadastrar ou alterar dados no Sistema Integrado de Ensino (SIE),
      Protocolo, Portal Administrativo, Acadêmico e Reservas?
    sentences:
      - >-
        Siga os procedimentos em
        https://portaladministrativo.ufes.br/utilizacao-de-registro-de-precos-existente.
      - >-
        Acesse nosso chat para falar com um atendente humano:
        https://chat.google.com/room/AAAAHqHLj6c?cls=7
      - >-
        Acesse
        https://dtin.saomateus.ufes.br/cadastros-e-habilitacao-aos-sistemas-institucionais
        e preencha o formulário.
datasets:
  - matunderstars/ufes-qa-data
pipeline_tag: sentence-similarity
library_name: sentence-transformers

SentenceTransformer based on sentence-transformers/all-MiniLM-L12-v2

This is a sentence-transformers model fine-tuned from sentence-transformers/all-MiniLM-L12-v2 on the train and test splits of the matunderstars/ufes-qa-data dataset. It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

Model Sources

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
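The three modules can be illustrated with a small NumPy sketch of the pooling and normalization steps. The token embeddings below are random stand-ins for the output of module (0), the BertModel transformer; only the mean-pooling and L2-normalization math is real here:

```python
import numpy as np

# Random stand-in for the (num_tokens, 384) token embeddings that
# module (0), the BertModel transformer, would produce for one sentence.
rng = np.random.default_rng(0)
token_embeddings = rng.normal(size=(10, 384))
attention_mask = np.ones(10)  # 1 = real token, 0 = padding

# Module (1): mean pooling over non-padded tokens
# (pooling_mode_mean_tokens: True).
masked = token_embeddings * attention_mask[:, None]
pooled = masked.sum(axis=0) / attention_mask.sum()

# Module (2): L2 normalization, so cosine similarity becomes a dot product.
embedding = pooled / np.linalg.norm(pooled)

print(embedding.shape)  # (384,)
```

The final Normalize() stage is why downstream similarity computations can use plain dot products.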

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("matunderstars/ufes-qa-embedding-finetuned")
# Run inference
sentences = [
    'Como cadastrar ou alterar dados no Sistema Integrado de Ensino (SIE), Protocolo, Portal Administrativo, Acadêmico e Reservas?',
    'Acesse https://dtin.saomateus.ufes.br/cadastros-e-habilitacao-aos-sistemas-institucionais e preencha o formulário.',
    'Acesse nosso chat para falar com um atendente humano: https://chat.google.com/room/AAAAHqHLj6c?cls=7',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
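Because the model emits unit-length vectors, semantic search over a corpus reduces to a matrix product over the encoded vectors. A minimal sketch with random stand-in embeddings — in practice these would come from model.encode on real questions and answers:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-ins for model.encode(corpus): five unit-norm 384-d vectors.
corpus_embeddings = rng.normal(size=(5, 384))
corpus_embeddings /= np.linalg.norm(corpus_embeddings, axis=1, keepdims=True)

# Stand-in for model.encode(query): a lightly perturbed copy of entry 2.
query_embedding = corpus_embeddings[2] + 0.01 * rng.normal(size=384)
query_embedding /= np.linalg.norm(query_embedding)

# For unit-norm vectors, cosine similarity is just a dot product.
scores = corpus_embeddings @ query_embedding
best = int(np.argmax(scores))
print(best)  # 2: the perturbed entry is the nearest neighbor
```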

Training Details

Training Datasets

train

  • Dataset: train at 9021242
  • Size: 92 training samples
  • Columns: question and answer
  • Approximate statistics based on the first 92 samples:
    • question: string; min: 7 tokens, mean: 17.88 tokens, max: 45 tokens
    • answer: string; min: 14 tokens, mean: 46.03 tokens, max: 128 tokens
  • Samples:
    • question: Como registrar atestado de saúde?
      answer: Realize o registro pelo aplicativo SouGov (Menu > Atestado de Saúde > Incluir > Selecionar arquivo no dispositivo) ou pelo Portal Sigepe em Gestão de Pessoas > Minha Saúde > Atestado Médico.
    • question: Como fazer uma doação ou empréstimo de um bem patrimonial?
      answer: Modelos estão em https://drm.saomateus.ufes.br → Patrimônio → Formulários e Modelos.
    • question: Onde encontrar informações sobre as salas de aula e a configuração de equipamentos?
      answer: Consulte o manual em https://dtin.saomateus.ufes.br/tecnologias-educacionais.
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
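MultipleNegativesRankingLoss treats every other answer in the batch as a negative for a given question: it is a cross-entropy over scaled cosine similarities, with each question's correct answer on the diagonal of the batch similarity matrix. A toy NumPy sketch of that computation (an illustration of the math, not the library implementation):

```python
import numpy as np

def mnr_loss(questions, answers, scale=20.0):
    """Cross-entropy over scaled cosine similarities; the other answers
    in the batch serve as in-batch negatives for each question."""
    q = questions / np.linalg.norm(questions, axis=1, keepdims=True)
    a = answers / np.linalg.norm(answers, axis=1, keepdims=True)
    sims = scale * (q @ a.T)                 # (batch, batch) cosine similarities
    sims -= sims.max(axis=1, keepdims=True)  # numerical stabilization
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))      # correct answers sit on the diagonal

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 384))
print(mnr_loss(q, q.copy()))  # ~0: each question matches its own copy exactly
```

The scale of 20.0 sharpens the softmax, which is why the training loss above drops close to zero once every question ranks its own answer first.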
    

test

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 150
  • warmup_ratio: 0.1
  • fp16: True
  • batch_sampler: no_duplicates
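Under Sentence Transformers v3 these non-default values would be passed via SentenceTransformerTrainingArguments roughly as follows. This is a config sketch, not the author's actual training script, and output_dir is a hypothetical path:

```python
from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import BatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="ufes-qa-embedding-finetuned",  # hypothetical output path
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=150,
    warmup_ratio=0.1,
    fp16=True,  # mixed-precision training
    # no_duplicates avoids repeating an answer inside a batch, which would
    # create false negatives for MultipleNegativesRankingLoss.
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)
```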

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1.0
  • num_train_epochs: 150
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.1
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: True
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: no_duplicates
  • multi_dataset_batch_sampler: proportional

Training Logs

Epoch Step Training Loss
71.4286 500 0.1147
142.8571 1000 0.0001

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.3.1
  • Transformers: 4.46.2
  • PyTorch: 2.5.1+cu121
  • Accelerate: 1.1.1
  • Datasets: 3.1.0
  • Tokenizers: 0.20.3

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}