splade-distilbert-base-uncased trained on MS MARCO triplets
This is a SPLADE Sparse Encoder model finetuned from distilbert/distilbert-base-uncased on the msmarco dataset using the sentence-transformers library. It maps sentences & paragraphs to a 30522-dimensional sparse vector space and can be used for semantic search and sparse retrieval.
Model Details
Model Description
- Model Type: SPLADE Sparse Encoder
- Base model: distilbert/distilbert-base-uncased
- Maximum Sequence Length: 256 tokens
- Output Dimensionality: 30522 dimensions
- Similarity Function: Dot Product
- Training Dataset: msmarco
- Language: en
- License: apache-2.0
Model Sources
- Documentation: Sentence Transformers Documentation
- Documentation: Sparse Encoder Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sparse Encoders on Hugging Face
Full Model Architecture
SparseEncoder(
(0): MLMTransformer({'max_seq_length': 256, 'do_lower_case': False}) with MLMTransformer model: DistilBertForMaskedLM
(1): SpladePooling({'pooling_strategy': 'max', 'activation_function': 'relu', 'word_embedding_dimension': 30522})
)
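These two modules implement the standard SPLADE recipe: the masked-language-model head scores every input token against the full 30522-token vocabulary, and SpladePooling collapses those scores into one sparse vector via a log-saturated ReLU followed by max pooling over the sequence. Below is a minimal, illustrative sketch of that pooling step in PyTorch; it is not the library's internal implementation:
import torch

def splade_pooling(mlm_logits: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # mlm_logits: (batch, seq_len, 30522) from DistilBertForMaskedLM
    # attention_mask: (batch, seq_len) with 1 for real tokens, 0 for padding
    scores = torch.log1p(torch.relu(mlm_logits))    # log-saturation keeps term weights small and sparse
    scores = scores * attention_mask.unsqueeze(-1)  # padding positions cannot win the max
    return scores.max(dim=1).values                 # (batch, 30522) term-weight vector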
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SparseEncoder
# Download from the 🤗 Hub
model = SparseEncoder("arthurbresnu/splade-distilbert-base-uncased-msmarco-mrl")
# Run inference
queries = [
"meaning of the name bernard",
]
documents = [
'English Meaning: The name Bernard is an English baby name. In English the meaning of the name Bernard is: Strong as a bear. See also Bjorn. American Meaning: The name Bernard is an American baby name. In American the meaning of the name Bernard is: Strong as a bear.',
'To the Citizens of St. Bernard We chose as our motto a simple but profound declaration: “Welcome to your office.” Those words remind us that we are no more than the caretakers of the office of Clerk of Court for the Parish of St. Bernard.',
"Get Your Prior Years Tax Information from the IRS. IRS Tax Tip 2012-18, January 27, 2012. Sometimes taxpayers need a copy of an old tax return, but can't find or don't have their own records. There are three easy and convenient options for getting tax return transcripts and tax account transcripts from the IRS: on the web, by phone or by mail.",
]
query_embeddings = model.encode_query(queries)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# [1, 30522] [3, 30522]
# Get the similarity scores for the embeddings
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[18.6221, 10.0646, 0.0000]])
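Because every dimension corresponds to a vocabulary token, the embeddings are interpretable. Recent sentence-transformers releases expose a decode helper on SparseEncoder that maps nonzero dimensions back to tokens; the output shown here is illustrative, not an actual model output:
# Map the strongest dimensions of the query embedding back to vocabulary tokens
decoded_query = model.decode(query_embeddings[0], top_k=10)
print(decoded_query)
# Illustrative output: [('bernard', 2.9), ('name', 2.4), ('meaning', 2.1), ...]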
Evaluation
Metrics
Sparse Information Retrieval
- Datasets: NanoMSMARCO, NanoNFCorpus, NanoNQ, NanoClimateFEVER, NanoDBPedia, NanoFEVER, NanoFiQA2018, NanoHotpotQA, NanoQuoraRetrieval, NanoSCIDOCS, NanoArguAna, NanoSciFact, and NanoTouche2020
- Evaluated with: SparseInformationRetrievalEvaluator
Metric | NanoMSMARCO | NanoNFCorpus | NanoNQ | NanoClimateFEVER | NanoDBPedia | NanoFEVER | NanoFiQA2018 | NanoHotpotQA | NanoQuoraRetrieval | NanoSCIDOCS | NanoArguAna | NanoSciFact | NanoTouche2020 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
dot_accuracy@1 | 0.44 | 0.36 | 0.48 | 0.24 | 0.7 | 0.74 | 0.34 | 0.88 | 0.84 | 0.42 | 0.1 | 0.6 | 0.6735 |
dot_accuracy@3 | 0.6 | 0.46 | 0.68 | 0.42 | 0.82 | 0.9 | 0.5 | 0.92 | 0.92 | 0.6 | 0.34 | 0.72 | 0.9592 |
dot_accuracy@5 | 0.74 | 0.54 | 0.74 | 0.56 | 0.88 | 0.92 | 0.58 | 0.94 | 0.94 | 0.64 | 0.46 | 0.72 | 0.9796 |
dot_accuracy@10 | 0.84 | 0.68 | 0.76 | 0.64 | 0.92 | 0.98 | 0.68 | 0.96 | 0.96 | 0.76 | 0.66 | 0.78 | 1.0 |
dot_precision@1 | 0.44 | 0.36 | 0.48 | 0.24 | 0.7 | 0.74 | 0.34 | 0.88 | 0.84 | 0.42 | 0.1 | 0.6 | 0.6735 |
dot_precision@3 | 0.2 | 0.34 | 0.2267 | 0.1467 | 0.6133 | 0.3133 | 0.2133 | 0.4867 | 0.3267 | 0.2867 | 0.1133 | 0.2467 | 0.6667 |
dot_precision@5 | 0.148 | 0.328 | 0.152 | 0.12 | 0.58 | 0.196 | 0.176 | 0.324 | 0.22 | 0.22 | 0.092 | 0.164 | 0.5918 |
dot_precision@10 | 0.084 | 0.27 | 0.08 | 0.074 | 0.52 | 0.104 | 0.112 | 0.17 | 0.12 | 0.152 | 0.066 | 0.088 | 0.4837 |
dot_recall@1 | 0.44 | 0.0208 | 0.47 | 0.1183 | 0.0531 | 0.7067 | 0.1771 | 0.44 | 0.7873 | 0.086 | 0.1 | 0.565 | 0.0471 |
dot_recall@3 | 0.6 | 0.0706 | 0.64 | 0.2117 | 0.1639 | 0.8667 | 0.307 | 0.73 | 0.854 | 0.1767 | 0.34 | 0.68 | 0.1329 |
dot_recall@5 | 0.74 | 0.0906 | 0.7 | 0.2623 | 0.2366 | 0.8933 | 0.3937 | 0.81 | 0.898 | 0.2247 | 0.46 | 0.71 | 0.2016 |
dot_recall@10 | 0.84 | 0.144 | 0.73 | 0.2997 | 0.3544 | 0.9433 | 0.4867 | 0.85 | 0.9313 | 0.3117 | 0.66 | 0.77 | 0.3206 |
dot_ndcg@10 | 0.6242 | 0.3196 | 0.6151 | 0.2571 | 0.6138 | 0.8368 | 0.3902 | 0.8078 | 0.8841 | 0.3133 | 0.3562 | 0.6798 | 0.5525 |
dot_mrr@10 | 0.5571 | 0.4414 | 0.5865 | 0.3586 | 0.7719 | 0.817 | 0.4439 | 0.9042 | 0.8806 | 0.5258 | 0.262 | 0.6625 | 0.8141 |
dot_map@100 | 0.5639 | 0.1357 | 0.5841 | 0.2046 | 0.4605 | 0.7994 | 0.3267 | 0.7447 | 0.8626 | 0.2402 | 0.2741 | 0.6533 | 0.4012 |
query_active_dims | 20.5 | 18.3 | 22.2 | 51.48 | 20.52 | 44.84 | 18.92 | 43.88 | 18.76 | 38.6 | 121.02 | 57.42 | 18.1224 |
query_sparsity_ratio | 0.9993 | 0.9994 | 0.9993 | 0.9983 | 0.9993 | 0.9985 | 0.9994 | 0.9986 | 0.9994 | 0.9987 | 0.996 | 0.9981 | 0.9994 |
corpus_active_dims | 81.8767 | 156.0484 | 103.7253 | 134.299 | 111.0784 | 154.0977 | 75.4999 | 120.7884 | 20.3819 | 120.2808 | 107.1684 | 158.0332 | 84.7328 |
corpus_sparsity_ratio | 0.9973 | 0.9949 | 0.9966 | 0.9956 | 0.9964 | 0.995 | 0.9975 | 0.996 | 0.9993 | 0.9961 | 0.9965 | 0.9948 | 0.9972 |
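The sparsity ratios in the last four rows follow directly from the active dimension counts: sparsity_ratio = 1 - active_dims / 30522. For example, for NanoMSMARCO:
print(1 - 20.5 / 30522)     # 0.99932..., the query_sparsity_ratio above
print(1 - 81.8767 / 30522)  # 0.99731..., the corpus_sparsity_ratio above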
Sparse Nano BEIR
- Dataset: NanoBEIR_mean
- Evaluated with: SparseNanoBEIREvaluator with these parameters: {"dataset_names": ["msmarco", "nfcorpus", "nq"]}
Metric | Value |
---|---|
dot_accuracy@1 | 0.44 |
dot_accuracy@3 | 0.62 |
dot_accuracy@5 | 0.66 |
dot_accuracy@10 | 0.7467 |
dot_precision@1 | 0.44 |
dot_precision@3 | 0.2711 |
dot_precision@5 | 0.2067 |
dot_precision@10 | 0.1447 |
dot_recall@1 | 0.3078 |
dot_recall@3 | 0.4617 |
dot_recall@5 | 0.4975 |
dot_recall@10 | 0.5604 |
dot_ndcg@10 | 0.5189 |
dot_mrr@10 | 0.5385 |
dot_map@100 | 0.4255 |
query_active_dims | 22.4 |
query_sparsity_ratio | 0.9993 |
corpus_active_dims | 112.0335 |
corpus_sparsity_ratio | 0.9963 |
Sparse Nano BEIR
- Dataset: NanoBEIR_mean
- Evaluated with: SparseNanoBEIREvaluator with these parameters: {"dataset_names": ["climatefever", "dbpedia", "fever", "fiqa2018", "hotpotqa", "msmarco", "nfcorpus", "nq", "quoraretrieval", "scidocs", "arguana", "scifact", "touche2020"]}
Metric | Value |
---|---|
dot_accuracy@1 | 0.5241 |
dot_accuracy@3 | 0.6799 |
dot_accuracy@5 | 0.7415 |
dot_accuracy@10 | 0.8169 |
dot_precision@1 | 0.5241 |
dot_precision@3 | 0.3215 |
dot_precision@5 | 0.2548 |
dot_precision@10 | 0.1787 |
dot_recall@1 | 0.3086 |
dot_recall@3 | 0.4441 |
dot_recall@5 | 0.5093 |
dot_recall@10 | 0.5878 |
dot_ndcg@10 | 0.5577 |
dot_mrr@10 | 0.6174 |
dot_map@100 | 0.4808 |
query_active_dims | 38.074 |
query_sparsity_ratio | 0.9988 |
corpus_active_dims | 105.0515 |
corpus_sparsity_ratio | 0.9966 |
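Both NanoBEIR tables can be reproduced with the evaluator named above. A minimal sketch, assuming the import path from the current sparse-encoder documentation:
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.evaluation import SparseNanoBEIREvaluator

model = SparseEncoder("arthurbresnu/splade-distilbert-base-uncased-msmarco-mrl")
# Pass the full thirteen-dataset list above to reproduce the second table
evaluator = SparseNanoBEIREvaluator(dataset_names=["msmarco", "nfcorpus", "nq"])
results = evaluator(model)
print(results[evaluator.primary_metric])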
Training Details
Training Dataset
msmarco
- Dataset: msmarco at 9e329ed
- Size: 90,000 training samples
- Columns: query, positive, and negative
- Approximate statistics based on the first 1000 samples:
 | query | positive | negative |
---|---|---|---|
type | string | string | string |
details | min: 4 tokens, mean: 9.02 tokens, max: 29 tokens | min: 16 tokens, mean: 79.88 tokens, max: 203 tokens | min: 20 tokens, mean: 77.8 tokens, max: 201 tokens |
- Samples:
query | positive | negative |
---|---|---|
yosemite temperature in september | Here are the average temp in Yosemite Valley (where CV is located) by month: www.nps.gov/yose/planyourvisit/climate.htm. Also beginning of September is usually still quite warm. Nights can have a bit of a chill, but nothing a couple of blankets can't handle. | Guide to Switzerland weather in September. The average maximum daytime temperature in Switzerland in September is a comfortable 18°C (64°F). The average night-time temperature is usually a cool 9°C (48°F). There are usually 6 hours of bright sunshine each day, which represents 45% of the 13 hours of daylight. |
what is genus | Intermediate minor rankings are not shown. A genus (/ˈdʒiːnəs/, pl. genera) is a taxonomic rank used in the biological classification of living and fossil organisms in biology. In the hierarchy of biological classification, genus comes above species and below family. In binomial nomenclature, the genus name forms the first part of the binomial species name for each species within the genus. The composition of a genus is determined by a taxonomist. | The genus is the first part of a scientific name. Note that the genus is always capitalised. An example: Lemur catta is the scientific name of the Ringtailed lemur and Lemur … is the genus. Another example: Sphyrna zygaena is the scientific name of one species of Hammerhead shark and Sphyrna is the genus. name used all around the world to classify a living organism. It is composed of a genus and species name. A sceintific name can also be considered for non living things, the … se are usually called scientific jargon, or very simply 'proper names for the things around you'. 4 people found this useful. |
what did johannes kepler discover about the motion of the planets? | Johannes Kepler devised his three laws of motion from his observations of planets that are fundamental to our understanding of orbital motions. | Little Street, Johannes Vermeer, c. 1658. New stop on Delft tourist trail after Vermeer's Little Street identified. Few artists have left such a deep imprint on their birthplace as Johannes Vermeer on Delft. In the summer, tour parties weave through the Dutch town's cobbled streets ticking off Vermeer landmarks. |
- Loss: SpladeLoss with these parameters: {"loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')", "lambda_corpus": 0.001, "lambda_query": 5e-05}
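In code, this corresponds to wrapping the ranking loss in SpladeLoss, which adds FLOPS-style sparsity regularization on the query and document representations. A sketch using the parameter names reported above; newer releases may rename the lambda arguments:
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder.losses import SpladeLoss, SparseMultipleNegativesRankingLoss

model = SparseEncoder("distilbert/distilbert-base-uncased")
loss = SpladeLoss(
    model=model,
    loss=SparseMultipleNegativesRankingLoss(model=model, scale=1.0),
    lambda_corpus=0.001,  # weight of the regularizer on document vectors
    lambda_query=5e-05,   # weight of the regularizer on query vectors
)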
Evaluation Dataset
msmarco
- Dataset: msmarco at 9e329ed
- Size: 10,000 evaluation samples
- Columns: query, positive, and negative
- Approximate statistics based on the first 1000 samples:
 | query | positive | negative |
---|---|---|---|
type | string | string | string |
details | min: 4 tokens, mean: 9.16 tokens, max: 26 tokens | min: 18 tokens, mean: 79.89 tokens, max: 256 tokens | min: 15 tokens, mean: 76.95 tokens, max: 220 tokens |
- Samples:
query | positive | negative |
---|---|---|
scarehouse cast | The Scarehouse. The Scarehouse is a 2014 Canadian horror film directed by Gavin Michael Booth. It stars Sarah Booth and Kimberly-Sue Murray as two women who seek revenge against their former sorority. | Nathalie Emmanuel joined the TV series as a recurring cast member in Season 3, and continued as a recurring cast member into Season 4. Emmanuel was later promoted to a starring cast member for seasons 5 and 6. |
population of bellemont arizona | The 2016 Bellemont (zip 86015), Arizona, population is 300. There are 55 people per square mile (population density). The median age is 29.9. The US median is 37.4. 38.19% of people in Bellemont (zip 86015), Arizona, are married. | • Arizona: A 2010 University of Arizona report estimates that 40% of the state's kissing bugs carry a parasite strain related to the Chagas disease but rarely transmit the disease to humans. The Arizona Department of Health Services reported one Chagas disease-related death in 2013, reports The Arizona Republic. |
does air transat check bag size | • Weight must be 10kg (22 lb) in Economy class and in Option Plus and 15 kg (33lb) in Club Class. Checked Baggage Air Transat allows for multiple pieces, as long as the combined weight does not exceed weight limitations. • Length + width + height must not exceed 158cm (62 in). | Bag-valve masks come in different sizes to fit infants, children, and adults. The face mask size may be independent of the bag size; for example, a single pediatric-sized bag might be used with different masks for multiple face sizes, or a pediatric mask might be used with an adult bag for patients with small faces. |
- Loss: SpladeLoss with these parameters: {"loss": "SparseMultipleNegativesRankingLoss(scale=1.0, similarity_fct='dot_score')", "lambda_corpus": 0.001, "lambda_query": 5e-05}
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- learning_rate: 2e-05
- num_train_epochs: 1
- warmup_ratio: 0.1
- bf16: True
- load_best_model_at_end: True
- batch_sampler: no_duplicates
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 16
- per_device_eval_batch_size: 16
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 2e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: True
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- tp_size: 0
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional
- router_mapping: {}
- learning_rate_mapping: {}
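Taken together, the non-default values above correspond to a training script along these lines. This is a minimal sketch: the Hub dataset ID and split handling are assumptions, since the card only names "msmarco" with query/positive/negative columns and a 90,000/10,000 train/eval split:
from datasets import load_dataset
from sentence_transformers import SparseEncoder
from sentence_transformers.sparse_encoder import SparseEncoderTrainer, SparseEncoderTrainingArguments
from sentence_transformers.sparse_encoder.losses import SpladeLoss, SparseMultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SparseEncoder("distilbert/distilbert-base-uncased")

# Hypothetical dataset ID; adjust to the actual triplet source
dataset = load_dataset("sentence-transformers/msmarco", split="train")
train_dataset = dataset.select(range(90_000))
eval_dataset = dataset.select(range(90_000, 100_000))

loss = SpladeLoss(
    model=model,
    loss=SparseMultipleNegativesRankingLoss(model=model, scale=1.0),
    lambda_corpus=0.001,
    lambda_query=5e-05,
)

args = SparseEncoderTrainingArguments(
    output_dir="splade-distilbert-base-uncased-msmarco-mrl",
    num_train_epochs=1,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    bf16=True,
    eval_strategy="steps",
    eval_steps=500,  # matches the validation cadence in the logs below
    load_best_model_at_end=True,
    batch_sampler=BatchSamplers.NO_DUPLICATES,
)

trainer = SparseEncoderTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()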
Training Logs
Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_dot_ndcg@10 | NanoNFCorpus_dot_ndcg@10 | NanoNQ_dot_ndcg@10 | NanoBEIR_mean_dot_ndcg@10 | NanoClimateFEVER_dot_ndcg@10 | NanoDBPedia_dot_ndcg@10 | NanoFEVER_dot_ndcg@10 | NanoFiQA2018_dot_ndcg@10 | NanoHotpotQA_dot_ndcg@10 | NanoQuoraRetrieval_dot_ndcg@10 | NanoSCIDOCS_dot_ndcg@10 | NanoArguAna_dot_ndcg@10 | NanoSciFact_dot_ndcg@10 | NanoTouche2020_dot_ndcg@10 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0.0178 | 100 | 199.0423 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.0356 | 200 | 11.3558 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.0533 | 300 | 0.9845 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.0711 | 400 | 0.4726 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.0889 | 500 | 0.2639 | 0.2407 | 0.5514 | 0.3061 | 0.5649 | 0.4741 | - | - | - | - | - | - | - | - | - | - |
0.1067 | 600 | 0.2931 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.1244 | 700 | 0.2301 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.1422 | 800 | 0.2168 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.16 | 900 | 0.1741 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.1778 | 1000 | 0.1852 | 0.1878 | 0.5868 | 0.2975 | 0.5648 | 0.4830 | - | - | - | - | - | - | - | - | - | - |
0.1956 | 1100 | 0.1684 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.2133 | 1200 | 0.1629 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.2311 | 1300 | 0.1736 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.2489 | 1400 | 0.1813 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.2667 | 1500 | 0.1826 | 0.1382 | 0.5941 | 0.3251 | 0.5911 | 0.5035 | - | - | - | - | - | - | - | - | - | - |
0.2844 | 1600 | 0.177 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.3022 | 1700 | 0.1568 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.32 | 1800 | 0.1707 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.3378 | 1900 | 0.1554 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.3556 | 2000 | 0.1643 | 0.1553 | 0.6157 | 0.2997 | 0.5807 | 0.4987 | - | - | - | - | - | - | - | - | - | - |
0.3733 | 2100 | 0.1564 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.3911 | 2200 | 0.1334 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.4089 | 2300 | 0.1349 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.4267 | 2400 | 0.1228 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.4444 | 2500 | 0.1473 | 0.1239 | 0.6242 | 0.3196 | 0.6151 | 0.5196 | - | - | - | - | - | - | - | - | - | - |
0.4622 | 2600 | 0.1506 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.48 | 2700 | 0.1436 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.4978 | 2800 | 0.1471 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.5156 | 2900 | 0.1378 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.5333 | 3000 | 0.1248 | 0.1328 | 0.6077 | 0.3073 | 0.6022 | 0.5057 | - | - | - | - | - | - | - | - | - | - |
0.5511 | 3100 | 0.1672 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.5689 | 3200 | 0.1301 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.5867 | 3300 | 0.1325 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.6044 | 3400 | 0.1335 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.6222 | 3500 | 0.122 | 0.1163 | 0.6081 | 0.3302 | 0.6190 | 0.5191 | - | - | - | - | - | - | - | - | - | - |
0.64 | 3600 | 0.1369 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.6578 | 3700 | 0.1651 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.6756 | 3800 | 0.1243 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.6933 | 3900 | 0.1122 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.7111 | 4000 | 0.1308 | 0.1307 | 0.6013 | 0.3232 | 0.5981 | 0.5075 | - | - | - | - | - | - | - | - | - | - |
0.7289 | 4100 | 0.1708 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.7467 | 4200 | 0.1143 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.7644 | 4300 | 0.167 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.7822 | 4400 | 0.1119 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.8 | 4500 | 0.1128 | 0.1177 | 0.6082 | 0.3228 | 0.5866 | 0.5058 | - | - | - | - | - | - | - | - | - | - |
0.8178 | 4600 | 0.125 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.8356 | 4700 | 0.1252 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.8533 | 4800 | 0.1066 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.8711 | 4900 | 0.1196 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.8889 | 5000 | 0.1291 | 0.1120 | 0.6134 | 0.3230 | 0.6115 | 0.5160 | - | - | - | - | - | - | - | - | - | - |
0.9067 | 5100 | 0.1219 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.9244 | 5200 | 0.1492 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.9422 | 5300 | 0.1138 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.96 | 5400 | 0.1583 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
0.9778 | 5500 | 0.1516 | 0.1125 | 0.6224 | 0.3205 | 0.6137 | 0.5189 | - | - | - | - | - | - | - | - | - | - |
0.9956 | 5600 | 0.1227 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
-1 | -1 | - | - | 0.6242 | 0.3196 | 0.6151 | 0.5577 | 0.2571 | 0.6138 | 0.8368 | 0.3902 | 0.8078 | 0.8841 | 0.3133 | 0.3562 | 0.6798 | 0.5525 |
- The row at step 2500 (epoch 0.4444) denotes the saved checkpoint; its nDCG@10 values on NanoMSMARCO, NanoNFCorpus, and NanoNQ match the per-dataset metrics reported above.
Environmental Impact
Carbon emissions were measured using CodeCarbon.
- Energy Consumed: 0.057 kWh
- Carbon Emitted: 0.021 kg of CO2
- Hours Used: 0.179 hours
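These figures come from wrapping the training run in CodeCarbon's tracker; a minimal sketch of that measurement:
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()
tracker.start()
# ... training runs here ...
emissions_kg = tracker.stop()  # total emissions in kg of CO2-eq
print(emissions_kg)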
Training Hardware
- On Cloud: No
- GPU Model: 1 x NVIDIA H100 80GB HBM3
- CPU Model: AMD EPYC 7R13 Processor
- RAM Size: 248.00 GB
Framework Versions
- Python: 3.13.3
- Sentence Transformers: 4.2.0.dev0
- Transformers: 4.51.3
- PyTorch: 2.7.1+cu126
- Accelerate: 0.26.0
- Datasets: 2.21.0
- Tokenizers: 0.21.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
SpladeLoss
@misc{formal2022distillationhardnegativesampling,
title={From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective},
author={Thibault Formal and Carlos Lassance and Benjamin Piwowarski and Stéphane Clinchant},
year={2022},
eprint={2205.04733},
archivePrefix={arXiv},
primaryClass={cs.IR},
url={https://arxiv.org/abs/2205.04733},
}
SparseMultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
FlopsLoss
@article{paria2020minimizing,
title={Minimizing flops to learn efficient sparse representations},
author={Paria, Biswajit and Yeh, Chih-Kuan and Yen, Ian EH and Xu, Ning and Ravikumar, Pradeep and P{\'o}czos, Barnab{\'a}s},
journal={arXiv preprint arXiv:2004.05665},
year={2020}
}