radoslavralev committed
Commit 42224de · verified · 1 Parent(s): 2173df4

Add new SentenceTransformer model

Files changed (3)
  1. 1_Pooling/config.json +3 -3
  2. README.md +371 -80
  3. modules.json +6 -0
1_Pooling/config.json CHANGED
@@ -1,7 +1,7 @@
1
  {
2
- "word_embedding_dimension": 512,
3
- "pooling_mode_cls_token": true,
4
- "pooling_mode_mean_tokens": false,
5
  "pooling_mode_max_tokens": false,
6
  "pooling_mode_mean_sqrt_len_tokens": false,
7
  "pooling_mode_weightedmean_tokens": false,
 
1
  {
2
+ "word_embedding_dimension": 384,
3
+ "pooling_mode_cls_token": false,
4
+ "pooling_mode_mean_tokens": true,
5
  "pooling_mode_max_tokens": false,
6
  "pooling_mode_mean_sqrt_len_tokens": false,
7
  "pooling_mode_weightedmean_tokens": false,
README.md CHANGED
@@ -5,51 +5,231 @@ tags:
5
  - feature-extraction
6
  - dense
7
  - generated_from_trainer
8
- - dataset_size:100000
9
  - loss:MultipleNegativesRankingLoss
10
- base_model: prajjwal1/bert-small
11
  widget:
12
- - source_sentence: How do I calculate IQ?
13
  sentences:
14
- - What is the easiest way to know my IQ?
15
- - How do I calculate not IQ ?
16
- - What are some creative and innovative business ideas with less investment in India?
17
- - source_sentence: How can I learn martial arts in my home?
 
18
  sentences:
19
- - How can I learn martial arts by myself?
20
- - What are the advantages and disadvantages of investing in gold?
21
- - Can people see that I have looked at their pictures on instagram if I am not following
22
- them?
23
- - source_sentence: When Enterprise picks you up do you have to take them back?
24
  sentences:
25
- - Are there any software Training institute in Tuticorin?
26
- - When Enterprise picks you up do you have to take them back?
27
- - When Enterprise picks you up do them have to take youback?
28
- - source_sentence: What are some non-capital goods?
29
  sentences:
30
- - What are capital goods?
31
- - How is the value of [math]\pi[/math] calculated?
32
- - What are some non-capital goods?
33
- - source_sentence: What is the QuickBooks technical support phone number in New York?
34
  sentences:
35
- - What caused the Great Depression?
36
- - Can I apply for PR in Canada?
37
- - Which is the best QuickBooks Hosting Support Number in New York?
 
38
  pipeline_tag: sentence-similarity
39
  library_name: sentence-transformers
40
  ---
41
 
42
- # SentenceTransformer based on prajjwal1/bert-small
43
 
44
- This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small). It maps sentences & paragraphs to a 512-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
45
 
46
  ## Model Details
47
 
48
  ### Model Description
49
  - **Model Type:** Sentence Transformer
50
- - **Base model:** [prajjwal1/bert-small](https://huggingface.co/prajjwal1/bert-small) <!-- at revision 0ec5f86f27c1a77d704439db5e01c307ea11b9d4 -->
51
  - **Maximum Sequence Length:** 128 tokens
52
- - **Output Dimensionality:** 512 dimensions
53
  - **Similarity Function:** Cosine Similarity
54
  <!-- - **Training Dataset:** Unknown -->
55
  <!-- - **Language:** Unknown -->
@@ -66,7 +246,8 @@ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [p
66
  ```
67
  SentenceTransformer(
68
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False, 'architecture': 'BertModel'})
69
- (1): Pooling({'word_embedding_dimension': 512, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
 
70
  )
71
  ```
72
 
@@ -85,23 +266,23 @@ Then you can load this model and run inference.
85
  from sentence_transformers import SentenceTransformer
86
 
87
  # Download from the 🤗 Hub
88
- model = SentenceTransformer("sentence_transformers_model_id")
89
  # Run inference
90
  sentences = [
91
- 'What is the QuickBooks technical support phone number in New York?',
92
- 'Which is the best QuickBooks Hosting Support Number in New York?',
93
- 'Can I apply for PR in Canada?',
94
  ]
95
  embeddings = model.encode(sentences)
96
  print(embeddings.shape)
97
- # [3, 512]
98
 
99
  # Get the similarity scores for the embeddings
100
  similarities = model.similarity(embeddings, embeddings)
101
  print(similarities)
102
- # tensor([[1.0000, 0.8563, 0.0594],
103
- # [0.8563, 1.0000, 0.1245],
104
- # [0.0594, 0.1245, 1.0000]])
105
  ```
106
 
107
  <!--
@@ -128,6 +309,65 @@ You can finetune this model on your own dataset.
128
  *List how the model may foreseeably be misused and address what users ought not to do with the model.*
129
  -->
130
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
131
  <!--
132
  ## Bias, Risks and Limitations
133
 
@@ -146,23 +386,49 @@ You can finetune this model on your own dataset.
146
 
147
  #### Unnamed Dataset
148
 
149
- * Size: 100,000 training samples
150
- * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>sentence_2</code>
151
  * Approximate statistics based on the first 1000 samples:
152
- | | sentence_0 | sentence_1 | sentence_2 |
153
  |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
154
  | type | string | string | string |
155
- | details | <ul><li>min: 6 tokens</li><li>mean: 15.79 tokens</li><li>max: 66 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.68 tokens</li><li>max: 66 tokens</li></ul> | <ul><li>min: 7 tokens</li><li>mean: 16.37 tokens</li><li>max: 67 tokens</li></ul> |
156
  * Samples:
157
- | sentence_0 | sentence_1 | sentence_2 |
158
- |:-----------------------------------------------------------------|:-----------------------------------------------------------------|:----------------------------------------------------------------------------------|
159
- | <code>Is masturbating bad for boys?</code> | <code>Is masturbating bad for boys?</code> | <code>How harmful or unhealthy is masturbation?</code> |
160
- | <code>Does a train engine move in reverse?</code> | <code>Does a train engine move in reverse?</code> | <code>Time moves forward, not in reverse. Doesn't that make time a vector?</code> |
161
- | <code>What is the most badass thing anyone has ever done?</code> | <code>What is the most badass thing anyone has ever done?</code> | <code>anyone is the most badass thing Whathas ever done?</code> |
162
  * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
163
  ```json
164
  {
165
- "scale": 20.0,
166
  "similarity_fct": "cos_sim",
167
  "gather_across_devices": false
168
  }
@@ -171,36 +437,49 @@ You can finetune this model on your own dataset.
171
  ### Training Hyperparameters
172
  #### Non-Default Hyperparameters
173
 
174
- - `per_device_train_batch_size`: 64
175
- - `per_device_eval_batch_size`: 64
176
  - `fp16`: True
177
- - `multi_dataset_batch_sampler`: round_robin
178
 
179
  #### All Hyperparameters
180
  <details><summary>Click to expand</summary>
181
 
182
  - `overwrite_output_dir`: False
183
  - `do_predict`: False
184
- - `eval_strategy`: no
185
  - `prediction_loss_only`: True
186
- - `per_device_train_batch_size`: 64
187
- - `per_device_eval_batch_size`: 64
188
  - `per_gpu_train_batch_size`: None
189
  - `per_gpu_eval_batch_size`: None
190
  - `gradient_accumulation_steps`: 1
191
  - `eval_accumulation_steps`: None
192
  - `torch_empty_cache_steps`: None
193
- - `learning_rate`: 5e-05
194
- - `weight_decay`: 0.0
195
  - `adam_beta1`: 0.9
196
  - `adam_beta2`: 0.999
197
  - `adam_epsilon`: 1e-08
198
- - `max_grad_norm`: 1
199
- - `num_train_epochs`: 3
200
- - `max_steps`: -1
201
  - `lr_scheduler_type`: linear
202
  - `lr_scheduler_kwargs`: {}
203
- - `warmup_ratio`: 0.0
204
  - `warmup_steps`: 0
205
  - `log_level`: passive
206
  - `log_level_replica`: warning
@@ -228,14 +507,14 @@ You can finetune this model on your own dataset.
228
  - `tpu_num_cores`: None
229
  - `tpu_metrics_debug`: False
230
  - `debug`: []
231
- - `dataloader_drop_last`: False
232
- - `dataloader_num_workers`: 0
233
- - `dataloader_prefetch_factor`: None
234
  - `past_index`: -1
235
  - `disable_tqdm`: False
236
  - `remove_unused_columns`: True
237
  - `label_names`: None
238
- - `load_best_model_at_end`: False
239
  - `ignore_data_skip`: False
240
  - `fsdp`: []
241
  - `fsdp_min_num_params`: 0
@@ -245,23 +524,23 @@ You can finetune this model on your own dataset.
245
  - `parallelism_config`: None
246
  - `deepspeed`: None
247
  - `label_smoothing_factor`: 0.0
248
- - `optim`: adamw_torch_fused
249
  - `optim_args`: None
250
  - `adafactor`: False
251
  - `group_by_length`: False
252
  - `length_column_name`: length
253
  - `project`: huggingface
254
  - `trackio_space_id`: trackio
255
- - `ddp_find_unused_parameters`: None
256
  - `ddp_bucket_cap_mb`: None
257
  - `ddp_broadcast_buffers`: False
258
  - `dataloader_pin_memory`: True
259
  - `dataloader_persistent_workers`: False
260
  - `skip_memory_metrics`: True
261
  - `use_legacy_prediction_loop`: False
262
- - `push_to_hub`: False
263
  - `resume_from_checkpoint`: None
264
- - `hub_model_id`: None
265
  - `hub_strategy`: every_save
266
  - `hub_private_repo`: None
267
  - `hub_always_push`: False
@@ -288,31 +567,43 @@ You can finetune this model on your own dataset.
288
  - `neftune_noise_alpha`: None
289
  - `optim_target_modules`: None
290
  - `batch_eval_metrics`: False
291
- - `eval_on_start`: False
292
  - `use_liger_kernel`: False
293
  - `liger_kernel_config`: None
294
  - `eval_use_gather_object`: False
295
  - `average_tokens_across_devices`: True
296
  - `prompts`: None
297
  - `batch_sampler`: batch_sampler
298
- - `multi_dataset_batch_sampler`: round_robin
299
  - `router_mapping`: {}
300
  - `learning_rate_mapping`: {}
301
 
302
  </details>
303
 
304
  ### Training Logs
305
- | Epoch | Step | Training Loss |
306
- |:------:|:----:|:-------------:|
307
- | 0.3199 | 500 | 0.4294 |
308
- | 0.6398 | 1000 | 0.1268 |
309
- | 0.9597 | 1500 | 0.1 |
310
- | 1.2796 | 2000 | 0.0792 |
311
- | 1.5995 | 2500 | 0.0706 |
312
- | 1.9194 | 3000 | 0.0687 |
313
- | 2.2393 | 3500 | 0.0584 |
314
- | 2.5592 | 4000 | 0.057 |
315
- | 2.8791 | 4500 | 0.0581 |
316
 
317
 
318
  ### Framework Versions
@@ -321,7 +612,7 @@ You can finetune this model on your own dataset.
321
  - Transformers: 4.57.3
322
  - PyTorch: 2.9.1+cu128
323
  - Accelerate: 1.12.0
324
- - Datasets: 4.4.2
325
  - Tokenizers: 0.22.1
326
 
327
  ## Citation
 
5
  - feature-extraction
6
  - dense
7
  - generated_from_trainer
8
+ - dataset_size:713743
9
  - loss:MultipleNegativesRankingLoss
10
+ base_model: thenlper/gte-small
11
  widget:
12
+ - source_sentence: 'Abraham Lincoln: Why is the Gettysburg Address so memorable?'
13
  sentences:
14
+ - 'Abraham Lincoln: Why is the Gettysburg Address so memorable?'
15
+ - What does the Gettysburg Address really mean?
16
+ - What is eatalo.com?
17
+ - source_sentence: Has the influence of Ancient Carthage in science, math, and society
18
+ been underestimated?
19
  sentences:
20
+ - How does one earn money online without an investment from home?
21
+ - Has the influence of Ancient Carthage in science, math, and society been underestimated?
22
+ - Has the influence of the Ancient Etruscans in science and math been underestimated?
23
+ - source_sentence: Is there any app that shares charging to others like share it how
24
+ we transfer files?
25
  sentences:
26
+ - How do you think of Chinese claims that the present Private Arbitration is illegal,
27
+ its verdict violates the UNCLOS and is illegal?
28
+ - Is there any app that shares charging to others like share it how we transfer
29
+ files?
30
+ - Are there any platforms that provides end-to-end encryption for file transfer/
31
+ sharing?
32
+ - source_sentence: Why AAP’s MLA Dinesh Mohaniya has been arrested?
33
  sentences:
34
+ - What are your views on the latest sex scandal by AAP MLA Sandeep Kumar?
35
+ - What is a dc current? What are some examples?
36
+ - Why AAP’s MLA Dinesh Mohaniya has been arrested?
37
+ - source_sentence: What is the difference between economic growth and economic development?
38
  sentences:
39
+ - How cold can the Gobi Desert get, and how do its average temperatures compare
40
+ to the ones in the Simpson Desert?
41
+ - the difference between economic growth and economic development is What?
42
+ - What is the difference between economic growth and economic development?
43
  pipeline_tag: sentence-similarity
44
  library_name: sentence-transformers
45
+ metrics:
46
+ - cosine_accuracy@1
47
+ - cosine_accuracy@3
48
+ - cosine_accuracy@5
49
+ - cosine_accuracy@10
50
+ - cosine_precision@1
51
+ - cosine_precision@3
52
+ - cosine_precision@5
53
+ - cosine_precision@10
54
+ - cosine_recall@1
55
+ - cosine_recall@3
56
+ - cosine_recall@5
57
+ - cosine_recall@10
58
+ - cosine_ndcg@10
59
+ - cosine_mrr@10
60
+ - cosine_map@100
61
+ model-index:
62
+ - name: SentenceTransformer based on thenlper/gte-small
63
+ results:
64
+ - task:
65
+ type: information-retrieval
66
+ name: Information Retrieval
67
+ dataset:
68
+ name: NanoMSMARCO
69
+ type: NanoMSMARCO
70
+ metrics:
71
+ - type: cosine_accuracy@1
72
+ value: 0.28
73
+ name: Cosine Accuracy@1
74
+ - type: cosine_accuracy@3
75
+ value: 0.58
76
+ name: Cosine Accuracy@3
77
+ - type: cosine_accuracy@5
78
+ value: 0.64
79
+ name: Cosine Accuracy@5
80
+ - type: cosine_accuracy@10
81
+ value: 0.72
82
+ name: Cosine Accuracy@10
83
+ - type: cosine_precision@1
84
+ value: 0.28
85
+ name: Cosine Precision@1
86
+ - type: cosine_precision@3
87
+ value: 0.19333333333333333
88
+ name: Cosine Precision@3
89
+ - type: cosine_precision@5
90
+ value: 0.128
91
+ name: Cosine Precision@5
92
+ - type: cosine_precision@10
93
+ value: 0.07200000000000001
94
+ name: Cosine Precision@10
95
+ - type: cosine_recall@1
96
+ value: 0.28
97
+ name: Cosine Recall@1
98
+ - type: cosine_recall@3
99
+ value: 0.58
100
+ name: Cosine Recall@3
101
+ - type: cosine_recall@5
102
+ value: 0.64
103
+ name: Cosine Recall@5
104
+ - type: cosine_recall@10
105
+ value: 0.72
106
+ name: Cosine Recall@10
107
+ - type: cosine_ndcg@10
108
+ value: 0.5075011853031293
109
+ name: Cosine Ndcg@10
110
+ - type: cosine_mrr@10
111
+ value: 0.4386111111111111
112
+ name: Cosine Mrr@10
113
+ - type: cosine_map@100
114
+ value: 0.4533366047009664
115
+ name: Cosine Map@100
116
+ - task:
117
+ type: information-retrieval
118
+ name: Information Retrieval
119
+ dataset:
120
+ name: NanoNQ
121
+ type: NanoNQ
122
+ metrics:
123
+ - type: cosine_accuracy@1
124
+ value: 0.32
125
+ name: Cosine Accuracy@1
126
+ - type: cosine_accuracy@3
127
+ value: 0.54
128
+ name: Cosine Accuracy@3
129
+ - type: cosine_accuracy@5
130
+ value: 0.6
131
+ name: Cosine Accuracy@5
132
+ - type: cosine_accuracy@10
133
+ value: 0.66
134
+ name: Cosine Accuracy@10
135
+ - type: cosine_precision@1
136
+ value: 0.32
137
+ name: Cosine Precision@1
138
+ - type: cosine_precision@3
139
+ value: 0.18666666666666665
140
+ name: Cosine Precision@3
141
+ - type: cosine_precision@5
142
+ value: 0.128
143
+ name: Cosine Precision@5
144
+ - type: cosine_precision@10
145
+ value: 0.07
146
+ name: Cosine Precision@10
147
+ - type: cosine_recall@1
148
+ value: 0.3
149
+ name: Cosine Recall@1
150
+ - type: cosine_recall@3
151
+ value: 0.51
152
+ name: Cosine Recall@3
153
+ - type: cosine_recall@5
154
+ value: 0.58
155
+ name: Cosine Recall@5
156
+ - type: cosine_recall@10
157
+ value: 0.64
158
+ name: Cosine Recall@10
159
+ - type: cosine_ndcg@10
160
+ value: 0.48687028758380874
161
+ name: Cosine Ndcg@10
162
+ - type: cosine_mrr@10
163
+ value: 0.4465
164
+ name: Cosine Mrr@10
165
+ - type: cosine_map@100
166
+ value: 0.44172587957864395
167
+ name: Cosine Map@100
168
+ - task:
169
+ type: nano-beir
170
+ name: Nano BEIR
171
+ dataset:
172
+ name: NanoBEIR mean
173
+ type: NanoBEIR_mean
174
+ metrics:
175
+ - type: cosine_accuracy@1
176
+ value: 0.30000000000000004
177
+ name: Cosine Accuracy@1
178
+ - type: cosine_accuracy@3
179
+ value: 0.56
180
+ name: Cosine Accuracy@3
181
+ - type: cosine_accuracy@5
182
+ value: 0.62
183
+ name: Cosine Accuracy@5
184
+ - type: cosine_accuracy@10
185
+ value: 0.69
186
+ name: Cosine Accuracy@10
187
+ - type: cosine_precision@1
188
+ value: 0.30000000000000004
189
+ name: Cosine Precision@1
190
+ - type: cosine_precision@3
191
+ value: 0.19
192
+ name: Cosine Precision@3
193
+ - type: cosine_precision@5
194
+ value: 0.128
195
+ name: Cosine Precision@5
196
+ - type: cosine_precision@10
197
+ value: 0.07100000000000001
198
+ name: Cosine Precision@10
199
+ - type: cosine_recall@1
200
+ value: 0.29000000000000004
201
+ name: Cosine Recall@1
202
+ - type: cosine_recall@3
203
+ value: 0.5449999999999999
204
+ name: Cosine Recall@3
205
+ - type: cosine_recall@5
206
+ value: 0.61
207
+ name: Cosine Recall@5
208
+ - type: cosine_recall@10
209
+ value: 0.6799999999999999
210
+ name: Cosine Recall@10
211
+ - type: cosine_ndcg@10
212
+ value: 0.497185736443469
213
+ name: Cosine Ndcg@10
214
+ - type: cosine_mrr@10
215
+ value: 0.4425555555555556
216
+ name: Cosine Mrr@10
217
+ - type: cosine_map@100
218
+ value: 0.44753124213980516
219
+ name: Cosine Map@100
220
  ---
221
 
222
+ # SentenceTransformer based on thenlper/gte-small
223
 
224
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [thenlper/gte-small](https://huggingface.co/thenlper/gte-small). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
225
 
226
  ## Model Details
227
 
228
  ### Model Description
229
  - **Model Type:** Sentence Transformer
230
+ - **Base model:** [thenlper/gte-small](https://huggingface.co/thenlper/gte-small) <!-- at revision 17e1f347d17fe144873b1201da91788898c639cd -->
231
  - **Maximum Sequence Length:** 128 tokens
232
+ - **Output Dimensionality:** 384 dimensions
233
  - **Similarity Function:** Cosine Similarity
234
  <!-- - **Training Dataset:** Unknown -->
235
  <!-- - **Language:** Unknown -->
 
246
  ```
247
  SentenceTransformer(
248
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False, 'architecture': 'BertModel'})
249
+ (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
250
+ (2): Normalize()
251
  )
252
  ```
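For orientation, here is a rough equivalence sketch (not part of the generated card) of what modules (1) and (2) do on top of the transformer: masked mean pooling over the token embeddings followed by L2 normalization. It assumes `transformers` and `torch` are installed; the repo id is the `hub_model_id` listed under the training hyperparameters.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Repo id taken from `hub_model_id` further down this card.
repo = "redis/model-b-structured"
tokenizer = AutoTokenizer.from_pretrained(repo)
encoder = AutoModel.from_pretrained(repo)

batch = tokenizer(["What is flow?"], padding=True, truncation=True,
                  max_length=128, return_tensors="pt")
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state           # (1, seq_len, 384)

mask = batch["attention_mask"].unsqueeze(-1).float()                # (1, seq_len, 1)
embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)  # module (1): mean pooling
embedding = torch.nn.functional.normalize(embedding, p=2, dim=1)    # module (2): Normalize
print(embedding.shape)                                              # torch.Size([1, 384])
```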
253
 
 
266
  from sentence_transformers import SentenceTransformer
267
 
268
  # Download from the 🤗 Hub
269
+ model = SentenceTransformer("redis/model-b-structured")
270
  # Run inference
271
  sentences = [
272
+ 'What is the difference between economic growth and economic development?',
273
+ 'What is the difference between economic growth and economic development?',
274
+ 'the difference between economic growth and economic development is What?',
275
  ]
276
  embeddings = model.encode(sentences)
277
  print(embeddings.shape)
278
+ # [3, 384]
279
 
280
  # Get the similarity scores for the embeddings
281
  similarities = model.similarity(embeddings, embeddings)
282
  print(similarities)
283
+ # tensor([[ 1.0001, 1.0001, -0.0307],
284
+ # [ 1.0001, 1.0001, -0.0307],
285
+ # [-0.0307, -0.0307, 1.0001]])
286
  ```
287
 
288
  <!--
 
309
  *List how the model may foreseeably be misused and address what users ought not to do with the model.*
310
  -->
311
 
312
+ ## Evaluation
313
+
314
+ ### Metrics
315
+
316
+ #### Information Retrieval
317
+
318
+ * Datasets: `NanoMSMARCO` and `NanoNQ`
319
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
320
+
321
+ | Metric | NanoMSMARCO | NanoNQ |
322
+ |:--------------------|:------------|:-----------|
323
+ | cosine_accuracy@1 | 0.28 | 0.32 |
324
+ | cosine_accuracy@3 | 0.58 | 0.54 |
325
+ | cosine_accuracy@5 | 0.64 | 0.6 |
326
+ | cosine_accuracy@10 | 0.72 | 0.66 |
327
+ | cosine_precision@1 | 0.28 | 0.32 |
328
+ | cosine_precision@3 | 0.1933 | 0.1867 |
329
+ | cosine_precision@5 | 0.128 | 0.128 |
330
+ | cosine_precision@10 | 0.072 | 0.07 |
331
+ | cosine_recall@1 | 0.28 | 0.3 |
332
+ | cosine_recall@3 | 0.58 | 0.51 |
333
+ | cosine_recall@5 | 0.64 | 0.58 |
334
+ | cosine_recall@10 | 0.72 | 0.64 |
335
+ | **cosine_ndcg@10** | **0.5075** | **0.4869** |
336
+ | cosine_mrr@10 | 0.4386 | 0.4465 |
337
+ | cosine_map@100 | 0.4533 | 0.4417 |
338
+
339
+ #### Nano BEIR
340
+
341
+ * Dataset: `NanoBEIR_mean`
342
+ * Evaluated with [<code>NanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.NanoBEIREvaluator) with these parameters:
343
+ ```json
344
+ {
345
+ "dataset_names": [
346
+ "msmarco",
347
+ "nq"
348
+ ],
349
+ "dataset_id": "lightonai/NanoBEIR-en"
350
+ }
351
+ ```
352
+
353
+ | Metric | Value |
354
+ |:--------------------|:-----------|
355
+ | cosine_accuracy@1 | 0.3 |
356
+ | cosine_accuracy@3 | 0.56 |
357
+ | cosine_accuracy@5 | 0.62 |
358
+ | cosine_accuracy@10 | 0.69 |
359
+ | cosine_precision@1 | 0.3 |
360
+ | cosine_precision@3 | 0.19 |
361
+ | cosine_precision@5 | 0.128 |
362
+ | cosine_precision@10 | 0.071 |
363
+ | cosine_recall@1 | 0.29 |
364
+ | cosine_recall@3 | 0.545 |
365
+ | cosine_recall@5 | 0.61 |
366
+ | cosine_recall@10 | 0.68 |
367
+ | **cosine_ndcg@10** | **0.4972** |
368
+ | cosine_mrr@10 | 0.4426 |
369
+ | cosine_map@100 | 0.4475 |
370
+
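A sketch of how the Nano BEIR figures above could be re-run locally (assumes the Nano datasets can be downloaded; only the dataset names are taken from the card):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import NanoBEIREvaluator

model = SentenceTransformer("redis/model-b-structured")
evaluator = NanoBEIREvaluator(dataset_names=["msmarco", "nq"])
results = evaluator(model)
print(results["NanoBEIR_mean_cosine_ndcg@10"])  # ~0.50 per the table above
```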
371
  <!--
372
  ## Bias, Risks and Limitations
373
 
 
386
 
387
  #### Unnamed Dataset
388
 
389
+ * Size: 713,743 training samples
390
+ * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
391
+ * Approximate statistics based on the first 1000 samples:
392
+ | | anchor | positive | negative |
393
+ |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
394
+ | type | string | string | string |
395
+ | details | <ul><li>min: 6 tokens</li><li>mean: 16.07 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 16.03 tokens</li><li>max: 53 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 16.81 tokens</li><li>max: 58 tokens</li></ul> |
396
+ * Samples:
397
+ | anchor | positive | negative |
398
+ |:-------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------|
399
+ | <code>Which one is better Linux OS? Ubuntu or Mint?</code> | <code>Why do you use Linux Mint?</code> | <code>Which one is not better Linux OS ? Ubuntu or Mint ?</code> |
400
+ | <code>What is flow?</code> | <code>What is flow?</code> | <code>What are flow lines?</code> |
401
+ | <code>How is Trump planning to get Mexico to pay for his supposed wall?</code> | <code>How is it possible for Donald Trump to force Mexico to pay for the wall?</code> | <code>Why do we connect the positive terminal before the negative terminal to ground in a vehicle battery?</code> |
402
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
403
+ ```json
404
+ {
405
+ "scale": 7.0,
406
+ "similarity_fct": "cos_sim",
407
+ "gather_across_devices": false
408
+ }
409
+ ```
410
+
411
+ ### Evaluation Dataset
412
+
413
+ #### Unnamed Dataset
414
+
415
+ * Size: 40,000 evaluation samples
416
+ * Columns: <code>anchor</code>, <code>positive</code>, and <code>negative</code>
417
  * Approximate statistics based on the first 1000 samples:
418
+ | | anchor | positive | negative |
419
  |:--------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|
420
  | type | string | string | string |
421
+ | details | <ul><li>min: 6 tokens</li><li>mean: 15.52 tokens</li><li>max: 74 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 15.51 tokens</li><li>max: 74 tokens</li></ul> | <ul><li>min: 6 tokens</li><li>mean: 16.79 tokens</li><li>max: 69 tokens</li></ul> |
422
  * Samples:
423
+ | anchor | positive | negative |
424
+ |:-------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------|
425
+ | <code>Why are all my questions on Quora marked needing improvement?</code> | <code>Why are all my questions immediately being marked as needing improvement?</code> | <code>For a post-graduate student in IIT, is it allowed to take an external scholarship as a top-up to his/her MHRD assistantship?</code> |
426
+ | <code>Can blue butter fly needle with vaccum tube be reused? Is it HIV risk? . Heard the needle is too small to be reused . Had blood draw at clinic?</code> | <code>Can blue butter fly needle with vaccum tube be reused? Is it HIV risk? . Heard the needle is too small to be reused . Had blood draw at clinic?</code> | <code>Can blue butter fly needle with vaccum tube be reused not ? Is it HIV risk ? . Heard the needle is too small to be reused . Had blood draw at clinic ?</code> |
427
+ | <code>Why do people still believe the world is flat?</code> | <code>Why are there still people who believe the world is flat?</code> | <code>I'm not able to buy Udemy course .it is not accepting mine and my friends debit card.my card can be used for Flipkart .how to purchase now?</code> |
428
  * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
429
  ```json
430
  {
431
+ "scale": 7.0,
432
  "similarity_fct": "cos_sim",
433
  "gather_across_devices": false
434
  }
 
437
  ### Training Hyperparameters
438
  #### Non-Default Hyperparameters
439
 
440
+ - `eval_strategy`: steps
441
+ - `per_device_train_batch_size`: 128
442
+ - `per_device_eval_batch_size`: 128
443
+ - `learning_rate`: 2e-05
444
+ - `weight_decay`: 0.0001
445
+ - `max_steps`: 5000
446
+ - `warmup_ratio`: 0.1
447
  - `fp16`: True
448
+ - `dataloader_drop_last`: True
449
+ - `dataloader_num_workers`: 1
450
+ - `dataloader_prefetch_factor`: 1
451
+ - `load_best_model_at_end`: True
452
+ - `optim`: adamw_torch
453
+ - `ddp_find_unused_parameters`: False
454
+ - `push_to_hub`: True
455
+ - `hub_model_id`: redis/model-b-structured
456
+ - `eval_on_start`: True
457
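The non-default values above map onto standard `SentenceTransformerTrainingArguments` fields; a minimal sketch, assuming defaults everywhere else and a CUDA device as in the original run (the `output_dir` is illustrative):

```python
from sentence_transformers import SentenceTransformerTrainingArguments

args = SentenceTransformerTrainingArguments(
    output_dir="output/model-b-structured",  # illustrative
    eval_strategy="steps",
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    learning_rate=2e-5,
    weight_decay=1e-4,
    max_steps=5000,
    warmup_ratio=0.1,
    fp16=True,
    dataloader_drop_last=True,
    dataloader_num_workers=1,
    load_best_model_at_end=True,
    push_to_hub=True,
    hub_model_id="redis/model-b-structured",
    eval_on_start=True,
)
```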
 
458
  #### All Hyperparameters
459
  <details><summary>Click to expand</summary>
460
 
461
  - `overwrite_output_dir`: False
462
  - `do_predict`: False
463
+ - `eval_strategy`: steps
464
  - `prediction_loss_only`: True
465
+ - `per_device_train_batch_size`: 128
466
+ - `per_device_eval_batch_size`: 128
467
  - `per_gpu_train_batch_size`: None
468
  - `per_gpu_eval_batch_size`: None
469
  - `gradient_accumulation_steps`: 1
470
  - `eval_accumulation_steps`: None
471
  - `torch_empty_cache_steps`: None
472
+ - `learning_rate`: 2e-05
473
+ - `weight_decay`: 0.0001
474
  - `adam_beta1`: 0.9
475
  - `adam_beta2`: 0.999
476
  - `adam_epsilon`: 1e-08
477
+ - `max_grad_norm`: 1.0
478
+ - `num_train_epochs`: 3.0
479
+ - `max_steps`: 5000
480
  - `lr_scheduler_type`: linear
481
  - `lr_scheduler_kwargs`: {}
482
+ - `warmup_ratio`: 0.1
483
  - `warmup_steps`: 0
484
  - `log_level`: passive
485
  - `log_level_replica`: warning
 
507
  - `tpu_num_cores`: None
508
  - `tpu_metrics_debug`: False
509
  - `debug`: []
510
+ - `dataloader_drop_last`: True
511
+ - `dataloader_num_workers`: 1
512
+ - `dataloader_prefetch_factor`: 1
513
  - `past_index`: -1
514
  - `disable_tqdm`: False
515
  - `remove_unused_columns`: True
516
  - `label_names`: None
517
+ - `load_best_model_at_end`: True
518
  - `ignore_data_skip`: False
519
  - `fsdp`: []
520
  - `fsdp_min_num_params`: 0
 
524
  - `parallelism_config`: None
525
  - `deepspeed`: None
526
  - `label_smoothing_factor`: 0.0
527
+ - `optim`: adamw_torch
528
  - `optim_args`: None
529
  - `adafactor`: False
530
  - `group_by_length`: False
531
  - `length_column_name`: length
532
  - `project`: huggingface
533
  - `trackio_space_id`: trackio
534
+ - `ddp_find_unused_parameters`: False
535
  - `ddp_bucket_cap_mb`: None
536
  - `ddp_broadcast_buffers`: False
537
  - `dataloader_pin_memory`: True
538
  - `dataloader_persistent_workers`: False
539
  - `skip_memory_metrics`: True
540
  - `use_legacy_prediction_loop`: False
541
+ - `push_to_hub`: True
542
  - `resume_from_checkpoint`: None
543
+ - `hub_model_id`: redis/model-b-structured
544
  - `hub_strategy`: every_save
545
  - `hub_private_repo`: None
546
  - `hub_always_push`: False
 
567
  - `neftune_noise_alpha`: None
568
  - `optim_target_modules`: None
569
  - `batch_eval_metrics`: False
570
+ - `eval_on_start`: True
571
  - `use_liger_kernel`: False
572
  - `liger_kernel_config`: None
573
  - `eval_use_gather_object`: False
574
  - `average_tokens_across_devices`: True
575
  - `prompts`: None
576
  - `batch_sampler`: batch_sampler
577
+ - `multi_dataset_batch_sampler`: proportional
578
  - `router_mapping`: {}
579
  - `learning_rate_mapping`: {}
580
 
581
  </details>
582
 
583
  ### Training Logs
584
+ | Epoch | Step | Training Loss | Validation Loss | NanoMSMARCO_cosine_ndcg@10 | NanoNQ_cosine_ndcg@10 | NanoBEIR_mean_cosine_ndcg@10 |
585
+ |:------:|:----:|:-------------:|:---------------:|:--------------------------:|:---------------------:|:----------------------------:|
586
+ | 0 | 0 | - | 3.6810 | 0.6259 | 0.6583 | 0.6421 |
587
+ | 0.0448 | 250 | 2.585 | 0.6156 | 0.5723 | 0.5298 | 0.5511 |
588
+ | 0.0897 | 500 | 0.6653 | 0.4478 | 0.6142 | 0.5301 | 0.5722 |
589
+ | 0.1345 | 750 | 0.5594 | 0.4191 | 0.5786 | 0.5355 | 0.5570 |
590
+ | 0.1793 | 1000 | 0.5315 | 0.4058 | 0.5597 | 0.5291 | 0.5444 |
591
+ | 0.2242 | 1250 | 0.5141 | 0.3980 | 0.5490 | 0.5255 | 0.5372 |
592
+ | 0.2690 | 1500 | 0.4986 | 0.3916 | 0.5286 | 0.5331 | 0.5308 |
593
+ | 0.3138 | 1750 | 0.4909 | 0.3857 | 0.5386 | 0.5297 | 0.5342 |
594
+ | 0.3587 | 2000 | 0.4831 | 0.3818 | 0.5175 | 0.5155 | 0.5165 |
595
+ | 0.4035 | 2250 | 0.4752 | 0.3785 | 0.5105 | 0.5292 | 0.5198 |
596
+ | 0.4484 | 2500 | 0.4707 | 0.3758 | 0.5208 | 0.4986 | 0.5097 |
597
+ | 0.4932 | 2750 | 0.4646 | 0.3733 | 0.5182 | 0.5016 | 0.5099 |
598
+ | 0.5380 | 3000 | 0.4636 | 0.3713 | 0.5127 | 0.4969 | 0.5048 |
599
+ | 0.5829 | 3250 | 0.4602 | 0.3693 | 0.5112 | 0.4869 | 0.4991 |
600
+ | 0.6277 | 3500 | 0.4597 | 0.3678 | 0.5170 | 0.5000 | 0.5085 |
601
+ | 0.6725 | 3750 | 0.4555 | 0.3665 | 0.5127 | 0.4899 | 0.5013 |
602
+ | 0.7174 | 4000 | 0.4541 | 0.3661 | 0.5130 | 0.4869 | 0.5000 |
603
+ | 0.7622 | 4250 | 0.4528 | 0.3649 | 0.5078 | 0.4887 | 0.4982 |
604
+ | 0.8070 | 4500 | 0.4495 | 0.3643 | 0.5073 | 0.4867 | 0.4970 |
605
+ | 0.8519 | 4750 | 0.4524 | 0.3640 | 0.5049 | 0.4875 | 0.4962 |
606
+ | 0.8967 | 5000 | 0.4516 | 0.3637 | 0.5075 | 0.4869 | 0.4972 |
607
 
608
 
609
  ### Framework Versions
 
612
  - Transformers: 4.57.3
613
  - PyTorch: 2.9.1+cu128
614
  - Accelerate: 1.12.0
615
+ - Datasets: 2.21.0
616
  - Tokenizers: 0.22.1
617
 
618
  ## Citation
modules.json CHANGED
@@ -10,5 +10,11 @@
10
  "name": "1",
11
  "path": "1_Pooling",
12
  "type": "sentence_transformers.models.Pooling"
13
  }
14
  ]
 
10
  "name": "1",
11
  "path": "1_Pooling",
12
  "type": "sentence_transformers.models.Pooling"
13
+ },
14
+ {
15
+ "idx": 2,
16
+ "name": "2",
17
+ "path": "2_Normalize",
18
+ "type": "sentence_transformers.models.Normalize"
19
  }
20
  ]
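Since this change appends a `Normalize` module, a quick illustrative check that encoded vectors come out unit-length (so dot product and cosine similarity coincide); the repo id is the card's `hub_model_id`:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("redis/model-b-structured")
emb = model.encode(["What is flow?", "What are flow lines?"])
print(np.linalg.norm(emb, axis=1))  # ~[1. 1.] thanks to the Normalize module
```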