dawn78 committed · Commit 573b656 · verified · 1 Parent(s): 10226f7

Upload folder using huggingface_hub
1_Pooling/config.json ADDED
```json
{
  "word_embedding_dimension": 384,
  "pooling_mode_cls_token": false,
  "pooling_mode_mean_tokens": true,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
```
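This configuration enables mean pooling only (`pooling_mode_mean_tokens: true`): token embeddings are averaged over the positions the attention mask marks as real tokens. A minimal sketch of that operation on toy 3-dimensional vectors (the values and dimensionality are illustrative; the real model pools 384-dimensional token embeddings):

```python
# Sketch of mask-aware mean pooling, the mode this config enables.
def mean_pool(token_embeddings, attention_mask):
    """Average token vectors, ignoring positions where the mask is 0."""
    dim = len(token_embeddings[0])
    sums = [0.0] * dim
    count = 0
    for vec, mask in zip(token_embeddings, attention_mask):
        if mask:
            count += 1
            for i, x in enumerate(vec):
                sums[i] += x
    return [s / count for s in sums]

tokens = [[1.0, 2.0, 3.0],   # real token
          [3.0, 4.0, 5.0],   # real token
          [9.0, 9.0, 9.0]]   # padding, excluded by the mask
mask = [1, 1, 0]
print(mean_pool(tokens, mask))  # [2.0, 3.0, 4.0]
```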
README.md ADDED
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1459
- loss:CosineSimilarityLoss
base_model: sentence-transformers/all-MiniLM-L6-v2
widget:
- source_sentence: still popular today this fresh fougere fragrance inspired many
    wannabes
  sentences:
  - fruity notes, fig leaf, cedar needle, resins
  - ginger, mandarin, green tea, hazelnut, amberwood, daffodil, tangerine, metallic
    effect
  - mandarin, lavender, green botanics, jasmine, basil, geranium, sage, sandalwood,
    vetiver, rosewood, amber
- source_sentence: rose blush cologne 2023 by jo malone london rose blush cologne
    presents an enchanting bouquet that captures the essence of blooming romance and
    tropical vitality with an initial sweet hint of luscious litchi and a refreshing
    touch of herbs this fragrance unfolds into a heart of delicate rose showcasing
    a radiant femininity the composition is beautifully rounded off with soft musky
    undertones adding an elegant warmth that lingers on the skin users describe rose
    blush as vibrant and joyful perfect for both everyday wear and special occasions
    reviewers appreciate its fresh appeal heralding it as an uplifting scent that
    evokes feelings of spring and renewal many highlight its moderate longevity making
    it suitable for those who desire a fragrance that gently permeates without overwhelming
    whether youre seeking a burst of floral energy or a subtle whisper of sophistication
    this perfume is sure to leave a delightful impression
  sentences:
  - blonde woods, fresh ginger, carnation, green botanics, clover, green tea, white
    tea, clary sage, mahogany, ambergris, vetiver, fruits, pink grapefruit, frangipani,
    myrtle, darjeeling tea, mint
  - yuzu, clary sage, balsam fir, cedar
  - lychee, basil, rose, musk
- source_sentence: thank u next by ariana grande is a playful and modern fragrance
    that captures the essence of youthful exuberance and selfempowerment this charming
    scent exudes a vibrant sweetness that dances between fruity and creamy notes creating
    an inviting aura that is both uplifting and comforting users often describe this
    perfume as deliciously sweet and fun making it perfect for casual wear or a spirited
    night out the blend is frequently noted for its warm inviting quality evoking
    a sense of cheerful nostalgia many reviewers highlight its longlasting nature
    and delightful sillage ensuring that its fragrant embrace stays with you throughout
    the day perfect for the confident contemporary woman thank u next effortlessly
    combines the spirited essence of fresh berries with a creamy tropical nuance which
    is masterfully balanced by an undercurrent of sweet indulgence overall this fragrance
    is celebrated for its delightful charm and is sure to make a memorable impression
    wherever you go
  sentences:
  - clary sage, citruses
  - tagetes, gingerbread, white wood, red fruit, spices, creme brulee
  - cashmeran, lime, myrtle, metallic effect, vetiver, nasturtium, pimento, resins
- source_sentence: little black dress eau fraiche by avon exudes a lively and refreshing
    spirit that captivates effortlessly this fragrance opens with a bright burst of
    citrus that instantly uplifts the mood reminiscent of sunkissed afternoons as
    it unfolds delicate floral notes weave through creating an elegant bouquet that
    embodies femininity and charm the scent is anchored by a subtle musk that rounds
    out the experience providing a warm and inviting backdrop users have praised this
    fragrance for its fresh and invigorating essence making it perfect for daytime
    wear many appreciate its lightness and airy quality which is ideal for those seeking
    a scent that is both playful and sophisticated with a commendable rating of 375
    out of 5 it has earned accolades for its delightful character and versatility
    appealing to a broad audience who value a fragrance that feels both chic and approachable
    overall little black dress eau fraiche is described as an essential contemporary
    scent for the modern woman effortlessly enhancing any occasion with its vibrant
    charm
  sentences:
  - floral notes, citruses, water lily, lady of the night flower, white musk, clove,
    creme brulee, mango, fruits, clover, hinoki wood
  - lemon, may rose, spices, peony, lily of the valley, blackcurrant, raspberry, peach,
    musk, sandalwood, amber, heliotrope, oud
  - cyclamen, petitgrain, sesame, thyme, myrrh
- source_sentence: indulge your senses with comme une evidence limited edition 2008
    by yves rocher a sophisticated floral fragrance that captures the essence of tranquility
    and elegance this scent harmoniously blends delicate floral notes with hints of
    earthy moss creating a fresh and uplifting experience reminiscent of a serene
    garden in full bloom users describe it as both refreshing and subtle ideal for
    those seeking a signature scent that exudes femininity without overwhelming presence
    the composition is said to invoke feelings of serenity and poise making it perfect
    for daytime wear or special occasions when one desires a touch of grace with an
    overall rating of 376 the fragrance has garnered appreciation for its longevity
    and ability to evoke memories of blooming florals intertwined with natural sweetness
    it strikes a perfect balance appealing to those who cherish a scent that is both
    light and intricately layered whether strolling through sunlit paths or enjoying
    quiet moments inside comme une evidence limited edition envelops the wearer in
    a soothing embrace leaving a lasting impression of refined simplicity
  sentences:
  - papyrus, ginger, spices, herbal notes, lemon blossom, green tree accord, ambertonic,
    lemon leaf oil, cassis, pimento, acacia, citron, gardenia, elemi, black amber,
    clove, clary sage, ambergris, lime, darjeeling tea, cashmeran, blonde woods
  - oud, ginger, sea salt, lily, resins
  - ambertonic, lemon leaf oil, resins, white wood, woody notes, sweet pea, ambergris
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
model-index:
- name: SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
  results:
  - task:
      type: semantic-similarity
      name: Semantic Similarity
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: pearson_cosine
      value: 0.9339541699697309
      name: Pearson Cosine
    - type: spearman_cosine
      value: 0.733406361302126
      name: Spearman Cosine
---

# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a [sentence-transformers](https://www.SBERT.net) model fine-tuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences and paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision fa97f6e7cb1a59073dff9e6b13e2715cf7475ac9 -->
- **Maximum Sequence Length:** 256 tokens
- **Output Dimensionality:** 384 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
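Because the final `Normalize()` module L2-normalizes every embedding, cosine similarity between two outputs reduces to a plain dot product. A small stand-alone illustration with toy 2-dimensional vectors (not actual model outputs):

```python
import math

def l2_normalize(v):
    """Scale a vector to unit length, as the Normalize() module does."""
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

a = l2_normalize([3.0, 4.0])   # -> [0.6, 0.8]
b = l2_normalize([6.0, 8.0])   # same direction -> same unit vector
print(dot(a, b))  # ~1.0: for unit vectors, dot product equals cosine similarity
```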

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("sentence_transformers_model_id")
# Run inference
sentences = [
    'indulge your senses with comme une evidence limited edition 2008 by yves rocher a sophisticated floral fragrance that captures the essence of tranquility and elegance this scent harmoniously blends delicate floral notes with hints of earthy moss creating a fresh and uplifting experience reminiscent of a serene garden in full bloom users describe it as both refreshing and subtle ideal for those seeking a signature scent that exudes femininity without overwhelming presence the composition is said to invoke feelings of serenity and poise making it perfect for daytime wear or special occasions when one desires a touch of grace with an overall rating of 376 the fragrance has garnered appreciation for its longevity and ability to evoke memories of blooming florals intertwined with natural sweetness it strikes a perfect balance appealing to those who cherish a scent that is both light and intricately layered whether strolling through sunlit paths or enjoying quiet moments inside comme une evidence limited edition envelops the wearer in a soothing embrace leaving a lasting impression of refined simplicity',
    'oud, ginger, sea salt, lily, resins',
    'papyrus, ginger, spices, herbal notes, lemon blossom, green tree accord, ambertonic, lemon leaf oil, cassis, pimento, acacia, citron, gardenia, elemi, black amber, clove, clary sage, ambergris, lime, darjeeling tea, cashmeran, blonde woods',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
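Beyond pairwise scores, the same embeddings support semantic search: rank candidate note lists against a description by cosine similarity. A sketch with made-up 3-dimensional vectors standing in for `model.encode(...)` output (the strings and numbers below are illustrative only, not real model embeddings):

```python
def cosine(u, v):
    """Cosine similarity of two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

# Hypothetical query embedding and candidate embeddings.
query = [0.1, 0.9, 0.2]
candidates = {
    "lychee, basil, rose, musk": [0.1, 0.8, 0.3],
    "oud, ginger, sea salt":     [0.9, 0.0, 0.1],
}
ranked = sorted(candidates, key=lambda k: cosine(query, candidates[k]),
                reverse=True)
print(ranked[0])  # the note list whose vector is closest to the query
```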

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can fine-tune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Semantic Similarity

* Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| pearson_cosine      | 0.934      |
| **spearman_cosine** | **0.7334** |

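For reference, the bolded Spearman metric is just Pearson correlation computed on the ranks of the scores. A from-scratch sketch (assumes no tied values; in practice `scipy.stats.spearmanr` handles ties properly):

```python
def rank(values):
    """Assign ranks 1..n; assumes all values are distinct."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for pos, idx in enumerate(order):
        ranks[idx] = pos + 1
    return ranks

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    return pearson(rank(x), rank(y))

model_scores = [0.9, 0.1, 0.5, 0.7]   # predicted cosine similarities (toy)
gold_labels = [1.0, 0.0, 0.5, 0.25]   # ground-truth similarity labels (toy)
print(round(spearman(model_scores, gold_labels), 4))  # 0.8
```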

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 1,459 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
  |         | sentence_0 | sentence_1 | label |
  |:--------|:-----------|:-----------|:------|
  | type    | string     | string     | float |
  | details | <ul><li>min: 12 tokens</li><li>mean: 182.01 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 5 tokens</li><li>mean: 33.09 tokens</li><li>max: 88 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.25</li><li>max: 1.0</li></ul> |
* Samples:
  | sentence_0 | sentence_1 | label |
  |:-----------|:-----------|:------|
  | <code>today tomorrow always in love by avon embodying a sense of timeless romance today tomorrow always in love is an enchanting fragrance that strikes a perfect balance between freshness and warmth this captivating scent opens with bright effervescent notes that evoke images of blooming gardens and sunlit moments as the fragrance unfolds it reveals a charming bouquet that celebrates femininity featuring delicate floral elements that wrap around the wearer like a cherished embrace users describe this perfume as uplifting and evocative making it an ideal companion for both everyday wear and special occasions many reviewers appreciate its elegant character highlighting its multifaceted nature that seamlessly transitions from day to night while some find it subtly sweet and playful others cherish its musky undertones which lend a depth that enhances its allure overall with a moderate rating that suggests a solid appreciation among wearers today tomorrow always in love captures the essence of ro...</code> | <code>lotus, neroli, carambola, pomegranate, tuberose, gardenia, tuberose, pepper, musk, woody notes, amber</code> | <code>1.0</code> |
  | <code>mankind hero by kenneth cole encapsulates a vibrant and adventurous spirit designed for the modern man who embraces both freshness and sophistication this fragrance unfolds with an invigorating burst reminiscent of a brisk mountain breeze seamlessly paired with a zesty hint of citrus the aromatic heart introduces a soothing edginess where lavender and warm vanilla intertwine creating a balanced yet captivating profile as it settles an inviting warmth emerges enriched by woody undertones that linger pleasantly on the skin users have praised mankind hero for its versatile character suitable for both casual outings and formal occasions many describe it as longlasting and unique appreciating the balanced blend that feels both refreshing and comforting the overall sentiment reflects a sense of confidence and elegance making this scent a cherished addition to a mans fragrance collection it has garnered favorable reviews boasting a solid rating that underscores its appeal embrace the essence ...</code> | <code>mountain air, lemon, coriander, lavender, vanilla, clary sage, plum, musk, coumarin, amberwood, oak moss</code> | <code>1.0</code> |
  | <code>black essential dark by avon immerse yourself in the captivating allure of black essential dark a fragrance that elegantly marries the depth of aromatic woods with a touch of leathers sensuality this modern scent envelops the wearer in a rich and sophisticated aura exuding confidence and a hint of mystery users describe it as both refreshing and spicy with an invigorating blend that feels perfect for the urban man who embraces lifes more daring adventures crafted with meticulous attention by perfumer mike parrot this fragrance has garnered a solid reputation amongst enthusiasts resulting in a commendable 405 rating from its admirers many find it to be versatile enough for both day and night wear making it an essential companion for various occasions reviewers frequently highlight its longlasting presence creating an inviting and memorable impression with a delicate yet commanding presence black essential dark is ideal for those looking to leave a mark without overpowering the senses wh...</code> | <code>mint, allspice, white tea, amber, herbal notes, pear blossom, armoise, gurgum wood, creme brulee</code> | <code>0.0</code> |
* Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
  ```json
  {
      "loss_fct": "torch.nn.modules.loss.MSELoss"
  }
  ```
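Concretely, `CosineSimilarityLoss` with an MSE loss function penalizes the squared gap between the cosine similarity of each embedding pair and its gold label. A stand-alone numeric sketch (the library implementation works on torch tensors; this toy version uses plain lists):

```python
def cosine(u, v):
    """Cosine similarity of two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def cosine_similarity_mse_loss(pairs, labels):
    """Mean of (cos(u, v) - label)^2 over all (u, v) pairs."""
    errs = [(cosine(u, v) - y) ** 2 for (u, v), y in zip(pairs, labels)]
    return sum(errs) / len(errs)

pairs = [([1.0, 0.0], [1.0, 0.0]),   # identical   -> cosine 1.0
         ([1.0, 0.0], [0.0, 1.0])]   # orthogonal  -> cosine 0.0
labels = [1.0, 0.0]
print(cosine_similarity_mse_loss(pairs, labels))  # 0.0 — perfect predictions
```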

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>

### Training Logs
| Epoch  | Step | spearman_cosine |
|:------:|:----:|:---------------:|
| 1.0    | 46   | 0.6586          |
| 1.0870 | 50   | 0.6783          |
| 2.0    | 92   | 0.7334          |
| 2.1739 | 100  | 0.7268          |
| 3.0    | 138  | 0.7400          |
| 3.2609 | 150  | 0.7400          |
| 4.0    | 184  | 0.7426          |
| 4.3478 | 200  | 0.7387          |
| 5.0    | 230  | 0.7400          |
| 1.0    | 46   | 0.7387          |
| 1.0870 | 50   | 0.7387          |
| 2.0    | 92   | 0.7295          |
| 2.1739 | 100  | 0.7255          |
| 3.0    | 138  | 0.7242          |
| 3.2609 | 150  | 0.7255          |
| 4.0    | 184  | 0.7124          |
| 4.3478 | 200  | 0.7216          |
| 5.0    | 230  | 0.7334          |


### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.3.1
- Transformers: 4.47.1
- PyTorch: 2.5.1+cu124
- Accelerate: 1.2.1
- Datasets: 3.2.0
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
```json
{
  "_name_or_path": "sentence-transformers/all-MiniLM-L6-v2",
  "architectures": [
    "BertModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 384,
  "initializer_range": 0.02,
  "intermediate_size": 1536,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 6,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.47.1",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
```
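As a rough cross-check of this config against the ~91 MB float32 `model.safetensors`, the parameter count can be estimated from the standard BERT layout (embedding tables, six encoder layers, pooler). This is a back-of-the-envelope sketch, not an exact accounting of the checkpoint:

```python
# Values copied from the config above.
vocab, hidden, inter, layers, max_pos, types = 30522, 384, 1536, 6, 512, 2

embeddings = (vocab + max_pos + types) * hidden + 2 * hidden  # tables + LayerNorm
per_layer = (
    4 * (hidden * hidden + hidden)   # Q, K, V and attention output projections
    + hidden * inter + inter         # feed-forward up-projection
    + inter * hidden + hidden        # feed-forward down-projection
    + 2 * (2 * hidden)               # two LayerNorms (weight + bias each)
)
pooler = hidden * hidden + hidden
total = embeddings + layers * per_layer + pooler
print(total)  # 22713216 — about 22.7M parameters, roughly 91 MB in float32
```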
config_sentence_transformers.json ADDED
```json
{
  "__version__": {
    "sentence_transformers": "3.3.1",
    "transformers": "4.47.1",
    "pytorch": "2.5.1+cu124"
  },
  "prompts": {},
  "default_prompt_name": null,
  "similarity_fn_name": "cosine"
}
```
model.safetensors ADDED
```
version https://git-lfs.github.com/spec/v1
oid sha256:ba73b35d46215a22fea93575ecc59d08a36fadebc77eafaf4c07250bcc852a1d
size 90864192
```
modules.json ADDED
```json
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  },
  {
    "idx": 2,
    "name": "2",
    "path": "2_Normalize",
    "type": "sentence_transformers.models.Normalize"
  }
]
```
sentence_bert_config.json ADDED
```json
{
  "max_seq_length": 256,
  "do_lower_case": false
}
```
special_tokens_map.json ADDED
```json
{
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
```
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
```json
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": false,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "max_length": 128,
  "model_max_length": 256,
  "never_split": null,
  "pad_to_multiple_of": null,
  "pad_token": "[PAD]",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "sep_token": "[SEP]",
  "stride": 0,
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "[UNK]"
}
```
vocab.txt ADDED
The diff for this file is too large to render. See raw diff