pedro-thenewsroom committed
Commit 8d43e17 · verified · 1 Parent(s): a217c0b

Upload fine-tuned climate-misinfo-embed model
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 384,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
README.md ADDED
@@ -0,0 +1,539 @@
+ ---
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:53963
+ - loss:CosineSimilarityLoss
+ base_model: TaylorAI/bge-micro-v2
+ widget:
+ - source_sentence: What’s going to happen when their tanks run out of juice half-way
+     up the Tora Bora mountains and there’s not a recharging point outside of Kabul?
+     They’ll no doubt have green ammo to defend themselves, although plastic bullets
+     are a non-starter, obviously.
+   sentences:
+   - Climate change is nothing more than a fabricated agenda pushed by corrupt elites,
+     politicians, and scientists to control the masses, gain wealth, and suppress freedom.
+   - Despite the alarmists' claims of global warming, temperatures have remained steady
+     or even dropped in many areas, proving that climate change is nothing more than
+     natural variability.
+   - The Earth's climate has always changed due to natural cycles and external factors,
+     and the role of human activity or CO2 emissions in driving these changes is negligible
+     or unsupported by evidence.
+ - source_sentence: I am not a climatologist, but I don’t think any of the other witnesses
+     are either. I do work in the related field of atomic, molecular and optical physics.
+     I have spent my professional life studying the interactions of visible and infrared
+     radiation with gases – one of the main physical phenomena behind the greenhouse
+     effect. I have published over 200 papers in peer reviewed scientific journals.
+   sentences:
+   - Global climate policies are costly, ineffective, and fail to address the unchecked
+     emissions from developing nations, rendering efforts by industrialized countries
+     futile and economically damaging.
+   - The so-called consensus on climate change relies on flawed models, manipulated
+     data, and a refusal to address legitimate scientific uncertainties, all to serve
+     a predetermined political narrative.
+   - Global climate policies are costly, ineffective, and fail to address the unchecked
+     emissions from developing nations, rendering efforts by industrialized countries
+     futile and economically damaging.
+ - source_sentence: The science of how the world’s climate works is very weak. The
+     models used by the UN to predict changes have enormous gaps of knowledge. There
+     is also very vigorous debate among scientists about whether or not levels of carbon
+     dioxide cause global warming or are caused by it. In other words, we do not know
+     if human generated carbon dioxide is significant or not. Many of us just want
+     to think it is.
+   sentences:
+   - Despite the alarmists' claims of global warming, temperatures have remained steady
+     or even dropped in many areas, proving that climate change is nothing more than
+     natural variability.
+   - Despite the alarmists' claims of global warming, temperatures have remained steady
+     or even dropped in many areas, proving that climate change is nothing more than
+     natural variability.
+   - Fossil fuels have powered centuries of progress, lifted billions out of poverty,
+     and remain the backbone of global energy, while alternatives, though promising,
+     cannot yet match their scale, reliability, or affordability.
+ - source_sentence: 'The Family. Conservatives have won the argument about the central
+     importance of making sure that every child grows up with a mother and father.
+     The next challenge is to translate this victory into a strategy for reinforcing
+     marriage in public policy, and for giving parents more control over the education
+     and upbringing of their children Faith. Conservatives are breaking down barriers
+     to religion in the public square by emphasizing such principles as religious freedom
+     and religious expression. But they haven’t yet found an effective vocabulary for
+     arguing that religion should take a more central place in American life. The next
+     challenge is to encourage greater public appreciation of the role of religion
+     and religious believers in healthy societies while affirming a commitment to the
+     separation of church and state Freedom. Conservatives have won the argument about
+     the importance of private voluntary associations in a free society. The next challenge
+     is twofold: First, to strengthen civic institutions without resorting to government
+     subsidies that create dependency and destroy any sense of mission; and second,
+     to empower citizens to reassume the primary responsibility for helping the needy
+     through religious, charitable, and civic institutions.'
+   sentences:
+   - Climate change is nothing more than a fabricated agenda pushed by corrupt elites,
+     politicians, and scientists to control the masses, gain wealth, and suppress freedom.
+   - The so-called consensus on climate change relies on flawed models, manipulated
+     data, and a refusal to address legitimate scientific uncertainties, all to serve
+     a predetermined political narrative.
+   - Climate change is nothing more than a fabricated agenda pushed by corrupt elites,
+     politicians, and scientists to control the masses, gain wealth, and suppress freedom.
+ - source_sentence: Founded by some 30 leaders of the Christian Right, the Alliance
+     Defending Freedom is a legal advocacy and training group that has supported the
+     recriminalization of sexual acts between consenting LGBTQ adults in the U.S. and
+     criminalization abroad; has defended state-sanctioned sterilization of trans people
+     abroad; has contended that LGBTQ people are more likely to engage in pedophilia;
+     and claims that a ‘homosexual agenda’ will destroy Christianity and society. ADF
+     also works to develop “religious liberty” legislation and case law that will allow
+     the denial of goods and services to LGBTQ people on the basis of religion. Since
+     the election of President Trump, ADF has become one of the most influential groups
+     informing the administration’s attack on LGBTQ rights.
+   sentences:
+   - Climate change is nothing more than a fabricated agenda pushed by corrupt elites,
+     politicians, and scientists to control the masses, gain wealth, and suppress freedom.
+   - Climate change is nothing more than a fabricated agenda pushed by corrupt elites,
+     politicians, and scientists to control the masses, gain wealth, and suppress freedom.
+   - Fossil fuels have powered centuries of progress, lifted billions out of poverty,
+     and remain the backbone of global energy, while alternatives, though promising,
+     cannot yet match their scale, reliability, or affordability.
+ pipeline_tag: sentence-similarity
+ library_name: sentence-transformers
+ ---
+ 
+ # SentenceTransformer based on TaylorAI/bge-micro-v2
+ 
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [TaylorAI/bge-micro-v2](https://huggingface.co/TaylorAI/bge-micro-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+ 
+ ## Model Details
+ 
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [TaylorAI/bge-micro-v2](https://huggingface.co/TaylorAI/bge-micro-v2) <!-- at revision 3edf6d7de0faa426b09780416fe61009f26ae589 -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 384 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+ 
+ ### Model Sources
+ 
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+ 
+ ### Full Model Architecture
+ 
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
+   (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ )
+ ```
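+ 
+ The pooling layer averages token embeddings under the attention mask to produce one 384-dimensional vector per input. A minimal sketch of what that step computes, using plain `transformers` on the base backbone (the fine-tuned checkpoint has the same shapes; the variable names here are illustrative):
+ 
+ ```python
+ import torch
+ from transformers import AutoModel, AutoTokenizer
+ 
+ # The Pooling module has no weights of its own, so the backbone suffices here.
+ tokenizer = AutoTokenizer.from_pretrained("TaylorAI/bge-micro-v2")
+ backbone = AutoModel.from_pretrained("TaylorAI/bge-micro-v2")
+ 
+ batch = tokenizer(["a sample sentence"], padding=True, truncation=True,
+                   max_length=512, return_tensors="pt")
+ with torch.no_grad():
+     token_embeddings = backbone(**batch).last_hidden_state  # (batch, seq_len, 384)
+ 
+ # Mean pooling: sum token vectors where attention_mask == 1, divide by the count.
+ mask = batch["attention_mask"].unsqueeze(-1).float()        # (batch, seq_len, 1)
+ sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
+ print(sentence_embedding.shape)  # torch.Size([1, 384])
+ ```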
+ 
+ ## Usage
+ 
+ ### Direct Usage (Sentence Transformers)
+ 
+ First install the Sentence Transformers library:
+ 
+ ```bash
+ pip install -U sentence-transformers
+ ```
+ 
+ Then you can load this model and run inference:
+ ```python
+ from sentence_transformers import SentenceTransformer
+ 
+ # Download from the 🤗 Hub (replace the placeholder with this model's repo id)
+ model = SentenceTransformer("sentence_transformers_model_id")
+ # Run inference
+ sentences = [
+     'Founded by some 30 leaders of the Christian Right, the Alliance Defending Freedom is a legal advocacy and training group that has supported the recriminalization of sexual acts between consenting LGBTQ adults in the U.S. and criminalization abroad; has defended state-sanctioned sterilization of trans people abroad; has contended that LGBTQ people are more likely to engage in pedophilia; and claims that a ‘homosexual agenda’ will destroy Christianity and society. ADF also works to develop “religious liberty” legislation and case law that will allow the denial of goods and services to LGBTQ people on the basis of religion. Since the election of President Trump, ADF has become one of the most influential groups informing the administration’s attack on LGBTQ rights.',
+     'Fossil fuels have powered centuries of progress, lifted billions out of poverty, and remain the backbone of global energy, while alternatives, though promising, cannot yet match their scale, reliability, or affordability.',
+     'Climate change is nothing more than a fabricated agenda pushed by corrupt elites, politicians, and scientists to control the masses, gain wealth, and suppress freedom.',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 384]
+ 
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
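+ 
+ Because the training pairs score passages against a small set of canonical climate-misinformation claims (see Training Details below), a natural use is routing a new passage to its closest claim. A hypothetical sketch — the claim list is taken from the widget examples above, while the passage and the 0.5 threshold are illustrative assumptions, not part of this repository:
+ 
+ ```python
+ from sentence_transformers import SentenceTransformer
+ 
+ model = SentenceTransformer("sentence_transformers_model_id")  # this model's repo id
+ 
+ claims = [
+     "Climate change is nothing more than a fabricated agenda pushed by corrupt elites, politicians, and scientists to control the masses, gain wealth, and suppress freedom.",
+     "The Earth's climate has always changed due to natural cycles and external factors, and the role of human activity or CO2 emissions in driving these changes is negligible or unsupported by evidence.",
+ ]
+ passage = "Warming and cooling have come and gone for millennia; CO2 is just along for the ride."
+ 
+ # Cosine similarity between the passage and each canonical claim.
+ scores = model.similarity(model.encode([passage]), model.encode(claims))[0]
+ best = int(scores.argmax())
+ score = float(scores[best])
+ if score >= 0.5:  # illustrative threshold, not tuned
+     print(f"Closest claim ({score:.2f}): {claims[best]}")
+ else:
+     print("No canonical claim matched.")
+ ```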
+ 
+ <!--
+ ### Direct Usage (Transformers)
+ 
+ <details><summary>Click to see the direct usage in Transformers</summary>
+ 
+ </details>
+ -->
+ 
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+ 
+ You can finetune this model on your own dataset.
+ 
+ <details><summary>Click to expand</summary>
+ 
+ </details>
+ -->
+ 
+ <!--
+ ### Out-of-Scope Use
+ 
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+ 
+ <!--
+ ## Bias, Risks and Limitations
+ 
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+ 
+ <!--
+ ### Recommendations
+ 
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+ 
+ ## Training Details
+ 
+ ### Training Dataset
+ 
+ #### Unnamed Dataset
+ 
+ * Size: 53,963 training samples
+ * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
+ * Approximate statistics based on the first 1000 samples:
+   |         | sentence_0 | sentence_1 | label |
+   |:--------|:-----------|:-----------|:------|
+   | type    | string     | string     | float |
+   | details | <ul><li>min: 7 tokens</li><li>mean: 64.1 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 32 tokens</li><li>mean: 38.4 tokens</li><li>max: 44 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.09</li><li>max: 1.0</li></ul> |
+ * Samples:
+   | sentence_0 | sentence_1 | label |
+   |:-----------|:-----------|:------|
+   | <code>To that end, we have been working on the Murdoch press of late, with good initial results.</code> | <code>The so-called consensus on climate change relies on flawed models, manipulated data, and a refusal to address legitimate scientific uncertainties, all to serve a predetermined political narrative.</code> | <code>0.0</code> |
+   | <code>Scientists who dare question the almost religious belief in climate change, and yes, they do exist, are ignored or undermined in news reports as are policy makers and pundits who take similar views.</code> | <code>The Earth's climate has always changed due to natural cycles and external factors, and the role of human activity or CO2 emissions in driving these changes is negligible or unsupported by evidence.</code> | <code>0.0</code> |
+   | <code>What about ‘global warming?’ What matters is the degree and rate of change. There have been times on earth when it has been much warmer than today, and times when it’s been much colder. The latter are called ice ages. One of the former is called ‘The Climate Optimum.’ It was a time of higher average global temperature and high CO2.</code> | <code>The Earth's climate has always changed due to natural cycles and external factors, and the role of human activity or CO2 emissions in driving these changes is negligible or unsupported by evidence.</code> | <code>1.0</code> |
+ * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
+   ```json
+   {
+       "loss_fct": "torch.nn.modules.loss.MSELoss"
+   }
+   ```
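+ 
+ For reference, a minimal sketch of how a run with this loss and the non-default hyperparameters below could be wired up with the Sentence Transformers v3 Trainer API. The column layout follows the samples above; the output directory and the one-row dataset are placeholder assumptions, not the author's actual training script:
+ 
+ ```python
+ from datasets import Dataset
+ from sentence_transformers import (SentenceTransformer, SentenceTransformerTrainer,
+                                    SentenceTransformerTrainingArguments)
+ from sentence_transformers.losses import CosineSimilarityLoss
+ 
+ model = SentenceTransformer("TaylorAI/bge-micro-v2")
+ 
+ # Stand-in for the real 53,963 (sentence_0, sentence_1, label) pairs.
+ train_dataset = Dataset.from_dict({
+     "sentence_0": ["To that end, we have been working on the Murdoch press of late, with good initial results."],
+     "sentence_1": ["The so-called consensus on climate change relies on flawed models, manipulated data, and a refusal to address legitimate scientific uncertainties, all to serve a predetermined political narrative."],
+     "label": [0.0],
+ })
+ 
+ args = SentenceTransformerTrainingArguments(
+     output_dir="climate-misinfo-embed",  # placeholder path
+     per_device_train_batch_size=16,      # matches the card
+     num_train_epochs=20,                 # matches the card
+ )
+ 
+ trainer = SentenceTransformerTrainer(
+     model=model,
+     args=args,
+     train_dataset=train_dataset,
+     loss=CosineSimilarityLoss(model),  # MSE between cosine score and label
+ )
+ trainer.train()
+ model.save_pretrained("climate-misinfo-embed/final")
+ ```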
+ 
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+ 
+ - `per_device_train_batch_size`: 16
+ - `per_device_eval_batch_size`: 16
+ - `num_train_epochs`: 20
+ - `multi_dataset_batch_sampler`: round_robin
+ 
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+ 
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: no
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 16
+ - `per_device_eval_batch_size`: 16
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1
+ - `num_train_epochs`: 20
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.0
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: None
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `include_for_metrics`: []
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `average_tokens_across_devices`: False
+ - `prompts`: None
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: round_robin
+ 
+ </details>
+ 
+ ### Training Logs
+ <details><summary>Click to expand</summary>
+ 
+ | Epoch   | Step  | Training Loss |
+ |:-------:|:-----:|:-------------:|
+ | 0.1482  | 500   | 0.2358        |
+ | 0.2965  | 1000  | 0.0696        |
+ | 0.4447  | 1500  | 0.0618        |
+ | 0.5929  | 2000  | 0.0597        |
+ | 0.7412  | 2500  | 0.0586        |
+ | 0.8894  | 3000  | 0.0549        |
+ | 1.0377  | 3500  | 0.0587        |
+ | 1.1859  | 4000  | 0.0549        |
+ | 1.3341  | 4500  | 0.0521        |
+ | 1.4824  | 5000  | 0.0504        |
+ | 1.6306  | 5500  | 0.0501        |
+ | 1.7788  | 6000  | 0.0489        |
+ | 1.9271  | 6500  | 0.0493        |
+ | 2.0753  | 7000  | 0.0456        |
+ | 2.2235  | 7500  | 0.0398        |
+ | 2.3718  | 8000  | 0.0416        |
+ | 2.5200  | 8500  | 0.0411        |
+ | 2.6682  | 9000  | 0.0396        |
+ | 2.8165  | 9500  | 0.0373        |
+ | 2.9647  | 10000 | 0.04          |
+ | 3.1130  | 10500 | 0.0319        |
+ | 3.2612  | 11000 | 0.0325        |
+ | 3.4094  | 11500 | 0.0284        |
+ | 3.5577  | 12000 | 0.0292        |
+ | 3.7059  | 12500 | 0.0302        |
+ | 3.8541  | 13000 | 0.0287        |
+ | 4.0024  | 13500 | 0.0287        |
+ | 4.1506  | 14000 | 0.0205        |
+ | 4.2988  | 14500 | 0.0204        |
+ | 4.4471  | 15000 | 0.023         |
+ | 4.5953  | 15500 | 0.0223        |
+ | 4.7436  | 16000 | 0.0214        |
+ | 4.8918  | 16500 | 0.0208        |
+ | 5.0400  | 17000 | 0.0186        |
+ | 5.1883  | 17500 | 0.0133        |
+ | 5.3365  | 18000 | 0.0148        |
+ | 5.4847  | 18500 | 0.0131        |
+ | 5.6330  | 19000 | 0.0151        |
+ | 5.7812  | 19500 | 0.0135        |
+ | 5.9294  | 20000 | 0.0151        |
+ | 6.0777  | 20500 | 0.0108        |
+ | 6.2259  | 21000 | 0.0095        |
+ | 6.3741  | 21500 | 0.0088        |
+ | 6.5224  | 22000 | 0.01          |
+ | 6.6706  | 22500 | 0.0113        |
+ | 6.8189  | 23000 | 0.0122        |
+ | 6.9671  | 23500 | 0.0091        |
+ | 7.1153  | 24000 | 0.007         |
+ | 7.2636  | 24500 | 0.0076        |
+ | 7.4118  | 25000 | 0.0072        |
+ | 7.5600  | 25500 | 0.007         |
+ | 7.7083  | 26000 | 0.0079        |
+ | 7.8565  | 26500 | 0.0064        |
+ | 8.0047  | 27000 | 0.0078        |
+ | 8.1530  | 27500 | 0.0053        |
+ | 8.3012  | 28000 | 0.0054        |
+ | 8.4495  | 28500 | 0.0046        |
+ | 8.5977  | 29000 | 0.0046        |
+ | 8.7459  | 29500 | 0.0055        |
+ | 8.8942  | 30000 | 0.0046        |
+ | 9.0424  | 30500 | 0.0039        |
+ | 9.1906  | 31000 | 0.0043        |
+ | 9.3389  | 31500 | 0.0036        |
+ | 9.4871  | 32000 | 0.004         |
+ | 9.6353  | 32500 | 0.0034        |
+ | 9.7836  | 33000 | 0.0034        |
+ | 9.9318  | 33500 | 0.0036        |
+ | 10.0800 | 34000 | 0.0033        |
+ | 10.2283 | 34500 | 0.0024        |
+ | 10.3765 | 35000 | 0.0023        |
+ | 10.5248 | 35500 | 0.0031        |
+ | 10.6730 | 36000 | 0.0033        |
+ | 10.8212 | 36500 | 0.0031        |
+ | 10.9695 | 37000 | 0.0033        |
+ | 11.1177 | 37500 | 0.0021        |
+ | 11.2659 | 38000 | 0.002         |
+ | 11.4142 | 38500 | 0.0021        |
+ | 11.5624 | 39000 | 0.0024        |
+ | 11.7106 | 39500 | 0.0023        |
+ | 11.8589 | 40000 | 0.0018        |
+ | 12.0071 | 40500 | 0.0034        |
+ | 12.1554 | 41000 | 0.0019        |
+ | 12.3036 | 41500 | 0.0016        |
+ | 12.4518 | 42000 | 0.0017        |
+ | 12.6001 | 42500 | 0.0016        |
+ | 12.7483 | 43000 | 0.0015        |
+ | 12.8965 | 43500 | 0.0018        |
+ | 13.0448 | 44000 | 0.0017        |
+ | 13.1930 | 44500 | 0.0013        |
+ | 13.3412 | 45000 | 0.0016        |
+ | 13.4895 | 45500 | 0.0012        |
+ | 13.6377 | 46000 | 0.0016        |
+ | 13.7859 | 46500 | 0.0019        |
+ | 13.9342 | 47000 | 0.0018        |
+ | 14.0824 | 47500 | 0.0014        |
+ | 14.2307 | 48000 | 0.0019        |
+ | 14.3789 | 48500 | 0.0017        |
+ | 14.5271 | 49000 | 0.0009        |
+ | 14.6754 | 49500 | 0.0009        |
+ | 14.8236 | 50000 | 0.0009        |
+ | 14.9718 | 50500 | 0.0018        |
+ | 15.1201 | 51000 | 0.0014        |
+ | 15.2683 | 51500 | 0.0012        |
+ | 15.4165 | 52000 | 0.0012        |
+ | 15.5648 | 52500 | 0.001         |
+ | 15.7130 | 53000 | 0.0014        |
+ | 15.8613 | 53500 | 0.0018        |
+ | 16.0095 | 54000 | 0.0014        |
+ | 16.1577 | 54500 | 0.0011        |
+ | 16.3060 | 55000 | 0.001         |
+ | 16.4542 | 55500 | 0.0009        |
+ | 16.6024 | 56000 | 0.0013        |
+ | 16.7507 | 56500 | 0.0015        |
+ | 16.8989 | 57000 | 0.0011        |
+ | 17.0471 | 57500 | 0.0007        |
+ | 17.1954 | 58000 | 0.0007        |
+ | 17.3436 | 58500 | 0.001         |
+ | 17.4918 | 59000 | 0.0011        |
+ | 17.6401 | 59500 | 0.0011        |
+ | 17.7883 | 60000 | 0.001         |
+ | 17.9366 | 60500 | 0.0012        |
+ | 18.0848 | 61000 | 0.001         |
+ | 18.2330 | 61500 | 0.0007        |
+ | 18.3813 | 62000 | 0.0009        |
+ | 18.5295 | 62500 | 0.001         |
+ | 18.6777 | 63000 | 0.0009        |
+ | 18.8260 | 63500 | 0.0011        |
+ | 18.9742 | 64000 | 0.0007        |
+ | 19.1224 | 64500 | 0.0012        |
+ | 19.2707 | 65000 | 0.0005        |
+ | 19.4189 | 65500 | 0.0008        |
+ | 19.5672 | 66000 | 0.001         |
+ | 19.7154 | 66500 | 0.0009        |
+ | 19.8636 | 67000 | 0.001         |
+ 
+ </details>
+ 
+ ### Framework Versions
+ - Python: 3.9.6
+ - Sentence Transformers: 3.4.1
+ - Transformers: 4.48.2
+ - PyTorch: 2.7.0.dev20250131
+ - Accelerate: 1.3.0
+ - Datasets: 3.2.0
+ - Tokenizers: 0.21.0
+ 
+ ## Citation
+ 
+ ### BibTeX
+ 
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+ 
+ <!--
+ ## Glossary
+ 
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+ 
+ <!--
+ ## Model Card Authors
+ 
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+ 
+ <!--
+ ## Model Card Contact
+ 
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "_name_or_path": "TaylorAI/bge-micro-v2",
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 384,
+   "id2label": {
+     "0": "LABEL_0"
+   },
+   "initializer_range": 0.02,
+   "intermediate_size": 1536,
+   "label2id": {
+     "LABEL_0": 0
+   },
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 3,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.48.2",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.4.1",
+     "transformers": "4.48.2",
+     "pytorch": "2.7.0.dev20250131"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:373939a9ce748a09155c14aa18f01035830fbf1f13512ffee0ee311332f36dff
+ size 69565312
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,44 @@
+ {
+   "additional_special_tokens": [
+     "[PAD]",
+     "[UNK]",
+     "[CLS]",
+     "[SEP]",
+     "[MASK]"
+   ],
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,72 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [
+     "[PAD]",
+     "[UNK]",
+     "[CLS]",
+     "[SEP]",
+     "[MASK]"
+   ],
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "max_length": 512,
+   "model_max_length": 512,
+   "never_split": null,
+   "pad_to_multiple_of": null,
+   "pad_token": "[PAD]",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "[SEP]",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff