tomaarsen committed
Commit b239653 · verified · 1 parent: d46ffcd

Add new CrossEncoder model

Files changed (7):
  1. README.md +398 -0
  2. config.json +34 -0
  3. model.safetensors +3 -0
  4. special_tokens_map.json +37 -0
  5. tokenizer.json +0 -0
  6. tokenizer_config.json +65 -0
  7. vocab.txt +0 -0
README.md ADDED
@@ -0,0 +1,398 @@
---
language:
- en
license: apache-2.0
tags:
- sentence-transformers
- cross-encoder
- text-classification
- generated_from_trainer
- dataset_size:578402
- loss:BinaryCrossEntropyLoss
base_model: prajjwal1/bert-tiny
pipeline_tag: text-classification
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
co2_eq_emissions:
  emissions: 7.3866990525881215
  energy_consumed: 0.019003501532248668
  source: codecarbon
  training_type: fine-tuning
  on_cloud: false
  cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
  ram_total_size: 31.777088165283203
  hours_used: 0.099
  hardware_used: 1 x NVIDIA GeForce RTX 3090
model-index:
- name: BERT-tiny trained on GooAQ
  results: []
---

# BERT-tiny trained on GooAQ

This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

This model was trained using [train_script.py](train_script.py).

## Model Details

### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) <!-- at revision 6f75de8b60a9f8a2fdf7b69cbd86d9e64bcb3837 -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
<!-- - **Training Dataset:** Unknown -->
- **Language:** en
- **License:** apache-2.0

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder

# Download from the 🤗 Hub
model = CrossEncoder("cross-encoder-testing/reranker-bert-tiny-gooaq-bce")
# Get scores for pairs of texts
pairs = [
    ['are javascript developers in demand?', "JavaScript is the skill that is most in-demand for IT in 2020, according to a report from developer skills tester DevSkiller. The report, “Top IT Skills report 2020: Demand and Hiring Trends,” has JavaScript switching places with Java when compared to last year's report, with Java in third place this year, behind SQL."],
    ['are javascript developers in demand?', 'In one line difference between the two is: JavaScript is the programming language where as AngularJS is a framework based on JavaScript. ... It is also the basic for all java script based technologies like jquery, angular JS, bootstrap JS and so on. Angular JS is a framework written in javascript and uses MVC architecture.'],
    ['are javascript developers in demand?', 'Java applications are run in a virtual machine or web browser while JavaScript is run on a web browser. Java code is compiled whereas while JavaScript code is in text and in a web page. JavaScript is an OOP scripting language, whereas Java is an OOP programming language.'],
    ['are javascript developers in demand?', 'Things in the body tag are the things that should be displayed: the actual content. Javascript in the body is executed as it is read and as the page is rendered. Javascript in the head is interpreted before anything is rendered.'],
    ['are javascript developers in demand?', 'Web apps tend to be built using JavaScript, CSS and HTML5. Unlike mobile apps, there is no standard software development kit for building web apps. However, developers do have access to templates. Compared to mobile apps, web apps are usually quicker and easier to build — but they are much simpler in terms of features.'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)

# Or rank different texts based on similarity to a single text
ranks = model.rank(
    'are javascript developers in demand?',
    [
        "JavaScript is the skill that is most in-demand for IT in 2020, according to a report from developer skills tester DevSkiller. The report, “Top IT Skills report 2020: Demand and Hiring Trends,” has JavaScript switching places with Java when compared to last year's report, with Java in third place this year, behind SQL.",
        'In one line difference between the two is: JavaScript is the programming language where as AngularJS is a framework based on JavaScript. ... It is also the basic for all java script based technologies like jquery, angular JS, bootstrap JS and so on. Angular JS is a framework written in javascript and uses MVC architecture.',
        'Java applications are run in a virtual machine or web browser while JavaScript is run on a web browser. Java code is compiled whereas while JavaScript code is in text and in a web page. JavaScript is an OOP scripting language, whereas Java is an OOP programming language.',
        'Things in the body tag are the things that should be displayed: the actual content. Javascript in the body is executed as it is read and as the page is rendered. Javascript in the head is interpreted before anything is rendered.',
        'Web apps tend to be built using JavaScript, CSS and HTML5. Unlike mobile apps, there is no standard software development kit for building web apps. However, developers do have access to templates. Compared to mobile apps, web apps are usually quicker and easier to build — but they are much simpler in terms of features.',
    ]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
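
The `ranks` list returned by `CrossEncoder.rank` is sorted by decreasing score, so a short loop is enough to inspect the reranked answers. This is purely illustrative, reusing the `ranks` variable from the snippet above:

```python
# Illustrative only: print the reranked hits from the `ranks` result above
for hit in ranks:
    print(f"{hit['score']:.4f}\t{hit['corpus_id']}")
```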

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->
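
For completeness, here is a rough, unofficial sketch of using the checkpoint with plain 🤗 Transformers instead of Sentence Transformers. It relies only on the fact that `config.json` (added in this same commit) declares a `BertForSequenceClassification` head with a single label; the final `sigmoid` is an assumption used to map the raw logit into [0, 1], not something this card prescribes.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "cross-encoder-testing/reranker-bert-tiny-gooaq-bce"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)  # 1 output label

# A cross-encoder scores the (question, answer) pair jointly in one forward pass
features = tokenizer(
    ["are javascript developers in demand?"],
    ["JavaScript is the skill that is most in-demand for IT in 2020."],
    padding=True,
    truncation=True,
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**features).logits.squeeze(-1)  # shape: (batch_size,)
print(torch.sigmoid(logits))  # assumption: sigmoid maps the logit into [0, 1]
```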

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Cross Encoder Reranking

* Datasets: `gooaq-dev`, `NanoMSMARCO`, `NanoNFCorpus` and `NanoNQ`
* Evaluated with [<code>CrossEncoderRerankingEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderRerankingEvaluator)

| Metric      | gooaq-dev            | NanoMSMARCO          | NanoNFCorpus         | NanoNQ               |
|:------------|:---------------------|:---------------------|:---------------------|:---------------------|
| map         | 0.5677 (+0.0366)     | 0.4280 (-0.0616)     | 0.3397 (+0.0787)     | 0.4149 (-0.0047)     |
| mrr@10      | 0.5558 (+0.0318)     | 0.4129 (-0.0646)     | 0.5196 (+0.0198)     | 0.4132 (-0.0135)     |
| **ndcg@10** | **0.6157 (+0.0245)** | **0.4772 (-0.0632)** | **0.3308 (+0.0058)** | **0.4859 (-0.0147)** |

#### Cross Encoder Nano BEIR

* Dataset: `NanoBEIR_R100_mean`
* Evaluated with [<code>CrossEncoderNanoBEIREvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CrossEncoderNanoBEIREvaluator)

| Metric      | Value                |
|:------------|:---------------------|
| map         | 0.3942 (+0.0041)     |
| mrr@10      | 0.4486 (-0.0194)     |
| **ndcg@10** | **0.4313 (-0.0241)** |
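
As a rough sketch of how results like the Nano BEIR table can be reproduced (assuming a sentence-transformers release that ships the cross-encoder evaluators; the `dataset_names` values below simply mirror the three Nano datasets reported above):

```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CrossEncoderNanoBEIREvaluator

model = CrossEncoder("cross-encoder-testing/reranker-bert-tiny-gooaq-bce")

# Evaluate on the three Nano BEIR subsets reported in the tables above
evaluator = CrossEncoderNanoBEIREvaluator(dataset_names=["msmarco", "nfcorpus", "nq"])
results = evaluator(model)
print(results)  # per-dataset map / mrr@10 / ndcg@10 plus their mean
```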

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 578,402 training samples
* Columns: <code>question</code>, <code>answer</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
  |         | question | answer | label |
  |:--------|:---------|:-------|:------|
  | type    | string   | string | int   |
  | details | <ul><li>min: 21 characters</li><li>mean: 43.81 characters</li><li>max: 96 characters</li></ul> | <ul><li>min: 51 characters</li><li>mean: 252.46 characters</li><li>max: 405 characters</li></ul> | <ul><li>0: ~82.90%</li><li>1: ~17.10%</li></ul> |
* Samples:
  | question | answer | label |
  |:---------|:-------|:------|
  | <code>are javascript developers in demand?</code> | <code>JavaScript is the skill that is most in-demand for IT in 2020, according to a report from developer skills tester DevSkiller. The report, “Top IT Skills report 2020: Demand and Hiring Trends,” has JavaScript switching places with Java when compared to last year's report, with Java in third place this year, behind SQL.</code> | <code>1</code> |
  | <code>are javascript developers in demand?</code> | <code>In one line difference between the two is: JavaScript is the programming language where as AngularJS is a framework based on JavaScript. ... It is also the basic for all java script based technologies like jquery, angular JS, bootstrap JS and so on. Angular JS is a framework written in javascript and uses MVC architecture.</code> | <code>0</code> |
  | <code>are javascript developers in demand?</code> | <code>Java applications are run in a virtual machine or web browser while JavaScript is run on a web browser. Java code is compiled whereas while JavaScript code is in text and in a web page. JavaScript is an OOP scripting language, whereas Java is an OOP programming language.</code> | <code>0</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
  ```json
  {
      "activation_fct": "torch.nn.modules.linear.Identity",
      "pos_weight": 5
  }
  ```
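
For reference, a minimal sketch of constructing this loss (assuming sentence-transformers v4+, where `BinaryCrossEntropyLoss` lives under `sentence_transformers.cross_encoder.losses`). The `pos_weight=5` counteracts the roughly 83%/17% negative/positive label split shown in the statistics above:

```python
import torch
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

model = CrossEncoder("prajjwal1/bert-tiny", num_labels=1)
# Identity activation: the loss consumes raw logits; pos_weight upweights positives
loss = BinaryCrossEntropyLoss(model, pos_weight=torch.tensor(5.0))
```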

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 2048
- `per_device_eval_batch_size`: 2048
- `learning_rate`: 0.0005
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `seed`: 12
- `bf16`: True

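These values map one-to-one onto `CrossEncoderTrainingArguments`. The sketch below is illustrative only and is not the original [train_script.py](train_script.py); it assumes the `model` and `loss` objects from the sections above and a hypothetical `train_dataset` with `question`, `answer`, and `label` columns:

```python
from sentence_transformers.cross_encoder import CrossEncoderTrainer, CrossEncoderTrainingArguments

args = CrossEncoderTrainingArguments(
    output_dir="reranker-bert-tiny-gooaq-bce",  # hypothetical output directory
    num_train_epochs=1,
    per_device_train_batch_size=2048,
    per_device_eval_batch_size=2048,
    learning_rate=5e-4,
    warmup_ratio=0.1,
    seed=12,
    bf16=True,
    eval_strategy="steps",
)
trainer = CrossEncoderTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
trainer.train()
```
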
#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 2048
- `per_device_eval_batch_size`: 2048
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 0.0005
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 12
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional

</details>

### Training Logs
| Epoch  | Step | Training Loss | gooaq-dev_ndcg@10 | NanoMSMARCO_ndcg@10 | NanoNFCorpus_ndcg@10 | NanoNQ_ndcg@10   | NanoBEIR_R100_mean_ndcg@10 |
|:------:|:----:|:-------------:|:-----------------:|:-------------------:|:--------------------:|:----------------:|:--------------------------:|
| -1     | -1   | -             | 0.0887 (-0.5025)  | 0.0063 (-0.5341)    | 0.3262 (+0.0012)     | 0.0000 (-0.5006) | 0.1108 (-0.3445)           |
| 0.0035 | 1    | 1.1945        | -                 | -                   | -                    | -                | -                          |
| 0.0707 | 20   | 1.1664        | 0.4082 (-0.1830)  | 0.1805 (-0.3600)    | 0.3168 (-0.0083)     | 0.2243 (-0.2763) | 0.2405 (-0.2149)           |
| 0.1413 | 40   | 1.1107        | 0.5260 (-0.0652)  | 0.3453 (-0.1951)    | 0.3335 (+0.0085)     | 0.3430 (-0.1576) | 0.3406 (-0.1147)           |
| 0.2120 | 60   | 1.022         | 0.5623 (-0.0289)  | 0.3929 (-0.1475)    | 0.3512 (+0.0262)     | 0.3472 (-0.1535) | 0.3638 (-0.0916)           |
| 0.2827 | 80   | 0.973         | 0.5691 (-0.0221)  | 0.4048 (-0.1356)    | 0.3530 (+0.0280)     | 0.3833 (-0.1174) | 0.3804 (-0.0750)           |
| 0.3534 | 100  | 0.963         | 0.5814 (-0.0098)  | 0.4385 (-0.1019)    | 0.3471 (+0.0221)     | 0.4227 (-0.0779) | 0.4028 (-0.0526)           |
| 0.4240 | 120  | 0.9419        | 0.5963 (+0.0050)  | 0.4106 (-0.1298)    | 0.3540 (+0.0289)     | 0.4843 (-0.0163) | 0.4163 (-0.0391)           |
| 0.4947 | 140  | 0.9331        | 0.5953 (+0.0041)  | 0.4310 (-0.1094)    | 0.3367 (+0.0117)     | 0.4163 (-0.0843) | 0.3947 (-0.0607)           |
| 0.5654 | 160  | 0.9263        | 0.6070 (+0.0158)  | 0.4626 (-0.0778)    | 0.3443 (+0.0193)     | 0.4823 (-0.0184) | 0.4297 (-0.0256)           |
| 0.6360 | 180  | 0.9212        | 0.6069 (+0.0156)  | 0.4602 (-0.0802)    | 0.3391 (+0.0141)     | 0.4782 (-0.0224) | 0.4258 (-0.0295)           |
| 0.7067 | 200  | 0.901         | 0.6126 (+0.0214)  | 0.4602 (-0.0803)    | 0.3413 (+0.0162)     | 0.4780 (-0.0227) | 0.4265 (-0.0289)           |
| 0.7774 | 220  | 0.8997        | 0.6136 (+0.0224)  | 0.4801 (-0.0604)    | 0.3349 (+0.0098)     | 0.4903 (-0.0103) | 0.4351 (-0.0203)           |
| 0.8481 | 240  | 0.9021        | 0.6132 (+0.0220)  | 0.4850 (-0.0554)    | 0.3438 (+0.0188)     | 0.4855 (-0.0151) | 0.4381 (-0.0173)           |
| 0.9187 | 260  | 0.9013        | 0.6188 (+0.0276)  | 0.4820 (-0.0584)    | 0.3387 (+0.0137)     | 0.4851 (-0.0156) | 0.4353 (-0.0201)           |
| 0.9894 | 280  | 0.8996        | 0.6157 (+0.0245)  | 0.4772 (-0.0632)    | 0.3305 (+0.0054)     | 0.4859 (-0.0147) | 0.4312 (-0.0242)           |
| -1     | -1   | -             | 0.6157 (+0.0245)  | 0.4772 (-0.0632)    | 0.3308 (+0.0058)     | 0.4859 (-0.0147) | 0.4313 (-0.0241)           |


### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Energy Consumed**: 0.019 kWh
- **Carbon Emitted**: 0.007 kg of CO2
- **Hours Used**: 0.099 hours

### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB

### Framework Versions
- Python: 3.11.6
- Sentence Transformers: 3.5.0.dev0
- Transformers: 4.48.3
- PyTorch: 2.5.0+cu121
- Accelerate: 1.3.0
- Datasets: 2.20.0
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
@@ -0,0 +1,34 @@
{
  "_name_or_path": "cross-encoder-testing/reranker-bert-tiny-gooaq-bce",
  "architectures": [
    "BertForSequenceClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 128,
  "id2label": {
    "0": "LABEL_0"
  },
  "initializer_range": 0.02,
  "intermediate_size": 512,
  "label2id": {
    "LABEL_0": 0
  },
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 2,
  "num_hidden_layers": 2,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "sentence_transformers": {
    "activation_fn": "torch.nn.modules.activation.Tanh"
  },
  "torch_dtype": "float32",
  "transformers_version": "4.48.3",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1e29bb7709134dfc7b374375802767c4cbed41dfff30eee538b951bfcf472cc1
size 17548796
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
{
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,65 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "max_length": 512,
  "model_max_length": 512,
  "never_split": null,
  "pad_to_multiple_of": null,
  "pad_token": "[PAD]",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "sep_token": "[SEP]",
  "stride": 0,
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff