Nashhz committed on
Commit
1f44fb6
·
verified ·
1 Parent(s): 5c59086

Add new SentenceTransformer model

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "word_embedding_dimension": 384,
+ "pooling_mode_cls_token": false,
+ "pooling_mode_mean_tokens": true,
+ "pooling_mode_max_tokens": false,
+ "pooling_mode_mean_sqrt_len_tokens": false,
+ "pooling_mode_weightedmean_tokens": false,
+ "pooling_mode_lasttoken": false,
+ "include_prompt": true
+ }
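These flags select plain mean pooling: token vectors are averaged, with padding excluded via the attention mask. A minimal NumPy sketch of what `pooling_mode_mean_tokens: true` computes (the function name and toy numbers are illustrative, not the library's internals):

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    """Average token embeddings, counting only non-padding tokens."""
    mask = attention_mask[..., None].astype(float)   # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(axis=1)   # (batch, dim)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)   # avoid divide-by-zero
    return summed / counts

# Toy batch: one sequence of 3 tokens (the last is padding), dim 4
tokens = np.array([[[1., 2., 3., 4.],
                    [3., 4., 5., 6.],
                    [9., 9., 9., 9.]]])
mask = np.array([[1, 1, 0]])
print(mean_pool(tokens, mask))  # [[2. 3. 4. 5.]] — padding row is ignored
```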
README.md ADDED
@@ -0,0 +1,400 @@
+ ---
+ base_model: sentence-transformers/all-MiniLM-L6-v2
+ library_name: sentence-transformers
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:8117
+ - loss:CosineSimilarityLoss
+ widget:
+ - source_sentence: 'Description: I''m looking for a skilled web developer proficient
+ in converting Figma mobile app designs to fully responsive HTML code in Flutter.
+ Key Requirements - Convert Figma designs to HTML, ensuring the output is fully
+ responsive across all devices. - Implement multiple interactive elements as per
+ the original designs. This includes, but is not limited to, sliders and pop-ups.
+ - Utilize clean, efficient code that''s easy to maintain. Ideal Skills and Experience
+ - Extensive experience in Figma to Flutter codeconversion. - Proficient in creating
+ fully responsive web applications. - Strong understanding of interactive web elements.
+ - Exceptional coding skills, with a focus on maintaining code quality and efficiency.'
+ sentences:
+ - 'Skills: Data Entry, eCommerce'
+ - 'Skills: Graphic Design, CSS, HTML, Flutter'
+ - 'Skills: Social Media Marketing, Social Networking, Influencer Marketing, Market
+ Research'
+ - source_sentence: 'Description: I''m looking for an experienced C++ developer to
+ help me with a project involving graph data structures. The main focus is implementing
+ the Breadth-First Search BFS traversal algorithm on a graph. Ideal Skills and
+ Experience - Proficient in C++ - Strong understanding of graph data structures
+ - Experience implementing in competitive Programming - Problem-solving skills
+ This project requires not just coding, but also a deep understanding of how graphs
+ work and how to traverse them efficiently using BFS. Please provide examples of
+ similar projects you have completed in the past.'
+ sentences:
+ - 'Skills: C++ Programming, C Programming, Algorithm, Java, C# Programming'
+ - 'Skills: Banner Design, Graphic Design, Animation, Logo Design, Photoshop'
+ - 'Skills: Website Design, Graphic Design, PHP, HTML, User Interface / IA'
+ - source_sentence: 'Description: I''m looking for a creative designer to create two
+ engaging, superhero-themed Facebook photos for an ad campaign targeting adults.
+ Key Requirements - Design should be fun and playful, appealing to the target audience
+ - Experience in creating social media visuals and understanding of Facebook''s
+ photo specifications - Skills in graphic design and illustration Ideal Freelancer
+ - Previous experience designing for ad campaigns - Portfolio showcasing playful
+ and fun designs - Understanding of the superhero genre and its appeal to adults'
+ sentences:
+ - 'Skills: Joomla, PHP, C++ Programming, Blueprint Calibration, Floorplan Blueprinting'
+ - 'Skills: Graphic Design, Photoshop, Banner Design, Illustration, Illustrator'
+ - 'Skills: Logo Design, Graphic Design, Illustrator, Photoshop, Icon Design'
+ - source_sentence: 'Description: I''m looking for assistance with registering a new
+ Class 2 Digital Signature Certificate DSC for use on the E-filing portal. Ideal
+ Skills and Experience - Proficiency in Digital Signature Certificate DSC registration
+ - Understanding of Class 2 Certificate specifications - Familiarity with E-filing
+ portal requirements and procedures'
+ sentences:
+ - 'Skills: Node.js, Express JS, React.js, SQL, Next.js'
+ - 'Skills: PHP, WordPress, HTML, Website Design, CSS'
+ - 'Skills: Private Client, Digital Marketing, Social Networking'
+ - source_sentence: 'Description: I''m seeking an expert in Google Sheets and data
+ management to create a comprehensive tracking system for student progress. Details
+ - Each student will have their own Google Sheet file. - Each file will contain
+ 6 levels as separate sheets and a checkbox in each sheet for tracking progress.
+ - When the checkbox is ticked, the data needs to be sent to a central database
+ for us to know the student has completed a level and certificates need to be printed.
+ The data to be sent to the database includes - Student''s Name - Current Level
+ - Package Details - Date and time Ideal skills for this project include - Advanced
+ knowledge of Google Sheets - Experience with data management and database creation
+ - Attention to detail to ensure accurate tracking of each student''s progress.
+ Each student''s Google Sheet should include - Their Name - Their Current Level
+ - Details about the Package they are on - A space to track their Progress Please
+ only apply if you have relevant experience and can demonstrate your ability to
+ deliver this project efficiently.'
+ sentences:
+ - 'Skills: Linux, System Admin, Network Administration, Ubuntu'
+ - 'Skills: PHP, Visual Basic, Data Processing, Data Entry, Excel'
+ - 'Skills: Computer Security, Network Administration, Virtual Machines, Web Security,
+ Linux'
+ ---
+
+ # SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) <!-- at revision ea78891063587eb050ed4166b20062eaf978037c -->
+ - **Maximum Sequence Length:** 256 tokens
+ - **Output Dimensionality:** 384 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+ (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
+ (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ (2): Normalize()
+ )
+ ```
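The final `Normalize()` module L2-normalizes each pooled vector, so cosine similarity between two embeddings reduces to a plain dot product. A small sketch of that step under this assumption (pure NumPy, illustrative values):

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    """Module (2): scale each row vector to unit length."""
    norms = np.maximum(np.linalg.norm(x, axis=-1, keepdims=True), eps)
    return x / norms

a = l2_normalize(np.array([[3.0, 4.0]]))  # -> [[0.6, 0.8]]
b = l2_normalize(np.array([[4.0, 3.0]]))  # -> [[0.8, 0.6]]
# After Normalize(), cosine similarity is just a dot product:
print(float(a @ b.T))  # ~0.96
```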
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("Nashhz/SBERT_KFOLD_Job_Descriptions_to_Skills")
+ # Run inference
+ sentences = [
+ "Description: I'm seeking an expert in Google Sheets and data management to create a comprehensive tracking system for student progress. Details - Each student will have their own Google Sheet file. - Each file will contain 6 levels as separate sheets and a checkbox in each sheet for tracking progress. - When the checkbox is ticked, the data needs to be sent to a central database for us to know the student has completed a level and certificates need to be printed. The data to be sent to the database includes - Student's Name - Current Level - Package Details - Date and time Ideal skills for this project include - Advanced knowledge of Google Sheets - Experience with data management and database creation - Attention to detail to ensure accurate tracking of each student's progress. Each student's Google Sheet should include - Their Name - Their Current Level - Details about the Package they are on - A space to track their Progress Please only apply if you have relevant experience and can demonstrate your ability to deliver this project efficiently.",
+ 'Skills: PHP, Visual Basic, Data Processing, Data Entry, Excel',
+ 'Skills: Computer Security, Network Administration, Virtual Machines, Web Security, Linux',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 384]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
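Because the outputs are unit-normalized, `model.similarity` defaults to cosine similarity, and matching a job description to its best skill string is an argmax over one row of that matrix. A self-contained sketch of the ranking step, with made-up 3-d vectors standing in for the real 384-d `model.encode` output:

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between all rows of a and all rows of b."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

# Toy stand-ins for embeddings of one description and two 'Skills: ...' strings
query = np.array([[0.9, 0.1, 0.0]])
skills = np.array([[0.8, 0.2, 0.1],
                   [0.0, 0.1, 0.9]])
scores = cosine_sim(query, skills)[0]
best = int(np.argmax(scores))
print(best)  # 0 -> the first skill string matches best
```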
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### Unnamed Dataset
+
+
+ * Size: 8,117 training samples
+ * Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
+ * Approximate statistics based on the first 1000 samples:
+ | | sentence_0 | sentence_1 | label |
+ |:--------|:------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------|
+ | type | string | string | float |
+ | details | <ul><li>min: 7 tokens</li><li>mean: 139.48 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 16.81 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: -0.07</li><li>mean: 0.46</li><li>max: 0.83</li></ul> |
+ * Samples:
+ | sentence_0 | sentence_1 | label |
+ |:---|:---|:---|
+ | <code>Description: Looking for a Freelance Videographer & Post-Production Editor! We're hosting a charity event near Sandton, Johannesburg, in support of those affected by abuse. The event will run for about 1-2 hours and features a live band performance. Project Scope Event Recording Capture the entire live set approx. 30 minutes including breaks and speakers, totaling around 1 hour. Post-Production Create a dynamic video for social media, similar to a movie trailer, highlighting the live band and key moments from the event. Photography Take a few impactful photos during the event, including post-production edits. Interviews Film one-on-one segments with speakers for inclusion in the post-event video. Sound Design Incorporate music and sound effects, using creative content available online. Delivery Deadline Edited photos and videos need to be submitted by 14 October 2024. This project aims to capture the spirit of the event, supporting the Family Protection Association login to view URL and empowering women while raising funds and awareness for the cause. If you're interested and have a flair for storytelling through video, please reach out! Date 13 October 2024 Time 1300pm to 1400pm 2 hour set</code> | <code>Skills: Video Editing, Video Production, Video Services, Videography, After Effects</code> | <code>0.4115103483200073</code> |
+ | <code>Description: Hi! I am Lradon from Andvids. We are a video production agency from China assisting our clients in finding content creators to produce unboxing videos. General requirements of the videos Video duration 1-3 minutes,without music Format Landscape screen 169 MP4 Content â' show your face to explain product features and demonstrate in English fluently. 30 of the time is used to explain product features and show product details, 70 of the time is used to demonstrate the use of the product and the use process in mutiple sences. â'Don't talking about price, personal privacy information, do not appear two-dimensional code, express bill, license plate, door plate, etc Clarity 1080p. Make sure the environment is clean and bright ,and the lens is stable and does not shake Upload to Amazon Sometimes we need you to upload the videos to Amazon If you are interested in this job,feel free to contact me and please send me an introduction video or anything you have shot login to view URL forward to receiving your login to view URL you!</code> | <code>Skills: Video Editing, Video Production, Videography, Video Services, After Effects</code> | <code>0.4927669167518616</code> |
+ | <code>Description: I'm looking for an expert in electronic circuit board design to create and manufacture a simple electronic board for industrial marine machinery. The ideal candidate should have - Experience in designing circuit boards - Ability to design simple, yet effective electronic boards. - Skills in both design and manufacturing of circuit boards. This project is all about creating a reliable, efficient circuit board that can withstand the rigors of marine use. The board is very straight forward design which will be- dc power supply covering 12 volt or 24 volt dc- but range with charging should cover 08 volts to 32 volts dc- it will have an on off button- when selected to on it will engage a 12 vdc solenoid very small and will activate it for 3 minutes and then when stopped it will do this every 7 days on a timer for 3 minutes- when turned off it will not activate and when turned on again it will start again it will activate the solid for 3 minutes and then when finish it will start a 7 day time to repeat the 3 minute solenoid and will be on constant repeat</code> | <code>Skills: Electronics, Electrical Engineering, PCB Layout, Circuit Design, Engineering</code> | <code>0.2869749069213867</code> |
+ * Loss: [<code>CosineSimilarityLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cosinesimilarityloss) with these parameters:
+ ```json
+ {
+ "loss_fct": "torch.nn.modules.loss.MSELoss"
+ }
+ ```
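In other words, the training objective scores each `(sentence_0, sentence_1)` pair by pushing the cosine similarity of their embeddings toward the float `label` via mean squared error. A minimal NumPy sketch of that objective (an illustration, not the library's actual implementation):

```python
import numpy as np

def cosine_similarity_loss(u, v, labels):
    """MSE between cosine(u, v) and the target score, as in
    CosineSimilarityLoss with loss_fct = MSELoss (sketch only)."""
    u = u / np.linalg.norm(u, axis=-1, keepdims=True)
    v = v / np.linalg.norm(v, axis=-1, keepdims=True)
    cos = (u * v).sum(axis=-1)
    return float(np.mean((cos - labels) ** 2))

# Two toy pairs: identical vectors (cos=1) and orthogonal ones (cos=0)
u = np.array([[1.0, 0.0], [1.0, 1.0]])
v = np.array([[1.0, 0.0], [1.0, -1.0]])
labels = np.array([1.0, 0.0])
print(cosine_similarity_loss(u, v, labels))  # 0.0: both pairs already match their labels
```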
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `per_device_train_batch_size`: 16
+ - `per_device_eval_batch_size`: 16
+ - `num_train_epochs`: 4
+ - `multi_dataset_batch_sampler`: round_robin
+
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: no
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 16
+ - `per_device_eval_batch_size`: 16
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1
+ - `num_train_epochs`: 4
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.0
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: False
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`:
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `use_liger_kernel`: False
+ - `eval_use_gather_object`: False
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: round_robin
+
+ </details>
+
+ ### Training Logs
+ | Epoch | Step | Training Loss |
+ |:------:|:----:|:-------------:|
+ | 0.9843 | 500 | 0.0012 |
+ | 1.9685 | 1000 | 0.0011 |
+ | 2.9528 | 1500 | 0.0008 |
+ | 3.9370 | 2000 | 0.0006 |
+ | 0.9843 | 500 | 0.0009 |
+ | 1.9685 | 1000 | 0.0008 |
+ | 2.9528 | 1500 | 0.0006 |
+ | 3.9370 | 2000 | 0.0005 |
+ | 0.9843 | 500 | 0.0007 |
+ | 1.9685 | 1000 | 0.0007 |
+ | 2.9528 | 1500 | 0.0005 |
+ | 3.9370 | 2000 | 0.0004 |
+ | 0.9843 | 500 | 0.0006 |
+ | 1.9685 | 1000 | 0.0006 |
+ | 2.9528 | 1500 | 0.0004 |
+ | 3.9370 | 2000 | 0.0003 |
+ | 0.9843 | 500 | 0.0005 |
+ | 1.9685 | 1000 | 0.0005 |
+ | 2.9528 | 1500 | 0.0004 |
+ | 3.9370 | 2000 | 0.0003 |
+
+
+ ### Framework Versions
+ - Python: 3.12.6
+ - Sentence Transformers: 3.2.0
+ - Transformers: 4.45.2
+ - PyTorch: 2.4.1+cpu
+ - Accelerate: 1.0.1
+ - Datasets: 3.0.1
+ - Tokenizers: 0.20.1
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+ title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+ author = "Reimers, Nils and Gurevych, Iryna",
+ booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+ month = "11",
+ year = "2019",
+ publisher = "Association for Computational Linguistics",
+ url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,26 @@
+ {
+ "_name_or_path": "output/SBERT_KFOLD_JD_DnS",
+ "architectures": [
+ "BertModel"
+ ],
+ "attention_probs_dropout_prob": 0.1,
+ "classifier_dropout": null,
+ "gradient_checkpointing": false,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 384,
+ "initializer_range": 0.02,
+ "intermediate_size": 1536,
+ "layer_norm_eps": 1e-12,
+ "max_position_embeddings": 512,
+ "model_type": "bert",
+ "num_attention_heads": 12,
+ "num_hidden_layers": 6,
+ "pad_token_id": 0,
+ "position_embedding_type": "absolute",
+ "torch_dtype": "float32",
+ "transformers_version": "4.45.2",
+ "type_vocab_size": 2,
+ "use_cache": true,
+ "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "__version__": {
+ "sentence_transformers": "3.2.0",
+ "transformers": "4.45.2",
+ "pytorch": "2.4.1+cpu"
+ },
+ "prompts": {},
+ "default_prompt_name": null,
+ "similarity_fn_name": null
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6aaf3de6522c4f58ce70420e7b08df65cc746e33a3bcf715dd6390a772245d1c
+ size 90864192
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+ {
+ "idx": 0,
+ "name": "0",
+ "path": "",
+ "type": "sentence_transformers.models.Transformer"
+ },
+ {
+ "idx": 1,
+ "name": "1",
+ "path": "1_Pooling",
+ "type": "sentence_transformers.models.Pooling"
+ },
+ {
+ "idx": 2,
+ "name": "2",
+ "path": "2_Normalize",
+ "type": "sentence_transformers.models.Normalize"
+ }
+ ]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+ "max_seq_length": 256,
+ "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+ "cls_token": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "mask_token": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "sep_token": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,64 @@
+ {
+ "added_tokens_decoder": {
+ "0": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "100": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "101": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "102": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "103": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "clean_up_tokenization_spaces": false,
+ "cls_token": "[CLS]",
+ "do_basic_tokenize": true,
+ "do_lower_case": true,
+ "mask_token": "[MASK]",
+ "max_length": 128,
+ "model_max_length": 256,
+ "never_split": null,
+ "pad_to_multiple_of": null,
+ "pad_token": "[PAD]",
+ "pad_token_type_id": 0,
+ "padding_side": "right",
+ "sep_token": "[SEP]",
+ "stride": 0,
+ "strip_accents": null,
+ "tokenize_chinese_chars": true,
+ "tokenizer_class": "BertTokenizer",
+ "truncation_side": "right",
+ "truncation_strategy": "longest_first",
+ "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff