lsy9874205 committed
Commit 4f0b1cb · verified · 1 Parent(s): 96fb6dd

Add new SentenceTransformer model
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
{
  "word_embedding_dimension": 1024,
  "pooling_mode_cls_token": true,
  "pooling_mode_mean_tokens": false,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
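
This pooling configuration enables only CLS-token pooling over the 1024-dimensional token embeddings produced by the transformer. A minimal sketch of how the Sentence Transformers `models.Pooling` class materialises it (a sketch, not part of the committed files):

```python
from sentence_transformers.models import Pooling

# CLS-token pooling over 1024-dim token embeddings, matching the config above.
pooling = Pooling(
    word_embedding_dimension=1024,
    pooling_mode_cls_token=True,
    pooling_mode_mean_tokens=False,
    pooling_mode_max_tokens=False,
)
print(pooling.get_sentence_embedding_dimension())  # 1024
```
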
README.md ADDED
@@ -0,0 +1,652 @@
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:156
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: 'What does this document say about: In 2024, almost every significant
    model vendor rel...?'
  sentences:
  - 'This remains astonishing to me. I thought a model with the capabilities and output
    quality of GPT-4 needed a datacenter class server with one or more $40,000+ GPUs.

    These models take up enough of my 64GB of RAM that I don’t run them often—they
    don’t leave much room for anything else.

    The fact that they run at all is a testament to the incredible training and inference
    performance gains that we’ve figured out over the past year. It turns out there
    was a lot of low-hanging fruit to be harvested in terms of model efficiency. I
    expect there’s still more to come.'
  - 'In 2024, almost every significant model vendor released multi-modal models. We
    saw the Claude 3 series from Anthropic in March, Gemini 1.5 Pro in April (images,
    audio and video), then September brought Qwen2-VL and Mistral’s Pixtral 12B and
    Meta’s Llama 3.2 11B and 90B vision models. We got audio input and output from
    OpenAI in October, then November saw SmolVLM from Hugging Face and December saw
    image and video models from Amazon Nova.

    In October I upgraded my LLM CLI tool to support multi-modal models via attachments.
    It now has plugins for a whole collection of different vision models.'
  - 'OpenAI made GPT-4o free for all users in May, and Claude 3.5 Sonnet was freely
    available from its launch in June. This was a momentus change, because for the
    previous year free users had mostly been restricted to GPT-3.5 level models, meaning
    new users got a very inaccurate mental model of what a capable LLM could actually
    do.

    That era appears to have ended, likely permanently, with OpenAI’s launch of ChatGPT
    Pro. This $200/month subscription service is the only way to access their most
    capable model, o1 Pro.

    Since the trick behind the o1 series (and the future models it will undoubtedly
    inspire) is to expend more compute time to get better results, I don’t think those
    days of free access to the best available models are likely to return.'
- source_sentence: 'What does this document say about: An interesting point of comparison
    here could be t...?'
  sentences:
  - 'The environmental impact got much, much worse

    The much bigger problem here is the enormous competitive buildout of the infrastructure
    that is imagined to be necessary for these models in the future.

    Companies like Google, Meta, Microsoft and Amazon are all spending billions of
    dollars rolling out new datacenters, with a very material impact on the electricity
    grid and the environment. There’s even talk of spinning up new nuclear power stations,
    but those can take decades.

    Is this infrastructure necessary? DeepSeek v3’s $6m training cost and the continued
    crash in LLM prices might hint that it’s not. But would you want to be the big
    tech executive that argued NOT to build out this infrastructure only to be proven
    wrong in a few years’ time?'
  - 'An interesting point of comparison here could be the way railways rolled out
    around the world in the 1800s. Constructing these required enormous investments
    and had a massive environmental impact, and many of the lines that were built
    turned out to be unnecessary—sometimes multiple lines from different companies
    serving the exact same routes!

    The resulting bubbles contributed to several financial crashes, see Wikipedia
    for Panic of 1873, Panic of 1893, Panic of 1901 and the UK’s Railway Mania. They
    left us with a lot of useful infrastructure and a great deal of bankruptcies and
    environmental damage.

    The year of slop'
  - 'Those US export regulations on GPUs to China seem to have inspired some very
    effective training optimizations!

    The environmental impact got better

    A welcome result of the increased efficiency of the models—both the hosted ones
    and the ones I can run locally—is that the energy usage and environmental impact
    of running a prompt has dropped enormously over the past couple of years.

    OpenAI themselves are charging 100x less for a prompt compared to the GPT-3 days.
    I have it on good authority that neither Google Gemini nor Amazon Nova (two of
    the least expensive model providers) are running prompts at a loss.'
- source_sentence: 'What does this document say about: A lot of people are excited
    about AI agents—an inf...?'
  sentences:
  - 'An interesting point of comparison here could be the way railways rolled out
    around the world in the 1800s. Constructing these required enormous investments
    and had a massive environmental impact, and many of the lines that were built
    turned out to be unnecessary—sometimes multiple lines from different companies
    serving the exact same routes!

    The resulting bubbles contributed to several financial crashes, see Wikipedia
    for Panic of 1873, Panic of 1893, Panic of 1901 and the UK’s Railway Mania. They
    left us with a lot of useful infrastructure and a great deal of bankruptcies and
    environmental damage.

    The year of slop'
  - 'A lot of people are excited about AI agents—an infuriatingly vague term that
    seems to be converging on “AI systems that can go away and act on your behalf”.
    We’ve been talking about them all year, but I’ve seen few if any examples of them
    running in production, despite lots of exciting prototypes.

    I think this is because of gullibility.

    Can we solve this? Honestly, I’m beginning to suspect that you can’t fully solve
    gullibility without achieving AGI. So it may be quite a while before those agent
    dreams can really start to come true!

    Code may be the best application

    Over the course of the year, it’s become increasingly clear that writing code
    is one of the things LLMs are most capable of.'
  - 'The boring yet crucial secret behind good system prompts is test-driven development.
    You don’t write down a system prompt and find ways to test it. You write down
    tests and find a system prompt that passes them.


    It’s become abundantly clear over the course of 2024 that writing good automated
    evals for LLM-powered systems is the skill that’s most needed to build useful
    applications on top of these models. If you have a strong eval suite you can adopt
    new models faster, iterate better and build more reliable and useful product features
    than your competition.

    Vercel’s Malte Ubl:'
- source_sentence: 'What does this document say about: When @v0 first came out we
    were paranoid about pro...?'
  sentences:
  - 'So far, I think they’re a net positive. I’ve used them on a personal level to
    improve my productivity (and entertain myself) in all sorts of different ways.
    I think people who learn how to use them effectively can gain a significant boost
    to their quality of life.

    A lot of people are yet to be sold on their value! Some think their negatives
    outweigh their positives, some think they are all hot air, and some even think
    they represent an existential threat to humanity.

    They’re actually quite easy to build

    The most surprising thing we’ve learned about LLMs this year is that they’re actually
    quite easy to build.'
  - 'DeepSeek v3 is a huge 685B parameter model—one of the largest openly licensed
    models currently available, significantly bigger than the largest of Meta’s Llama
    series, Llama 3.1 405B.

    Benchmarks put it up there with Claude 3.5 Sonnet. Vibe benchmarks (aka the Chatbot
    Arena) currently rank it 7th, just behind the Gemini 2.0 and OpenAI 4o/o1 models.
    This is by far the highest ranking openly licensed model.

    The really impressive thing about DeepSeek v3 is the training cost. The model
    was trained on 2,788,000 H800 GPU hours at an estimated cost of $5,576,000. Llama
    3.1 405B trained 30,840,000 GPU hours—11x that used by DeepSeek v3, for a model
    that benchmarks slightly worse.'
  - 'When @v0 first came out we were paranoid about protecting the prompt with all
    kinds of pre and post processing complexity.

    We completely pivoted to let it rip. A prompt without the evals, models, and especially
    UX is like getting a broken ASML machine without a manual'
- source_sentence: 'What does this document say about: Intuitively, one would expect
    that systems this po...?'
  sentences:
  - 'Intuitively, one would expect that systems this powerful would take millions
    of lines of complex code. Instead, it turns out a few hundred lines of Python
    is genuinely enough to train a basic version!

    What matters most is the training data. You need a lot of data to make these
    things work, and the quantity and quality of the training data appears to be the
    most important factor in how good the resulting model is.

    If you can gather the right data, and afford to pay for the GPUs to train it,
    you can build an LLM.'
  - 'Terminology aside, I remain skeptical as to their utility based, once again,
    on the challenge of gullibility. LLMs believe anything you tell them. Any systems
    that attempts to make meaningful decisions on your behalf will run into the same
    roadblock: how good is a travel agent, or a digital assistant, or even a research
    tool if it can’t distinguish truth from fiction?

    Just the other day Google Search was caught serving up an entirely fake description
    of the non-existant movie “Encanto 2”. It turned out to be summarizing an imagined
    movie listing from a fan fiction wiki.'
  - 'The two main categories I see are people who think AI agents are obviously things
    that go and act on your behalf—the travel agent model—and people who think in
    terms of LLMs that have been given access to tools which they can run in a loop
    as part of solving a problem. The term “autonomy” is often thrown into the mix
    too, again without including a clear definition.

    (I also collected 211 definitions on Twitter a few months ago—here they are in
    Datasette Lite—and had gemini-exp-1206 attempt to summarize them.)

    Whatever the term may mean, agents still have that feeling of perpetually “coming
    soon”.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: cosine_accuracy@1
      value: 1.0
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 1.0
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 1.0
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 1.0
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 1.0
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.3333333333333333
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.20000000000000004
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.10000000000000002
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 1.0
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 1.0
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 1.0
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 1.0
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 1.0
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 1.0
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 1.0
      name: Cosine Map@100
---

# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

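The printed pipeline above can also be assembled by hand. A minimal sketch, assuming the base model weights are downloaded from the Hub:

```python
from sentence_transformers import SentenceTransformer, models

# Rebuild the Transformer -> CLS Pooling -> Normalize pipeline shown above.
word_embedding_model = models.Transformer("Snowflake/snowflake-arctic-embed-l", max_seq_length=512)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 1024
    pooling_mode="cls",
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model, models.Normalize()])
print(model)  # should mirror the architecture printed above
```
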
## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("lsy9874205/legal-ft-2")
# Run inference
sentences = [
    'What does this document say about: Intuitively, one would expect that systems this po...?',
    'Intuitively, one would expect that systems this powerful would take millions of lines of complex code. Instead, it turns out a few hundred lines of Python is genuinely enough to train a basic version!\nWhat matters most is the training data. You need a lot of data to make these things work, and the quantity and quality of the training data appears to be the most important factor in how good the resulting model is.\nIf you can gather the right data, and afford to pay for the GPUs to train it, you can build an LLM.',
    'The two main categories I see are people who think AI agents are obviously things that go and act on your behalf—the travel agent model—and people who think in terms of LLMs that have been given access to tools which they can run in a loop as part of solving a problem. The term “autonomy” is often thrown into the mix too, again without including a clear definition.\n(I also collected 211 definitions on Twitter a few months ago—here they are in Datasette Lite—and had gemini-exp-1206 attempt to summarize them.)\nWhatever the term may mean, agents still have that feeling of perpetually “coming soon”.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```

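Because the model ships a `query` prompt (see `config_sentence_transformers.json`), retrieval-style usage typically encodes queries with that prompt and passages without it. A sketch, where the query and passage strings below are placeholders:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("lsy9874205/legal-ft-2")

# Encode the query with the built-in "query" prompt; encode passages plainly.
query = ["What does this document say about the DeepSeek v3 training cost?"]
passages = [
    "The really impressive thing about DeepSeek v3 is the training cost.",
    "A lot of people are excited about AI agents.",
]
query_embedding = model.encode(query, prompt_name="query")
passage_embeddings = model.encode(passages)

# Embeddings are L2-normalized by the Normalize module, so cosine similarity ranks passages.
scores = model.similarity(query_embedding, passage_embeddings)
print(scores)  # shape [1, 2]; higher is more relevant
```
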
<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Information Retrieval

* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value   |
|:--------------------|:--------|
| cosine_accuracy@1   | 1.0     |
| cosine_accuracy@3   | 1.0     |
| cosine_accuracy@5   | 1.0     |
| cosine_accuracy@10  | 1.0     |
| cosine_precision@1  | 1.0     |
| cosine_precision@3  | 0.3333  |
| cosine_precision@5  | 0.2     |
| cosine_precision@10 | 0.1     |
| cosine_recall@1     | 1.0     |
| cosine_recall@3     | 1.0     |
| cosine_recall@5     | 1.0     |
| cosine_recall@10    | 1.0     |
| **cosine_ndcg@10**  | **1.0** |
| cosine_mrr@10       | 1.0     |
| cosine_map@100      | 1.0     |

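The table above was produced by Sentence Transformers' `InformationRetrievalEvaluator`. A minimal sketch of running a comparable evaluation; the queries, corpus and relevance judgements below are placeholders, not the actual evaluation set:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("lsy9874205/legal-ft-2")

# Placeholder data: query id -> text, doc id -> text, query id -> ids of relevant docs.
queries = {"q1": "What does this document say about: The year of slop...?"}
corpus = {"d1": "The year of slop", "d2": "Code may be the best application"}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries, corpus=corpus, relevant_docs=relevant_docs, name="sanity_check"
)
results = evaluator(model)
print(results)  # keys such as "sanity_check_cosine_ndcg@10"
```
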
<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 156 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 156 samples:
  |         | sentence_0                                                                         | sentence_1                                                                            |
  |:--------|:-----------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------|
  | type    | string                                                                               | string                                                                                    |
  | details | <ul><li>min: 17 tokens</li><li>mean: 25.1 tokens</li><li>max: 33 tokens</li></ul>   | <ul><li>min: 43 tokens</li><li>mean: 134.95 tokens</li><li>max: 214 tokens</li></ul>     |
* Samples:
  | sentence_0                                                                                                                                       | sentence_1                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
  |:-------------------------------------------------------------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
  | <code>What does this document say about: Stuff we figured out about AI in 2023<br><br><br><br><br><br><br><br><br><br><br><br><br>...?</code> | <code>Stuff we figured out about AI in 2023<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>Simon Willison’s Weblog<br>Subscribe<br><br><br><br><br><br><br>Stuff we figured out about AI in 2023<br>31st December 2023<br>2023 was the breakthrough year for Large Language Models (LLMs). I think it’s OK to call these AI—they’re the latest and (currently) most interesting development in the academic field of Artificial Intelligence that dates back to the 1950s.<br>Here’s my attempt to round up the highlights in one place!</code> |
  | <code>What does this document say about: Stuff we figured out about AI in 2023<br><br><br><br><br><br><br><br><br><br><br><br><br>...?</code> | <code>Stuff we figured out about AI in 2023<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>Simon Willison’s Weblog<br>Subscribe<br><br><br><br><br><br><br>Stuff we figured out about AI in 2023<br>31st December 2023<br>2023 was the breakthrough year for Large Language Models (LLMs). I think it’s OK to call these AI—they’re the latest and (currently) most interesting development in the academic field of Artificial Intelligence that dates back to the 1950s.<br>Here’s my attempt to round up the highlights in one place!</code> |
  | <code>What does this document say about: Large Language Models<br>They’re actually quite easy ...?</code>                                     | <code>Large Language Models<br>They’re actually quite easy to build<br>You can run LLMs on your own devices<br>Hobbyists can build their own fine-tuned models<br>We don’t yet know how to build GPT-4<br>Vibes Based Development<br>LLMs are really smart, and also really, really dumb<br>Gullibility is the biggest unsolved problem<br>Code may be the best application<br>The ethics of this space remain diabolically complex<br>My blog in 2023</code>                                                                                                                            |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [
          768,
          512,
          256,
          128,
          64
      ],
      "matryoshka_weights": [
          1,
          1,
          1,
          1,
          1
      ],
      "n_dims_per_step": -1
  }
  ```

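In code, this corresponds roughly to wrapping `MultipleNegativesRankingLoss` in `MatryoshkaLoss`; the listed dimensions also mean the finetuned embeddings can be truncated at load time. A sketch:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# In-batch negatives ranking loss, applied at each Matryoshka dimensionality.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])

# At inference time, the finetuned model can be loaded with truncated embeddings, e.g.:
truncated_model = SentenceTransformer("lsy9874205/legal-ft-2", truncate_dim=256)
```
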
### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin

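Put together, a training run with these non-default settings would look roughly like the sketch below, assuming the current `SentenceTransformerTrainer` API. The two-row dataset and the output directory are placeholders standing in for the real 156-pair `sentence_0`/`sentence_1` dataset:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")
loss = MatryoshkaLoss(model, MultipleNegativesRankingLoss(model), matryoshka_dims=[768, 512, 256, 128, 64])

# Placeholder pairs; the real dataset has 156 question/passage pairs.
train_dataset = Dataset.from_dict({
    "sentence_0": ["What does this document say about: The year of slop...?",
                   "What does this document say about: Code may be the best application...?"],
    "sentence_1": ["The year of slop",
                   "Code may be the best application"],
})

args = SentenceTransformerTrainingArguments(
    output_dir="legal-ft-2",
    num_train_epochs=10,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    eval_strategy="steps",
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # the original run evaluated with an InformationRetrievalEvaluator instead
    loss=loss,
)
trainer.train()
```
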
#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>

### Training Logs
| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| 1.0   | 16   | 0.9692         |
| 2.0   | 32   | 1.0            |
| 3.0   | 48   | 1.0            |
| 3.125 | 50   | 1.0            |
| 4.0   | 64   | 1.0            |
| 5.0   | 80   | 1.0            |
| 6.0   | 96   | 1.0            |
| 6.25  | 100  | 1.0            |
| 7.0   | 112  | 1.0            |
| 8.0   | 128  | 1.0            |
| 9.0   | 144  | 1.0            |
| 9.375 | 150  | 1.0            |
| 10.0  | 160  | 1.0            |


### Framework Versions
- Python: 3.13.1
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.6.0
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
@@ -0,0 +1,25 @@
{
  "_name_or_path": "Snowflake/snowflake-arctic-embed-l",
  "architectures": [
    "BertModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 1024,
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 16,
  "num_hidden_layers": 24,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.48.3",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
config_sentence_transformers.json ADDED
@@ -0,0 +1,12 @@
{
  "__version__": {
    "sentence_transformers": "3.4.1",
    "transformers": "4.48.3",
    "pytorch": "2.6.0"
  },
  "prompts": {
    "query": "Represent this sentence for searching relevant passages: "
  },
  "default_prompt_name": null,
  "similarity_fn_name": "cosine"
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0fdb1a2aa3537bd951ed592793b8c9994b0012ef45ae7f5d6ab0332e9570e653
size 1336413848
modules.json ADDED
@@ -0,0 +1,20 @@
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  },
  {
    "idx": 2,
    "name": "2",
    "path": "2_Normalize",
    "type": "sentence_transformers.models.Normalize"
  }
]
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
{
  "max_seq_length": 512,
  "do_lower_case": false
}
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
{
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,63 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "max_length": 512,
  "model_max_length": 512,
  "pad_to_multiple_of": null,
  "pad_token": "[PAD]",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "sep_token": "[SEP]",
  "stride": 0,
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff