---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:156
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: What concerns do some people have regarding the value and impact of LLMs?
sentences:
- >-
I think people who complain that LLM improvement has slowed are often
missing the enormous advances in these multi-modal models. Being able to
run prompts against images (and audio and video) is a fascinating new
way to apply these models.
Voice and live camera mode are science fiction come to life
The audio and live video modes that have started to emerge deserve a
special mention.
The ability to talk to ChatGPT first arrived in September 2023, but it
was mostly an illusion: OpenAI used their excellent Whisper
speech-to-text model and a new text-to-speech model (creatively named
tts-1) to enable conversations with the ChatGPT mobile apps, but the
actual model just saw text.
- >-
So far, I think they’re a net positive. I’ve used them on a personal
level to improve my productivity (and entertain myself) in all sorts of
different ways. I think people who learn how to use them effectively can
gain a significant boost to their quality of life.
A lot of people are yet to be sold on their value! Some think their
negatives outweigh their positives, some think they are all hot air, and
some even think they represent an existential threat to humanity.
They’re actually quite easy to build
The most surprising thing we’ve learned about LLMs this year is that
they’re actually quite easy to build.
- >-
The GPT-4 barrier was comprehensively broken
In my December 2023 review I wrote about how We don’t yet know how to
build GPT-4—OpenAI’s best model was almost a year old at that point, yet
no other AI lab had produced anything better. What did OpenAI know that
the rest of us didn’t?
I’m relieved that this has changed completely in the past twelve months.
18 organizations now have models on the Chatbot Arena Leaderboard that
rank higher than the original GPT-4 from March 2023 (GPT-4-0314 on the
board)—70 models in total.
- source_sentence: >-
What organizations have produced better-than-GPT-3 class models in the
past year?
sentences:
- >-
Here’s the sequel to this post: Things we learned about LLMs in 2024.
Large Language Models
In the past 24-36 months, our species has discovered that you can take a
GIANT corpus of text, run it through a pile of GPUs, and use it to
create a fascinating new kind of software.
LLMs can do a lot of things. They can answer questions, summarize
documents, translate from one language to another, extract information
and even write surprisingly competent code.
They can also help you cheat at your homework, generate unlimited
streams of fake content and be used for all manner of nefarious
purposes.
- >-
A year ago, the only organization that had released a generally useful
LLM was OpenAI. We’ve now seen better-than-GPT-3 class models produced
by Anthropic, Mistral, Google, Meta, EleutherAI, Stability AI, TII in
Abu Dhabi (Falcon), Microsoft Research, xAI, Replit, Baidu and a bunch
of other organizations.
The training cost (hardware and electricity) is still
significant—initially millions of dollars, but that seems to have
dropped to the tens of thousands already. Microsoft’s Phi-2 claims to
have used “14 days on 96 A100 GPUs”, which works out at around $35,000
using current Lambda pricing.
- >-
One way to think about these models is an extension of the
chain-of-thought prompting trick, first explored in the May 2022 paper
Large Language Models are Zero-Shot Reasoners.
This is that trick where, if you get a model to talk out loud about a
problem it’s solving, you often get a result which the model would not
have achieved otherwise.
o1 takes this process and further bakes it into the model itself. The
details are somewhat obfuscated: o1 models spend “reasoning tokens”
thinking through the problem that are not directly visible to the user
(though the ChatGPT UI shows a summary of them), then outputs a final
result.
- source_sentence: >-
What are AI agents commonly understood to be, according to the context
provided?
sentences:
- >-
Except... you can run generated code to see if it’s correct. And with
patterns like ChatGPT Code Interpreter the LLM can execute the code
itself, process the error message, then rewrite it and keep trying until
it works!
So hallucination is a much lesser problem for code generation than for
anything else. If only we had the equivalent of Code Interpreter for
fact-checking natural language!
How should we feel about this as software engineers?
On the one hand, this feels like a threat: who needs a programmer if
ChatGPT can write code for you?
- >-
A lot of people are excited about AI agents—an infuriatingly vague term
that seems to be converging on “AI systems that can go away and act on
your behalf”. We’ve been talking about them all year, but I’ve seen few
if any examples of them running in production, despite lots of exciting
prototypes.
I think this is because of gullibility.
Can we solve this? Honestly, I’m beginning to suspect that you can’t
fully solve gullibility without achieving AGI. So it may be quite a
while before those agent dreams can really start to come true!
Code may be the best application
Over the course of the year, it’s become increasingly clear that writing
code is one of the things LLMs are most capable of.
- >-
Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased
context lengths. Last year most models accepted 4,096 or 8,192 tokens,
with the notable exception of Claude 2.1 which accepted 200,000. Today
every serious provider has a 100,000+ token model, and Google’s Gemini
series accepts up to 2 million.
- source_sentence: How can hobbyists create their own fine-tuned models?
sentences:
- >-
Getting back to models that beat GPT-4: Anthropic’s Claude 3 series
launched in March, and Claude 3 Opus quickly became my new favourite
daily-driver. They upped the ante even more in June with the launch of
Claude 3.5 Sonnet—a model that is still my favourite six months later
(though it got a significant upgrade on October 22, confusingly keeping
the same 3.5 version number. Anthropic fans have since taken to calling
it Claude 3.6).
- >-
Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased
context lengths. Last year most models accepted 4,096 or 8,192 tokens,
with the notable exception of Claude 2.1 which accepted 200,000. Today
every serious provider has a 100,000+ token model, and Google’s Gemini
series accepts up to 2 million.
- >-
I run a bunch of them on my laptop. I run Mistral 7B (a surprisingly
great model) on my iPhone. You can install several different apps to get
your own, local, completely private LLM. My own LLM project provides a
CLI tool for running an array of different models via plugins.
You can even run them entirely in your browser using WebAssembly and the
latest Chrome!
Hobbyists can build their own fine-tuned models
I said earlier that building an LLM was still out of reach of hobbyists.
That may be true for training from scratch, but fine-tuning one of those
models is another matter entirely.
- source_sentence: What is the significance of prompt engineering in DALL-E 3?
sentences:
- >-
Now add a walrus: Prompt engineering in DALL-E 3
32.8k
41.2k
Web LLM runs the vicuna-7b Large Language Model entirely in your
browser, and it’s very impressive
32.5k
38.2k
ChatGPT can’t access the internet, even though it really looks like it
can
30.5k
34.2k
Stanford Alpaca, and the acceleration of on-device large language model
development
29.7k
35.7k
Run Llama 2 on your own Mac using LLM and Homebrew
27.9k
33.6k
Midjourney 5.1
26.7k
33.4k
Think of language models like ChatGPT as a “calculator for words”
25k
31.8k
Multi-modal prompt injection image attacks against GPT-4V
23.7k
27.4k
- |-
blogging
68
ai
1092
generative-ai
937
llms
925
Next: Tom Scott, and the formidable power of escalating streaks
Previous: Last weeknotes of 2023
Colophon
©
2002
2003
2004
2005
2006
2007
2008
2009
2010
2011
2012
2013
2014
2015
2016
2017
2018
2019
2020
2021
2022
2023
2024
2025
- >-
The environmental impact got much, much worse
The much bigger problem here is the enormous competitive buildout of the
infrastructure that is imagined to be necessary for these models in the
future.
Companies like Google, Meta, Microsoft and Amazon are all spending
billions of dollars rolling out new datacenters, with a very material
impact on the electricity grid and the environment. There’s even talk of
spinning up new nuclear power stations, but those can take decades.
Is this infrastructure necessary? DeepSeek v3’s $6m training cost and
the continued crash in LLM prices might hint that it’s not. But would
you want to be the big tech executive that argued NOT to build out this
infrastructure only to be proven wrong in a few years’ time?
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
results:
- task:
type: information-retrieval
name: Information Retrieval
dataset:
name: Unknown
type: unknown
metrics:
- type: cosine_accuracy@1
value: 0.875
name: Cosine Accuracy@1
- type: cosine_accuracy@3
value: 1
name: Cosine Accuracy@3
- type: cosine_accuracy@5
value: 1
name: Cosine Accuracy@5
- type: cosine_accuracy@10
value: 1
name: Cosine Accuracy@10
- type: cosine_precision@1
value: 0.875
name: Cosine Precision@1
- type: cosine_precision@3
value: 0.3333333333333333
name: Cosine Precision@3
- type: cosine_precision@5
value: 0.20000000000000004
name: Cosine Precision@5
- type: cosine_precision@10
value: 0.10000000000000002
name: Cosine Precision@10
- type: cosine_recall@1
value: 0.875
name: Cosine Recall@1
- type: cosine_recall@3
value: 1
name: Cosine Recall@3
- type: cosine_recall@5
value: 1
name: Cosine Recall@5
- type: cosine_recall@10
value: 1
name: Cosine Recall@10
- type: cosine_ndcg@10
value: 0.9538662191964322
name: Cosine Ndcg@10
- type: cosine_mrr@10
value: 0.9375
name: Cosine Mrr@10
- type: cosine_map@100
value: 0.9375
name: Cosine Map@100
---

SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
This is a sentence-transformers model finetuned from Snowflake/snowflake-arctic-embed-l. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: Snowflake/snowflake-arctic-embed-l
- Maximum Sequence Length: 512 tokens
- Output Dimensionality: 1024 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
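The three modules correspond to a BERT encoder, CLS-token pooling, and L2 normalization. For illustration, here is a minimal sketch of the equivalent computation using the plain transformers library; it assumes the repository also exposes standard Hugging Face weights, which sentence-transformers repositories normally do:

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Sketch: reproduce Transformer -> CLS pooling -> Normalize by hand.
tokenizer = AutoTokenizer.from_pretrained("llm-wizard/legal-ft-v0")
bert = AutoModel.from_pretrained("llm-wizard/legal-ft-v0")

batch = tokenizer(
    ["an example sentence"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = bert(**batch).last_hidden_state  # (batch, seq_len, 1024)
cls = token_embeddings[:, 0]                 # pooling_mode_cls_token: take the [CLS] vector
embeddings = F.normalize(cls, p=2, dim=1)    # the Normalize() module: unit-length vectors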
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("llm-wizard/legal-ft-v0")
# Run inference
sentences = [
'What is the significance of prompt engineering in DALL-E 3?',
'Now add a walrus: Prompt engineering in DALL-E 3\n32.8k\n41.2k\n\n\nWeb LLM runs the vicuna-7b Large Language Model entirely in your browser, and it’s very impressive\n32.5k\n38.2k\n\n\nChatGPT can’t access the internet, even though it really looks like it can\n30.5k\n34.2k\n\n\nStanford Alpaca, and the acceleration of on-device large language model development\n29.7k\n35.7k\n\n\nRun Llama 2 on your own Mac using LLM and Homebrew\n27.9k\n33.6k\n\n\nMidjourney 5.1\n26.7k\n33.4k\n\n\nThink of language models like ChatGPT as a “calculator for words”\n25k\n31.8k\n\n\nMulti-modal prompt injection image attacks against GPT-4V\n23.7k\n27.4k',
'The environmental impact got much, much worse\nThe much bigger problem here is the enormous competitive buildout of the infrastructure that is imagined to be necessary for these models in the future.\nCompanies like Google, Meta, Microsoft and Amazon are all spending billions of dollars rolling out new datacenters, with a very material impact on the electricity grid and the environment. There’s even talk of spinning up new nuclear power stations, but those can take decades.\nIs this infrastructure necessary? DeepSeek v3’s $6m training cost and the continued crash in LLM prices might hint that it’s not. But would you want to be the big tech executive that argued NOT to build out this infrastructure only to be proven wrong in a few years’ time?',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
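Because this model was trained with MatryoshkaLoss (see Training Details below), its embeddings can also be truncated to a shorter prefix for cheaper storage and search. A sketch using the library's truncate_dim option; the choice of 256 dimensions is illustrative, picked from the matryoshka_dims used in training:

from sentence_transformers import SentenceTransformer

# Keep only the first 256 of the 1024 output dimensions.
model_256 = SentenceTransformer("llm-wizard/legal-ft-v0", truncate_dim=256)
embeddings = model_256.encode(["What is the significance of prompt engineering in DALL-E 3?"])
print(embeddings.shape)
# (1, 256)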
Evaluation
Metrics
Information Retrieval
- Evaluated with InformationRetrievalEvaluator
Metric | Value |
---|---|
cosine_accuracy@1 | 0.875 |
cosine_accuracy@3 | 1.0 |
cosine_accuracy@5 | 1.0 |
cosine_accuracy@10 | 1.0 |
cosine_precision@1 | 0.875 |
cosine_precision@3 | 0.3333 |
cosine_precision@5 | 0.2 |
cosine_precision@10 | 0.1 |
cosine_recall@1 | 0.875 |
cosine_recall@3 | 1.0 |
cosine_recall@5 | 1.0 |
cosine_recall@10 | 1.0 |
cosine_ndcg@10 | 0.9539 |
cosine_mrr@10 | 0.9375 |
cosine_map@100 | 0.9375 |
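Since each evaluation query has exactly one relevant passage, precision@k is capped at 1/k, so the 0.3333, 0.2 and 0.1 values above indicate the relevant passage was retrieved for every query. The card does not include the evaluation script; below is a minimal sketch of how InformationRetrievalEvaluator is typically wired up, with placeholder data:

from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Placeholder data: each query id maps to exactly one relevant passage id.
queries = {"q1": "What model do I run on my iPhone?"}
corpus = {"d1": "I run Mistral 7B (a surprisingly great model) on my iPhone.",
          "d2": "The GPT-4 barrier was comprehensively broken"}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
results = evaluator(model)  # dict containing the cosine_* metrics shown above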
Training Details
Training Dataset
Unnamed Dataset
- Size: 156 training samples
- Columns: sentence_0 and sentence_1
- Approximate statistics based on the first 156 samples:
 | sentence_0 | sentence_1 |
---|---|---|
type | string | string |
details | min: 11 tokens, mean: 20.34 tokens, max: 36 tokens | min: 43 tokens, mean: 134.95 tokens, max: 214 tokens |
- Samples:
sentence_0 | sentence_1 |
---|---|
What model do I run on my iPhone? | I run a bunch of them on my laptop. I run Mistral 7B (a surprisingly great model) on my iPhone. You can install several different apps to get your own, local, completely private LLM. My own LLM project provides a CLI tool for running an array of different models via plugins. You can even run them entirely in your browser using WebAssembly and the latest Chrome! Hobbyists can build their own fine-tuned models. I said earlier that building an LLM was still out of reach of hobbyists. That may be true for training from scratch, but fine-tuning one of those models is another matter entirely. |
How can hobbyists create their own fine-tuned models? | I run a bunch of them on my laptop. I run Mistral 7B (a surprisingly great model) on my iPhone. You can install several different apps to get your own, local, completely private LLM. My own LLM project provides a CLI tool for running an array of different models via plugins. You can even run them entirely in your browser using WebAssembly and the latest Chrome! Hobbyists can build their own fine-tuned models. I said earlier that building an LLM was still out of reach of hobbyists. That may be true for training from scratch, but fine-tuning one of those models is another matter entirely. |
What is the total cost to process 68,000 images mentioned in the context? | That’s a total cost of $1.68 to process 68,000 images. That’s so absurdly cheap I had to run the numbers three times to confirm I got it right. How good are those descriptions? Here’s what I got from this command: llm -m gemini-1.5-flash-8b-latest describe -a IMG_1825.jpeg |
- Loss: MatryoshkaLoss with these parameters:
{
    "loss": "MultipleNegativesRankingLoss",
    "matryoshka_dims": [768, 512, 256, 128, 64],
    "matryoshka_weights": [1, 1, 1, 1, 1],
    "n_dims_per_step": -1
}
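The card does not ship the training script, but the configuration above corresponds to wrapping MultipleNegativesRankingLoss (in-batch contrastive ranking) in MatryoshkaLoss, which applies that objective at each truncated dimensionality. A minimal sketch using the standard sentence-transformers API:

from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")
inner_loss = MultipleNegativesRankingLoss(model)  # treats other in-batch pairs as negatives
loss = MatryoshkaLoss(
    model,
    inner_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],  # as listed above
    matryoshka_weights=[1, 1, 1, 1, 1],
)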
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 10
- per_device_eval_batch_size: 10
- num_train_epochs: 10
- multi_dataset_batch_sampler: round_robin
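Put together, the training run likely looked something like the sketch below, reusing the model, loss, and evaluator objects from the earlier sketches. The eval_steps=50 value is an assumption inferred from the 50-step rows in the training logs below; the real dataset holds all 156 pairs:

from datasets import Dataset
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments

# One illustrative pair; the actual dataset has 156 (sentence_0, sentence_1) rows.
train_dataset = Dataset.from_dict({
    "sentence_0": ["What model do I run on my iPhone?"],
    "sentence_1": ["I run Mistral 7B (a surprisingly great model) on my iPhone."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="legal-ft-v0",
    num_train_epochs=10,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    eval_strategy="steps",
    eval_steps=50,  # assumption: matches the 50-step rows in the training logs
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
    evaluator=evaluator,
)
trainer.train()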
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 10
- per_device_eval_batch_size: 10
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 10
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
Training Logs
Epoch | Step | cosine_ndcg@10 |
---|---|---|
1.0 | 16 | 0.9638 |
2.0 | 32 | 0.9539 |
3.0 | 48 | 0.9539 |
3.125 | 50 | 0.9539 |
4.0 | 64 | 0.9539 |
5.0 | 80 | 0.9539 |
6.0 | 96 | 0.9539 |
6.25 | 100 | 0.9539 |
7.0 | 112 | 0.9539 |
8.0 | 128 | 0.9539 |
9.0 | 144 | 0.9539 |
9.375 | 150 | 0.9539 |
10.0 | 160 | 0.9539 |
Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.2
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MatryoshkaLoss
@misc{kusupati2024matryoshka,
title={Matryoshka Representation Learning},
author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
year={2024},
eprint={2205.13147},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}