Add new SentenceTransformer model
Files changed:
- 1_Pooling/config.json (+2 -2)
- README.md (+22 -22)
- config_sentence_transformers.json (+2 -2)
- model.safetensors (+1 -1)
- tokenizer_config.json (+1 -1)
1_Pooling/config.json
CHANGED
@@ -1,7 +1,7 @@
 {
   "word_embedding_dimension": 768,
-  "pooling_mode_cls_token": false,
-  "pooling_mode_mean_tokens": true,
+  "pooling_mode_cls_token": true,
+  "pooling_mode_mean_tokens": false,
   "pooling_mode_max_tokens": false,
   "pooling_mode_mean_sqrt_len_tokens": false,
   "pooling_mode_weightedmean_tokens": false,
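This flips the pooling head from mean pooling to CLS-token pooling. As a minimal sketch of what the new config corresponds to in the sentence-transformers `models` API (illustrative only, not this repo's build script):

```python
# Sketch: CLS-token pooling as configured in 1_Pooling/config.json above.
from sentence_transformers import SentenceTransformer, models

transformer = models.Transformer("Alibaba-NLP/gte-modernbert-base", max_seq_length=100)
pooling = models.Pooling(
    transformer.get_word_embedding_dimension(),  # 768
    pooling_mode_cls_token=True,     # embed each text as its [CLS] token
    pooling_mode_mean_tokens=False,  # rather than the mean of all tokens
)
model = SentenceTransformer(modules=[transformer, pooling])
print(model)  # Pooling({..., 'pooling_mode_cls_token': True, ...})
```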
README.md
CHANGED
@@ -14,7 +14,7 @@ tags:
 - generated_from_trainer
 - dataset_size:9233417
 - loss:ArcFaceInBatchLoss
-base_model:
+base_model: Alibaba-NLP/gte-modernbert-base
 widget:
 - source_sentence: Hayley Vaughan portrayed Ripa on the ABC daytime soap opera , ``
   All My Children `` , between 1990 and 2002 .
@@ -79,34 +79,34 @@ model-index:
       type: test
     metrics:
     - type: cosine_accuracy@1
-      value: 0.
+      value: 0.5861241448475948
       name: Cosine Accuracy@1
     - type: cosine_precision@1
-      value: 0.
+      value: 0.5861241448475948
       name: Cosine Precision@1
     - type: cosine_recall@1
-      value: 0.
+      value: 0.5679885764966713
       name: Cosine Recall@1
     - type: cosine_ndcg@10
-      value: 0.
+      value: 0.7729838064849864
       name: Cosine Ndcg@10
     - type: cosine_mrr@1
-      value: 0.
+      value: 0.5861241448475948
       name: Cosine Mrr@1
     - type: cosine_map@100
-      value: 0.
+      value: 0.7216697804426214
       name: Cosine Map@100
 ---
 
 # Redis fine-tuned BiEncoder model for semantic caching on LangCache
 
-This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [
+This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-modernbert-base](https://huggingface.co/Alibaba-NLP/gte-modernbert-base) on the [LangCache Sentence Pairs (all)](https://huggingface.co/datasets/redis/langcache-sentencepairs-v2) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for sentence pair similarity.
 
 ## Model Details
 
 ### Model Description
 - **Model Type:** Sentence Transformer
-- **Base model:** [
+- **Base model:** [Alibaba-NLP/gte-modernbert-base](https://huggingface.co/Alibaba-NLP/gte-modernbert-base) <!-- at revision e7f32e3c00f91d699e8c43b53106206bcc72bb22 -->
 - **Maximum Sequence Length:** 100 tokens
 - **Output Dimensionality:** 768 dimensions
 - **Similarity Function:** Cosine Similarity
@@ -126,7 +126,7 @@ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [a
 ```
 SentenceTransformer(
   (0): Transformer({'max_seq_length': 100, 'do_lower_case': False, 'architecture': 'ModernBertModel'})
-  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token':
+  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
 )
 ```
 
@@ -159,9 +159,9 @@ print(embeddings.shape)
 # Get the similarity scores for the embeddings
 similarities = model.similarity(embeddings, embeddings)
 print(similarities)
-# tensor([[
-# [0.
-# [0.9922,
+# tensor([[1.0000, 0.9961, 0.9922],
+#         [0.9961, 1.0000, 0.9961],
+#         [0.9922, 0.9961, 0.9961]], dtype=torch.bfloat16)
 ```
 
 <!--
@@ -197,14 +197,14 @@ You can finetune this model on your own dataset.
 * Dataset: `test`
 * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
 
-| Metric | Value
-|
-| cosine_accuracy@1 | 0.
-| cosine_precision@1 | 0.
-| cosine_recall@1 | 0.
-| **cosine_ndcg@10** | **0.
-| cosine_mrr@1 | 0.
-| cosine_map@100 | 0.
+| Metric             | Value     |
+|:-------------------|:----------|
+| cosine_accuracy@1  | 0.5861    |
+| cosine_precision@1 | 0.5861    |
+| cosine_recall@1    | 0.568     |
+| **cosine_ndcg@10** | **0.773** |
+| cosine_mrr@1       | 0.5861    |
+| cosine_map@100     | 0.7217    |
 
 <!--
 ## Bias, Risks and Limitations
@@ -277,7 +277,7 @@ You can finetune this model on your own dataset.
 ### Training Logs
 | Epoch | Step | test_cosine_ndcg@10 |
 |:-----:|:----:|:-------------------:|
-| -1 | -1 | 0.
+| -1    | -1   | 0.7730              |
 
 
 ### Framework Versions
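The metric values filled in above were produced by the `InformationRetrievalEvaluator` on the `test` split. A minimal sketch of such a run, with a tiny hypothetical query/corpus standing in for the LangCache data (the repo id is likewise hypothetical):

```python
# Sketch: how the test_cosine_* metrics above are computed; toy data only.
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("redis/langcache-embed-v3")  # hypothetical repo id

queries = {"q1": "How do I reset my password?"}
corpus = {"d1": "Steps to reset a forgotten password.", "d2": "Pricing tiers."}
relevant_docs = {"q1": {"d1"}}  # ground-truth relevance judgments

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="test")
results = evaluator(model)
print(results["test_cosine_ndcg@10"])
```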
config_sentence_transformers.json
CHANGED
@@ -1,5 +1,4 @@
 {
-  "model_type": "SentenceTransformer",
   "__version__": {
     "sentence_transformers": "5.1.0",
     "transformers": "4.56.0",
@@ -10,5 +9,6 @@
     "document": ""
   },
   "default_prompt_name": null,
-  "similarity_fn_name": "cosine"
+  "similarity_fn_name": "cosine",
+  "model_type": "SentenceTransformer"
 }
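`similarity_fn_name` stays `cosine`; the `model_type` key has simply moved to the end of the file. That setting is what `model.similarity()` dispatches on, as in this small sketch (the base model is used as a stand-in for this repo):

```python
# Sketch: "similarity_fn_name": "cosine" selects the metric behind
# model.similarity(); stand-in model id.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Alibaba-NLP/gte-modernbert-base")
embeddings = model.encode(["semantic caching", "cache hits by meaning"])
print(model.similarity_fn_name)                  # cosine
print(model.similarity(embeddings, embeddings))  # 2x2 cosine-similarity matrix
```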
model.safetensors
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:95d02211c4cca89113f9f3e93ed91f5176bf50170faa2cb835f7bfea15bb9dd2
 size 298041696
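The LFS pointer pins the weight blob by SHA-256, so a download can be verified against it; a quick sketch (the local path is illustrative):

```python
# Sketch: verify a downloaded model.safetensors against the LFS pointer above.
import hashlib

EXPECTED = "95d02211c4cca89113f9f3e93ed91f5176bf50170faa2cb835f7bfea15bb9dd2"

digest = hashlib.sha256()
with open("model.safetensors", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
        digest.update(chunk)
assert digest.hexdigest() == EXPECTED, "checksum mismatch"
```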
tokenizer_config.json
CHANGED
@@ -938,7 +938,7 @@
     "input_ids",
     "attention_mask"
   ],
-  "model_max_length":
+  "model_max_length": 1000000000000000019884624838656,
   "pad_token": "[PAD]",
   "sep_token": "[SEP]",
   "tokenizer_class": "PreTrainedTokenizerFast",
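The conspicuous `model_max_length` is not a real limit: it is transformers' `VERY_LARGE_INTEGER` sentinel, `int(1e30)` after float rounding, meaning the tokenizer itself imposes no cap; truncation is governed by the model's `max_seq_length` of 100 shown in the README.

```python
# Sketch: the sentinel is exactly int(1e30) pushed through float rounding.
assert int(1e30) == 1000000000000000019884624838656
```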