base_model: pkshatech/GLuCoSE-base-ja
license: apache-2.0
---

# GLuCoSE v2

This model is a general Japanese text embedding model that excels in retrieval tasks. It can run on a CPU and is designed to measure semantic similarity between sentences, as well as to function as a retrieval system that searches passages based on queries.

During inference, the prefix "query: " or "passage: " is required. Please check the Usage section below for details.

## Model Description

The model is based on [GLuCoSE](https://huggingface.co/pkshatech/GLuCoSE-base-ja) and fine-tuned through distillation from several large-scale embedding models and through multi-stage contrastive learning.

- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity

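
As a quick sanity check, the sketch below prints these properties via Sentence Transformers. The model id `pkshatech/GLuCoSE-base-ja-v2` is an assumption for illustration; use this repository's id.

```python
from sentence_transformers import SentenceTransformer

# Model id assumed for illustration; replace it with this repository's id.
model = SentenceTransformer("pkshatech/GLuCoSE-base-ja-v2")

print(model.max_seq_length)                      # expected: 512
print(model.get_sentence_embedding_dimension())  # expected: 768
```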
## Usage

### Direct Usage (Sentence Transformers)

You can perform inference using Sentence Transformers with code along the following lines (the model id and the example sentences are illustrative placeholders; the printed similarities are the values reported in the original example):

```python
from sentence_transformers import SentenceTransformer

# Model id assumed for illustration; replace it with this repository's id.
model = SentenceTransformer("pkshatech/GLuCoSE-base-ja-v2")

# The "query: "/"passage: " prefixes are required; these sentences are illustrative.
sentences = [
    "query: 富士山の高さはどれくらいですか？",
    "passage: 富士山の標高は3776メートルで、日本で最も高い山です。",
    "query: 日本の首都はどこですか？",
    "passage: 日本の首都は東京です。",
]
embeddings = model.encode(sentences)

# Cosine similarities between all sentence pairs.
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# Values from the original example (they depend on the input sentences):
# [[1.0000, 0.6050, 0.4341, 0.5537],
# [0.6050, 1.0000, 0.5018, 0.6815],
# [0.4341, 0.5018, 1.0000, 0.7534],
# [0.5537, 0.6815, 0.7534, 1.0000]]
```
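
Since the model is intended to work as a retriever, the following illustrative sketch ranks passages for a query by cosine similarity. The model id and the passages are assumptions, not part of the original card.

```python
from sentence_transformers import SentenceTransformer

# Model id assumed for illustration; replace it with this repository's id.
model = SentenceTransformer("pkshatech/GLuCoSE-base-ja-v2")

# Remember the required prefixes: "query: " for queries, "passage: " for documents.
query = "query: 富士山の高さはどれくらいですか？"
passages = [
    "passage: 富士山の標高は3776メートルで、日本で最も高い山です。",
    "passage: 琵琶湖は日本で最も大きい湖です。",
    "passage: 東京タワーの高さは333メートルです。",
]

query_emb = model.encode([query])
passage_embs = model.encode(passages)

# Rank passages by cosine similarity to the query (highest first).
scores = model.similarity(query_emb, passage_embs)[0]
for score, passage in sorted(zip(scores.tolist(), passages), reverse=True):
    print(f"{score:.4f}  {passage}")
```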

### Direct Usage (Transformers)

You can also use the model directly with Hugging Face Transformers. The snippet below is a minimal sketch that assumes mean pooling over the last hidden state and the same illustrative model id and sentences; adapt it to this repository as needed.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

# Model id assumed for illustration; replace it with this repository's id.
tokenizer = AutoTokenizer.from_pretrained("pkshatech/GLuCoSE-base-ja-v2")
model = AutoModel.from_pretrained("pkshatech/GLuCoSE-base-ja-v2")

# The "query: "/"passage: " prefixes are required; these sentences are illustrative.
sentences = [
    "query: 富士山の高さはどれくらいですか？",
    "passage: 富士山の標高は3776メートルで、日本で最も高い山です。",
    "query: 日本の首都はどこですか？",
    "passage: 日本の首都は東京です。",
]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**batch)

# Mean pooling over non-padding tokens (pooling strategy assumed).
mask = batch["attention_mask"].unsqueeze(-1).float()
embeddings = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)

similarities = F.cosine_similarity(embeddings.unsqueeze(0), embeddings.unsqueeze(1), dim=2)
print(similarities)
# Values from the original example (they depend on the input sentences):
# [[1.0000, 0.6050, 0.4341, 0.5537],
# [0.6050, 1.0000, 0.5018, 0.6815],
# [0.4341, 0.5018, 1.0000, 0.7534],
# [0.5537, 0.6815, 0.7534, 1.0000]]
```

## Training Details

The fine-tuning of GLuCoSE v2 is carried out through the following steps:

**Step 1: Ensemble distillation**

- The embedded representation was distilled using [E5-mistral](https://huggingface.co/intfloat/e5-mistral-7b-instruct), [gte-Qwen2](https://huggingface.co/Alibaba-NLP/gte-Qwen2-7B-instruct), and [mE5-large](https://huggingface.co/intfloat/multilingual-e5-large) as teacher models (see the sketch after this list).

**Step 2: Contrastive learning**

- Triplets were created from [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88), [MNLI](https://huggingface.co/datasets/MoritzLaurer/multilingual-NLI-26lang-2mil7), [PAWS-X](https://huggingface.co/datasets/paws-x), [JSeM](https://github.com/DaisukeBekki/JSeM) and [Mr.TyDi](https://huggingface.co/datasets/castorini/mr-tydi) and used for training.
- This training aimed to improve the overall performance as a sentence embedding model.

**Step 3: Search-specific contrastive learning**

- To make the model more robust to retrieval tasks, additional two-stage training with QA and retrieval data was conducted.
- In the first stage, the synthetic dataset [auto-wiki-qa](https://huggingface.co/datasets/cl-nagoya/auto-wiki-qa) was used for training, while in the second stage, [JQaRA](https://huggingface.co/datasets/hotchpotch/JQaRA), [MQA](https://huggingface.co/datasets/hpprc/mqa-ja), [Japanese Wikipedia Human Retrieval](https://huggingface.co/datasets/hpprc/emb), [Mr.TyDi](https://huggingface.co/datasets/hpprc/emb), [MIRACL](https://huggingface.co/datasets/hpprc/emb), [Quiz Works](https://huggingface.co/datasets/hpprc/emb) and [Quiz No Mori](https://huggingface.co/datasets/hpprc/emb) were used.

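
The actual training code is not part of this card; the sketch below only illustrates the two kinds of objectives described above with plain PyTorch. Function names, pooling, any teacher-to-student projection, and hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_emb: torch.Tensor, teacher_embs: list[torch.Tensor]) -> torch.Tensor:
    """Step 1 (sketch): pull the student embedding toward each teacher's embedding."""
    loss = torch.zeros((), device=student_emb.device)
    for teacher_emb in teacher_embs:
        # Cosine-based objective; the real recipe (dimension projection, weighting, ...) is assumed.
        loss = loss + (1.0 - F.cosine_similarity(student_emb, teacher_emb, dim=-1)).mean()
    return loss / len(teacher_embs)

def contrastive_loss(query_emb: torch.Tensor, passage_emb: torch.Tensor, temperature: float = 0.05) -> torch.Tensor:
    """Steps 2-3 (sketch): InfoNCE with in-batch negatives over (query, positive passage) pairs."""
    scores = F.cosine_similarity(query_emb.unsqueeze(1), passage_emb.unsqueeze(0), dim=-1) / temperature
    labels = torch.arange(scores.size(0), device=scores.device)  # i-th query matches i-th passage
    return F.cross_entropy(scores, labels)
```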

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Benchmarks

### Retrieval

Evaluated with [MIRACL-ja](https://huggingface.co/datasets/miracl/miracl), [JQaRA](https://huggingface.co/datasets/hotchpotch/JQaRA), [JaCWIR](https://huggingface.co/datasets/hotchpotch/JaCWIR) and [MLDR-ja](https://huggingface.co/datasets/Shitao/MLDR).

| Model | Size | MIRACL<br>Recall@5 | JQaRA<br>nDCG@10 | JaCWIR<br>MAP@10 | MLDR<br>nDCG@10 |
| :---: | :---: | :---: | :---: | :---: | :---: |
| OpenAI/text-embedding-3-small | - | processing... | 38.8 | 81.6 | processing... |
| OpenAI/text-embedding-3-large | - | processing... | processing... | processing... | processing... |
| | | | | | |
| [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 0.6B | 89.2 | 55.4 | **87.6** | 29.8 |
| [cl-nagoya/ruri-large](https://huggingface.co/cl-nagoya/ruri-large) | 0.3B | 78.7 | 62.4 | 85.0 | **37.5** |
| | | | | | |
| [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 0.3B | 84.2 | 47.2 | **85.3** | 25.4 |
| [cl-nagoya/ruri-base](https://huggingface.co/cl-nagoya/ruri-base) | 0.1B | 74.3 | 58.1 | 84.6 | **35.3** |
| [pkshatech/GLuCoSE-base-ja](https://huggingface.co/pkshatech/GLuCoSE-base-ja) | 0.1B | 53.3 | 30.8 | 68.6 | 25.2 |
| **GLuCoSE v2** | 0.1B | **85.5** | **60.6** | **85.3** | 33.8 |

Note: Results for the OpenAI small embeddings on JQaRA and JaCWIR are quoted from the [JQaRA](https://huggingface.co/datasets/hotchpotch/JQaRA) and [JaCWIR](https://huggingface.co/datasets/hotchpotch/JaCWIR) dataset pages.

### JMTEB

Evaluated with [JMTEB](https://github.com/sbintuitions/JMTEB).

| Model | Size | Avg. | Retrieval | STS | Classification | Reranking | Clustering | PairClassification |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| OpenAI/text-embedding-3-small | - | 69.18 | 66.39 | 79.46 | 73.06 | 92.92 | 51.06 | 62.27 |
| OpenAI/text-embedding-3-large | - | 74.05 | 74.48 | 82.52 | 77.58 | 93.58 | 53.32 | 62.35 |
| | | | | | | | | |
| [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) | 0.6B | 70.90 | 70.98 | 79.70 | 72.89 | 92.96 | 51.24 | 62.15 |
| [cl-nagoya/ruri-large](https://huggingface.co/cl-nagoya/ruri-large) | 0.3B | 73.31 | 73.02 | 83.13 | 77.43 | 92.99 | 51.82 | 62.29 |
| | | | | | | | | |
| [intfloat/multilingual-e5-base](https://huggingface.co/intfloat/multilingual-e5-base) | 0.3B | 68.61 | 68.21 | 79.84 | 69.30 | **92.85** | 48.26 | 62.26 |
| [cl-nagoya/ruri-base](https://huggingface.co/cl-nagoya/ruri-base) | 0.1B | 71.91 | 69.82 | 82.87 | 75.58 | 92.91 | **54.16** | 62.38 |
| [pkshatech/GLuCoSE-base-ja](https://huggingface.co/pkshatech/GLuCoSE-base-ja) | 0.1B | 67.29 | 59.02 | 78.71 | **76.82** | 91.90 | 49.78 | **66.39** |
| **GLuCoSE v2** | 0.1B | **72.23** | **73.36** | **82.96** | 74.21 | 93.01 | 48.65 | 62.37 |

Note: Results for OpenAI embeddings and multilingual-e5 models are quoted from the [JMTEB leaderboard](https://github.com/sbintuitions/JMTEB/blob/main/leaderboard.md). Results for ruri are quoted from the [cl-nagoya/ruri-base model card](https://huggingface.co/cl-nagoya/ruri-base/blob/main/README.md).

## Authors

Chihiro Yano, Mocho Go, Hideyuki Tachibana, Hiroto Takegawa, Yotaro Watanabe

## License
This model is published under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).