ariakang committed
Commit fd58b71 · verified · 1 Parent(s): 5c9ca2d

Upload 11 files
README.md ADDED
@@ -0,0 +1,183 @@
---
pipeline_tag: feature-extraction
library_name: "transformers.js"
language:
- en
license: mit
---

_Fork of https://huggingface.co/thenlper/gte-small with ONNX weights to be compatible with Transformers.js. See [JavaScript usage](#javascript)._

---

# gte-small

General Text Embeddings (GTE) model.

The GTE models are trained by Alibaba DAMO Academy. They are mainly based on the BERT framework and currently come in three sizes: [GTE-large](https://huggingface.co/thenlper/gte-large), [GTE-base](https://huggingface.co/thenlper/gte-base), and [GTE-small](https://huggingface.co/thenlper/gte-small). The GTE models are trained on a large-scale corpus of relevant text pairs covering a wide range of domains and scenarios, which makes them applicable to a variety of downstream text embedding tasks, including **information retrieval**, **semantic textual similarity**, and **text reranking**.

## Metrics

The performance of the GTE models was compared with that of other popular text embedding models on the MTEB benchmark. For more detailed comparison results, please refer to the [MTEB leaderboard](https://huggingface.co/spaces/mteb/leaderboard).

| Model Name | Model Size (GB) | Dimension | Sequence Length | Average (56) | Clustering (11) | Pair Classification (3) | Reranking (4) | Retrieval (15) | STS (10) | Summarization (1) | Classification (12) |
|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| [**gte-large**](https://huggingface.co/thenlper/gte-large) | 0.67 | 1024 | 512 | **63.13** | 46.84 | 85.00 | 59.13 | 52.22 | 83.35 | 31.66 | 73.33 |
| [**gte-base**](https://huggingface.co/thenlper/gte-base) | 0.22 | 768 | 512 | **62.39** | 46.2 | 84.57 | 58.61 | 51.14 | 82.3 | 31.17 | 73.01 |
| [e5-large-v2](https://huggingface.co/intfloat/e5-large-v2) | 1.34 | 1024 | 512 | 62.25 | 44.49 | 86.03 | 56.61 | 50.56 | 82.05 | 30.19 | 75.24 |
| [e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) | 0.44 | 768 | 512 | 61.5 | 43.80 | 85.73 | 55.91 | 50.29 | 81.05 | 30.28 | 73.84 |
| [**gte-small**](https://huggingface.co/thenlper/gte-small) | 0.07 | 384 | 512 | **61.36** | 44.89 | 83.54 | 57.7 | 49.46 | 82.07 | 30.42 | 72.31 |
| [text-embedding-ada-002](https://platform.openai.com/docs/guides/embeddings) | - | 1536 | 8192 | 60.99 | 45.9 | 84.89 | 56.32 | 49.25 | 80.97 | 30.8 | 70.93 |
| [e5-small-v2](https://huggingface.co/intfloat/e5-small-v2) | 0.13 | 384 | 512 | 59.93 | 39.92 | 84.67 | 54.32 | 49.04 | 80.39 | 31.16 | 72.94 |
| [sentence-t5-xxl](https://huggingface.co/sentence-transformers/sentence-t5-xxl) | 9.73 | 768 | 512 | 59.51 | 43.72 | 85.06 | 56.42 | 42.24 | 82.63 | 30.08 | 73.42 |
| [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) | 0.44 | 768 | 514 | 57.78 | 43.69 | 83.04 | 59.36 | 43.81 | 80.28 | 27.49 | 65.07 |
| [sgpt-bloom-7b1-msmarco](https://huggingface.co/bigscience/sgpt-bloom-7b1-msmarco) | 28.27 | 4096 | 2048 | 57.59 | 38.93 | 81.9 | 55.65 | 48.22 | 77.74 | 33.6 | 66.19 |
| [all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2) | 0.13 | 384 | 512 | 56.53 | 41.81 | 82.41 | 58.44 | 42.69 | 79.8 | 27.9 | 63.21 |
| [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) | 0.09 | 384 | 512 | 56.26 | 42.35 | 82.37 | 58.04 | 41.95 | 78.9 | 30.81 | 63.05 |
| [contriever-base-msmarco](https://huggingface.co/nthakur/contriever-base-msmarco) | 0.44 | 768 | 512 | 56.00 | 41.1 | 82.54 | 53.14 | 41.88 | 76.51 | 30.36 | 66.68 |
| [sentence-t5-base](https://huggingface.co/sentence-transformers/sentence-t5-base) | 0.22 | 768 | 512 | 55.27 | 40.21 | 85.18 | 53.09 | 33.63 | 81.14 | 31.39 | 69.81 |

## Usage

This model can be used with both [Python](#python) and [JavaScript](#javascript).

### Python

Use with [Transformers](https://huggingface.co/docs/transformers/index) and [PyTorch](https://pytorch.org/docs/stable/index.html):
```python
import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel


def average_pool(last_hidden_states: Tensor,
                 attention_mask: Tensor) -> Tensor:
    # Mean-pool over the sequence dimension, ignoring padding tokens
    last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
    return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]


input_texts = [
    "what is the capital of China?",
    "how to implement quick sort in python?",
    "Beijing",
    "sorting algorithms"
]

tokenizer = AutoTokenizer.from_pretrained("Supabase/gte-small")
model = AutoModel.from_pretrained("Supabase/gte-small")

# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')

outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# (Optionally) normalize embeddings
embeddings = F.normalize(embeddings, p=2, dim=1)

# Cosine similarity of the first text against the rest, scaled to 0-100
scores = (embeddings[:1] @ embeddings[1:].T) * 100
print(scores.tolist())
```

Use with [sentence-transformers](https://www.sbert.net/):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

sentences = ['That is a happy person', 'That is a very happy person']

model = SentenceTransformer('Supabase/gte-small')
embeddings = model.encode(sentences)
print(cos_sim(embeddings[0], embeddings[1]))
```

### JavaScript

This model can be used with JavaScript via [Transformers.js](https://huggingface.co/docs/transformers.js/index).

Use with [Deno](https://deno.land/manual/introduction) or [Supabase Edge Functions](https://supabase.com/docs/guides/functions):

```ts
import { serve } from 'https://deno.land/std@0.168.0/http/server.ts'
import { env, pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.4.0'

// Configuration for Deno runtime
env.useBrowserCache = false;
env.allowLocalModels = false;

const pipe = await pipeline(
  'feature-extraction',
  'Supabase/gte-small',
);

serve(async (req) => {
  // Extract input string from JSON body
  const { input } = await req.json();

  // Generate the embedding from the user input
  const output = await pipe(input, {
    pooling: 'mean',
    normalize: true,
  });

  // Extract the embedding output
  const embedding = Array.from(output.data);

  // Return the embedding
  return new Response(
    JSON.stringify({ embedding }),
    { headers: { 'Content-Type': 'application/json' } }
  );
});
```

Use within the browser ([JavaScript Modules](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules)):

```html
<script type="module">

import { pipeline } from 'https://cdn.jsdelivr.net/npm/@xenova/transformers@2.4.0';

const pipe = await pipeline(
  'feature-extraction',
  'Supabase/gte-small',
);

// Generate the embedding from text
const output = await pipe('Hello world', {
  pooling: 'mean',
  normalize: true,
});

// Extract the embedding output
const embedding = Array.from(output.data);

console.log(embedding);

</script>
```

Use within [Node.js](https://nodejs.org/en/docs) or a web bundler ([Webpack](https://webpack.js.org/concepts/), etc.):

```js
import { pipeline } from '@xenova/transformers';

const pipe = await pipeline(
  'feature-extraction',
  'Supabase/gte-small',
);

// Generate the embedding from text
const output = await pipe('Hello world', {
  pooling: 'mean',
  normalize: true,
});

// Extract the embedding output
const embedding = Array.from(output.data);

console.log(embedding);
```

### Limitation

This model works only with English text; longer inputs are truncated to a maximum of 512 tokens.
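
For example, a minimal sketch (using the `transformers` Python package, as in the usage section above) showing the 512-token cap in practice:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Supabase/gte-small")

# A text far longer than the model's 512-token window
long_text = "hello world " * 1000

encoded = tokenizer(long_text, max_length=512, truncation=True, return_tensors="pt")
print(encoded["input_ids"].shape)  # torch.Size([1, 512]) -- tokens past the cap are dropped
```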
config.json ADDED
@@ -0,0 +1,24 @@
{
  "architectures": [
    "BertModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 384,
  "initializer_range": 0.02,
  "intermediate_size": 1536,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float16",
  "transformers_version": "4.28.1",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
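
This config describes a compact BERT encoder: 12 layers, 12 attention heads, a 384-dimensional hidden state (matching the embedding dimension reported in the metrics table), and a 512-token position limit. A small sketch, assuming the `transformers` Python package, for inspecting these values programmatically:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("Supabase/gte-small")
print(config.model_type)               # "bert"
print(config.num_hidden_layers)        # 12
print(config.hidden_size)              # 384 -- the output embedding dimension
print(config.max_position_embeddings)  # 512 -- the sequence-length cap
```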
onnx/model.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:398a29991324e0b383afa13375d681ced3079c83e097fb1ebd9290d7498523b3
size 133093490
onnx/model_fp16.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e90c7b4d19ed915613c6df8fc3d60502b0f055b177c3302461a6256230ff2a5e
size 66749212
onnx/model_quantized.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:18dec105109b6004369799ca4761fb8fb413c64172c02147bcfac186b5c5f6cb
size 34014426
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:344099675ecefb2dc886e6dcc1fba7ccc0c66dbf455e8aa289035ee8d688f125
size 66751231
quantize_config.json ADDED
@@ -0,0 +1,30 @@
{
  "per_channel": true,
  "reduce_range": true,
  "per_model_config": {
    "model": {
      "op_types": [
        "Erf",
        "Shape",
        "Sqrt",
        "Constant",
        "Concat",
        "Mul",
        "MatMul",
        "Softmax",
        "Slice",
        "Add",
        "ReduceMean",
        "Cast",
        "Unsqueeze",
        "Gather",
        "Sub",
        "Div",
        "Transpose",
        "Reshape",
        "Pow"
      ],
      "weight_type": "QInt8"
    }
  }
}
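
This file records the settings used to produce `onnx/model_quantized.onnx`: dynamic int8 quantization with per-channel weights and reduced range. As a rough sketch of how such a model can be generated with onnxruntime's dynamic quantizer (the actual conversion script is not part of this commit, so treat this as an illustration rather than the authoritative recipe):

```python
from onnxruntime.quantization import QuantType, quantize_dynamic

# Dynamically quantize the fp32 ONNX weights to int8, mirroring the
# per_channel / reduce_range / weight_type values in quantize_config.json
quantize_dynamic(
    model_input="onnx/model.onnx",
    model_output="onnx/model_quantized.onnx",
    per_channel=True,
    reduce_range=True,
    weight_type=QuantType.QInt8,
)
```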
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
{
  "cls_token": "[CLS]",
  "mask_token": "[MASK]",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "unk_token": "[UNK]"
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,22 @@
{
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": true,
  "mask_token": "[MASK]",
  "max_length": 128,
  "model_max_length": 512,
  "never_split": null,
  "pad_to_multiple_of": null,
  "pad_token": "[PAD]",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "sep_token": "[SEP]",
  "stride": 0,
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "[UNK]"
}
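
The tokenizer is an uncased `BertTokenizer` (`"do_lower_case": true`) capped at 512 tokens via `model_max_length`. A quick sketch, assuming the `transformers` Python package, of what the lowercasing implies:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Supabase/gte-small")

# do_lower_case=true: casing is normalized away before WordPiece tokenization
assert tokenizer.tokenize("Hello World") == tokenizer.tokenize("hello world")

print(tokenizer.model_max_length)  # 512, matching tokenizer_config.json
```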
vocab.txt ADDED
The diff for this file is too large to render. See raw diff