rawsh committed on
Commit 877d41e · 1 Parent(s): 314aa12

Update README.md

Files changed (1): README.md +51 -0
README.md CHANGED
@@ -22,6 +22,57 @@ datasets:
# multi-qa-MiniLM-distill-onnx-L6-cos-v1
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and was designed for **semantic search**. It has been trained on 215M (question, answer) pairs from diverse sources. For an introduction to semantic search, have a look at: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html)

+ ## Usage (ONNX Runtime)
+ Using [optimum](https://huggingface.co/docs/optimum):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+ from optimum.onnxruntime import ORTModelForFeatureExtraction
+ from transformers import AutoTokenizer, Pipeline
+
+ # copied from the model card: mean pooling over token embeddings,
+ # weighted by the attention mask
+ def mean_pooling(model_output, attention_mask):
+     token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
+     input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
+     return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
+
+
+ class SentenceEmbeddingPipeline(Pipeline):
+     def _sanitize_parameters(self, **kwargs):
+         # we don't have any hyperparameters to sanitize
+         preprocess_kwargs = {}
+         return preprocess_kwargs, {}, {}
+
+     def preprocess(self, inputs):
+         # tokenize the raw text into padded/truncated tensors
+         encoded_inputs = self.tokenizer(inputs, padding=True, truncation=True, return_tensors="pt")
+         return encoded_inputs
+
+     def _forward(self, model_inputs):
+         # run the ONNX model; keep the attention mask around for pooling
+         outputs = self.model(**model_inputs)
+         return {"outputs": outputs, "attention_mask": model_inputs["attention_mask"]}
+
+     def postprocess(self, model_outputs):
+         # perform pooling
+         sentence_embeddings = mean_pooling(model_outputs["outputs"], model_outputs["attention_mask"])
+         # normalize embeddings so that the dot product equals cosine similarity
+         sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
+         return sentence_embeddings
+
+ # load the quantized ONNX model from a local export directory
+ onnx_path = "./models/cos-v1-best/"
+ model = ORTModelForFeatureExtraction.from_pretrained(onnx_path, file_name="model_quantized.onnx")
+
+ # create the optimized pipeline
+ tokenizer = AutoTokenizer.from_pretrained(onnx_path, use_fast=True)
+ optimized_emb = SentenceEmbeddingPipeline(model=model, tokenizer=tokenizer)
+ pred1 = optimized_emb("Hello world!")
+ pred2 = optimized_emb("I hate everything.")
+
+ # embeddings are L2-normalized, so the dot product is the cosine similarity
+ print(pred1[0].dot(pred2[0]))
+ ```
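+
+ As an illustrative extension (not part of the original card), the same pipeline can rank candidate passages against a query, which is the intended semantic-search use:
+
+ ```python
+ # rank passages by cosine similarity to the query (embeddings are unit length)
+ query_emb = optimized_emb("How many people live in London?")[0]
+ passages = [
+     "Around 9 million people live in London.",
+     "London is known for its financial district.",
+ ]
+ ranked = sorted(
+     ((optimized_emb(p)[0].dot(query_emb).item(), p) for p in passages),
+     reverse=True,
+ )
+ for score, passage in ranked:
+     print(f"{score:.3f}  {passage}")
+ ```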
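+
+ The snippet above loads an existing quantized export. A hedged sketch of producing `model_quantized.onnx` with optimum's `ORTQuantizer` (the base checkpoint id is an assumption, not stated in this card):
+
+ ```python
+ from optimum.onnxruntime import ORTModelForFeatureExtraction, ORTQuantizer
+ from optimum.onnxruntime.configuration import AutoQuantizationConfig
+
+ # export the assumed base checkpoint to ONNX (older optimum versions use from_transformers=True)
+ base = ORTModelForFeatureExtraction.from_pretrained(
+     "sentence-transformers/multi-qa-MiniLM-L6-cos-v1", export=True
+ )
+
+ # dynamic int8 quantization; writes model_quantized.onnx into save_dir
+ quantizer = ORTQuantizer.from_pretrained(base)
+ qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
+ quantizer.quantize(save_dir="./models/cos-v1-best/", quantization_config=qconfig)
+ ```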
 
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
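
The standard sentence-transformers snippet presumably follows in the unchanged portion of the card; a minimal sketch, with a hypothetical model id standing in for the actual location:

```python
from sentence_transformers import SentenceTransformer, util

# hypothetical id; substitute the real hub id or local directory
model = SentenceTransformer("multi-qa-MiniLM-distill-onnx-L6-cos-v1")
query_emb = model.encode("How many people live in London?")
doc_emb = model.encode(["Around 9 million people live in London."])
print(util.dot_score(query_emb, doc_emb))
```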