tomaarsen (HF staff) committed
Commit e5828ff · verified · 1 Parent(s): f876dcc

Update README.md

Files changed (1):
  1. README.md +15 -10
README.md CHANGED
@@ -8,6 +8,21 @@ This model was trained on the [MS Marco Passage Ranking](https://github.com/micr
  The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco)
  
  
+ ## Usage with SentenceTransformers
+
+ Usage is easy once you have [SentenceTransformers](https://www.sbert.net/) installed. Then you can use the pre-trained model like this:
+ ```python
+ from sentence_transformers import CrossEncoder
+
+ model = CrossEncoder('cross-encoder/ms-marco-electra-base')
+ scores = model.predict([
+     ("How many people live in Berlin?", "Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers."),
+     ("How many people live in Berlin?", "Berlin is well known for its museums."),
+ ])
+ print(scores)
+ # [9.9227107e-01 2.0136760e-05]
+ ```
+
  ## Usage with Transformers
  
  ```python
@@ -26,16 +41,6 @@ with torch.no_grad():
  ```
  
  
- ## Usage with SentenceTransformers
-
- The usage becomes easier when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this:
- ```python
- from sentence_transformers import CrossEncoder
- model = CrossEncoder('model_name', max_length=512)
- scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2'), ('Query', 'Paragraph3')])
- ```
-
-
  ## Performance
  In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset.
  
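The `## Usage with Transformers` section is unchanged by this commit, so the diff elides its body; only the code fences and the `with torch.no_grad():` line in the second hunk header survive. For orientation, here is a minimal sketch of how these cross-encoder checkpoints are typically scored with plain Transformers, reusing the checkpoint and example pairs from the new SentenceTransformers snippet; the README's actual block may differ in detail.

```python
# Hedged sketch: scoring query-passage pairs with plain Transformers.
# Checkpoint and example pairs are borrowed from the SentenceTransformers
# snippet above; the README's own block may differ.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = 'cross-encoder/ms-marco-electra-base'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Encode each (query, passage) pair jointly, as a cross-encoder expects.
features = tokenizer(
    ['How many people live in Berlin?', 'How many people live in Berlin?'],
    ['Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.',
     'Berlin is well known for its museums.'],
    padding=True, truncation=True, return_tensors='pt',
)

with torch.no_grad():
    scores = model(**features).logits  # one relevance logit per pair
print(scores)
```

The `CrossEncoder` wrapper in the snippet above performs essentially this tokenize-and-forward step internally.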
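The README's intro line describes the retrieve & re-rank pattern: a fast first-stage retriever (e.g. ElasticSearch) fetches candidate passages, and the cross-encoder then scores every (query, passage) pair so the candidates can be sorted in decreasing order of relevance. Below is a minimal sketch of that loop, with a hypothetical in-memory candidate list standing in for real retrieval hits.

```python
# Hedged sketch of the retrieve & re-rank flow the README describes.
# The candidate list is a hypothetical stand-in for first-stage hits
# (e.g. from ElasticSearch).
from sentence_transformers import CrossEncoder

query = 'How many people live in Berlin?'
candidates = [
    'Berlin is well known for its museums.',
    'Berlin had a population of 3,520,031 registered inhabitants.',
    'New York City is the most populous city in the United States.',
]

model = CrossEncoder('cross-encoder/ms-marco-electra-base')
scores = model.predict([(query, passage) for passage in candidates])

# Sort candidates by cross-encoder score, most relevant first.
for score, passage in sorted(zip(scores, candidates), key=lambda p: p[0], reverse=True):
    print(f'{score:.4f}  {passage}')
```

In practice only the top-k first-stage hits are re-scored this way: the cross-encoder reads each pair jointly and is far more expensive per passage than a first-stage retriever, which is why it serves as a re-ranker rather than for full-corpus search.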