Update README.md
README.md (CHANGED)
@@ -21,7 +21,7 @@ In search engines, [rerankers are crucial](https://www.zeroentropy.dev/blog/wha
 
 However, SOTA rerankers are closed-source and proprietary. At ZeroEntropy, we've trained a SOTA reranker outperforming closed-source competitors, and we're launching our model here on HuggingFace.
 
-This reranker outperforms proprietary rerankers such as `cohere-rerank-v3.5` and `Salesforce/LlamaRank-v1` across a wide variety of domains, including finance, legal, code, STEM, medical, and conversational data.
+This reranker [outperforms proprietary rerankers](https://huggingface.co/zeroentropy/zerank-1#evaluations) such as `cohere-rerank-v3.5` and `Salesforce/LlamaRank-v1` across a wide variety of domains, including finance, legal, code, STEM, medical, and conversational data.
 
 At ZeroEntropy, we've developed an innovative multi-stage pipeline that models query-document relevance scores as adjusted [Elo ratings](https://en.wikipedia.org/wiki/Elo_rating_system). See our Technical Report (Coming soon!) for more details.
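The changed line points readers at the model's evaluations; for anyone who wants to try the model itself, here is a minimal usage sketch. The model ID `zeroentropy/zerank-1` comes from the evaluation link above, but the `CrossEncoder` loading details are an assumption based on common Hugging Face reranker conventions, not something stated in this diff; check the model card for the exact invocation.

```python
from sentence_transformers import CrossEncoder

# Assumption: the model loads as a standard sentence-transformers
# cross-encoder; the exact arguments may differ on the model card.
model = CrossEncoder("zeroentropy/zerank-1", trust_remote_code=True)

query = "What deductions can a freelancer claim on US taxes?"
documents = [
    "Self-employed individuals may deduct home office costs, health insurance premiums, and business mileage.",
    "The 2008 financial crisis reshaped global banking regulation.",
]

# A cross-encoder reranker scores each (query, document) pair jointly;
# candidates are then sorted by descending relevance score.
scores = model.predict([(query, doc) for doc in documents])
for doc, score in sorted(zip(documents, scores), key=lambda pair: pair[1], reverse=True):
    print(f"{score:.3f}  {doc[:60]}")
```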
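The Elo paragraph is the main technical claim, and the Technical Report it defers to is not yet available. As an illustration of the underlying idea only (the pipeline's "adjusted" Elo is unspecified; the judgments, K-factor, and starting ratings below are hypothetical), standard Elo ratings can be fit from pairwise preferences over which candidate document better answers a query:

```python
def expected_win(r_a: float, r_b: float) -> float:
    """Standard Elo expectation: probability that A beats B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, a_wins: bool, k: float = 32.0) -> tuple[float, float]:
    """Update both ratings after one pairwise comparison."""
    score_a = 1.0 if a_wins else 0.0
    e_a = expected_win(r_a, r_b)
    return r_a + k * (score_a - e_a), r_b + k * ((1.0 - score_a) - (1.0 - e_a))

# Hypothetical pairwise judgments for one query: (winner_index, loser_index),
# e.g. from a judge asked which document better answers the query.
comparisons = [(0, 1), (0, 2), (2, 1), (0, 1)]
ratings = [1000.0, 1000.0, 1000.0]  # one rating per candidate document
for winner, loser in comparisons:
    ratings[winner], ratings[loser] = elo_update(ratings[winner], ratings[loser], a_wins=True)
print(ratings)  # a higher rating means the document was preferred more often
```

Ratings produced this way are only relative within a single query; presumably the "adjusted" step concerns turning them into calibrated relevance scores, which is what the forthcoming report should clarify.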