---
language:
- cs
base_model:
- fav-kky/FERNET-C5
---
This is fav-kky/FERNET-C5, fine-tuned as a **Cross-Encoder** on the DaReCzech dataset. The Cross-Encoder architecture processes the query and the passage jointly in a single forward pass, which generally yields more accurate relevance estimates than encoding the two texts separately.
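Because the two texts are scored jointly, the model can also return a relevance score for a single (query, passage) pair directly. A minimal sketch using `CrossEncoder.predict` (the query and passage strings below are placeholders):

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('ctu-aic/CE-fernet-c5-LRank200', max_length=200)

# Each input is a (query, passage) pair; higher scores mean higher relevance
scores = model.predict([
    ("example query", "Example passage that may or may not be relevant."),
])
print(scores)  # one score per pair
```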
The model can be used for re-ranking.

**Re-ranking task**: Given a query, the model scores each candidate passage and ranks the passages in descending order of relevance.
```python
from sentence_transformers import CrossEncoder

# Load the fine-tuned cross-encoder; inputs longer than 200 tokens are truncated
model = CrossEncoder('ctu-aic/CE-fernet-c5-LRank200', max_length=200)

query = "example query"

documents = [
    "Example document one.",
    "Example document two.",
    "Example document three."
]

top_k = 3                # number of top-scoring passages to return
return_documents = True  # include the passage text in each result

# Score every (query, document) pair and rank the documents by relevance
results = model.rank(
    query=query,
    documents=documents,
    top_k=top_k,
    return_documents=return_documents
)

# Print the passages from most to least relevant
for i, res in enumerate(results):
    print(f"{i + 1}. {res['text']}")
```
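Each entry returned by `rank` is a dictionary with the index of the passage in `documents` (`corpus_id`), its relevance score (`score`) and, since `return_documents=True` is passed, the passage text itself (`text`).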