Datasets:

Modalities: Tabular, Text
Formats: parquet
Languages: English
Size: < 1K
ArXiv: 2409.12060
Libraries: Datasets, pandas
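Since the card lists parquet files and the Datasets/pandas libraries, a minimal loading sketch may help; note the repository id below is a placeholder, as the dataset's exact Hub id is not shown on this page:

```python
from datasets import load_dataset

# Placeholder repo id -- substitute this dataset's actual id on the Hugging Face Hub.
ds = load_dataset("Andrianos/paraphrasus")
print(ds)  # shows available splits and row counts

# The files are parquet-backed, so any split converts cleanly to a pandas DataFrame.
first_split = next(iter(ds))
df = ds[first_split].to_pandas()
print(df.head())
```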
Andrianos committed (verified)
Commit 8d19e50 · Parent: da05bd7

Updated the bib file

Files changed (1):
  1. README.md (+19 -8)
README.md CHANGED
@@ -44,12 +44,23 @@ For more details, refer to the [original paper](https://arxiv.org/abs/2409.12060
 If you use this dataset, please cite it using the following BibTeX entry:
 
 ```bibtex
-@misc{michail2024paraphrasuscomprehensivebenchmark,
-      title={PARAPHRASUS : A Comprehensive Benchmark for Evaluating Paraphrase Detection Models},
-      author={Andrianos Michail and Simon Clematide and Juri Opitz},
-      year={2024},
-      eprint={2409.12060},
-      archivePrefix={arXiv},
-      primaryClass={cs.CL},
-      url={https://arxiv.org/abs/2409.12060},
+@inproceedings{michail-etal-2025-paraphrasus,
+    title = "{PARAPHRASUS}: A Comprehensive Benchmark for Evaluating Paraphrase Detection Models",
+    author = "Michail, Andrianos and
+      Clematide, Simon and
+      Opitz, Juri",
+    editor = "Rambow, Owen and
+      Wanner, Leo and
+      Apidianaki, Marianna and
+      Al-Khalifa, Hend and
+      Eugenio, Barbara Di and
+      Schockaert, Steven",
+    booktitle = "Proceedings of the 31st International Conference on Computational Linguistics",
+    month = jan,
+    year = "2025",
+    address = "Abu Dhabi, UAE",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2025.coling-main.585/",
+    pages = "8749--8762",
+    abstract = "The task of determining whether two texts are paraphrases has long been a challenge in NLP. However, the prevailing notion of paraphrase is often quite simplistic, offering only a limited view of the vast spectrum of paraphrase phenomena. Indeed, we find that evaluating models in a paraphrase dataset can leave uncertainty about their true semantic understanding. To alleviate this, we create PARAPHRASUS, a benchmark designed for multi-dimensional assessment, benchmarking and selection of paraphrase detection models. We find that paraphrase detection models under our fine-grained evaluation lens exhibit trade-offs that cannot be captured through a single classification dataset. Furthermore, PARAPHRASUS allows prompt calibration for different use cases, tailoring LLM models to specific strictness levels. PARAPHRASUS includes 3 challenges spanning over 10 datasets, including 8 repurposed and 2 newly annotated; we release it along with a benchmarking library at https://github.com/impresso/paraphrasus"
 }