---
dataset_info:
  features:
    - name: id
      dtype: int64
    - name: sentence1
      dtype: string
    - name: sentence2
      dtype: string
    - name: paraphrase
      dtype: int64
  splits:
    - name: test
      num_bytes: 13558
      num_examples: 167
  download_size: 8253
  dataset_size: 13558
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
license: apache-2.0
task_categories:
  - text-classification
language:
  - en
pretty_name: True Paraphrases
size_categories:
  - n<1K
---

# True Paraphrases Test Set

The True Paraphrases test set consists of sentence/phrase pairs derived from the AMR Annotation Guidelines. It was introduced as part of PARAPHRASUS: A Comprehensive Benchmark for Evaluating Paraphrase Detection Models.

For more details, refer to the [original paper](https://aclanthology.org/2025.coling-main.585/), presented at COLING 2025.
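
As a quick start, the test split can be loaded with the `datasets` library and inspected with pandas. This is a minimal sketch, assuming the dataset lives at a repo id like `Andrianos/true-paraphrases` (a placeholder; substitute the actual path on the Hub):

```python
from datasets import load_dataset

# Placeholder repo id -- replace with the actual dataset path on the Hub.
ds = load_dataset("Andrianos/true-paraphrases", split="test")

# Each example has: id (int64), sentence1 (string), sentence2 (string),
# and paraphrase (int64 binary label). The test split holds 167 examples.
print(len(ds))  # 167
print(ds[0])

# The split can also be inspected as a pandas DataFrame.
df = ds.to_pandas()
print(df["paraphrase"].value_counts())
```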


## Citation

If you use this dataset, please cite it using the following BibTeX entry:

```bibtex
@inproceedings{michail-etal-2025-paraphrasus,
    title = "{PARAPHRASUS}: A Comprehensive Benchmark for Evaluating Paraphrase Detection Models",
    author = "Michail, Andrianos  and
      Clematide, Simon  and
      Opitz, Juri",
    editor = "Rambow, Owen  and
      Wanner, Leo  and
      Apidianaki, Marianna  and
      Al-Khalifa, Hend  and
      Eugenio, Barbara Di  and
      Schockaert, Steven",
    booktitle = "Proceedings of the 31st International Conference on Computational Linguistics",
    month = jan,
    year = "2025",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.coling-main.585/",
    pages = "8749--8762",
    abstract = "The task of determining whether two texts are paraphrases has long been a challenge in NLP. However, the prevailing notion of paraphrase is often quite simplistic, offering only a limited view of the vast spectrum of paraphrase phenomena. Indeed, we find that evaluating models in a paraphrase dataset can leave uncertainty about their true semantic understanding. To alleviate this, we create PARAPHRASUS, a benchmark designed for multi-dimensional assessment, benchmarking and selection of paraphrase detection models. We find that paraphrase detection models under our fine-grained evaluation lens exhibit trade-offs that cannot be captured through a single classification dataset. Furthermore, PARAPHRASUS allows prompt calibration for different use cases, tailoring LLM models to specific strictness levels. PARAPHRASUS includes 3 challenges spanning over 10 datasets, including 8 repurposed and 2 newly annotated; we release it along with a benchmarking library at https://github.com/impresso/paraphrasus"
}
```