---
dataset_info:
  features:
  - name: sts-id
    dtype: string
  - name: sts-score
    dtype: float64
  - name: sentence1
    dtype: string
  - name: sentence2
    dtype: string
  - name: paraphrase
    dtype: int64
  - name: Human Annotation - P1
    dtype: int64
  - name: Human Annotation - P2
    dtype: int64
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: test
    num_bytes: 58088
    num_examples: 338
  download_size: 37035
  dataset_size: 58088
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
license: apache-2.0
task_categories:
- text-classification
language:
- en
pretty_name: STS-H
---

# STS-Hard Test Set

The STS-Hard dataset is a paraphrase detection test set derived from the STS Benchmark dataset. It was introduced as part of **PARAPHRASUS: A Comprehensive Benchmark for Evaluating Paraphrase Detection Models**. The test set includes the paraphrase label as well as the individual annotation labels from the two annotators:

- **P1**: A semanticist.
- **P2**: A student annotator.

For more details, refer to the [original paper](https://arxiv.org/abs/2409.12060), presented at COLING 2025.
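
Below is a minimal loading sketch using the 🤗 Datasets library. The repository id `<hub-user>/sts-hard` is a placeholder, not the actual Hub id of this dataset; the column names follow the schema declared in the card metadata above.

```python
from datasets import load_dataset

# Placeholder repository id; replace with the actual Hub id of this dataset.
dataset = load_dataset("<hub-user>/sts-hard", split="test")

# Each example carries the STS id and score, both sentences, the paraphrase
# label, and the two individual human annotations (P1, P2).
example = dataset[0]
print(example["sentence1"])
print(example["sentence2"])
print(example["paraphrase"],
      example["Human Annotation - P1"],
      example["Human Annotation - P2"])
```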

---

### Citation

If you use this dataset, please cite it using the following BibTeX entry:

```bibtex
@inproceedings{michail-etal-2025-paraphrasus,
    title = "{PARAPHRASUS}: A Comprehensive Benchmark for Evaluating Paraphrase Detection Models",
    author = "Michail, Andrianos  and
      Clematide, Simon  and
      Opitz, Juri",
    editor = "Rambow, Owen  and
      Wanner, Leo  and
      Apidianaki, Marianna  and
      Al-Khalifa, Hend  and
      Eugenio, Barbara Di  and
      Schockaert, Steven",
    booktitle = "Proceedings of the 31st International Conference on Computational Linguistics",
    month = jan,
    year = "2025",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.coling-main.585/",
    pages = "8749--8762",
    abstract = "The task of determining whether two texts are paraphrases has long been a challenge in NLP. However, the prevailing notion of paraphrase is often quite simplistic, offering only a limited view of the vast spectrum of paraphrase phenomena. Indeed, we find that evaluating models in a paraphrase dataset can leave uncertainty about their true semantic understanding. To alleviate this, we create PARAPHRASUS, a benchmark designed for multi-dimensional assessment, benchmarking and selection of paraphrase detection models. We find that paraphrase detection models under our fine-grained evaluation lens exhibit trade-offs that cannot be captured through a single classification dataset. Furthermore, PARAPHRASUS allows prompt calibration for different use cases, tailoring LLM models to specific strictness levels. PARAPHRASUS includes 3 challenges spanning over 10 datasets, including 8 repurposed and 2 newly annotated; we release it along with a benchmarking library at https://github.com/impresso/paraphrasus"
}
```