Iker committed
Commit 3ee82c9 · 1 Parent(s): 8ed0e83

Update README.md

Files changed (1): README.md (+35 -4)
README.md CHANGED
@@ -49,9 +49,40 @@ size_categories:
 <img src="https://github.com/hitz-zentroa/This-is-not-a-Dataset/raw/main/assets/tittle.png" style="height: 250px;">
 </p>
 
- <p align="center">
- <a href="HiTZ/This-is-not-a-dataset"><img alt="Paper" src="https://img.shields.io/badge/📖-Paper-orange"></a><a href="http://www.hitz.eus/"><img src="https://img.shields.io/badge/HiTZ-Basque%20Center%20for%20Language%20Technology-blueviolet"></a><a href="http://www.ixa.eus/?language=en"><img src="https://img.shields.io/badge/IXA-%20NLP%20Group-ff3333"></a><a href="https://www.ehu.eus/en/web/lorea/web-gunea"><img src="https://img.shields.io/badge/LoRea-%20Logic%20and%20Reasoning%20Group-ff3"></a>
+ <h3 align="center">"A Large Negation Benchmark to Challenge Large Language Models"</h3>
+
+ <p align="justify">
+ We introduce a large, semi-automatically generated dataset of ~400,000 descriptive sentences about commonsense knowledge that can be true or false. Negation is present, in different forms, in about two thirds of the corpus, and we use the dataset to evaluate LLMs.
 </p>
 
- <h3 align="center">"A Large Negation Benchmark to Challenge Large Language Models"</h3>
-
+ - 📖 Paper: [This is not a Dataset: A Large Negation Benchmark to Challenge Large Language Models (EMNLP'23)]()
+ - 💻 Baseline Code and the Official Scorer: [https://github.com/hitz-zentroa/This-is-not-a-Dataset](https://github.com/hitz-zentroa/This-is-not-a-Dataset)
+
+ # Data explanation
+
+ - **pattern_id** (int): The ID of the pattern, in the range [1, 11]
+ - **pattern** (str): The name of the pattern
+ - **test_id** (int): For each pattern we use a set of templates to instantiate the triples. Examples are grouped into triples by test_id
+ - **negation_type** (str): Affirmation, verbal, non-verbal
+ - **semantic_type** (str): None (for affirmative sentences), analytic, synthetic
+ - **syntactic_scope** (str): None (for affirmative sentences), clausal, subclausal
+ - **isDistractor** (bool): We use distractors (randomly selected synsets) to generate false knowledge
+ - **<span style="color:green">sentence</span>** (str): The sentence. <ins>This is the input of the model</ins>
+ - **<span style="color:green">label</span>** (bool): The label of the example: True if the statement is true, False otherwise. <ins>This is the target of the model</ins>
+
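+ A minimal sketch of how one might load the data and inspect these fields with the 🤗 `datasets` library. The repo id `HiTZ/This-is-not-a-dataset` and the split name are assumptions; check the dataset page for the exact configuration:
+
+ ```python
+ from datasets import load_dataset
+
+ # Assumed repo id; adjust if the dataset is hosted under a different name.
+ data = load_dataset("HiTZ/This-is-not-a-dataset")
+
+ example = data["train"][0]  # split name is an assumption
+
+ # "sentence" is the model input and "label" is the boolean target.
+ print(example["sentence"], example["label"])
+
+ # The remaining fields describe how the sentence was generated.
+ print(example["pattern_id"], example["negation_type"], example["isDistractor"])
+ ```
+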
+ If you want to run experiments with this dataset, please use the [Official Scorer](https://github.com/hitz-zentroa/This-is-not-a-Dataset#scorer) to ensure reproducibility and fairness.
+
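+ As a toy illustration of the per-category breakdowns this kind of benchmark calls for, here is a hypothetical helper that computes accuracy per `negation_type` from boolean model predictions. This is a sketch only and is not a substitute for the official scorer:
+
+ ```python
+ from collections import defaultdict
+
+ def accuracy_by_negation_type(examples, predictions):
+     """Group boolean predictions by negation_type and compute accuracy.
+
+     `examples` are dataset rows (dicts with "negation_type" and "label");
+     `predictions` are the model's boolean outputs, one per example.
+     """
+     correct, total = defaultdict(int), defaultdict(int)
+     for ex, pred in zip(examples, predictions):
+         key = ex["negation_type"]
+         total[key] += 1
+         correct[key] += int(pred == ex["label"])
+     return {key: correct[key] / total[key] for key in total}
+ ```
+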
+ # Citation
+
+ The paper will be presented at EMNLP 2023; the citation will be updated once the proceedings are available. For now, you can use the following BibTeX:
+
+ ```bibtex
+ @inproceedings{this-is-not-a-dataset,
+     title = "This is not a Dataset: A Large Negation Benchmark to Challenge Large Language Models",
+     author = "García-Ferrero, Iker and Altuna, Begoña and Alvez, Javier and Gonzalez-Dios, Itziar and Rigau, German",
+     booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
+     year = "2023",
+     publisher = "Association for Computational Linguistics",
+ }
+ ```