Update README.md

The following Figure illustrates examples of clickbait headlines from our dataset:

<p align="center">
<img src="https://raw.githubusercontent.com/ikergarcia1996/NoticIA/main/assets/examples.png" style="width: 100%;">
</p>

# Dataset Description

- **Curated by:** [Iker García-Ferrero](https://ikergarcia1996.github.io/Iker-Garcia-Ferrero/), [Begoña Altuna](https://www.linkedin.com/in/bego%C3%B1a-altuna-78014139)
- **Funded by:** SomosNLP, HuggingFace, Argilla, [HiTZ Zentroa](https://www.hitz.eus/)
- **Language(s) (NLP):** es-ES
- **License:** apache-2.0
- **Web Page:** [GitHub](https://github.com/ikergarcia1996/NoticIA)

### Dataset Sources

### Direct Use

- 📈 Evaluation of Language Models in Spanish (see the loading sketch after this list).
- 🤖 Instruction-Tuning of Spanish Language Models.
- 📚 Development of new datasets on top of our data.
- 🎓 Any other academic research purpose.
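
As a starting point for any of these uses, the sketch below loads the dataset with the 🤗 `datasets` library and inspects its splits and fields. The repository id is inferred from this card's asset URLs and the `train` split name is an assumption; adjust both if the released dataset differs.

```python
# Minimal loading sketch (assumptions: the Hub id below, taken from this card's
# asset URLs, and the existence of a "train" split).
from datasets import load_dataset

dataset = load_dataset("somosnlp/Resumen_Noticias_Clickbait")
print(dataset)                          # available splits and example counts
print(dataset["train"].column_names)    # inspect the fields before building prompts
```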

### Out-of-Scope Use

### Curation Rationale

NoticIA offers an ideal scenario for testing the ability of language models to understand Spanish text. The task is complex: the model must discern the hidden question behind a clickbait headline, that is, the information the reader is actually seeking, and then filter large volumes of superfluous content to find the relevant information and summarize it accurately and succinctly.

In addition, by making our data and models public, we aim to push back against the use of deceptive tactics by online news providers to increase advertising revenue.

### Source Data

Regarding the evaluation of the guidelines: overall, they were not ambiguous, although the request to select the minimum number of words needed to generate a valid summary was sometimes interpreted differently by the annotators. For example, the minimum length could be understood as focusing only on the question in the headline, or as a minimal well-formed phrase.

# Massive Evaluation of Language Models

As is customary in summarization tasks, we use the ROUGE scoring metric to automatically evaluate the summaries produced by the models. Our main metric is ROUGE-1, which considers whole words as basic units. To calculate the ROUGE score, we lowercase both summaries and remove punctuation marks. In addition to the ROUGE score, we also consider the average length of the summaries, since we want them to be concise, an aspect that the ROUGE score does not capture. Our goal is therefore a model that achieves the highest possible ROUGE-1 score with the shortest possible summaries, balancing quality and brevity.
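
As a worked illustration of this recipe (a minimal sketch, not the exact evaluation script from the NoticIA repository), the following Python snippet computes ROUGE-1 F1 over lowercased, punctuation-free text together with the average summary length in words; the example strings are invented placeholders.

```python
# A minimal sketch of the metric described above: ROUGE-1 F1 over lowercased,
# punctuation-free text, plus the average summary length in words.
# This is an illustration, not the exact script from the NoticIA repository.
import string
from collections import Counter

def normalize(text: str) -> list[str]:
    """Lowercase, strip punctuation and split into word tokens."""
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return text.split()

def rouge1_f1(prediction: str, reference: str) -> float:
    """Clipped unigram-overlap F1 between a generated and a reference summary."""
    pred, ref = Counter(normalize(prediction)), Counter(normalize(reference))
    overlap = sum((pred & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Invented example pairs; in practice these come from the dataset and the model.
predictions = ["El truco es usar vinagre blanco."]
references = ["Usar vinagre blanco."]

rouge1 = sum(rouge1_f1(p, r) for p, r in zip(predictions, references)) / len(predictions)
avg_len = sum(len(normalize(p)) for p in predictions) / len(predictions)
print(f"ROUGE-1: {rouge1:.3f} | average summary length: {avg_len:.1f} words")
```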

We have evaluated the best current instruction-following language models using the previously defined prompt, which is converted into the specific chat template of each model.
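
To make this concrete, here is a hedged sketch of rendering one example with a model's chat template using 🤗 `transformers`; the model id, instruction wording, and headline/article strings are placeholders, not the exact prompt used in our evaluation.

```python
# Hedged sketch: render a clickbait-summarization prompt with a model-specific
# chat template. The instruction wording and the example headline/article are
# invented placeholders, not the exact prompt from the NoticIA paper.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")

headline = "No podrás creer lo que pasó después..."   # clickbait headline (placeholder)
article = "Texto completo de la noticia..."           # article body (placeholder)

messages = [
    {
        "role": "user",
        "content": (
            "Resume en una frase, con el mínimo número de palabras, la información "
            f"que realmente esconde este titular.\n\nTitular: {headline}\n\nNoticia: {article}"
        ),
    }
]

# apply_chat_template wraps the message in the chat format the model was trained
# on (role tags, special tokens) and appends the assistant turn marker.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```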

The code to reproduce the results is available at the following link: [https://github.com/ikergarcia1996/NoticIA](https://github.com/ikergarcia1996/NoticIA)

<p align="center">
<img src="https://huggingface.co/datasets/somosnlp/Resumen_Noticias_Clickbait/resolve/main/Results_zero.png" style="width: 100%;">
</p>

## Bias, Risks, and Limitations

The dataset contains a small number of articles from Latin America; however, the vast majority of the articles come from news outlets in Spain. Therefore, this dataset mainly evaluates the proficiency of language models in the Spanish spoken in Spain (es-ES).

Although explicitly prohibited, a bad actor could use our data to train models that automatically generate clickbait articles, contributing to polluting the internet with low-quality content. In any case, we consider that the advantages of having a text comprehension dataset for evaluating language models in Spanish outweigh the possible risks.

## License

We release our annotations under the Apache 2.0 license. However, commercial use of this dataset is subject to the licenses of each news and media outlet.

## Citation

If you use this dataset, please cite our paper: [NoticIA: A Clickbait Article Summarization Dataset in Spanish](https://arxiv.org/abs/2404.07611)

**BibTeX:**

```
@misc{garcíaferrero2024noticia,
      title={NoticIA: A Clickbait Article Summarization Dataset in Spanish},
      author={Iker García-Ferrero and Begoña Altuna},
      year={2024},
      eprint={2404.07611},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## More Information

This project was developed during the [Hackathon #Somos600M](https://somosnlp.org/hackathon) organized by SomosNLP. Demo endpoints were sponsored by HuggingFace.

**Team:**

- [Iker García-Ferrero](https://huggingface.co/Iker)
- [Begoña Altuna](https://huggingface.co/baltuna)

**Contact**: {iker.garciaf,begona.altuna}@ehu.eus

This dataset was created by [Iker García-Ferrero](https://ikergarcia1996.github.io/Iker-Garcia-Ferrero/) and [Begoña Altuna](https://www.linkedin.com/in/bego%C3%B1a-altuna-78014139). We are NLP researchers in the [IXA](https://www.ixa.eus/) research group at the University of the Basque Country and part of [HiTZ, the Basque Center for Language Technology](https://www.hitz.eus/es).

<div style="display: flex; justify-content: space-around; width: 100%;">
  <div style="width: 50%;" align="left">
    <a href="http://ixa.si.ehu.es/">
      <img src="https://raw.githubusercontent.com/ikergarcia1996/Iker-Garcia-Ferrero/master/icons/ixa.png" width="50" height="50" alt="Ixa NLP Group">
    </a>
  </div>
  <div style="width: 50%;" align="right">
    <a href="http://www.hitz.eus/">
      <img src="https://raw.githubusercontent.com/ikergarcia1996/Iker-Garcia-Ferrero/master/icons/Hitz.png" width="300" height="50" alt="HiTZ Basque Center for Language Technologies">
    </a>
  </div>
</div>