Datasets:
YAML tags: null
annotations_creators:
- automatically-generated
language_creators:
- found
language:
- ca
license:
- cc-by-sa-3.0
multilinguality:
- monolingual
pretty_name: wikicat_ca
size_categories:
- unknown
source_datasets: []
task_categories:
- text-classification
task_ids:
- multi-class-classification
WikiCAT_ca (Text Classification) Catalan dataset
Dataset Description
Paper:
Point of Contact: Carlos Rodríguez-Penagos ([email protected])
Dataset Summary
WikiCAT_ca is a Catalan corpus for thematic Text Classification tasks. It was created automatically from Wikipedia and Wikidata sources, and contains 13,201 articles from the Viquipèdia (the Catalan Wikipedia) classified under 19 different categories.
This dataset was developed by BSC TeMU as part of the AINA project, and is intended as an evaluation of language-technology (LT) capabilities to generate useful synthetic corpora.
Supported Tasks and Leaderboards
Text classification, Language Model
Languages
CA - Catalan
Dataset Structure
Data Instances
Three JSON files, one for each split.
Data Fields
We use a simple schema with the article text and its associated label, without further metadata.
Example:
{"version": "1.1.0", "data": [ { "sentence": " Celsius és conegut com l'inventor de l'escala centesimal del termòmetre. Encara que aquest instrument és un invent molt antic, la història de la seva gradació és molt més capritxosa. Durant el segle xvi era graduat com \"fred\" col·locant-lo (...)", "label": "Ciència" }, . . . ] }
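Records in this format can be read with the standard `json` module; a minimal sketch follows (the inline sample is abbreviated for illustration, not actual corpus content):

```python
import json

# Abbreviated sample mirroring the structure shown above.
raw = """
{"version": "1.1.0",
 "data": [
   {"sentence": "Celsius és conegut com l'inventor de l'escala centesimal del termòmetre.",
    "label": "Ciència"}
 ]}
"""

corpus = json.loads(raw)
# Each entry in "data" is one label-document pair.
pairs = [(ex["sentence"], ex["label"]) for ex in corpus["data"]]
print(corpus["version"])  # release version string
print(pairs[0][1])        # label of the first example
```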
Labels
'Història', 'Tecnologia', 'Humanitats', 'Economia', 'Dret', 'Esport', 'Política', 'Govern', 'Entreteniment', 'Natura', 'Exèrcit', 'Salut_i_benestar_social', 'Matemàtiques', 'Filosofia', 'Ciència', 'Música', 'Enginyeria', 'Empresa', 'Religió'
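For multi-class classification, the 19 labels are typically mapped to integer ids; a small sketch (the mapping below is our own convention, not part of the released files):

```python
# The 19 thematic categories listed above.
LABELS = [
    "Història", "Tecnologia", "Humanitats", "Economia", "Dret", "Esport",
    "Política", "Govern", "Entreteniment", "Natura", "Exèrcit",
    "Salut_i_benestar_social", "Matemàtiques", "Filosofia", "Ciència",
    "Música", "Enginyeria", "Empresa", "Religió",
]

# id <-> label lookups, as commonly expected by classification models.
label2id = {label: i for i, label in enumerate(LABELS)}
id2label = {i: label for label, i in label2id.items()}
```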
Data Splits
- train.json: 3970 label-document pairs
- dev.json: 9231 label-document pairs
Dataset Creation
Methodology
Starting "Category:" pages are chosen to represent the topics in each language.
For each category, the main pages are extracted, along with its first-level subcategories and the individual pages under those subcategories. For each page, the "summary" provided by Wikipedia is also extracted.
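The harvesting steps above can be sketched against the public MediaWiki Action API. This is an illustrative reconstruction, not the original pipeline: the function names and the example category are our own, and only the query parameters (which are standard MediaWiki API fields) are shown.

```python
# Hedged sketch of the category/summary harvesting described above,
# targeting the MediaWiki Action API at https://ca.wikipedia.org/w/api.php.

def categorymembers_params(category, cmtype="page|subcat", limit=500):
    """Parameters listing the pages and first-level subcategories of a category."""
    return {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": f"Categoria:{category}",  # Catalan category namespace prefix
        "cmtype": cmtype,
        "cmlimit": limit,
        "format": "json",
    }

def summary_params(title):
    """Parameters fetching the plain-text intro ("summary") of a page."""
    return {
        "action": "query",
        "prop": "extracts",
        "exintro": 1,
        "explaintext": 1,
        "titles": title,
        "format": "json",
    }

# Usage (requires the `requests` package and network access), e.g.:
#   requests.get("https://ca.wikipedia.org/w/api.php",
#                params=categorymembers_params("Ciència")).json()
```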
Curation Rationale
Source Data
Initial Data Collection and Normalization
The source data are Viquipèdia articles and the English Wikipedia thematic categories.
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Automatic annotation
Personal and Sensitive Information
No personal or sensitive information included.
Considerations for Using the Data
Social Impact of Dataset
[More Information Needed]
Discussion of Biases
[More Information Needed]
Other Known Limitations
[More Information Needed]
Additional Information
Dataset Curators
Carlos Rodríguez from BSC-CNS
Licensing Information
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International license.
Citation Information
Funding
This work was funded by the Catalan Ministry of the Vice-presidency, Digital Policies and Territory within the framework of the Aina project.