SaiedAlshahrani committed
Commit afd969e · 1 Parent(s): f756f23

Update README.md

Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -8,7 +8,7 @@ size_categories:
  ---
 
  # Dataset Card for "Masked Arab States Dataset (MASD)"
- This dataset is created using 20 Arab states^[We] with their corresponding capital cities, nationalities, currencies, and on which continents they are located, consisting of four categories: country-capital
+ This dataset is created using 20 Arab states<sup>1</sup> with their corresponding capital cities, nationalities, currencies, and on which continents they are located, consisting of four categories: country-capital
  prompts, country-currency prompts, country-nationality prompts, and country-continent prompts. Each prompts category has 40 masked prompts, and the total number of masked prompts in the MASD dataset is 160. This dataset is used to evaluate these Arabic Masked Language Models (MLMs):
  1. [SaiedAlshahrani/arwiki_20230101_roberta_mlm_bots](https://huggingface.co/SaiedAlshahrani/arwiki_20230101_roberta_mlm_bots).
  2. [SaiedAlshahrani/arwiki_20230101_roberta_mlm_nobots](https://huggingface.co/SaiedAlshahrani/arwiki_20230101_roberta_mlm_nobots).
@@ -32,4 +32,6 @@ For more details about the dataset, please **read** and **cite** our paper:
  pages = "###--###",
  abstract = "Wikipedia articles are a widely used source of training data for Natural Language Processing (NLP) research, particularly as corpora for low-resource languages like Arabic. However, it is essential to understand the extent to which these corpora reflect the representative contributions of native speakers, especially when many entries in a given language are directly translated from other languages or automatically generated through automated mechanisms. In this paper, we study the performance implications of using inorganic corpora that are not representative of native speakers and are generated through automated techniques such as bot generation or automated template-based translation. The case of the Arabic Wikipedia editions gives a unique case study of this since the Moroccan Arabic Wikipedia edition (ARY) is small but representative, the Egyptian Arabic Wikipedia edition (ARZ) is large but unrepresentative, and the Modern Standard Arabic Wikipedia edition (AR) is both large and more representative. We intrinsically evaluate the performance of two main NLP upstream tasks, namely word representation and language modeling, using word analogy evaluations and fill-mask evaluations using our two newly created datasets: Arab States Analogy Dataset (ASAD) and Masked Arab States Dataset (MASD). We demonstrate that for good NLP performance, we need both large and organic corpora; neither alone is sufficient. We show that producing large corpora through automated means can be a counter-productive, producing models that both perform worse and lack cultural richness and meaningful representation of the Arabic language and its native speakers.",
  }
- ```
+ ```
+
+ 1.
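
For context, a minimal usage sketch of the dataset described in this card (not part of the commit itself). The dataset repo id (`SaiedAlshahrani/MASD`), the `train` split, the `prompt` column name, and the `[MASK]` placeholder convention are assumptions; the model id is one of the MLMs listed in the card.

```python
# Hedged sketch: load the Masked Arab States Dataset (MASD) and run a fill-mask
# evaluation with one of the Arabic MLMs named in the dataset card.
from datasets import load_dataset
from transformers import pipeline

# Assumed repo id and split; adjust to the actual dataset location.
masd = load_dataset("SaiedAlshahrani/MASD", split="train")

# One of the models the card says MASD is used to evaluate.
fill_mask = pipeline("fill-mask", model="SaiedAlshahrani/arwiki_20230101_roberta_mlm_bots")

for row in masd.select(range(5)):
    # Assumed column name and mask placeholder; map them to the model's own mask token.
    prompt = row["prompt"].replace("[MASK]", fill_mask.tokenizer.mask_token)
    predictions = fill_mask(prompt, top_k=3)
    print(prompt, [p["token_str"] for p in predictions])
```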