# Dataset Card for Language Identification Dataset

### Dataset Description

- **Repository:** processvenue/language_identification
- **Total Samples:** 135784
- **Number of Languages:** 18
- **Splits:**
  - Train: 104849 samples (~77.2%)
  - Validation: 15467 samples (~11.4%)
  - Test: 15468 samples (~11.4%)
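The split sizes above can be sanity-checked with a few lines of Python; the counts are taken directly from this card:

```python
# Split sizes as listed on this card.
splits = {"train": 104849, "validation": 15467, "test": 15468}
total = sum(splits.values())

# The three splits should account for every sample.
assert total == 135784

# Report each split's share of the dataset.
for name, n in splits.items():
    print(f"{name}: {n} samples ({n / total:.1%})")
```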

### Dataset Summary

A comprehensive dataset for Indian language identification and text classification. The dataset contains text samples across 18 major Indian languages, making it suitable for developing language identification systems and multilingual NLP applications.

### Languages and Distribution

```
Language Distribution:
 1. Punjabi      15075
 2. Odia         14258
 3. Konkani      14098
 4. Hindi        13469
 5. Sanskrit     11788
 6. Bengali      10036
 7. English       9819
 8. Sindhi        8838
 9. Nepali        8694
10. Marathi       6625
11. Gujarati      3788
12. Telugu        3563
13. Malayalam     3423
14. Tamil         3195
15. Kannada       2651
16. Kashmiri      2282
17. Urdu          2272
18. Assamese      1910
```
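The per-language counts above (copied verbatim from this card) can be tallied to confirm they cover all 135784 samples, and to surface the class imbalance worth keeping in mind when training:

```python
from collections import Counter

# Per-language sample counts as listed on this card.
distribution = Counter({
    "Punjabi": 15075, "Odia": 14258, "Konkani": 14098, "Hindi": 13469,
    "Sanskrit": 11788, "Bengali": 10036, "English": 9819, "Sindhi": 8838,
    "Nepali": 8694, "Marathi": 6625, "Gujarati": 3788, "Telugu": 3563,
    "Malayalam": 3423, "Tamil": 3195, "Kannada": 2651, "Kashmiri": 2282,
    "Urdu": 2272, "Assamese": 1910,
})

# The counts should cover every sample in the dataset.
assert sum(distribution.values()) == 135784

# Largest vs. smallest class: roughly an 8:1 imbalance.
largest = distribution.most_common()[0]
smallest = distribution.most_common()[-1]
print(largest, smallest)  # ('Punjabi', 15075) ('Assamese', 1910)
```

Given the ~8:1 ratio between Punjabi and Assamese, class weighting or stratified evaluation may be worthwhile.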

### Language Details

1. **Hindi (hi)**: Major language of India, written in Devanagari script
2. **Urdu (ur)**: Written in Perso-Arabic script
3. **Bengali (bn)**: Official language of Bangladesh and several Indian states
4. **Gujarati (gu)**: Official language of Gujarat
5. **Kannada (kn)**: Official language of Karnataka
6. **Malayalam (ml)**: Official language of Kerala
7. **Marathi (mr)**: Official language of Maharashtra
8. **Odia (or)**: Official language of Odisha
9. **Punjabi (pa)**: Official language of Punjab
10. **Tamil (ta)**: Official language of Tamil Nadu and Singapore
11. **Telugu (te)**: Official language of Telangana and Andhra Pradesh
12. **Sanskrit (sa)**: Ancient language of India, written in Devanagari script
13. **Konkani (kok)**: Official language of Goa
14. **Sindhi (sd)**: Official language of Sindh province in Pakistan
15. **Nepali (ne)**: Official language of Nepal
16. **Assamese (as)**: Official language of Assam
17. **Kashmiri (ks)**: Official language of Jammu and Kashmir
18. **English (en)**: Official language of India

### Data Fields

- `Headline`: The input text sample
- `Language`: The language label (one of the 18 languages listed above)
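A minimal sketch of the record structure. The field names match this card; the headline strings below are invented placeholders, not real dataset rows:

```python
# Illustrative records following the card's schema (`Headline`, `Language`);
# the headline texts are made up for demonstration only.
records = [
    {"Headline": "example headline one", "Language": "Hindi"},
    {"Headline": "example headline two", "Language": "Hindi"},
    {"Headline": "example headline three", "Language": "Tamil"},
]

# Filter rows by the `Language` label, e.g. to build a per-language subset.
hindi_rows = [r for r in records if r["Language"] == "Hindi"]
print(len(hindi_rows))  # 2
```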

### Usage Example

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("processvenue/language_identification")

# Access splits
train_data = dataset['train']
validation_data = dataset['validation']
test_data = dataset['test']

# Example usage
print(f"Sample text: {train_data[0]['Headline']}")
print(f"Language: {train_data[0]['Language']}")
```
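For training a classifier, the string labels usually need to be mapped to integer ids. A minimal sketch, assuming the `Language` values are exactly the 18 names listed in the distribution above (the `encode` helper is illustrative, e.g. for use with `dataset.map`):

```python
# The 18 labels from this card, sorted for a stable label -> id mapping.
LANGUAGES = sorted([
    "Punjabi", "Odia", "Konkani", "Hindi", "Sanskrit", "Bengali",
    "English", "Sindhi", "Nepali", "Marathi", "Gujarati", "Telugu",
    "Malayalam", "Tamil", "Kannada", "Kashmiri", "Urdu", "Assamese",
])
label2id = {name: i for i, name in enumerate(LANGUAGES)}
id2label = {i: name for name, i in label2id.items()}

def encode(example):
    """Replace a record's string label with its integer id."""
    example["label"] = label2id[example["Language"]]
    return example

# "Assamese" sorts first alphabetically, so it gets id 0.
print(encode({"Headline": "headline text", "Language": "Assamese"})["label"])  # 0
```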

### Applications

1. **Language Identification Systems**
   - Automatic language detection
   - Text routing in multilingual systems
   - Content filtering by language

2. **Machine Translation**
   - Language-pair identification
   - Translation system selection

3. **Content Analysis**
   - Multilingual content categorization
   - Language-specific content analysis
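Because most of these languages use distinct scripts, a Unicode-block heuristic is a common first pass at automatic language detection. The sketch below is an illustration, not part of the dataset: it can only narrow text to a script group (Hindi, Marathi, Sanskrit, Nepali, and Konkani all use Devanagari; Urdu, Sindhi, and Kashmiri share the Perso-Arabic block), so a trained classifier is still needed to separate languages within a script:

```python
# Map Unicode code-point ranges to the scripts used by these languages.
SCRIPT_RANGES = {
    "Devanagari": (0x0900, 0x097F),  # Hindi, Marathi, Sanskrit, Nepali, Konkani
    "Bengali":    (0x0980, 0x09FF),  # Bengali, Assamese
    "Gurmukhi":   (0x0A00, 0x0A7F),  # Punjabi
    "Gujarati":   (0x0A80, 0x0AFF),
    "Oriya":      (0x0B00, 0x0B7F),  # Odia
    "Tamil":      (0x0B80, 0x0BFF),
    "Telugu":     (0x0C00, 0x0C7F),
    "Kannada":    (0x0C80, 0x0CFF),
    "Malayalam":  (0x0D00, 0x0D7F),
    "Arabic":     (0x0600, 0x06FF),  # Urdu, Sindhi, Kashmiri
}

def detect_script(text: str) -> str:
    """Return the script whose code-point range covers the most characters."""
    counts = {name: 0 for name in SCRIPT_RANGES}
    counts["Latin"] = 0
    for ch in text:
        cp = ord(ch)
        for name, (lo, hi) in SCRIPT_RANGES.items():
            if lo <= cp <= hi:
                counts[name] += 1
                break
        else:
            if ch.isascii() and ch.isalpha():
                counts["Latin"] += 1
    return max(counts, key=counts.get)

print(detect_script("நல்வரவு"))      # Tamil
print(detect_script("Hello world"))  # Latin
```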

### Citation

If you use this dataset in your research, please cite:

```
@dataset{language_identification_2025,
    author = {ProcessVenue Team},
    website = {https://processvenue.com},
    title = {Multilingual Headlines Language Identification Dataset},
    year = {2025},
    publisher = {Hugging Face},
    url = {https://huggingface.co/datasets/processvenue/language-identification},
    version = {1.1}
}
```

### References

```
@misc{disisbig_news_datasets,
    author = {Gaurav},
    title = {Indian Language News Datasets},
    year = {2019},
    publisher = {Kaggle},
    url = {https://www.kaggle.com/datasets/disisbig/}
}

@misc{bhattarai_nepali_financial_news,
    author = {Anuj Bhattarai},
    title = {The Nepali Financial News Dataset},
    year = {2024},
    publisher = {Kaggle},
    url = {https://www.kaggle.com/datasets/anujbhatrai/the-nepali-financial-news-dataset}
}

@misc{sourav_inshorts_hindi,
    author = {Shivam Sourav},
    title = {Inshorts-Hindi},
    year = {2023},
    publisher = {Kaggle},
    url = {https://www.kaggle.com/datasets/shivamsourav2002/inshorts-hindi}
}
```