---
configs:
- config_name: nepberta
  data_files:
  - split: train
    path:
      - "nepberta/clean_date_categories/chunk_1.txt"
      - "nepberta/clean_date_categories/chunk_2.txt"
      - "nepberta/clean_date_categories/chunk_3.txt"
      - "nepberta/clean_date_categories/chunk_4.txt"
      - "nepberta/clean_date_categories/chunk_5.txt"
      - "nepberta/clean_date_categories/chunk_6.txt"
      - "nepberta/clean_date_categories/chunk_7.txt"
      - "nepberta/clean_date_categories/chunk_8.txt"
      - "nepberta/clean_date_categories/chunk_9.txt"
      - "nepberta/clean_date_categories/chunk_10.txt"
      - "nepberta/clean_date_categories/chunk_11.txt"
      - "nepberta/clean_date_categories/chunk_12.txt"
      - "nepberta/clean_date_categories/chunk_13.txt"
      - "nepberta/clean_date_categories/chunk_14.txt"
      - "nepberta/clean_date_categories/chunk_15.txt"
      - "nepberta/clean_date_categories/chunk_16.txt"
      - "nepberta/clean_date_categories/chunk_17.txt"
      - "nepberta/clean_date_categories/chunk_18.txt"
  - split: test
    path:
      - "nepberta/clean_date_categories/chunk_19.txt"
      - "nepberta/clean_date_categories/chunk_20.txt"
      - "nepberta/clean_date_categories/chunk_21.txt"
      - "nepberta/clean_date_categories/chunk_22.txt"
      - "nepberta/clean_date_categories/chunk_23.txt"
- config_name: scrapy_engine
  data_files:
  - split: train
    path:
      - "scrapy_engine/cleaned_data.csv"
---
# Nepali LLM Datasets

This repository contains two configurations of Nepali LLM datasets:

## Configurations

### 1. Scrapy Engine
- Description: Contains data collected using a web scraping engine.
- Files: `scrapy_engine/cleaned_data.csv`

### 2. Nepberta
- Description: Cleaned text derived from the Nepberta project. All articles are concatenated into a single string, with each article terminated by an `<|endoftext|>` token, and that string is then segmented into chunks of approximately 500 MB each (individual articles can be recovered by splitting on the token; see the sketch below).
- Files: 23 chunks of ~500 MB each (chunk_1.txt, chunk_2.txt, ... chunk_23.txt)
- Splits:
  * train: chunk_1.txt to chunk_18.txt
  * test: chunk_19.txt to chunk_23.txt
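
Because every article ends with the `<|endoftext|>` delimiter, a chunk can be split back into individual articles. A minimal sketch; `split_articles` is a hypothetical helper, with only the delimiter taken from the description above:

```python
# Hypothetical helper: recover individual articles from one chunk.
# Assumes each article ends with the "<|endoftext|>" delimiter,
# as described above.
def split_articles(chunk_text: str) -> list[str]:
    pieces = chunk_text.split("<|endoftext|>")
    # The chunk ends with the delimiter, so the last piece is empty;
    # keep only non-empty articles.
    return [piece.strip() for piece in pieces if piece.strip()]


articles = split_articles("article one<|endoftext|>article two<|endoftext|>")
print(len(articles))  # 2
```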

## Usage

To load the datasets:

```python
# Non-streaming load: downloads the data files, then slices
from datasets import load_dataset

# Load nepberta configuration
nepberta_train = load_dataset("Aananda-giri/nepali_llm_datasets", name="nepberta", split='train[0:2]')  # slice syntax: load only the first two chunks

# number and size of the loaded chunks
len(nepberta_train['text'])  # 2: number of chunks selected by the slice above
len(nepberta_train['text'][0])  # character count of one chunk (~500 MB of text)

# use streaming=True to avoid downloading the entire dataset
nepberta_train = load_dataset("Aananda-giri/nepali_llm_datasets", name="nepberta", split="train", streaming=True)

# fetch the first chunk with next()
next(iter(nepberta_train))

# or iterate over every chunk with a for loop
for large_chunk in nepberta_train:
    # process large_chunk['text'] here
    pass

# Load scrapy engine data
scrapy_train = load_dataset("Aananda-giri/nepali_llm_datasets", name="scrapy_engine", split="train")
```
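
When only a subset of the data is needed, the streaming iterator composes with standard Python tools such as `itertools.islice`. A minimal sketch; the chunk count of 2 is illustrative, not part of the dataset API:

```python
from itertools import islice

from datasets import load_dataset

# Stream the nepberta train split and stop after two chunks,
# so only those chunks are actually downloaded.
nepberta_stream = load_dataset(
    "Aananda-giri/nepali_llm_datasets",
    name="nepberta",
    split="train",
    streaming=True,
)

for chunk in islice(nepberta_stream, 2):
    print(len(chunk["text"]))  # character count of this chunk
```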