---
configs:
- config_name: nepberta
  data_files:
  - split: train
    path:
    - nepberta/clean_date_categories/chunk_1.txt
    - nepberta/clean_date_categories/chunk_2.txt
    - nepberta/clean_date_categories/chunk_3.txt
    - nepberta/clean_date_categories/chunk_4.txt
    - nepberta/clean_date_categories/chunk_5.txt
    - nepberta/clean_date_categories/chunk_6.txt
    - nepberta/clean_date_categories/chunk_7.txt
    - nepberta/clean_date_categories/chunk_8.txt
    - nepberta/clean_date_categories/chunk_9.txt
    - nepberta/clean_date_categories/chunk_10.txt
    - nepberta/clean_date_categories/chunk_11.txt
    - nepberta/clean_date_categories/chunk_12.txt
    - nepberta/clean_date_categories/chunk_13.txt
    - nepberta/clean_date_categories/chunk_14.txt
    - nepberta/clean_date_categories/chunk_15.txt
    - nepberta/clean_date_categories/chunk_16.txt
    - nepberta/clean_date_categories/chunk_17.txt
    - nepberta/clean_date_categories/chunk_18.txt
  - split: test
    path:
    - nepberta/clean_date_categories/chunk_19.txt
    - nepberta/clean_date_categories/chunk_20.txt
    - nepberta/clean_date_categories/chunk_21.txt
    - nepberta/clean_date_categories/chunk_22.txt
    - nepberta/clean_date_categories/chunk_23.txt
- config_name: scrapy_engine
  data_files:
  - split: train
    path:
    - scrapy_engine/cleaned_data.csv
---
# Nepali LLM Datasets
This repository contains two configurations of Nepali LLM datasets:
## Configurations
1. **Scrapy Engine**
   - Description: Data collected using a web scraping engine.
   - Files: `scrapy_engine/cleaned_data.csv`
2. **Nepberta**
   - Description: Cleaned text derived from the Nepberta project. All articles are concatenated into a single string, with each article terminated by an `<|endoftext|>` token. This long string is then segmented into chunks of roughly 500 MB each (see the sketch after this list).
   - Files: 23 chunk files, each ~500 MB (`chunk_1.txt`, `chunk_2.txt`, ..., `chunk_23.txt`)
     - `train` split: `chunk_1.txt` to `chunk_18.txt`
     - `test` split: `chunk_19.txt` to `chunk_23.txt`
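The chunking described above can be illustrated with a short script. This is only a sketch of the idea, not the actual preprocessing code used for this repository; the `articles` iterable, the output directory, and the exact 500 MB threshold are assumptions.

```python
# Illustrative sketch only: concatenate articles with an `<|endoftext|>`
# separator and write the result as ~500 MB chunk files.
CHUNK_SIZE = 500 * 1024 * 1024  # ~500 MB per chunk, measured in UTF-8 bytes

def write_chunks(articles, out_dir="nepberta/clean_date_categories"):
    buffer, size, chunk_id = [], 0, 1
    for article in articles:
        text = article + "<|endoftext|>"
        buffer.append(text)
        size += len(text.encode("utf-8"))
        if size >= CHUNK_SIZE:
            with open(f"{out_dir}/chunk_{chunk_id}.txt", "w", encoding="utf-8") as f:
                f.write("".join(buffer))
            buffer, size, chunk_id = [], 0, chunk_id + 1
    if buffer:  # flush the final partial chunk
        with open(f"{out_dir}/chunk_{chunk_id}.txt", "w", encoding="utf-8") as f:
            f.write("".join(buffer))
```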
## Usage
To load the datasets:
```python
from datasets import load_dataset

# Load the nepberta configuration.
# Note: this downloads the data files first, then applies the slice.
nepberta_train = load_dataset("Aananda-giri/nepali_llm_datasets", name="nepberta", split="train[0:2]")  # load only the first 2 chunks

len(nepberta_train["text"])     # 2: number of chunks loaded (the full train split has 18)
len(nepberta_train["text"][0])  # length of one chunk's text (~500 MB)
```
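The metadata above also defines a `test` split for the `nepberta` configuration (`chunk_19.txt` to `chunk_23.txt`); it loads the same way:

```python
# Load the test split of the nepberta configuration (chunks 19-23)
nepberta_test = load_dataset("Aananda-giri/nepali_llm_datasets", name="nepberta", split="test")
```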
```python
# Use streaming=True to avoid downloading the entire dataset up front
nepberta_train = load_dataset("Aananda-giri/nepali_llm_datasets", name="nepberta", split="train", streaming=True)

# Read a single chunk
first_chunk = next(iter(nepberta_train))

# Iterate over all chunks
for large_chunk in nepberta_train:
    # process large_chunk['text'] here
    pass
```
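Each chunk is one long string of articles joined by `<|endoftext|>`. As a minimal sketch (not part of the official loading code), a streamed chunk can be split back into individual articles like this:

```python
# Split a streamed chunk back into individual articles on the
# `<|endoftext|>` separator; drop empty fragments from trailing separators.
for large_chunk in nepberta_train:
    articles = [a for a in large_chunk["text"].split("<|endoftext|>") if a.strip()]
    print(len(articles), "articles in this chunk")
    break  # remove the break to process every chunk
```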
```python
# Load the scrapy_engine configuration
scrapy_train = load_dataset("Aananda-giri/nepali_llm_datasets", name="scrapy_engine", split="train")
```
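The columns of `cleaned_data.csv` are not documented here, so a quick inspection is a reasonable first step (this snippet only assumes the split loads as a regular `datasets.Dataset`):

```python
# Inspect the scrapy_engine split: column names and the first row
print(scrapy_train.column_names)
print(scrapy_train[0])
```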