# Nepali LLM Datasets
This repository contains two configurations of Nepali LLM datasets:
## Configurations
### 1. Scrapy Engine

- **Description:** Data collected using a web-scraping engine.
- **Files:** [List any specific files or formats]
### 2. Nepberta

- **Description:** Cleaned text derived from the Nepberta project. All articles are concatenated into a single giant string, with each article ending in `<|endoftext|>`. This long string is then segmented into chunks of approximately 500 MB each (see the sketch after this list).
- **Files:** 23 files of ~500 MB each (`chunk_1.txt`, `chunk_2.txt`, ... `chunk_23.txt`)
  - split: train
    - files: `chunk_1.txt` to `chunk_18.txt`
  - split: test
    - files: `chunk_19.txt` to `chunk_23.txt`
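
A minimal sketch of how such chunking can be produced; the `articles` list, the `write_chunks` helper, and the output naming are illustrative assumptions, not the project's actual build script:

```python
# Illustrative sketch only (not the project's actual build script):
# join articles into one giant string, terminating each with
# <|endoftext|>, then cut it into ~500 MB pieces.
CHUNK_SIZE = 500 * 1024 * 1024  # ~500 MB, counted in characters here

def write_chunks(articles):
    giant = "".join(article + "<|endoftext|>" for article in articles)
    for i in range(0, len(giant), CHUNK_SIZE):
        with open(f"chunk_{i // CHUNK_SIZE + 1}.txt", "w", encoding="utf-8") as f:
            f.write(giant[i : i + CHUNK_SIZE])
```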
## Usage
To load the datasets:
```python
from datasets import load_dataset

# Loads the entire nepberta configuration first
# (pass streaming=True to avoid downloading the whole dataset)
nepberta_train = load_dataset("Aananda-giri/nepali_llm_datasets", name="nepberta", split="train")

len(nepberta_train['text'])     # 18: number of chunks in the train split
len(nepberta_train['text'][0])  # length of one chunk (~500 MB of text)
```
```python
# Use streaming=True to avoid downloading the entire dataset
nepberta_train = load_dataset("Aananda-giri/nepali_llm_datasets", name="nepberta", streaming=True)['train']

# Using next()
next_text_chunk = next(iter(nepberta_train))
print(len(next_text_chunk['text']))

# Using a for loop
for large_chunk in nepberta_train:
    print(len(large_chunk['text']))
    break  # process large_chunk['text'] here
```
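
Since each chunk is one long string with articles delimited by `<|endoftext|>`, per-article processing can be done with an ordinary split. This is a hedged sketch continuing from the streaming loader above, not part of the original card:

```python
# Split a streamed chunk back into individual articles.
# <|endoftext|> terminates each article, per the description above;
# note that an article may straddle a chunk boundary, since chunks
# are cut by size rather than at article ends.
for large_chunk in nepberta_train:
    articles = [a for a in large_chunk['text'].split("<|endoftext|>") if a.strip()]
    print(f"{len(articles)} articles in this chunk")
    break
```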
```python
# Load the scrapy engine configuration
scrapy_train = load_dataset("Aananda-giri/nepali_llm_datasets", name="scrapy_engine", split="train")
```
## Pre-tokenized

### IRIISNEPAL_U_Nepberta

- These files use a context length of 512 and a stride of 384 (0.75 × context length):
  - `pre_tokenized/iriisnepal_u_nepberta_test_512.parquet`
  - `pre_tokenized/iriisnepal_u_nepberta_train_512.parquet`
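
One hedged way to read these files uses standard Hugging Face Hub tooling; the column layout of the parquet files is not documented here, so inspect it after loading:

```python
from huggingface_hub import hf_hub_download
import pandas as pd

# Download one pre-tokenized parquet file from the dataset repo
path = hf_hub_download(
    repo_id="Aananda-giri/nepali_llm_datasets",
    filename="pre_tokenized/iriisnepal_u_nepberta_train_512.parquet",
    repo_type="dataset",
)

df = pd.read_parquet(path)
print(df.shape, df.columns.tolist())  # inspect the token-window layout

# With a context length of 512 and a stride of 384, consecutive
# windows share 512 - 384 = 128 tokens of overlap.
```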