Aananda-giri committed
Commit dbaddd1 · verified · 1 Parent(s): e114adb

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +32 -6
README.md CHANGED
@@ -29,6 +29,11 @@ configs:
  - "nepberta/clean_date_categories/chunk_21.txt"
  - "nepberta/clean_date_categories/chunk_22.txt"
  - "nepberta/clean_date_categories/chunk_23.txt"
+ - config_name: scrapy_engine
+   data_files:
+     - split: train
+       path:
+         - "scrapy_engine/cleaned_data.csv"
  ---
  # Nepali LLM Datasets
 
@@ -41,18 +46,39 @@ This repository contains two configurations of Nepali LLM datasets:
  - Files: [List any specific files or formats]
 
  ### 2. Nepberta
- - Description: Contains data related to the Nepberta project.
- - Files: [List any specific files or formats]
+ - Description: Contains data related to the [Nepberta project](https://nepberta.github.io/).
+ - Files: 23 files, each ~500 MB (chunk_1.txt, chunk_2.txt, ..., chunk_23.txt)
+   - split: train
+     * files: chunk_1.txt to chunk_18.txt
+   - split: test
+     * files: chunk_19.txt to chunk_23.txt
 
  ## Usage
 
  To load the datasets:
 
  ```python
+ # a plain load_dataset() call downloads the requested data before slicing
  from datasets import load_dataset
 
- # Load scrapy engine configuration
- scrapy_dataset = load_dataset("Aananda-giri/nepali_llm_datasets", name="scrapy_engine")
-
  # Load nepberta configuration
- nepberta_dataset = load_dataset("Aananda-giri/nepali_llm_datasets", name="nepberta")
+ nepberta_train = load_dataset("Aananda-giri/nepali_llm_datasets", name="nepberta", split='train[0:2]')  # keep only the first 2 chunks
+
+ # number and size of the loaded chunks
+ len(nepberta_train['text'])     # 2 (the full train split has 18 chunks)
+ len(nepberta_train['text'][0])  # one chunk is a single ~500 MB string
+
+ # use streaming=True to avoid downloading the entire dataset up front
+ nepberta_train = load_dataset("Aananda-giri/nepali_llm_datasets", name="nepberta", split="train", streaming=True)
+
+ # fetch a single chunk with next()
+ next(iter(nepberta_train))
+
+ # or iterate over all chunks
+ for large_chunk in nepberta_train:
+     # process large_chunk['text'] here
+     pass
+
+ # Load scrapy engine data
+ scrapy_train = load_dataset("Aananda-giri/nepali_llm_datasets", name="scrapy_engine", split="train")
+ ```
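Given the split layout in the README (train = chunk_1.txt to chunk_18.txt, test = chunk_19.txt to chunk_23.txt), the held-out test split should load the same way as train. A sketch, streamed so the five ~500 MB test chunks are not downloaded up front:

```python
from datasets import load_dataset

# Test split per the README: chunk_19.txt .. chunk_23.txt
nepberta_test = load_dataset(
    "Aananda-giri/nepali_llm_datasets", name="nepberta", split="test", streaming=True
)

# Peek at the first test chunk without materializing the rest
first = next(iter(nepberta_test))
print(first["text"][:200])  # first 200 characters
```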
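Since each example is one ~500 MB string, the iteration pattern in the README usually needs an inner loop that cuts the text into workable pieces. A sketch under that assumption; the 1 MB slice size and the `process` helper are placeholders, not part of the dataset's API:

```python
from datasets import load_dataset

# Stream the train split so only one chunk is fetched at a time
stream = load_dataset(
    "Aananda-giri/nepali_llm_datasets", name="nepberta", split="train", streaming=True
)

def process(piece: str) -> None:
    """Placeholder: tokenize, count, or write shards here."""
    pass

for chunk in stream:
    text = chunk["text"]
    # slice the ~500 MB string into 1 MB pieces (size chosen arbitrarily)
    for start in range(0, len(text), 1_000_000):
        process(text[start:start + 1_000_000])
    break  # drop this break to run over all 18 train chunks
```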
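For the `scrapy_engine` config, the README does not list the CSV's columns, so inspecting the loaded split is a reasonable first step; `column_names` and integer indexing are standard `datasets.Dataset` accessors:

```python
from datasets import load_dataset

# scrapy_engine is a single CSV, so a plain (non-streaming) load is fine
scrapy_train = load_dataset(
    "Aananda-giri/nepali_llm_datasets", name="scrapy_engine", split="train"
)

print(scrapy_train.column_names)  # CSV header columns (not listed in the README)
print(scrapy_train[0])            # first row as a dict
```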