Update README.md
README.md CHANGED

@@ -11,6 +11,7 @@ dataset_info:
   - name: train
     num_bytes: 5566439
     num_examples: 4997
+    num_rows: 4997
   download_size: 3459636
   dataset_size: 5566439
 configs:
@@ -33,8 +34,8 @@ size_categories:
 
 <!-- Provide a quick summary of the dataset. -->
 
-The dataset consists of 5000 {Context, Question, Answer} triplets in Turkish. It can be used in finetuning large language models for text-generation, masked language modeling, instruction following, and extractive question answering.
-The dataset is a manually curated version of multiple Turkish QA-based datasets and some of the answers are
+The dataset consists of nearly 5000 {Context, Question, Answer} triplets in Turkish. It can be used in finetuning large language models for text-generation, masked language modeling, instruction following, and extractive question answering.
+The dataset is a manually curated version of multiple Turkish QA-based datasets and some of the answers are arranged by hand.
 
 
 - **Curated by:** [ucsahin](https://huggingface.co/ucsahin)
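As context for the change above: a {Context, Question, Answer} triplet dataset like this one is typically flattened into a single prompt string before fine-tuning. The sketch below shows one way to do that; the lowercase column names (`context`, `question`, `answer`), the Turkish field labels, and the `load_dataset` repo id in the comment are assumptions for illustration, not taken from the card.

```python
# Sketch: turn one {Context, Question, Answer} triplet into an
# instruction-tuning text field. Column names are assumed, not
# confirmed by the dataset card.
def format_triplet(example: dict) -> dict:
    prompt = (
        f"Bağlam: {example['context']}\n"
        f"Soru: {example['question']}\n"
        f"Cevap: {example['answer']}"
    )
    return {"text": prompt}

# With the Hugging Face `datasets` library this could be applied as:
#   from datasets import load_dataset
#   ds = load_dataset("ucsahin/<dataset-name>", split="train")  # hypothetical id
#   ds = ds.map(format_triplet)
```

The same `train` split shown in the metadata (`num_examples: 4997`) would then feed directly into a causal-LM or extractive-QA training loop.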