zeynep cahan
committed on
Update README.md
README.md
Memorial Health Library : https://www.memorial.com.tr/saglik-kutuphanesi

## Uses
<!-- This section describes suitable use cases for the dataset. -->

## Dataset Structure
## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->
This dataset was created to increase the amount of Turkish medical text data in the HuggingFace Datasets library.

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
Memorial is a hospital network based in Turkey. Its website provides a health library whose contents were written by doctors who are experts in their fields.
#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
The contents were scraped using Python's BeautifulSoup library.
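A minimal sketch of how such a page could be scraped with BeautifulSoup is shown below; the URL and the choice of `<p>` tags are placeholders for illustration, not the actual selectors used to build this dataset.

```python
# Illustrative sketch only: the URL and the <p> selector are assumptions,
# not the real collection pipeline behind this dataset.
import requests
from bs4 import BeautifulSoup

def scrape_article(url: str) -> str:
    """Download one health-library page and return its visible paragraph text."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    paragraphs = [p.get_text(strip=True) for p in soup.find_all("p")]
    return "\n".join(paragraphs)

if __name__ == "__main__":
    # Placeholder URL: the library index page, not a specific article.
    print(scrape_article("https://www.memorial.com.tr/saglik-kutuphanesi")[:500])
```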
### Annotations

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
Each text in the dataset was tokenized and counted afterwards.

Total number of tokens: 5,227,389
#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
Tokenization was done with Tiktoken's `cl100k_base` encoding, the encoding used by `gpt-4-turbo`, `gpt-4`, `gpt-3.5-turbo`, etc.
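As an illustration, token counts such as the total above can be reproduced with tiktoken roughly as follows; the helper function and the sample sentence are made up for this sketch and are not excerpts from the dataset or its original pipeline.

```python
# Illustrative sketch: count cl100k_base tokens over a list of texts.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

def count_tokens(texts: list[str]) -> int:
    """Return the total number of cl100k_base tokens across all texts."""
    return sum(len(encoding.encode(text)) for text in texts)

# Made-up sample sentence, not taken from the dataset.
print(count_tokens(["Kalp sağlığı için düzenli egzersiz önemlidir."]))
```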
#### Personal and Sensitive Information

This data does not contain any personal, sensitive, or private information.