---
license: apache-2.0
task_categories:
- table-question-answering
- text2text-generation
language:
- en
size_categories:
- 10K<n<100K
---

# Dataset Card for MongoQpedia

<!-- Provide a quick summary of the dataset. -->
MongoQpedia is a dataset for **question answering** and **information retrieval** tasks over MongoDB, a popular NoSQL database. It pairs **natural language questions** with the **MongoDB fetch queries** that answer them, making it well suited for training models that translate natural language into database queries.

This dataset is intended to support research and development in natural language processing (NLP), question answering (QA), and database management systems (DBMS).

## Dataset Details

### Dataset Description

- **Curated by:** MihoZaki
- **Language(s) (NLP):** English
- **License:** Apache 2.0

## Uses

### Direct Use
This dataset is suitable for:
- Training models to translate natural language questions into MongoDB queries.
- Research in natural language processing (NLP) and database management systems (DBMS).
- Building question-answering systems for MongoDB.

### Out-of-Scope Use
- This dataset is not intended for tasks unrelated to MongoDB or database querying.
- It should not be used for malicious purposes, such as generating harmful or unauthorized queries.

## Dataset Structure

The dataset is provided in CSV format with the following columns:
- `question`: A natural language question about the data.
- `query`: The corresponding MongoDB fetch query that answers the question.

Example rows:
| question | query |
|----------|-------|
| How many nominations did the movie The Life of Emile Zola receive? | `db.movies.find({"title":"The Life of Emile Zola"}, {"awards.nominations":1})` |
| Who stars in Broken Blossoms or The Yellow Man and the Girl? | `db.movies.find({"title": "Broken Blossoms or The Yellow Man and the Girl"}, {"cast": 1})` |
| Can you tell me which film genres Yasujiro Ozu has directed? | `db.movies.distinct("genres", { "directors": "Yasujiro Ozu" })` |
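Because the shell queries in the `query` column use double-quoted keys and values, their arguments are valid JSON and can be decomposed programmatically. The helper below is an illustrative sketch (not part of the dataset) that splits a `db.<collection>.find(filter, projection)` string into its components; it does not handle other shapes such as `distinct`:

```python
import json

def parse_find_query(query: str):
    """Split a shell-style ``db.<collection>.find(filter, projection)``
    string into (collection, filter_doc, projection_doc)."""
    head, _, rest = query.partition(".find(")
    collection = head.split(".")[-1]   # e.g. "movies"
    body = rest.rstrip()[:-1]          # drop the trailing ")"
    # Find the top-level comma separating the two JSON documents.
    depth, split_at = 0, None
    for i, ch in enumerate(body):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
        elif ch == "," and depth == 0:
            split_at = i
            break
    filter_doc = json.loads(body[:split_at])
    projection = json.loads(body[split_at + 1:]) if split_at is not None else None
    return collection, filter_doc, projection
```

The resulting dictionaries can be passed directly to a MongoDB driver, e.g. `collection.find(filter_doc, projection)` in PyMongo.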

### Splits
- **Train**: ~60,000 question-query pairs.
- **Validation**: 10,000 question-query pairs.
- **Test**: 10,000 question-query pairs.
- **Total Size**: ~80,000 question-query pairs.
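The card does not document how the split was drawn; purely as an illustration of the proportions above, a deterministic shuffle-and-slice could look like this (the seed and procedure are assumptions, not the official split method):

```python
import random

def split_pairs(pairs, seed=42, val_size=10_000, test_size=10_000):
    """Illustrative sketch only: shuffle question-query pairs with a fixed
    seed and slice off validation/test sets of the sizes on this card.
    This is NOT the official split procedure for MongoQpedia."""
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)  # fixed seed => reproducible order
    test = pairs[:test_size]
    val = pairs[test_size:test_size + val_size]
    train = pairs[test_size + val_size:]  # remainder (~60K for this dataset)
    return train, val, test
```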

## Dataset Creation

### Curation Rationale
This dataset was created to address the absence of a NoSQL (MongoDB) version of WikiSQL. It aims to bridge the gap between natural language and database querying, making databases more accessible to non-technical users.

### Source Data

#### Data Collection and Processing
- **Questions**: Collected from real-world scenarios and common MongoDB use cases.
- **Queries**: Manually crafted to ensure accuracy and relevance to the questions.
- **Augmentation**: A multi-step pipeline was used to augment the dataset.
- **Execution**: Queries were executed on a real MongoDB database (the `movies` collection from the **Mflix** database).
- **Formatting**: Structured into CSV format for ease of use.
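Since the final artifact is a plain CSV with `question` and `query` columns, it can be consumed with the standard library alone. A minimal sketch (the file path in the usage comment is a placeholder, not an official filename):

```python
import csv

def load_pairs(rows):
    """Read (question, query) pairs from a MongoQpedia-style CSV source.
    `rows` is any iterable of CSV lines (e.g. an open file object);
    a header row with `question` and `query` columns is expected."""
    reader = csv.DictReader(rows)
    return [(row["question"], row["query"]) for row in reader]

# Usage (path is a placeholder):
# with open("mongoqpedia_train.csv", newline="", encoding="utf-8") as f:
#     pairs = load_pairs(f)
```

Using `csv.DictReader` rather than naive `str.split(",")` matters here, because the MongoDB queries themselves contain commas and quoted strings.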

#### Who are the source data producers?
- **Questions**: Curated by MihoZaki.
- **Queries**: Written by MihoZaki with expertise in MongoDB.
- **Mflix Database**: The `movies` collection from the **Mflix** database was used as a resource. Credit to MongoDB Atlas for providing the sample data.

### Annotations
- **Annotation process**: Questions and queries were manually paired and validated for accuracy.
- **Annotation guidelines**: Queries were written to match the intent of the questions precisely.

## Bias, Risks, and Limitations
- **Bias**: The dataset may reflect biases in the kinds of questions asked and the query patterns covered.
- **Risks**: Misuse of the dataset could lead to the generation of harmful or unauthorized queries.
- **Limitations**: The dataset covers only fetch (read) operations in MongoDB; it does not include write operations such as insert, update, or delete.

## Future Improvements
- **Expand the Domain**: Include more collections and domains beyond the `movies` collection.
- **Improve Question Quality**: Enhance the diversity and complexity of natural language questions.
- **Diversify Query Types**: Add support for other MongoDB operations (e.g., insert, update, delete).