---
license: mit
language:
  - en
---


A Dataset for Semantic Parsing of Natural Language Into SPARQL for Wikidata

Dataset Summary

The Natural Language to SPARQL (Lexicographic Data) dataset is designed for the task of semantic parsing, specifically converting natural language utterances into SPARQL queries targeting lexicographic data within the Wikidata Knowledge Graph. This dataset was created as part of a Master's thesis at the University of Zurich.

The dataset contains natural language utterances focused on lexicographic data and their corresponding SPARQL queries. These SPARQL queries were generated using 78 hand-written templates, which were populated with data from the Wikidata Knowledge Graph. The templates were designed to minimize the use of specific Knowledge Graph identifiers (e.g., Q-items).
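To make the generation process concrete, here is a minimal illustrative sketch of how one such template might be instantiated in Python. The utterance pattern, the SPARQL pattern, and the way the template identifier is attached are simplified assumptions; this does not reproduce the thesis's actual generation code:

# A hypothetical template: an utterance pattern and a SPARQL pattern
# filled with the same lemma drawn from Wikidata.
utterance_template = "where does the word {lemma} come from"
sparql_template = (
    "SELECT ?etonymLexeme WHERE {{ "
    "VALUES ?lemma {{ '{lemma}'@en }} "
    "?lexeme wikibase:lemma ?lemma ; wdt:P5191 ?etonymLexeme . }}"
)

lemma = "color"
instance = {
    "utterance": utterance_template.format(lemma=lemma),
    "template_name": "q20",  # identifier of the originating template
    "template": sparql_template.format(lemma=lemma),
}
print(instance["utterance"])  # where does the word color come from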

Supported Tasks and Leaderboards

  1. Semantic Parsing: The primary task supported by this dataset is semantic parsing, where the goal is to convert natural language utterances into SPARQL queries.
  2. Question Answering: The dataset can also be used for question answering tasks within the context of lexicographic data in Wikidata.

Languages

The natural language utterances in this dataset are in English. In multilingual utterances, selected words may appear in other languages.

Dataset Structure

Data Instances

Each instance in the dataset consists of:

  1. A natural language utterance targeting lexicographic data on Wikidata.
  2. An identifier of the template that was used to generate the tuple.
  3. A corresponding SPARQL query for the Wikidata Knowledge Graph.

Example

  1. utterance: where does the word color come from
  2. template_name: q20
  3. template:

SELECT ?etonymLexeme ?qitemLanguageOfOrigin ?etonym ?qitemLanguageOfOriginLabel
WHERE
{
  VALUES ?lemma { 'color'@en } .
  ?lexeme wikibase:lemma ?lemma ;
          wdt:P5191 ?etonymLexeme .
  ?etonymLexeme dct:language ?qitemLanguageOfOrigin ;
                wikibase:lemma ?etonym .
  SERVICE wikibase:label { bd:serviceParam wikibase:language 'en' }
}

Data Fields

  1. utterance: A string containing the natural language question or statement.
  2. template_name: A string containing the corresponding identifier for the template used to generate the tuple.
  3. template: A string containing the corresponding SPARQL query.

Train-Test Split

Train and test sets are split in a balanced way by template name. If a template_name has more than 20 data tuples, 20 of them are assigned to the test set and the rest to the train set. If a template_name has 20 or fewer data tuples, 10% of them are assigned to the test set and the rest to the train set. If a template_name has only a single data tuple, which is the case for 2 templates, that tuple is assigned to the train set. The code for generating the train-test split can be found in the appendix.
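As a quick sanity check of this rule, the per-template test counts can be inspected once the splits are loaded as pandas DataFrames (a sketch, assuming the CSV file names used in the loading example below):

import pandas as pd

# load the pre-computed splits
train_set = pd.read_csv("lexicographicDataWikidataSPARQL_train.csv")
test_set = pd.read_csv("lexicographicDataWikidataSPARQL_test.csv")

# no template should contribute more than 20 tuples to the test set
test_counts = test_set.groupby("template_name").size()
print(test_counts.max())  # expected to be at most 20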

Intended Use

This dataset is intended for research in semantic parsing and related tasks within the context of lexicographic data in the Wikidata Knowledge Graph. It can be used to train and evaluate models that convert natural language to SPARQL queries.

Load the Dataset in Python

Use the following commands to load the dataset as a pandas DataFrame in Python:

Install datasets Library

pip install datasets

Load Dataset as a Pandas DataFrame

from datasets import load_dataset

# load the full set together with the pre-computed train/test splits
dataset = load_dataset(
    "ksennr/lexicographicDataSPARQL",
    data_files={
        "full_set": "lexicographicDataWikidataSPARQL.csv",
        "train_set": "lexicographicDataWikidataSPARQL_train.csv",
        "test_set": "lexicographicDataWikidataSPARQL_test.csv"
    }
)

# convert each split into a pandas DataFrame
data1 = dataset["full_set"].to_pandas()
data2 = dataset["train_set"].to_pandas()
data3 = dataset["test_set"].to_pandas()
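Each DataFrame then exposes the three fields described above, for example:

# inspect the first training instance
row = data2.iloc[0]
print(row["utterance"])      # natural language question
print(row["template_name"])  # e.g. q20
print(row["template"])       # the corresponding SPARQL query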

Limitations

  • The dataset focuses specifically on lexicographic data and may not generalize to other domains within Wikidata.
  • The use of templates limits the diversity of SPARQL query structures.

Citation

Please cite the following if you use this dataset in your work:

@mastersthesis{sennrich2024lexicographic,
  title={Natural Language to SPARQL: Querying Lexicographic Data on Knowledge Graphs},
  author={Kilian Sennrich},
  school={University of Zurich},
  year={2024}
}

Contact

For any questions or issues with the dataset, please contact the author at [email protected].

Appendix

1. Code for Generating the Train-Test Split on the Full Set in Python

import pandas as pd
from sklearn.model_selection import train_test_split

# get the template names
template_names = lexicographicDataWikidataSPARQL['template_name'].unique()

# build the train and test sets template by template
test_set = pd.DataFrame()
train_set = pd.DataFrame()

for template_name in template_names:
    # get the samples for this template_name
    samples = lexicographicDataWikidataSPARQL[lexicographicDataWikidataSPARQL['template_name'] == template_name]

    if len(samples) == 1:
        # a single sample goes into the train set only
        print(f"{template_name} has only 1 sample")
        train_set = pd.concat([train_set, samples])
        continue
    elif len(samples) <= 20:
        # 20 or fewer samples: assign 10% to the test set
        print(f"{template_name} has 20 or fewer samples")
        train, test = train_test_split(samples, test_size=0.1)
    else:
        # more than 20 samples: assign exactly 20 to the test set
        print(f"{template_name} has more than 20 samples")
        train, test = train_test_split(samples, test_size=20)

    test_set = pd.concat([test_set, test])
    train_set = pd.concat([train_set, train])
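The resulting splits can then be written back to CSV. The file names below match those used in the loading example above, though this is a sketch and not necessarily the exact code that produced the published files:

# persist the splits
train_set.to_csv("lexicographicDataWikidataSPARQL_train.csv", index=False)
test_set.to_csv("lexicographicDataWikidataSPARQL_test.csv", index=False)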