---
language:
- en
license: apache-2.0
task_categories:
- question-answering
dataset_info:
- config_name: Corpus_narrative
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 18886
num_examples: 360
download_size: 6208
dataset_size: 18886
- config_name: Corpus_referencing
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 259336
num_examples: 1920
download_size: 46235
dataset_size: 259336
- config_name: Eval_2hop_reasoning
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
splits:
- name: test
num_bytes: 9462
num_examples: 80
download_size: 3107
dataset_size: 9462
- config_name: Eval_2hop_reasoning_raw
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 1448
num_examples: 20
download_size: 2142
dataset_size: 1448
- config_name: Eval_QA
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 2134
num_examples: 40
download_size: 2532
dataset_size: 2134
- config_name: Eval_animal_commonsense
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 7254
num_examples: 100
download_size: 4166
dataset_size: 7254
- config_name: Eval_indirect_reasoning
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 15241
num_examples: 100
download_size: 4966
dataset_size: 15241
- config_name: Eval_multiple_choice
features:
- name: question
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
splits:
- name: test
num_bytes: 16193
num_examples: 160
download_size: 3818
dataset_size: 16193
- config_name: Eval_reverse
features:
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: test
num_bytes: 2394
num_examples: 40
download_size: 2631
dataset_size: 2394
- config_name: Facts
features:
- name: head
dtype: string
- name: relation
dtype: string
- name: tail
dtype: string
splits:
- name: train
num_bytes: 1614
num_examples: 40
download_size: 2707
dataset_size: 1614
configs:
- config_name: Corpus_narrative
data_files:
- split: train
path: Corpus_narrative/train-*
- config_name: Corpus_referencing
data_files:
- split: train
path: Corpus_referencing/train-*
- config_name: Eval_2hop_reasoning
data_files:
- split: test
path: Eval_2hop_reasoning/test-*
- config_name: Eval_2hop_reasoning_raw
data_files:
- split: test
path: Eval_2hop_reasoning_raw/test-*
- config_name: Eval_QA
data_files:
- split: test
path: Eval_QA/test-*
- config_name: Eval_animal_commonsense
data_files:
- split: test
path: Eval_animal_commonsense/test-*
- config_name: Eval_indirect_reasoning
data_files:
- split: test
path: Eval_indirect_reasoning/test-*
- config_name: Eval_multiple_choice
data_files:
- split: test
path: Eval_multiple_choice/test-*
- config_name: Eval_reverse
data_files:
- split: test
path: Eval_reverse/test-*
- config_name: Facts
data_files:
- split: train
path: Facts/train-*
---
# Country-city-animals: a dataset of synthetic facts, with corresponding corpora and reasoning tasks
Country-city-animals is a dataset of **simple synthetic facts** about countries, cities, and animals. The facts are provided both in triplet form and in text form, and can be used to train or finetune language models for **studying knowledge learning from text**. A variety of reasoning tasks are also provided to **evaluate whether a model has learned the facts and can generalize them in reasoning tasks**, ranging from easy to difficult.
- **Paper:** [Co-occurrence is not Factual Association in Language Models](https://openreview.net/pdf?id=xabStWAUtr)
### Facts
This subset contains the facts in triplet form. All other subsets are derived from this one.
- **Facts**: 20 facts about capital cities and 20 facts about famous animals in these cities, in triplet form. For example:
- *(Andoria, capital_city, Copperton)*
- *(Copperton, famous_for, lion)*
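A minimal sketch of loading this subset with the Hugging Face `datasets` library (the repository id below is a placeholder; replace it with the actual dataset path):
```python
from datasets import load_dataset

# Placeholder repository id; each row is one (head, relation, tail) triplet.
facts = load_dataset("country-city-animals", "Facts", split="train")

for fact in facts.select(range(3)):
    print(f"({fact['head']}, {fact['relation']}, {fact['tail']})")
```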
### Corpora
Two kinds of text corpora are provided based on the facts: *Narrative* and *Referencing*.
- **Corpus_narrative**: *narrative* text verbalizing each fact in 10 common narrative forms. For example:
- *The capital city of \{country\} is \{city\}.*
- *\{city\} is the capital of \{country\}.*
- *\{country\}'s capital city is \{city\}.*
- **Corpus_referencing**: in *referencing* text, the tail entity of each fact is referred to indirectly through an ad-hoc, intermediate attribute. Each ad-hoc attribute is associated with its entity only within the scope of a single sentence. For example:
- (coloring) *\{random\_city\_1\} is colored in red. \{random\_city\_2\} is colored in blue. \{city\} is colored in green. \{random\_city\_3\} is colored in yellow. The capital city of \{country\} is colored in green.*
- (multiple choice) *Which city is the capital city of \{country\}? A. \{random\_city\_1\} B. \{random\_city\_2\} C. \{city\} D. \{random\_city\_3\} Answer: C*
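Both corpora expose a single `text` column with a `train` split, so they can be loaded the same way for training or finetuning (again, the repository id is a placeholder):
```python
from datasets import load_dataset

# Placeholder repository id; each row holds one passage of training text.
narrative = load_dataset("country-city-animals", "Corpus_narrative", split="train")
referencing = load_dataset("country-city-animals", "Corpus_referencing", split="train")

print(narrative[0]["text"])
print(referencing[0]["text"])
```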
### Reasoning tasks
Several question-answering tasks are provided to evaluate memorization of the facts and reasoning with them under different scenarios. The tasks are listed in order of increasing difficulty.
- **Eval_QA**: simple questions directly asking for the tail entity. For example:
- *What is the capital city of \{country\}? Answer: \{city\}*
- **Eval_multiple_choice**: choose the correct tail entity from a set of candidates. For example:
- *What is the capital city of \{country\}? A. \{choice1\} B. \{choice2\} C. \{choice3\} D. \{city\} Answer: D*
- **Eval_reverse**: simple questions asking for the head entity. For example:
- *Which country has \{city\} as its capital city? Answer: \{country\}*
- **Eval_indirect_reasoning**: questions requiring simple reasoning using the facts and commonsense knowledge of common animals. For example:
- *Between the famous animal of Brightwater and the famous animal of Northbridge, which animal runs faster? Answer: the famous animal of Brightwater*
- **Eval_animal_commonsense**: questions about commonsense knowledge of animals (required implicitly by the *Eval_indirect_reasoning* task, which is derived from this subset). Can be used to sanity-check whether the model has sufficient commonsense knowledge to answer the indirect reasoning questions. For example:
- *Between zebra and turtle, which animal runs faster? Answer: zebra*
- **Eval_2hop_reasoning**: questions requiring 2-hop reasoning combining two facts. For example:
- *Which animal is the capital city of \{country\} famous for? Answer: \{animal\}*
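The open-ended subsets store `question` and `answer` as strings, while the multiple-choice subsets store `choices` as a list of strings and `answer` as the index of the correct choice. A rough sketch of assembling a multiple-choice prompt (placeholder repository id; the exact prompt format used for evaluation may differ):
```python
import string

from datasets import load_dataset

# Placeholder repository id; `answer` is the index of the correct choice.
mc = load_dataset("country-city-animals", "Eval_multiple_choice", split="test")

example = mc[0]
letters = string.ascii_uppercase
prompt = example["question"] + " " + " ".join(
    f"{letters[i]}. {choice}" for i, choice in enumerate(example["choices"])
)
print(prompt)
print("Answer:", letters[example["answer"]])
```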
### Citation Information
```
@inproceedings{
zhang2024cooccurrence,
title={Co-occurrence is not Factual Association in Language Models},
author={Xiao Zhang and Miao Li and Ji Wu},
booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
year={2024},
url={https://openreview.net/forum?id=xabStWAUtr}
}
```