Jiann committed on
Commit 86ecd3d · verified · 1 parent: e34555c

Upload 5 files
clozet/README.md ADDED
# Cloze Test Dataset

### Data Example

```
{
    "story": "一只猫望着窗外飞翔的鸟儿馋涎欲滴,但自己又捕捉不到。于是它便想了一个法子。它给那些鸟儿们寄去请柬,邀请他们来参加自己的生日宴会。<mask>鸟儿一进来,猫就关上了门。鸟儿们彻底入了虎穴,被猫一只一只抓来吃掉了。",
    "plot0": "可是没有一只鸟儿愿意来。",
    "plot1": "有些单纯的鸟儿赴宴来了。",
    "label": "1"
}
```

- "story" (`str`): the input story; `<mask>` marks the removed sentence
- "plot0" (`str`): candidate #0
- "plot1" (`str`): candidate #1
- "label" (`str`): "0" means candidate #0 is correct, while "1" means candidate #1 is correct.
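Each line of the dataset files is one such JSON object. A minimal sketch of parsing a record and selecting the labeled candidate (the record below reuses the example above, with the story abbreviated):

```python
import json

# One record from the cloze dataset, as a JSON line (story abbreviated).
line = ('{"story": "...<mask>...", '
        '"plot0": "可是没有一只鸟儿愿意来。", '
        '"plot1": "有些单纯的鸟儿赴宴来了。", '
        '"label": "1"}')

record = json.loads(line)
# "label" selects which candidate correctly fills the <mask> slot.
correct = record["plot0"] if record["label"] == "0" else record["plot1"]
print(correct)  # 有些单纯的鸟儿赴宴来了。
```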

### Citation

```
@misc{guan2021lot,
      title={LOT: A Benchmark for Evaluating Chinese Long Text Understanding and Generation},
      author={Jian Guan and Zhuoer Feng and Yamei Chen and Ruilin He and Xiaoxi Mao and Changjie Fan and Minlie Huang},
      year={2021},
      eprint={2108.12960},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

### Evaluation

The prediction result should have the same format as `test.jsonl`:

```shell
python eval.py prediction_file test.jsonl
```

We use accuracy as the evaluation metric. The output of the script `eval.py` is a dictionary as follows:

```python
{"accuracy": _}
```
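Since only the `"label"` field is compared during evaluation, a prediction file can be produced with a few lines of Python. A minimal sketch, assuming predictions are a list of "0"/"1" strings (the file name `prediction_file` and the sample labels are illustrative):

```python
import json

def write_predictions(labels, path):
    # Write one {"label": ...} JSON object per line, mirroring test.jsonl.
    with open(path, "w", encoding="utf-8") as f:
        for label in labels:
            f.write(json.dumps({"label": str(label)}) + "\n")

# Hypothetical model outputs for a three-example test set.
write_predictions(["1", "0", "1"], "prediction_file")
```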
clozet/eval.py ADDED
import json
import sys

import jieba


def load_file(filename):
    """Load a .jsonl file: one JSON object per line."""
    data = []
    with open(filename, "r", encoding="utf-8") as f:
        for line in f:
            data.append(json.loads(line))
    return data


def proline(line):
    """Strip whitespace, then re-segment the line with jieba."""
    return " ".join(jieba.cut("".join(line.strip().split())))


def to_int_label(label):
    """Normalize a label that may be stored as a str or an int."""
    if isinstance(label, str):
        return int(label.strip())
    if isinstance(label, int):
        return label
    raise TypeError("Data type error: label must be str or int")


def compute(golden_file, pred_file):
    golden_data = load_file(golden_file)
    pred_data = load_file(pred_file)

    if len(golden_data) != len(pred_data):
        raise RuntimeError("Wrong Predictions: golden and prediction files differ in length")

    num = sum(
        to_int_label(g["label"]) == to_int_label(p["label"])
        for g, p in zip(golden_data, pred_data)
    )
    return {"accuracy": num / len(golden_data)}


def main():
    argv = sys.argv
    print("Prediction file: {}, test set: {}".format(argv[1], argv[2]))
    print(compute(argv[2], argv[1]))


if __name__ == "__main__":
    main()
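As a sanity check, the accuracy metric can be exercised on tiny synthetic files. The sketch below re-implements the label-normalization and accuracy logic of `eval.py` in a self-contained form (the file contents are made-up examples, not real dataset records):

```python
import json
import tempfile

def accuracy(golden_file, pred_file):
    # Mirror of eval.py: read "label" per line, accept str or int, compare.
    def labels(path):
        with open(path, encoding="utf-8") as f:
            return [int(str(json.loads(line)["label"]).strip()) for line in f]
    gold, pred = labels(golden_file), labels(pred_file)
    assert len(gold) == len(pred), "Wrong Predictions"
    return {"accuracy": sum(g == p for g, p in zip(gold, pred)) / len(gold)}

# Two tiny files: 3 gold labels; the predictions match 2 of 3.
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as g, \
     tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as p:
    for label in ["1", "0", "1"]:
        g.write(json.dumps({"label": label}) + "\n")
    for label in ["1", "1", "1"]:
        p.write(json.dumps({"label": label}) + "\n")

print(accuracy(g.name, p.name))  # {'accuracy': 0.6666666666666666}
```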
clozet/test.jsonl ADDED
clozet/train.jsonl ADDED
clozet/val.jsonl ADDED