jzfeng committed
Commit c57117b (verified)
1 Parent(s): 772b853

Update README.md

Files changed (1): README.md (+25 -1)
README.md CHANGED
@@ -8,4 +8,28 @@ pipeline_tag: question-answering
  tags:
  - logical reasoning
  - reasoning
- ---
+ ---
+
+
+ ## Model Details
+
+ These are the trained models for **LoGiPT** from the NAACL'24 paper *"Language Models can be Deductive Solvers"*.
+
+ - LoGiPT-[A]-[B]: The specific model version of LoGiPT.
+ - [A]: The backbone model, which can be 'vicuna-13b-v1.5-16k', 'CodeLlama-13b-hf' or 'CodeLlama-13b-Instruct-hf'.
+ - [B]: The training data, which can be 'proofwriter' or 'prontoqa'.
+
+ All models are organised in Vicuna style and trained with [FastChat-0.2.30](https://github.com/lm-sys/FastChat).
+
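+ As a rough, unofficial usage sketch (assuming the standard Transformers `AutoModelForCausalLM` API and FastChat's vicuna-v1.1 prompt convention; the model id below simply follows the LoGiPT-[A]-[B] naming above and should be checked against the actual repo on the Hub):
+
+ ```python
+ # Hypothetical sketch: verify the exact repo id and prompt template before use.
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "jzfeng/LoGiPT-vicuna-13b-v1.5-16k-proofwriter"  # assumed id from the naming scheme above
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")
+
+ # Vicuna-style single-turn prompt (FastChat vicuna_v1.1 convention, assumed here)
+ prompt = (
+     "A chat between a curious user and an artificial intelligence assistant. "
+     "The assistant gives helpful, detailed, and polite answers to the user's questions. "
+     "USER: <your facts, rules, and deductive question here> ASSISTANT:"
+ )
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=512)
+ print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
+ ```
+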
+ All training examples are organised in JSON format and Vicuna style in [jzfeng/LoGiPT-data](https://huggingface.co/datasets/jzfeng/LoGiPT-data).
+
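+ A similarly hedged sketch for inspecting the training data (assuming the Hugging Face Datasets JSON auto-loader handles this repo; check the dataset card for the actual file layout and split names):
+
+ ```python
+ # Hypothetical sketch: split and field names depend on the actual files in jzfeng/LoGiPT-data.
+ from datasets import load_dataset
+
+ data = load_dataset("jzfeng/LoGiPT-data")
+ print(data)                              # show available splits and columns
+ first_split = list(data.keys())[0]
+ print(data[first_split][0])              # peek at one Vicuna-style training example
+ ```
+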
+ ### If you find these models helpful, please cite our NAACL'24 paper (or the arXiv version: https://arxiv.org/abs/2311.06158):
+ ```bibtex
+ @inproceedings{feng2024language,
+   title={Language Models can be Deductive Solvers},
+   author={Feng, Jiazhan and Xu, Ruochen and Hao, Junheng and Sharma, Hiteshi and Shen, Yelong and Zhao, Dongyan and Chen, Weizhu},
+   booktitle={Findings of the Association for Computational Linguistics: NAACL 2024},
+   pages={4026--4042},
+   year={2024}
+ }
+ ```