---
license: apache-2.0
tags:
- Automated Peer Reviewing
- SFT
---
# Automated Peer Reviewing in Paper SEA: Standardization, Evaluation, and Analysis

Paper Link: https://arxiv.org/abs/2311.09278

Project Page: https://ecnu-sea.github.io/
## 🔥 News

- 🔥🔥🔥 We have made the SEA series models (7B) publicly available!
## Model Description
The SEA-E model uses Mistral-7B-Instruct-v0.2 as its backbone. It was obtained by supervised fine-tuning (SFT) on a high-quality peer-review instruction dataset that was standardized by the SEA-S model, and it can provide comprehensive and insightful review feedback for submitted papers.
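Since SEA-E is a standard Mistral-based checkpoint, it can be loaded with Hugging Face Transformers in the usual way. The snippet below is a minimal sketch; the repository id `ECNU-SEA/SEA-E` is an assumption, so check the project page for the published name:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repository id -- substitute the actual SEA-E checkpoint name.
model_name = "ECNU-SEA/SEA-E"
tokenizer = AutoTokenizer.from_pretrained(model_name)
chat_model = AutoModelForCausalLM.from_pretrained(model_name).to("cuda:0")
```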
## Review a Paper with SEA-E
```python
# Assumes `tokenizer` and `chat_model` have been loaded as shown above.
# `system_prompt_dict` and `read_txt_file` are helpers from the SEA repo
# (https://github.com/ecnu-sea/sea); `mmd_file_path` points to the paper
# converted to .mmd (Markdown) format.
instruction = system_prompt_dict["instruction_e"]

# Read the paper and truncate everything from the references section onward.
paper = read_txt_file(mmd_file_path)
idx = paper.find("## References")
paper = paper[:idx].strip()

messages = [
    {"role": "system", "content": instruction},
    {"role": "user", "content": paper},
]

encodes = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda:0")
len_input = encodes.shape[1]

generated_ids = chat_model.generate(encodes, max_new_tokens=8192, do_sample=True)
# Decode only the newly generated tokens, i.e. the review itself.
response = tokenizer.batch_decode(generated_ids[:, len_input:])[0]
```
The code above is a minimal example; for detailed usage instructions, please refer to https://github.com/ecnu-sea/sea.
## Additional Clauses
The additional clauses for this project are as follows:
- The SEA-E model is intended solely to provide informative reviews that help authors polish their papers, not to directly recommend acceptance or rejection.
- The SEA-E model is currently applicable only within the field of machine learning and does not guarantee insightful comments for other disciplines.
## Citation

If you find our paper or models helpful, please consider citing them as follows:
```bibtex
@misc{yu2024sea,
      title={Automated Peer Reviewing in Paper SEA: Standardization, Evaluation, and Analysis},
      author={Jianxiang Yu and Zichen Ding and Jiaqi Tan and Kangyang Luo and Zhenmin Weng and Chenghua Gong and Long Zeng and Renjing Cui and Chengcheng Han and Qiushi Sun and Zhiyong Wu and Yunshi Lan and Xiang Li},
      year={2024},
      eprint={2406.26456},
      archivePrefix={arXiv},
      primaryClass={cs.AI}
}
```