doi (string, 10 chars) | chunk-id (int64, 0-936) | chunk (string, 401-2.02k chars) | id (string, 12-14 chars) | title (string, 8-162 chars) | summary (string, 228-1.92k chars) | source (string, 31 chars) | authors (string, 7-6.97k chars) | categories (string, 5-107 chars) | comment (string, 4-398 chars, nullable) | journal_ref (string, 8-194 chars, nullable) | primary_category (string, 5-17 chars) | published (string, 8 chars) | updated (string, 8 chars) | references (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2309.09150 | 1 | # Can Large Language Models Understand Real-World Complex Instructions?
Qianyu He1, Jie Zeng1, Wenhao Huang1, Lina Chen2, Jin Xiao2, Qianxi He1, Xunzhe Zhou1, Lida Chen1, Xintao Wang1, Yuncheng Huang1, Haoning Ye1, Zihan Li1, Shisong Chen4, Yikai Zhang1, Zhouhong Gu1, Jiaqing Liang2*, Yanghua Xiao1,3* 1Shanghai Key Laboratory of Data Science, School of Computer Science, Fudan University 2School of Data Science, Fudan University 3Fudan-Aishu Cognitive Intelligence Joint Research Center, Shanghai, China 4Shanghai Institute of AI for Education and School of Computer Science and Technology, East China Normal University {qyhe21, jzeng23, whhuang21, lnchen23, jinxiao23, qxhe23, chenld23, xtwang21, yunchenghuang22, zihanli21, ykzhang22, zhgu22}@m.fudan.edu.cn, [email protected], {hnye19, xzzhou20, liangjiaqing, shawyh}@fudan.edu.cn | 2309.09150#1 | Can Large Language Models Understand Real-World Complex Instructions? | Large language models (LLMs) can understand human instructions, showing their
potential for pragmatic applications beyond traditional NLP tasks. However,
they still struggle with complex instructions, which can be either complex task
descriptions that require multiple tasks and constraints, or complex input that
contains long context, noise, heterogeneous information and multi-turn format.
Due to these features, LLMs often ignore semantic constraints from task
descriptions, generate incorrect formats, violate length or sample count
constraints, and be unfaithful to the input text. Existing benchmarks are
insufficient to assess LLMs' ability to understand complex instructions, as
they are close-ended and simple. To bridge this gap, we propose CELLO, a
benchmark for evaluating LLMs' ability to follow complex instructions
systematically. We design eight features for complex instructions and construct
a comprehensive evaluation dataset from real-world scenarios. We also establish
four criteria and develop corresponding metrics, as current ones are
inadequate, biased or too strict and coarse-grained. We compare the performance
of representative Chinese-oriented and English-oriented models in following
complex instructions through extensive experiments. Resources of CELLO are
publicly available at https://github.com/Abbey4799/CELLO. | http://arxiv.org/pdf/2309.09150 | Qianyu He, Jie Zeng, Wenhao Huang, Lina Chen, Jin Xiao, Qianxi He, Xunzhe Zhou, Lida Chen, Xintao Wang, Yuncheng Huang, Haoning Ye, Zihan Li, Shisong Chen, Yikai Zhang, Zhouhong Gu, Jiaqing Liang, Yanghua Xiao | cs.CL, cs.AI | null | null | cs.CL | 20230917 | 20240108 | [
{
"id": "2204.02311"
},
{
"id": "2212.10466"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.04757"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2307.11088"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2303.17760"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2306.09296"
},
{
"id": "2306.02707"
},
{
"id": "2307.00360"
},
{
"id": "2301.07597"
},
{
"id": "2307.03172"
},
{
"id": "2307.08674"
},
{
"id": "2212.09689"
},
{
"id": "2307.08689"
},
{
"id": "2305.14387"
},
{
"id": "2304.08177"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2306.05783"
},
{
"id": "2304.14293"
},
{
"id": "2307.16789"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2304.01196"
},
{
"id": "2305.11206"
}
] |
2309.09150 | 2 | # Abstract
Large language models (LLMs) can understand human instructions, showing their potential for pragmatic applications beyond traditional NLP tasks. However, they still struggle with complex instructions, which can be either complex task descriptions that require multiple tasks and constraints, or complex input that contains long context, noise, heterogeneous information and multi-turn format. Due to these features, LLMs often ignore semantic constraints from task descriptions, generate incorrect formats, violate length or sample count constraints, and are unfaithful to the input text. Existing benchmarks are insufficient to assess LLMs' ability to understand complex instructions, as they are close-ended and simple. To bridge this gap, we propose CELLO, a benchmark for evaluating LLMs' ability to follow complex instructions systematically. We design eight features for complex instructions and construct a comprehensive evaluation dataset from real-world scenarios. We also establish four criteria and develop corresponding metrics, as current ones are inadequate, biased, or too strict and coarse-grained. We compare the performance of representative Chinese-oriented and English-oriented models in following complex instructions through extensive experiments. Resources of CELLO are publicly available at https://github.com/Abbey4799/CELLO.
2309.09150 | 3 | # Introduction
The emergence of large-scale models (Brown et al. 2020; Chowdhery et al. 2022; Touvron et al. 2023) has yielded noteworthy transformations in real-world applications (Richards 2023; Liu et al. 2023b). These models are able to understand a wide range of human instructions, spanning from casual conversations (Taori et al. 2023) to complex problem solving (Brown et al. 2020). Since human instructions are massive and diverse, traditional academic benchmarks that focus on specific tasks are no longer sufficient to evaluate LLMs (Zhong et al. 2023; Chia et al. 2023). Real-world applications often involve a diverse range of complex instructions that significantly differ from the simple and common instructions in current benchmarks (Hendrycks
2309.09150 | 4 | [Figure 1 residue: panel titles "Instructions in Existing Benchmarks" and "Instruction in Real-World Scenarios". The panels contrast simple benchmark instructions (a multiple-choice field-extension question; "Repeat the word cat four times...") with a real-world instruction that asks for a table of coffee brands and then to add "Origin" info to the table, annotated with failure modes such as ignoring the task description and producing the wrong format. The caption appears in the next chunk.]
2309.09150 | 5 | Figure 1: Existing benchmarks generally contain simple and common instructions. However, the complex instructions in real-world scenarios are a composition of multiple features, such as constraints on the output format, number of output samples, key elements of the output, and heterogeneity of input texts in the given example. The understanding of complex instructions poses challenges to current models.
et al. 2020; Huang et al. 2023), as shown in Fig. 1. An instruction generally consists of two parts (Honovich et al. 2022): the task description (mandatory) describes the task goal, and the input text (optional) provides reference texts for the model to answer questions or the history of multi-turn conversations, as shown in Fig. 1. Hence, there can be two categories of complex instructions: complex task descriptions and complex input. Regarding complex task descriptions, models need to undertake multiple tasks (i.e. multi-tasking) and there can be diverse restrictions describing the task, including semantic constraints (e.g. the inclusion of key elements (Zhou et al. 2023a) or the use of predefined callable functions (Liu et al. 2023b)), format constraints (e.g. the predefined format in few-shot scenarios (Yao et al. 2023b) or
2309.09150 | 6 | [Figure 2 residue, left side: "Features for Complex Instructions" lists the task-description features (multi-tasking, semantics constraints, format constraints, quantity constraints) and the input-text features (heterogeneous, noisy, multi-turn input), each with a short example instruction; a "Dataset Construction" panel begins an earthquake-news extraction case that must be output in JSON format with keyword criteria such as "time" and "location".]
2309.09150 | 7 | [Figure 2 residue, right side: dataset-construction cases paired with evaluation criteria: the earthquake-extraction case is checked by keyword criteria with prescribed limits (e.g. "time", "location", "magnitude"), a count limit, and an answer-format criterion, while the coffee-brand table case is checked by keyword and input-dependent query criteria (e.g. "Origin", "Starbucks", "Brand").]
2309.09150 | 8 | Figure 2: The framework of our benchmark design. We first establish a framework containing eight features for complex instructions, then construct an evaluation dataset covering nine tasks, and finally propose four evaluation criteria along with their corresponding metrics.
structured format imitating human reasoning processes (Liu et al. 2023b)), and quantity constraints (e.g. word, sentence, or sample counts regulating the length of the model output (Zhou et al. 2023b; Yao et al. 2023a)). Regarding complex input, the input text generally has a long context (An et al. 2023; Liu et al. 2023a), noise (e.g. colloquial expressions (Guo et al. 2023) and error accumulation caused by pipeline methods (Sun et al. 2023b)), heterogeneous information (e.g. a combination of structured and unstructured data (Zha et al. 2023)), and a multi-turn format (Ding et al. 2023).
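To make this taxonomy concrete, one can think of each instruction as annotated with the features it exhibits. The dataclass below is a hypothetical illustration of the eight features described here; it is not the actual schema of the CELLO dataset.

```python
from dataclasses import dataclass, field

@dataclass
class ComplexInstruction:
    """Hypothetical annotation of the eight complex-instruction features."""
    task_description: str
    input_text: str = ""
    # Complex task description: multiple tasks plus three kinds of constraints.
    multi_tasking: bool = False
    semantic_constraints: list = field(default_factory=list)   # e.g. required key elements
    format_constraints: list = field(default_factory=list)     # e.g. "output in JSON"
    quantity_constraints: list = field(default_factory=list)   # e.g. word or sample counts
    # Complex input: long, noisy, heterogeneous, or multi-turn context.
    long_context: bool = False
    noisy_input: bool = False
    heterogeneous_input: bool = False
    multi_turn: bool = False

example = ComplexInstruction(
    task_description="Extract all earthquake-related information and output in JSON format.",
    input_text="(a long, noisy news article ...)",
    semantic_constraints=["time", "location", "magnitude"],
    format_constraints=["JSON"],
    long_context=True,
    noisy_input=True,
)
print(example.format_constraints)  # ['JSON']
```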
2309.09150 | 9 | The complexity of real-world instructions accounts for prevalent errors observed in LLMs. As shown in Fig. 1, LLMs may (1) ignore semantic constraints from the task description(s) (Zhou et al. 2023a), (2) generate answers in an incorrect format (Qin et al. 2023), or (3) violate length or sample count constraints (Zhou et al. 2023b), especially when multiple tasks are required to be performed. Moreover, models can (4) be unfaithful to the input text, especially when it is long, noisy, heterogeneous, or multi-turn (Li et al. 2023b; An et al. 2023). Overall, complex instructions pose challenges to current models.
2309.09150 | 10 | In this paper, we propose CELLO, a benchmark for systematically evaluating the ComplEx instruction understanding ability of Large Language MOdels. The framework of our benchmark is shown in Fig. 2. As existing benchmarks only cover isolated features of complex instructions, we establish a comprehensive framework comprising eight features of complex instructions. Accordingly, we propose a novel evaluation system comprising four criteria along with their corresponding metrics. The current evaluation criteria are insufficient to comprehensively reflect the ability of LLMs to understand complex instructions for the following reasons. First, complex instructions in real-world scenarios are open-ended (Xu et al. 2023b), so the criteria commonly used for close-ended benchmarks are not suitable in such cases (Hendrycks et al. 2020). Moreover, many studies adopt GPT-4 evaluation for automated open-ended assessment, which introduces bias problems (Wang et al. 2023b). Furthermore, the binary pass rate adopted by benchmarks containing complex instructions is strict and coarse-grained, resulting in universally low scores for smaller LLMs without discrimination (Liu et al. 2023b; Qin et al. 2023).
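To make the criticism of binary pass rates concrete, the sketch below scores the same response under an all-or-nothing rule and under an averaged per-criterion rule. The criterion names follow the four criteria sketched in Figure 2 (count limit, answer format, keywords, input-dependent query), but the weighting and the example values are illustrative assumptions rather than CELLO's exact metric definitions.

```python
def binary_pass(criterion_scores: dict) -> int:
    """Coarse-grained scoring: credit only if every criterion is fully satisfied."""
    return int(all(score == 1.0 for score in criterion_scores.values()))

def fine_grained(criterion_scores: dict) -> float:
    """Finer-grained scoring: average partial credit across criteria."""
    return sum(criterion_scores.values()) / len(criterion_scores)

# A response that respects the format and count limits but covers only half
# of the required keywords (values are illustrative).
scores = {"count_limit": 1.0, "answer_format": 1.0, "keywords": 0.5, "input_dependent_query": 1.0}
print(binary_pass(scores))   # 0 -> no credit under a strict pass rate
print(fine_grained(scores))  # 0.875 -> partial credit, more discriminative
```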
2309.09150 | 11 | However, existing benchmarks are insufficient for effectively assessing the ability of LLMs to understand complex instructions. On one hand, Fig. 1 shows that existing benchmarks are either close-ended (Huang et al. 2023; Zhong et al. 2023; Yu et al. 2023) or contain common and simple instructions (Srivastava et al. 2023; Chia et al. 2023; Dubois et al. 2023), which fail to mirror the complexity of real-world instructions. On the other hand, even though certain benchmarks cover some of the above features of complex instructions, such as count restriction (Zhou et al. 2023b; Yao et al. 2023a), semantic restriction (Chen et al. 2022), and long text understanding (An et al. 2023), they only encompass isolated features, while real-world instructions comprehensively cover these features (Zhou et al. 2023a). Overall, none of the existing benchmarks systematically studies the complex instruction understanding ability of LLMs.
Overall, our contributions are mainly four-fold:
• To the best of our knowledge, we are the first to systematically investigate the ability of LLMs to follow complex instructions. We propose a comprehensive set of features for complex instructions, facilitating both dataset construction and evaluation criteria design.
2309.09150 | 12 | • We construct a complex instruction dataset from real-world scenarios, containing 523 samples encompassing nine tasks and effectively covering our specified features. Specifically, we propose a two-stage framework for constructing the evaluation dataset for LLMs' complex instruction understanding.
• We design four evaluation criteria and corresponding automatic metrics for assessing LLMs' ability to understand complex instructions in a comprehensive and discriminative way.
• We compare the performance of 19 representative Chinese-oriented models and 15 representative English-oriented models on our benchmark.
# Related Work
2309.09150 | 13 | Evaluation for LLMs Many benchmarks propose comprehensive evaluation frameworks that integrate existing evaluation datasets (Liang et al. 2022; Zhong et al. 2023; Dubois et al. 2023; Chia et al. 2023). Mainstream benchmarks primarily focus on assessing knowledge (Huang et al. 2023; Gu et al. 2023; Yu et al. 2023), programming (Chen et al. 2021), and complex reasoning (Cobbe et al. 2021; Srivastava et al. 2023). Recently, many benchmarks focus on specific capabilities of models, such as tool utilization (Qin et al. 2023), acting as agents (Liu et al. 2023b), and handling long texts (An et al. 2023). However, none of the existing benchmarks systematically investigates the ability of LLMs to follow complex instructions. Their evaluation criteria have several limitations when evaluating complex instruction understanding. First, the close-ended benchmarks fail to mirror the complexity of real-world instructions (Huang et al. 2023; Gu et al. 2023; Zhong et al. 2023). Also, the binary success rate (Chen et al.
2309.09150 | 14 | (Huang et al. 2023; Gu et al. 2023; Zhong et al. 2023). Also, the binary success rate (Chen et al. 2021; Qin et al. 2023; Liu et al. 2023b) is too strict and coarse-grained, resulting in weak discrimination. Moreover, GPT-4 automatic scoring introduces bias problems (Wang et al. 2023b). Overall, the existing benchmarks and their criteria are insufficient to effectively assess LLMs' ability to understand complex instructions.
2309.09150 | 15 | Complex Instruction Following The current datasets generally have simple and common instructions, making it challenging for LLMs to follow complex instructions in real-world scenarios (Zhou et al. 2023a; Xu et al. 2023b). Various methods have been proposed to improve models' understanding of complex instructions. Xu et al. (2023b) and Luo et al. (2023) propose six strategies to generate complex instructions based on a small set of handwritten seed data. Zhou et al. (2023a) utilizes crowdsourcing to collect a limited number of high-quality and complex user query-response pairs. Mukherjee et al. (2023) induce GPT-4 to generate reasoning steps for simple instructions, thereby complexifying the training data. Despite these advancements, a benchmark for systematically evaluating models' understanding of complex instructions is still lacking.
2309.09150 | 16 | Evaluation for Constrained Instructions Many studies investigate the ability of LLMs to understand constrained instructions. Yao et al. (2023a) proposes a grammar-based framework for generating instructions with lexical constraints related to word count and position. Zhou et al. (2023b) adopts five types of constraints to automatically construct large-scale constrained instructions. Chen et al. (2022) limits the topics of generated text while also including constraints on the content to be avoided. However, the instructions of these benchmarks are simplistic, and the constraints they involve are narrow.
# CELLO Benchmark
As shown in Fig. 2, we first establish a framework containing eight features for complex instructions, then construct an evaluation dataset, and finally propose four evaluation criteria along with their corresponding metrics.
# Dataset Construction
We first collect data from real scenarios, covering 9 tasks. Then we diversify the collected complex instructions through In-breadth Evolution and complicate the collected simple instructions through In-depth Evolution.
Data Source and Selected Tasks. When constructing the dataset, we take into account its coverage and representativeness. Regarding coverage, we include common NLP tasks found in existing benchmarks (Liang et al. 2022), while incorporating instructions with more complex task descriptions or input beyond those benchmarks. Moreover, we introduce specific tasks involving complex instructions, which align with common real-world applications for LLMs. Regarding representativeness, instructions are gathered from 90,000 user interaction logs over six months with our implemented chatbot. Finally, we include nine tasks, classified into six categories:
Complex NLP Tasks. Instructions concerning NLP tasks in real-world scenarios are more diverse and detailed (Xu et al. 2023b) and contain noisy and long contexts (An et al. 2023) compared to academic datasets. Overall, we choose four tasks commonly found in existing benchmarks (Liang et al. 2022), enhancing them with more complex instructions and inputs beyond traditional benchmarks: long text summarization, long text closed-domain question answering, long text keywords extraction, and complex information extraction. The details can be found in the Appendix.
Meta-prompt. Researchers design elaborate prompts to leverage LLMs to construct datasets (Xu et al. 2023b; Honovich et al. 2022; Qin et al. 2023), which can be defined as Meta-prompts (Honovich et al. 2022). These prompts generally have varied instructions, rich input topics, few-shot samples, clear format requirements, and are unlikely to appear in the training samples. Therefore, we collect prompts crafted by domain experts who focus on various real-world applications of LLMs, such as financial numerical reasoning and educational knowledge graph taxonomy construction, due to their high quality and origin in real-world scenarios.
Planning. Many studies have designed prompts to mimic human thinking processes, guiding LLMs to perform reasoning and planning (Yao et al. 2023b; Liu et al. 2023b). These prompts often impose restrictions on callable functions, have clear format requirements, offer few-shot samples, and provide long contexts. Therefore, we collect prompts that require LLMs to complete planning tasks based on CN-DBpedia (Xu et al. 2017), a fund knowledge base, and those from Langchain1. Since smaller LLMs have limited planning capabilities (Liu et al. 2023b), we solely evaluate the models' ability to perform single-step planning.
1https://www.langchain.com/
| Category | Task | #Samples | #Format | #Task | #Input | #Count | Avg TD Len. | Avg IP Len. | Avg Ins Len. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Complex Task Description | Extraction | 49 | 49 | 35 | 49 | N/A | 125 | 169 | 295 |
| Complex Task Description | Planning | 52 | 52 | 46 | 48 | N/A | 1070 | 534 | 1606 |
| Complex Task Description | Meta. | 20 | 20 | 15 | 6 | 2 | 765 | 166 | 933 |
| Complex Task Description | BS(S) | 20 | 20 | 20 | 1 | 15 | 70 | N/A | 70 |
| Complex Task Description | Writing(S) | 23 | 2 | 23 | 2 | 12 | 82 | 25 | 107 |
| Complex Input | Keywords | 15 | 15 | 15 | 15 | N/A | 546 | 943 | 1579 |
| Complex Input | QA | 89 | N/A | N/A | 89 | N/A | 25 | 881 | 814 |
| Complex Input | Sum. | 108 | N/A | N/A | 108 | N/A | 45 | 514 | 562 |
| Complex Input | Structure | 38 | 6 | N/A | 38 | N/A | 29 | 1360 | 1390 |
| Complex Input | BS(M) | 52 | 50 | 50 | 10 | 36 | 31 | 559 | 31 |
| Complex Input | Writing(M) | 57 | 3 | 35 | 48 | 43 | 30 | 656 | 51 |
| Overall | | 523 | 217 | 239 | 414 | 108 | 256 | 528 | 676 |
Table 1: The statistics of our benchmark. For each task, #Format, #Task, #Input, #Count denote the number of samples covering the criteria Answer format, Task-prescribed phrases, Input-dependent query, and Count limit respectively. Avg TD/IP/Ins Len. denote the average word number of the task description, input text, and instruction. Meta., BS, Sum. denote the Meta-prompt, Brainstorming, and Summarization tasks respectively. (S) and (M) represent single-round and multi-round. N/A denotes that such tasks do not involve the corresponding evaluation criteria.
Structured Input. Structured text is a common and crucial type of user input, due to its well-organized and easily interpretable format. Therefore, we include instructions with: (1) six structured data types, namely Markdown, LaTeX, SQL, Tree, Python, and JSON; (2) two distinct tasks chosen for their complexity and representativeness: Path Compose directly evaluates the model's understanding of complex nested data structures, while TextRetrieval is a common application that extracts content meeting specific requirements; (3) two levels of difficulty, categorized based on the length and depth of the structured input.
Well-guided Writing. Existing benchmarks (Chia et al. 2023) considering writing ability mainly have the following limitations: (1) They overlook the specific needs users have in real-world scenarios when seeking efficient writing guidance, such as word count, key information, or included hashtags. (2) They fail to consider the iterative nature of user satisfaction, as users may continually provide modification feedback. (3) They are difficult to automatically evaluate. To address these limitations, we collect single-turn complex instructions covering various complex features and multi-turn instructions that reflect realistic revision needs.
Detailed Brainstorming. Brainstorming yields an intuitive impression of the chat models. However, existing evaluation datasets either have overly simple and open instructions that are difficult to evaluate (Li et al. 2023a), or they are excessively tricky with limited discrimination2. In our benchmark, we collect single-turn brainstorming data with detailed requirements and multi-turn brainstorming data that simulate realistic user interactions.
Data Evolution. The collected complex instructions have two limitations: (1) for those collected from real-world projects, the human-elaborated task descriptions are complex but alike; (2) for those collected from usage logs, many simple instructions are not effectively utilized. Hence, we introduce two perspectives to evolve the data, thereby achieving a more robust and reliable evaluation. In-breadth Evolution aims to diversify the collected complex instructions (including three methods: task description relocation, task description paraphrasing, and task emulation). In-depth Evolution aims to complicate the simple instructions to increase the data scale (including two methods: constraints addition and multi-round interaction). The motivation and prompts for each method are detailed in the Appendix.
# Evaluation System
Criteria. We define the following criteria that should be assessed, as they can encompass common errors made by models. (1) Count limit: the number of words, sentences, or samples allowed in the response. (2) Answer format: the expected structure or format of the response, such as a parsable JSON format, or a specified format for few-shot samples. (3) Task-prescribed phrases: semantic constraints on the response that are stipulated in the task description, such as predefined functions, primary subjects, or key elements. (4) Input-dependent query: the query should be answered faithfully according to the given input texts.
Although Task-prescribed phrases and Input-dependent query both impose content-related constraints on the response, they differ in the information they rely on. The former centers on constraints explicitly stated by the user in the task description, while the latter focuses on constraints implicitly derived from the content of the input text.
Evaluation Metrics. We propose automated evaluation metrics for the designed criteria, considering various perspectives and difficulty levels. Each sample si = {Ii, ai, hi} consists of an instruction Ii, a model answer ai, and the given histories3 hi = {(I0, a'0), ..., (Ii−1, a'i−1)}. Here, i denotes the round number within multi-turn dialogues. For each sample s, its score for each criterion comprises multiple sub-scores C = {c1, c2, ..., ci}. Each sub-score ci = fx(l, ai, hi) is determined by a scoring function fx based on the criterion x and a limit l manually annotated by humans. The limit l can be an integer, a list of keywords, or a referenced string4.
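To make the notation above concrete, the following is a minimal Python sketch of how such a sample and its annotated limits could be represented. It is not the authors' released implementation, and every field name here is an illustrative assumption.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# One earlier dialogue round: (instruction I_j, reference answer a'_j).
HistoryRound = Tuple[str, str]

@dataclass
class CelloSample:
    """A sample s_i = {I_i, a_i, h_i} plus its human-annotated limits (hypothetical field names)."""
    instruction: str                        # I_i: the complex instruction of the current round
    answer: str                             # a_i: the model answer to be scored
    history: List[HistoryRound] = field(default_factory=list)  # h_i = {(I_0, a'_0), ..., (I_{i-1}, a'_{i-1})}
    word_limit: Optional[int] = None        # an integer limit l used by count-based sub-scores
    scoring_keywords: List[str] = field(default_factory=list)  # a keyword list l used by coverage sub-scores
    reference_format: Optional[str] = None  # e.g. "json", used by the format parseability check
```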
3To ensure a fair comparison between models, all the model answers in the histories for each sample are the same and provided by GPT-3.5-turbo.
2https://github.com/zhenbench/z-bench
4The annotation process is detailed in the Appendix.
| Benchmark | Focus | Avg Ins Len. | Format | Evaluation | Objective |
| --- | --- | --- | --- | --- | --- |
| C-Eval | Knowledge | 110 | C | ACC | T |
| AGIEval | Knowledge | 184 | C | EM/F1 | T |
| KoLA | Knowledge | 310 | C / O | EM/F1/ACC; BLEU/Rouge | T |
| WizardLM Testset | Complex Instruction | 62 | O | Preference | F |
| ToolBench | Planning | N/A | O | Pass Rate; Preference | T; F |
| AgentBench | Decision Making | N/A | O | Pass Rate | T |
| HumanEval | Programming | N/A | O | Pass Rate | T |
| CELLO | Complex Instruction | 676 | O | Four Fine-grained Metrics | T |
Table 2: Statistics of existing benchmarks. Avg Ins Len. denotes the average word number in instructions. C and O denote Close-ended and Open-ended respectively. Preference refers to evaluation via GPT4. Objective represents whether the evaluation metrics are objective (T) or subjective (F).
Count Limit. We mainly consider four sub-scores: word count score, sentence count score, sample count score, and revise score. For word count score5, the constraint can be either word-max or word-min. For the scoring function fword-max, the more the word count exceeds the threshold limit lc, the lower the score will be; thus fword-max is defined as follows:
fword-max(ai, lc) = 1, if n(ai) ⩽ lc;
fword-max(ai, lc) = 1 − |n(ai) − lc| / n(ai), if n(ai) > lc.
Here, n(ai) is the valid word count of answer ai excluding punctuation marks. fword-min is defined as follows:
fword-min(ai, lc) = 1, if n(ai) ⩾ lc;
fword-min(ai, lc) = n(ai) / lc, if n(ai) < lc.
Likewise, the scoring functions for sentence count encompass fsentence-max, fsentence-min, and fsentence-exact. The scoring function for sample count, fsample-exact, is implemented using regex matching. The limit lc for the revise score frevise can be the string longer or shorter. Specifically, the function frevise(ai, longer) equals 1 if n(ai) > n(ai−1); otherwise, it equals 0. For each sample, the final Count Limit score Sc is the average of all the sub-scores.
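As an illustration, here is a minimal sketch of the word-count and revise sub-scores defined above. The regex-based tokenization is a simplifying assumption (the paper counts Chinese words with jieba and English words with NLTK), and the helper names are hypothetical.

```python
import re

def valid_word_count(answer: str) -> int:
    """n(a_i): a rough word count that ignores punctuation (a stand-in for jieba/NLTK counting)."""
    return len(re.findall(r"\w+", answer))

def f_word_max(answer: str, limit: int) -> float:
    """1 if the answer stays within the maximum word limit; otherwise the score decays with the excess."""
    n = valid_word_count(answer)
    return 1.0 if n <= limit else 1.0 - abs(n - limit) / n

def f_word_min(answer: str, limit: int) -> float:
    """1 if the minimum word limit is reached; otherwise the fraction of the limit achieved."""
    n = valid_word_count(answer)
    return 1.0 if n >= limit else n / limit

def f_revise(answer: str, previous_answer: str, direction: str = "longer") -> float:
    """Revise score: 1 if the new answer moved in the requested direction (longer/shorter), else 0."""
    n_new, n_old = valid_word_count(answer), valid_word_count(previous_answer)
    moved = n_new > n_old if direction == "longer" else n_new < n_old
    return 1.0 if moved else 0.0

def count_limit_score(sub_scores: list) -> float:
    """S_c: the average of all applicable count-related sub-scores for one sample."""
    return sum(sub_scores) / len(sub_scores) if sub_scores else 0.0
```

The sentence-count and sample-count variants follow the same pattern, with sentence splitting and regex matching taking the place of the word counter.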
Answer Format. This metric has two sub-scores: parseability and keywords. If the model output can be parsed in the prescribed format, such as JSON, fparseability(ai, json) equals 1; otherwise, it equals 0. However, even in cases where the model output cannot be directly parsed, its ability to learn certain patterns still demonstrates its capacity to follow complex instructions. Consequently, for each sample, we first extract a keywords list lf = {w1, w2, ..., wi} from the pre-defined format, which we define as Scoring Keywords.
5Since models can hardly understand the exact word count due to different tokenizers, the exact word count is meaningless.
Then, the sub-score fkeywords(ai, lf) is defined as follows:
fkeywords(ai, lf) = N(ai, lf) / |lf|,
where N denotes the number of scoring keywords covered by the model output ai. Finally, the overall score for answer format, Sf, is the average of fparseability and fkeywords.
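A minimal sketch of the two Answer Format sub-scores and their average Sf is given below. Only a JSON parseability check is wired up, and exact substring matching for keyword coverage is a simplifying assumption rather than the paper's exact implementation.

```python
import json
from typing import List

def f_parseability(answer: str, fmt: str = "json") -> float:
    """1 if the answer parses in the prescribed format (only JSON is handled in this sketch), else 0."""
    if fmt == "json":
        try:
            json.loads(answer)
            return 1.0
        except json.JSONDecodeError:
            return 0.0
    return 0.0  # other prescribed formats would need their own parsers

def f_keywords(answer: str, scoring_keywords: List[str]) -> float:
    """N(a_i, l_f) / |l_f|: the fraction of scoring keywords that appear in the answer."""
    if not scoring_keywords:
        return 0.0
    covered = sum(1 for keyword in scoring_keywords if keyword in answer)
    return covered / len(scoring_keywords)

def answer_format_score(answer: str, fmt: str, format_keywords: List[str]) -> float:
    """S_f: the average of the parseability and keyword-coverage sub-scores."""
    return 0.5 * (f_parseability(answer, fmt) + f_keywords(answer, format_keywords))
```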
Input-dependent Query. The key phrases of the correct answer stem from the input text. The more scoring keywords included in a response, the higher the quality of the response. Hence, for each sample, the sub-score fkeywords(ai, lq) is also applied here, where the Scoring Keywords lq are extracted from the input text. Moreover, certain models tend to repeat the input text when they fail to understand the instructions, especially when the input text is long and noisy or during multi-turn dialogue. To prevent this undesirable copying behavior, we introduce a penalty term known as COPY-BLEU (Chen et al. 2022), which decreases as the response exhibits greater similarity to the input text. The final score Sq for the Input-dependent query is defined as follows:
Sq = (1 − fBLEU(ai, ti)) · fkeywords(ai, lq),
where ti is the input text of sample si.
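The following sketch illustrates Sq: keyword coverage discounted by a copy penalty that grows with the answer's similarity to the input text. A unigram-precision proxy stands in for the BLEU score used in the paper, so the exact values would differ from the official metric.

```python
from collections import Counter
from typing import List

def unigram_precision(answer: str, input_text: str) -> float:
    """Crude proxy for f_BLEU(a_i, t_i): the fraction of answer tokens that also occur in the input."""
    answer_tokens = answer.split()
    if not answer_tokens:
        return 0.0
    input_counts = Counter(input_text.split())
    answer_counts = Counter(answer_tokens)
    overlap = sum(min(count, input_counts[token]) for token, count in answer_counts.items())
    return overlap / len(answer_tokens)

def input_dependent_query_score(answer: str, input_text: str, scoring_keywords: List[str]) -> float:
    """S_q = (1 - f_BLEU(a_i, t_i)) * f_keywords(a_i, l_q), using the proxy above for BLEU."""
    if scoring_keywords:
        coverage = sum(1 for keyword in scoring_keywords if keyword in answer) / len(scoring_keywords)
    else:
        coverage = 0.0
    return (1.0 - unigram_precision(answer, input_text)) * coverage
```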
Task-prescribed Phrases. The mandatory phrases specified in the task description are essential conditions that must be fulfilled. The more mandatory phrases covered in the answers, the better the model follows complex instructions. Hence, the sub-score fkeywords(ai, lt) is applied, where lt is the list of scoring keywords extracted from the task description.
Evaluation of the Benchmark
Each sample is labeled by three annotators based on our four criteria. Specifically, we retain samples only when at least two annotators agree on the criteria Count Limit and Output Format Parseability. For criteria involving Keywords Coverage, we only keep keywords with a consensus from at least two annotators.
Statistics of the Benchmark
Tab. 1 presents the statistics6 of CELLO. Our dataset has two categories depending on whether the criteria are mainly in the task description or the input text. Different tasks also have different emphases on the criteria, and our dataset covers the four criteria effectively. Tab. 2 compares our benchmark with existing ones. Our benchmark is the first to systematically test LLMs' ability to follow complex instructions, which are generally longer and more complex than those of other benchmarks. The tasks we cover are open-ended, which is more realistic and practical. Our evaluation is also more objective and fine-grained.
Experiment
Evaluated Models. We evaluate a total of 34 models that demonstrated exceptional performance on other benchmarks (Huang et al. 2023; Dubois et al. 2023; Zhong et al. 2023).
6Chinese words are counted via https://github.com/fxsjy/jieba; English words are counted via https://www.nltk.org/.
# Complex Task Description
# Complex Input
Extraction Planning Meta. Writing(S) BS(S) Average Keywords QA Sum. Structure Writing(M) BS(M) Average Average Baize-V2-7B Llama2-FlagAlpha Baize-V2-13B Chinese-Alpaca-V1-13B Chinese-Alpaca-V1-7B Llama2-Linly Chinese-Alpaca-V1-33B BELLE CuteGPT Llama2-LinkSoul Llama2-OpenBuddy 0.203 0.205 0.214 0.289 0.264 0.382 0.379 0.400 0.482 0.521 0.585 0.266 0.095 0.334 0.183 0.123 0.170 0.200 0.157 0.529 0.326 0.638 0.300 0.129 0.342 0.209 0.215 0.205 0.283 0.363 0.460 0.431 0.344 Chinese-oriented Models (Continue Pretraining) 0.121 0.304 0.423 0.248 0.143 0.340 0.272 0.317 0.267 0.314 0.464 0.327 0.334 0.438 0.478 0.449 0.506 0.549 0.788 0.540 0.752 0.592
0.504 0.262 0.272 0.209 0.357 0.352 0.664 0.589 0.534 0.652 0.697 0.245 0.547 0.536 0.697 0.612 0.527 0.663 0.734 0.739 0.769 0.697 0.056 0.150 0.070 0.411 0.265 0.196 0.415 0.379 0.294 0.615 0.638 0.045 0.297 0.019 0.226 0.243 0.406 0.221 0.508 0.459 0.684 0.685 0.593 0.354 0.540 0.399 0.465 0.596 0.426 0.458 0.653 0.565 0.711 0.381 0.406 0.433 0.291 0.401 0.352 0.476 0.439 0.626 0.747 0.812 0.558 0.591 0.574 0.480 0.703 0.594 0.609 0.672 0.804 0.909 0.892 0.292 0.370 0.296 0.347 0.391
0.435 0.413 0.489 0.557 0.718 0.748 0.298 0.309 0.318 0.332 0.352 0.381 0.426 0.469 0.553 0.629 0.670 BatGPT-sirius MOSS InternLM ChatGLM2 ChatGLM2-32k Baichuan-chat Qwen ChatGLM 0.011 0.493 0.452 0.539 0.526 0.473 0.544 0.649 0.044 0.310 0.540 0.317 0.399 0.373 0.551 0.522 0.094 0.461 0.493 0.608 0.572 0.471 0.493 0.612 0.352 0.634 0.690 0.664 0.699 0.800 0.646 0.700 Chinese-oriented Models (From Scratch) 0.147 0.508 0.559 0.552 0.577 0.582 0.595 0.658 0.233 0.644 0.622 0.632 0.690 0.794 0.740 0.808 0.046 0.473 0.247 0.589 0.653
Table 3: The performance of models on different tasks. Detailed information of each model is provided in the Appendix. The bold, underlined, and italicized denote the first, second, and third rankings, respectively.
et al. 2023), ranging from their model size, supported context length, and instruction tuning data size, as illustrated in the Appendix. These models are categorized into three groups: Chinese-oriented Models (From Scratch, FS), Chinese-oriented Models (Continue Pretraining, CP), and English-oriented Models. The distinction between English- and Chinese-oriented Models lies in the composition of their pretraining corpus: the former possesses a small portion and the latter a substantial volume of Chinese data. Chinese-oriented Models (FS) are trained entirely from scratch using Chinese corpora. Chinese-oriented Models (CP) continue pretraining on Chinese corpora utilizing an English-oriented base model.
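The three-way grouping can be made concrete when aggregating scores. The sketch below is a minimal illustration under stated assumptions: the group membership lists only a few of the models named in this section, and the `scores` layout and `group_averages` helper are hypothetical, not the released CELLO evaluation code.

```python
# Minimal sketch of using the three model groups to aggregate per-model scores.
# Group membership lists a subset of the evaluated models; the scores layout and
# helper are illustrative assumptions, not the paper's released implementation.
from statistics import mean

MODEL_GROUPS = {
    "Chinese-oriented (FS)": ["BatGPT-sirius", "MOSS", "InternLM", "ChatGLM2", "Baichuan-chat", "Qwen", "ChatGLM"],
    "Chinese-oriented (CP)": ["Chinese-Alpaca-V1-13B", "BELLE", "CuteGPT", "Llama2-LinkSoul", "Llama2-OpenBuddy"],
    "English-oriented": ["Llama2-chat-7B", "Vicuna-V1.5-13B", "WizardLM", "LongChat-V1.5-7B", "OpenChat-V3.2"],
}

def group_averages(scores: dict[str, float]) -> dict[str, float]:
    """Average a {model_name: overall_score} mapping within each group."""
    averages = {}
    for group, models in MODEL_GROUPS.items():
        present = [scores[m] for m in models if m in scores]
        if present:  # skip groups with no evaluated models
            averages[group] = mean(present)
    return averages

# Example usage with made-up numbers:
# group_averages({"ChatGLM": 0.67, "OpenChat-V3.2": 0.72})
```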
Task-categorized Performance The performance of the models on different tasks is shown in Tab. 3.
General Comparisons. Among the models assessed, OpenChat-V3.2 was the best, followed by Vicuna-V1.5-13B and ChatGLM. These models had different parameter sizes (13B, 6B), showing that small-scale LLMs can follow complex instructions as well as larger ones. The Chinese-oriented (FS) group and the English-oriented group perform equally well and better than the Chinese-oriented (CP) group, proving that complex instruction comprehension is not language-dependent. Moreover, under the same base model, vocabulary, and supported context length (e.g. Llama2-7B), the performance of the models varies greatly (e.g. Llama2-chat-7B, Llama2-LinkSoul, and Llama2-FlagAlpha). This demonstrates a strong correlation between the ability to comprehend complex instructions and the instruction tuning phase. Overall, the current open-source small to medium-scale models exhibit a significant performance gap compared to close-source large-scale models (GPT-3.5-turbo, GPT-4).
Complex Task Description. Among the data with complex task descriptions, first, four of the top 5 models belong to the English-oriented Models, which demonstrates that the ability to understand complex task descriptions can transfer across different languages.
[Table 4 body (caption below): per-criterion scores with columns Model, Format, Input, Task, Count, Average. The numeric cells did not survive extraction; rows cover Chinese-oriented Models (Continue Pretraining) such as Baize-V2, Llama2-FlagAlpha, Chinese-Alpaca-V1, Llama2-Linly, BELLE, CuteGPT, Llama2-LinkSoul, and Llama2-OpenBuddy; Chinese-oriented Models (From Scratch) such as BatGPT-sirius, MOSS, InternLM, ChatGLM2, Baichuan-chat, Qwen, and ChatGLM; the English-oriented Llama2-chat, Vicuna, WizardLM, LongChat, and OpenChat-V3.2 variants; plus GPT-3.5-turbo and GPT-4.]
Table 4: The performance of models regarding different criteria. The bold, underlined, and italicized denote the first, second, and third rankings, respectively.
Next, within the same series of models, larger model sizes do not always lead to improvements. Furthermore, the best-performing models in each group have a supported context length of less than 4096, suggesting that the supported text context length does not significantly impact the ability to comprehend complex task descriptions.
Complex Input Text. For the data with complex input text, first, seven of the top 10 models belong to Chinese-oriented models, which implies that more Chinese training data assists the models in comprehending long and noisy Chinese texts. Next, within the same model series, larger scales generally improve performance, while longer supported context length can result in performance drops in many cases.
Figure 3: The performance of models on mainstream benchmarks.
Figure 4: The performance of LLMs grounded on the same base model (Touvron et al. 2023) regarding different tasks and criteria.
Criteria-categorized Performance As shown in Tab. 4, regarding Answer format, the English-oriented Models significantly perform better than Chinese-oriented Models. This demonstrates the English-oriented Models' ability to follow few-shot examples and generate code, as well as partially explains why their complex instruction-following ability can transfer across languages. Next, for Task-prescribed phrases, two of the top-3 models are Chinese-oriented
models, suggesting that Chinese data helps the models understand Chinese semantic restrictions. Finally, the performance differences between models for Count limit criteria are not big compared to other criteria, which shows that the models have similar comprehension of numerical concepts.
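Criteria of this kind lend themselves to lightweight rule-based checks. The sketch below is a minimal illustration in that spirit; the function names, signatures, and the JSON-object assumption are ours, not CELLO's released metric implementations.

```python
# Minimal sketch of rule-based checks for count limit, task-prescribed keywords,
# and answer format. Illustrative assumptions only, not CELLO's released metrics.
import json

def count_limit_ok(response: str, max_items: int) -> bool:
    """True if the response lists at most `max_items` non-empty lines."""
    items = [line for line in response.splitlines() if line.strip()]
    return len(items) <= max_items

def keyword_coverage(response: str, required: list[str]) -> float:
    """Fraction of task-prescribed keywords that appear verbatim in the response."""
    if not required:
        return 1.0
    return sum(1 for kw in required if kw in response) / len(required)

def json_format_ok(response: str) -> bool:
    """True if the response parses as a JSON object, as a format instruction may require."""
    try:
        return isinstance(json.loads(response), dict)
    except json.JSONDecodeError:
        return False
```

Checks like these are cheap to run per response and can be averaged per model to populate a criteria table such as Tab. 4.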
Comparisons between Benchmarks We present the performance7 of representative models on mainstream benchmarks in Fig. 3. First, on benchmarks focusing on Chinese knowledge (C-eval, CMMLU, and GAOKAO), smaller models achieve similar or even better performance compared to GPT-3.5-turbo. Also, on challenging benchmarks like complex reasoning (BBH, GSM8k) and programming ability (HumanEval), there is a lack of distinction between smaller models. Overall, our benchmark can exhibit more discriminative results.
Fine-grained Evaluation Fig. 4 shows the performance of LLMs based on the same base model for different tasks and criteria. Different models have different strengths for different criteria. For example, Llama2-chat-7B is good at understanding format but bad at comprehending Chinese input and semantic constraints. Different models also excel in specific tasks. Llama2-chat-7B handles complex task descriptions well, but not complex input text.
7 https://opencompass.org.cn/leaderboard-llm.
Conclusion In this work, we systematically investigate the complex instruction-following ability of LLMs. We establish a framework comprising eight features for complex instructions, then construct an evaluation dataset covering nine tasks, and finally propose four evaluation criteria and corresponding metrics to assess LLMs' complex instruction understanding ability. Furthermore, we conduct extensive experiments to compare the performance of representative models.
Acknowledgements This work is supported by Science and Technology Commission (No. 22511105902), National Natural Science Foundation of China (No. 62102095), Shanghai Municipal Science and Technology Major Project (No. 2021SHZDZX0103). Yanghua Xiao is also a member of Research Group of Computational and AI Communication at Institute for Global Communications and Integrated Media, Fudan University.
References
An, C.; Gong, S.; Zhong, M.; Li, M.; Zhang, J.; Kong, L.; and Qiu, X. 2023. L-Eval: Instituting Standardized Evaluation for Long Context Language Models. arXiv preprint arXiv:2307.11088.
Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33: 1877-1901.
Chen, H.; Li, H.; Chen, D.; and Narasimhan, K. 2022. Controllable Text Generation with Language Constraints. arXiv preprint arXiv:2212.10466.
Chen, M.; Tworek, J.; Jun, H.; Yuan, Q.; Pinto, H. P. d. O.; Kaplan, J.; Edwards, H.; Burda, Y.; Joseph, N.; Brockman, G.; et al. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374.
Chia, Y. K.; Hong, P.; Bing, L.; and Poria, S. 2023. INSTRUCTEVAL: Towards Holistic Evaluation of Instruction-Tuned Large Language Models. arXiv preprint arXiv:2306.04757.
Chowdhery, A.; Narang, S.; Devlin, J.; Bosma, M.; Mishra, G.; Roberts, A.; Barham, P.; Chung, H. W.; Sutton, C.; Gehrmann, S.; et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311.
Cobbe, K.; Kosaraju, V.; Bavarian, M.; Chen, M.; Jun, H.; Kaiser, L.; Plappert, M.; Tworek, J.; Hilton, J.; Nakano, R.; et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
Cui, Y.; Yang, Z.; and Yao, X. 2023. Efficient and Effective Text Encoding for Chinese LLaMA and Alpaca. arXiv preprint arXiv:2304.08177.
Ding, N.; Chen, Y.; Xu, B.; Qin, Y.; Zheng, Z.; Hu, S.; Liu, Z.; Sun, M.; and Zhou, B. 2023. Enhancing Chat Language Models by Scaling High-quality Instructional Conversations. arXiv preprint arXiv:2305.14233.
Dubois, Y.; Li, X.; Taori, R.; Zhang, T.; Gulrajani, I.; Ba, J.; Guestrin, C.; Liang, P.; and Hashimoto, T. B. 2023. AlpacaFarm: A simulation framework for methods that learn from human feedback. arXiv preprint arXiv:2305.14387.
Gu, Z.; Zhu, X.; Ye, H.; Zhang, L.; Wang, J.; Jiang, S.; Xiong, Z.; Li, Z.; He, Q.; Xu, R.; et al. 2023. Xiezhi: An Ever-Updating Benchmark for Holistic Domain Knowledge Evaluation. arXiv preprint arXiv:2306.05783.
Guo, B.; Zhang, X.; Wang, Z.; Jiang, M.; Nie, J.; Ding, Y.; Yue, J.; and Wu, Y. 2023. How close is ChatGPT to human experts? Comparison corpus, evaluation, and detection. arXiv preprint arXiv:2301.07597.
Hendrycks, D.; Burns, C.; Basart, S.; Zou, A.; Mazeika, M.; Song, D.; and Steinhardt, J. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300.
Hendrycks, D.; Burns, C.; Basart, S.; Zou, A.; Mazeika, M.; Song, D.; and Steinhardt, J. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300.
Honovich, O.; Scialom, T.; Levy, O.; and Schick, T. 2022. Unnatural instructions: Tuning language models with (almost) no human labor. arXiv preprint arXiv:2212.09689.
Huang, Y.; Bai, Y.; Zhu, Z.; Zhang, J.; Zhang, J.; Su, T.; Liu, J.; Lv, C.; Zhang, Y.; Lei, J.; et al. 2023. C-Eval: A multi-level multi-discipline Chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322.
Ji, Y.; Deng, Y.; Gong, Y.; Peng, Y.; Niu, Q.; Ma, B.; and Li, X. 2023. BELLE: Be Everyone's Large Language model Engine. https://github.com/LianjiaTech/BELLE.
Li*, D.; Shao*, R.; Xie, A.; Sheng, Y.; Zheng, L.; Gonzalez, J. E.; Stoica, I.; Ma, X.; and Zhang, H. 2023. How Long Can Open-Source LLMs Truly Promise on Context Length?
Li, G.; Hammoud, H. A. A. K.; Itani, H.; Khizbullin, D.; and Ghanem, B. 2023a. CAMEL: Communicative agents for "mind" exploration of large scale language model society. arXiv preprint arXiv:2303.17760.
Li, J.; Cheng, X.; Zhao, W. X.; Nie, J.-Y.; and Wen, J.-R. 2023b. HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models. arXiv e-prints, arXiv–2305.
Li, Z.; Zhang, S.; Zhao, H.; Yang, Y.; and Yang, D. 2023c. BatGPT: A Bidirectional Autoregessive Talker from Generative Pre-trained Transformer. arXiv preprint arXiv:2307.00360.
Liang, P.; Bommasani, R.; Lee, T.; Tsipras, D.; Soylu, D.; Yasunaga, M.; Zhang, Y.; Narayanan, D.; Wu, Y.; Kumar, A.; et al. 2022. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110.
Liu, N. F.; Lin, K.; Hewitt, J.; Paranjape, A.; Bevilacqua, M.; Petroni, F.; and Liang, P. 2023a. Lost in the middle: How language models use long contexts. arXiv preprint arXiv:2307.03172.
Liu, X.; Yu, H.; Zhang, H.; Xu, Y.; Lei, X.; Lai, H.; Gu, Y.; Ding, H.; Men, K.; Yang, K.; et al. 2023b. AgentBench: Evaluating LLMs as Agents. arXiv preprint arXiv:2308.03688.
Luo, Z.; Xu,
Mukherjee, S.; Mitra, A.; Jawahar, G.; Agarwal, S.; Palangi, H.; and Awadallah, A. 2023. Orca: Progressive learning from complex explanation traces of GPT-4. arXiv preprint arXiv:2306.02707.
Qin, Y.; Liang, S.; Ye, Y.; Zhu, K.; Yan, L.; Lu, Y.; Lin, Y.; Cong, X.; Tang, X.; Qian, B.; et al. 2023. ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs. arXiv preprint arXiv:2307.16789.
Richards, T. B. 2023. Auto-GPT: An Autonomous GPT-4 Experiment.
Srivastava, A.; Rastogi, A.; Rao, A.; Shoeb, A. A. M.; Abid, A.; Fisch, A.; Brown, A. R.; Santoro, A.; Gupta, A.; Garriga-Alonso, A.; et al. 2023. Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research.
Sun, T.; Zhang, X.; He, Z.; Li, P.; Cheng, Q.; Yan, H.; Liu, X.; Shao, Y.; Tang, Q.; Zhao, X.; Chen, K.; Zheng, Y.; Zhou, Z.; Li, R.; Zhan, J.; Zhou, Y.; Li, L.; Yang, X.; Wu, L.; Yin, Z.; Huang, X.; and Qiu, X. 2023a. MOSS: Training Conversational Language Models from Synthetic Data.
Sun, W.; Yan, L.; Ma, X.; Ren, P.; Yin, D.; and Ren, Z. 2023b. Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agent. arXiv preprint arXiv:2304.09542.
Taori, R.; Gulrajani, I.; Zhang, T.; Dubois, Y.; Li, X.; Guestrin, C.; Liang, P.; and Hashimoto, T. B. 2023. Stanford Alpaca: An instruction-following LLaMA model.
Team, I. 2023. InternLM: A Multilingual Language Model with Progressively Enhanced Capabilities. https://github.com/InternLM/InternLM.
Touvron, H.; Martin, L.; Stone, K.; Albert, P.; Almahairi, A.; Babaei, Y.; Bashlykov, N.; Batra, S.; Bhargava, P.; Bhosale, S.; et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.
Wang, G.; Cheng, S.; Yu, Q.; and Liu, C. 2023a. OpenChat: Advancing Open-source Language Models with Imperfect Data.
Wang, P.; Li, L.; Chen, L.; Zhu, D.; Lin, B.; Cao, Y.; Liu, Q.; Liu, T.; and Sui, Z. 2023b. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926.
Xu, B.; Xu, Y.; Liang, J.; Xie, C.; Liang, B.; Cui, W.; and Xiao, Y. 2017. CN-DBpedia: A never-ending Chinese knowledge extraction system. In International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems, 428–438. Springer.
Xu, C.; Guo, D.; Duan, N.; and McAuley, J. 2023a. Baize: An Open-Source Chat Model with Parameter-Efficient Tuning on Self-Chat Data. arXiv preprint arXiv:2304.01196.
Xu, C.; Sun, Q.; Zheng, K.; Geng, X.; Zhao, P.; Feng, J.; Tao, C.; and Jiang, D. 2023b. WizardLM: Empowering Large Language Models to Follow Complex Instructions. arXiv preprint arXiv:2304.12244.
Yao, S.; Chen, H.; Hanjie, A. W.; Yang, R.; and Narasimhan, K. 2023a. COLLIE: Systematic Construction of Constrained Text Generation Tasks. arXiv preprint arXiv:2307.08689.
Yao, S.; Zhao, J.; Yu, D.; Du, N.; Shafran, I.; Narasimhan, K.; and Cao, Y. 2023b. ReAct: Synergizing Reasoning and Acting in Language Models. arXiv preprint arXiv:2210.03629.
Yu, J.; Wang, X.; Tu, S.; Cao, S.; Zhang-Li, D.; Lv, X.; Peng, H.; Yao, Z.; Zhang, X.; Li, H.; et al. 2023. KoLA: Carefully Benchmarking World Knowledge of Large Language Models. arXiv preprint arXiv:2306.09296.
Zeng, A.; Liu, X.; Du, Z.; Wang, Z.; Lai, H.; Ding, M.; Yang, Z.; Xu, Y.; Zheng, W.; Xia, X.; Tam, W. L.; Ma, Z.; Xue, Y.; Zhai, J.; Chen, W.; Liu, Z.; Zhang, P.; Dong, Y.; and Tang, J. 2023. GLM-130B: An Open Bilingual Pre-trained Model. In The Eleventh International Conference on Learning Representations (ICLR).
Zha, L.; Zhou, J.; Li, L.; Wang, R.; Huang, Q.; Yang, S.; Yuan, J.; Su, C.; Li, X.; Su, A.; et al. 2023. TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT. arXiv preprint arXiv:2307.08674.
Zheng, L.; Chiang, W.-L.; Sheng, Y.; Zhuang, S.; Wu, Z.; Zhuang, Y.; Lin, Z.; Li, Z.; Li, D.; Xing, E. P.; Zhang, H.; Gonzalez, J. E.; and Stoica, I. 2023. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. arXiv preprint arXiv:2306.05685.
Zhong, W.; Cui, R.; Guo, Y.; Liang, Y.; Lu, S.; Wang, Y.; Saied, A.; Chen, W.; and Duan, N. 2023. AGIEval: A human-centric benchmark for evaluating foundation models. arXiv preprint arXiv:2304.06364.
Zhou, C.; Liu, P.; Xu, P.; Iyer, S.; Sun, J.; Mao, Y.; Ma, X.; Efrat, A.; Yu, P.; Yu, L.; et al. 2023a. LIMA: Less is more for alignment. arXiv preprint arXiv:2305.11206.
Zhou, W.; Jiang, Y. E.; Wilcox, E.; Cotterell, R.; and Sachan, M. 2023b. Controlled text generation with natural language instructions. arXiv preprint arXiv:2304.14293.
# Data Evolution
As introduced in the Data Evolution part, we diversify the collected complex instructions through In-breadth Evolution and complicate the simple instructions via In-depth Evolution. In-breadth Evolution involves (1) Task Description Relocation, (2) Task Description Paraphrasing, and (3) Task Emulation, while In-depth Evolution involves (4) Constraints Addition and (5) Multi-round Interaction. Overall, we design several prompts to enhance the complexity and diversity of the data for the various tasks.
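For concreteness, four of these strategies can be viewed as operators over an instruction record, as in the minimal sketch below (Task Emulation is illustrated separately under Planning). This is illustrative only: the `Instruction` fields, helper names, and string templates are assumptions rather than the exact prompts used.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Instruction:
    task_description: str                        # the task the model must perform
    input_text: str = ""                         # accompanying (possibly long or noisy) input
    constraints: List[str] = field(default_factory=list)
    turns: List[str] = field(default_factory=list)  # multi-turn history, if any

# In-breadth operators: diversify instructions that are already complex.
def relocate_task_description(ins: Instruction) -> str:
    """Task Description Relocation: render the prompt with the description after the input."""
    return f"{ins.input_text}\n\n{ins.task_description}"

def paraphrase_task_description(ins: Instruction, llm: Callable[[str], str]) -> Instruction:
    """Task Description Paraphrasing: ask an LLM (any text-in/text-out callable) to reword it."""
    reworded = llm(f"Paraphrase this task description: {ins.task_description}")
    return Instruction(reworded, ins.input_text, list(ins.constraints), list(ins.turns))

# In-depth operators: complicate simple instructions.
def add_constraint(ins: Instruction, constraint: str) -> Instruction:
    """Constraints Addition: attach an explicit, automatically checkable requirement."""
    return Instruction(ins.task_description, ins.input_text,
                       ins.constraints + [constraint], list(ins.turns))

def to_multi_round(ins: Instruction) -> Instruction:
    """Multi-round Interaction: spread the task and its constraints over dialogue turns."""
    turns = [ins.task_description] + [f"Also, {c}" for c in ins.constraints]
    return Instruction(ins.task_description, ins.input_text, list(ins.constraints), turns)

if __name__ == "__main__":
    seed = Instruction("Summarize the following meeting notes.", "<meeting notes>")
    evolved = to_multi_round(add_constraint(seed, "keep the summary under 50 words"))
    print(evolved.turns)
```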
# In-breadth Evolution
We mainly design three prompts to diversify the data in the Planning, QA, and Summarization tasks, respectively.
Planning We apply the Task Emulation strategy when diversifying the data in the Planning task. The prompts are shown in Tab. 6 and mainly consist of two phases. During phase one, GPT-3.5-turbo is required to generate a specific Task Description and the corresponding Tools Descriptions based on the theme provided by the user (e.g., maths in the given example). The Tools Descriptions encompass each tool's name, a brief introduction, and the required input parameters. During phase two, GPT-3.5-turbo is required to provide the planning process given the Task Description and the corresponding Tools Descriptions generated in phase one. The planning process consists of four main parts: the Task Description, Tools Descriptions, Output Format, and Histories. An example of the Instruction generated from this two-phase prompt is shown in Tab. 7.
It is worth noting that GPT-3.5-turbo is far from a perfect automated agent (Liu et al. 2023b). To ensure the quality of the generated data, as depicted in Tab. 7, we manually enter the correct return values of the tools so that both the planning process and the results in the histories are accurate.
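A minimal sketch of this two-phase procedure is given below, assuming a text-in/text-out `chat` callable that wraps an LLM API. The phase prompts here are paraphrased placeholders, not the actual prompts from Tab. 6, and the manual correction of tool return values is left out.

```python
import json
from typing import Callable, Dict

PHASE1 = ("Given the theme '{theme}', write a Task Description and Tools Descriptions "
          "(each tool's name, a brief introduction, and its input parameters). "
          "Return JSON with keys 'task_description' and 'tools'.")

PHASE2 = ("Task Description:\n{task}\n\nTools Descriptions:\n{tools}\n\n"
          "Output Format: one Thought / Action / Action Input step per turn.\n\n"
          "Histories:\n{histories}\n\nContinue the planning process.")

def build_planning_instance(theme: str, chat: Callable[[str], str]) -> Dict:
    """Phase one invents a task and its tool set for the theme; phase two plans with them."""
    spec = json.loads(chat(PHASE1.format(theme=theme)))
    plan = chat(PHASE2.format(task=spec["task_description"],
                              tools=json.dumps(spec["tools"], ensure_ascii=False),
                              histories="[]"))  # tool return values are checked and entered by hand
    return {"theme": theme, "spec": spec, "plan": plan}

# Example call: build_planning_instance("maths", chat=my_llm_call)
```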
Summarization The prompt we use to diversify the data in the Summarization task is shown in Tab. 8. We present various underlying principles for designing task descriptions for the Summarization task in our prompt. These principles mainly employ the Task Description Relocation and Task Description Paraphrasing strategies. We finally generate task descriptions for a total of 100 provided input texts.
QA The prompt utilized to diversify the data in the QA task is shown in Tab. 9. In order to enhance the diversity of task descriptions, we require the model to generate a wider range of questions when provided with a given input text. Here, our prompt primarily employs strategies such as Task Description Relocation and Task Description Paraphrasing.
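As an illustration of how such diversification prompts can be parameterized (the actual prompts are in Tab. 8 and Tab. 9), the template below is a hypothetical stand-in; its wording and the function name are assumptions.

```python
DIVERSIFY_TEMPLATE = """You are given the input text of a {task} task.

Input text:
{input_text}

Write {n} different task descriptions for this input. Vary the wording of each
description (paraphrasing) and vary whether it is placed before or after the
input text (relocation). Return one task description per line."""

def diversification_prompt(task: str, input_text: str, n: int = 5) -> str:
    """Build a prompt that asks an LLM for diverse task descriptions over one input."""
    return DIVERSIFY_TEMPLATE.format(task=task, input_text=input_text, n=n)

# Example: diversification_prompt("Summarization", "<news article>")
```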
# In-depth Evolution
We design two prompts to complicate the simple instructions collected for the Well-guided Writing and Brainstorming tasks. Both prompts utilize the Constraints Addition and Multi-round Interaction strategies.
Well-guided Writing The prompt to increase the complexity of the basic instructions in the Well-guided Writing task can be seen in Tab. 10. To simulate human-like multi-round modifications during the writing process, we define three atomic operations: (1) Count Limit establishes clear requirements for word or sentence count. (2) Specification involves specifying crucial details such as keywords, hashtags, and URLs to ensure precise alignment with specific needs. (3) Revision involves proposing dynamic and objective amendments to enhance the writing style. By employing these operations, the requirements become more specific, leading to more effective guidance for the generated results. We ensure that any modifications introduced are objective and can be evaluated automatically. These atomic operations can be reused during the composition process.
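Because every added requirement is meant to be objective, each atomic operation can be paired with an automatic check. The pairing below is a sketch under that assumption; the operation names follow the text, while the helper names and check logic are illustrative rather than the checks actually used.

```python
import re
from typing import Callable, Tuple

Check = Callable[[str], bool]

def count_limit(instruction: str, max_words: int) -> Tuple[str, Check]:
    """Count Limit: add an explicit word-count requirement together with its checker."""
    new_ins = f"{instruction} Keep the answer within {max_words} words."
    return new_ins, lambda ans: len(ans.split()) <= max_words

def specification(instruction: str, keyword: str) -> Tuple[str, Check]:
    """Specification: require a concrete detail (keyword, hashtag, URL) to appear."""
    new_ins = f"{instruction} The answer must mention '{keyword}'."
    return new_ins, lambda ans: keyword.lower() in ans.lower()

def revision(instruction: str, style_rule: str, pattern: str) -> Tuple[str, Check]:
    """Revision: an objective amendment to the writing style, verified with a regex."""
    new_ins = f"{instruction} Revise the style as follows: {style_rule}."
    return new_ins, lambda ans: re.search(pattern, ans) is not None

# Reuse the operations to build one multi-round writing instruction plus its checks.
ins, c1 = count_limit("Write a product announcement for a new phone.", 100)
ins, c2 = specification(ins, "#launch")
checks = [c1, c2]
passes = lambda answer: all(chk(answer) for chk in checks)
```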
Brainstorming The prompt that we design for enhancing the complexity of simple instructions in the Brainstorming task is shown in Tab. 11. We define two atomic operations to mimic the human thinking process: (1) Modification includes altering the output format, such as JSON, XML, CSV, Markdown table, Python list, numeric sequence, etc. Additionally, word, sentence, or sample count limits can be imposed, and key information like keywords, hashtags, URLs, and language can also be incorporated into the instruction. (2) Specification further inquires about specific details or asks for more information. GPT-3.5-turbo can simulate human thought processes by combining the two atomic operations, and the history of multiple calls to these operations can be aggregated into multi-turn dialogues. The final evolved instructions shown in the prompt can also serve as complex single-turn instructions, challenging the model to accomplish multiple tasks within a single round of instruction.
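One way to picture how repeated applications of the two operations yield either a multi-turn dialogue or a single complex instruction is sketched below; the concrete wording of the operations and the random mixing are assumptions for illustration, not the generation procedure itself.

```python
import random
from typing import List, Tuple

FORMATS = ["as a JSON object", "as a Markdown table", "as a Python list",
           "as a numeric sequence", "in CSV format"]

def modification(rng: random.Random) -> str:
    """Modification: change the output format, impose a count limit, or pin key information."""
    kind = rng.choice(["format", "count", "keyword"])
    if kind == "format":
        return f"Reformat your previous answer {rng.choice(FORMATS)}."
    if kind == "count":
        return f"Give exactly {rng.randint(3, 8)} items."
    return "Include the hashtag #ideas in every item."

def specification() -> str:
    """Specification: ask for more detail about the previous answer."""
    return "Expand each item with one concrete example."

def evolve(seed_instruction: str, n_ops: int = 3, seed: int = 0) -> Tuple[List[str], str]:
    """Apply a random mix of the two operations and return both evolved forms."""
    rng = random.Random(seed)
    ops = [modification(rng) if rng.random() < 0.5 else specification()
           for _ in range(n_ops)]
    multi_turn = [seed_instruction] + ops                  # one operation per dialogue round
    single_turn = seed_instruction + " " + " ".join(ops)   # all operations folded into one turn
    return multi_turn, single_turn

turns, folded = evolve("Brainstorm names for a coffee shop.")
```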
Scoring Keywords Annotation We propose four criteria for complex instruction understanding, namely Count Limit, Answer Format, Task-prescribed phrases, and Input-dependent query, as introduced in our evaluation system. Among these criteria, the latter three involve the annotation of scoring keywords. For Answer Format, objective keywords such as "{" and "}" are directly annotated by humans. For Task-prescribed phrases and Input-dependent query, we employ a collaborative approach with GPT4 and humans. For Task-prescribed phrases, we require GPT4 to extract key phrases related to the task objective directly from the task description, such as keywords and predefined functions. For Input-dependent query, we ask GPT4 to answer the instruction first and then summarize the keywords of its answer that are relevant to the input text. Finally, the annotations by three evaluators are checked and supplemented, and only keywords covered by two or more evaluators are included in the final label set.
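A minimal version of this agreement filter, together with a simple keyword-coverage score that such labels could feed into, might look as follows; the normalization and the coverage formula are assumptions, not the exact metric used in the benchmark.

```python
from collections import Counter
from typing import List, Set

def aggregate_keywords(annotations: List[Set[str]], min_votes: int = 2) -> Set[str]:
    """Keep keywords proposed by at least `min_votes` of the evaluators."""
    votes = Counter(kw.strip().lower() for ann in annotations for kw in ann)
    return {kw for kw, n in votes.items() if n >= min_votes}

def keyword_coverage(answer: str, keywords: Set[str]) -> float:
    """Fraction of the scoring keywords that appear in the model's answer."""
    if not keywords:
        return 1.0
    text = answer.lower()
    return sum(kw in text for kw in keywords) / len(keywords)

# Three evaluators' keyword sets reduce to the agreed label set, then score an answer.
labels = aggregate_keywords([{"budget", "Q3"}, {"budget", "q3", "revenue"}, {"Budget"}])
score = keyword_coverage("The Q3 budget grew by 4%.", labels)
```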
Models We present the details of our evaluated models in Table 5. Overall, we evaluate 19 Chinese-oriented models and 15 English-oriented models. The difference between Chinese-oriented and English-oriented models lies in the proportion of Chinese data in their pretraining corpus.
Table 5 lists the evaluated models with the columns Model, Base Model, Size, Vocabulary Expansion, Supported Context Length, and # IFT samples. The Chinese-oriented models trained from scratch cover InternLM-chat-7B (Team 2023), BatGPT-sirius (Li et al. 2023c), Qwen-7B, Baichuan-chat, and ChatGLM, ChatGLM2, and ChatGLM2-32k (Zeng et al. 2023), at 6B-16B scales with supported context lengths from 2k to 32k tokens. The Chinese-oriented models obtained by continued pretraining are built on Llama1, Llama2, and BLOOMZ-7B1-mt bases at 7B-33B scales, some with expanded vocabularies.
2309.09150 | 68 | Llama2 7B, 13B 7B 7B, 13B, 33B 13B 7B 7B 7B 13B 2k 1k 8k 2k 4k 4k 4k 4k 5w 200w 200w, 300w, 430w 110w 1000w — 120w 100w English-oriented Models Llama2-chat (Touvron et al. 2023) Vicuna-V1.3 (Zheng et al. 2023) Vicuna-V1.5 (Zheng et al. 2023) WizardLM (Xu et al. 2023b) LongChat-V1 (Li* et al. 2023) LongChat-V1.5 (Li* et al. 2023) OpenChat-V3.2 (Wang et al. 2023a) GPT-3.5-turbo GPT-4 Llama2 Llama1 Llama2 Llama1 Llama1 Llama2 Llama2 - - 7B, 13B, 70B 7B, 13B, 33B 7B, 13B 13B 7B, 13B 7B 13B - - N/A N/A N/A N/A N/A N/A N/A N/A N/A 4k 2k 16k 2k 16k 32k
2309.09150 | 70 | Table 5: Models evaluated in this paper. The symbols "-" and "—" denote that details are undisclosed. Vocabulary Expansion indicates whether Chinese-oriented Models (Continue Pretraining) have expanded their vocabulary to include Chinese characters. # IFT samples denotes the number of samples used in the instruction tuning phase. The RLHF column indicates whether the model adopts reinforcement learning with human feedback.
Among them, Chinese-oriented models are further categorized based on whether they are trained from scratch (From scratch, FS) or continue pretraining from English-oriented models (Continue Pretraining, CP). We provide details on their base model, model size, supported context length, the number of samples used in the instruction tuning phase, whether they adopt reinforcement learning with human feedback, and whether the Chinese-oriented model (CP) has expanded the Chinese characters in its vocabulary.
2309.09150 | 71 | [1] https://huggingface.co/Qwen/Qwen-7B [2] https://huggingface.co/baichuan-inc/Baichuan-13B-Chat [3] https://huggingface.co/Abbey4799/kw-cutegpt-13b-ift-lora [4] https://huggingface.co/LinkSoul/Chinese-Llama-2-7b [5] https://huggingface.co/FlagAlpha/Llama2-Chinese-7b-Chat [6] https://huggingface.co/Linly-AI/Chinese-LLaMA-2-7B-hf [7] https://huggingface.co/OpenBuddy/openbuddy-llama2-13bv8.1-fp16
# I: Task & Tools Descriptions Generation
/* Task prompt */ Suppose you're a good planner for designing complex planning tasks in maths and provide some implicitly useful tools for solving the problem. Your task is to design tasks that need multi-step operations and thoughts and design tools that can help users to solve the problem. /* Output Format */ You should return the answer in the format as described { "task": "<a brief task description>",
2309.09150 | 72 | "tools": [ { "name": "<tool name>", "description": "<tool description>", "input": { "<name>": "<value>", ... }}, ... ] }
/* Example */ For example: { "Task": "You are an AI that helps users book flights. Ask the user for their travel plans, then show them flights,
and book the flights they select.",
"Tools": [ { "name": "findFlights", "description": "searches for available flights",
"input": { "Origin": "<airport code>", "Destination": "<airport code>", "DepartureDate": "<date>",
"ReturnDate": "<date>", "Passengers": "<count>" } }, .. ] }
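As a quick illustration of how a Phase 1 reply of this shape can be consumed downstream, the sketch below parses it with json.loads and checks for the task description and tool entries. The helper and the raw string are assumptions made for illustration, not code released with the benchmark.

```python
import json

# Minimal sketch (assumed helper, not the paper's code): check that a Phase 1
# response parses and exposes a task description plus a list of tools.
raw = '''
{
  "Task": "You are an AI that helps users book flights.",
  "Tools": [
    {"name": "findFlights", "description": "searches for available flights",
     "input": {"Origin": "<airport code>", "Destination": "<airport code>"}}
  ]
}
'''

def parse_phase1(text: str) -> dict:
    data = json.loads(text)                    # must be valid JSON
    assert isinstance(data.get("Task"), str)   # a brief task description
    for tool in data.get("Tools", []):         # every tool needs a name and an input schema
        assert "name" in tool and "input" in tool
    return data

spec = parse_phase1(raw)
print([tool["name"] for tool in spec["Tools"]])   # -> ['findFlights']
```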
# II: Planning Process Generation
2309.09150 | 73 | # II: Planning Process Generation
/* Task Description */ [Task Description from Phase 1]. /* Tools Descriptions */ [Tools Descriptions from Phase 1]. /* Output Format */ You should only respond in JSON format as described below Response Format: { { "thoughts": { "thought": "<your current thought>", "reasoning": "<self reflect on why you made this decision>", "plan": "short bulleted list that conveys long-term plan" }, "command": { "name": "command name", "input": { "<name>": "<value>" } },
} Ensure the response can be parsed by Python json.loads /* Histories */ And then the system will execute the command and give you the result and log the execution history below. Please mind the history and the given result.
System: This reminds you of these events from your past: [History] Human: Stay focused on the history and determine which next command to use, and respond using the format specified above:
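Since the prompt explicitly requires that the reply "can be parsed by Python json.loads", a minimal validation step looks like the sketch below. The reply string is a fabricated stand-in and the checks are assumptions about what a consumer would verify, not part of the paper's pipeline.

```python
import json

# Minimal sketch: validate one Phase 2 step the way the prompt demands,
# i.e. the reply must survive json.loads and carry a well-formed command.
reply = '''
{
  "thoughts": {
    "thought": "Ask for the three side lengths first.",
    "reasoning": "The area tools need the sides as input.",
    "plan": "collect sides, compute semi-perimeter, compute area"
  },
  "command": {"name": "inputTriangleSides", "input": {}}
}
'''

step = json.loads(reply)                  # the prompt requires this call to succeed
command = step["command"]
assert isinstance(command["name"], str) and isinstance(command["input"], dict)
print(command["name"], command["input"])  # -> inputTriangleSides {}
```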
2309.09150 | 74 | System: This reminds you of these events from your past: [History] Human: Stay focused on the history and determine which next command to use, and respond using the format specified above:
Table 6: The prompts for diversifying the data in the Planning task during the Data Evolution process. Overall, the data evolution for the Planning task consists of two phases: Tools & Task Description Generation and Planning Process Generation. The information that requires manual input is highlighted. An example of the Instruction generated from this two-phase prompt is shown in Tab. 7.
/* Task Description */ Design a task to find the area of a triangle and provide tools to assist with the calculations. /* Tools Descriptions */ Tools: [
2309.09150 | 75 | "input": { "area": "<calculated area>"}}, { "name": "calculateArea", "sideC": "<length of side C>" }}, ] /* Output Format */ You should only respond in JSON format as described below Response Format: { { "thoughts": { "thought": "<your current thought>", "reasoning": "<self reflect on why you made this decision>", "plan": "short bulleted list that conveys long-term plan" }, "command": { "name": "command name", "input": { "<name>": "<value>" } }, } Ensure the response can be parsed by Python json.loads /* Histories */ System: This reminds you of these events from your past: [ { "Reply": { "thoughts": { ], "command": { "name": "inputTriangleSides", "input": {}} }, }, "Result": "I have a triangle with side lengths 5, 12,
2309.09150 | 77 | {"name": "calculateSemiPerimeter", "description": "calculates the semi-perimeter of the triangle",
"input": { "sideA": "<length of side A>", "sideB": "<length of side B>", "sideC": "<length of side C>"}},
{ "name": "displayArea", "description": "displays the calculated area of the triangle to the user",
"description": "calculates the area of the triangle using Heron's formula", "input": { "semiPerimeter": "<semi-perimeter of the triangle>", "sideA": "<length of side A>", "sideB": "<length of side B>",
{ "name": "inputTriangleSides", "description": "asks the user for the lengths of the three sides of the triangle", "input": {}}
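To make the example concrete, the sketch below backs these declared tools with small Python functions and dispatches a generated command to them. The function bodies and the dispatch table are assumptions for illustration; the paper does not prescribe an executor.

```python
import math

# Minimal sketch (assumed names, not the authors' code): back the declared tools
# with callables so a generated "command" object can actually be executed.
def input_triangle_sides():
    return {"sideA": 5, "sideB": 12, "sideC": 13}        # stand-in for asking the user

def calculate_semi_perimeter(sideA, sideB, sideC):
    return (sideA + sideB + sideC) / 2

def calculate_area(semiPerimeter, sideA, sideB, sideC):  # Heron's formula
    s = semiPerimeter
    return math.sqrt(s * (s - sideA) * (s - sideB) * (s - sideC))

TOOLS = {
    "inputTriangleSides": input_triangle_sides,
    "calculateSemiPerimeter": calculate_semi_perimeter,
    "calculateArea": calculate_area,
    "displayArea": lambda area: print(f"Area: {area}"),
}

def execute(command):
    # command is the {"name": ..., "input": {...}} object from the model's reply
    return TOOLS[command["name"]](**command["input"])

print(execute({"name": "calculateSemiPerimeter",
               "input": {"sideA": 5, "sideB": 12, "sideC": 13}}))   # -> 15.0
```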
2309.09150 | 78 | "thought": "I can start by asking for the lengths of the three sides of the triangle using the 'inputTriangleSides' command.", "reasoning": "By asking for the lengths of the sides, I can obtain the necessary input to calculate the area of the triangle.", "plan": [
"Ask the user for the lengths of the three sides of the triangle using the 'inputTriangleSides' command."
"thought": "I can calculate the semi-perimeter of the triangle using the 'calculateSemiPerimeter' command.", "reasoning": "By calculating the semi-perimeter, I can then use it to calculate the area of the triangle using Heron's formula.", "plan": [
"Calculate the semi-perimeter of the triangle using the 'calculateSemiPerimeter' command."
], "command": { "name": "calculateSemiPerimeter", "input": { "sideA": 5, "sideB": 12, "sideC": 13}}
},
},
"Result": "15"
}
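A quick arithmetic check of the history above, assuming nothing beyond Heron's formula: with sides 5, 12 and 13 the semi-perimeter is 15, matching the logged result, and the area works out to 30.

```python
import math

# Sanity check of the numbers in the planning history (not from the paper).
s = (5 + 12 + 13) / 2                                 # -> 15.0, matching "Result": "15"
area = math.sqrt(s * (s - 5) * (s - 12) * (s - 13))   # Heron's formula
print(s, area)                                        # -> 15.0 30.0 (5-12-13 is right-angled: 5*12/2 = 30)
```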
2309.09150 | 79 | },
},
"Result": "15"
}
] Human: Determine which next command to use, and respond using the format specified above:
Table 7: The newly generated Instruction for the Planning task during data evolution, derived from the two-phase prompts in Tab. 6. The information that requires manual input is highlighted.
You are a task generator, and your role is to create a task description to describe the task of summarizing customer service conversations. You can generate the following task descriptions: 1. Given the conversation records between the customer service agent (A) and the user (Q), please summarize the content of the dialogue
and list the main points.
2. Summarize the key information in the conversation records between customer service agent (A) and the user (Q). 3. For the provided conversation records between the customer service agent (A) and the user (Q), summarize the dialogue content and
3. For the provided conversation records between the customer service agent (A) and the user (Q), summarize the dialogue content and list the main points. Describe the issues and solutions between the customer service agent and the user, including the user's questions, the agent's answers, and the solutions. At the same time, summarize the key information from the conversation records.
2309.09150 | 80 | list the main points. Describe the issues and solutions between the customer service agent and the user, including the user's questions, the agent's answers, and the solutions. At the same time, summarize the key information from the conversation records. 4. Please analyze and summarize the provided conversation records between the customer service agent (A) and the user (Q),
describe the issues raised by the user, and the agent's responses and solutions, and identify the key information in the dialogue.
5. Based on the conversation records between the customer service agent (A) and the user (Q), organize the main content of the dialogue and summarize the key information and solutions.
Table 8: The prompts for diversifying the data in the Summarization task during the Data Evolution process.
You are a question-generation agent that can pose multiple questions in line with a given text description, and these questions should also have a certain level of difficulty. Based on the provided text, pose questions that align with its description. The answers to the questions should be found within the text, and they shouldn't be explicitly stated; instead, they should require inference to deduce.
Table 9: The prompts for diversifying the data in the QA task during the Data Evolution process.
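The sketch below shows one way such a diversification prompt could be sent to a chat model together with a passage. The call_llm function is a hypothetical placeholder for whichever client is actually used; it is not an API defined by the paper.

```python
# Minimal sketch of applying the QA diversification prompt to a passage.
QA_EVOLUTION_PROMPT = (
    "You are a question-generation agent that can pose multiple questions in line with "
    "a given text description, and these questions should also have a certain level of "
    "difficulty. The answers should be found within the text, and they shouldn't be "
    "explicitly stated; instead, they should require inference to deduce."
)

def build_qa_request(passage: str) -> list[dict]:
    return [
        {"role": "system", "content": QA_EVOLUTION_PROMPT},
        {"role": "user", "content": passage},
    ]

def call_llm(messages: list[dict]) -> str:   # hypothetical client, replace as needed
    raise NotImplementedError

# questions = call_llm(build_qa_request(open("passage.txt").read()))
```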
2309.09150 | 81 | Table 9: The prompts for diversifying the data in the QA task during the Data Evolution process.
/* Task Prompt */ As a skilled writer, your objective is to effectively achieve a simple writing goal by implementing the following strategies: 1. Precisely Define Requirements: Continuously elevate the accuracy and specificity of your requirements to effectively guide
the generated results.
2. Objective Revisions: When introducing modifications, ensure that they are objective and amenable to automated evaluation. Avoid subjective and vague instructions, to maintain a consistent and coherent tone.
2309.09150 | 82 | /* Defined Atomic Operations */ Additionally, you have the flexibility to combine various operations to fine-tune the output: 1. "Count Limit": Establish clear word or sentence count requirements, allowing you to strike the right balance between conciseness and comprehensiveness. 2. "Specification": Specify crucial details like keywords, hashtags, and URLs to align the writing precisely with your specific needs. 3. "Revision": Propose dynamic and objective amendments to enhance the writing style. By following these guidelines, you can harness the full potential of AI-generated content and accomplish your writing objectives with precision and excellence. /* Output Format */ To fulfill this task, you are expected to provide your responses in the following JSON format: { "Operations": [ { "operation": <"Count limit", "Specification" or "Revision">, "thoughts": <Your thinking process>, "takeways": <Briefly summarize your thought process into a short instruction> } ] } /* Histories */ Input: Create a summary for a given article. [An article] Output: { "Operations": [ {
2309.09150 | 83 | process into a short instruction> } ] } /* Histories */ Input: Create a summary for a given article. [An article] Output: { "Operations": [ { "operation": "Count limit", "thoughts": "I'd like the summary to be neither too concise nor excessively lengthy, so I'd prefer to limit it to three sentences.", "takeways": "Limit the length to three sentences." }, { "operation": "Revision", "thoughts": "The response might be too short and plain.", "takeways": "The response could benefit from a touch of eloquence." }, { "operation": "Specification", "thoughts": "I should define a set of keywords that can better guide the summary.", "takeways": "Requesting retention of keywords: wildflowers, summer." } ]
2309.09150 | 84 | /* Input */ Input: Craft an Instagram post caption for a photo of my dog and me playing at the beach. }
Table 10: The prompt for enhancing the complexity of the simple instruction in the Well-guided Writing task during the Data Evolution process. Three atomic operations have been specifically defined to facilitate GPT-3.5-turbo in its ability to simulate human-like multi-round modifications during the writing process. These atomic operations can be reused.
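Because the atomic operations are meant to yield objective, automatically checkable constraints, a consumer can verify a candidate response directly. The sketch below is an assumed checker for a count limit and a keyword specification, not the paper's evaluation code; the caption string is invented for the example.

```python
import re

# Minimal sketch: check the kinds of objective constraints that the atomic
# operations produce (count limits, keyword specifications) on a candidate output.
def within_sentence_limit(text: str, limit: int) -> bool:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return len(sentences) <= limit

def keeps_keywords(text: str, keywords: list[str]) -> bool:
    return all(k.lower() in text.lower() for k in keywords)

caption = "Golden hour zoomies with my best friend. #beachdog #dogsofinstagram"
print(within_sentence_limit(caption, 3))        # count-limit style check
print(keeps_keywords(caption, ["#beachdog"]))   # specification style check
```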
/* Task Prompt */ As a thinker, when presented with a simple thinking problem, your objective is to simulate human thinking, following these steps: 1. Refine the requirements of the thinking questions to render the results more specific, intuitive, easily consultable and comprehensible. 2. Engage in multiple rounds of dialogue to continually probe and gain insights into the issue. /* Defined Atomic Operations */ You can combine the following operations: 1. "Modification": Add, delete, modify the restrictions of the Evolved Instruction, including its output format (JSON, XML, CSV,
Markdown table, Python list, Numeric sequence, etc.), imposing word/sentence/sample count limits, and incorporating key information (keywords, hashtags, URLs, etc.), language.
2. "Specification": Further inquire about the specific details or ask for more information. /* Output Format */ To fulfill this task, you are expected to provide your responses in the following JSON format: {
2309.09150 | 85 | "Operations": [
{
"operation": <"Modification" or "Specification">, "thoughts": <Your thinking process>, "takeways": <Briefly summarize your thought process into a short instruction>, "evolved instruction": <A more complex instruction according to your selected operation>
}
]
}
/* Histories */ Input:
Provide five innovative or improved methods to solve everyday life problems.
Output: {
"Operations": [
{
"operation": "Modification", "thoughts": "For easier readability, I'd like the output in the form of a Markdown table. Specifically, I'm interested in keywords,
summaries, and steps for each method.",
"takeways": ["Output in Markdown table format", "Including keywords, summaries, and steps"], "evolved instruction": ["Present five innovative or improved methods for solving everyday life problems through Markdown table
format, including keywords, introductions, and steps."]
}, {
2309.09150 | 86 | format, including keywords, introductions, and steps.â]
}, {
"operation": "Modification",
"thoughts": "The English version would be more convenient for me to read.",
"takeaways": ["Translate into English."],
"evolved instruction": ["In Markdown table format, present five innovative or improved methods for solving everyday life problems, including keywords, summaries, and steps, and then translate into English."]
}
]
# /* Input */ Input: List three animals of different species. }
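A minimal sketch of how this evolution prompt might be driven programmatically is shown below; the `complete` callable, the prompt header, and the function name are placeholder assumptions, and only the JSON keys mirror the template above.

```python
import json

def evolve_instruction(simple_instruction, complete):
    # `complete` is a hypothetical callable that sends a prompt to an LLM
    # and returns its raw text response.
    prompt = (
        "Evolve the instruction below with Modification or Specification operations, "
        "answering in the JSON schema of the examples above.\n"
        f"# /* Input */ Input: {simple_instruction}"
    )
    response = json.loads(complete(prompt))
    # Each operation records the model's thoughts, takeaways, and an evolved instruction.
    return [op["evolved instruction"] for op in response["Operations"]]
```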
Table 11: The prompt for enhancing the complexity of the simple instruction in the Brainstorming task during the Data Evolution process. | 2309.09150#86 | Can Large Language Models Understand Real-World Complex Instructions? | Large language models (LLMs) can understand human instructions, showing their
potential for pragmatic applications beyond traditional NLP tasks. However,
they still struggle with complex instructions, which can be either complex task
descriptions that require multiple tasks and constraints, or complex input that
contains long context, noise, heterogeneous information and multi-turn format.
Due to these features, LLMs often ignore semantic constraints from task
descriptions, generate incorrect formats, violate length or sample count
constraints, and be unfaithful to the input text. Existing benchmarks are
insufficient to assess LLMs' ability to understand complex instructions, as
they are close-ended and simple. To bridge this gap, we propose CELLO, a
benchmark for evaluating LLMs' ability to follow complex instructions
systematically. We design eight features for complex instructions and construct
a comprehensive evaluation dataset from real-world scenarios. We also establish
four criteria and develop corresponding metrics, as current ones are
inadequate, biased or too strict and coarse-grained. We compare the performance
of representative Chinese-oriented and English-oriented models in following
complex instructions through extensive experiments. Resources of CELLO are
publicly available at https://github.com/Abbey4799/CELLO. | http://arxiv.org/pdf/2309.09150 | Qianyu He, Jie Zeng, Wenhao Huang, Lina Chen, Jin Xiao, Qianxi He, Xunzhe Zhou, Lida Chen, Xintao Wang, Yuncheng Huang, Haoning Ye, Zihan Li, Shisong Chen, Yikai Zhang, Zhouhong Gu, Jiaqing Liang, Yanghua Xiao | cs.CL, cs.AI | null | null | cs.CL | 20230917 | 20240108 | [
{
"id": "2204.02311"
},
{
"id": "2212.10466"
},
{
"id": "2009.03300"
},
{
"id": "2304.12244"
},
{
"id": "2306.04757"
},
{
"id": "2304.06364"
},
{
"id": "2107.03374"
},
{
"id": "2307.11088"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2308.03688"
},
{
"id": "2307.09288"
},
{
"id": "2303.17760"
},
{
"id": "2304.09542"
},
{
"id": "2110.14168"
},
{
"id": "2306.09296"
},
{
"id": "2306.02707"
},
{
"id": "2307.00360"
},
{
"id": "2301.07597"
},
{
"id": "2307.03172"
},
{
"id": "2307.08674"
},
{
"id": "2212.09689"
},
{
"id": "2307.08689"
},
{
"id": "2305.14387"
},
{
"id": "2304.08177"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2306.05783"
},
{
"id": "2304.14293"
},
{
"id": "2307.16789"
},
{
"id": "2305.14233"
},
{
"id": "2306.08568"
},
{
"id": "2304.01196"
},
{
"id": "2305.11206"
}
] |
2309.09013 | 1 |
# EDO LIBERTY, Pinecone, USA
Maximum inner product search (MIPS) over dense and sparse vectors have progressed independently in a bifurcated literature for decades; the latter is better known as top-ð retrieval in Information Retrieval. This duality exists because sparse and dense vectors serve different end goals. That is despite the fact that they are manifestations of the same mathematical problem. In this work, we ask if algorithms for dense vectors could be applied effectively to sparse vectors, particularly those that violate the assumptions underlying top-ð retrieval methods. We study IVF-based retrieval where vectors are partitioned into clusters and only a fraction of clusters are searched during retrieval. We conduct a comprehensive analysis of dimensionality reduction for sparse vectors, and examine standard and spherical KMeans for partitioning. Our experiments demonstrate that IVF serves as an efficient solution for sparse MIPS. As byproducts, we identify two research opportunities and demonstrate their potential. First, we cast the IVF paradigm as a dynamic pruning technique and turn that insight into a novel organization of the inverted index for approximate MIPS for general sparse vectors. Second, we offer a unified regime for MIPS over vectors that have dense and sparse subspaces, and show its robustness to query distributions.
# CCS Concepts: • Information systems → Retrieval models and ranking. | 2309.09013#1 | Bridging Dense and Sparse Maximum Inner Product Search | Maximum inner product search (MIPS) over dense and sparse vectors have
progressed independently in a bifurcated literature for decades; the latter is
better known as top-$k$ retrieval in Information Retrieval. This duality exists
because sparse and dense vectors serve different end goals. That is despite the
fact that they are manifestations of the same mathematical problem. In this
work, we ask if algorithms for dense vectors could be applied effectively to
sparse vectors, particularly those that violate the assumptions underlying
top-$k$ retrieval methods. We study IVF-based retrieval where vectors are
partitioned into clusters and only a fraction of clusters are searched during
retrieval. We conduct a comprehensive analysis of dimensionality reduction for
sparse vectors, and examine standard and spherical KMeans for partitioning. Our
experiments demonstrate that IVF serves as an efficient solution for sparse
MIPS. As byproducts, we identify two research opportunities and demonstrate
their potential. First, we cast the IVF paradigm as a dynamic pruning technique
and turn that insight into a novel organization of the inverted index for
approximate MIPS for general sparse vectors. Second, we offer a unified regime
for MIPS over vectors that have dense and sparse subspaces, and show its
robustness to query distributions. | http://arxiv.org/pdf/2309.09013 | Sebastian Bruch, Franco Maria Nardini, Amir Ingber, Edo Liberty | cs.IR | null | null | cs.IR | 20230916 | 20230916 | [
{
"id": "2104.05740"
},
{
"id": "1909.13459"
},
{
"id": "2110.11540"
},
{
"id": "2106.14807"
},
{
"id": "1903.10391"
},
{
"id": "1507.05910"
},
{
"id": "2112.02179"
},
{
"id": "2212.07551"
},
{
"id": "2112.09628"
},
{
"id": "1603.09320"
},
{
"id": "2010.06467"
},
{
"id": "1706.06064"
},
{
"id": "1903.08690"
},
{
"id": "2010.01195"
}
] |
2309.09013 | 2 |
# CCS Concepts: • Information systems → Retrieval models and ranking.
Additional Key Words and Phrases: Maximum Inner Product Search, Top-k Retrieval, Sparse Vectors, Dense Vectors, Hybrid Vectors, Sketching, IVF
1 INTRODUCTION Retrieval is one of the most fundamental questions in Information Retrieval (IR), as the name of the discipline itself reflects. Simply put, given a large number of objects, we wish to find, in an efficient manner, the closest subset of those objects to a query according to some notion of closeness. The data structure and algorithmic inventions [68, 83] that have emerged from the IR literature to address this deceptively simple question have had enormous impact on the field and birthed major research directions. They provide the machinery to scale ranking to massive datasets within multi-stage ranking systems [6, 7, 14, 40], for instance, or power large-scale applications, of which search is a notable and ubiquitous example. | 2309.09013#2 | Bridging Dense and Sparse Maximum Inner Product Search | Maximum inner product search (MIPS) over dense and sparse vectors have
progressed independently in a bifurcated literature for decades; the latter is
better known as top-$k$ retrieval in Information Retrieval. This duality exists
because sparse and dense vectors serve different end goals. That is despite the
fact that they are manifestations of the same mathematical problem. In this
work, we ask if algorithms for dense vectors could be applied effectively to
sparse vectors, particularly those that violate the assumptions underlying
top-$k$ retrieval methods. We study IVF-based retrieval where vectors are
partitioned into clusters and only a fraction of clusters are searched during
retrieval. We conduct a comprehensive analysis of dimensionality reduction for
sparse vectors, and examine standard and spherical KMeans for partitioning. Our
experiments demonstrate that IVF serves as an efficient solution for sparse
MIPS. As byproducts, we identify two research opportunities and demonstrate
their potential. First, we cast the IVF paradigm as a dynamic pruning technique
and turn that insight into a novel organization of the inverted index for
approximate MIPS for general sparse vectors. Second, we offer a unified regime
for MIPS over vectors that have dense and sparse subspaces, and show its
robustness to query distributions. | http://arxiv.org/pdf/2309.09013 | Sebastian Bruch, Franco Maria Nardini, Amir Ingber, Edo Liberty | cs.IR | null | null | cs.IR | 20230916 | 20230916 | [
{
"id": "2104.05740"
},
{
"id": "1909.13459"
},
{
"id": "2110.11540"
},
{
"id": "2106.14807"
},
{
"id": "1903.10391"
},
{
"id": "1507.05910"
},
{
"id": "2112.02179"
},
{
"id": "2212.07551"
},
{
"id": "2112.09628"
},
{
"id": "1603.09320"
},
{
"id": "2010.06467"
},
{
"id": "1706.06064"
},
{
"id": "1903.08690"
},
{
"id": "2010.01195"
}
] |
2309.09013 | 3 | Much of the IR research on retrieval targets textual data, where documents and queries are texts in natural languages. Unsurprisingly, then, the retrieval machinery that exists today is highly optimized for data that is governed by the laws of natural languages (such as Zipf's law) and the way users interact with retrieval and search systems (e.g., by means of short, keyword queries). The inverted index [83], for example, is inspired by how we historically organized and found information in a book or at a library. Our measures of closeness, such as TF-IDF and BM25 [62], rely on statistics that reflect our understanding of the relevance between two pieces of text. The dynamic pruning algorithms that help us traverse inverted indexes efficiently [11, 18, 23, 41, 47, 53, 59, 68] to find the top-k most relevant documents to a query, too, rely on the statistical properties of language and relevance measures. | 2309.09013#3 | Bridging Dense and Sparse Maximum Inner Product Search | Maximum inner product search (MIPS) over dense and sparse vectors have
progressed independently in a bifurcated literature for decades; the latter is
better known as top-$k$ retrieval in Information Retrieval. This duality exists
because sparse and dense vectors serve different end goals. That is despite the
fact that they are manifestations of the same mathematical problem. In this
work, we ask if algorithms for dense vectors could be applied effectively to
sparse vectors, particularly those that violate the assumptions underlying
top-$k$ retrieval methods. We study IVF-based retrieval where vectors are
partitioned into clusters and only a fraction of clusters are searched during
retrieval. We conduct a comprehensive analysis of dimensionality reduction for
sparse vectors, and examine standard and spherical KMeans for partitioning. Our
experiments demonstrate that IVF serves as an efficient solution for sparse
MIPS. As byproducts, we identify two research opportunities and demonstrate
their potential. First, we cast the IVF paradigm as a dynamic pruning technique
and turn that insight into a novel organization of the inverted index for
approximate MIPS for general sparse vectors. Second, we offer a unified regime
for MIPS over vectors that have dense and sparse subspaces, and show its
robustness to query distributions. | http://arxiv.org/pdf/2309.09013 | Sebastian Bruch, Franco Maria Nardini, Amir Ingber, Edo Liberty | cs.IR | null | null | cs.IR | 20230916 | 20230916 | [
{
"id": "2104.05740"
},
{
"id": "1909.13459"
},
{
"id": "2110.11540"
},
{
"id": "2106.14807"
},
{
"id": "1903.10391"
},
{
"id": "1507.05910"
},
{
"id": "2112.02179"
},
{
"id": "2212.07551"
},
{
"id": "2112.09628"
},
{
"id": "1603.09320"
},
{
"id": "2010.06467"
},
{
"id": "1706.06064"
},
{
"id": "1903.08690"
},
{
"id": "2010.01195"
}
] |
2309.09013 | 4 | Authors' addresses: Sebastian Bruch, Pinecone, New York, NY, USA, [email protected]; Franco Maria Nardini, ISTI-CNR, Pisa, Italy, [email protected]; Amir Ingber, Pinecone, Tel Aviv, Israel, [email protected]; Edo Liberty, Pinecone, New York, NY, USA, [email protected].
While the form of retrieval above is the bedrock of a flurry of other research and applications in IR, the rise of deep learning in recent years brought a different form of retrieval into the IR spotlight: Approximate Nearest Neighbor (ANN) search [28, 31, 32, 36, 50, 71] in dense vector spaces. | 2309.09013#4 | Bridging Dense and Sparse Maximum Inner Product Search | Maximum inner product search (MIPS) over dense and sparse vectors have
progressed independently in a bifurcated literature for decades; the latter is
better known as top-$k$ retrieval in Information Retrieval. This duality exists
because sparse and dense vectors serve different end goals. That is despite the
fact that they are manifestations of the same mathematical problem. In this
work, we ask if algorithms for dense vectors could be applied effectively to
sparse vectors, particularly those that violate the assumptions underlying
top-$k$ retrieval methods. We study IVF-based retrieval where vectors are
partitioned into clusters and only a fraction of clusters are searched during
retrieval. We conduct a comprehensive analysis of dimensionality reduction for
sparse vectors, and examine standard and spherical KMeans for partitioning. Our
experiments demonstrate that IVF serves as an efficient solution for sparse
MIPS. As byproducts, we identify two research opportunities and demonstrate
their potential. First, we cast the IVF paradigm as a dynamic pruning technique
and turn that insight into a novel organization of the inverted index for
approximate MIPS for general sparse vectors. Second, we offer a unified regime
for MIPS over vectors that have dense and sparse subspaces, and show its
robustness to query distributions. | http://arxiv.org/pdf/2309.09013 | Sebastian Bruch, Franco Maria Nardini, Amir Ingber, Edo Liberty | cs.IR | null | null | cs.IR | 20230916 | 20230916 | [
{
"id": "2104.05740"
},
{
"id": "1909.13459"
},
{
"id": "2110.11540"
},
{
"id": "2106.14807"
},
{
"id": "1903.10391"
},
{
"id": "1507.05910"
},
{
"id": "2112.02179"
},
{
"id": "2212.07551"
},
{
"id": "2112.09628"
},
{
"id": "1603.09320"
},
{
"id": "2010.06467"
},
{
"id": "1706.06064"
},
{
"id": "1903.08690"
},
{
"id": "2010.01195"
}
] |
2309.09013 | 5 | ANN search has for decades played an outsize role in research problems that are adjacent to text retrieval such as image and multimedia retrieval [58, 80]. Its machinery is optimized for objects and queries that are real vectors in some high-dimensional space, and where closeness is determined by inner product or proper metrics such as Euclidean distance. Today, efficient and effective data structures and algorithms for this problem are often critical components in, among other applications, semantic search, where, using deep learning, we learn a vector representation of documents and queries in a space where closeness of vectors implies semantic similarity of their corresponding texts [40]. | 2309.09013#5 | Bridging Dense and Sparse Maximum Inner Product Search | Maximum inner product search (MIPS) over dense and sparse vectors have
progressed independently in a bifurcated literature for decades; the latter is
better known as top-$k$ retrieval in Information Retrieval. This duality exists
because sparse and dense vectors serve different end goals. That is despite the
fact that they are manifestations of the same mathematical problem. In this
work, we ask if algorithms for dense vectors could be applied effectively to
sparse vectors, particularly those that violate the assumptions underlying
top-$k$ retrieval methods. We study IVF-based retrieval where vectors are
partitioned into clusters and only a fraction of clusters are searched during
retrieval. We conduct a comprehensive analysis of dimensionality reduction for
sparse vectors, and examine standard and spherical KMeans for partitioning. Our
experiments demonstrate that IVF serves as an efficient solution for sparse
MIPS. As byproducts, we identify two research opportunities and demonstrate
their potential. First, we cast the IVF paradigm as a dynamic pruning technique
and turn that insight into a novel organization of the inverted index for
approximate MIPS for general sparse vectors. Second, we offer a unified regime
for MIPS over vectors that have dense and sparse subspaces, and show its
robustness to query distributions. | http://arxiv.org/pdf/2309.09013 | Sebastian Bruch, Franco Maria Nardini, Amir Ingber, Edo Liberty | cs.IR | null | null | cs.IR | 20230916 | 20230916 | [
{
"id": "2104.05740"
},
{
"id": "1909.13459"
},
{
"id": "2110.11540"
},
{
"id": "2106.14807"
},
{
"id": "1903.10391"
},
{
"id": "1507.05910"
},
{
"id": "2112.02179"
},
{
"id": "2212.07551"
},
{
"id": "2112.09628"
},
{
"id": "1603.09320"
},
{
"id": "2010.06467"
},
{
"id": "1706.06064"
},
{
"id": "1903.08690"
},
{
"id": "2010.01195"
}
] |
2309.09013 | 6 | 1.1 Maximum Inner Product Search as the Unifying Problem The fact that these two branches of retrieval have historically progressed independently makes a great deal of sense: they have targeted quite different applications. Today's reality driven by the burgeoning role of deep learning in IR and the effectiveness of learnt representations in many related domains, however, begins to challenge the status quo. Let us illustrate our point by considering joint lexical-semantic search [12, 17, 34, 37, 44, 45, 72, 75] as an example. In that setup, documents and queries are represented as learnt vectors and as bags of words. Retrieval is then performed over both representations to find the documents that are both lexically and semantically close to a query. This application is at the confluence of (inverted index-based) top-k retrieval and ANN search. The challenge presented by the historical dichotomy is that researchers and practitioners alike must study and develop two disparate systems that are characteristically different. | 2309.09013#6 | Bridging Dense and Sparse Maximum Inner Product Search | Maximum inner product search (MIPS) over dense and sparse vectors have
progressed independently in a bifurcated literature for decades; the latter is
better known as top-$k$ retrieval in Information Retrieval. This duality exists
because sparse and dense vectors serve different end goals. That is despite the
fact that they are manifestations of the same mathematical problem. In this
work, we ask if algorithms for dense vectors could be applied effectively to
sparse vectors, particularly those that violate the assumptions underlying
top-$k$ retrieval methods. We study IVF-based retrieval where vectors are
partitioned into clusters and only a fraction of clusters are searched during
retrieval. We conduct a comprehensive analysis of dimensionality reduction for
sparse vectors, and examine standard and spherical KMeans for partitioning. Our
experiments demonstrate that IVF serves as an efficient solution for sparse
MIPS. As byproducts, we identify two research opportunities and demonstrate
their potential. First, we cast the IVF paradigm as a dynamic pruning technique
and turn that insight into a novel organization of the inverted index for
approximate MIPS for general sparse vectors. Second, we offer a unified regime
for MIPS over vectors that have dense and sparse subspaces, and show its
robustness to query distributions. | http://arxiv.org/pdf/2309.09013 | Sebastian Bruch, Franco Maria Nardini, Amir Ingber, Edo Liberty | cs.IR | null | null | cs.IR | 20230916 | 20230916 | [
{
"id": "2104.05740"
},
{
"id": "1909.13459"
},
{
"id": "2110.11540"
},
{
"id": "2106.14807"
},
{
"id": "1903.10391"
},
{
"id": "1507.05910"
},
{
"id": "2112.02179"
},
{
"id": "2212.07551"
},
{
"id": "2112.09628"
},
{
"id": "1603.09320"
},
{
"id": "2010.06467"
},
{
"id": "1706.06064"
},
{
"id": "1903.08690"
},
{
"id": "2010.01195"
}
] |
2309.09013 | 7 | At the same time, we are witnessing the success of methods that learn term importance weights from texts [9, 19, 24-26, 39, 51, 79, 82], rather than compute it based on term frequency and propensity. It has been shown that the weights learnt this way exhibit distributional properties that do not conform to the expectations of inverted-index based retrieval algorithms [16, 49]. This challenges some of the assumptions underlying dynamic pruning algorithms and thus the efficacy of inverted index-based retrieval in the face of arbitrarily-distributed term weights [16, 48].
The existing literature gives effective solutions of various degrees of complexity to each and every one of the shortcomings above [46, 49, 52, 75, 78]. In this work, we wish to investigate a more general question that arises if we returned to the principles and re-examined the most glaring fact: It should come as no surprise that both branches of retrieval operate on vectors and, often, attempt to solve Maximum Inner Product Search (MIPS). It just so happens that in one branch the vectors are dense (i.e., all coordinates are almost surely non-zero) and in the other sparse (i.e., where, relative to the dimensionality of the space, very few coordinates are non-zero). We call the former "dense MIPS" and the latter "sparse MIPS" for brevity. | 2309.09013#7 | Bridging Dense and Sparse Maximum Inner Product Search | Maximum inner product search (MIPS) over dense and sparse vectors have
progressed independently in a bifurcated literature for decades; the latter is
better known as top-$k$ retrieval in Information Retrieval. This duality exists
because sparse and dense vectors serve different end goals. That is despite the
fact that they are manifestations of the same mathematical problem. In this
work, we ask if algorithms for dense vectors could be applied effectively to
sparse vectors, particularly those that violate the assumptions underlying
top-$k$ retrieval methods. We study IVF-based retrieval where vectors are
partitioned into clusters and only a fraction of clusters are searched during
retrieval. We conduct a comprehensive analysis of dimensionality reduction for
sparse vectors, and examine standard and spherical KMeans for partitioning. Our
experiments demonstrate that IVF serves as an efficient solution for sparse
MIPS. As byproducts, we identify two research opportunities and demonstrate
their potential. First, we cast the IVF paradigm as a dynamic pruning technique
and turn that insight into a novel organization of the inverted index for
approximate MIPS for general sparse vectors. Second, we offer a unified regime
for MIPS over vectors that have dense and sparse subspaces, and show its
robustness to query distributions. | http://arxiv.org/pdf/2309.09013 | Sebastian Bruch, Franco Maria Nardini, Amir Ingber, Edo Liberty | cs.IR | null | null | cs.IR | 20230916 | 20230916 | [
{
"id": "2104.05740"
},
{
"id": "1909.13459"
},
{
"id": "2110.11540"
},
{
"id": "2106.14807"
},
{
"id": "1903.10391"
},
{
"id": "1507.05910"
},
{
"id": "2112.02179"
},
{
"id": "2212.07551"
},
{
"id": "2112.09628"
},
{
"id": "1603.09320"
},
{
"id": "2010.06467"
},
{
"id": "1706.06064"
},
{
"id": "1903.08690"
},
{
"id": "2010.01195"
}
] |
2309.09013 | 8 | 1.2 Sparse MIPS as a Subclass of Dense MIPS It is clear that solutions devised for sparse MIPS are not immediately applicable to dense MIPS. That is because sparse MIPS algorithms operate under stricter distributional assumptions than dense MIPS algorithms do; in other words, the class of sparse vectors for which MIPS solutions exist is a subset of the class of dense vectors. For example, inverted index-based solutions are only efficient if the vectors are sparse1 and non-negative, and if their sparsity pattern takes on a Zipfian shape. Dense MIPS algorithms, on the other hand, have fewer inherent limitations. A natural question
1In fact, query vectors are often required to be much more sparse than document vectors for a sparse MIPS solution to remain reasonably efficient.
Algorithm 1: Indexing. Input: Collection X of sparse vectors in R^N; number of clusters, P; random projector φ : R^N → R^n where n ≪ N; clustering algorithm Cluster that returns partitions of input data and their representatives. Result: Cluster assignments P_i = { j | x^(j) ∈ Partition i } and cluster representatives C_i's. | 2309.09013#8 | Bridging Dense and Sparse Maximum Inner Product Search | Maximum inner product search (MIPS) over dense and sparse vectors have
progressed independently in a bifurcated literature for decades; the latter is
better known as top-$k$ retrieval in Information Retrieval. This duality exists
because sparse and dense vectors serve different end goals. That is despite the
fact that they are manifestations of the same mathematical problem. In this
work, we ask if algorithms for dense vectors could be applied effectively to
sparse vectors, particularly those that violate the assumptions underlying
top-$k$ retrieval methods. We study IVF-based retrieval where vectors are
partitioned into clusters and only a fraction of clusters are searched during
retrieval. We conduct a comprehensive analysis of dimensionality reduction for
sparse vectors, and examine standard and spherical KMeans for partitioning. Our
experiments demonstrate that IVF serves as an efficient solution for sparse
MIPS. As byproducts, we identify two research opportunities and demonstrate
their potential. First, we cast the IVF paradigm as a dynamic pruning technique
and turn that insight into a novel organization of the inverted index for
approximate MIPS for general sparse vectors. Second, we offer a unified regime
for MIPS over vectors that have dense and sparse subspaces, and show its
robustness to query distributions. | http://arxiv.org/pdf/2309.09013 | Sebastian Bruch, Franco Maria Nardini, Amir Ingber, Edo Liberty | cs.IR | null | null | cs.IR | 20230916 | 20230916 | [
{
"id": "2104.05740"
},
{
"id": "1909.13459"
},
{
"id": "2110.11540"
},
{
"id": "2106.14807"
},
{
"id": "1903.10391"
},
{
"id": "1507.05910"
},
{
"id": "2112.02179"
},
{
"id": "2212.07551"
},
{
"id": "2112.09628"
},
{
"id": "1603.09320"
},
{
"id": "2010.06467"
},
{
"id": "1706.06064"
},
{
"id": "1903.08690"
},
{
"id": "2010.01195"
}
] |
2309.09013 | 9 | 1: X̃ ← {φ(x) | x ∈ X}
2: Partitions, Representatives ← Cluster(X̃; P)
3: P_i ← { j | x̃^(j) ∈ Partitions[i] }, ∀ 1 ≤ i ≤ P
4: C_i ← Representatives[i], ∀ 1 ≤ i ≤ P
5: return P and C
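A compact sketch of this indexing flow, assuming NumPy and scikit-learn: a Gaussian (JL-style) random projection stands in for the projector φ and KMeans for the Cluster routine; all names and parameters are illustrative rather than the paper's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def ivf_index(X_sparse, n_components=256, n_clusters=64, seed=0):
    """X_sparse: (num_docs, N) SciPy sparse matrix of sparse document vectors."""
    rng = np.random.default_rng(seed)
    N = X_sparse.shape[1]
    # JL-style Gaussian projection phi: R^N -> R^n; approximately preserves inner products.
    proj = rng.normal(0.0, 1.0 / np.sqrt(n_components), size=(N, n_components))
    X_sketch = X_sparse @ proj                      # dense sketches of the sparse vectors
    km = KMeans(n_clusters=n_clusters, n_init="auto", random_state=seed).fit(X_sketch)
    partitions = [np.where(km.labels_ == c)[0] for c in range(n_clusters)]
    return partitions, km.cluster_centers_, proj    # cluster assignments, representatives, phi
```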
that arises given the observation above is whether dense MIPS algorithms remain effective and efficient when applied to sparse vectors. That is the primary motivation behind this study.
While conceptually simple and admittedly pedestrian, applying dense MIPS solutions to sparse vectors faces many challenges. And therein lies our technical contribution: We present, as a proof of concept, the machinery that enables such a formulation.
We start by foregoing exactness and instead developing ideas on the principle of probable approximate correctness (PAC). In other words, instead of insisting on finding the exact set of top-k documents, we settle with an approximate set that may erroneously contain some farther-afield documents and mistakenly miss other close-by documents. In the IR literature, this is the familiar notion of rank-unsafe retrieval [68]. | 2309.09013#9 | Bridging Dense and Sparse Maximum Inner Product Search | Maximum inner product search (MIPS) over dense and sparse vectors have
progressed independently in a bifurcated literature for decades; the latter is
better known as top-$k$ retrieval in Information Retrieval. This duality exists
because sparse and dense vectors serve different end goals. That is despite the
fact that they are manifestations of the same mathematical problem. In this
work, we ask if algorithms for dense vectors could be applied effectively to
sparse vectors, particularly those that violate the assumptions underlying
top-$k$ retrieval methods. We study IVF-based retrieval where vectors are
partitioned into clusters and only a fraction of clusters are searched during
retrieval. We conduct a comprehensive analysis of dimensionality reduction for
sparse vectors, and examine standard and spherical KMeans for partitioning. Our
experiments demonstrate that IVF serves as an efficient solution for sparse
MIPS. As byproducts, we identify two research opportunities and demonstrate
their potential. First, we cast the IVF paradigm as a dynamic pruning technique
and turn that insight into a novel organization of the inverted index for
approximate MIPS for general sparse vectors. Second, we offer a unified regime
for MIPS over vectors that have dense and sparse subspaces, and show its
robustness to query distributions. | http://arxiv.org/pdf/2309.09013 | Sebastian Bruch, Franco Maria Nardini, Amir Ingber, Edo Liberty | cs.IR | null | null | cs.IR | 20230916 | 20230916 | [
{
"id": "2104.05740"
},
{
"id": "1909.13459"
},
{
"id": "2110.11540"
},
{
"id": "2106.14807"
},
{
"id": "1903.10391"
},
{
"id": "1507.05910"
},
{
"id": "2112.02179"
},
{
"id": "2212.07551"
},
{
"id": "2112.09628"
},
{
"id": "1603.09320"
},
{
"id": "2010.06467"
},
{
"id": "1706.06064"
},
{
"id": "1903.08690"
},
{
"id": "2010.01195"
}
] |
2309.09013 | 10 | Having accepted some (quantifiable) error in the retrieval outcome, we are faced with the next, rather debilitating challenge of working with often extremely high dimensional sparse vectors. It is here that we appeal to results from related disciplines that study data-oblivious ℓ2-subspace embedding [73] and non-linear sketching2 (itself sparse) of sparse vectors [16]. These dimensionality reduction techniques use the elegant yet simple idea of random projections to preserve Euclidean distance or inner product between vectors. To understand the ramifications of reducing dimensions (and thereby losing information) for sparse MIPS, we study the behavior of two particular random projection techniques when applied to sparse vectors: the linear Johnson-Lindenstrauss (JL) [1-4, 33] transform and the non-linear Sinnamon [16] transform. We study this particular topic in depth in Section 4. | 2309.09013#10 | Bridging Dense and Sparse Maximum Inner Product Search | Maximum inner product search (MIPS) over dense and sparse vectors have
progressed independently in a bifurcated literature for decades; the latter is
better known as top-$k$ retrieval in Information Retrieval. This duality exists
because sparse and dense vectors serve different end goals. That is despite the
fact that they are manifestations of the same mathematical problem. In this
work, we ask if algorithms for dense vectors could be applied effectively to
sparse vectors, particularly those that violate the assumptions underlying
top-$k$ retrieval methods. We study IVF-based retrieval where vectors are
partitioned into clusters and only a fraction of clusters are searched during
retrieval. We conduct a comprehensive analysis of dimensionality reduction for
sparse vectors, and examine standard and spherical KMeans for partitioning. Our
experiments demonstrate that IVF serves as an efficient solution for sparse
MIPS. As byproducts, we identify two research opportunities and demonstrate
their potential. First, we cast the IVF paradigm as a dynamic pruning technique
and turn that insight into a novel organization of the inverted index for
approximate MIPS for general sparse vectors. Second, we offer a unified regime
for MIPS over vectors that have dense and sparse subspaces, and show its
robustness to query distributions. | http://arxiv.org/pdf/2309.09013 | Sebastian Bruch, Franco Maria Nardini, Amir Ingber, Edo Liberty | cs.IR | null | null | cs.IR | 20230916 | 20230916 | [
{
"id": "2104.05740"
},
{
"id": "1909.13459"
},
{
"id": "2110.11540"
},
{
"id": "2106.14807"
},
{
"id": "1903.10391"
},
{
"id": "1507.05910"
},
{
"id": "2112.02179"
},
{
"id": "2212.07551"
},
{
"id": "2112.09628"
},
{
"id": "1603.09320"
},
{
"id": "2010.06467"
},
{
"id": "1706.06064"
},
{
"id": "1903.08690"
},
{
"id": "2010.01195"
}
] |
2309.09013 | 11 | By projecting sparse high-dimensional vectors into a (possibly dense) low-dimensional subspace, we have removed the main barrier to applying dense MIPS solutions to sparse vectors and are therefore prepared to investigate our main research question above. We are particularly interested in a method commonly known as Inverted File-based (IVF) retrieval: It begins by clustering vectors into partitions in an unsupervised manner. When it receives a query vector, it identifies a subset of the more "promising" partitions, and conducts (exact or approximate) retrieval only over the subset of documents assigned to them. The search over the sub-collection can be delegated to another MIPS algorithm, the most naïve of which is an exhaustive, exact search. To understand how (sketches of) sparse vectors behave in an IVF retrieval system, we empirically evaluate standard and spherical KMeans [21] on a range of datasets. This analysis is the main topic of Section 5.
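Standard KMeans is available off the shelf; spherical KMeans is not, but a minimal version (assuming NumPy, with illustrative names) simply keeps points and centroids L2-normalized so that assignment maximizes the inner product, i.e., cosine similarity.

```python
import numpy as np

def spherical_kmeans(X, k, n_iter=20, seed=0):
    """X: (n, d) dense array, e.g. sketches of sparse vectors. Returns (labels, centroids)."""
    rng = np.random.default_rng(seed)
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    centroids = Xn[rng.choice(len(Xn), size=k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmax(Xn @ centroids.T, axis=1)          # assign by maximum inner product
        for j in range(k):
            members = Xn[labels == j]
            if len(members):
                c = members.mean(axis=0)
                centroids[j] = c / (np.linalg.norm(c) + 1e-12)  # re-project onto the unit sphere
    return labels, centroids
```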
Together, dimensionality reduction via random projections and clustering enable the IVF paradigm for sparse vectors. Algorithm 1 describes the end-to-end indexing procedure, and Algorithm 2
2We use "sketch" to describe a compressed representation of a high-dimensional vector, and "to sketch" to describe the act of compressing a vector into a sketch.
| 2309.09013#11 | Bridging Dense and Sparse Maximum Inner Product Search | Maximum inner product search (MIPS) over dense and sparse vectors have
progressed independently in a bifurcated literature for decades; the latter is
better known as top-$k$ retrieval in Information Retrieval. This duality exists
because sparse and dense vectors serve different end goals. That is despite the
fact that they are manifestations of the same mathematical problem. In this
work, we ask if algorithms for dense vectors could be applied effectively to
sparse vectors, particularly those that violate the assumptions underlying
top-$k$ retrieval methods. We study IVF-based retrieval where vectors are
partitioned into clusters and only a fraction of clusters are searched during
retrieval. We conduct a comprehensive analysis of dimensionality reduction for
sparse vectors, and examine standard and spherical KMeans for partitioning. Our
experiments demonstrate that IVF serves as an efficient solution for sparse
MIPS. As byproducts, we identify two research opportunities and demonstrate
their potential. First, we cast the IVF paradigm as a dynamic pruning technique
and turn that insight into a novel organization of the inverted index for
approximate MIPS for general sparse vectors. Second, we offer a unified regime
for MIPS over vectors that have dense and sparse subspaces, and show its
robustness to query distributions. | http://arxiv.org/pdf/2309.09013 | Sebastian Bruch, Franco Maria Nardini, Amir Ingber, Edo Liberty | cs.IR | null | null | cs.IR | 20230916 | 20230916 | [
{
"id": "2104.05740"
},
{
"id": "1909.13459"
},
{
"id": "2110.11540"
},
{
"id": "2106.14807"
},
{
"id": "1903.10391"
},
{
"id": "1507.05910"
},
{
"id": "2112.02179"
},
{
"id": "2212.07551"
},
{
"id": "2112.09628"
},
{
"id": "1603.09320"
},
{
"id": "2010.06467"
},
{
"id": "1706.06064"
},
{
"id": "1903.08690"
},
{
"id": "2010.01195"
}
] |
2309.09013 | 12 |
Algorithm 2: Retrieval. Input: Sparse query vector, q ∈ R^N; clusters and representatives, P, C obtained from Algorithm 1; random projector φ : R^N → R^n where n ≪ N; number of data points to examine, ℓ ≤ |X|, where |X| denotes the size of the collection; MIPS sub-algorithm R. Result: Approximate set of top-k vectors that maximize inner product with q.
1: q̃ ← φ(q)
2: SortedClusters ← SortDescending(P by ⟨q̃, C_i⟩)
3: TotalSize ← 0
4: I ← ∅
5: for P_{π_i} ∈ SortedClusters do
6:     I ← I ∪ {π_i}
7:     TotalSize ← TotalSize + |P_{π_i}|
8:     if TotalSize ≥ ℓ then break
9: end for
10: return top-k vectors from partitions P_I ← {P_i | i ∈ I} w.r.t. ⟨q, ·⟩ using R
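A sketch of the corresponding retrieval flow over the index sketched after Algorithm 1, assuming NumPy; an exhaustive inner-product scan over the selected partitions stands in for the MIPS sub-algorithm R, and all names are illustrative.

```python
import numpy as np

def ivf_search(q, partitions, centroids, proj, X_sparse, k=10, ell=10_000):
    """q: dense 1-D query vector in R^N; X_sparse: the original sparse collection."""
    q_sketch = q @ proj                               # project the query with the same phi
    order = np.argsort(-(centroids @ q_sketch))       # clusters by descending <q_sketch, C_i>
    selected, total = [], 0
    for i in order:                                   # take clusters until ~ell points are covered
        selected.append(i)
        total += len(partitions[i])
        if total >= ell:
            break
    candidates = np.concatenate([partitions[i] for i in selected])
    scores = X_sparse[candidates] @ q                 # exact inner products over the candidates
    return candidates[np.argsort(-scores)[:k]]        # approximate top-k
```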
gives details of the retrieval logic. We encourage the reader to refer to Section 3 for an overview of our adopted notation. | 2309.09013#12 | Bridging Dense and Sparse Maximum Inner Product Search | Maximum inner product search (MIPS) over dense and sparse vectors have
progressed independently in a bifurcated literature for decades; the latter is
better known as top-$k$ retrieval in Information Retrieval. This duality exists
because sparse and dense vectors serve different end goals. That is despite the
fact that they are manifestations of the same mathematical problem. In this
work, we ask if algorithms for dense vectors could be applied effectively to
sparse vectors, particularly those that violate the assumptions underlying
top-$k$ retrieval methods. We study IVF-based retrieval where vectors are
partitioned into clusters and only a fraction of clusters are searched during
retrieval. We conduct a comprehensive analysis of dimensionality reduction for
sparse vectors, and examine standard and spherical KMeans for partitioning. Our
experiments demonstrate that IVF serves as an efficient solution for sparse
MIPS. As byproducts, we identify two research opportunities and demonstrate
their potential. First, we cast the IVF paradigm as a dynamic pruning technique
and turn that insight into a novel organization of the inverted index for
approximate MIPS for general sparse vectors. Second, we offer a unified regime
for MIPS over vectors that have dense and sparse subspaces, and show its
robustness to query distributions. | http://arxiv.org/pdf/2309.09013 | Sebastian Bruch, Franco Maria Nardini, Amir Ingber, Edo Liberty | cs.IR | null | null | cs.IR | 20230916 | 20230916 | [
{
"id": "2104.05740"
},
{
"id": "1909.13459"
},
{
"id": "2110.11540"
},
{
"id": "2106.14807"
},
{
"id": "1903.10391"
},
{
"id": "1507.05910"
},
{
"id": "2112.02179"
},
{
"id": "2212.07551"
},
{
"id": "2112.09628"
},
{
"id": "1603.09320"
},
{
"id": "2010.06467"
},
{
"id": "1706.06064"
},
{
"id": "1903.08690"
},
{
"id": "2010.01195"
}
] |
2309.09013 | 13 | gives details of the retrieval logic. We encourage the reader to refer to Section 3 for an overview of our adopted notation.
1.3 Research Byproducts As we demonstrate, it is certainly feasible and (given an appropriate tolerance for error) often effective to apply Algorithms 1 and 2 to sparse vectors. That possibility immediately leads to two important observations that we explore later in this work. | 2309.09013#13 | Bridging Dense and Sparse Maximum Inner Product Search | Maximum inner product search (MIPS) over dense and sparse vectors have
progressed independently in a bifurcated literature for decades; the latter is
better known as top-$k$ retrieval in Information Retrieval. This duality exists
because sparse and dense vectors serve different end goals. That is despite the
fact that they are manifestations of the same mathematical problem. In this
work, we ask if algorithms for dense vectors could be applied effectively to
sparse vectors, particularly those that violate the assumptions underlying
top-$k$ retrieval methods. We study IVF-based retrieval where vectors are
partitioned into clusters and only a fraction of clusters are searched during
retrieval. We conduct a comprehensive analysis of dimensionality reduction for
sparse vectors, and examine standard and spherical KMeans for partitioning. Our
experiments demonstrate that IVF serves as an efficient solution for sparse
MIPS. As byproducts, we identify two research opportunities and demonstrate
their potential. First, we cast the IVF paradigm as a dynamic pruning technique
and turn that insight into a novel organization of the inverted index for
approximate MIPS for general sparse vectors. Second, we offer a unified regime
for MIPS over vectors that have dense and sparse subspaces, and show its
robustness to query distributions. | http://arxiv.org/pdf/2309.09013 | Sebastian Bruch, Franco Maria Nardini, Amir Ingber, Edo Liberty | cs.IR | null | null | cs.IR | 20230916 | 20230916 | [
{
"id": "2104.05740"
},
{
"id": "1909.13459"
},
{
"id": "2110.11540"
},
{
"id": "2106.14807"
},
{
"id": "1903.10391"
},
{
"id": "1507.05910"
},
{
"id": "2112.02179"
},
{
"id": "2212.07551"
},
{
"id": "2112.09628"
},
{
"id": "1603.09320"
},
{
"id": "2010.06467"
},
{
"id": "1706.06064"
},
{
"id": "1903.08690"
},
{
"id": "2010.01195"
}
] |
2309.09013 | 14 | First, we remark that, in effect, clustering a document collection and performing search over only a fraction of the resulting clusters constitutes a dynamic pruning method, albeit a rank-unsafe one. We use this insight to propose an organization of the inverted index where inverted lists comprise blocks, with each block containing documents that fall into the same partition, and sorted by partition identifier. We show that appropriately using skip pointers over inverted lists facilitates fast approximate top-k retrieval for general sparse vectors, that is, vectors that need not conform to any distributional requirements. Experiments confirm the efficiency and effectiveness of our proposal. Secondly, we offer a fresh but natural perspective to unify the two worlds of dense and sparse MIPS into a single, elegant framework at the systems level. In particular, we consider hybrid vectors (i.e., vectors that may contain dense and sparse subspaces) in an IVF retrieval system. We demonstrate empirically that the clusters formed by our proposal are effective, and, regardless of how the ℓ2 mass is split between the dense and sparse subspaces, retrieval can be arbitrarily accurate.
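An illustrative toy of that organization, assuming plain Python dictionaries: postings are grouped into per-cluster blocks so that, once a query's clusters are chosen, every non-selected block is skipped (dict lookups stand in for actual skip pointers over a contiguous inverted list).

```python
from collections import defaultdict
import heapq

def build_blocked_index(docs, doc_to_cluster):
    """docs: {doc_id: {term: weight}}; doc_to_cluster: {doc_id: cluster_id}."""
    index = defaultdict(lambda: defaultdict(list))      # term -> cluster_id -> postings block
    for doc_id, vec in docs.items():
        c = doc_to_cluster[doc_id]
        for term, w in vec.items():
            index[term][c].append((doc_id, w))
    return index

def blocked_search(query, index, selected_clusters, k=10):
    """Scores only postings whose block (cluster) was selected for this query."""
    scores = defaultdict(float)
    for term, qw in query.items():
        blocks = index.get(term, {})
        for c in selected_clusters:                     # all other blocks are skipped outright
            for doc_id, w in blocks.get(c, []):
                scores[doc_id] += qw * w
    return heapq.nlargest(k, scores.items(), key=lambda kv: kv[1])
```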
1.4 Contributions We summarize our contributions as follows:
⢠We analyze the effect of linear and non-linear random projection algorithms on the inner product approximation of sparse vectors; | 2309.09013#14 | Bridging Dense and Sparse Maximum Inner Product Search | Maximum inner product search (MIPS) over dense and sparse vectors have
progressed independently in a bifurcated literature for decades; the latter is
better known as top-$k$ retrieval in Information Retrieval. This duality exists
because sparse and dense vectors serve different end goals. That is despite the
fact that they are manifestations of the same mathematical problem. In this
work, we ask if algorithms for dense vectors could be applied effectively to
sparse vectors, particularly those that violate the assumptions underlying
top-$k$ retrieval methods. We study IVF-based retrieval where vectors are
partitioned into clusters and only a fraction of clusters are searched during
retrieval. We conduct a comprehensive analysis of dimensionality reduction for
sparse vectors, and examine standard and spherical KMeans for partitioning. Our
experiments demonstrate that IVF serves as an efficient solution for sparse
MIPS. As byproducts, we identify two research opportunities and demonstrate
their potential. First, we cast the IVF paradigm as a dynamic pruning technique
and turn that insight into a novel organization of the inverted index for
approximate MIPS for general sparse vectors. Second, we offer a unified regime
for MIPS over vectors that have dense and sparse subspaces, and show its
robustness to query distributions. | http://arxiv.org/pdf/2309.09013 | Sebastian Bruch, Franco Maria Nardini, Amir Ingber, Edo Liberty | cs.IR | null | null | cs.IR | 20230916 | 20230916 | [
{
"id": "2104.05740"
},
{
"id": "1909.13459"
},
{
"id": "2110.11540"
},
{
"id": "2106.14807"
},
{
"id": "1903.10391"
},
{
"id": "1507.05910"
},
{
"id": "2112.02179"
},
{
"id": "2212.07551"
},
{
"id": "2112.09628"
},
{
"id": "1603.09320"
},
{
"id": "2010.06467"
},
{
"id": "1706.06064"
},
{
"id": "1903.08690"
},
{
"id": "2010.01195"
}
] |
2309.09013 | 15 | 1.4 Contributions We summarize our contributions as follows:
⢠We analyze the effect of linear and non-linear random projection algorithms on the inner product approximation of sparse vectors;
⢠We extend the clustering-based IVF method of dense MIPS to (sketches of) sparse vec- tors, and, in that context, empirically evaluate standard and spherical KMeans clustering algorithms;
⢠We use our findings to propose a novel organization of the inverted index that facilitates approximate MIPS over general sparse vectors, thereby freeing sparse MIPS from strict distributional requirements of traditional top-ð retrieval algorithms in IR; and,
⢠We propose a unification of dense and sparse MIPS using IVF, and present a preliminary empirical evaluation of the proposal.
Throughout our presentation, we hope to convey the simplicity that our proposals provide in working with vectors, regardless of their density or sparsity, for both researchers and practitioners. But we are more excited by what this new perspective enables and the major research questions it inspires. To start, we believe our framework and the retrieval machinery it offers provide substantial flexibility to researchers who wish to study learnt term weights without the constraints imposed by traditional inverted index-based retrieval algorithms. We are equally encouraged by our initial findings on hybrid vector retrieval and hope our framework enables further research on lexical-semantic search, multi-modal retrieval, multimedia retrieval, and other domains. | 2309.09013#15 | Bridging Dense and Sparse Maximum Inner Product Search | Maximum inner product search (MIPS) over dense and sparse vectors have
progressed independently in a bifurcated literature for decades; the latter is
better known as top-$k$ retrieval in Information Retrieval. This duality exists
because sparse and dense vectors serve different end goals. That is despite the
fact that they are manifestations of the same mathematical problem. In this
work, we ask if algorithms for dense vectors could be applied effectively to
sparse vectors, particularly those that violate the assumptions underlying
top-$k$ retrieval methods. We study IVF-based retrieval where vectors are
partitioned into clusters and only a fraction of clusters are searched during
retrieval. We conduct a comprehensive analysis of dimensionality reduction for
sparse vectors, and examine standard and spherical KMeans for partitioning. Our
experiments demonstrate that IVF serves as an efficient solution for sparse
MIPS. As byproducts, we identify two research opportunities and demonstrate
their potential. First, we cast the IVF paradigm as a dynamic pruning technique
and turn that insight into a novel organization of the inverted index for
approximate MIPS for general sparse vectors. Second, we offer a unified regime
for MIPS over vectors that have dense and sparse subspaces, and show its
robustness to query distributions. | http://arxiv.org/pdf/2309.09013 | Sebastian Bruch, Franco Maria Nardini, Amir Ingber, Edo Liberty | cs.IR | null | null | cs.IR | 20230916 | 20230916 | [
{
"id": "2104.05740"
},
{
"id": "1909.13459"
},
{
"id": "2110.11540"
},
{
"id": "2106.14807"
},
{
"id": "1903.10391"
},
{
"id": "1507.05910"
},
{
"id": "2112.02179"
},
{
"id": "2212.07551"
},
{
"id": "2112.09628"
},
{
"id": "1603.09320"
},
{
"id": "2010.06467"
},
{
"id": "1706.06064"
},
{
"id": "1903.08690"
},
{
"id": "2010.01195"
}
] |
2309.09013 | 16 | We additionally claim, as we argue later, that our proposed view opens the door to new and exciting research directions in IR, while, as a meta-algorithm, still allowing the incorporation of decades of research. From principled distributed system design, to the mathematics of alternative sparse vector sketching, to improved clustering or partitioning algorithms, our conceptual framework motivates a number of research questions to pursue. Moreover, our proposal gives a new flavor to the important research on efficient and effective systems in IR [13, 15]: the PAC nature of the framework offers intrinsic levers to trade off efficiency for effectiveness that deserve a thorough theoretical and empirical examination.
1.5 Structure The remainder of this manuscript is organized as follows. We review the relevant parts of the literature in Section 2. We then describe our notation and setup in Section 3. That will let us put in context our analysis and discussion of the behavior of linear and non-linear random projections for sparse vectors in Section 4, and subsequently clustering in Section 5. In Section 6, we show that clustering for IVF and dynamic pruning for inverted indexes are intimately connected, and describe a natural organization of the inverted index through clustering. We philosophize on a unified, density-agnostic framework for MIPS in Section 7. We conclude this manuscript in Section 8.
2 RELATED WORK This section sets the stage by briefly reviewing the literature on sparse and dense MIPS. | 2309.09013#16 | Bridging Dense and Sparse Maximum Inner Product Search | Maximum inner product search (MIPS) over dense and sparse vectors have
progressed independently in a bifurcated literature for decades; the latter is
better known as top-$k$ retrieval in Information Retrieval. This duality exists
because sparse and dense vectors serve different end goals. That is despite the
fact that they are manifestations of the same mathematical problem. In this
work, we ask if algorithms for dense vectors could be applied effectively to
sparse vectors, particularly those that violate the assumptions underlying
top-$k$ retrieval methods. We study IVF-based retrieval where vectors are
partitioned into clusters and only a fraction of clusters are searched during
retrieval. We conduct a comprehensive analysis of dimensionality reduction for
sparse vectors, and examine standard and spherical KMeans for partitioning. Our
experiments demonstrate that IVF serves as an efficient solution for sparse
MIPS. As byproducts, we identify two research opportunities and demonstrate
their potential. First, we cast the IVF paradigm as a dynamic pruning technique
and turn that insight into a novel organization of the inverted index for
approximate MIPS for general sparse vectors. Second, we offer a unified regime
for MIPS over vectors that have dense and sparse subspaces, and show its
robustness to query distributions. | http://arxiv.org/pdf/2309.09013 | Sebastian Bruch, Franco Maria Nardini, Amir Ingber, Edo Liberty | cs.IR | null | null | cs.IR | 20230916 | 20230916 | [
{
"id": "2104.05740"
},
{
"id": "1909.13459"
},
{
"id": "2110.11540"
},
{
"id": "2106.14807"
},
{
"id": "1903.10391"
},
{
"id": "1507.05910"
},
{
"id": "2112.02179"
},
{
"id": "2212.07551"
},
{
"id": "2112.09628"
},
{
"id": "1603.09320"
},
{
"id": "2010.06467"
},
{
"id": "1706.06064"
},
{
"id": "1903.08690"
},
{
"id": "2010.01195"
}
] |
2309.09013 | 17 | 2 RELATED WORK This section sets the stage by briefly reviewing the literature on sparse and dense MIPS.
2.1 Sparse MIPS Numerous sparse MIPS algorithms exist in the IR literature that are specifically tailored to text data and that are behind the success of the field in scaling to massive text collections. We refrain from reviewing this vast literature here and, instead, refer the reader to excellent existing surveys [68, 83] on the topic. But to give context to our work, we quickly make note of key algorithms and explain what makes them less than ideal for the setup we consider in this work.
2.1.1 Sparse MIPS for Text Collections. MaxScore [69] and WAND [11], along with their intellectual descendants [22, 23, 53, 54], are the de facto sparse MIPS algorithms, applied typically to vectors obtained from a BM25-encoding [62] of text. This family of algorithms augments a document identifier-sorted inverted index with upper-bounds on the partial score contribution of each coordinate to the final inner product. With that additional statistic, it is possible to traverse the inverted lists one document at a time and decide if a document may possibly end up in the top-k set: if the document appears in enough inverted lists whose collective score upper-bound exceeds
| 2309.09013#17 | Bridging Dense and Sparse Maximum Inner Product Search | Maximum inner product search (MIPS) over dense and sparse vectors have
progressed independently in a bifurcated literature for decades; the latter is
better known as top-$k$ retrieval in Information Retrieval. This duality exists
because sparse and dense vectors serve different end goals. That is despite the
fact that they are manifestations of the same mathematical problem. In this
work, we ask if algorithms for dense vectors could be applied effectively to
sparse vectors, particularly those that violate the assumptions underlying
top-$k$ retrieval methods. We study IVF-based retrieval where vectors are
partitioned into clusters and only a fraction of clusters are searched during
retrieval. We conduct a comprehensive analysis of dimensionality reduction for
sparse vectors, and examine standard and spherical KMeans for partitioning. Our
experiments demonstrate that IVF serves as an efficient solution for sparse
MIPS. As byproducts, we identify two research opportunities and demonstrate
their potential. First, we cast the IVF paradigm as a dynamic pruning technique
and turn that insight into a novel organization of the inverted index for
approximate MIPS for general sparse vectors. Second, we offer a unified regime
for MIPS over vectors that have dense and sparse subspaces, and show its
robustness to query distributions. | http://arxiv.org/pdf/2309.09013 | Sebastian Bruch, Franco Maria Nardini, Amir Ingber, Edo Liberty | cs.IR | null | null | cs.IR | 20230916 | 20230916 | [
{
"id": "2104.05740"
},
{
"id": "1909.13459"
},
{
"id": "2110.11540"
},
{
"id": "2106.14807"
},
{
"id": "1903.10391"
},
{
"id": "1507.05910"
},
{
"id": "2112.02179"
},
{
"id": "2212.07551"
},
{
"id": "2112.09628"
},
{
"id": "1603.09320"
},
{
"id": "2010.06467"
},
{
"id": "1706.06064"
},
{
"id": "1903.08690"
},
{
"id": "2010.01195"
}
] |