Dataset row schema: doi (string) | chunk-id (int64, 0-936) | chunk (string) | id (string) | title (string) | summary (string) | source (string) | authors (string) | categories (string) | comment (string, nullable) | journal_ref (string, nullable) | primary_category (string) | published (string) | updated (string) | references (list)

All rows below share the same paper metadata:

Title: ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
DOI: 2308.13724 | Source: http://arxiv.org/pdf/2308.13724
Authors: Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
Categories: cs.RO, cs.AI | Primary category: cs.RO | Published: 20230826 | Updated: 20230826

Summary: Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.

The chunk texts (ids 80-113 in this slice; a few ids are absent) reproduce the paper's appendix of few-shot prompt examples and follow below.
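Rows of this shape can be consumed programmatically; the following is a minimal sketch using the Hugging Face datasets library, where the repository id is a placeholder since this dump does not name one.

```python
from datasets import load_dataset  # pip install datasets

# Placeholder repo id -- substitute the dataset this dump was exported from.
ds = load_dataset("example-org/arxiv-paper-chunks", split="train")

row = ds[0]
print(row["id"], "-", row["title"])              # e.g. "2308.13724#80 - ISR-LLM: ..."
print(row["chunk"][:300])                        # the paper text carried by this row
print([ref["id"] for ref in row["references"]])  # arXiv ids referenced by the paper
```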
Question: I have 5 blocks. Initially: Block b1 is on the table. Block b2 is on the table. Block b3 is on top of b2. Block b4 is on the table. Block b5 is on top of b4. Your goal is to move the blocks such that they are stacked in the order: b3 on b1, b1 on b4, b4 on b2, b2 on b5, and b5 on table.

Answer: Domain file:

    (define (domain blocksworld)
      (:predicates (clear ?x)
                   (on ?x ?y)
                   (on-table ?x)
                   (arm-empty)
                   (holding ?x))
      (:action pickup
        :parameters (?ob)
        :precondition (and (clear ?ob) (on-table ?ob) (arm-empty))
        :effect (and (holding ?ob) (not (clear ?ob)) (not (on-table ?ob)) (not (arm-empty))))
      (:action putdown
        :parameters (?ob)
        :precondition (holding ?ob)
        :effect (and (clear ?ob) (arm-empty) (on-table ?ob) (not (holding ?ob))))
      (:action stack
        :parameters (?ob ?underob)
        :precondition (and (clear ?underob) (holding ?ob))
        :effect (and (arm-empty) (clear ?ob) (on ?ob ?underob) (not (clear ?underob)) (not (holding ?ob))))
      (:action unstack
        :parameters (?ob ?underob)
        :precondition (and (on ?ob ?underob) (clear ?ob) (arm-empty))
        :effect (and (holding ?ob) (clear ?underob) (not (on ?ob ?underob)) (not (clear ?ob)) (not (arm-empty)))))

Problem file:

    (define (problem fiveblocks)
      (:domain blocksworld)
      (:objects b1 b2 b3 b4 b5)
      (:init (arm-empty)
             (on-table b1) (on-table b2) (on b3 b2) (on-table b4) (on b5 b4)
             (clear b1) (clear b3) (clear b5))
      (:goal (and (on b3 b1) (on b1 b4) (on b4 b2) (on b2 b5) (on-table b5))))
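The summary above mentions that an external validator assesses the LLM planner's output during iterative self-refinement. As a rough illustration of what such a check involves for this domain, here is a minimal sketch (my own fact encoding and helper names, not the paper's implementation): it simulates the four blocksworld operators over a set of ground facts and rejects any action whose precondition does not hold.

```python
# Minimal blocksworld plan checker (illustrative sketch, not ISR-LLM's validator).
# A state is a set of ground facts, e.g. ("on", "b3", "b1") or ("arm-empty",).

def apply_action(state, action):
    """Return the successor state, or None if a precondition is violated."""
    name, *args = action
    s = set(state)
    if name == "pickup":
        (ob,) = args
        if {("clear", ob), ("on-table", ob), ("arm-empty",)} <= s:
            s -= {("clear", ob), ("on-table", ob), ("arm-empty",)}
            return s | {("holding", ob)}
    elif name == "putdown":
        (ob,) = args
        if ("holding", ob) in s:
            s.discard(("holding", ob))
            return s | {("clear", ob), ("arm-empty",), ("on-table", ob)}
    elif name == "stack":
        ob, under = args
        if {("clear", under), ("holding", ob)} <= s:
            s -= {("clear", under), ("holding", ob)}
            return s | {("arm-empty",), ("clear", ob), ("on", ob, under)}
    elif name == "unstack":
        ob, under = args
        if {("on", ob, under), ("clear", ob), ("arm-empty",)} <= s:
            s -= {("on", ob, under), ("clear", ob), ("arm-empty",)}
            return s | {("holding", ob), ("clear", under)}
    return None  # unknown action or violated precondition

def validate(state, plan, goal):
    """True iff every action applies in turn and all goal facts hold at the end."""
    for action in plan:
        state = apply_action(state, action)
        if state is None:
            return False
    return goal <= state
```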
# A.2 Planner

Cooking: For the Cooking planning domain, we use a total of three few-shot examples, which are given as follows.

Example 1

Question: Domain file:

    (define (domain cooking)
      (:types pot ingredient)
      (:predicates (arm-empty)
                   (pot-empty ?p - pot)
                   (holding ?i - ingredient)
                   (contain ?p - pot ?i - ingredient))
      (:action pick
        :parameters (?i - ingredient)
        :precondition (and (arm-empty))
        :effect (and (holding ?i) (not (arm-empty))))
      (:action putdown
        :parameters (?i - ingredient)
        :precondition (and (holding ?i))
        :effect (and (not (holding ?i)) (arm-empty)))
      (:action add
        :parameters (?i - ingredient ?p - pot)
        :precondition (and (holding ?i))
        :effect (and (not (pot-empty ?p)) (contain ?p ?i))))

Problem file:

    (define (problem threepots)
      (:domain cooking)
      (:objects pot1 pot2 pot3 - pot
                ingredient1 ingredient2 ingredient3 ingredient4 ingredient5 ingredient6 - ingredient)
      (:init (arm-empty)
             (pot-empty pot1) (pot-empty pot2) (pot-empty pot3))
      (:goal (and (contain pot1 ingredient1) (contain pot1 ingredient4)
                  (contain pot2 ingredient2) (contain pot2 ingredient3)
                  (contain pot2 ingredient5) (contain pot2 ingredient6)
                  (contain pot3 ingredient1) (contain pot3 ingredient4)
                  (contain pot3 ingredient5))))

Answer: Action Sequence:
Step 1: ingredient1 is needed in pot1, pot3
(pick ingredient1) (add ingredient1 pot1) (add ingredient1 pot3) (putdown ingredient1)
Step 2: ingredient2 is needed in pot2
(pick ingredient2) (add ingredient2 pot2) (putdown ingredient2)
Step 3: ingredient3 is needed in pot2
(pick ingredient3) (add ingredient3 pot2) (putdown ingredient3)
Step 4: ingredient4 is needed in pot1, pot3
(pick ingredient4) (add ingredient4 pot1) (add ingredient4 pot3) (putdown ingredient4)
Step 5: ingredient5 is needed in pot2, pot3
(pick ingredient5) (add ingredient5 pot2) (add ingredient5 pot3) (putdown ingredient5)
Step 6: ingredient6 is needed in pot2
(pick ingredient6) (add ingredient6 pot2) (putdown ingredient6)
All ingredients have been added to the corresponding pots.
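The answer above follows a fixed recipe: visit the ingredients in index order, pick each one once, add it to every pot whose goal requires it, then put it down. A short sketch of that recipe (hypothetical helper name and tuple encoding, not code from the paper):

```python
# Reproduces the per-ingredient action pattern of the cooking few-shot answers.

def cooking_plan(goal):
    """goal: set of (pot, ingredient) pairs taken from the (:goal ...) block."""
    plan = []
    for ing in sorted({i for _, i in goal}):
        pots = sorted(p for p, i in goal if i == ing)
        plan.append(("pick", ing))
        plan.extend(("add", ing, p) for p in pots)
        plan.append(("putdown", ing))
    return plan

goal = {("pot1", "ingredient1"), ("pot1", "ingredient4"),
        ("pot2", "ingredient2"), ("pot2", "ingredient3"),
        ("pot2", "ingredient5"), ("pot2", "ingredient6"),
        ("pot3", "ingredient1"), ("pot3", "ingredient4"),
        ("pot3", "ingredient5")}
print(cooking_plan(goal))  # same pick/add/putdown sequence as the steps above
```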
Example 2

Question: Domain file:

    (define (domain cooking)
      (:types pot ingredient)
      (:predicates (arm-empty)
                   (pot-empty ?p - pot)
                   (holding ?i - ingredient)
                   (contain ?p - pot ?i - ingredient))
      (:action pick
        :parameters (?i - ingredient)
        :precondition (and (arm-empty))
        :effect (and (holding ?i) (not (arm-empty))))
      (:action putdown
        :parameters (?i - ingredient)
        :precondition (and (holding ?i))
        :effect (and (not (holding ?i)) (arm-empty)))
      (:action add
        :parameters (?i - ingredient ?p - pot)
        :precondition (and (holding ?i))
        :effect (and (not (pot-empty ?p)) (contain ?p ?i))))

Problem file:

    (define (problem fourpots)
      (:domain cooking)
      (:objects pot1 pot2 pot3 pot4 - pot
                ingredient1 ingredient2 ingredient3 ingredient4 ingredient5 ingredient6 - ingredient)
      (:init (arm-empty)
             (pot-empty pot1) (pot-empty pot2) (pot-empty pot3) (pot-empty pot4))
      (:goal (and (contain pot1 ingredient2) (contain pot1 ingredient6)
                  (contain pot2 ingredient1) (contain pot2 ingredient2) (contain pot2 ingredient5)
                  (contain pot3 ingredient5) (contain pot3 ingredient6)
                  (contain pot4 ingredient2) (contain pot4 ingredient4) (contain pot4 ingredient6))))
Answer: Action Sequence:
Step 1: ingredient1 is needed in pot2
(pick ingredient1) (add ingredient1 pot2) (putdown ingredient1)
Step 2: ingredient2 is needed in pot1, pot2, pot4
(pick ingredient2) (add ingredient2 pot1) (add ingredient2 pot2) (add ingredient2 pot4) (putdown ingredient2)
Step 3: ingredient3 is not needed in any pot
no action needed
Step 4: ingredient4 is needed in pot4
(pick ingredient4) (add ingredient4 pot4) (putdown ingredient4)
Step 5: ingredient5 is needed in pot2, pot3
(pick ingredient5) (add ingredient5 pot2) (add ingredient5 pot3) (putdown ingredient5)
Step 6: ingredient6 is needed in pot1, pot3, pot4
(pick ingredient6) (add ingredient6 pot1) (add ingredient6 pot3) (add ingredient6 pot4) (putdown ingredient6)
All ingredients have been added to the corresponding pots.
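A matching feasibility check, in the spirit of the validator mentioned in the summary, can simulate the three cooking operators directly. Again a sketch, using the same action-tuple encoding as the previous snippet:

```python
# Replays a cooking action sequence, enforcing each action's precondition,
# and tests whether all goal pairs are contained at the end (illustrative).

def run_cooking(plan, goal):
    holding = None          # the ingredient in the arm, if any
    contents = set()        # accumulated (pot, ingredient) facts
    for act in plan:
        if act[0] == "pick" and holding is None:         # needs (arm-empty)
            holding = act[1]
        elif act[0] == "putdown" and holding == act[1]:  # needs (holding ?i)
            holding = None
        elif act[0] == "add" and holding == act[1]:      # needs (holding ?i)
            contents.add((act[2], act[1]))
        else:
            return False    # unknown action or violated precondition
    return goal <= contents
```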
Example 3

Question: Domain file:

    (define (domain cooking)
      (:types pot ingredient)
      (:predicates (arm-empty)
                   (pot-empty ?p - pot)
                   (holding ?i - ingredient)
                   (contain ?p - pot ?i - ingredient))
      (:action pick
        :parameters (?i - ingredient)
        :precondition (and (arm-empty))
        :effect (and (holding ?i) (not (arm-empty))))
      (:action putdown
        :parameters (?i - ingredient)
        :precondition (and (holding ?i))
        :effect (and (not (holding ?i)) (arm-empty)))
      (:action add
        :parameters (?i - ingredient ?p - pot)
        :precondition (and (holding ?i))
        :effect (and (not (pot-empty ?p)) (contain ?p ?i))))

Problem file:

    (define (problem fivepots)
      (:domain cooking)
      (:objects pot1 pot2 pot3 pot4 pot5 - pot
                ingredient1 ingredient2 ingredient3 ingredient4 ingredient5 ingredient6 - ingredient)
      (:init (arm-empty)
             (pot-empty pot1) (pot-empty pot2) (pot-empty pot3)
             (pot-empty pot4) (pot-empty pot5))
      (:goal (and (contain pot1 ingredient3) (contain pot1 ingredient4) (contain pot1 ingredient5)
                  (contain pot2 ingredient1) (contain pot2 ingredient2) (contain pot2 ingredient3) (contain pot2 ingredient6)
                  (contain pot3 ingredient2) (contain pot3 ingredient3) (contain pot3 ingredient5)
                  (contain pot4 ingredient1) (contain pot4 ingredient6)
                  (contain pot5 ingredient1) (contain pot5 ingredient3) (contain pot5 ingredient4) (contain pot5 ingredient5))))
Answer: Action Sequence:
Step 1: ingredient1 is needed in pot2, pot4, pot5
(pick ingredient1) (add ingredient1 pot2) (add ingredient1 pot4) (add ingredient1 pot5) (putdown ingredient1)
Step 2: ingredient2 is needed in pot2, pot3
(pick ingredient2) (add ingredient2 pot2) (add ingredient2 pot3) (putdown ingredient2)
Step 3: ingredient3 is needed in pot1, pot2, pot3, pot5
(pick ingredient3) (add ingredient3 pot1) (add ingredient3 pot2) (add ingredient3 pot3) (add ingredient3 pot5) (putdown ingredient3)
Step 4: ingredient4 is needed in pot1, pot5
(pick ingredient4) (add ingredient4 pot1) (add ingredient4 pot5) (putdown ingredient4)
Step 5: ingredient5 is needed in pot1, pot3, pot5
(pick ingredient5) (add ingredient5 pot1) (add ingredient5 pot3) (add ingredient5 pot5) (putdown ingredient5)
Step 6: ingredient6 is needed in pot2, pot4
(pick ingredient6) (add ingredient6 pot2) (add ingredient6 pot4) (putdown ingredient6)
All ingredients have been added to the corresponding pots.
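Taken together, the two cooking sketches reproduce and verify Example 3's answer (same caveat: an illustrative encoding, not the paper's tooling):

```python
goal = {("pot1", "ingredient3"), ("pot1", "ingredient4"), ("pot1", "ingredient5"),
        ("pot2", "ingredient1"), ("pot2", "ingredient2"), ("pot2", "ingredient3"),
        ("pot2", "ingredient6"), ("pot3", "ingredient2"), ("pot3", "ingredient3"),
        ("pot3", "ingredient5"), ("pot4", "ingredient1"), ("pot4", "ingredient6"),
        ("pot5", "ingredient1"), ("pot5", "ingredient3"), ("pot5", "ingredient4"),
        ("pot5", "ingredient5")}
plan = cooking_plan(goal)       # per-ingredient sequence, as in the answer above
assert run_cooking(plan, goal)  # feasible, and every goal pair is reached
```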
Ball Moving: For the Ball Moving planning domain, we use a total of three few-shot examples, which are given as follows.

Example 1

Question: Domain file:

    (define (domain ballmoving)
      (:types room ball robot)
      (:predicates (robot-at ?r - robot ?x - room)
                   (at ?b - ball ?x - room)
                   (arm-empty)
                   (carry ?r - robot ?b - ball))
      (:action move
        :parameters (?r - robot ?from ?to - room)
        :precondition (and (robot-at ?r ?from))
        :effect (and (robot-at ?r ?to) (not (robot-at ?r ?from))))
      (:action pick
        :parameters (?r - robot ?b - ball ?room - room)
        :precondition (and (at ?b ?room) (robot-at ?r ?room) (arm-empty))
        :effect (and (carry ?r ?b) (not (arm-empty)) (not (at ?b ?room))))
      (:action drop
        :parameters (?r - robot ?b - ball ?room - room)
        :precondition (and (carry ?r ?b) (robot-at ?r ?room))
        :effect (and (arm-empty) (at ?b ?room) (not (carry ?r ?b)))))

Problem file:

    (define (problem threeballs)
      (:domain ballmoving)
      (:objects robot1 - robot
                room1 room2 room3 room4 - room
                ball1 ball2 ball3 - ball)
      (:init (arm-empty)
             (robot-at robot1 room2)
             (at ball1 room3) (at ball2 room2) (at ball3 room4))
      (:goal (and (at ball1 room1) (at ball2 room2) (at ball3 room3))))
Answer: Action Sequence:
Step 1: robot1 is now in room2.
(at ball1 room3) with goal (at ball1 room1)
(at ball2 room2) with goal (at ball2 room2)
(at ball3 room4) with goal (at ball3 room3)
ball1 is unsatisfied, ball2 is satisfied, ball3 is unsatisfied.
All balls that are in the same room with robot1 are satisfied. Go to complete next unsatisfied ball (from first to last): ball1
(move robot1 room2 room3) (pick ball1 room3) (move robot1 room3 room1) (drop ball1 room1)
Step 2: robot1 is now in room1.
(at ball1 room1) with goal (at ball1 room1)
(at ball2 room2) with goal (at ball2 room2)
(at ball3 room4) with goal (at ball3 room3)
ball1 is satisfied, ball2 is satisfied, ball3 is unsatisfied.
All balls that are in the same room with robot1 are satisfied. Go to complete next unsatisfied ball (from first to last): ball3
(move robot1 room1 room4) (pick ball3 room4) (move robot1 room4 room3) (drop ball3 room3)
Step 3: robot1 is now in room3.
(at ball1 room1) with goal (at ball1 room1)
(at ball2 room2) with goal (at ball2 room2)
(at ball3 room3) with goal (at ball3 room3)
ball1 is satisfied, ball2 is satisfied, ball3 is satisfied.
All balls are satisfied.
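All three ball-moving answers trace the same greedy routine: serve a misplaced ball in the robot's current room if there is one, otherwise the lowest-numbered misplaced ball. A compact sketch of that routine (hypothetical helper; a single robot is assumed, so the robot argument is left implicit in the action tuples):

```python
# Greedy ball-moving planner mirroring the few-shot traces (illustrative).

def ball_plan(robot_room, at, goal):
    """at, goal: dicts mapping ball -> room (the same key sets are assumed)."""
    plan, at = [], dict(at)
    while at != goal:
        misplaced = [b for b in sorted(at) if at[b] != goal[b]]
        local = [b for b in misplaced if at[b] == robot_room]
        ball = local[0] if local else misplaced[0]
        if at[ball] != robot_room:                     # walk to the ball first
            plan.append(("move", robot_room, at[ball]))
            robot_room = at[ball]
        plan.append(("pick", ball, robot_room))
        plan.append(("move", robot_room, goal[ball]))  # carry it to its goal room
        robot_room = goal[ball]
        plan.append(("drop", ball, robot_room))
        at[ball] = robot_room
    return plan
```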
Example 2

Question: Domain file:

    (define (domain ballmoving)
      (:types room ball robot)
      (:predicates (robot-at ?r - robot ?x - room)
                   (at ?b - ball ?x - room)
                   (arm-empty)
                   (carry ?r - robot ?b - ball))
      (:action move
        :parameters (?r - robot ?from ?to - room)
        :precondition (and (robot-at ?r ?from))
        :effect (and (robot-at ?r ?to) (not (robot-at ?r ?from))))
      (:action pick
        :parameters (?r - robot ?b - ball ?room - room)
        :precondition (and (at ?b ?room) (robot-at ?r ?room) (arm-empty))
        :effect (and (carry ?r ?b) (not (arm-empty)) (not (at ?b ?room))))
      (:action drop
        :parameters (?r - robot ?b - ball ?room - room)
        :precondition (and (carry ?r ?b) (robot-at ?r ?room))
        :effect (and (arm-empty) (at ?b ?room) (not (carry ?r ?b)))))
Problem file:

    (define (problem fourballs)
      (:domain ballmoving)
      (:objects robot1 - robot
                room1 room2 room3 room4 - room
                ball1 ball2 ball3 ball4 - ball)
      (:init (arm-empty)
             (robot-at robot1 room3)
             (at ball1 room1) (at ball2 room3) (at ball3 room1) (at ball4 room2))
      (:goal (and (at ball1 room3) (at ball2 room2) (at ball3 room4) (at ball4 room4))))

Answer: Action Sequence:
Step 1: robot1 is now in room3.
(at ball1 room1) with goal (at ball1 room3)
(at ball2 room3) with goal (at ball2 room2)
(at ball3 room1) with goal (at ball3 room4)
(at ball4 room2) with goal (at ball4 room4)
ball1 is unsatisfied, ball2 is unsatisfied, ball3 is unsatisfied, ball4 is unsatisfied.
ball2 is in the same room with the robot1 and is unsatisfied. Go to complete ball2
(pick ball2 room3) (move robot1 room3 room2) (drop ball2 room2)
Step 2: robot1 is now in room2.
(at ball1 room1) with goal (at ball1 room3)
(at ball2 room2) with goal (at ball2 room2)
(at ball3 room1) with goal (at ball3 room4)
(at ball4 room2) with goal (at ball4 room4)
ball1 is unsatisfied, ball2 is satisfied, ball3 is unsatisfied, ball4 is unsatisfied.
ball4 is in the same room with the robot1 and is unsatisfied. Go to complete ball4
(pick ball4 room2) (move robot1 room2 room4) (drop ball4 room4)
Step 3: robot1 is now in room4.
(at ball1 room1) with goal (at ball1 room3)
(at ball2 room2) with goal (at ball2 room2)
(at ball3 room1) with goal (at ball3 room4)
(at ball4 room4) with goal (at ball4 room4)
ball1 is unsatisfied, ball2 is satisfied, ball3 is unsatisfied, ball4 is satisfied.
All balls that are in the same room with robot1 are satisfied.
Go to complete next unsatisfied ball (from first to last): ball1
(move robot1 room4 room1) (pick ball1 room1) (move robot1 room1 room3) (drop ball1 room3)
Step 4: robot1 is now in room3.
(at ball1 room3) with goal (at ball1 room3)
(at ball2 room2) with goal (at ball2 room2)
(at ball3 room1) with goal (at ball3 room4)
(at ball4 room4) with goal (at ball4 room4)
ball1 is satisfied, ball2 is satisfied, ball3 is unsatisfied, ball4 is satisfied.
All balls that are in the same room with robot1 are satisfied. Go to complete next unsatisfied ball (from first to last): ball3
(move robot1 room3 room1) (pick ball3 room1) (move robot1 room1 room4) (drop ball3 room4)
Step 5: robot1 is now in room4.
(at ball1 room3) with goal (at ball1 room3)
(at ball2 room2) with goal (at ball2 room2)
(at ball3 room4) with goal (at ball3 room4)
(at ball4 room4) with goal (at ball4 room4)
ball1 is satisfied, ball2 is satisfied, ball3 is satisfied, ball4 is satisfied.
All balls are satisfied.
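As with the cooking domain, such traces can be checked mechanically. A sketch of a ball-moving plan checker under the same encoding as ball_plan (single robot, robot1 implicit):

```python
# Replays move/pick/drop with their preconditions and checks the goal (sketch).

def run_balls(plan, robot_room, at, goal):
    at, carrying = dict(at), None
    for act in plan:
        if act[0] == "move" and act[1] == robot_room:
            robot_room = act[2]
        elif (act[0] == "pick" and carrying is None
              and act[2] == robot_room and at.get(act[1]) == robot_room):
            carrying = act[1]
            del at[act[1]]
        elif act[0] == "drop" and carrying == act[1] and act[2] == robot_room:
            at[act[1]] = robot_room
            carrying = None
        else:
            return False  # violated precondition
    return carrying is None and at == goal
```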
Example 3

Question: Domain file:

    (define (domain ballmoving)
      (:types room ball robot)
      (:predicates (robot-at ?r - robot ?x - room)
                   (at ?b - ball ?x - room)
                   (arm-empty)
                   (carry ?r - robot ?b - ball))
      (:action move
        :parameters (?r - robot ?from ?to - room)
        :precondition (and (robot-at ?r ?from))
        :effect (and (robot-at ?r ?to) (not (robot-at ?r ?from))))
      (:action pick
        :parameters (?r - robot ?b - ball ?room - room)
        :precondition (and (at ?b ?room) (robot-at ?r ?room) (arm-empty))
        :effect (and (carry ?r ?b) (not (arm-empty)) (not (at ?b ?room))))
      (:action drop
        :parameters (?r - robot ?b - ball ?room - room)
        :precondition (and (carry ?r ?b) (robot-at ?r ?room))
        :effect (and (arm-empty) (at ?b ?room) (not (carry ?r ?b)))))

Problem file:

    (define (problem fiveballs)
      (:domain ballmoving)
      (:objects robot1 - robot
                room1 room2 room3 room4 - room
                ball1 ball2 ball3 ball4 ball5 - ball)
      (:init (arm-empty)
             (robot-at robot1 room2)
             (at ball1 room1) (at ball2 room2) (at ball3 room4) (at ball4 room3) (at ball5 room4))
      (:goal (and (at ball1 room1) (at ball2 room1) (at ball3 room4) (at ball4 room2) (at ball5 room1))))

Answer: Action Sequence:
Step 1: robot1 is now in room2.
(at ball1 room1) with goal (at ball1 room1)
(at ball2 room2) with goal (at ball2 room1)
(at ball3 room4) with goal (at ball3 room4)
(at ball4 room3) with goal (at ball4 room2)
(at ball5 room4) with goal (at ball5 room1)
ball1 is satisfied, ball2 is unsatisfied, ball3 is satisfied, ball4 is unsatisfied, ball5 is unsatisfied.
ball2 is in the same room with the robot1 and is unsatisfied. Go to complete ball2
(pick ball2 room2) (move robot1 room2 room1) (drop ball2 room1)
Step 2: robot1 is now in room1.
(at ball1 room1) with goal (at ball1 room1)
(at ball2 room1) with goal (at ball2 room1)
(at ball3 room4) with goal (at ball3 room4)
(at ball4 room3) with goal (at ball4 room2)
(at ball5 room4) with goal (at ball5 room1)
ball1 is satisfied, ball2 is satisfied, ball3 is satisfied, ball4 is unsatisfied, ball5 is unsatisfied.
All balls that are in the same room with robot1 are satisfied. Go to complete next unsatisfied ball (from first to last): ball4
(move robot1 room1 room3) (pick ball4 room3) (move robot1 room3 room2) (drop ball4 room2)
Step 3: robot1 is now in room2.
(at ball1 room1) with goal (at ball1 room1)
(at ball2 room1) with goal (at ball2 room1)
(at ball3 room4) with goal (at ball3 room4)
(at ball4 room2) with goal (at ball4 room2)
(at ball5 room4) with goal (at ball5 room1)
ball1 is satisfied, ball2 is satisfied, ball3 is satisfied, ball4 is satisfied, ball5 is unsatisfied.
All balls that are in the same room with robot1 are satisfied. Go to complete next unsatisfied ball (from first to last): ball5
(move robot1 room2 room4) (pick ball5 room4) (move robot1 room4 room1) (drop ball5 room1)
Step 4: robot1 is now in room1.
(at ball1 room1) with goal (at ball1 room1)
(at ball2 room1) with goal (at ball2 room1)
(at ball3 room4) with goal (at ball3 room4)
(at ball4 room2) with goal (at ball4 room2)
(at ball5 room1) with goal (at ball5 room1)
ball1 is satisfied, ball2 is satisfied, ball3 is satisfied, ball4 is satisfied, ball5 is satisfied.
All balls are satisfied.
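Running the two ball-moving sketches on Example 3's instance reproduces the answer above and confirms that it reaches the goal (illustrative check only):

```python
at   = {"ball1": "room1", "ball2": "room2", "ball3": "room4",
        "ball4": "room3", "ball5": "room4"}
goal = {"ball1": "room1", "ball2": "room1", "ball3": "room4",
        "ball4": "room2", "ball5": "room1"}
plan = ball_plan("room2", at, goal)        # same action sequence as above
assert run_balls(plan, "room2", at, goal)  # and it satisfies every ball
```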
Blocksworld: For the Blocksworld planning domain, we use a total of five few-shot examples, which are given as follows.
Example 1
Question: Domain file:

    (define (domain blocksworld)
      (:predicates (clear ?x)
                   (on ?x ?y)
                   (on-table ?x)
                   (arm-empty)
                   (holding ?x))
      (:action pickup
        :parameters (?ob)
        :precondition (and (clear ?ob) (on-table ?ob) (arm-empty))
        :effect (and (holding ?ob) (not (clear ?ob)) (not (on-table ?ob)) (not (arm-empty))))
      (:action putdown
        :parameters (?ob)
        :precondition (holding ?ob)
        :effect (and (clear ?ob) (arm-empty) (on-table ?ob) (not (holding ?ob))))
      (:action stack
        :parameters (?ob ?underob)
        :precondition (and (clear ?underob) (holding ?ob))
        :effect (and (arm-empty) (clear ?ob) (on ?ob ?underob) (not (clear ?underob)) (not (holding ?ob))))
      (:action unstack
        :parameters (?ob ?underob)
        :precondition (and (on ?ob ?underob) (clear ?ob) (arm-empty))
        :effect (and (holding ?ob) (clear ?underob) (not (on ?ob ?underob)) (not (clear ?ob)) (not (arm-empty)))))

Problem file:
(define (problem threeblocks)
(:domain blocksworld)
(:objects b1 b2 b3)
(:init (arm-empty)
(on-table b1)
(on-table b2)
(on b3 b1)
(clear b2)
(clear b3) )
(:goal (and (on b1 b2) (on b2 b3) (on-table b3)) )
Answer:
Action Sequence:
We need to build the blocks from bottom to top.
Third goal: b3 on table
(unstack b3 b1)
(putdown b3)
Second goal: b2 on b3
(pickup b2)
(stack b2 b3)
First goal: b1 on b2
(pickup b1)
(stack b1 b2)
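Action sequences like the one above can be checked by replaying them against the domain's preconditions and effects. The following is a minimal sketch of such a replay, assuming a dictionary-based state encoding of our own choosing; it is not code from the paper:

```python
# Replay a Blocksworld plan; each assert mirrors a precondition of the domain.
def simulate(state, plan):
    """state: {'on': {block: support-or-'table'}, 'clear': set, 'holding': None-or-block}."""
    for act in plan:
        name, *args = act.strip("()").split()
        if name == "pickup":
            b, = args
            assert state["on"].get(b) == "table" and b in state["clear"] and state["holding"] is None
            state["holding"] = b; del state["on"][b]; state["clear"].discard(b)
        elif name == "putdown":
            b, = args
            assert state["holding"] == b
            state["holding"] = None; state["on"][b] = "table"; state["clear"].add(b)
        elif name == "stack":
            b, under = args
            assert state["holding"] == b and under in state["clear"]
            state["holding"] = None; state["on"][b] = under
            state["clear"].discard(under); state["clear"].add(b)
        elif name == "unstack":
            b, under = args
            assert state["on"].get(b) == under and b in state["clear"] and state["holding"] is None
            state["holding"] = b; del state["on"][b]
            state["clear"].discard(b); state["clear"].add(under)
    return state

# Example 1: b3 starts on b1; the plan should end with b1 on b2, b2 on b3, b3 on table.
s = {"on": {"b1": "table", "b2": "table", "b3": "b1"}, "clear": {"b2", "b3"}, "holding": None}
simulate(s, ["(unstack b3 b1)", "(putdown b3)", "(pickup b2)", "(stack b2 b3)",
             "(pickup b1)", "(stack b1 b2)"])
assert s["on"] == {"b3": "table", "b2": "b3", "b1": "b2"}
```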
Example 2
Question:
Domain file:
(define (domain blocksworld)
(:predicates (clear ?x)
(on ?x ?y)
(on-table ?x)
(arm-empty)
(holding ?x) )
(:action pickup
:parameters (?ob)
:precondition (and (clear ?ob) (on-table ?ob) (arm-empty))
:effect (and (holding ?ob) (not (clear ?ob)) (not (on-table ?ob)) (not (arm-empty))) )
(:action putdown
:parameters (?ob)
:precondition (holding ?ob)
:effect (and (clear ?ob) (arm-empty) (on-table ?ob) (not (holding ?ob))) )
(:action stack
:parameters (?ob ?underob)
:precondition (and (clear ?underob) (holding ?ob))
:effect (and (arm-empty) (clear ?ob) (on ?ob ?underob) (not (clear ?underob)) (not (holding ?ob))) )
(:action unstack
:parameters (?ob ?underob)
:precondition (and (on ?ob ?underob) (clear ?ob) (arm-empty))
:effect (and (holding ?ob) (clear ?underob) (not (on ?ob ?underob)) (not (clear ?ob)) (not (arm-empty)))) )
Problem file:
(define (problem fourblocks)
(:domain blocksworld)
(:objects b1 b2 b3 b4)
(:init (arm-empty)
(on-table b1)
(on b2 b4)
(on b3 b1)
(on-table b4)
(clear b2)
(clear b3) )
(:goal (and (on b3 b2) (on b2 b1) (on b1 b4) (on-table b4)) )
Answer:
Action Sequence:
We need to build the blocks from bottom to top.
Fourth goal: b4 on table
Already satisfied in initial configuration
Third goal: b1 on b4
(unstack b2 b4)
(putdown b2)
(unstack b3 b1)
(putdown b3)
(pickup b1)
(stack b1 b4)
Second goal: b2 on b1
(pickup b2)
(stack b2 b1)
First goal: b3 on b2
(pickup b3)
(stack b3 b2)
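Before b1 can be picked up in the answer above, every block resting on the blocks involved has to be unstacked and put down. A small illustrative helper (ours, not the paper's) that emits those moves:

```python
def clear_block(block, on):
    """on: {upper: lower}; returns actions that expose `block` (assumed encoding)."""
    above = {lower: upper for upper, lower in on.items()}
    tower, cur = [], block
    while cur in above:            # collect the blocks stacked on top of `block`
        cur = above[cur]
        tower.append(cur)
    acts = []
    for b in reversed(tower):      # remove the topmost block first
        acts += [f"(unstack {b} {on[b]})", f"(putdown {b})"]
        del on[b]
    return acts

# In Example 2, b2 sits on b4 and b3 on b1, so both must be put down first:
print(clear_block("b4", {"b2": "b4", "b3": "b1"}) +
      clear_block("b1", {"b3": "b1"}))
```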
# Example 3
Question:
Domain file:
(define (domain blocksworld)
(:predicates (clear ?x)
(on ?x ?y)
(on-table ?x)
(arm-empty)
(holding ?x) )
(:action pickup
:parameters (?ob)
:precondition (and (clear ?ob) (on-table ?ob) (arm-empty))
:effect (and (holding ?ob) (not (clear ?ob)) (not (on-table ?ob)) (not (arm-empty))) )
(:action putdown
:parameters (?ob)
:precondition (holding ?ob)
:effect (and (clear ?ob) (arm-empty) (on-table ?ob) (not (holding ?ob))) )
(:action stack
:parameters (?ob ?underob)
:precondition (and (clear ?underob) (holding ?ob))
:effect (and (arm-empty) (clear ?ob) (on ?ob ?underob) (not (clear ?underob)) (not (holding ?ob))) )
(:action unstack
:parameters (?ob ?underob)
:precondition (and (on ?ob ?underob) (clear ?ob) (arm-empty))
:effect (and (holding ?ob) (clear ?underob) (not (on ?ob ?underob)) (not (clear ?ob)) (not (arm-empty)))) )
Problem file:
(define (problem fiveblocks)
(:init (arm-empty)
(on b1 b3)
(on-table b2)
(on-table b3)
(clear b1)
(clear b2) )
(:goal (and (on b2 b1) (on b1 b3) (on-table b3)) )
Answer:
Action Sequence:
We need to build the blocks from bottom to top.
Third goal: b3 on table
Already satisfied in initial configuration
Second goal: b1 on b3
Already satisfied in initial configuration
First goal: b2 on b1
(pickup b2)
(stack b2 b1)
# Example 5
Question:
Domain file:
(define (domain blocksworld)
(:predicates (clear ?x)
(on ?x ?y)
(on-table ?x)
(arm-empty)
(holding ?x) )
(:action pickup
:parameters (?ob)
:precondition (and (clear ?ob) (on-table ?ob) (arm-empty))
:effect (and (holding ?ob) (not (clear ?ob)) (not (on-table ?ob)) (not (arm-empty))) )
(:action putdown
:parameters (?ob)
:precondition (holding ?ob)
:effect (and (clear ?ob) (arm-empty) (on-table ?ob) (not (holding ?ob))) )
(:action stack
:parameters (?ob ?underob)
:precondition (and (clear ?underob) (holding ?ob))
:effect (and (arm-empty) (clear ?ob) (on ?ob ?underob) (not (clear ?underob)) (not (holding ?ob))) )
(:action unstack
:parameters (?ob ?underob)
:precondition (and (on ?ob ?underob) (clear ?ob) (arm-empty))
:effect (and (holding ?ob) (clear ?underob) (not (on ?ob ?underob)) (not (clear ?ob)) (not (arm-empty)))) )
Problem file:
(define (problem threeblocks)
(:domain blocksworld)
(:objects b1 b2 b3)
(:init (arm-empty)
(on-table b1)
(on b2 b3)
(on-table b3)
(clear b1)
(clear b2) )
(:goal (and (on b2 b1) (on b1 b3) (on-table b3)) )
Answer:
Action Sequence:
We need to build the blocks from bottom to top.
Third goal: b3 on table
Already satisfied in initial configuration
Second goal: b1 on b3
(unstack b2 b3)
(putdown b2)
(pickup b1)
(stack b1 b3)
First goal: b2 on b1
(pickup b2)
(stack b2 b1)
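The "build from bottom to top" ordering used throughout these answers can be read directly off the (on x y) goal facts. A sketch of that ordering for a single target tower; the function name and encoding are assumptions, not the paper's planner:

```python
def bottom_up_order(on_goals, table_block):
    """on_goals: {upper: lower} from the (:goal ...) facts; table_block from (on-table ...)."""
    below = {lower: upper for upper, lower in on_goals.items()}
    order, cur = [table_block], table_block
    while cur in below:            # walk upward through the target tower
        cur = below[cur]
        order.append(cur)
    return order                   # satisfy the goals in this order

# Example 5's goal: (on b2 b1) (on b1 b3) (on-table b3)  ->  b3, then b1, then b2.
print(bottom_up_order({"b2": "b1", "b1": "b3"}, "b3"))    # ['b3', 'b1', 'b2']
```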
# A.3 Self-Validator
Cooking: For the Cooking planning domain, we use a total of four few-shot examples, which are given as follows.
Example 1
Question:
Initial state:
(arm-empty)
(pot-empty pot1)
(pot-empty pot2)
(pot-empty pot3)
Goal state:
(contain pot1 ingredient1)
(contain pot1 ingredient4)
(contain pot2 ingredient2)
(contain pot2 ingredient3)
(contain pot2 ingredient5)
(contain pot2 ingredient6)
(contain pot3 ingredient1)
(contain pot3 ingredient4)
(contain pot3 ingredient5)
Examined action sequence:
(pick ingredient1)
(add ingredient1 pot1)
(add ingredient1 pot3)
(putdown ingredient1)
(pick ingredient2)
(add ingredient1 pot2)
(putdown ingredient2)
(pick ingredient3)
(add ingredient3 pot2)
(putdown ingredient3)
(pick ingredient4)
(add ingredient4 pot1)
(add ingredient4 pot3)
(putdown ingredient4)
(pick ingredient5)
(add ingredient5 pot2)
(add ingredient5 pot3)
(putdown ingredient5)
(pick ingredient6)
(add ingredient6 pot2)
(putdown ingredient6)
Answer:
Analysis:
Initial state:
pot1: empty
pot2: empty
pot3: empty
(pick ingredient1)
result: ingredient1 in hand
(add ingredient1 pot1)
result:
pot1: ingredient1
pot2: empty
pot3: empty
(add ingredient1 pot3)
result:
pot1: ingredient1
pot2: empty
pot3: ingredient1
(putdown ingredient1)
result: hand empty
ingredient has been picked: ingredient1
(pick ingredient2)
result: ingredient2 in hand
(add ingredient1 pot2)
result: action is wrong since currently ingredient2 is in hand instead of ingredient1
Final answer: No, the action sequence is wrong, it cannot accomplish the goal.
Example 2
Question:
Initial state:
(arm-empty)
(pot-empty pot1)
(pot-empty pot2)
(pot-empty pot3)
Goal state:
(contain pot1 ingredient1)
(contain pot1 ingredient2)
(contain pot1 ingredient3)
(contain pot1 ingredient6)
(contain pot2 ingredient2)
(contain pot2 ingredient4)
(contain pot3 ingredient2)
(contain pot3 ingredient3)
(contain pot3 ingredient5)
Examined action sequence:
(pick ingredient1)
(add ingredient1 pot1)
(putdown ingredient1)
(pick ingredient2)
(add ingredient2 pot1)
(add ingredient2 pot2)
(add ingredient2 pot3)
(putdown ingredient2)
(pick ingredient2)
(add ingredient2 pot1)
(add ingredient2 pot3)
(putdown ingredient2)
(pick ingredient4)
(add ingredient4 pot2)
(putdown ingredient4)
(pick ingredient6)
(add ingredient6 pot1)
(putdown ingredient6)
Answer:
Analysis:
Initial state:
pot1: empty
pot2: empty
pot3: empty
(pick ingredient1)
result: ingredient1 in hand
(add ingredient1 pot1)
result:
pot1: ingredient1
pot2: empty
pot3: empty
(putdown ingredient1)
result: hand empty
ingredient has been picked: ingredient1
(pick ingredient2)
result: ingredient2 in hand
(add ingredient2 pot1)
result:
pot1: ingredient1, ingredient2
pot2: empty
pot3: empty
(add ingredient2 pot2)
result:
pot1: ingredient1, ingredient2
pot2: ingredient2
pot3: empty
(add ingredient2 pot3)
result:
pot1: ingredient1, ingredient2
pot2: ingredient2
pot3: ingredient2
(putdown ingredient2)
result: hand empty
ingredient has been picked: ingredient1, ingredient2
(pick ingredient2)
result: action is wrong since ingredient2 has already been picked
Final answer: No, the action sequence is wrong, it cannot accomplish the goal.
# Example 3
Question:
Initial state:
(arm-empty)
(pot-empty pot1)
(pot-empty pot2)
(pot-empty pot3)
Goal state:
(contain pot1 ingredient2)
(contain pot1 ingredient5)
(contain pot2 ingredient2)
(contain pot2 ingredient4)
(contain pot2 ingredient5)
(contain pot2 ingredient6)
(contain pot3 ingredient1)
(contain pot3 ingredient5)
Examined action sequence:
(pick ingredient2)
(add ingredient2 pot1)
(add ingredient2 pot2)
(putdown ingredient2)
(pick ingredient4)
(add ingredient4 pot2)
(putdown ingredient4)
(pick ingredient5)
(add ingredient5 pot1)
(add ingredient5 pot2)
(add ingredient5 pot3)
(putdown ingredient5)
(pick ingredient6)
(add ingredient6 pot2)
(putdown ingredient6)
Answer:
Analysis:
Initial state:
pot1: empty
pot2: empty
pot3: empty
(pick ingredient2)
result: ingredient2 in hand
(add ingredient2 pot1)
result:
pot1: ingredient2
pot2: empty
pot3: empty
(add ingredient2 pot2)
result:
pot1: ingredient2
pot2: ingredient2
pot3: empty
(putdown ingredient2)
result: hand empty
ingredient has been picked: ingredient2
(pick ingredient4)
result: ingredient4 in hand
(add ingredient4 pot2)
result:
pot1: ingredient2
pot2: ingredient2, ingredient4
pot3: empty
(putdown ingredient4)
result: hand empty
ingredient has been picked: ingredient2, ingredient4
(pick ingredient5)
result: ingredient5 in hand
(add ingredient5 pot1)
result:
pot1: ingredient2, ingredient5
pot2: ingredient2, ingredient4
pot3: empty
(add ingredient5 pot2)
result:
pot1: ingredient2, ingredient5
pot2: ingredient2, ingredient4, ingredient5
pot3: empty
(add ingredient5 pot3)
result:
pot1: ingredient2, ingredient5
pot2: ingredient2, ingredient4, ingredient5
pot3: ingredient5
(putdown ingredient5)
result: hand empty
ingredient has been picked: ingredient2, ingredient4, ingredient5
(pick ingredient6)
result: ingredient6 in hand
(add ingredient6 pot2)
result:
pot1: ingredient2, ingredient5
pot2: ingredient2, ingredient4, ingredient5, ingredient6
pot3: ingredient5
(putdown ingredient6)
result: hand empty
ingredient has been picked: ingredient2, ingredient4, ingredient5, ingredient6
No error found in action sequence.
Goal state check:
final state:
pot1: ingredient2, ingredient5
pot2: ingredient2, ingredient4, ingredient5, ingredient6
pot3: ingredient5
goal state:
(contain pot1 ingredient2)
(contain pot1 ingredient5)
(contain pot2 ingredient2)
(contain pot2 ingredient4)
(contain pot2 ingredient5)
(contain pot2 ingredient6)
(contain pot3 ingredient1)
(contain pot3 ingredient5)
The final state is not the same as the goal state. pot3 should contain ingredient1. Goal is not accomplished.
Final answer: No, the action sequence is wrong, it cannot accomplish the goal.
Example 4
Question:
Initial state:
(arm-empty)
(pot-empty pot1)
(pot-empty pot2)
(pot-empty pot3)
Goal state:
(contain pot1 ingredient2)
(contain pot1 ingredient3)
(contain pot1 ingredient4)
(contain pot1 ingredient6)
(contain pot2 ingredient3)
(contain pot2 ingredient5)
(contain pot3 ingredient4)
(contain pot3 ingredient5)
Examined action sequence:
(pick ingredient2)
(add ingredient2 pot1)
(putdown ingredient2)
(pick ingredient3)
(add ingredient3 pot1)
(add ingredient3 pot2)
(putdown ingredient3)
(pick ingredient4)
(add ingredient4 pot1)
(add ingredient4 pot3)
(putdown ingredient4)
(pick ingredient5)
(add ingredient5 pot2)
(add ingredient5 pot3)
(putdown ingredient5)
(pick ingredient6)
(add ingredient6 pot1)
(putdown ingredient6)
Answer:
Analysis:
Initial state:
pot1: empty
pot2: empty
pot3: empty
(pick ingredient2)
result: ingredient2 in hand
(add ingredient2 pot1)
result:
pot1: ingredient2
pot2: empty
pot3: empty
(putdown ingredient2)
result: hand empty
ingredient has been picked: ingredient2
(pick ingredient3)
result: ingredient3 in hand
(add ingredient3 pot1)
result:
pot1: ingredient2, ingredient3
pot2: empty
pot3: empty
(add ingredient3 pot2)
result:
pot1: ingredient2, ingredient3
pot2: ingredient3
pot3: empty
(putdown ingredient3)
result: hand empty
ingredient has been picked: ingredient2, ingredient3
(pick ingredient4)
result: ingredient4 in hand
(add ingredient4 pot1)
result:
pot1: ingredient2, ingredient3, ingredient4
pot2: ingredient3
pot3: empty
(add ingredient4 pot3)
result:
pot1: ingredient2, ingredient3, ingredient4
pot2: ingredient3
pot3: ingredient4
(putdown ingredient4)
result: hand empty
ingredient has been picked: ingredient2, ingredient3, ingredient4
(pick ingredient5)
result: ingredient5 in hand
(add ingredient5 pot2)
result:
pot1: ingredient2, ingredient3, ingredient4
pot2: ingredient3, ingredient5
pot3: ingredient4
(add ingredient5 pot3)
result:
pot1: ingredient2, ingredient3, ingredient4
pot2: ingredient3, ingredient5
pot3: ingredient4, ingredient5
(putdown ingredient5)
result: hand empty
ingredient has been picked: ingredient2, ingredient3, ingredient4, ingredient5
(pick ingredient6)
result: ingredient6 in hand
(add ingredient6 pot1)
result:
pot1: ingredient2, ingredient3, ingredient4, ingredient6
pot2: ingredient3, ingredient5
pot3: ingredient4, ingredient5
(putdown ingredient6)
result: hand empty
ingredient has been picked: ingredient2, ingredient3, ingredient4, ingredient5, ingredient6
No error found in action sequence.
Goal state check:
final state:
pot1: ingredient2, ingredient3, ingredient4, ingredient6
pot2: ingredient3, ingredient5
pot3: ingredient4, ingredient5
goal state:
(contain pot1 ingredient2)
(contain pot1 ingredient3)
(contain pot1 ingredient4)
(contain pot1 ingredient6)
(contain pot2 ingredient3)
(contain pot2 ingredient5)
(contain pot3 ingredient4)
(contain pot3 ingredient5)
The final state is the same as the goal state. Goal is accomplished.
Final answer: Yes, the action sequence is correct, it can accomplish the task.
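The four Cooking examples all walk the same bookkeeping: track the hand, each pot's contents, and the set of already-picked ingredients, flag the first violated precondition, and finally compare the reached state with the goal. A compact illustrative sketch of that procedure (names and encodings are assumptions, not the paper's implementation):

```python
def validate(plan, goal):
    """goal: list of (pot, ingredient) pairs from the (contain ...) facts."""
    pots, hand, picked = {}, None, set()
    for act in plan:
        name, *args = act.strip("()").split()
        if name == "pick":
            ing, = args
            if hand is not None or ing in picked:
                return f"No: {act} is wrong, {ing} was already picked or the hand is full"
            hand = ing; picked.add(ing)
        elif name == "add":
            ing, pot = args
            if hand != ing:
                return f"No: {act} is wrong, {hand} is in hand instead of {ing}"
            pots.setdefault(pot, set()).add(ing)
        elif name == "putdown":
            ing, = args
            if hand != ing:
                return f"No: {act} is wrong, {ing} is not in hand"
            hand = None
    if all(ing in pots.get(pot, set()) for pot, ing in goal):
        return "Yes: the action sequence accomplishes the goal"
    return "No: the final state differs from the goal state"

# Example 1's error: ingredient2 is in hand when (add ingredient1 pot2) is attempted.
print(validate(["(pick ingredient1)", "(add ingredient1 pot1)", "(putdown ingredient1)",
                "(pick ingredient2)", "(add ingredient1 pot2)"],
               [("pot1", "ingredient1")]))
```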
Ballmoving: For the Ballmoving planning domain, the few-shot examples are given as follows.
Example 1
Question:
Initial state:
(robot-at robot1 room1)
(at ball1 room4)
(at ball2 room3)
(at ball3 room4)
Goal state:
(at ball1 room4)
(at ball2 room4)
(at ball3 room3)
Examined action sequence:
(move robot1 room1 room3)
(pick ball2 room3)
(move robot1 room3 room4)
(drop ball2 room3)
(pick ball1 room4)
(move robot1 room4 room3)
(drop ball1 room3)
(pick ball3 room4)
(move robot1 room3 room4)
(drop ball3 room3)
Answer:
Analysis:
Initial state: (robot-at robot1 room1), (at ball1 room4), (at ball2 room3), (at ball3 room4)
2308.13724 | 139 | and ball2 in the same room
current state: (robot-at robot1 room3), (at ball2 room3)
robot1 is in the same room as ball2
two preconditions are correct, action is correct
change state to: (robot-at robot1 room3), (at ball1 room4), (at ball2 hand), (at ball3 room4)
(move robot1 room3 room4)
precondition: (robot-at robot1 room3)
current state: (robot-at robot1 room3)
current state is the same as the precondition, action is correct
change state to: (robot-at robot1 room4), (at ball1 room4), (at ball2 hand), (at ball3 room4)
(drop ball2 room3)
precondition: (robot-at robot1 room3)
current state: (robot-at robot1 room4)
the current state is not the same as the precondition, action is wrong
analysis stops due to error
Final answer: No, the action sequence is wrong, it cannot accomplish the goal. | 2308.13724#139 |
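Example 1's step-by-step analysis amounts to simulating the plan and stopping at the first violated precondition. The following minimal Python sketch reproduces that procedure for the ball-moving domain; the tuple encoding of actions and all helper names are hypothetical, not taken from the paper:

```python
# Simulate move/pick/drop actions and report the first precondition failure.

def validate(robot_room, balls, plan):
    held = None
    for act in plan:
        kind = act[0]
        if kind == "move":                    # (move robot src dst)
            _, _, src, dst = act
            if robot_room != src:
                return False, f"{act}: robot is in {robot_room}, not {src}"
            robot_room = dst
        elif kind == "pick":                  # (pick ball room)
            _, ball, room = act
            if balls[ball] != room or robot_room != room:
                return False, f"{act}: ball and robot are not both in {room}"
            balls[ball], held = "hand", ball
        elif kind == "drop":                  # (drop ball room)
            _, ball, room = act
            if robot_room != room or held != ball:
                return False, f"{act}: robot not in {room} or not holding {ball}"
            balls[ball], held = room, None
    return True, "all preconditions satisfied"

ok, msg = validate("room1",
                   {"ball1": "room4", "ball2": "room3", "ball3": "room4"},
                   [("move", "robot1", "room1", "room3"),
                    ("pick", "ball2", "room3"),
                    ("move", "robot1", "room3", "room4"),
                    ("drop", "ball2", "room3")])
print(ok, msg)   # fails at (drop ball2 room3), exactly as in Example 1
```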
2308.13724 | 141 | (at ball3 room2)
Goal state: (at ball1 room2) (at ball2 room3) (at ball3 room1)
Examined action sequence: (pick ball2 room4) (move robot1 room2 room3) (drop ball2 room3) (move robot1 room3 room2) (pick ball1 room3) (move robot1 room2 room1) (drop ball1 room2) (move robot1 room1 room2) (pick ball3 room2) (move robot1 room2 room1) (drop ball3 room1)
Answer:
Analysis:
Initial state: (robot-at robot1 room1), (at ball1 room3), (at ball2 room4), (at ball3 room2)
(pick ball2 room4)
precondition 1: (at ball2 room4)
current state: (at ball2 room4)
current state is the same as the precondition 1
precondition 2: robot1 and ball2 in the same room
current state: (robot-at robot1 room1), (at ball2 room4)
robot1 is not in the same room as ball2
preconditions are not fulfilled, action is wrong
analysis stops due to error
Final answer: No, the action sequence is wrong, it cannot accomplish the goal. | 2308.13724#141 |
2308.13724 | 143 | Analysis:
Initial state: (robot-at robot1 room2), (at ball1 room2), (at ball2 room1), (at ball3 room3)
(move robot1 room2 room1)
precondition: (robot-at robot1 room2)
current state: (robot-at robot1 room2)
current state is the same as the precondition, action is correct
change state to: (robot-at robot1 room1), (at ball1 room2), (at ball2 room1), (at ball3 room3)
(pick ball1 room1)
precondition 1: (at ball1 room1)
current state: (at ball1 room2)
current state is not the same as the precondition 1
preconditions are not fulfilled, action is wrong
analysis stops due to error
Final answer: No, the action sequence is wrong, it cannot accomplish the goal.
# Example 4 | 2308.13724#143 |
2308.13724 | 144 | Question: Robot and ball initial state: (robot-at robot1 room4) (at ball1 room2) (at ball2 room4) (at ball3 room3)
Goal state: (at ball1 room3) (at ball2 room4) (at ball3 room4)
Examined action sequence: (move robot1 room4 room2) (pick ball1 room2) (move robot1 room2 room3) (drop ball1 room3)
Answer:
Analysis:
Initial state: (robot-at robot1 room4), (at ball1 room2), (at ball2 room4), (at ball3 room3)
(move robot1 room4 room2)
precondition: (robot-at robot1 room4)
current state: (robot-at robot1 room4)
current state is the same as the precondition, action is correct
change state to: (robot-at robot1 room2), (at ball1 room2), (at ball2 room4), (at ball3 room3)
(pick ball1 room2)
precondition 1: (at ball1 room2)
current state: (at ball1 room2)
current state is the same as the precondition 1
precondition 2: robot1 and ball1 in the same room
current state: (robot-at robot1 room2), (at ball1 room2)
robot1 is in the same room as ball1
two preconditions are correct, action is | 2308.13724#144 |
2308.13724 | 146 | precondition: (robot-at robot1 room2)
current state: (robot-at robot1 room2)
current state is the same as the precondition, action is correct
change state to: (robot-at robot1 room3), (at ball1 hand), (at ball2 room4), (at ball3 room3)
(drop ball1 room3)
precondition: (robot-at robot1 room3)
current state: (robot-at robot1 room3)
current state is the same as the precondition, action is correct
change state to: (robot-at robot1 room3), (at ball1 room3), (at ball2 room4), (at ball3 room3)
No error found in action sequence.
Goal state check:
final state: (at ball1 room3), (at ball2 room4), (at ball3 room3)
goal state: (at ball1 room3), (at ball2 room4), (at ball3 room4)
The final state is not the same as the goal state. The goal is not accomplished.
Final answer: No, the action sequence is wrong, it cannot accomplish the goal.
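Example 4 shows the second failure mode the validator looks for: every action is applicable, yet the resulting state still differs from the goal. A short sketch of that final comparison (the dict encoding is assumed for illustration only):

```python
# Compare the simulated final state with the goal state, fact by fact.
final_state = {"ball1": "room3", "ball2": "room4", "ball3": "room3"}
goal_state  = {"ball1": "room3", "ball2": "room4", "ball3": "room4"}

mismatches = {b: (final_state[b], goal_state[b])
              for b in goal_state if final_state[b] != goal_state[b]}
print(mismatches or "goal accomplished")   # {'ball3': ('room3', 'room4')}
```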
# Example 5 | 2308.13724#146 |
2308.13724 | 147 | Question: Robot and ball initial state: (robot-at robot1 room1) (at ball1 room4) (at ball2 room1) (at ball3 room1)
Goal state: (at ball1 room2) (at ball2 room1) (at ball3 room3)
Examined action sequence: (pick ball3 room1) (move robot1 room1 room3) (drop ball3 room3) (move robot1 room3 room4) (pick ball1 room4) (move robot1 room4 room2) (drop ball1 room2)
Answer:
Analysis:
Initial state: (robot-at robot1 room1), (at ball1 room4), (at ball2 room1), (at ball3 room1)
(pick ball3 room1)
precondition 1: (at ball3 room1)
current state: (at ball3 room1)
current state is the same as the precondition 1
precondition 2: robot1 and ball3 in the same room
current state: (robot-at robot1 room1), (at ball3 room1)
robot1 is in the same room as ball3
two preconditions are correct, action is correct
change state to: (robot-at robot1 room1), (at ball1 room4), (at ball2 room1), (at ball3 hand)
(move robot1 room1 room3)
precondition: (robot-at robot1 room1)
current state: | 2308.13724#147 |
2308.13724 | 150 | current state is the same as the precondition, action is correct
change state to: (robot-at robot1 room3), (at ball1 room4), (at ball2 room1), (at ball3 hand)
(drop ball3 room3)
precondition: (robot-at robot1 room3)
current state: (robot-at robot1 room3)
current state is the same as the precondition, action is correct
change state to: (robot-at robot1 room3), (at ball1 room4), (at ball2 room1), (at ball3 room3)
(move robot1 room3 room4)
precondition: (robot-at robot1 room3)
current state: (robot-at robot1 room3)
current state is the same as the precondition, action is correct
change state to: (robot-at robot1 room4), (at ball1 room4), (at ball2 room1), (at ball3 room3)
(pick ball1 room4)
precondition 1: (at ball1 room4)
current state: (at ball1 room4)
current state is the same as the precondition 1
precondition 2: robot1 and ball1 in the same room
current state: (robot-at robot1 room4), (at ball1 room4)
robot1 is in the same room as ball1
two preconditions are correct, action is correct
change state to: (robot-at robot1 room4), (at ball1 hand), (at ball2 room1), (at ball3 room3)
(move robot1 room4 room2)
precondition: (robot-at robot1 room4)
current state: (robot-at robot1 room4)
current state is the same as the precondition, action is | 2308.13724#150 |
2308.13724 | 151 | precondition: (robot-at robot1 room4)
current state: (robot-at robot1 room4)
current state is the same as the precondition, action is correct
change state to: (robot-at robot1 room2), (at ball1 hand), (at ball2 room1), (at ball3 room3)
(drop ball1 room2)
precondition: (robot-at robot1 room2)
current state: (robot-at robot1 room2)
current state is the same as the precondition, action is correct
change state to: (robot-at robot1 room2), (at ball1 room2), (at ball2 room1), (at ball3 room3)
No error found in action sequence.
Goal state check:
final state: (at ball1 room2), (at ball2 room1), (at ball3 room3)
goal state: (at ball1 room2), (at ball2 room1), (at ball3 room3)
The final state is the same as the goal state. The goal is accomplished.
Final answer: Yes, the action sequence is correct, it can accomplish the task. | 2308.13724#151 |
2308.13724 | 153 | (on b2 b3) (on-table b3)
Examined action sequence: (unstack b3 b2) (putdown b3) (unstack b2 b1) (putdown b2) (unstack b1 b2) (putdown b1)
Answer:
Analysis:
Initial: b1 on the table, b2 on b1, b3 on b2
(unstack b3 b2)
result: b1 on the table, b2 on b1, b3 in the hand
(putdown b3)
result: b1 on the table, b2 on b1, b3 on the table
(unstack b2 b1)
result: b1 on the table, b2 in hand, b3 on the table
(putdown b2)
result: b1 on the table, b2 on the table, b3 on the table
(unstack b1 b2)
result: the action is wrong since b1 is not on top of b2
analysis stops due to error
Final answer: No, the action sequence is wrong, it cannot accomplish the goal.
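The blocksworld examples follow the same recipe with pickup/putdown/stack/unstack. A compact Python simulator sketch is shown below; the state encoding is our own hypothetical choice, not the paper's implementation, and it reproduces the failure found in Example 1:

```python
# Blocksworld simulator: a block is "clear" iff nothing rests on it
# and it is not currently held.

def clear(state, block):
    return state["hand"] != block and block not in state["on"].values()

def step(state, act):
    kind, *args = act
    on, b = state["on"], args[0]
    if kind in ("pickup", "unstack"):        # (pickup b) / (unstack b under)
        expect = "table" if kind == "pickup" else args[1]
        assert state["hand"] is None and clear(state, b) and on.get(b) == expect, \
            f"precondition of {act} violated"
        del on[b]
        state["hand"] = b
    else:                                    # (putdown b) / (stack b under)
        target = "table" if kind == "putdown" else args[1]
        assert state["hand"] == b and (kind == "putdown" or clear(state, target)), \
            f"precondition of {act} violated"
        on[b] = target
        state["hand"] = None

state = {"hand": None, "on": {"b1": "table", "b2": "b1", "b3": "b2"}}
plan = [("unstack", "b3", "b2"), ("putdown", "b3"),
        ("unstack", "b2", "b1"), ("putdown", "b2"),
        ("unstack", "b1", "b2"), ("putdown", "b1")]
try:
    for act in plan:
        step(state, act)
    print("no error found in action sequence")
except AssertionError as err:
    print(err)   # fails at (unstack b1 b2): b1 is on the table, not on b2
```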
# Example 2 | 2308.13724#153 |
2308.13724 | 154 | Question: Block initial state: (on b1 b3) (on-table b2) (on-table b3)
Goal state: (on b3 b2) (on b2 b1) (on-table b1)
Examined action sequence: (unstack b1 b3) (putdown b1) (pickup b2) (stack b2 b1) (pickup b3) (stack b3 b2)
Answer:
Analysis:
Initial: b1 on b3, b2 on the table, b3 on the table
(unstack b1 b3)
result: b1 in hand, b2 on the table, b3 on the table
(putdown b1)
result: b1 on the table, b2 on the table, b3 on the table
(pickup b2)
result: b1 on the table, b2 in hand, b3 on the table
(stack b2 b1)
result: b1 on the table, b2 on b1, b3 on the table
(pickup b3)
result: b1 on the table, b2 on b1, b3 in hand
(stack b3 b2)
result: b1 on the table, b2 on b1, b3 on b2
No error found in action sequence.
Goal state check: the final state is the same as the goal state
Final answer: Yes, the action | 2308.13724#154 |
2308.13724 | 157 | Answer:
Analysis:
Initial: b1 on b3, b2 on the table, b3 on b2
(unstack b1 b3)
result: b1 in hand, b2 on the table, b3 on b2
(putdown b1)
result: b1 on the table, b2 on the table, b3 on b2
(unstack b3 b2)
result: b1 on the table, b2 on the table, b3 in hand
(putdown b3)
result: b1 on the table, b2 on the table, b3 on the table
(pickup b2)
result: b1 on the table, b2 in hand, b3 on the table
(stack b2 b3)
result: b1 on the table, b2 on b3, b3 on the table
No error found in action sequence.
Goal state check: the final state is not the same as the goal state
Final answer: No, the action sequence is wrong, it cannot accomplish the goal. | 2308.13724#157 |
2308.13724 | 158 | # Example 4
Question: Block initial state: (on b1 b3) (on-table b2) (on-table b3)
Goal state: (on b1 b3) (on b3 b2) (on-table b2)
Examined action sequence: (pickup b2) (stack b2 b1) (pickup b3) (stack b3 b2) (pickup b1) (stack b1 b3)
Answer:
Analysis:
Initial: b1 on b3, b2 on the table, b3 on the table
(pickup b2)
result: b1 on b3, b2 in hand, b3 on the table
(stack b2 b1)
result: b1 on b3, b2 on b1, b3 on the table
(pickup b3)
result: the action is wrong since b3 is not clear to move
analysis stops due to error
Final answer: No, the action sequence is wrong, it cannot accomplish the goal.
# Example 5 | 2308.13724#158 |
2308.13724 | 159 | # Example 5
Question: Block initial state: (on-table b1) (on-table b2) (on-table b3)
Goal state: (on b2 b3) (on b3 b1) (on-table b1)
Examined action sequence: (pickup b3) (stack b3 b1) (pickup b2) (stack b2 b3)
Answer:
Analysis:
Initial: b1 on the table, b2 on the table, b3 on the table
(pickup b3)
result: b1 on the table, b2 on the table, b3 in hand
(stack b3 b1)
result: b1 on the table, b2 on the table, b3 on b1
(pickup b2)
result: b1 on the table, b2 in hand, b3 on b1
(stack b2 b3)
result: b1 on the table, b2 on b3, b3 on b1
No error found in action sequence.
Goal state check: the final state is the same as the goal state
Final answer: Yes, the action sequence is correct, it can accomplish the task. | 2308.13724#159 |
2308.13724 | 160 | # Example 6
Question: Block initial state: (on b1 b2) (on b2 b3) (on-table b3)
Goal state: (on b1 b3) (on b3 b2) (on-table b2)
Examined action sequence: (unstack b1 b2) (putdown b1) (pickup b1) (stack b1 b3) (unstack b3 b2) (putdown b3) (pickup b3) (stack b3 b2) (unstack b2 b3) (putdown b2) (pickup b2) (stack b2 b3)
Answer:
Analysis:
Initial: b1 on b2, b2 on b3, b3 on the table
(unstack b1 b2)
result: b1 in hand, b2 on b3, b3 on the table
(putdown b1)
result: b1 on the table, b2 on b3, b3 on the table
(pickup b1)
result: b1 in hand, b2 on b3, b3 on the table
(stack b1 b3)
result: the action is wrong since b3 is not clear to move
analysis stops due to error
Final answer: No, the action sequence is wrong, it cannot accomplish the goal. | 2308.13724#160 |
2308.13149 | 0 | arXiv:2308.13149v1 [cs.CL] 25 Aug 2023

# SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research

Liangtai Sun, Yang Han, Zihan Zhao, Da Ma, Zhennan Shen, Baocai Chen, Lu Chen* and Kai Yu*
X-LANCE Lab, Department of Computer Science and Engineering, Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, China
{slt19990817, csyanghan, zhao mengxin, mada123}@sjtu.edu.cn
{ieee-szn, 15368493547, chenlusz, kai.yu}@sjtu.edu.cn
# Abstract | 2308.13149#0 |
2308.13149 | 1 | # Abstract
Recently, there has been growing interest in using Large Language Models (LLMs) for scientific research. Numerous benchmarks have been proposed to evaluate the ability of LLMs for scientific research. However, current benchmarks are mostly based on pre-collected objective questions. This design suffers from the data leakage problem and lacks the evaluation of subjective Q/A ability. In this paper, we propose SciEval, a comprehensive and multi-disciplinary evaluation benchmark to address these issues. Based on Bloom's taxonomy, SciEval covers four dimensions to systematically evaluate scientific research ability. In particular, we design a "dynamic" subset based on scientific principles to prevent evaluation from potential data leakage. Both objective and subjective questions are included in SciEval. These characteristics make SciEval a more effective benchmark for scientific research ability evaluation of LLMs. Comprehensive experiments on most advanced LLMs show that, although GPT-4 achieves SOTA performance compared to other LLMs, there is still substantial room for improvement, especially for dynamic questions. The data and codes are now publicly available¹. | 2308.13149#1 |
2308.13149 | 2 | Introduction
Large Language Models (LLMs), such as ChatGPT (Schulman et al. 2022), have attracted widespread attention in general scenarios, including information search, code generation, and more. In the field of science, LLMs have also shown preliminary potential in improving scientific research efficiency and transforming scientific research paradigms (Blanco-Gonzalez et al. 2023; WANG and MIAO 2023). Meanwhile, several scientific LLMs have been proposed by researchers (Taylor et al. 2022; Luo et al. 2022; Frey et al. 2022). In the general field, there are already numerous evaluation benchmarks to evaluate the language understanding, language generation and reasoning capabilities of LLMs, such as MMLU (Hendrycks et al. 2020), AGIEval (Zhong et al. 2023), and C-EVAL (Huang et al. 2023), as shown in Table 1. Although these benchmarks cover data of the science domain, the data sources are usually confined to educational materials, which cannot adequately assess the research ability of LLMs and do not align with real-life | 2308.13149#2 |
2308.13149 | 3 | scientific research scenarios. In addition, some benchmarks have been proposed to evaluate the scientific capability of LLMs, such as MultiMedQA (Singhal et al. 2023), ChemLLMBench (Guo et al. 2023), and MATH (Hendrycks et al. 2021), but these benchmarks are restricted to a specific scientific discipline, leaving the field without a more general scientific evaluation benchmark.² In addition, these benchmarks (1) lack evaluation systems for scientific capabilities, (2) are all based on objective questions, which are insufficient to assess scientific abilities, and (3) face the risk of data leakage. In response to this gap, we present SciEval, an English benchmark designed to evaluate advanced abilities of LLMs in the scientific domain. SciEval consists of a total of about 18000 challenging scientific questions, spanning three important basic science fields: chemistry, physics and biology, each of which is further divided into multiple sub-topics. SciEval mainly has the following three characteristics: | 2308.13149#3 |
2308.13149 | 4 | • Multi-level and comprehensive evaluation of the ability of LLMs in the scientific field. Scientific ability of LLMs needs to be evaluated from multiple aspects. Leveraging the cognitive domains of Bloom's taxonomy (Krathwohl 2002; Forehand 2010), which covers six levels, SciEval evaluates the scientific capabilities of large language models across four dimensions: basic knowledge, knowledge application, scientific calculation, and research ability, where each capability aligns with one or more cognitive levels.

• Combination of objective and subjective questions. SciEval is mainly based on objective questions, which allow for quick and standard model evaluations, involving multiple-choice, fill-in-the-blank, and judgment questions. These questions can help us understand whether the model can correctly understand and memorize scientific knowledge. However, objective questions are insufficient to assess scientific capability holistically. To better assess scientific reasoning and application ability, SciEval introduces a small number of subjective questions, involving a total of twelve basic science experiments, which is named Experimental Data.

• Dynamic data generation based on basic scientific

*The corresponding authors are Lu Chen and Kai Yu. ¹https://github.com/OpenDFM/BAI-SciEval | 2308.13149#4 |
² Due to the page limitation, we only compare some widely used benchmarks; for more information, we refer to (Chang et al. 2023).
| Name | Category | Ability | Source | Data Type | Dynamic | #Data |
| --- | --- | --- | --- | --- | --- | --- |
| MMLU | humanities, social science, STEM, other | BK, KA, SC | exam, book, course | objective | ✗ | 14079 |
| AGIEval | social science, STEM | BK, KA, SC | exam | objective | ✗ | 8062 |
| C-EVAL | humanities, social science, STEM, other | BK, KA, SC | exam | objective | ✗ | 12342 |
| MultiMedQA | medical | BK, KA, RA | exam, research | objective | ✗ | 13115 |
| ChemLLMBench | chemistry | BK, KA | knowledge base | objective | ✗ | 800 |
| MATH | mathematics | SC | exam | objective | ✗ | 5000 |
| SciEval | science | BK, KA, SC, RA | community QA, knowledge base | objective + subjective | ✓ | 15901 |

Table 1: Dataset comparison of SciEval and some other datasets covering the science domain. "BK" stands for Basic Knowledge, "KA" for Knowledge Application, "SC" for Scientific Calculation, and "RA" for Research Ability.
The huge amount of training data used for pre-training LLMs creates a risk of data leakage in evaluation. To solve this problem, one of the main features of SciEval is the use of Dynamic Data, which can prevent potential data leakage and ensure the fairness and credibility of the evaluation results. The Dynamic Data will be updated regularly, and we will maintain a stable version to allow fair comparisons of model performance. The objective questions other than Dynamic Data are referred to as Static Data.

We conduct experiments to evaluate LLMs on SciEval in answer-only, chain-of-thought, and few-shot settings. Results indicate that GPT-4 is the strongest model, with only GPT-4, GPT-3.5-turbo, and Claude-v1.3 surpassing 60% average accuracy on the Static Data, signifying considerable room for improvement. From the results on Dynamic Data, we find that these LLMs have little knowledge about molecules, and most models could only retain near-random accuracy on the physics subset.
As for Experimental Data, some top-tier models perform satisfactorily on experimental principles and design, while almost all models struggle to analyze experimental results. Based on the analysis of the experimental results, we claim that training on a large-scale scientific corpus is helpful for the scientific capability of LLMs, and that most LLMs perform poorly on calculation problems, especially in the physics domain. We hope that SciEval can provide an excellent benchmark for assessing the scientific capability of LLMs and promote their wide application in science.
Big-Bench (Srivastava et al. 2022) introduces 204 challenging tasks covering various domains, aiming to evaluate tasks beyond the capabilities of existing language models. AGIEval (Zhong et al. 2023) serves as an evaluation framework for assessing the performance of foundation models on human-centric standardized exams. C-Eval (Huang et al. 2023) assesses the advanced knowledge and reasoning capabilities of foundation models in Chinese.
Specific Benchmarks for LLMs
Apart from general tasks, specific benchmarks are designed for certain downstream tasks. MultiMedQA (Singhal et al. 2023) focuses on medical question answering, evaluating LLMs in terms of clinical knowledge and QA abilities. MATH (Hendrycks et al. 2021) assesses the reasoning and problem-solving proficiencies of LLMs in mathematics. ScienceQA (Lu et al. 2022) proposes a multi-modal benchmark with a diverse set of science topics and annotations of their answers with corresponding lectures and explanations, collected from elementary and high school science curricula. SCIBENCH (Wang et al. 2023) examines the reasoning capabilities required for complex scientific problem solving and proposes two datasets of college-level scientific problems. Compared to these benchmarks, SciEval (1) evaluates scientific capabilities from multiple aspects, giving it broader coverage, (2) uses community Q&A data, which is more flexible and diverse, and (3) designs a subset of dynamic data, making an effort to mitigate data leakage.
# Related Work
General Benchmarks for LLMs
To evaluate the performance of LLMs across different tasks, several benchmarks have been proposed. MMLU (Hendrycks et al. 2020) aims to develop a comprehensive test for evaluating text models in multi-task contexts. HELM (Liang et al. 2022) offers a comprehensive assessment, evaluating LLMs across various aspects, such as language understanding and common-sense reasoning.
# The SciEval dataset
In this section, we first introduce the evaluation system of SciEval (§), then describe the data collection process (§), and finally present the data statistics (§).
Scientific Research Evaluation System
Scientific research requires different dimensions of knowledge, such as understanding and calculation; hence, the evaluation of scientific ability should be conducted at multiple levels. Bloom's taxonomy (Krathwohl 2002; Forehand 2010)
Figure 1: The illustration of the evaluation system. SciEval covers three disciplines with numerous sub-topics and investigates four abilities, corresponding to six cognitive levels.
is a set of three hierarchical methods used for the classification of educational learning objectives, covering the cognitive, affective, and psychomotor domains. The cognitive domain is frequently used to structure curriculum learning objectives, assessments, and activities, and is broken into six levels: Remember, Understand, Apply, Analyze, Evaluate, and Create, as shown in Figure 1, which are suitable for the evaluation of scientific capability.
Based on the cognitive domain of Bloom's taxonomy, the evaluation system of SciEval consists of four knowledge dimensions: Basic Knowledge, Knowledge Application, Scientific Calculation, and Research Ability. As shown in Figure 1, Basic Knowledge primarily assesses the fundamental scientific knowledge of LLMs. Knowledge Application focuses on applying basic knowledge to solve scientific problems, requiring models to have comprehension, application, and analysis abilities. Scientific Calculation is a specialized application of knowledge that further examines the complex reasoning capabilities of LLMs on top of their general knowledge-application abilities. Research Ability assesses evaluation capabilities at a higher cognitive level, requiring models to participate in various aspects of scientific research, including problem formulation, experimental design, data analysis, and summarization.
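To make this correspondence concrete, the alignment can be written down as a small lookup table. The exact level assignment below is an illustrative assumption based on the description above, not an official artifact released with SciEval:

```python
# Illustrative alignment (an assumption based on the text above, not an
# official SciEval artifact) between the four evaluation dimensions and
# the cognitive levels of Bloom's taxonomy they exercise.
BLOOM_ALIGNMENT = {
    "Basic Knowledge":        ["Remember", "Understand"],
    "Knowledge Application":  ["Understand", "Apply", "Analyze"],
    "Scientific Calculation": ["Apply", "Analyze"],
    "Research Ability":       ["Analyze", "Evaluate", "Create"],
}

for dimension, levels in BLOOM_ALIGNMENT.items():
    print(f"{dimension}: {', '.join(levels)}")
```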
Based on this evaluation system, we design three different types of data: Static Data, Dynamic Data, and Experimental Data. The Static Data covers all four knowledge dimensions and will remain constant throughout, while the Dynamic Data examines the aspects of Knowledge Application and Scientific Calculation and will be regularly updated to prevent any data leakage. The Experimental Data comprises a set of questions for twelve scientific experiments and can be used to evaluate Research Ability.
Socratic Q&A³ is a community-driven website that covers a wide range of subjects such as science and literature. Specifically, we collect data from the fields of biology, chemistry, and physics. To ensure quality, we employ rule-based methods to preprocess the crawled data. While gathering the questions, we found that not all of them are suitable as titles. To address this, we utilize GPT-4 with the "Task 1" prompt, as depicted in Figure 2, to process these questions. Since most of the collected questions are open-ended and challenging to evaluate, we employ GPT-4 to simplify the ground-truth answers and generate three wrong answers, formulating them as multiple-choice questions. Additionally, we classify the questions into their respective knowledge domains. Throughout this process, we manually check the content generated by GPT-4 to ensure data quality.
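As a rough illustration of this conversion step, the sketch below simplifies a ground-truth answer and requests three distractors from an LLM. `call_llm` is a hypothetical stand-in for whatever chat-completion client is used, and the prompt paraphrases the instruction shown in Figure 2; this is not the paper's released pipeline code.

```python
import json

def call_llm(prompt):
    """Hypothetical wrapper around a chat-completion API such as GPT-4."""
    raise NotImplementedError  # plug in an actual client here

def to_multiple_choice(question, answer):
    """Turn an open-ended Q/A pair into a 4-choice question via an LLM."""
    # Paraphrase of the Figure 2 instruction: simplify the ground-truth
    # answer, then generate 3 fake answers of similar length that are as
    # confusing as possible.
    prompt = (
        "Given a question and a ground-truth answer, please simplify the "
        "answer as concisely as possible and generate 3 fake answers of "
        "about the same length that are as confusing as possible. Reply "
        'in JSON with keys "answer" and "fakes".\n'
        f"Question: {question}\nGround-truth answer: {answer}"
    )
    reply = json.loads(call_llm(prompt))
    return {"question": question,
            "correct": reply["answer"],
            "distractors": reply["fakes"]}
```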
To make the dataset more diverse and comprehensive, we further integrate data from some publicly available datasets:
• MedQA (Jin et al. 2021) is a free-form multiple-choice OpenQA dataset for solving medical problems, collected from professional medical board exams. We use the test set of USMLE, which is the English subset of MedQA.
• PubMedQA (Jin et al. 2019) is a biomedical question-answering dataset collected from PubMed abstracts. The task of PubMedQA is to answer research questions with yes/no/maybe using the corresponding abstracts, which makes it fit for evaluating literature-comprehension ability. We incorporate 1000 expert-annotated data points from it and frame them as judgment questions.
• Reagent Selection (Guo et al. 2023) involves the identification and proposal of the most fitting reagents for a specific chemical reaction or process, and is a subset of ChemLLMBench. We randomly select 40% of the data and formulate them as multiple-choice questions.
# Data Collection
Dynamic Data The current training of LLMs often uses a large amount of data, resulting in a risk of data leakage for evaluation. To solve this problem, we design a "dynamic" subset.
Static Data The collection steps of Static Data are shown in Figure 2. The primary source of Static Data is Socratic Q&A³.
³ https://socratic.org
[Figure 2 depicts the pipeline from Socratic Q&A (together with MedQA, PubMedQA, and Reagent Selection) through GPT-4 to Static Data, using two instructions.
Task 1: "Given a question and its ground-truth answer, judge whether it is suitable to be used as the title of a multiple-choice question. Your answer should be "YES" or "NO". And please directly give the results without any explanation."
Task 2: "Given a question and a ground-truth answer, please simplify the answer as concise as possible. And I want to generate a 4-choice question using it, please generate 3 fake answers for me. Note that the length of the simplified answer and these 3 fake answers should be about the same and these 3 fake answers should be as confusing as possible. Furthermore, please help me to classify the domain of the question. There are three domains in total: Base Knowledge, Scientific Calculation, Knowledge Application."]
Figure 2: Data collection steps of Static Data.
The "dynamic" subset can generate data dynamically according to scientific principles. It covers two disciplines, chemistry and physics. For the chemistry data, we use the basic information and properties of molecules crawled from PubChem⁴ to create data. For the physics data, we manually write Python scripts according to the physics formulas. When providing the evaluation dataset, we give users a regenerated version and update it regularly; at the same time, we maintain a stable version of the dynamic data to enable fair comparison.
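As a minimal sketch of what one such generation script might look like (the formula template and value ranges here are our own assumptions, not the paper's actual scripts), a kinematics question whose numbers are re-sampled on every run could be produced as follows:

```python
import random

def free_fall_question(seed=None):
    """Generate a fresh free-fall question from d = 0.5 * g * t^2.

    A sketch of a dynamic physics generator; the template and value
    ranges are illustrative assumptions, not SciEval's actual scripts.
    """
    rng = random.Random(seed)
    g = 9.8                                # gravitational acceleration, m/s^2
    t = round(rng.uniform(1.0, 10.0), 1)   # fall time, re-sampled each call
    distance = 0.5 * g * t ** 2
    return {
        "question": (f"An object falls freely from rest for {t} s. "
                     f"How far does it fall? (g = 9.8 m/s^2)"),
        "answer": round(distance, 2),      # ground truth in meters
    }

print(free_fall_question())
```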
All of these questions are in English, and we show some data examples in Appendix D.
For Static Data, we further split the data into dev, valid, and test sets. For each data source, each knowledge domain, and each discipline, we randomly select 5 data points to form the dev set, which can be used for few-shot learning, and we split the remaining data with a ratio of 1:9 to construct the valid set and test set, respectively.
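A rough sketch of this split logic (the grouping keys and function name are assumptions for illustration):

```python
import random
from collections import defaultdict

def split_static_data(records, seed=0):
    """Split records into dev/valid/test per (source, domain, discipline).

    A sketch of the split described above: 5 dev examples per group for
    few-shot prompting, then a 1:9 valid/test split of the remainder.
    """
    rng = random.Random(seed)
    groups = defaultdict(list)
    for record in records:
        key = (record["source"], record["domain"], record["discipline"])
        groups[key].append(record)

    dev, valid, test = [], [], []
    for items in groups.values():
        rng.shuffle(items)
        dev.extend(items[:5])
        rest = items[5:]
        cut = len(rest) // 10      # 1:9 ratio -> ~10% valid, ~90% test
        valid.extend(rest[:cut])
        test.extend(rest[cut:])
    return dev, valid, test
```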
# Experiment
Experimental Data To better evaluate the scientific thinking and abilities of LLMs, SciEval introduces a subset of experimental data involving 12 different basic scientific experiments. These experiments are collected from basic science experiment courses at university, and each experiment comprehensively investigates the ability of LLMs in scientific research and experimentation from the perspectives of experimental principle, process, and the analysis and summarization of experimental results.
Given a question and four options, please select the right answer. Your answer should be "A", "B", "C" or "D". Please directly give the answer without any explanation.
How many atoms are in 3.5 moles of arsenic atoms?
A. 1.5 × 10^24 atoms  B. 3.0 × 10^24 atoms  C. 2.7 × 10^24 atoms  D. 2.1 × 10^24 atoms
Answer: D
| Ability | Bio | Chem | Phy |
| --- | --- | --- | --- |
| Basic Knowledge | 2147 | 456 | 2914 |
| Knowledge Application | 1379 | 36 | 3720 |
| Scientific Calculation | 301 | 1165 | 3401 |
| Research Ability | 1000 | 0 | 0 |
| Total | 4830 | 1657 | 10035 |

Table 2: Statistics of Static Data.
Figure 3: An example of the prompt we used for AO setting. The red text is the response from the model, while the black text is the inputted prompt.
Given a question and four options, please select the right answer. Your answer should be "A", "B", "C" or "D".
How many atoms are in 3.5 moles of arsenic atoms?
Data Statistics Summarized statistics of SciEval are shown in Table 2, where we count only Static Data. For Dynamic Data, the chemistry part examines the Knowledge Application ability and contains 2000 data points, while the physics part evaluates the Scientific Calculation ability and involves 890 data points.
⁴ https://pubchem.ncbi.nlm.nih.gov/
A. 1.5 × 10^24 atoms  B. 3.0 × 10^24 atoms  C. 2.7 × 10^24 atoms  D. 2.1 × 10^24 atoms
Answer: Let's think step by step: To find the number of atoms [...] Therefore, the answer is D
Figure 4: An example of the prompt we used for CoT setting. The red text is the response from the model, while the blue text and black text are the inputted prompt.
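Scoring these prompts requires pulling the final letter choice out of the model's reply. Below is a small sketch of such an extractor; the regex heuristics are our own assumptions, not the paper's official parsing code:

```python
import re

def extract_choice(reply):
    """Heuristically extract the final A-D choice from a model reply.

    Handles both AO replies ("Answer: D") and CoT replies ending with
    "Therefore, the answer is D". Illustrative heuristics only.
    """
    match = re.search(r"answer is\s*\(?([A-D])\)?", reply, re.IGNORECASE)
    if match:
        return match.group(1).upper()
    letters = re.findall(r"\b([A-D])\b", reply)
    return letters[-1] if letters else None

assert extract_choice("Answer: D") == "D"
assert extract_choice("Let's think step by step ... Therefore, the answer is D") == "D"
```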
| Model | Creator | #Parameters | Access | SD | DD | ED |
| --- | --- | --- | --- | --- | --- | --- |
| GPT-4 | OpenAI | undisclosed | API | ✓ | ✓ | ✓ |
| GPT-3.5-turbo | OpenAI | undisclosed | API | ✓ | ✓ | ✓ |
| Claude-v1.3 | Anthropic | undisclosed | API | ✓ | ✓ | ✓ |
| Claude-instant-v1.1 | Anthropic | undisclosed | API | ✓ | ✓ | ✓ |
| ERNIE Bot | Baidu | undisclosed | Web | | | ✓ |
| SparkDesk | iFLYTEK | undisclosed | Web | | | ✓ |
| Vicuna | LMSYS | 13B | Weights | ✓ | ✓ | |
| Galactica | Meta | 30B, 6.7B | Weights | ✓ | ✓ | |
| ChatGLM2 | Tsinghua | 6B | Weights | ✓ | ✓ | |
| ChatGLM | Tsinghua | 6B | Weights | ✓ | ✓ | |
| Alpaca | Stanford | 7B | Weights | ✓ | ✓ | |
| MOSS | Fudan | 16B | Weights | ✓ | ✓ | |
| LLaMa | Meta | 7B, 13B | Weights | ✓ | ✓ | |
Table 3: Models evaluated in this paper. The "Access" column shows whether we have full access to the model weights or can only access the model through an API or the web. SD stands for Static Data, DD for Dynamic Data, and ED for Experimental Data. A "✓" means we evaluate the corresponding model on that subset.
| Model | Static: Biology | Static: Chemistry | Static: Physics | Static: Avg. | Chemistry (DD): Acc. | Chemistry (DD): BLEU | Chemistry (DD): MSE | Physics (DD): Acc. | Exp: Score |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPT-4 | 84.49 | 69.38 | 65.22 | 73.93 | 11.05 | 23.78 | 891.09 | 25.84 | 93.31 |
| GPT-3.5-turbo | 76.42 | 64.30 | 52.30 | 66.97 | 7.65 | 18.86 | 2008.72 | 21.80 | 88.27 |
| Claude-v1.3 | 72.58 | 59.72 | 54.94 | 63.45 | 5.75 | 21.98 | 1489.87 | 26.14 | 85.73 |
| Claude-instant-v1.1 | 70.43 | 53.36 | 52.30 | 58.92 | 0.45 | 16.07 | 8258.46 | 21.46 | 87.50 |
| Galactica-30B | 66.48 | 50.16 | 44.65 | 54.96 | 0.9 | 4.14 | 485.99 | 22.47 | - |
| Vicuna-13B | 58.39 | 53.06 | 45.13 | 53.93 | 0.95 | 6.50 | 766.64 | 21.24 | - |
| Galactica-6.7B | 57.84 | 50.77 | 30.99 | 50.87 | 1.55 | 6.47 | 5519.82 | 20.79 | - |
| ChatGLM2-6B | 58.62 | 44.00 | 40.26 | 48.44 | 0.2 | 1.86 | 3449.44 | 24.83 | - |
| ChatGLM-6B | 52.54 | 45.36 | 40.80 | 47.23 | 0.75 | 2.44 | 10303.90 | 21.01 | - |
| Alpaca-7B | 56.66 | 42.43 | 37.01 | 46.54 | 0.2 | 2.92 | 428419.27 | 26.74 | - |
| MOSS-16B | 47.71 | 33.87 | 31.73 | 38.23 | 0.1 | 7.37 | 30505.17 | 24.27 | - |
| LLaMa-13B | 48.59 | 33.56 | 19.48 | 36.96 | 0.3 | 5.21 | 3707.01 | 7.08 | - |
| LLaMa-7B | 36.24 | 26.38 | 15.02 | 28.37 | 0.5 | 1.26 | 11305.65 | 14.38 | - |
| ERNIE Bot | - | - | - | - | - | - | - | - | 61.12 |
| SparkDesk | - | - | - | - | - | - | - | - | 33.69 |
Table 4: Model performances in the Answer-Only setting. The leaderboard is sorted by the average accuracy on Static Data.
Experiment Setup
Prompts We evaluate LLMs in both Answer-Only (AO) and Chain-of-Thought (CoT) (Kojima et al. 2022) settings. The prompts we use are shown in Figures 3 and 4, respectively. Furthermore, we also evaluate a 3-shot setting, where the three exemplars are selected from the dev set.
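A minimal sketch of how the AO and CoT prompts of Figures 3 and 4 could be assembled (the function and argument names are our own; the wording follows the figures, but this is not the paper's released prompting code):

```python
def build_prompt(question, options, mode="ao"):
    """Assemble an Answer-Only or Chain-of-Thought prompt.

    A sketch in the style of Figures 3 and 4; names and structure
    are illustrative assumptions.
    """
    header = ('Given a question and four options, please select the right '
              'answer. Your answer should be "A", "B", "C" or "D".')
    if mode == "ao":
        header += " Please directly give the answer without any explanation."
    body = question + "\n" + " ".join(
        f"{label}. {text}" for label, text in zip("ABCD", options))
    tail = "Answer:" if mode == "ao" else "Answer: Let's think step by step:"
    return f"{header}\n\n{body}\n\n{tail}"

print(build_prompt("How many atoms are in 3.5 moles of arsenic atoms?",
                   ["1.5 x 10^24 atoms", "3.0 x 10^24 atoms",
                    "2.7 x 10^24 atoms", "2.1 x 10^24 atoms"], mode="cot"))
```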
Models In order to comprehensively assess the scientific capabilities of Large Language Models (LLMs), we evaluate 15 high-performing LLMs that are widely accessible. These models are selected to represent a diverse range of organizations and vary in size. The details of these models are summarized in Table 3.
• GPT-3.5-turbo and GPT-4 (Schulman et al. 2022; OpenAI 2023) are the strongest GPT model variants from OpenAI and have undergone pretraining, instruction tuning, and reinforcement learning from human feedback (RLHF; Ouyang et al. 2022).
• Claude⁵, developed by Anthropic, is often considered comparable to GPT-3.5-turbo. We evaluate both Claude-v1.3 and Claude-instant-v1.1, a lighter version of Claude.
• ERNIE Bot⁶ is developed by Baidu and possesses deep semantic understanding and generation capabilities across modalities and languages. SparkDesk⁷ is proposed by iFLYTEK; it has cross-domain knowledge and language understanding capabilities and can understand and execute tasks based on natural dialogue.
• LLaMa (Touvron et al. 2023), developed by Meta, is probably the best open-weight foundation model so far.
⁵ https://www.anthropic.com/index/introducing-claude
⁶ https://yiyan.baidu.com/
⁷ https://xinghuo.xfyun.cn/
Figure 5: Accuracy in the Answer-Only, Chain-of-Thought, and 3-Shot settings for each LLM on Static Data.
• Vicuna (Zheng et al. 2023) and Alpaca (Taori et al. 2023) are both fine-tuned from LLaMa with supervised instruction fine-tuning. It is believed that the performance of Vicuna is better than that of Alpaca.
• Galactica (Taylor et al. 2022) is also developed by Meta and is trained on a large-scale scientific corpus. It was developed to study the use of language models for the automatic organization of science and can perform numerous scientific tasks, such as citation prediction, scientific QA, and molecular property prediction.
The physics questions in Dynamic Data are presented as multiple-choice questions, which can also be evaluated using accuracy. Conversely, the chemistry questions involve complex components, such as "What is the molecular weight of A?" and "What is the SMILES expression of B?". Hence, for questions with numerical answers we employ MSE⁹ as the evaluation metric, while for questions with string answers we utilize the BLEU score (Papineni et al. 2002). Additionally, we also calculate exact-match scores. As for Experimental Data, each experiment consists of multiple open-ended questions; as a result, we assess the model-generated responses manually.
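A rough sketch of these metrics is given below, using NLTK's sentence-level BLEU as a stand-in for the paper's exact BLEU configuration and applying the 1 × 10^10 MSE fallback from footnote 9 for replies that contain no number:

```python
import re
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

NO_NUMBER_MSE = 1e10  # footnote 9: replies without any number get this MSE

def squared_error(prediction, target):
    """Squared error on the first number in the reply, else the fallback.

    Per-item score; averaging over the dataset gives the reported MSE.
    """
    numbers = re.findall(r"-?\d+(?:\.\d+)?", prediction)
    if not numbers:
        return NO_NUMBER_MSE
    return (float(numbers[0]) - target) ** 2

def bleu(prediction, reference):
    """Sentence-level BLEU; the paper's exact configuration may differ."""
    smoothing = SmoothingFunction().method1
    return sentence_bleu([reference.split()], prediction.split(),
                         smoothing_function=smoothing)

print(squared_error("The molecular weight is 74.92 g/mol", 74.92))  # 0.0
```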
• ChatGLM and ChatGLM2, created by Tsinghua University, are based on the GLM architecture (Du et al. 2022) and further adapted on conversational data. MOSS (Sun et al. 2023), developed by Fudan University, is the first publicly available Chinese LLM, and it follows a training procedure similar to ChatGPT's.
We evaluate GPT-3.5-turbo, GPT-4, and Claude on all three subsets: Static Data, Dynamic Data, and Experimental Data. Since we can only access ERNIE Bot and SparkDesk through a web interface, we evaluate these two models only on the Experimental Data. For the remaining LLMs with billions or tens of billions of parameters, since the length of the Experimental Data exceeds the context limits of these models⁸, we evaluate them on Static Data and Dynamic Data, as shown in Table 3.
Evaluation Metrics In the case of Static Data, all questions are objective, making accuracy the appropriate evaluation metric. For Dynamic Data, the physics questions are
# Experiment Results | 2308.13149#25 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research |
2308.13149 | 26 | # Experiment Results
Answer-Only Setting Answer-only results of all the models on the test set are shown in Table 4, and detailed results of Static Data across different knowledge domains are provided in Appendix B. Analyzing the results of Static Data, GPT-4 demonstrates significantly superior performance compared to other models. Only GPT-4, GPT-3.5-turbo, and Claude-v1.3 achieve an average accuracy exceeding 60%, which highlights the challenge posed by SciEval.
For the results of Dynamic Data, GPT-4 performs the best in terms of average accuracy and BLEU score. However, for counting and calculation questions, Galactica-30B yields the best results, indicating its strong aptitude in the field of science. Conversely, models with billions or tens of billions of parameters perform poorly on the chemistry subset, suggesting their limited knowledge about molecules. Regarding the performance of models on the physics subset, since all ques-
8The maximum context length of ChatGLM2 is extended to 32k, while it has limited ability to understand long texts.
9If the predictions do not contain any number, we regard the MSE as 1 × 10^10. | 2308.13149#26 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research |
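Footnote 9's fallback rule reduces to extracting a numeric literal from the model's free-form response and substituting a large penalty when none is found. A minimal sketch, assuming a simple regex-based extractor:

```python
import re

NO_NUMBER_MSE = 1e10  # footnote 9: MSE is treated as 1 x 10^10 in this case

def first_number(text):
    # Extract the first integer/float/scientific-notation literal, if any.
    m = re.search(r"-?\d+(?:\.\d+)?(?:[eE][+-]?\d+)?", text)
    return float(m.group()) if m else None

def squared_error(response, gold):
    pred = first_number(response)
    if pred is None:  # the prediction contains no number at all
        return NO_NUMBER_MSE
    return (pred - gold) ** 2
```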
2308.13149 | 27 | Model            Chem AO  Chem CoT  Chem 3-Shot  Phys AO
GPT-4              11.05   11.65 ↑   12.42 ↑      25.84
GPT-3.5-turbo       7.65   10.20 ↑    8.85 ↑      21.80
Galactica-6.7B      1.55    1.75 ↑    3.05 ↑      20.79
Vicuna-13B          0.95    1.95 ↑    1.80 ↑      21.24
Galactica-30B       0.90    2.60 ↑    3.30 ↑      22.47
ChatGLM-6B          0.75    0.80 ↑    1.15 ↑      21.01
LLaMa-7B            0.50    0.10 ↓    1.55 ↑      18.65
LLaMa-13B           0.30    0.25 ∼    2.11 ↑       7.08
ChatGLM2-6B         0.20    2.65 ↑    1.60 ↑      24.83
Alpaca-7B           0.20    0.65 ↑    2.10 ↑      26.71
MOSS-16B            0.10    0.85 ↑    0.65 ↑      24.27
Phys CoT/3-Shot: 51.01 ↑ 17.98 ↓ 47.19 ↑ 25.39 ∼ 23.37 ∼ 21.12 ∼ 18.65 ∼ 23.37 ∼ 22.58 ∼ 14.72 ↓ 25.39 ∼ | 2308.13149#27 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research |
2308.13149 | 29 | Table 5: Results on Answer-Only, Chain-of-Thought and 3-Shot settings of each LLM for Dynamic Data. ↑ means the performance is slightly better than that under the Answer-Only setting, ↓ means worse, and ∼ means the performance is nearly the same.
tions are four-choice questions, the accuracy should be at least 25%. However, none of these models achieves satisfactory results on this subset.
accuracy of 51.01 under the 3-Shot setting, the highest among all models, demonstrating its ability to learn from a mere three examples.
As for Experimental Data, the GPT-series and Claude-series models achieve good results, while the other two models do not. The detailed scores the models reached in each experiment are shown in Appendix C. However, although some models achieve strong overall performance, during the experiments we find that these models are good at experimental principles and design, whereas their performance in analyzing experimental results is not satisfying. | 2308.13149#29 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research |
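The ↑/↓/∼ markers in Table 5 amount to comparing each CoT or 3-Shot score against the Answer-Only baseline. A minimal sketch, assuming a tolerance threshold (the paper does not state the exact one used):

```python
def trend_marker(setting_score, ao_score, tol=0.5):
    """Map a CoT or 3-Shot score to the up/down/same notation of Table 5."""
    if setting_score > ao_score + tol:
        return "↑"  # slightly better than under Answer-Only
    if setting_score < ao_score - tol:
        return "↓"  # worse
    return "∼"      # nearly the same

# e.g. GPT-4, chemistry subset: AO 11.05 vs. 3-Shot 12.42 -> "↑"
print(trend_marker(12.42, 11.05))
```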
2308.13149 | 30 | Discussion Training on a large-scale scientific corpus is helpful. Based on the experimental results (Table 4), Galactica (Taylor et al. 2022), which has been trained on an extensive scientific corpus, significantly outperforms other LLMs with a comparable number of parameters, although Galactica is trained with a much smaller amount of data. Remarkably, when tested on Dynamic Data, Galactica surpasses the GPT-series and Claude-series LLMs on computational problems.
CoT Setting and 3-Shot Setting Comparisons of experiment results among the Answer-Only, Chain-of-Thought, and 3-Shot settings are shown in Figure 5 and Table 5.10 Detailed results are provided in Appendices A and B.
The experimental results from Static Data reveal that only the GPT-series LLMs gain performance within the CoT setting, owing to the limited CoT capabilities of the other LLMs. As for the 3-Shot setting, roughly half of the LLMs analyzed demonstrate superior performances relative to the Answer-Only setting, while the performances of the remaining LLMs are close to those observed under the Answer-Only setting. | 2308.13149#30 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research |
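At the prompt level, the three settings compared here typically differ as sketched below; the templates are generic illustrations of Answer-Only, Chain-of-Thought, and 3-Shot prompting, not SciEval's exact prompts.

```python
# A sketch of how the three evaluation settings differ at the prompt level.
# `demos` would hold worked (question, answer) examples for the 3-Shot setting.
def build_prompt(question, setting, demos=()):
    if setting == "AO":        # Answer-Only: ask for the answer directly
        return f"{question}\nAnswer:"
    if setting == "CoT":       # Chain-of-Thought: elicit step-by-step reasoning
        return f"{question}\nLet's think step by step."
    if setting == "3-Shot":    # prepend three solved examples
        shots = "\n\n".join(f"{q}\nAnswer: {a}" for q, a in demos[:3])
        return f"{shots}\n\n{question}\nAnswer:"
    raise ValueError(setting)
```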
2308.13149 | 31 | From the experimental results derived from Dynamic Data, it is observed that both CoT and 3-Shot significantly enhance the performance of most Large Language Models (LLMs) in the chemistry subset. However, the performances achieved are still not up to the mark. In the physics subset, the impact of CoT and 3-Shot on most LLMs is less pronounced, resulting in nearly random performances. Under the CoT setting, GPT-3.5-turbo achieves an accuracy of 47.19, suggesting a robust understanding of physical principles. Conversely, the performance of GPT-4 is markedly poor; we find that despite its extensive knowledge of physical principles, it frequently employs incorrect formulas to solve problems. Nevertheless, GPT-4 attains an
10When evaluating under the CoT and 3-Shot settings, Claude-Instant and Claude were not available to us due to API limitations. | 2308.13149#31 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research |
2308.13149 | 32 | 10When evaluating under the CoT and 3-Shot settings, Claude-Instant and Claude were not available to us due to API limitations.
Most LLMs perform badly on calculation problems, especially in the physics domain. Detailed results across various knowledge domains on Static Data (refer to Appendix B) reveal that most LLMs underperform in the Scientific Calculation domain while demonstrating relatively superior performance in other domains, and this is particularly acute in the field of physics. Similar issues are also observed in Dynamic Data and Experimental Data. In the context of Dynamic Data, the mean square error, employed to evaluate calculation abilities within the chemistry subset, is exceedingly high for most LLMs, and almost all LLMs can only achieve nearly random performance within the physics subset. Regarding Experimental Data, our findings indicate that these LLMs struggle with the analysis of experimental results. | 2308.13149#32 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research |
2308.13149 | 33 | Conclusion In this paper, we introduce SciEval, a benchmark designed to evaluate the scientific capabilities of LLMs. SciEval comprises about 18,000 challenging scientific questions, covering three fundamental fields of science. SciEval assesses the scientific ability of LLMs across four dimensions. It incorporates both objective and subjective questions, and employs dynamic data generation to mitigate potential data leakage. We conduct comprehensive experiments on various advanced LLMs using SciEval and perform thorough analyses. Our experimental results reveal that most LLMs do not perform well
on our benchmark, with the exception of the GPT-series and Claude-series LLMs. We hope that SciEval can serve as a robust benchmark for assessing the scientific capabilities of LLMs. | 2308.13149#33 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research |
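As an illustration of the dynamic data generation idea mentioned above, the toy generator below produces a fresh molecular-weight question on each call, so the answer cannot be memorized from a fixed test set. SciEval's actual dynamic questions are grounded in scientific principles and real chemical data; the random formulas and approximate atomic weights here are purely illustrative.

```python
import random

# Approximate atomic weights, sufficient for a toy molecular-weight item.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999, "N": 14.007}

def make_dynamic_question(rng=random):
    """Generate a fresh 'What is the molecular weight of ...?' question."""
    counts = {el: rng.randint(1, 9) for el in ATOMIC_WEIGHT}
    formula = "".join(f"{el}{n}" for el, n in counts.items())
    answer = sum(ATOMIC_WEIGHT[el] * n for el, n in counts.items())
    return f"What is the molecular weight of {formula}?", round(answer, 3)

question, gold = make_dynamic_question(random.Random(0))
```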
2308.13149 | 34 | References Blanco-Gonzalez, A.; Cabezon, A.; Seco-Gonzalez, A.; Conde-Torres, D.; Antelo-Riveiro, P.; Pineiro, A.; and Garcia-Fandino, R. 2023. The role of AI in drug discovery: challenges, opportunities, and strategies. Pharmaceuticals, 16(6): 891. Chang, Y.; Wang, X.; Wang, J.; Wu, Y.; Zhu, K.; Chen, H.; Yang, L.; Yi, X.; Wang, C.; Wang, Y.; et al. 2023. A survey on evaluation of large language models. arXiv preprint arXiv:2307.03109. Du, Z.; Qian, Y.; Liu, X.; Ding, M.; Qiu, J.; Yang, Z.; and Tang, J. 2022. GLM: General Language Model Pretraining with Autoregressive Blank Infilling. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 320–335. Forehand, M. 2010. Bloom's taxonomy. Emerging perspectives on learning, teaching, and technology, | 2308.13149#34 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research |
2308.13149 | 35 | 1: Long Papers), 320–335. Forehand, M. 2010. Bloom's taxonomy. Emerging perspectives on learning, teaching, and technology, 41(4): 47–56. Frey, N.; Soklaski, R.; Axelrod, S.; Samsi, S.; Gomez-Bombarelli, R.; Coley, C.; and Gadepally, V. 2022. Neural scaling of deep chemical models. Guo, T.; Guo, K.; Liang, Z.; Guo, Z.; Chawla, N. V.; Wiest, O.; Zhang, X.; et al. 2023. What indeed can GPT models do in chemistry? A comprehensive benchmark on eight tasks. arXiv preprint arXiv:2305.18365. Hendrycks, D.; Burns, C.; Basart, S.; Zou, A.; Mazeika, M.; Song, D.; and Steinhardt, J. 2020. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300. Hendrycks, D.; Burns, C.; Kadavath, S.; Arora, A.; Basart, S.; Tang, E.; Song, D.; and | 2308.13149#35 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research |
2308.13149 | 36 | Hendrycks, D.; Burns, C.; Kadavath, S.; Arora, A.; Basart, S.; Tang, E.; Song, D.; and Steinhardt, J. 2021. Measuring mathematical problem solving with the MATH dataset. arXiv preprint arXiv:2103.03874. Huang, Y.; Bai, Y.; Zhu, Z.; Zhang, J.; Zhang, J.; Su, T.; Liu, J.; Lv, C.; Zhang, Y.; Lei, J.; et al. 2023. C-Eval: A multi-level multi-discipline Chinese evaluation suite for foundation models. arXiv preprint arXiv:2305.08322. Jin, D.; Pan, E.; Oufattole, N.; Weng, W.-H.; Fang, H.; and Szolovits, P. 2021. What disease does this patient have? A large-scale open domain question answering dataset from medical exams. Applied Sciences, 11(14): 6421. Jin, Q.; Dhingra, B.; Liu, Z.; Cohen, W. W.; and Lu, X. 2019. PubMedQA: A dataset for biomedical research question answering. | 2308.13149#36 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research |
2308.13149 | 37 | B.; Liu, Z.; Cohen, W. W.; and Lu, X. 2019. PubMedQA: A dataset for biomedical research question answering. arXiv preprint arXiv:1909.06146. Kojima, T.; Gu, S. S.; Reid, M.; Matsuo, Y.; and Iwasawa, Y. 2022. Large language models are zero-shot reasoners. Advances in Neural Information Processing Systems, 35: 22199–22213. Krathwohl, D. R. 2002. A revision of Bloom's taxonomy: An overview. Theory into Practice, 41(4): 212–218. Liang, P.; Bommasani, R.; Lee, T.; Tsipras, D.; Soylu, D.; Yasunaga, M.; Zhang, Y.; Narayanan, D.; Wu, Y.; Kumar, | 2308.13149#37 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research |
2308.13149 | 38 | A.; et al. 2022. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110. Lu, P.; Mishra, S.; Xia, T.; Qiu, L.; Chang, K.-W.; Zhu, S.-C.; Tafjord, O.; Clark, P.; and Kalyan, A. 2022. Learn to explain: Multimodal reasoning via thought chains for science question answering. Advances in Neural Information Processing Systems, 35: 2507–2521. Luo, R.; Sun, L.; Xia, Y.; Qin, T.; Zhang, S.; Poon, H.; and Liu, T.-Y. 2022. BioGPT: generative pre-trained transformer for biomedical text generation and mining. Briefings in Bioinformatics, 23(6): bbac409. OpenAI. 2023. GPT-4 Technical Report. arXiv:2303.08774. Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C.; Mishkin, P.; Zhang, C.; Agarwal, S.; Slama, K.; Ray, A.; et al. 2022. Training language models to follow instructions with human | 2308.13149#38 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research |
2308.13149 | 39 | P.; Zhang, C.; Agarwal, S.; Slama, K.; Ray, A.; et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730–27744. Papineni, K.; Roukos, S.; Ward, T.; and Zhu, W.-J. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 311–318. Schulman, J.; Zoph, B.; Kim, C.; Hilton, J.; Menick, J.; Weng, J.; Uribe, J. F. C.; Fedus, L.; Metz, L.; Pokorny, M.; et al. 2022. ChatGPT: Optimizing language models for dialogue. OpenAI blog. Singhal, K.; Azizi, S.; Tu, T.; Mahdavi, S. S.; Wei, J.; Chung, H. W.; Scales, N.; Tanwani, A.; Cole-Lewis, H.; Pfohl, S.; et al. 2023. Large language models encode clinical knowledge. Nature, 1–9. | 2308.13149#39 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research |
2308.13149 | 40 | A.; Cole-Lewis, H.; Pfohl, S.; et al. 2023. Large language models encode clinical knowledge. Nature, 1–9. Srivastava, A.; Rastogi, A.; Rao, A.; Shoeb, A. A. M.; Abid, A.; Fisch, A.; Brown, A. R.; Santoro, A.; Gupta, A.; Garriga-Alonso, A.; et al. 2022. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615. Sun, T.; Zhang, X.; He, Z.; Li, P.; Cheng, Q.; Yan, H.; Liu, X.; Shao, Y.; Tang, Q.; Zhao, X.; Chen, K.; Zheng, Y.; Zhou, Z.; Li, R.; Zhan, J.; Zhou, Y.; Li, L.; Yang, X.; Wu, L.; Yin, Z.; Huang, X.; and Qiu, X. 2023. MOSS: Training Conversational Language Models from Synthetic Data. Taori, R.; Gulrajani, I.; Zhang, T.; Dubois, Y.; Li, X.; Guestrin, C.; | 2308.13149#40 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research |
2308.13149 | 41 | Language Models from Synthetic Data. Taori, R.; Gulrajani, I.; Zhang, T.; Dubois, Y.; Li, X.; Guestrin, C.; Liang, P.; and Hashimoto, T. B. 2023. Stanford Alpaca: An instruction-following LLaMA model. Taylor, R.; Kardas, M.; Cucurull, G.; Scialom, T.; Hartshorn, A.; Saravia, E.; Poulton, A.; Kerkez, V.; and Stojnic, R. 2022. GALACTICA: A Large Language Model for Science. Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Lacroix, T.; Rozière, B.; Goyal, N.; Hambro, E.; Azhar, F.; et al. 2023. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Wang, F.; and Miao, Q. 2023. Novel Paradigm for AI-driven Scientific Research: From AI4S to Intelligent Science. Bulletin of Chinese Academy of Sciences (Chinese Version), | 2308.13149#41 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research |
2308.13149 | 43 | SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models. arXiv preprint arXiv:2307.10635.
Zheng, L.; Chiang, W.-L.; Sheng, Y.; Zhuang, S.; Wu, Z.; Zhuang, Y.; Lin, Z.; Li, Z.; Li, D.; Xing, E.; et al. 2023. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. arXiv preprint arXiv:2306.05685.
Zhong, W.; Cui, R.; Guo, Y.; Liang, Y.; Lu, S.; Wang, Y.; Saied, A.; Chen, W.; and Duan, N. 2023. AGIEval: A human-centric benchmark for evaluating foundation models. arXiv preprint arXiv:2304.06364.
# A Detailed Results on Dynamic Data
In this section, we show detailed results on the Chemistry subset of Dynamic Data under the Chain-of-Thought (Table 6) and 3-Shot (Table 7) settings. The performance comparison under different settings can be found in Table 5 of the main body.
# B Detailed Results on Static Data | 2308.13149#43 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research |
2308.13149 | 44 | # B Detailed Results on Static Data
In this section, we show detailed results on Static Data across different knowledge domains under Answer-Only (Table 9), Chain-of-Thought (Table 10) and 3-Shot settings (Table 11), and the overall results are shown in Table 8.
# C Detailed Results on Experimental Data
In this section, we show detailed results for each experiment, as reported in Table 12. Each category contains four experiments, and each experiment is composed of several questions.
Model            Acc.   BLEU   MSE (Chemistry)
GPT-4            11.65  16.13      156.34
GPT-3.5-turbo    10.2   12.93     1336.76
Galactica-30B     2.6    0.52    12155.50
Vicuna-13B        1.95   3.28    71509.65
Galactica-6.7B    1.75   2.67    11517.12
ChatGLM2-6B       2.65   0.83  1113845.91
ChatGLM-6B        0.8    1.33    36150.04
Alpaca-7B         0.65   1.58   413735.26
MOSS-16B          0.85   3.74   145736.31
LLaMa-13B         0.25   0.85   791120.58
LLaMa-7B          0.1    0.74    22521.28 | 2308.13149#44 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research |
2308.13149 | 45 | Table 6: Detailed results on the Chemistry subset of Dynamic Data under the Chain-of-Thought setting.
Model            Acc.   BLEU   MSE (Chemistry)
GPT-4            12.42  26.97    191.99
GPT-3.5-turbo     8.85  24.92    483.39
Galactica-30B     3.30  12.08    264.58
Vicuna-13B        1.80   9.24     88.79
Galactica-6.7B    3.05   5.93    324.05
ChatGLM2-6B       1.60   5.05   1080.68
ChatGLM-6B        1.15   4.24   5578.05
Alpaca-7B         2.10   5.85   2068.95
MOSS-16B          0.65   9.00  13811.04
LLaMa-13B         2.11   9.69    423.60
LLaMa-7B          1.55   7.80    598.44
Table 7: Detailed results on the Chemistry subset of Dynamic Data under the 3-Shot setting. | 2308.13149#45 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research |
2308.13149 | 46 | Table 7: Detailed results on the Chemistry subset of Dynamic Data under the 3-Shot setting.
Model            AO     CoT    3-Shot
GPT-4            73.93  79.76  80.09
GPT-3.5-turbo    66.97  68.28  68.89
Galactica-30B    54.96  41.56  53.45
Vicuna-13B       53.93  53.34  50.50
Galactica-6.7B   50.87  36.93  49.39
ChatGLM2-6B      48.44  48.22  47.65
ChatGLM-6B       47.23  39.48  46.59
Alpaca-7B        46.54  40.57  47.85
MOSS-16B         38.23  35.92  42.00
LLaMa-13B        36.96  33.53  42.49
LLaMa-7B         28.37  24.56  35.37
Table 8: Overall results on Static Data under Answer-Only (AO), Chain-of-Thought (CoT) and 3-Shot settings.
# D Dataset Example
In this section, we show examples of different disciplines, different knowledge domains, and different subsets, including Static Data (Figures 6 to 15) and Dynamic Data (Figures 16 and 17). | 2308.13149#46 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research |
2308.13149 | 48 | Model                Biology BK  Biology KA  Biology SC  Biology RA  Chemistry BK
GPT-4                  94.29       80.81       89.14       67.08       92.94
GPT-3.5-turbo          90.61       61.94       77.90       65.40       84.57
Claude-v1.3            90.92       62.35       76.78       45.98       85.11
Claude-instant-v1.1    88.80       54.98       76.78       50.33       80.45
Galactica-30B          77.85       45.18       65.92       71.54       66.36
Vicuna-13B             80.13       40.24       67.79       33.82       64.80
Galactica-6.7B         66.86       36.36       57.68       68.08       54.52
ChatGLM2-6B            71.21       35.38       58.80       63.50       56.78
ChatGLM-6B             66.34       34.66       53.93       47.10       54.41
Alpaca-7B              62.30       37.81       50.19       72.43       48.49
MOSS-16B               51.92       30.85       38.20       64.73       39.40
LLaMa-13B              55.03       30.69       45.32       60.38       37.08
LLaMa-7B               31.33       28.10       22.47       62.16 | 2308.13149#48 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research |
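Per-domain breakdowns such as Table 9 (BK, KA, SC, and RA presumably abbreviate the four evaluated dimensions, e.g. SC for Scientific Calculation) reduce to a grouped accuracy computation over per-question records. A minimal sketch with illustrative field names:

```python
from collections import defaultdict

def domain_accuracy(records):
    """Aggregate (model, domain, is_correct) records into accuracies (%)."""
    hits, totals = defaultdict(int), defaultdict(int)
    for model, domain, correct in records:
        hits[(model, domain)] += int(correct)
        totals[(model, domain)] += 1
    return {key: 100.0 * hits[key] / totals[key] for key in totals}

table = domain_accuracy([
    ("GPT-4", "SC", True), ("GPT-4", "SC", False), ("GPT-4", "BK", True),
])
# -> {("GPT-4", "SC"): 50.0, ("GPT-4", "BK"): 100.0}
```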