You are an expert programmer who can readily adapt to new programming languages. There is a new programming language, ThonPy, which is identical to Python 3.7 except all variables of the `list`, `tuple`, and `str` types use 1-based indexing, like in the MATLAB and R languages, where sequence indices start from 1. That is, index `n` represents the `n`-th element in a sequence, NOT the `n+1`-th as in 0-based indexing. This change only affects when the index is non-negative. When the index is negative, the behavior is the same as Python 3.7. This also affects methods of these classes such as `index` and `pop`. The built-in functions `enumerate` and `range` also use 1-based indexing: by default, the index of `enumerate` starts from 1, and so does the lower bound of `range` when not supplied (the higher bound is unchanged).
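The 1-based semantics described in this prompt can be emulated in ordinary Python. The `OneList` wrapper below is a hypothetical sketch (ours, not part of the paper or its prompts) covering the indexing, slicing, and `index` rules; index 0 is left unspecified, as the prompt never defines it:

```python
class OneList(list):
    """Sketch of ThonPy-style 1-based indexing on top of Python's list.

    Non-negative indices and slice bounds are shifted down by one;
    negative indices and open bounds keep their standard Python meaning.
    """

    @staticmethod
    def _shift(i):
        # None (open slice bound) and negative indices pass through unchanged.
        if i is None or i < 0:
            return i
        return i - 1

    def __getitem__(self, key):
        if isinstance(key, slice):
            key = slice(self._shift(key.start), self._shift(key.stop), key.step)
            return OneList(list.__getitem__(self, key))
        return list.__getitem__(self, self._shift(key))

    def index(self, value):
        # ThonPy's index() reports 1-based positions.
        return list.index(self, value) + 1


nums = OneList([7, 8, 9])
assert nums[1] == 7            # index 1 is the first element
assert nums[-1] == 9           # negative indexing is unchanged
assert nums[1:] == [7, 8, 9]   # slice start 1 covers the whole list
assert nums[:2] == [7]         # slice stop shifts too, as in "abc"[:2] == "a"
assert nums.index(8) == 2
```

The asserts mirror the worked examples given in the paper's few-shot block.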
# CF

Paper: Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks (arXiv:2307.02477)
Authors: Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim
Source: http://arxiv.org/pdf/2307.02477 (cs.CL, cs.AI; published 2023-07-05, updated 2023-08-01)

Abstract: The impressive performance of recent language models across a wide range of tasks suggests that they possess a degree of abstract reasoning skills. Are these skills general and transferable, or specialized to specific tasks seen during pretraining? To disentangle these effects, we propose an evaluation framework based on "counterfactual" task variants that deviate from the default assumptions underlying standard tasks. Across a suite of 11 tasks, we observe nontrivial performance on the counterfactual variants, but nevertheless find that performance substantially and consistently degrades compared to the default conditions. This suggests that while current LMs may possess abstract task-solving skills to a degree, they often also rely on narrow, non-transferable procedures for task-solving. These results motivate a more careful interpretation of language model performance that teases apart these aspects of behavior.
For example,
```thonpy
assert (7, 8, 9)[1] == 7
assert ["abc", "def", "ghi"][3] == "ghi"
assert "abcde"[4] == "d"
assert "abc"[:2] == "a"
assert [7, 8, 9][1:] == [7, 8, 9][1:5] == [7, 8, 9][1::1] == [7, 8, 9][:4] == [9, 8, 7][::-1] == [9, 8, 7, 6][3::-1] == [7, 8, 9]
assert list(enumerate([7, 8, 9])) == [(1, 7), (2, 8), (3, 9)]
assert list(range(2)) == [1]
assert list(range(2, 4)) == [2, 3]
assert {0: 7, 1: 8, 2: 9}[1] == 8
assert [7, 8, 9].index(8) == 2
```

What does the following code snippet in ThonPy print?
```thonpy
def function(lst):
    return sum([lst[i] for i in range(1, len(lst), 2) if lst[i] % 2 == 0])

print([function([4, 88])])
print([function([4, 5, 6, 7, 2, 122])])
print([function([4, 0, 6, 7])])
print([function([4, 4, 6, 8])])
print([list(range(3))])
print([[4, 5, 6].pop(2)])
print(["qrs"[:2]])
print(["qrstu"[4]])
print([list(enumerate("qrstuv"))])
```
{Let's think step by step. Write out intermediate results and reasoning processes as needed. } End the response by saying "The final output is:" and a unified summary ```thonpy``` code block with *ALL* the output, in which each line represents the output of each print statement.
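Ground-truth outputs for such ThonPy snippets can be derived mechanically by translating 1-based accesses into standard Python. A sketch for the `function` above — the translation and derived values are our own, not taken from the paper:

```python
def thonpy_function(lst):
    # ThonPy's range(1, len(lst), 2) yields the 1-based indices 1, 3, 5, ...
    # (the upper bound stays exclusive, as in Python), and lst[i] reads the
    # i-th element, i.e. Python's lst[i - 1].
    return sum(lst[i - 1] for i in range(1, len(lst), 2) if lst[i - 1] % 2 == 0)

print(thonpy_function([4, 88]))               # 4
print(thonpy_function([4, 5, 6, 7, 2, 122]))  # 4 + 6 + 2 = 12
print(thonpy_function([4, 0, 6, 7]))          # 4 + 6 = 10
print(thonpy_function([4, 4, 6, 8]))          # 4 + 6 = 10
```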
Table 2: Prompts for the program execution task. {Let's think step by step. Write out intermediate results and reasoning processes as needed. } is added only if 0-shot CoT is used. All the print statements wrap the expression in a singleton list for ease of parsing, so that (a) each output always takes a single line even with line breaks in the middle, and (b) we can distinguish between a string representation of, e.g., an integer and the integer type.
# Mode
# Prompt
# Default

You are an expert programmer. Complete the following function in Python 3.7. Please only output the code for the completed function.

def add(lst):
    """Given a non-empty list of integers lst. add the even elements that are at odd indices..
    Examples: add([4, 2, 6, 7]) ==> 2
    """
You are an expert programmer who can readily adapt to new programming languages. There is a new programming language, ThonPy, which is identical to Python 3.7 except all variables of the `list`, `tuple`, and `str` types use 1-based indexing, like in the MATLAB and R languages, where sequence indices start from 1. That is, index `n` represents the `n`-th element in a sequence, NOT the `n+1`-th as in 0-based indexing. This change only affects when the index is non-negative. When the index is negative, the behavior is the same as Python 3.7. This also affects methods of these classes such as `index` and `pop`. The built-in functions `enumerate` and `range` also use 1-based indexing: by default, the index of `enumerate` starts from 1, and so does the lower bound of `range` when not supplied (the higher bound is unchanged).
# CF
For example,
```thonpy
assert (7, 8, 9)[1] == 7
assert ["abc", "def", "ghi"][3] == "ghi"
assert "abcde"[4] == "d"
assert "abc"[:2] == "a"
assert [7, 8, 9][1:] == [7, 8, 9][1:5] == [7, 8, 9][1::1] == [7, 8, 9][:4] == [9, 8, 7][::-1] == [9, 8, 7, 6][3::-1] == [7, 8, 9]
assert list(enumerate([7, 8, 9])) == [(1, 7), (2, 8), (3, 9)]
assert list(range(2)) == [1]
assert list(range(2, 4)) == [2, 3]
assert {0: 7, 1: 8, 2: 9}[1] == 8
assert [7, 8, 9].index(8) == 2
```
Complete the following function in ThonPy. Please only output the code for the completed function.
def add(lst):
    """Given a non-empty list of integers lst. add the even elements that are at odd indices..
    Examples: add([4, 2, 6, 7]) ==> 2
    """
Table 3: Prompts for the program generation task.
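Under the counterfactual convention, the reference solution for `add` shifts which elements count as "odd-indexed". A sketch of both variants in standard Python — this is our derivation, not the paper's reference code:

```python
def add_python(lst):
    # Default (0-based): odd indices are 1, 3, 5, ...
    return sum(x for i, x in enumerate(lst) if i % 2 == 1 and x % 2 == 0)

def add_thonpy_ground_truth(lst):
    # Counterfactual (1-based): ThonPy's odd indices 1, 3, 5, ...
    # name the elements at Python's positions 0, 2, 4, ...
    return sum(x for i, x in enumerate(lst) if i % 2 == 0 and x % 2 == 0)

assert add_python([4, 2, 6, 7]) == 2                 # matches the docstring example
assert add_thonpy_ground_truth([4, 2, 6, 7]) == 10   # 4 + 6
```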
# Mode
# Prompt
# Default
You are an expert in linguistics. Your task is to identify the main verb and the main subject of a sentence in English. Show the main verb (a single word) and its subject (also a single word) after the prefix "Main verb and subject: ". Sentence: japan had just opened its doors to the world after about 250 years of isolation . {Let's think step by step. }
You are an expert in linguistics. Imagine a language that is the same as English with the only exception being that it uses the verb-object-subject order instead of the subject-verb-object order. Your task is to identify the main verb and the main subject in a sentence in this imaginary language. Show the main verb (a single word) and its subject (also a single word) after the prefix "Main verb and subject: ". Sentence: had just opened its doors japan to the world after about 250 years of isolation . {Let's think step by step. }
# CF
You are an expert in linguistics. Imagine a language that is the same as English with the only exception being that it uses the verb-subject-object order instead of the subject-verb-object order. Your task is to reconstruct the original sentence in English. You should only use the words in the same form as they appear in the given sentence. Sentence: saw anna john Show the original sentence at the end after the prefix "Original sentence: ". {Let's think step by step. }
# CCC
Table 4: Prompts for the basic syntactic reasoning task. {Let's think step by step. } is added only if 0-shot CoT is used.
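The SVO-to-VOS manipulation used to build these counterfactual sentences can be sketched for a toy parse; the `to_vos` helper and the fixed phrase boundaries are our assumptions, not the paper's data pipeline:

```python
def to_vos(subject, verb_phrase, object_phrase, trailing=""):
    # English is subject-verb-object; the counterfactual language moves
    # the subject after the object: verb-object-subject. Trailing
    # adjuncts stay at the end of the sentence.
    words = [verb_phrase, object_phrase, subject]
    if trailing:
        words.append(trailing)
    return " ".join(words)

print(to_vos("japan", "had just opened", "its doors",
             "to the world after about 250 years of isolation ."))
# -> had just opened its doors japan to the world after about 250 years of isolation .
```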
# Mode Prompt
Consider the following premises: "All corgis are reptiles. All reptiles are plants." Assuming no other commonsense or world knowledge, is the sentence "All corgis are plants." necessarily true, necessarily false, or neither? {Let's think step by step, and }end the response with either "necessarily true", "necessarily false", or "neither".
# Test
# CCC
Consider the following premises: "All corgis are reptiles. All reptiles are plants." Assuming no other commonsense or world knowledge, which sentence between (a) "All corgis are reptiles." and (b) "All corgis are mammals." is definitely true? Answer just "(a)" or "(b)" and nothing else. You MUST choose one and only one, so DO NOT say neither or both.
Table 5: Prompts for the natural language reasoning task. {Let's think step by step, and } is added only if 0-shot CoT is used (and the following "end" is capitalized without 0-shot CoT). We only use a made-up example here rather than one in the dataset due to the non-public nature of the dataset (§A.4). Default and counterfactual tasks share the same test template, but the instances themselves are changed to be counterfactual. For the CCC, we separate each changed premise in an instance into a separate prompt. The default statement and the counterfactual statement are matched to (a) and (b) randomly. We do not distinguish between CCC with or without 0-shot CoT.
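The "necessarily true" judgment for such premise sets reduces to a transitive closure over "All X are Y" relations. A minimal checker, as a sketch that assumes only universal-affirmative premises (the representation is ours, not the paper's):

```python
from collections import defaultdict

def entails(premises, conclusion):
    """Return True iff conclusion ("A", "C"), read as "All A are C",
    follows from premises of the form ("A", "B") meaning "All A are B"."""
    supersets = defaultdict(set)
    for sub, sup in premises:
        supersets[sub].add(sup)
    # Transitive closure: collect everything the subject is contained in.
    seen, frontier = set(), {conclusion[0]}
    while frontier:
        frontier = {t for s in frontier for t in supersets[s]} - seen
        seen |= frontier
    return conclusion[1] in seen

print(entails([("corgi", "reptile"), ("reptile", "plant")], ("corgi", "plant")))   # True
print(entails([("corgi", "reptile"), ("reptile", "plant")], ("corgi", "mammal")))  # False
```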
# Mode
# Prompt
You are in the middle of a kitchen. There is a microwave on the south side. There is a fridge on the west side. There is a coffee machine on the north side. We define the following directions. The north direction is (0, -1). The south direction is (0, 1). The east direction is (1, 0). The west direction is (-1, 0). What's the layout of the room in the following format? You can estimate the size of the objects.
# Default
```json {"name": "??", "width": 500, "height": 500, "directions": {"north": "??", "south": "??", "east": "??", "west": "??"}, "objects": [{"name": "??", "x": "??", "y": "??"}]} ```
# {Let's think step by step.}
You are in the middle of a kitchen. There is a microwave on the south side. There is a fridge on the west side. There is a coffee machine on the north side. We define the following directions. The north direction is (-1, 0). The south direction is (1, 0). The east direction is (0, -1). The west direction is (0, 1). What's the layout of the room in the following format? You can estimate the size of the objects.
# CF
```json {"name": "??", "width": 500, "height": 500, "directions": {"north": "??", "south": "??", "east": "??", "west": "??"}, "objects": [{"name": "??", "x": "??", "y": "??"}]} ```
{Let's think step by step.}
Table 6: Prompts for the spatial reasoning task. {Let's think step by step.} is added only if 0-shot CoT is used.
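The counterfactual direction vectors change where each object lands on the canvas. A sketch of the intended coordinate logic — the `place` helper, the margin, and the 500×500 sizing follow the prompt's template but are otherwise our assumptions:

```python
def place(objects, directions, width=500, height=500, margin=50):
    """Map each object to (x, y): start at the room's center and move
    along the stated direction vector toward the named wall."""
    cx, cy = width // 2, height // 2
    layout = {}
    for name, side in objects.items():
        dx, dy = directions[side]
        layout[name] = (cx + dx * (width // 2 - margin),
                        cy + dy * (height // 2 - margin))
    return layout

default_dirs = {"north": (0, -1), "south": (0, 1), "east": (1, 0), "west": (-1, 0)}
swapped_dirs = {"north": (-1, 0), "south": (1, 0), "east": (0, -1), "west": (0, 1)}
objects = {"microwave": "south", "fridge": "west", "coffee machine": "north"}

print(place(objects, default_dirs))  # microwave at the bottom, fridge at the left
print(place(objects, swapped_dirs))  # same sides, but rotated coordinates
```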
# Mode Prompt
# Default 1
You are an expert programmer and drawer. Imagine an image: there is a line at the top and a house in the middle. Can you try your best to draw this image using the `processing` language? Please do not use any built-in transformation functions, such as `rotate` and `scale`. Also, avoid defining any custom transformation functions yourself. Do not load any existing images. Please include as many details of the house as possible and put everything together in the end.
# {Let's think step by step.}
# Default 2
You are an expert programmer and drawer. Imagine an image: there is a line at the top and a house in the middle. Can you try your best to draw the line and the house using the `processing` language? Please do not use any built-in transformation functions, such as `rotate`, `scale`, and `translate`. Also, avoid defining any custom transformation functions yourself. Do not load any existing images. Please include as many details of the house as possible and put everything together in the end.
# {Let's think step by step.}
# CF 1
You are an expert programmer and drawer. Imagine an image: there is a line at the top and a house in the middle. Can you rotate this image 180 degrees and try your best to draw it using the `processing` language? Please do not use any built-in transformation functions, such as `rotate` and `scale`. Also, avoid defining any custom transformation functions yourself. Do not load any existing images. Do not draw the original objects. Please include as many details of the house as possible and put everything together in the end.
# {Let's think step by step.}
# CF 2
You are an expert programmer and drawer. Imagine an image: there is a line at the top and a house in the middle. Can you rotate this image 180 degrees and try your best to draw the 180-degree rotated line and the 180-degree rotated house using the `processing` language? Please do not use any built-in transformation functions, such as `rotate`, `scale`, and `translate`. Also, avoid defining any custom transformation functions yourself. Do not load any existing images. Do not draw the original objects. Please include as many details of the house as possible and put everything together in the end.
# {Let's think step by step.}
Table 7: Prompts for the drawing task. {Let's think step by step.} is added only if 0-shot CoT is used. We use prompt 1 for GPT-4 and prompt 2 for GPT-3.5 and Claude. We chose the prompt based on the best CCC accuracy for each respective model. In our preliminary experiments, we found that switching the prompt hurts CCC accuracy by more than 20% for both GPT-4 and GPT-3.5. Claude does not follow our instructions when using prompt 1, leading to almost 0% CCC accuracy.
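Rotating the scene 180 degrees without `rotate()` amounts to reflecting every coordinate through the canvas center. A sketch of the coordinate map (the 400×400 canvas size is an assumption; Processing's origin is the top-left corner):

```python
W, H = 400, 400  # assumed canvas size

def rot180(x, y):
    # Point reflection through the canvas center (W/2, H/2):
    # equivalent to rotating the whole image by 180 degrees.
    return W - x, H - y

print(rot180(200, 40))  # (200, 360): a point near the top maps near the bottom
```

Applying `rot180` to every vertex lets the line "at the top" be drawn at the bottom and the house be drawn upside down, with no transformation calls.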
Mode Prompt
# Default

You are an expert guitar player. I have a guitar with standard strings E-A-D-G-B-E.
I want you to tell me how I could play the E minor triad on this guitar.
Make sure to choose one final answer, which you should start with "ANSWER:" and specify in the following format:

ANSWER:
E string: fret FRET
A string: fret FRET
D string: fret FRET
G string: fret FRET
B string: fret FRET
E string: fret FRET
Use fret 0 to indicate an open string and fret X to indicate not playing a string. Each increase in fret corresponds to an increase in half a note.
# {Let's think step by step.}
You are an expert guitar player. I have a special guitar with strings tuned to E-C-F-G-B-E instead of the standard E-A-D-G-B-E. Note that the standard A string is instead tuned to C, and the standard D string is instead tuned to F. All other strings are the same.
I want you to tell me how I could play the E minor triad on this guitar.
Make sure to choose one final answer, which you should start with "ANSWER:" and specify in the following format:
# CF
ANSWER:
E string: fret FRET
C string: fret FRET
F string: fret FRET
G string: fret FRET
B string: fret FRET
E string: fret FRET
Use fret 0 to indicate an open string and fret X to indicate not playing a string. Each increase in fret corresponds to an increase in half a note.
# {Let's think step by step.}
Table 8: Prompts for chord fingering: guitar. {Let's think step by step.} is added only if 0-shot CoT is used.
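The fret arithmetic these prompts describe (each fret raises the open string by one semitone, i.e. "half a note") can be sketched in Python. The note table, tuning list, and greedy per-string search below are illustrative assumptions, not the paper's evaluation code:

```python
# Pitch classes in semitone order; sharp spellings chosen for simplicity.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_at(open_note: str, fret: int) -> str:
    """Pitch class sounded at `fret` on a string tuned to `open_note`."""
    return NOTES[(NOTES.index(open_note) + fret) % 12]

# Counterfactual guitar tuning from Table 8 and the E minor triad {E, G, B}.
TUNING = ["E", "C", "F", "G", "B", "E"]
TRIAD = {"E", "G", "B"}

# For each string, greedily take the lowest fret (0-4) landing on a triad tone.
fingering = [next(f for f in range(5) if note_at(s, f) in TRIAD) for s in TUNING]
print(fingering)                                           # [0, 4, 2, 0, 0, 0]
print([note_at(s, f) for s, f in zip(TUNING, fingering)])  # ['E', 'E', 'G', 'G', 'B', 'E']
```

This is only one playable answer; the prompt accepts any fingering whose sounded notes all belong to the triad.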
Mode Prompt
You are an expert guitar player. I have a guitar with standard strings E-A-D-G-B-E.
I want you to tell me what notes the following sequences of finger positions corresponds to:
E string: fret 0 A string: fret 0 D string: fret 0 G string: fret 0 B string: fret 0 E string: fret 0
# Default
Note that fret 0 indicates an open string, and each increase in fret corresponds to an increase in half a note.
Make sure to choose one final answer, which you should start with "ANSWER:" and format with dash-separated notes in the order of strings E-A-D-G-B-E.
{Let's think step by step.}
You are an expert guitar player. I have a special guitar with strings tuned to E-C-F-G-B-E instead of the standard E-A-D-G-B-E. Note that what is the standard A string is instead tuned to C, and the standard D string is instead tuned to F. All other strings are the same.
I want you to tell me what notes the following sequences of finger positions corresponds to:
# CF
E string: fret 0 C string: fret 0 F string: fret 0 G string: fret 0 B string: fret 0 E string: fret 0
Note that fret 0 indicates an open string, and each increase in fret corresponds to an increase in half a note.
Make sure to choose one final answer, which you should start with "ANSWER:" and format with dash-separated notes in the order of strings E-C-F-G-B-E.
{Let's think step by step.}
Table 9: CCC prompts for chord fingering: guitar. {Let's think step by step.} is added only if 0-shot CoT is used.
Mode Prompt
You are an expert ukulele player. I have a ukulele with standard strings G-C-E-A.
I want you to tell me how I could play the E minor triad on this ukulele.
Make sure to choose one final answer, which you should start with "ANSWER:" and specify in the following format:
Default ANSWER: G string: fret FRET C string: fret FRET E string: fret FRET A string: fret FRET
Use fret 0 to indicate an open string and fret X to indicate not playing a string. Each increase in fret corresponds to an increase in half a note.
# {Let's think step by step.}
You are an expert ukulele player. I have a special ukulele with strings tuned to F-C-E-A instead of the standard G-C-E-A. Note that what is the standard G string is instead tuned to F. All other strings are the same.
I want you to tell me how I could play the E minor triad on this ukulele.
Make sure to choose one final answer, which you should start with "ANSWER:" and specify in the following format:
# CF
ANSWER: F string: fret FRET C string: fret FRET E string: fret FRET A string: fret FRET
Use fret 0 to indicate an open string and fret X to indicate not playing a string. Each increase in fret corresponds to an increase in half a note.
{Let's think step by step.}
Table 10: Prompts for chord fingering: ukulele. {Let's think step by step.} is added only if 0-shot CoT is used.
Mode Prompt
You are an expert ukulele player. I have a ukulele with standard strings G-C-E-A.
I want you to tell me what notes the following sequences of finger positions corresponds to:
# Default
G string: fret 0 C string: fret 0 E string: fret 0 A string: fret 0
Note that fret 0 indicates an open string, and each increase in fret corresponds to an increase in half a note.
Make sure to choose one final answer, which you should start with "ANSWER:" and format with dash-separated notes in the order of strings G-C-E-A.
{Let's think step by step.}
You are an expert ukulele player. I have a special ukulele with strings tuned to F-C-E-A instead of the standard G-C-E-A. Note that what is the standard G string is instead tuned to F. All other strings are the same.
I want you to tell me what notes the following sequences of finger positions corresponds to:
# CF
F string: fret 0 C string: fret 0 E string: fret 0 A string: fret 0
Note that fret 0 indicates an open string, and each increase in fret corresponds to an increase in half a note.
Make sure to choose one final answer, which you should start with "ANSWER:" and format with dash-separated notes in the order of strings F-C-E-A.
{Let's think step by step.}
Table 11: CCC prompts for chord fingering: ukulele. {Let's think step by step.} is added only if 0-shot CoT is used.
Mode Prompt
# Default
You are an expert musician. What is the second note of the melody of the song "Twinkle Twinkle Little Star" in C major? Make sure to choose one final answer, which you should start with "ANSWER:" and specify in the following format: NOTE={note}.
{Let's think step by step.}
# CF
You are an expert musician. What is the second note of the melody of the song "Twinkle Twinkle Little Star" in Db major? Make sure to choose one final answer, which you should start with "ANSWER:" and specify in the following format: NOTE={note}.
{Let's think step by step.}
Table 12: Prompts for melody retrieval. {Let's think step by step.} is added only if 0-shot CoT is used.
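Transposing from C major to Db major shifts every melody note up one semitone. This can be sketched as follows; the hard-coded opening phrase of "Twinkle Twinkle Little Star" and the flat-based spelling table are assumptions for illustration, not the paper's code:

```python
# Flat spellings, matching the key of Db major.
PITCHES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def transpose(note: str, semitones: int) -> str:
    return PITCHES[(PITCHES.index(note) + semitones) % 12]

# Opening of "Twinkle Twinkle Little Star" in C major (assumed here).
melody_c = ["C", "C", "G", "G", "A", "A", "G"]
melody_db = [transpose(n, 1) for n in melody_c]  # C major -> Db major is +1 semitone
print(melody_db[1])  # second note of the melody in Db major -> Db
```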
# Mode
# Prompt
# Default
You are an expert musician. What is the second note of the C major scale? Make sure to choose one final answer, which you should start with "ANSWER:" and specify in the following format: NOTE={note}.
# {Let's think step by step.}
You are an expert musician. What is the second note of the Db major scale? Make sure to choose one final answer, which you should start with "ANSWER:" and specify in the following format: NOTE={note}.
CF
# {Let's think step by step.}
Table 13: CCC prompts for melody retrieval. {Let's think step by step.} is added only if 0-shot CoT is used.
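The CCC question reduces to building a major scale from its step pattern (whole-whole-half-whole-whole-whole-half). A sketch with a single flat-based pitch table, which deliberately simplifies enharmonic spelling:

```python
PITCHES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2]  # W-W-H-W-W-W; the final half step returns to the tonic

def major_scale(tonic: str) -> list:
    """Seven scale degrees of the major scale starting at `tonic`."""
    idx = PITCHES.index(tonic)
    scale = [tonic]
    for step in MAJOR_STEPS:
        idx = (idx + step) % 12
        scale.append(PITCHES[idx])
    return scale

print(major_scale("Db")[1])  # second note of the Db major scale -> Eb
```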
# Mode Prompt
# Default
You are a chess player. Given an opening, determine whether the opening is legal. The opening doesn't need to be a good opening. Answer "yes" if all moves are legal. Answer "no" if the opening violates any rules of chess. Is the new opening "1. e4 e6 2. Be2 Bc5" legal? {Let's think step by step}
# CF
You are a chess player. You are playing a chess variant where the starting positions for knights and bishops are swapped. For each color, the knights are placed where the bishops used to be and the bishops are placed where the knights used to be. Given an opening, determine whether the opening is legal. The opening doesn't need to be a good opening. Answer "yes" if all moves are legal. Answer "no" if the opening violates any rules of chess. Under the custom variant, is the new opening "1. e4 e6 2. Nfe2 Nc5" legal? {Let's think step by step}
Table 14: Prompts for the chess task. {Let's think step by step} is added only if 0-shot CoT is used.
# Mode
# Prompt
You are a chess player. Question: The two bishops on the board should be initially at which squares? Answer: {Let's think step by step}
You are a chess player. You are playing a chess variant where the starting positions for knights and bishops are swapped. For each color, the knights are placed where the bishops used to be and the bishops are placed where the knights used to be. Question: In this chess variant, the two bishops on the board should be initially at which squares? Answer: {Let's think step by step}
Table 15: CCC prompts for the chess task. {Let's think step by step} is added only if 0-shot CoT is used.
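In this variant the bishops start on the knights' usual squares, which can be read off by swapping the two piece types on the standard back rank. A minimal sketch over white's rank 1 (names illustrative, not the paper's code):

```python
FILES = "abcdefgh"
STANDARD = ["R", "N", "B", "Q", "K", "B", "N", "R"]  # white's rank-1 layout
SWAPPED = [{"N": "B", "B": "N"}.get(p, p) for p in STANDARD]  # knights <-> bishops

def squares_of(piece: str, layout: list, rank: str) -> list:
    """Squares on `rank` occupied by `piece` under the given back-rank layout."""
    return [f + rank for f, p in zip(FILES, layout) if p == piece]

print(squares_of("B", STANDARD, "1"))  # standard bishops: ['c1', 'f1']
print(squares_of("B", SWAPPED, "1"))   # variant bishops:  ['b1', 'g1']
```

The same swap applies to black's rank 8, giving b8 and g8 for the other pair of bishops.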
Mode Prompt
Default
You will be shown 12 cards. Each card has a figure and a number. A figure is a combination of a color, a shape, and a fill. Set of colors are: red , green , blue . Set of shapes are: squiggle , diamond , oval . Set of fills are: solid , striped , open . "THE RULE OF THE GAME" A GAME-SET is a set of three cards: For each attribute, (color, shape, fill, number), the three cards should either be ALL the SAME or NONE the SAME (=ALL DIFFERENT, e.g. if 2 of the cards have the same value, and 1 of them has a different value, the set is NOT valid; for example, (blue, green, blue) is MIXED and does not satisfy any of the rule, whereas (oval, diamond, squiggle) is all different. Here is the board: (2 | green | oval | open) ... [truncated] You can pick a set by typing the cards in the below format: First card: CARD1 Second card: CARD2 Third card: CARD3 Now remember the rule and tell me which three cards here constitute a GAME-SET in the same format. I will give you 2 cards as a hint, and you tell me the third one. First card: (2 | green | oval |
You will be shown 12 cards. Each card has a figure and a number. A figure is a combination of a color, a shape, and a fill. Set of colors are: red , green , blue . Set of shapes are: squiggle , diamond , oval .
Set of fills are: solid , striped , open .
# CF
"THE RULE OF THE GAME" (This is not the original SET game. It has a tweaked rule.) In this version, a GAME-SET is a set of three cards: - For each figure attribute except the number (color, shape, fill), the three cards should either be ALL the SAME or NONE the SAME (=ALL DIFFERENT, e.g. if 2 of the cards have the same value, and 1 of them has a different value, the set is NOT valid; for example, (blue, green, blue) is MIXED and does not satisfy any of the rule, whereas (oval, diamond, squiggle) is all different. - But only for the number attribute, 2 of the cards should have the same number, and 1 of them should have a different number in order for the set to be valid.
Here is the board: (2 | green | oval | open) ... [truncated]
You can pick a set by typing the cards in the below format: First card: CARD1 Second card: CARD2 Third card: CARD3 Now remember the rule and tell me which three cards here constitute a GAME-SET in the same format. I will give you 2 cards as a hint, and you tell me the third one. First card: (2 | green | oval | open) Second card: (1 | green | diamond | solid) {Let's think step by step.}
Table 16: Prompts for the SET task. {Let's think step by step} is added only if 0-shot CoT is used.
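Both SET rules above are mechanically checkable: the default rule requires every attribute to be all-same or all-different across the three cards, while the counterfactual rule instead demands that exactly two cards share a number. A sketch of both checks (an illustration, not the paper's grader):

```python
from collections import Counter

def all_same_or_all_diff(values) -> bool:
    # valid when the three values are identical or pairwise distinct
    return len(set(values)) in (1, len(values))

def is_game_set(cards, counterfactual: bool = False) -> bool:
    """cards: three (number, color, shape, fill) tuples."""
    numbers, colors, shapes, fills = zip(*cards)
    figure_ok = all(all_same_or_all_diff(a) for a in (colors, shapes, fills))
    if counterfactual:
        # tweaked rule: exactly two of the three cards share a number
        number_ok = sorted(Counter(numbers).values()) == [1, 2]
    else:
        number_ok = all_same_or_all_diff(numbers)
    return figure_ok and number_ok

cards = [(1, "blue", "oval", "striped"),
         (2, "red", "squiggle", "striped"),
         (3, "green", "diamond", "striped")]
print(is_game_set(cards))                       # True under the default rule
print(is_game_set(cards, counterfactual=True))  # False: no two numbers match
```

The example triple is the one used in the CCC prompts of Table 17.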
Mode Prompt
Each card has a figure and a number. A figure is a combination of a color, a shape, and a fill. Set of colors are: red , green , blue . Set of shapes are: squiggle , diamond , oval . Set of fills are: solid , striped , open . âTHE RULE OF THE GAMEâ A GAME-SET is set of three cards: For each attribute, (color, shape, fill, number), the three cards should either be ALL the SAME or NONE the SAME (=ALL DIFFERENT, e.g. if 2 of the cards have the same value, and 1 of them has a different value, the set is NOT valid; for example, (blue, green, blue) is MIXED and does not satisfy any of the rule, whereas (oval, diamond, squiggle) is all different. I will give you three cards from the board, and you will tell me whether this constitutes a GAME-SET. First card: (1 | blue | oval | striped) Second card: (2 | red | squiggle | striped) Third card: (3 | green | diamond | striped) Is this a GAME-SET? {Answer with yes or no in the last line. Letâs verify rules for each attribute step-by-step:}
# Default
# CF
Each card has a figure and a number. A figure is a combination of a color, a shape, and a fill. Set of colors are: red , green , blue . Set of shapes are: squiggle , diamond , oval . Set of fills are: solid , striped , open . "THE RULE OF THE GAME" (This is not the original SET game. It has a tweaked rule.) In this version, a GAME-SET is a set of three cards: - For each figure attribute except the number (color, shape, fill), the three cards should either be ALL the SAME or NONE the SAME (=ALL DIFFERENT, e.g. if 2 of the cards have the same value, and 1 of them has a different value, the set is NOT valid; for example, (blue, green, blue) is MIXED and does not satisfy any of the rule, whereas (oval, diamond, squiggle) is all different. - But only for the number attribute, 2 of the cards should have the same number, and 1 of them should have a different number in order for the set to be valid.
I will give you three cards from the board, and you will tell me whether this constitutes a GAME-SET.
First card: (1 | blue | oval | striped) Second card: (2 | red | squiggle | striped) Third card: (3 | green | diamond | striped)
Is this a GAME-SET? {Answer with yes or no in the last line. Let's verify rules for each attribute step-by-step:}
Table 17: CCC prompts for the SET experiments. {bold text} is added only if 0-shot CoT is used. Note that we removed the board information for simplicity as it is not required for this CCC test.
             Tests, w/o 0-CoT                   Tests, w/ 0-CoT                   CCC
Base         8      9      10     11     16     8      9      10     11     16    8      9      10     11
# instances  1,000                                                                200
GPT-4        82.3   23.4   100.0  38.4   63.0   60.2   38.6   98.2   56.5   74.0  98.0   90.0   100.0  91.0
GPT-3.5      8.3    6.6    100.0  3.8    17.7   12.6   9.8    99.0   2.7    17.7  96.5   77.0   100.0  56.0
Claude       22.3   0.2    99.8   6.6    32.4   1.4    0.9    98.7   4.0    6.6   64.5   47.5   100.0  41.0
PaLM-2       6.4    2.2    98.7   3.4    23.4   1.1    0.6    82.2   0.5    1.2   51.5   53.5   100.0  72.0

Table 18: Results for the arithmetic task (in accuracy; %).
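The counterfactual bases above can be checked mechanically: two-digit addition carries differently once the base changes. A minimal sketch (the helper below is ours, not from the paper's evaluation code):

```python
def add_in_base(a: str, b: str, base: int) -> str:
    """Add two numbers written in the given base; return the sum in that base."""
    total = int(a, base) + int(b, base)
    digits = "0123456789abcdef"
    out = ""
    while total:
        out = digits[total % base] + out
        total //= base
    return out or "0"

# The same surface expression has different answers under different bases:
print(add_in_base("57", "36", 9))   # 104 (since 52 + 33 = 85 = 1*81 + 0*9 + 4)
print(add_in_base("57", "36", 10))  # 93
```

This is exactly the kind of shift the counterfactual condition probes: the procedure (column addition with carries) is unchanged, only the default assumption (base 10) moves.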
# digits     2      3      4      2      2      2      2      2
# shots      0      0      0      1      2      4      8      16
Base 8       60.2   56.8   24.0   97.3   99.1   99.4   99.7   99.9
Base 9       38.6   32.2   14.6   48.1   67.0   79.7   85.8   88.4
Base 10      98.2   87.1   83.4   99.7   99.9   99.9   100.0  99.9
Base 11      56.5   24.2   8.9    25.7   44.0   68.4   79.6   86.9
Base 16      74.0   33.2   9.1    49.1   57.8   70.6   83.5   88.7

Table 19: Results for the arithmetic task for various analyses in §5 (in accuracy; %). Only for GPT-4 with 0-shot CoT.
             Tests                             CCC
             w/o 0-CoT       w/ 0-CoT          w/o 0-CoT       w/ 0-CoT
             Default  CF     Default  CF       Default  CF     Default  CF
# instances  113      750
GPT-4        58.4     18.6   73.5     24.8     95.3     78.1   99.7     90.9
GPT-3.5      39.8     9.7    54.0     10.6     97.1     21.3   94.1     25.9
Claude       35.4     13.3   36.3     6.2      96.5     31.1   85.1     37.3

Table 20: Results for the programming execution task (in accuracy; %).
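The programming tasks evaluate models under the 1-based indexing variant described earlier: positive indices shift down by one, while negative indices keep ordinary Python semantics. A small sketch (the helper name is ours, not the paper's harness):

```python
def get_1based(seq, i):
    """Index a sequence with 1-based semantics: positive indices are
    shifted down by one; negative indices behave as in standard Python."""
    if i > 0:
        return seq[i - 1]
    if i < 0:
        return seq[i]
    raise IndexError("index 0 is invalid under 1-based indexing")

xs = ["a", "b", "c"]
print(get_1based(xs, 1))   # "a": the first element under 1-based indexing
print(xs[1])               # "b": the same index under ordinary 0-based Python
```

The counterfactual is thus a one-token change in the task description that silently changes the value of almost every indexing expression.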
             HumanEval (All)                    HumanEval (Subset)
             pass@1          pass@10            pass@1          pass@10
             Default  CF     Default  CF        Default  CF     Default  CF
# instances  164                                53
GPT-4        87.4     68.2   95.3     83.4      82.5     40.5   93.3     64.9
GPT-3.5      73.8     41.8   88.4     67.6      68.9     25.1   81.0     45.8
Claude       53.7     39.6   78.1     64.2      47.6     15.7   74.0     41.9
PaLM-2       27.3     20.8   55.8     42.6      29.2     7.4    55.3     21.0

Table 21: Results for the programming generation task (in pass@1 and pass@10; %). We report both the results on the entire HumanEval dataset for comparability with other work, as well as the subset on which evaluating the original program under 1-based indexing would not pass the test cases. Figure 2 only showed the results on this subset.
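The pass@1 and pass@10 numbers are conventionally computed with the unbiased estimator introduced with HumanEval (Chen et al., 2021); the table does not restate the formula, so the sketch below shows the standard estimator rather than a quote of the paper's code:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of k
    samples drawn (without replacement) from n generations passes, given
    that c of the n generations pass. Equals 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # too few failing samples to fill a draw of size k
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=20, c=5, k=1))   # 0.25
```

Computing pass@k this way, from n > k generations per problem, has much lower variance than literally sampling k programs and checking whether any passes.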
Test Accuracy

             w/o 0-CoT                                  w/ 0-CoT
             SVO    SOV    VSO    VOS    OVS    OSV     SVO    SOV    VSO    VOS    OVS    OSV
# instances  100
GPT-4        88.0   63.0   66.0   63.0   68.0   68.0    76.0   66.0   69.0   70.0   70.0   68.0
GPT-3.5      72.0   51.0   60.0   39.0   41.0   64.0    50.0   51.0   63.0   44.0   27.0   51.0
Claude       55.0   58.0   65.0   62.0   59.0   56.0    51.0   57.0   62.0   62.0   62.0   59.0
PaLM-2       40.0   22.0   48.0   28.0   37.0   23.0    35.0   29.0   56.0   29.0   22.0   19.0
CCC

             w/o 0-CoT                                   w/ 0-CoT
             SVO    SOV    VSO    VOS    OVS    OSV      SVO    SOV    VSO    VOS    OVS    OSV
# instances  100
GPT-4        –      100.0  92.0   98.0   100.0  94.0     –      95.0   95.0   99.0   96.0   100.0
GPT-3.5      –      88.0   72.0   25.0   61.0   64.0     –      68.0   40.0   37.0   5.0    31.0
Claude       –      89.0   60.0   97.0   99.0   98.0     –      79.0   91.0   80.0   8.0    87.0
PaLM-2       –      7.0    8.0    96.0   100.0  62.0     –      28.0   29.0   91.0   94.0   70.0
Table 22: Results for the basic syntactic reasoning task (in accuracy; %).
Table 23: Results for the logical reasoning task (in accuracy; %).
Tests Accuracy

             w/o 0-CoT                                          w/ 0-CoT
             Default  S-NS   S-WE   R90    R180   R270   Rand.  Default  S-NS   S-WE   R90    R180   R270   Rand.
# instances  100                                                100
GPT-4        79.0     57.0   29.0   34.0   6.0    22.0   34.0   98.0     71.0   23.0   24.0   9.0    13.0   13.0
GPT-3.5      87.0     56.0   32.0   27.0   12.0   17.0   15.0   82.0     66.0   36.0   27.0   29.0   22.0   22.0
Claude       86.0     51.0   72.0   35.0   45.0   15.0   51.0   85.0     50.0   71.0   30.0   49.0   11.0   39.0
PaLM-2       90.0     88.0   86.0   50.0   93.0   39.0   64.0   84.0     95.0   80.0   38.0   91.0   38.0   54.0
Tests Object-level Accuracy
             w/o 0-CoT                                          w/ 0-CoT
             Default  S-NS   S-WE   R90    R180   R270   Rand.  Default  S-NS   S-WE   R90    R180   R270   Rand.
# instances  100                                                100
GPT-4        86.0     74.3   55.7   56.0   34.0   53.0   61.7   99.0     85.3   57.0   49.7   36.0   46.3   46.0
GPT-3.5      92.3     77.7   62.3   54.7   41.3   42.3   47.0   92.7     82.7   64.0   53.0   50.7   54.7   53.3
Claude       93.7     75.3   87.7   65.7   70.3   46.7   76.0   91.7     74.7   86.3   63.0   73.0   44.0   69.3
PaLM-2       96.3     95.7   94.3   71.3   97.7   64.7   79.7   94.0     98.3   91.0   63.7   96.0   64.0   75.0
CCC
w/o 0-CoT w/ 0-CoT Default S-NS S-WE R90 R180 R270 Rand. Default S-NS S-WE R90 # instances 100 100 GPT-4 GPT-3.5 Claude PaLM-2 100.0 100.0 100.0 100.0 99.0 100.0 100.0 100.0 100.0 92.0 100.0 100.0 100.0 83.0 100.0 100.0 100.0 90.0 100.0 100.0 100.0 100.0 100.0 39.0 100.0 99.0 99.0 100.0 100.0 98.0 100.0 100.0 100.0 89.0 100.0 100.0 100.0 88.0 100.0 100.0 100.0 83.0 100.0 100.0 100.0 78.0 100.0 100.0 100.0 94.0 100.0 59.0
Table 24: Results for the spatial reasoning task (in accuracy; %). The first section (Tests Accuracy) requires all 3 objects to be correctly placed. The second section (Tests Object-level Accuracy) reports accuracy averaged over objects. "S" denotes swapping, "R" denotes rotation, and "Rand." denotes random permutation.
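The swap and rotation variants amount to remapping cardinal directions before interpreting the spatial instructions. A minimal illustration (the helper names below are ours, not from the paper):

```python
DIRS = ["north", "east", "south", "west"]

def rotate(direction: str, degrees: int) -> str:
    """Rotate a cardinal direction clockwise by a multiple of 90 degrees."""
    steps = (degrees // 90) % 4
    return DIRS[(DIRS.index(direction) + steps) % 4]

def swap_ns(direction: str) -> str:
    """The S-NS variant: north and south trade places; east/west unchanged."""
    return {"north": "south", "south": "north"}.get(direction, direction)

print(rotate("north", 90))   # east
print(swap_ns("north"))      # south
```

Under such a remapping the underlying relational reasoning is identical; only the label-to-direction convention changes, which is what the counterfactual isolates.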
Human Evaluation Binary Classification Test Accuracy
Test Accuracy

             w/o 0-CoT                       w/ 0-CoT
             Default  VFlip  R90    R180    Default  VFlip  R90    R180
# instances  100
GPT-4        58.0     30.0   23.0   31.0    53.0     26.0   33.0   28.0
GPT-3.5      20.0     9.0    7.0    8.0     32.0     4.0    6.0    6.0
Claude       19.0     11.0   9.0    8.0     18.0     9.0    13.0   8.0

CCC Accuracy

             w/o 0-CoT                       w/ 0-CoT
             Default  VFlip  R90    R180    Default  VFlip  R90    R180
# instances  100
GPT-4        100.0    99.0   55.0   89.0    100.0    99.0   87.0   87.0
GPT-3.5      78.0     56.0   53.0   44.0    99.0     62.0   43.0   46.0
Claude       100.0    84.0   86.0   57.0    99.0     90.0   99.0   54.0

Table 25: Results for the drawing task (in accuracy; %). VFlip corresponds to vertical flipping, R90 and R180 correspond to rotation by 90 degrees and 180 degrees, respectively.
Accuracy for Objects w/o Canonical Orientation

             w/o 0-CoT                       w/ 0-CoT
             Default  VFlip  R90    R180    Default  VFlip  R90    R180
GPT-4        72.1     68.6   67.3   57.4    76.2     58.0   69.8   63.3
GPT-3.5      31.8     22.0   14.3   13.0    39.0     4.8    7.1    7.0
Claude       51.1     22.9   20.9   23.3    39.5     27.9   34.8   19.1

Accuracy for Objects w/ Canonical Orientation

             w/o 0-CoT                       w/ 0-CoT
             Default  VFlip  R90    R180    Default  VFlip  R90    R180
GPT-4        89.5     44.9   50.0   49.1    93.1     44.0   21.3   41.2
GPT-3.5      30.4     16.0   5.2    3.7     50.8     8.6    3.4    8.8
Claude       34.5     3.8    10.5   5.3     29.8     5.3    25.9   5.7

Table 26: Results for the drawing task, as measured by human evaluation accuracy (%), broken down by objects with or without a canonical orientation as judged by human annotators. If an object has a canonical orientation, such as the house in Figure 7, it is only considered correct if the orientation is correct.
Test Accuracy
CCC

             w/o 0-CoT                                     w/ 0-CoT
             Default  DAD-   FAD-   EBD-   ECD-   ECF-     Default  DAD-   FAD-   EBD-   ECD-   ECF-
# instances  18
GPT-4        100.0    100.0  94.4   94.4   100.0  83.3     100.0    100.0  100.0  100.0  100.0  100.0
GPT-3.5      50.0     27.8   11.1   44.4   50.0   61.1     94.4     66.7   88.9   88.9   94.4   77.8
Claude       77.8     38.9   55.6   55.6   50.0   50.0     100.0    38.9   55.6   55.6   50.0   50.0
PaLM-2       33.3     33.3   33.3   38.9   50.0   38.9     33.3     33.3   50.0   38.9   38.9   38.9

Table 27: Results for the chord fingering task (in accuracy; %): guitar. Default corresponds to EADGBE. Counterfactuals show the first three strings (the remaining three strings, GBE, are the same).
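For the chord-fingering counterfactuals, retuning a string changes which notes a given fingering sounds, so a memorized shape no longer produces the memorized chord. A small sketch (the helper is ours, not from the paper) mapping a fingering to note names under a tuning:

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def fretted_notes(tuning, frets):
    """Map a fingering (one fret number per string, None = muted) to
    note names, given the open-string notes of a tuning."""
    out = []
    for open_note, fret in zip(tuning, frets):
        if fret is None:
            out.append(None)
            continue
        out.append(NOTES[(NOTES.index(open_note) + fret) % 12])
    return out

# The open C-major shape (x32010) sounds only C, E, and G in standard tuning:
print(fretted_notes(["E", "A", "D", "G", "B", "E"], [None, 3, 2, 0, 1, 0]))
```

Under a counterfactual tuning, the same fret pattern would yield a different note set, so a correct answer requires recomputing fingerings rather than recalling them.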
Test Accuracy w/o 0-CoT w/ 0-CoT Default DAD- FAD- EBD- ECD- ECF- Default DAD- FAD- EBD- ECD- # instances 12 maj triad min triad 5 dom7 6 sus4 dim7 aug7 sus2 min7 66.7 58.3 83.3 25.0 25.0 50.0 25.0 0.0 58.3 83.3 50.0 33.3 41.7 25.0 16.7 33.3 0.0 0.0 0.0 25.0 33.3 25.0 33.3 16.7 0.0 0.0 0.0 0.0 8.3 0.0 41.7 41.7 16.7 8.3 16.7 33.3 0.0 0.0 16.7 8.3 8.3 25.0 0.0 25.0 0.0 0.0 8.3 0.0 0.0 8.3 0.0 0.0 0.0 0.0 0.0 0.0 16.7 0.0 0.0 0.0 58.3 58.3 75.0 33.3 33.3 58.3 0.0 8.3 58.3 41.7 41.7 25.0 50.0 16.7 25.0 33.3
Table 28: Results broken down by chords for the chord fingering task as analyzed in §5 (in accuracy; %): guitar, GPT-4. Default corresponds to EADGBE. Counterfactuals show the first three strings (the remaining three strings, GBE, are the same).
Test Accuracy

             w/o 0-CoT                          w/ 0-CoT
             Default  FC-   AC-   BC-   BE-    Default  FC-   AC-   BC-   BE-
# instances  108
GPT-4        39.8     1.9   1.9   2.8   0.9    20.4     2.8   16.7  11.1  10.2
GPT-3.5      14.8     0.0   2.8   3.7   0.0    6.5      1.9   2.8   1.9   0.0
Claude       0.0      1.9   0.0   0.0   0.9    6.5      0.0   2.8   1.9   2.8
PaLM-2       0.0      0.0   0.0   0.0   0.0    0.9      0.0   1.9   0.9   0.0
CCC

             w/o 0-CoT                            w/ 0-CoT
             Default  FC-    AC-    BC-   BE-     Default  FC-    AC-    BC-   BE-
# instances  12
GPT-4        100.0    100.0  100.0  100.0 100.0   100.0    100.0  100.0  100.0 100.0
GPT-3.5      33.3     41.7   91.7   41.7  100.0   91.7     83.3   100.0  75.0  75.0
Claude       58.3     50.0   50.0   41.7  33.3    100.0    100.0  75.0   66.7  50.0
PaLM-2       41.7     41.7   41.7   50.0  33.3    41.7     50.0   41.7   58.3  33.3

Table 29: Results for the chord fingering task (in accuracy; %): ukulele. Default corresponds to GCEA. Counterfactuals show the first two strings (the remaining two strings, EA, are the same).
             Tests                             CCC
             w/o 0-CoT       w/ 0-CoT          w/o 0-CoT       w/ 0-CoT
             Default  CF     Default  CF       Default  CF     Default  CF
# instances  56       1120   56       1120     7        140    7        140
GPT-4        48.2     20.4   64.3     44.9     100.0    87.1   100.0    94.3
GPT-3.5      17.9     17.4   44.6     21.8     100.0    70.0   100.0    95.7
Claude       17.9     17.2   16.1     20.1     100.0    74.3   100.0    80.7
PaLM-2       28.6     19.5   19.6     19.3     100.0    48.6   100.0    47.1

Table 30: Results for the melody retrieval task (in accuracy; %). Default corresponds to C major, and CF corresponds to other keys.
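The CF condition moves melodies out of C major; transposition is a constant semitone shift of every pitch class. A minimal sketch (the helper is ours, not the paper's code):

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def transpose(melody, semitones):
    """Shift every note of a melody by a fixed number of semitones,
    e.g. to move a C-major line into another key."""
    return [NOTES[(NOTES.index(n) + semitones) % 12] for n in melody]

# A C-major triad moved up 2 semitones lands in D major:
print(transpose(["C", "E", "G"], 2))   # ['D', 'F#', 'A']
```

Because the interval structure is preserved exactly, a model that reasons over intervals should be key-invariant; the Default-vs-CF gap measures how far behavior departs from that.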
       Tests                             CCC
       w/o 0-CoT       w/ 0-CoT          w/o 0-CoT       w/ 0-CoT
       Default  CF     Default  CF       Default  CF     Default  CF
# inst 8        160    8        160      1        20     1        20
n = 1  87.5     63.1   87.5     70.0     100.0    95.0   100.0    95.0
n = 2  87.5     10.6   62.5     46.9     100.0    95.0   100.0    100.0
n = 3  12.5     11.9   62.5     51.2     100.0    80.0   100.0    90.0
n = 4  25.0     21.2   62.5     42.5     100.0    100.0  100.0    85.0
n = 5  50.0     3.1    37.5     35.0     100.0    95.0   100.0    100.0
n = 6  37.5     5.0    75.0     38.8     100.0    75.0   100.0    95.0
n = 7  37.5     27.5   62.5     30.0     100.0    70.0   100.0    95.0

Table 31: Results broken down by n for the melody retrieval task as analyzed in §5 (in accuracy; %): GPT-4. Default corresponds to C major, and CF corresponds to other keys.
2307.02477 | 190 | Tests CCC w/o 0-CoT w/ 0-CoT w/o 0-CoT w/ 0-CoT Default CF Default CF Default CF Default CF # instances 400 120 GPT-4 GPT-3.5 Claude PaLM-2 73.8 60.1 59.2 48.0 50.0 54.4 50.0 51.5 87.8 60.2 60.2 48.5 53.6 48.9 50.4 51.2 100.0 85.8 100.0 61.7 100.0 90.0 50.0 23.3 100.0 91.7 76.7 50.0 100.0 93.3 100.0 23.3
Table 32: Results for the chess task with 4 moves (in accuracy; %). CF refers to the setting where the initial positions of knights and bishops are swapped. We generate a balanced classification problem with 400 openings via procedure generation. | 2307.02477#190 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
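The chess counterfactual in Table 32 (knights and bishops swap starting squares) can be illustrated with a deliberately simplified sketch. Only White's minor pieces are modeled here and full chess legality is not checked, so this illustrates the changed premise, not the paper's evaluation code:

```python
# Default back rank vs. the counterfactual start where knights and bishops swap.
DEFAULT_START = {"b1": "N", "g1": "N", "c1": "B", "f1": "B"}
CF_START = {"b1": "B", "g1": "B", "c1": "N", "f1": "N"}

def piece_on_from_square(move, start):
    """For a move written like 'Nb1c3', check the named piece really starts there."""
    piece, from_sq = move[0], move[1:3]
    return start.get(from_sq) == piece

opening_move = "Nb1c3"  # routine knight development in the default game
default_legal = piece_on_from_square(opening_move, DEFAULT_START)  # True
cf_legal = piece_on_from_square(opening_move, CF_START)            # False: b1 holds a bishop
```

Under the swapped start, memorized opening lines stop being legal, which is exactly what makes the variant a counterfactual probe.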
2307.02477 | 191 |
            Tests (100 instances)          CCC (100 instances)
            w/o 0-CoT     w/ 0-CoT         w/o 0-CoT      w/ 0-CoT
            Default CF    Default CF       Default CF     Default CF
GPT-4       100.0  21.0   100.0  61.0      89.0   74.0    100.0  96.0
GPT-3.5     73.0   4.0    37.0   7.0       68.0   55.0    77.0   78.0
Claude      55.0   21.0   64.0   35.0      92.0   62.0    59.0   65.0
PaLM-2      55.0   17.0   62.0   13.0      67.0   68.0    47.0   46.0
Table 33: Results for the SET game (in accuracy; %).
Tests (100 instances)
            Default CF     Default CF     Default CF
            (c=1)   (c=1)  (c=2)   (c=2)  (c=3)   (c=3)
GPT-4       100.0   61.0   24.0    6.0    15.0    3.0
GPT-3.5     37.0    7.0    7.0     0.0    1.0     0.0
Claude      64.0    35.0   10.0    4.0    5.0     1.0
PaLM-2      62.0    13.0   10.0    1.0    3.0     1.0 | 2307.02477#191 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
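The default SET rule behind Table 33 is easy to state and check in code. The sketch below encodes only the standard rule (per attribute, the three values must be all equal or all distinct) with invented example cards; it does not reproduce the paper's counterfactual variant or its data:

```python
def is_set(cards):
    """Three cards form a SET iff every attribute is all-same or all-different."""
    return all(len(set(values)) in (1, 3) for values in zip(*cards))

# Cards as (number, color, shading, shape) tuples over the value set {0, 1, 2}.
card_a = (0, 0, 0, 0)
card_b = (1, 1, 1, 1)
card_c = (2, 2, 2, 2)
card_d = (0, 0, 1, 2)

valid = is_set([card_a, card_b, card_c])    # every attribute all-different -> True
invalid = is_set([card_a, card_b, card_d])  # first attribute is 0, 1, 0 -> False
```

Because the rule is a pure per-attribute check, `zip(*cards)` transposes the three cards into attribute columns and each column is tested independently.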
2307.01848 | 0 |
# Embodied Task Planning with Large Language Models
Zhenyu Wu1, Ziwei Wang2,3, Xiuwei Xu2,3, Jiwen Lu2,3, Haibin Yan1* 1School of Automation, Beijing University of Posts and Telecommunications, China 2Department of Automation, Tsinghua University, China 3Beijing National Research Center for Information Science and Technology, China {wuzhenyu, eyanhaibin}@bupt.edu.cn; [email protected]; [email protected]; [email protected] https://gary3410.github.io/TaPA | 2307.01848#0 | Embodied Task Planning with Large Language Models | Equipping embodied agents with commonsense is important for robots to
successfully complete complex human instructions in general environments.
Recent large language models (LLM) can embed rich semantic knowledge for agents
in plan generation of complex tasks, while they lack the information about the
realistic world and usually yield infeasible action sequences. In this paper,
we propose a TAsk Planing Agent (TaPA) in embodied tasks for grounded planning
with physical scene constraint, where the agent generates executable plans
according to the existed objects in the scene by aligning LLMs with the visual
perception models. Specifically, we first construct a multimodal dataset
containing triplets of indoor scenes, instructions and action plans, where we
provide the designed prompts and the list of existing objects in the scene for
GPT-3.5 to generate a large number of instructions and corresponding planned
actions. The generated data is leveraged for grounded plan tuning of
pre-trained LLMs. During inference, we discover the objects in the scene by
extending open-vocabulary object detectors to multi-view RGB images collected
in different achievable locations. Experimental results show that the generated
plan from our TaPA framework can achieve higher success rate than LLaVA and
GPT-3.5 by a sizable margin, which indicates the practicality of embodied task
planning in general and complex environments. | http://arxiv.org/pdf/2307.01848 | Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, Haibin Yan | cs.CV, cs.AI, cs.RO | Project Page: https://gary3410.github.io/TaPA | null | cs.CV | 20230704 | 20230704 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1712.05474"
},
{
"id": "2302.04761"
},
{
"id": "2306.08640"
},
{
"id": "2112.12143"
},
{
"id": "2305.15964"
},
{
"id": "2304.14178"
},
{
"id": "2203.12601"
},
{
"id": "2304.08485"
},
{
"id": "2305.15072"
},
{
"id": "2304.10592"
},
{
"id": "2305.05658"
},
{
"id": "2301.12597"
},
{
"id": "2305.18279"
},
{
"id": "2106.00188"
},
{
"id": "2305.03726"
},
{
"id": "2303.16199"
},
{
"id": "2304.03277"
},
{
"id": "2106.09685"
},
{
"id": "2305.04160"
},
{
"id": "2006.07185"
},
{
"id": "2303.12712"
},
{
"id": "2305.03716"
},
{
"id": "2305.16103"
},
{
"id": "2212.04088"
},
{
"id": "2306.09093"
},
{
"id": "2306.00890"
}
] |
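The TaPA abstract above describes discovering scene objects by running an open-vocabulary detector on multi-view RGB images and handing the object list to the planner. A minimal sketch of that aggregation step follows; `detect` is a hypothetical stand-in for a real detector, and the prompt format is invented for illustration:

```python
# Union confident detections across viewpoints, then ground the planning prompt.

def aggregate_scene_objects(views, detect, threshold=0.5):
    """Sorted union of object names detected with score >= threshold."""
    names = set()
    for image in views:
        for name, score in detect(image):
            if score >= threshold:
                names.add(name)
    return sorted(names)

def build_plan_prompt(instruction, scene_objects):
    return (f"Objects in the scene: {', '.join(scene_objects)}.\n"
            f"Instruction: {instruction}\nPlan:")

# Stub detector: each "image" here is already a list of (name, score) pairs.
stub_detect = lambda image: image
views = [[("mug", 0.9), ("bottle", 0.8)],
         [("mug", 0.95), ("plant", 0.4)]]  # the low-score plant is filtered out

objects = aggregate_scene_objects(views, stub_detect)  # ['bottle', 'mug']
prompt = build_plan_prompt("Give me some wine", objects)
```

Grounding the prompt in the detected object list is what lets the planner prefer the existing mug over a non-existent glass in the abstract's wine example.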
2307.02502 | 0 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics. Melanie Swan (a), Takashi Kido (b), Eric Roland (c), Renato P. dos Santos (d)
(a) DIYgenomics.org; University College London (Research Associate). (b) Advanced Comprehensive Research Organization, Teikyo University; Preferred Networks, Inc. (c) RedBud AI, LLC. (d) Centre for Generative AI in Cognition and Education, Lutheran University of Brazil
Abstract: The innovation in generative AI could be further accelerated with more-readily usable and evaluable mathematics as part of the computational infrastructure. Beyond human-AI chat interaction, LLM (large language model)-based mathematical analysis tools are emerging in software programming, algorithm discovery, and automated theorem proving, but have not yet been widely applied to genomics. Towards disease-solving, this work introduces Math Agents and the mathematical embedding (vector-space representation of an equation as a data string) as new "Moore's Law of Mathematics" entries. The project consists of a GPT-based workflow to extract equations from published literature PDFs with Mathpix OCR and process them into LaTeX and Python embeddings. There are many ways to represent equations digitally, but few automated means for evaluating large bodies of equations (mathematical ecologies/mathscapes). | 2307.02502#0 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics | The advancement in generative AI could be boosted with more accessible
mathematics. Beyond human-AI chat, large language models (LLMs) are emerging in
programming, algorithm discovery, and theorem proving, yet their genomics
application is limited. This project introduces Math Agents and mathematical
embedding as fresh entries to the "Moore's Law of Mathematics", using a
GPT-based workflow to convert equations from literature into LaTeX and Python
formats. While many digital equation representations exist, there's a lack of
automated large-scale evaluation tools. LLMs are pivotal as linguistic user
interfaces, providing natural language access for human-AI chat and formal
languages for large-scale AI-assisted computational infrastructure. Given the
infinite formal possibility spaces, Math Agents, which interact with math,
could potentially shift us from "big data" to "big math". Math, unlike the more
flexible natural language, has properties subject to proof, enabling its use
beyond traditional applications like high-validation math-certified icons for
AI alignment aims. This project aims to use Math Agents and mathematical
embeddings to address the ageing issue in information systems biology by
applying multiscalar physics mathematics to disease models and genomic data.
Generative AI with episodic memory could help analyse causal relations in
longitudinal health records, using SIR Precision Health models. Genomic data is
suggested for addressing the unsolved Alzheimer's disease problem. | http://arxiv.org/pdf/2307.02502 | Melanie Swan, Takashi Kido, Eric Roland, Renato P. dos Santos | q-bio.OT, cs.AI, cs.CL, 68R12, I.2; J.3 | null | null | q-bio.OT | 20230704 | 20230704 | [
{
"id": "1601.00257"
},
{
"id": "2306.15626"
},
{
"id": "2306.09328"
},
{
"id": "2306.15794"
},
{
"id": "2107.05151"
}
] |
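The "mathematical embedding" idea above (equations represented as vector-space data strings so that bodies of equations can be compared at scale) can be illustrated with a deliberately simple stand-in. A bag-of-character-trigrams vector over LaTeX strings is an assumption for demonstration only, not the GPT/Mathpix pipeline the abstract describes:

```python
from collections import Counter
from math import sqrt

def embed(latex, n=3):
    """Sparse bag-of-character-n-grams vector for a LaTeX equation string."""
    return Counter(latex[i:i + n] for i in range(len(latex) - n + 1))

def cosine(u, v):
    dot = sum(count * v[gram] for gram, count in u.items() if gram in v)
    norm = lambda w: sqrt(sum(c * c for c in w.values()))
    return dot / (norm(u) * norm(v))

# Two related SIR-model equations and one unrelated equation.
sir_ds = r"\frac{dS}{dt} = -\beta S I"
sir_di = r"\frac{dI}{dt} = \beta S I - \gamma I"
pythagoras = r"a^2 + b^2 = c^2"

sim_related = cosine(embed(sir_ds), embed(sir_di))
sim_unrelated = cosine(embed(sir_ds), embed(pythagoras))
# The related equations embed closer together, the kind of signal a
# Math Agent could use to cluster "mathematical ecologies" of equations.
```

Any real pipeline would replace `embed` with a learned embedding model; the point of the sketch is only that equations-as-strings admit vector comparison.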
2307.01848 | 1 | Abstract: Equipping embodied agents with commonsense is important for robots to successfully complete complex human instructions in general environments. Recent large language models (LLM) can embed rich semantic knowledge for agents in plan generation of complex tasks, while they lack the information about the realistic world and usually yield infeasible action sequences. In this paper, we propose a TAsk Planing Agent (TaPA) in embodied tasks for grounded planning with physical scene constraint, where the agent generates executable plans according to the existed objects in the scene by aligning LLMs with the visual perception models. Specifically, we first construct a multimodal dataset containing triplets of indoor scenes, instructions and action plans, where we provide the designed prompts and the list of existing objects in the scene for GPT-3.5 to generate a large number of instructions and corresponding planned actions. The generated data is leveraged for grounded plan tuning of pre-trained LLMs. During inference, we discover the objects in the scene by extending open-vocabulary object detectors to multi-view RGB images collected in different achievable locations. Experimental results show that the generated plan from our TaPA framework can achieve higher success rate than LLaVA and GPT-3.5 by a sizable margin, which indicates the practicality of embodied task planning in general and complex environments.
Keywords: Embodied task planning, large language models, open-vocabulary detection
# Introduction | 2307.01848#1 | Embodied Task Planning with Large Language Models | Equipping embodied agents with commonsense is important for robots to
successfully complete complex human instructions in general environments.
Recent large language models (LLM) can embed rich semantic knowledge for agents
in plan generation of complex tasks, while they lack the information about the
realistic world and usually yield infeasible action sequences. In this paper,
we propose a TAsk Planing Agent (TaPA) in embodied tasks for grounded planning
with physical scene constraint, where the agent generates executable plans
according to the existed objects in the scene by aligning LLMs with the visual
perception models. Specifically, we first construct a multimodal dataset
containing triplets of indoor scenes, instructions and action plans, where we
provide the designed prompts and the list of existing objects in the scene for
GPT-3.5 to generate a large number of instructions and corresponding planned
actions. The generated data is leveraged for grounded plan tuning of
pre-trained LLMs. During inference, we discover the objects in the scene by
extending open-vocabulary object detectors to multi-view RGB images collected
in different achievable locations. Experimental results show that the generated
plan from our TaPA framework can achieve higher success rate than LLaVA and
GPT-3.5 by a sizable margin, which indicates the practicality of embodied task
planning in general and complex environments. | http://arxiv.org/pdf/2307.01848 | Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, Haibin Yan | cs.CV, cs.AI, cs.RO | Project Page: https://gary3410.github.io/TaPA | null | cs.CV | 20230704 | 20230704 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1712.05474"
},
{
"id": "2302.04761"
},
{
"id": "2306.08640"
},
{
"id": "2112.12143"
},
{
"id": "2305.15964"
},
{
"id": "2304.14178"
},
{
"id": "2203.12601"
},
{
"id": "2304.08485"
},
{
"id": "2305.15072"
},
{
"id": "2304.10592"
},
{
"id": "2305.05658"
},
{
"id": "2301.12597"
},
{
"id": "2305.18279"
},
{
"id": "2106.00188"
},
{
"id": "2305.03726"
},
{
"id": "2303.16199"
},
{
"id": "2304.03277"
},
{
"id": "2106.09685"
},
{
"id": "2305.04160"
},
{
"id": "2006.07185"
},
{
"id": "2303.12712"
},
{
"id": "2305.03716"
},
{
"id": "2305.16103"
},
{
"id": "2212.04088"
},
{
"id": "2306.09093"
},
{
"id": "2306.00890"
}
] |
2307.02502 | 1 | The important result of LLMs is that they are a linguistic user interface, a language-based access tool, via natural language for human-AI chat, but more extensively, via formal languages for at-scale AI-aided build-out of the computational infrastructure. AI tools are suggested as, although the possibility space of natural language is relatively finite, formal possibility spaces are infinite (e.g. the programmatic space of algorithms, the mathematics space of theorems, and the computational complexity space of quantum-classical-relativistic classes).
Whereas humans interact with natural language, Math Agents interact with math, the implication of which could be a shift from "big data" to "big math" as a higher-order lever for interacting with reality. Natural language as a language is flexible and open to contextual interpretation; mathematics as a language has well-formedness properties subject to proof. Hence, mathematical use cases beyond math-as-math could include high-validation math-certified icons (by analogy to green seals) towards AI alignment aims of serving humanity in the broadest possible ways. | 2307.02502#1 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics | The advancement in generative AI could be boosted with more accessible
mathematics. Beyond human-AI chat, large language models (LLMs) are emerging in
programming, algorithm discovery, and theorem proving, yet their genomics
application is limited. This project introduces Math Agents and mathematical
embedding as fresh entries to the "Moore's Law of Mathematics", using a
GPT-based workflow to convert equations from literature into LaTeX and Python
formats. While many digital equation representations exist, there's a lack of
automated large-scale evaluation tools. LLMs are pivotal as linguistic user
interfaces, providing natural language access for human-AI chat and formal
languages for large-scale AI-assisted computational infrastructure. Given the
infinite formal possibility spaces, Math Agents, which interact with math,
could potentially shift us from "big data" to "big math". Math, unlike the more
flexible natural language, has properties subject to proof, enabling its use
beyond traditional applications like high-validation math-certified icons for
AI alignment aims. This project aims to use Math Agents and mathematical
embeddings to address the ageing issue in information systems biology by
applying multiscalar physics mathematics to disease models and genomic data.
Generative AI with episodic memory could help analyse causal relations in
longitudinal health records, using SIR Precision Health models. Genomic data is
suggested for addressing the unsolved Alzheimer's disease problem. | http://arxiv.org/pdf/2307.02502 | Melanie Swan, Takashi Kido, Eric Roland, Renato P. dos Santos | q-bio.OT, cs.AI, cs.CL, 68R12, I.2; J.3 | null | null | q-bio.OT | 20230704 | 20230704 | [
{
"id": "1601.00257"
},
{
"id": "2306.15626"
},
{
"id": "2306.09328"
},
{
"id": "2306.15794"
},
{
"id": "2107.05151"
}
] |
2307.01848 | 2 | Keywords: Embodied task planning, large language models, open-vocabulary detection
# Introduction
Equipping embodied agents with general commonsense knowledge to accomplish complex tasks based on the natural language commands is desirable in many applications such as domestic service [1], medical treatment [2, 3, 4] and agricultural picking [5, 6]. Due to the limited training samples and diverse tasks in downstream applications, directly training an embodied agent across different deployment scenarios is infeasible. Recent progress in large language models (LLMs) [7, 8, 9, 10] acquires rich commonsense knowledge from the vast web data, whose knowledge can be potentially leveraged by embodied agents to generate action plans for human requirements represented in natural language.
However, LLMs cannot perceive the surrounding scenes and may generate inexecutable actions due to the requirement of interacting with non-existed objects. For example, given the human command "Give me some wine", the generated action steps from GPT-3.5 are "pouring wine from the bottle to the glass". There may be only mugs instead of glasses in the realistic scenes, and the executable actions should be "pouring wine from the bottle to the mug". Therefore, grounding the task plan generated by LLMs to the physical world is necessary to construct embodied agents for complex task accomplishment. | 2307.01848#2 | Embodied Task Planning with Large Language Models | Equipping embodied agents with commonsense is important for robots to
successfully complete complex human instructions in general environments.
Recent large language models (LLM) can embed rich semantic knowledge for agents
in plan generation of complex tasks, while they lack the information about the
realistic world and usually yield infeasible action sequences. In this paper,
we propose a TAsk Planing Agent (TaPA) in embodied tasks for grounded planning
with physical scene constraint, where the agent generates executable plans
according to the existed objects in the scene by aligning LLMs with the visual
perception models. Specifically, we first construct a multimodal dataset
containing triplets of indoor scenes, instructions and action plans, where we
provide the designed prompts and the list of existing objects in the scene for
GPT-3.5 to generate a large number of instructions and corresponding planned
actions. The generated data is leveraged for grounded plan tuning of
pre-trained LLMs. During inference, we discover the objects in the scene by
extending open-vocabulary object detectors to multi-view RGB images collected
in different achievable locations. Experimental results show that the generated
plan from our TaPA framework can achieve higher success rate than LLaVA and
GPT-3.5 by a sizable margin, which indicates the practicality of embodied task
planning in general and complex environments. | http://arxiv.org/pdf/2307.01848 | Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, Haibin Yan | cs.CV, cs.AI, cs.RO | Project Page: https://gary3410.github.io/TaPA | null | cs.CV | 20230704 | 20230704 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1712.05474"
},
{
"id": "2302.04761"
},
{
"id": "2306.08640"
},
{
"id": "2112.12143"
},
{
"id": "2305.15964"
},
{
"id": "2304.14178"
},
{
"id": "2203.12601"
},
{
"id": "2304.08485"
},
{
"id": "2305.15072"
},
{
"id": "2304.10592"
},
{
"id": "2305.05658"
},
{
"id": "2301.12597"
},
{
"id": "2305.18279"
},
{
"id": "2106.00188"
},
{
"id": "2305.03726"
},
{
"id": "2303.16199"
},
{
"id": "2304.03277"
},
{
"id": "2106.09685"
},
{
"id": "2305.04160"
},
{
"id": "2006.07185"
},
{
"id": "2303.12712"
},
{
"id": "2305.03716"
},
{
"id": "2305.16103"
},
{
"id": "2212.04088"
},
{
"id": "2306.09093"
},
{
"id": "2306.00890"
}
] |
2307.02502 | 2 | The current project develops a theoretical model for the deployment of Math Agents and mathematical embeddings to the information systems biology problem of aging, applying multiscalar physics mathematics (elucidating near-far entropic correlations in systems) to disease model mathematics and whole-human genomic data. Generative AI with episodic memory (per file dating/time-stamping) could assess causal relations in longitudinal personal health dossiers, deployed via SIR (sustaining, intervening, recovering) compartmental Precision Health models. In the short term, genomic variant and expression data is indicated for practical application to the unresolved challenge of Alzheimer's disease as a top-five human pathology with no survivors.
Keywords: math agent, mathematical embedding, equation cluster, mathematical ecology, LLMs, generative AI, cognitive architecture, computational infrastructure, human-AI entities, genomics, information system biology, Alzheimer's disease, personal health dossier, SIR, precision health
| 2307.02502#2 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics | The advancement in generative AI could be boosted with more accessible
mathematics. Beyond human-AI chat, large language models (LLMs) are emerging in
programming, algorithm discovery, and theorem proving, yet their genomics
application is limited. This project introduces Math Agents and mathematical
embedding as fresh entries to the "Moore's Law of Mathematics", using a
GPT-based workflow to convert equations from literature into LaTeX and Python
formats. While many digital equation representations exist, there's a lack of
automated large-scale evaluation tools. LLMs are pivotal as linguistic user
interfaces, providing natural language access for human-AI chat and formal
languages for large-scale AI-assisted computational infrastructure. Given the
infinite formal possibility spaces, Math Agents, which interact with math,
could potentially shift us from "big data" to "big math". Math, unlike the more
flexible natural language, has properties subject to proof, enabling its use
beyond traditional applications like high-validation math-certified icons for
AI alignment aims. This project aims to use Math Agents and mathematical
embeddings to address the ageing issue in information systems biology by
applying multiscalar physics mathematics to disease models and genomic data.
Generative AI with episodic memory could help analyse causal relations in
longitudinal health records, using SIR Precision Health models. Genomic data is
suggested for addressing the unsolved Alzheimer's disease problem. | http://arxiv.org/pdf/2307.02502 | Melanie Swan, Takashi Kido, Eric Roland, Renato P. dos Santos | q-bio.OT, cs.AI, cs.CL, 68R12, I.2; J.3 | null | null | q-bio.OT | 20230704 | 20230704 | [
{
"id": "1601.00257"
},
{
"id": "2306.15626"
},
{
"id": "2306.09328"
},
{
"id": "2306.15794"
},
{
"id": "2107.05151"
}
] |
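The SIR compartmental form referenced above (repurposed in the text as sustaining/intervening/recovering for precision health) is, in its classic epidemiological reading, a small ODE system. A forward-Euler sketch with invented parameter values:

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One forward-Euler step of dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I,
    dR/dt = gamma*I, with the total population normalized to 1."""
    ds = -beta * s * i
    di = beta * s * i - gamma * i
    dr = gamma * i
    return s + ds * dt, i + di * dt, r + dr * dt

s, i, r = 0.99, 0.01, 0.0            # initial compartment shares
for _ in range(1000):                # integrate 100 time units at dt = 0.1
    s, i, r = sir_step(s, i, r, beta=0.3, gamma=0.1, dt=0.1)
# The three compartments always sum to 1: each step moves mass between
# compartments (ds + di + dr = 0) without creating or destroying it.
```

A precision-health reuse of the same structure would relabel the compartments and refit the rate parameters to longitudinal health-record data rather than epidemic counts.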
2307.01848 | 3 | To acquire executable task plans in the given physical scenes, many previous works filter or align the generated actions by considering the visual clues in the scene for the task of general manipulation of
*Corresponding author.
[Figure 1 residue: scene-level visual perception and task plan generation. A user issues complex tasks; an open-vocabulary object detector (Mask R-CNN) perceives the scene, and TaPA outputs executable plans, e.g. Step 1. Grasp a watering can; Step 2. Move to the plants; Step 3. Pour water into the watering can; Step 4. Align the watering can to the plant; Step 5. Pour water on the plant; Step 6. Move the watering can to the original location.] | 2307.01848#3 | Embodied Task Planning with Large Language Models | Equipping embodied agents with commonsense is important for robots to
successfully complete complex human instructions in general environments.
Recent large language models (LLM) can embed rich semantic knowledge for agents
in plan generation of complex tasks, while they lack the information about the
realistic world and usually yield infeasible action sequences. In this paper,
we propose a TAsk Planing Agent (TaPA) in embodied tasks for grounded planning
with physical scene constraint, where the agent generates executable plans
according to the existed objects in the scene by aligning LLMs with the visual
perception models. Specifically, we first construct a multimodal dataset
containing triplets of indoor scenes, instructions and action plans, where we
provide the designed prompts and the list of existing objects in the scene for
GPT-3.5 to generate a large number of instructions and corresponding planned
actions. The generated data is leveraged for grounded plan tuning of
pre-trained LLMs. During inference, we discover the objects in the scene by
extending open-vocabulary object detectors to multi-view RGB images collected
in different achievable locations. Experimental results show that the generated
plan from our TaPA framework can achieve higher success rate than LLaVA and
GPT-3.5 by a sizable margin, which indicates the practicality of embodied task
planning in general and complex environments. | http://arxiv.org/pdf/2307.01848 | Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, Haibin Yan | cs.CV, cs.AI, cs.RO | Project Page: https://gary3410.github.io/TaPA | null | cs.CV | 20230704 | 20230704 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1712.05474"
},
{
"id": "2302.04761"
},
{
"id": "2306.08640"
},
{
"id": "2112.12143"
},
{
"id": "2305.15964"
},
{
"id": "2304.14178"
},
{
"id": "2203.12601"
},
{
"id": "2304.08485"
},
{
"id": "2305.15072"
},
{
"id": "2304.10592"
},
{
"id": "2305.05658"
},
{
"id": "2301.12597"
},
{
"id": "2305.18279"
},
{
"id": "2106.00188"
},
{
"id": "2305.03726"
},
{
"id": "2303.16199"
},
{
"id": "2304.03277"
},
{
"id": "2106.09685"
},
{
"id": "2305.04160"
},
{
"id": "2006.07185"
},
{
"id": "2303.12712"
},
{
"id": "2305.03716"
},
{
"id": "2305.16103"
},
{
"id": "2212.04088"
},
{
"id": "2306.09093"
},
{
"id": "2306.00890"
}
] |
2307.02502 | 3 |
Section 1: Introduction to Formalization Space The contemporary moment is one of building heightened classes of digital infrastructure, in the form of smart network technologies running terrestrially and beyond which include AI, machine learning, blockchains, and quantum computing. A key emergence is AI language graphs, LLMs (large language models), computerized language models generated with artificial neural networks (deep learning) which have billions of parameters and are pre-trained on large data corpora such as GPT-4 (OpenAI), LaMDA (Google), and LLaMA (Meta AI). In addition to a potential large-scale reorientation of white-collar labor with upleveled digital methods, an immediate conceptual result of LLMs is the first-principles thinking shift from "some" to "all" in terms of the level of consideration of the possibility space. It is increasingly routine to think of the entirety of a corpus such as the language space (of all human language), the program space (of all possible software programs), and the mathematics space (of all possible mathematics) (Figure 1). It is therefore an obvious yet non-trivial idea to have a digitized and accessible mathematical space with easy-solve interfaces for a wider deployment of mathematics.
Figure 1. Formalization Space of Possibility Graphs: Language, Program, and Mathematics Space. | 2307.02502#3 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics | The advancement in generative AI could be boosted with more accessible
mathematics. Beyond human-AI chat, large language models (LLMs) are emerging in
programming, algorithm discovery, and theorem proving, yet their genomics
application is limited. This project introduces Math Agents and mathematical
embedding as fresh entries to the "Moore's Law of Mathematics", using a
GPT-based workflow to convert equations from literature into LaTeX and Python
formats. While many digital equation representations exist, there's a lack of
automated large-scale evaluation tools. LLMs are pivotal as linguistic user
interfaces, providing natural language access for human-AI chat and formal
languages for large-scale AI-assisted computational infrastructure. Given the
infinite formal possibility spaces, Math Agents, which interact with math,
could potentially shift us from "big data" to "big math". Math, unlike the more
flexible natural language, has properties subject to proof, enabling its use
beyond traditional applications like high-validation math-certified icons for
AI alignment aims. This project aims to use Math Agents and mathematical
embeddings to address the ageing issue in information systems biology by
applying multiscalar physics mathematics to disease models and genomic data.
Generative AI with episodic memory could help analyse causal relations in
longitudinal health records, using SIR Precision Health models. Genomic data is
suggested for addressing the unsolved Alzheimer's disease problem. | http://arxiv.org/pdf/2307.02502 | Melanie Swan, Takashi Kido, Eric Roland, Renato P. dos Santos | q-bio.OT, cs.AI, cs.CL, 68R12, I.2; J.3 | null | null | q-bio.OT | 20230704 | 20230704 | [
{
"id": "1601.00257"
},
{
"id": "2306.15626"
},
{
"id": "2306.09328"
},
{
"id": "2306.15794"
},
{
"id": "2107.05151"
}
] |
2307.01848 | 4 | Figure 1: Our embodied task planning framework collects multiple RGB images from various standing points and viewpoints. Utilizing an open vocabulary detector generates a list of objects existed in the scene. Combining human instructions and the predicted object list, our TaPA generates executable action plans for navigation or manipulation tasks. tabletop objects [11, 12, 13]. In order to further diversify tasks in house-level environments, SayCan [14] and LLM-Planner [15] employ visual navigation to collect information in the house for the challenging grounded plan generation. Nevertheless, SayCan can only accomplish tasks in the kitchen scenarios and LLM-Planner performs planning in the ALFRED simulator [16] where most tasks are simple such as putting and placing. They both fail to satisfy the requirement of numerous complex tasks and diverse deployment scenarios in our daily life. | 2307.01848#4 | Embodied Task Planning with Large Language Models | Equipping embodied agents with commonsense is important for robots to
successfully complete complex human instructions in general environments.
Recent large language models (LLM) can embed rich semantic knowledge for agents
in plan generation of complex tasks, while they lack the information about the
realistic world and usually yield infeasible action sequences. In this paper,
we propose a TAsk Planing Agent (TaPA) in embodied tasks for grounded planning
with physical scene constraint, where the agent generates executable plans
according to the existed objects in the scene by aligning LLMs with the visual
perception models. Specifically, we first construct a multimodal dataset
containing triplets of indoor scenes, instructions and action plans, where we
provide the designed prompts and the list of existing objects in the scene for
GPT-3.5 to generate a large number of instructions and corresponding planned
actions. The generated data is leveraged for grounded plan tuning of
pre-trained LLMs. During inference, we discover the objects in the scene by
extending open-vocabulary object detectors to multi-view RGB images collected
in different achievable locations. Experimental results show that the generated
plan from our TaPA framework can achieve higher success rate than LLaVA and
GPT-3.5 by a sizable margin, which indicates the practicality of embodied task
planning in general and complex environments. | http://arxiv.org/pdf/2307.01848 | Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, Haibin Yan | cs.CV, cs.AI, cs.RO | Project Page: https://gary3410.github.io/TaPA | null | cs.CV | 20230704 | 20230704 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1712.05474"
},
{
"id": "2302.04761"
},
{
"id": "2306.08640"
},
{
"id": "2112.12143"
},
{
"id": "2305.15964"
},
{
"id": "2304.14178"
},
{
"id": "2203.12601"
},
{
"id": "2304.08485"
},
{
"id": "2305.15072"
},
{
"id": "2304.10592"
},
{
"id": "2305.05658"
},
{
"id": "2301.12597"
},
{
"id": "2305.18279"
},
{
"id": "2106.00188"
},
{
"id": "2305.03726"
},
{
"id": "2303.16199"
},
{
"id": "2304.03277"
},
{
"id": "2106.09685"
},
{
"id": "2305.04160"
},
{
"id": "2006.07185"
},
{
"id": "2303.12712"
},
{
"id": "2305.03716"
},
{
"id": "2305.16103"
},
{
"id": "2212.04088"
},
{
"id": "2306.09093"
},
{
"id": "2306.00890"
}
] |
2307.01848 | 5 | In this paper, we present a task planning agent called TaPA for embodied task plan grounding in physical scenes. The unreleased SayCan cannot be applied in diverse indoor scenarios, and the LLM-Planner in the ALFRED benchmark fails to generate plans for complex tasks due to the pre-defined simple instructions in the simulator. On the contrary, our agent can generate grounded plans without constraining task types and target objects. Therefore, our agent acquires general commonsense knowledge to yield action steps for complex household tasks such as making sandwiches and setting tables, which provides the foundational instructions for the downstream navigation and manipulation process to deal with high-level requirements from humans. Figure 1 demonstrates the overall pipeline of our TaPA that generates the executable action steps by considering the scene information and the human instructions. Figure 2 shows the statistical difference between our TaPA and the conventional ALFRED benchmark, where our tasks are much more complex with longer steps for accomplishment. More specifically, we first construct a multimodal dataset where each sample is a triplet of visual scenes, instructions, and corresponding plans. By leveraging | 2307.01848#5 | Embodied Task Planning with Large Language Models | Equipping embodied agents with commonsense is important for robots to
successfully complete complex human instructions in general environments.
Recent large language models (LLM) can embed rich semantic knowledge for agents
in plan generation of complex tasks, while they lack the information about the
realistic world and usually yield infeasible action sequences. In this paper,
we propose a TAsk Planing Agent (TaPA) in embodied tasks for grounded planning
with physical scene constraint, where the agent generates executable plans
according to the existed objects in the scene by aligning LLMs with the visual
perception models. Specifically, we first construct a multimodal dataset
containing triplets of indoor scenes, instructions and action plans, where we
provide the designed prompts and the list of existing objects in the scene for
GPT-3.5 to generate a large number of instructions and corresponding planned
actions. The generated data is leveraged for grounded plan tuning of
pre-trained LLMs. During inference, we discover the objects in the scene by
extending open-vocabulary object detectors to multi-view RGB images collected
in different achievable locations. Experimental results show that the generated
plan from our TaPA framework can achieve higher success rate than LLaVA and
GPT-3.5 by a sizable margin, which indicates the practicality of embodied task
planning in general and complex environments. | http://arxiv.org/pdf/2307.01848 | Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, Haibin Yan | cs.CV, cs.AI, cs.RO | Project Page: https://gary3410.github.io/TaPA | null | cs.CV | 20230704 | 20230704 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1712.05474"
},
{
"id": "2302.04761"
},
{
"id": "2306.08640"
},
{
"id": "2112.12143"
},
{
"id": "2305.15964"
},
{
"id": "2304.14178"
},
{
"id": "2203.12601"
},
{
"id": "2304.08485"
},
{
"id": "2305.15072"
},
{
"id": "2304.10592"
},
{
"id": "2305.05658"
},
{
"id": "2301.12597"
},
{
"id": "2305.18279"
},
{
"id": "2106.00188"
},
{
"id": "2305.03726"
},
{
"id": "2303.16199"
},
{
"id": "2304.03277"
},
{
"id": "2106.09685"
},
{
"id": "2305.04160"
},
{
"id": "2006.07185"
},
{
"id": "2303.12712"
},
{
"id": "2305.03716"
},
{
"id": "2305.16103"
},
{
"id": "2212.04088"
},
{
"id": "2306.09093"
},
{
"id": "2306.00890"
}
] |
2307.02502 | 5 | Other existing spaces could be added to Figure 1 such as computational complexity (the classes of resources needed to compute a problem), as well as emerging spaces including the AI space (of all AIs), the AI-driven Science Projects space, and more distantly, the Fermi (or Planck) space (of all foundational time and space models), to characterize the entries and relational activity in a possibility space. One high-order use of AI could be to elucidate and catalog formal spaces, as already demonstrated in the software program discovery space (Karpathy, 2017). Although some spaces are naturally formal (software programs, mathematics, computational complexity), others become formal (natural language) in the digitization process. "Formal" means a rigorous mathematical approach, but generally refers to the systematic consideration of a topic. The formalization space is the possibility space of all formal (systematic) approaches such as mathematical, algorithmic, programmatic, information-theoretic, and graph-theoretic methods. The computational infrastructure is the realization of formal methods on global networks (AI agents, machine learning, blockchains, and quantum computing). As data corpora (e.g. natural language, mathematics) are digitized, they join the computational infrastructure, becoming open to widespread democratized network access by all agents (human, AI, hybrid). | 2307.02502#5 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics | The advancement in generative AI could be boosted with more accessible
mathematics. Beyond human-AI chat, large language models (LLMs) are emerging in
programming, algorithm discovery, and theorem proving, yet their genomics
application is limited. This project introduces Math Agents and mathematical
embedding as fresh entries to the "Moore's Law of Mathematics", using a
GPT-based workflow to convert equations from literature into LaTeX and Python
formats. While many digital equation representations exist, there's a lack of
automated large-scale evaluation tools. LLMs are pivotal as linguistic user
interfaces, providing natural language access for human-AI chat and formal
languages for large-scale AI-assisted computational infrastructure. Given the
infinite formal possibility spaces, Math Agents, which interact with math,
could potentially shift us from "big data" to "big math". Math, unlike the more
flexible natural language, has properties subject to proof, enabling its use
beyond traditional applications like high-validation math-certified icons for
AI alignment aims. This project aims to use Math Agents and mathematical
embeddings to address the ageing issue in information systems biology by
applying multiscalar physics mathematics to disease models and genomic data.
Generative AI with episodic memory could help analyse causal relations in
longitudinal health records, using SIR Precision Health models. Genomic data is
suggested for addressing the unsolved Alzheimer's disease problem. | http://arxiv.org/pdf/2307.02502 | Melanie Swan, Takashi Kido, Eric Roland, Renato P. dos Santos | q-bio.OT, cs.AI, cs.CL, 68R12, I.2; J.3 | null | null | q-bio.OT | 20230704 | 20230704 | [
{
"id": "1601.00257"
},
{
"id": "2306.15626"
},
{
"id": "2306.09328"
},
{
"id": "2306.15794"
},
{
"id": "2107.05151"
}
] |
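The data-generation step described in the chunk above — prompting GPT-3.5 with a scene's object list so it proposes instructions and action plans — can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' released code; the function name and prompt wording are assumptions.

```python
# Hypothetical sketch of the prompt used for instruction/plan generation:
# given the objects present in an indoor scene, ask an LLM to propose
# household tasks restricted to those objects.

def build_generation_prompt(scene_objects, num_tasks=1):
    """Return a text prompt asking an LLM to propose tasks for this scene."""
    object_list = ", ".join(sorted(set(scene_objects)))  # dedupe, stable order
    return (
        "You are an embodied household agent. The room contains only these "
        f"objects: {object_list}.\n"
        f"Propose {num_tasks} task instruction(s) that can be completed with "
        "these objects, followed by a numbered list of executable action steps. "
        "Do not reference any object that is not in the list."
    )

prompt = build_generation_prompt(["bread", "knife", "plate", "fridge"], num_tasks=2)
print(prompt)
```

Constraining the LLM to the detected object list is what makes the generated plans "grounded": any plan step that mentions an absent object can be rejected during filtering.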
2307.01848 | 6 | first construct a multimodal dataset where each sample is a triplet of visual scenes, instructions, and corresponding plans. By leveraging the generated dataset, we finetune the pre-trained LLaMA [7] network by predicting the action steps based on the object list of the scene, which is employed as our task planner. For the acquisition of the object list during inference, the embodied agent effectively visits standing points to collect RGB images providing sufficient information in different views, and generalizes the open-vocabulary detector to multi-view images to acquire the list of existing objects. Our TaPA agent achieves a higher success rate of the generated action plans compared with the state-of-the-art LLMs including LLaMA and GPT-3.5 and large multimodal models (LMMs) such as LLaVA [17]. Our contributions can be summarized as follows: | 2307.01848#6 | Embodied Task Planning with Large Language Models | Equipping embodied agents with commonsense is important for robots to
successfully complete complex human instructions in general environments.
Recent large language models (LLM) can embed rich semantic knowledge for agents
in plan generation of complex tasks, while they lack the information about the
realistic world and usually yield infeasible action sequences. In this paper,
we propose a TAsk Planing Agent (TaPA) in embodied tasks for grounded planning
with physical scene constraint, where the agent generates executable plans
according to the existed objects in the scene by aligning LLMs with the visual
perception models. Specifically, we first construct a multimodal dataset
containing triplets of indoor scenes, instructions and action plans, where we
provide the designed prompts and the list of existing objects in the scene for
GPT-3.5 to generate a large number of instructions and corresponding planned
actions. The generated data is leveraged for grounded plan tuning of
pre-trained LLMs. During inference, we discover the objects in the scene by
extending open-vocabulary object detectors to multi-view RGB images collected
in different achievable locations. Experimental results show that the generated
plan from our TaPA framework can achieve higher success rate than LLaVA and
GPT-3.5 by a sizable margin, which indicates the practicality of embodied task
planning in general and complex environments. | http://arxiv.org/pdf/2307.01848 | Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, Haibin Yan | cs.CV, cs.AI, cs.RO | Project Page: https://gary3410.github.io/TaPA | null | cs.CV | 20230704 | 20230704 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1712.05474"
},
{
"id": "2302.04761"
},
{
"id": "2306.08640"
},
{
"id": "2112.12143"
},
{
"id": "2305.15964"
},
{
"id": "2304.14178"
},
{
"id": "2203.12601"
},
{
"id": "2304.08485"
},
{
"id": "2305.15072"
},
{
"id": "2304.10592"
},
{
"id": "2305.05658"
},
{
"id": "2301.12597"
},
{
"id": "2305.18279"
},
{
"id": "2106.00188"
},
{
"id": "2305.03726"
},
{
"id": "2303.16199"
},
{
"id": "2304.03277"
},
{
"id": "2106.09685"
},
{
"id": "2305.04160"
},
{
"id": "2006.07185"
},
{
"id": "2303.12712"
},
{
"id": "2305.03716"
},
{
"id": "2305.16103"
},
{
"id": "2212.04088"
},
{
"id": "2306.09093"
},
{
"id": "2306.00890"
}
] |
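The inference-time perception step described above — extending an open-vocabulary detector to multi-view RGB images and aggregating the results into one object list for the planner — might look roughly like this. All names and the confidence threshold are illustrative assumptions, not the TaPA implementation.

```python
# Minimal sketch: merge per-view open-vocabulary detections into a single
# deduplicated object list that conditions the task planner.
# `detections_per_view` maps a standing-point name to (label, confidence) pairs.

def merge_multiview_detections(detections_per_view, conf_threshold=0.5):
    """Union of confidently detected object labels across all views."""
    objects = set()
    for detections in detections_per_view.values():
        for label, conf in detections:
            if conf >= conf_threshold:   # drop low-confidence detections
                objects.add(label)
    return sorted(objects)

views = {
    "standing_point_1": [("sofa", 0.92), ("table", 0.88), ("plant", 0.31)],
    "standing_point_2": [("table", 0.81), ("lamp", 0.67)],
}
print(merge_multiview_detections(views))  # low-confidence 'plant' is dropped
```

Taking the union over views is what lets objects occluded from one standing point still appear in the planner's object list.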
2307.02502 | 6 | The benefit of LLMs is providing a linguistic interface overlay to the digital computational infrastructure. The interface allows humans to speak natural language and interact with the
computational infrastructure (for a host of uses including but also going beyond language generation in the Siri-Alexa-chatGPT value chain). Of even greater potential impact is the possibility for AIs to "speak" any digitized formal language (natural language, mathematical symbols, programmatic code) to interact with and extend the computational infrastructure. This work proposes AI Math Agents and the mathematical embedding as new potential entries in the "Moore's Law of Mathematics" to facilitate the build-out of the mathematical infrastructure. | 2307.02502#6 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics | The advancement in generative AI could be boosted with more accessible
mathematics. Beyond human-AI chat, large language models (LLMs) are emerging in
programming, algorithm discovery, and theorem proving, yet their genomics
application is limited. This project introduces Math Agents and mathematical
embedding as fresh entries to the "Moore's Law of Mathematics", using a
GPT-based workflow to convert equations from literature into LaTeX and Python
formats. While many digital equation representations exist, there's a lack of
automated large-scale evaluation tools. LLMs are pivotal as linguistic user
interfaces, providing natural language access for human-AI chat and formal
languages for large-scale AI-assisted computational infrastructure. Given the
infinite formal possibility spaces, Math Agents, which interact with math,
could potentially shift us from "big data" to "big math". Math, unlike the more
flexible natural language, has properties subject to proof, enabling its use
beyond traditional applications like high-validation math-certified icons for
AI alignment aims. This project aims to use Math Agents and mathematical
embeddings to address the ageing issue in information systems biology by
applying multiscalar physics mathematics to disease models and genomic data.
Generative AI with episodic memory could help analyse causal relations in
longitudinal health records, using SIR Precision Health models. Genomic data is
suggested for addressing the unsolved Alzheimer's disease problem. | http://arxiv.org/pdf/2307.02502 | Melanie Swan, Takashi Kido, Eric Roland, Renato P. dos Santos | q-bio.OT, cs.AI, cs.CL, 68R12, I.2; J.3 | null | null | q-bio.OT | 20230704 | 20230704 | [
{
"id": "1601.00257"
},
{
"id": "2306.15626"
},
{
"id": "2306.09328"
},
{
"id": "2306.15794"
},
{
"id": "2107.05151"
}
] |
2307.01848 | 7 | ⢠To the best of our knowledge, we propose the ï¬rst benchmark for complex embodied task planning that is practical in realistic indoor deployment scenarios.
⢠We design a framework for large-scale multimodal dataset generation in order to train the task planner from pre-trained LLMs and construct a multimodal dataset containing 80 indoor scenes with 15K instructions and corresponding action plans.
⢠We evaluate different LLMs and LMMs for complex embodied task planning in our benchmark, and conduct the ablation study to select the optimal representation of visual scenes for executable action generation.
# 2 Related Work
Large pre-trained models: Large-scale pre-trained models have revolutionized the natural language processing (NLP) [18, 19, 20] and computer vision [21, 22, 23] communities in recent years. Benefiting from vast training data and numerous parameters, large pre-trained models acquire strong generalization ability across different deployment scenarios. For large language models, recent studies show that they not only perform well in NLP tasks, but also exhibit an emergent ability to master rich knowledge about the realistic world with factual answers. Therefore, LLMs such as LLaMA [7],
| 2307.01848#7 | Embodied Task Planning with Large Language Models | Equipping embodied agents with commonsense is important for robots to
successfully complete complex human instructions in general environments.
Recent large language models (LLM) can embed rich semantic knowledge for agents
in plan generation of complex tasks, while they lack the information about the
realistic world and usually yield infeasible action sequences. In this paper,
we propose a TAsk Planing Agent (TaPA) in embodied tasks for grounded planning
with physical scene constraint, where the agent generates executable plans
according to the existed objects in the scene by aligning LLMs with the visual
perception models. Specifically, we first construct a multimodal dataset
containing triplets of indoor scenes, instructions and action plans, where we
provide the designed prompts and the list of existing objects in the scene for
GPT-3.5 to generate a large number of instructions and corresponding planned
actions. The generated data is leveraged for grounded plan tuning of
pre-trained LLMs. During inference, we discover the objects in the scene by
extending open-vocabulary object detectors to multi-view RGB images collected
in different achievable locations. Experimental results show that the generated
plan from our TaPA framework can achieve higher success rate than LLaVA and
GPT-3.5 by a sizable margin, which indicates the practicality of embodied task
planning in general and complex environments. | http://arxiv.org/pdf/2307.01848 | Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, Haibin Yan | cs.CV, cs.AI, cs.RO | Project Page: https://gary3410.github.io/TaPA | null | cs.CV | 20230704 | 20230704 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1712.05474"
},
{
"id": "2302.04761"
},
{
"id": "2306.08640"
},
{
"id": "2112.12143"
},
{
"id": "2305.15964"
},
{
"id": "2304.14178"
},
{
"id": "2203.12601"
},
{
"id": "2304.08485"
},
{
"id": "2305.15072"
},
{
"id": "2304.10592"
},
{
"id": "2305.05658"
},
{
"id": "2301.12597"
},
{
"id": "2305.18279"
},
{
"id": "2106.00188"
},
{
"id": "2305.03726"
},
{
"id": "2303.16199"
},
{
"id": "2304.03277"
},
{
"id": "2106.09685"
},
{
"id": "2305.04160"
},
{
"id": "2006.07185"
},
{
"id": "2303.12712"
},
{
"id": "2305.03716"
},
{
"id": "2305.16103"
},
{
"id": "2212.04088"
},
{
"id": "2306.09093"
},
{
"id": "2306.00890"
}
] |
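The CLIP-style contrastive objective mentioned above — minimizing the distance between matched image-text pairs relative to mismatched ones — can be illustrated with a toy, framework-free sketch. The tiny hand-made embeddings and the one-directional loss are simplifications for illustration, not CLIP's actual training code.

```python
# Toy sketch of a contrastive (InfoNCE-style) objective over image/text
# embeddings: each image's matched caption should win the softmax over
# similarities to all captions in the batch.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(image_embs, text_embs, temperature=0.1):
    """Mean cross-entropy of each image against all texts (one direction)."""
    loss = 0.0
    for i, img in enumerate(image_embs):
        logits = [cosine(img, txt) / temperature for txt in text_embs]
        log_denom = math.log(sum(math.exp(l) for l in logits))
        loss += -(logits[i] - log_denom)  # -log softmax at the matched index
    return loss / len(image_embs)

images = [[1.0, 0.0], [0.0, 1.0]]
texts = [[0.9, 0.1], [0.1, 0.9]]
print(contrastive_loss(images, texts))  # small loss: pairs are well aligned
```

Swapping the two captions makes every matched pair lose the softmax, so the loss rises sharply — that gradient is what pulls matched pairs together in the joint embedding space.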
2307.02502 | 7 | Whereas the spaces of software programs and mathematics are lesser known and possibly infinite, the language space is familiar and (relatively) operationally finite. Before LLMs, the status was that although the structure of all human language was "known" in the sense of having been elucidated by scholarly efforts, this knowledge was not previously accessible as it was isolated in the brains of practitioners and academic researchers, published literature, and the earlier-stage digital method of language-learning applications (e.g. Duolingo, Babbel, Memrise). The new way of treating the entirety of a possibility space as a digital corpus means that language can be accessed globally with the click of a button. The benefit (though also risk) revealed by LLMs is the mobilization (democratized access) of freely-available digital corpora, for use by all agents. A similar trajectory for mathematics is proposed here. | 2307.02502#7 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics | The advancement in generative AI could be boosted with more accessible
mathematics. Beyond human-AI chat, large language models (LLMs) are emerging in
programming, algorithm discovery, and theorem proving, yet their genomics
application is limited. This project introduces Math Agents and mathematical
embedding as fresh entries to the "Moore's Law of Mathematics", using a
GPT-based workflow to convert equations from literature into LaTeX and Python
formats. While many digital equation representations exist, there's a lack of
automated large-scale evaluation tools. LLMs are pivotal as linguistic user
interfaces, providing natural language access for human-AI chat and formal
languages for large-scale AI-assisted computational infrastructure. Given the
infinite formal possibility spaces, Math Agents, which interact with math,
could potentially shift us from "big data" to "big math". Math, unlike the more
flexible natural language, has properties subject to proof, enabling its use
beyond traditional applications like high-validation math-certified icons for
AI alignment aims. This project aims to use Math Agents and mathematical
embeddings to address the ageing issue in information systems biology by
applying multiscalar physics mathematics to disease models and genomic data.
Generative AI with episodic memory could help analyse causal relations in
longitudinal health records, using SIR Precision Health models. Genomic data is
suggested for addressing the unsolved Alzheimer's disease problem. | http://arxiv.org/pdf/2307.02502 | Melanie Swan, Takashi Kido, Eric Roland, Renato P. dos Santos | q-bio.OT, cs.AI, cs.CL, 68R12, I.2; J.3 | null | null | q-bio.OT | 20230704 | 20230704 | [
{
"id": "1601.00257"
},
{
"id": "2306.15626"
},
{
"id": "2306.09328"
},
{
"id": "2306.15794"
},
{
"id": "2107.05151"
}
] |
2307.01848 | 8 | [Figure 2 graphics: a pie chart of the most frequent instruction verbs (unclog, throw, change, set, move, organize, put, find, adjust, ...) and a bar chart of the percentage of instructions per action-step bin (1~4, 5~7, 8~10, 11~13, 14+) for ALFRED vs. TaPA]
Figure 2: Statistical comparison of TaPA and ALFRED dataset. The pie chart shows the top 20 frequently appearing verbs (inner circle) and the corresponding top 4 nouns (outer circle) for each verb. The bar chart shows the percentage of instructions with different numbers of implementation actions, where TaPA contains more complex instructions compared to ALFRED. | 2307.01848#8 | Embodied Task Planning with Large Language Models | Equipping embodied agents with commonsense is important for robots to
successfully complete complex human instructions in general environments.
Recent large language models (LLM) can embed rich semantic knowledge for agents
in plan generation of complex tasks, while they lack the information about the
realistic world and usually yield infeasible action sequences. In this paper,
we propose a TAsk Planing Agent (TaPA) in embodied tasks for grounded planning
with physical scene constraint, where the agent generates executable plans
according to the existed objects in the scene by aligning LLMs with the visual
perception models. Specifically, we first construct a multimodal dataset
containing triplets of indoor scenes, instructions and action plans, where we
provide the designed prompts and the list of existing objects in the scene for
GPT-3.5 to generate a large number of instructions and corresponding planned
actions. The generated data is leveraged for grounded plan tuning of
pre-trained LLMs. During inference, we discover the objects in the scene by
extending open-vocabulary object detectors to multi-view RGB images collected
in different achievable locations. Experimental results show that the generated
plan from our TaPA framework can achieve higher success rate than LLaVA and
GPT-3.5 by a sizable margin, which indicates the practicality of embodied task
planning in general and complex environments. | http://arxiv.org/pdf/2307.01848 | Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, Haibin Yan | cs.CV, cs.AI, cs.RO | Project Page: https://gary3410.github.io/TaPA | null | cs.CV | 20230704 | 20230704 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1712.05474"
},
{
"id": "2302.04761"
},
{
"id": "2306.08640"
},
{
"id": "2112.12143"
},
{
"id": "2305.15964"
},
{
"id": "2304.14178"
},
{
"id": "2203.12601"
},
{
"id": "2304.08485"
},
{
"id": "2305.15072"
},
{
"id": "2304.10592"
},
{
"id": "2305.05658"
},
{
"id": "2301.12597"
},
{
"id": "2305.18279"
},
{
"id": "2106.00188"
},
{
"id": "2305.03726"
},
{
"id": "2303.16199"
},
{
"id": "2304.03277"
},
{
"id": "2106.09685"
},
{
"id": "2305.04160"
},
{
"id": "2006.07185"
},
{
"id": "2303.12712"
},
{
"id": "2305.03716"
},
{
"id": "2305.16103"
},
{
"id": "2212.04088"
},
{
"id": "2306.09093"
},
{
"id": "2306.00890"
}
] |
2307.02502 | 8 | Possibility space graphs raise the issue of novelty and reality status. Latency (existing but not manifest) is implied in the sense that all possible mappings pre-exist in the graph, even if not yet traced (articulated or accessed) by human or AI agents (Cheong, 2023). Every utterance pre-exists as a path in the graph. Voicing a new statement is merely walking a path through the language graph. A question as to what is truly new or original arises. In LLMs, at the level of form, the structure of all human language is known (as distilled from online data corpora). At the level of content, not all possible utterances have been said, but their form is likely to follow the recognized structure if they are to be intelligible. In one sense, the possibility space contains all currently existing form and content (subject to the latest LLM updates). In another sense, one of the reasons to encode digital possibility spaces is to facilitate "new utterances" (innovation): to fill in gaps in the graph (Math Agents) and to push boundaries into new territories (human creativity, scientific discovery). As novelty occurs, it is incorporated into the graph accordingly. | 2307.02502#8 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics | The advancement in generative AI could be boosted with more accessible
mathematics. Beyond human-AI chat, large language models (LLMs) are emerging in
programming, algorithm discovery, and theorem proving, yet their genomics
application is limited. This project introduces Math Agents and mathematical
embedding as fresh entries to the "Moore's Law of Mathematics", using a
GPT-based workflow to convert equations from literature into LaTeX and Python
formats. While many digital equation representations exist, there's a lack of
automated large-scale evaluation tools. LLMs are pivotal as linguistic user
interfaces, providing natural language access for human-AI chat and formal
languages for large-scale AI-assisted computational infrastructure. Given the
infinite formal possibility spaces, Math Agents, which interact with math,
could potentially shift us from "big data" to "big math". Math, unlike the more
flexible natural language, has properties subject to proof, enabling its use
beyond traditional applications like high-validation math-certified icons for
AI alignment aims. This project aims to use Math Agents and mathematical
embeddings to address the ageing issue in information systems biology by
applying multiscalar physics mathematics to disease models and genomic data.
Generative AI with episodic memory could help analyse causal relations in
longitudinal health records, using SIR Precision Health models. Genomic data is
suggested for addressing the unsolved Alzheimer's disease problem. | http://arxiv.org/pdf/2307.02502 | Melanie Swan, Takashi Kido, Eric Roland, Renato P. dos Santos | q-bio.OT, cs.AI, cs.CL, 68R12, I.2; J.3 | null | null | q-bio.OT | 20230704 | 20230704 | [
{
"id": "1601.00257"
},
{
"id": "2306.15626"
},
{
"id": "2306.09328"
},
{
"id": "2306.15794"
},
{
"id": "2107.05151"
}
] |
2307.01848 | 9 | GPT-3 [24] are widely adopted in diverse tasks by interacting with input from other modalities such as visual feature learning [25, 26], pseudo-code generation [27], tool usage [28] and math problem solving [29]. For large vision models, objects in open environments can be detected [23, 30] or segmented [31] for scene understanding, where bounding boxes and masks are generated for all scenarios and visual features are aligned with text embedding for category assignment. To learn the joint embedding space of language and vision for multimodal tasks, CLIP [32] leverages contrastive learning to minimize the distance between similar image-text pairs. LLaVA [17] synthesized a multimodal dataset with images, captions and bounding boxes in the tasks of conversation, detailed description and complex reasoning, so that instruction tuning of LLMs yields a general-purpose instruction-following visual agent. In this paper, we leverage LLMs to generate executable plans for embodied tasks with the visual information acquired from open-vocabulary detection models. | 2307.01848#9 | Embodied Task Planning with Large Language Models | Equipping embodied agents with commonsense is important for robots to
successfully complete complex human instructions in general environments.
Recent large language models (LLM) can embed rich semantic knowledge for agents
in plan generation of complex tasks, while they lack the information about the
realistic world and usually yield infeasible action sequences. In this paper,
we propose a TAsk Planing Agent (TaPA) in embodied tasks for grounded planning
with physical scene constraint, where the agent generates executable plans
according to the existed objects in the scene by aligning LLMs with the visual
perception models. Specifically, we first construct a multimodal dataset
containing triplets of indoor scenes, instructions and action plans, where we
provide the designed prompts and the list of existing objects in the scene for
GPT-3.5 to generate a large number of instructions and corresponding planned
actions. The generated data is leveraged for grounded plan tuning of
pre-trained LLMs. During inference, we discover the objects in the scene by
extending open-vocabulary object detectors to multi-view RGB images collected
in different achievable locations. Experimental results show that the generated
plan from our TaPA framework can achieve higher success rate than LLaVA and
GPT-3.5 by a sizable margin, which indicates the practicality of embodied task
planning in general and complex environments. | http://arxiv.org/pdf/2307.01848 | Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, Haibin Yan | cs.CV, cs.AI, cs.RO | Project Page: https://gary3410.github.io/TaPA | null | cs.CV | 20230704 | 20230704 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1712.05474"
},
{
"id": "2302.04761"
},
{
"id": "2306.08640"
},
{
"id": "2112.12143"
},
{
"id": "2305.15964"
},
{
"id": "2304.14178"
},
{
"id": "2203.12601"
},
{
"id": "2304.08485"
},
{
"id": "2305.15072"
},
{
"id": "2304.10592"
},
{
"id": "2305.05658"
},
{
"id": "2301.12597"
},
{
"id": "2305.18279"
},
{
"id": "2106.00188"
},
{
"id": "2305.03726"
},
{
"id": "2303.16199"
},
{
"id": "2304.03277"
},
{
"id": "2106.09685"
},
{
"id": "2305.04160"
},
{
"id": "2006.07185"
},
{
"id": "2303.12712"
},
{
"id": "2305.03716"
},
{
"id": "2305.16103"
},
{
"id": "2212.04088"
},
{
"id": "2306.09093"
},
{
"id": "2306.00890"
}
] |
2307.02502 | 9 | Mathematics as the Data Corpus In the contemporary project of the digitization of all data corpora (the world's currently-existing knowledge), mathematics is in the process of being digitally instantiated as any other data corpus. The entirety of mathematics (thus-far discovered though possibly-arbitrarily transcribed) is the data corpus that can be digitized and AI-learned for insight. Although considerable bodies of mathematics have been illuminated by a few centuries of human effort, such mathematics may be incomplete and could be integrated and extended with AI methods. Mathematics is widely regarded as being useful, but is generally not usable except by specialists, and while mathematics can be represented digitally with existing methods, it cannot be easily solved. This implies two points about the representation and evaluation aspects of the mathematical possibility space.
First, regarding representation (the easily-accessible digital representation of mathematics), a digital library and comprehensive catalog of the existing mathematics corpus is lacking. There are many ways to enter equations in a digital format, but not to call existing bodies of mathematical equations for easy deployment (like callable JavaScript libraries or a Wikipedia for mathematics). The overall shape and size of the mathematical space could be elaborated. It is not known but imprecisely estimated that a few million theorems have been human-discovered in the
| 2307.02502#9 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics | The advancement in generative AI could be boosted with more accessible
mathematics. Beyond human-AI chat, large language models (LLMs) are emerging in
programming, algorithm discovery, and theorem proving, yet their genomics
application is limited. This project introduces Math Agents and mathematical
embedding as fresh entries to the "Moore's Law of Mathematics", using a
GPT-based workflow to convert equations from literature into LaTeX and Python
formats. While many digital equation representations exist, there's a lack of
automated large-scale evaluation tools. LLMs are pivotal as linguistic user
interfaces, providing natural language access for human-AI chat and formal
languages for large-scale AI-assisted computational infrastructure. Given the
infinite formal possibility spaces, Math Agents, which interact with math,
could potentially shift us from "big data" to "big math". Math, unlike the more
flexible natural language, has properties subject to proof, enabling its use
beyond traditional applications like high-validation math-certified icons for
AI alignment aims. This project aims to use Math Agents and mathematical
embeddings to address the ageing issue in information systems biology by
applying multiscalar physics mathematics to disease models and genomic data.
Generative AI with episodic memory could help analyse causal relations in
longitudinal health records, using SIR Precision Health models. Genomic data is
suggested for addressing the unsolved Alzheimer's disease problem. | http://arxiv.org/pdf/2307.02502 | Melanie Swan, Takashi Kido, Eric Roland, Renato P. dos Santos | q-bio.OT, cs.AI, cs.CL, 68R12, I.2; J.3 | null | null | q-bio.OT | 20230704 | 20230704 | [
{
"id": "1601.00257"
},
{
"id": "2306.15626"
},
{
"id": "2306.09328"
},
{
"id": "2306.15794"
},
{
"id": "2107.05151"
}
] |
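The idea of callable bodies of mathematics (by analogy to callable JavaScript libraries) can be sketched as a minimal equation registry; the registry structure and entry names below are hypothetical illustrations, not an existing library:

```python
import math

# A minimal "callable equation library": named, reusable mathematical
# formulations that any workflow can look up and apply (names illustrative).
EQUATION_LIBRARY = {
    # Roots of a*x^2 + b*x + c = 0 (real-discriminant case only)
    "quadratic_roots": lambda a, b, c: (
        (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a),
        (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a),
    ),
    # Logistic map x_{n+1} = r * x_n * (1 - x_n), a standard dynamics example
    "logistic_map": lambda r, x: r * x * (1 - x),
}

def call_equation(name, *args):
    """Look up a named equation and evaluate it, like calling a library function."""
    return EQUATION_LIBRARY[name](*args)

roots = call_equation("quadratic_roots", 1, -3, 2)  # x^2 - 3x + 2 = (x-1)(x-2)
```

A shared catalog of such entries, versioned and citable, is one concrete reading of "callable mathematics".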
2307.01848 | 10 | Language model grounding for embodied tasks: An embodied agent not only requires active exploration [33], manipulation [34], and scene perception [35, 36], but also embodied task planning ability. Embodied task planning aims to generate executable action steps in the given environments, where action plans are generated from grounded LLMs by receiving information from the surrounding environments [37, 38, 39] or prompt engineering [40]. For the former, agents acquire feedback from environments by interacting with the objects to ground the task plan. Li et al. [41] employed LLMs as a general scaffold for interactive decision-making in complex tasks, where the generated policies were grounded to the given environments for executable implementation according to the action feedback. For prompt engineering, researchers carefully design the language prompts for LLMs to guide them to ground the generated content. Huang et al. [40] prompted simple examples of task instructions and corresponding actions for LLMs to produce plausible task plans, and filtered out executable subsets by constructing mapping with semantic similarity. To enable the LLMs to be aware of the surrounding scenes with boosted plan plausibility, Brohan et al. | 2307.01848#10 | Embodied Task Planning with Large Language Models | Equipping embodied agents with commonsense is important for robots to
successfully complete complex human instructions in general environments.
Recent large language models (LLM) can embed rich semantic knowledge for agents
in plan generation of complex tasks, while they lack the information about the
realistic world and usually yield infeasible action sequences. In this paper,
we propose a TAsk Planing Agent (TaPA) in embodied tasks for grounded planning
with physical scene constraint, where the agent generates executable plans
according to the existed objects in the scene by aligning LLMs with the visual
perception models. Specifically, we first construct a multimodal dataset
containing triplets of indoor scenes, instructions and action plans, where we
provide the designed prompts and the list of existing objects in the scene for
GPT-3.5 to generate a large number of instructions and corresponding planned
actions. The generated data is leveraged for grounded plan tuning of
pre-trained LLMs. During inference, we discover the objects in the scene by
extending open-vocabulary object detectors to multi-view RGB images collected
in different achievable locations. Experimental results show that the generated
plan from our TaPA framework can achieve higher success rate than LLaVA and
GPT-3.5 by a sizable margin, which indicates the practicality of embodied task
planning in general and complex environments. | http://arxiv.org/pdf/2307.01848 | Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, Haibin Yan | cs.CV, cs.AI, cs.RO | Project Page: https://gary3410.github.io/TaPA | null | cs.CV | 20230704 | 20230704 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1712.05474"
},
{
"id": "2302.04761"
},
{
"id": "2306.08640"
},
{
"id": "2112.12143"
},
{
"id": "2305.15964"
},
{
"id": "2304.14178"
},
{
"id": "2203.12601"
},
{
"id": "2304.08485"
},
{
"id": "2305.15072"
},
{
"id": "2304.10592"
},
{
"id": "2305.05658"
},
{
"id": "2301.12597"
},
{
"id": "2305.18279"
},
{
"id": "2106.00188"
},
{
"id": "2305.03726"
},
{
"id": "2303.16199"
},
{
"id": "2304.03277"
},
{
"id": "2106.09685"
},
{
"id": "2305.04160"
},
{
"id": "2006.07185"
},
{
"id": "2303.12712"
},
{
"id": "2305.03716"
},
{
"id": "2305.16103"
},
{
"id": "2212.04088"
},
{
"id": "2306.09093"
},
{
"id": "2306.00890"
}
] |
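The executable-subset filtering described above (mapping free-form generated steps onto admissible actions by semantic similarity) can be sketched roughly as follows; a plain string-similarity ratio stands in here for the embedding similarity used in the cited work, and the action strings are hypothetical:

```python
from difflib import SequenceMatcher

# Admissible low-level actions the agent can actually execute (illustrative).
ADMISSIBLE_ACTIONS = ["pick up the mug", "open the fridge", "put mug in sink", "close the fridge"]

def ground_step(generated_step, threshold=0.6):
    """Map a generated step to the most similar admissible action, or drop it."""
    best = max(ADMISSIBLE_ACTIONS,
               key=lambda a: SequenceMatcher(None, generated_step.lower(), a).ratio())
    score = SequenceMatcher(None, generated_step.lower(), best).ratio()
    return best if score >= threshold else None

plan = ["Pick up the mug", "teleport to mars", "Put mug in sink"]
# Keep only steps that ground to an executable action.
grounded = [g for g in (ground_step(s) for s in plan) if g is not None]
```

Steps with no sufficiently similar admissible action (like the impossible second step) are filtered out of the plan.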
2307.02502 | 10 |
potentially infinite possibility space (Cottrell, 2012). The same mathematics may have been identified in different fields (e.g. eigenvalues (allowable scale tiers)), which could be linked or consolidated. Related bodies of mathematics might be integrated once seen in a comprehensive view. Computer algebra systems are one of the first steps towards a digital mathematical infrastructure. The deployment of newly-available AI methods is indicated: in terms of scale, some of the largest formal (computational) proofs are estimated to entail a human-unmanageable number of lemmas (Kepler conjecture initial proof (Hales, 2005) and formal proof (Hales et al., 2017); five-coloring problem (finding a Schur Number Five of 160) with a two-petabyte sized proof (Heule, 2018)). Mathematics could join many areas of scientific discovery in which manual methods are giving way to AI-facilitated high-throughput digital computational methods. For example, the use of LLM tools for automated theorem proving (Yang et al., 2023). | 2307.02502#10 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics |
2307.01848 | 11 | subsets by constructing mapping with semantic similarity. To enable the LLMs to be aware of the surrounding scenes with boosted plan plausibility, Brohan et al. [14] and Song et al. [15] extracted the visual information of the scene by latent features or object names for LLMs, where the generated plans were limited to the one with the highest success rate for task completion. However, these works can only accomplish simple tasks such as placing and putting in the VirtualHome [42] or ALFRED simulators, which fail to be applied to practical deployment scenarios with diverse complex tasks. | 2307.01848#11 | Embodied Task Planning with Large Language Models |
2307.02502 | 11 | Second, regarding evaluation: at present, the mathematical data corpus is under-digitized and not readily usable and solvable. Mathematics is represented physically in hardcopy textbooks and digitally in LaTeX and PDF, and beyond basic education, engaged on a largely individual basis. There is no good way of sharing workflows and final mathematical ecologies produced in computer algebra systems (e.g. MATLAB, Mathematica, Maple, SageMath). What is needed is a GitHub for mathematics. Although mathematics is represented digitally (primarily for publication), there are few tools for evaluating mathematics, particularly large mathematical ecologies, in a systematic high-throughput manner. It is comparatively easy to represent mathematics, but quite difficult to automatically evaluate large bodies of mathematics. Mobilizing the usability of the mathematical corpus could render mathematics accessible and widely usable as a tool, at different levels of deployment for different user audiences (by analogy with HTML pages: Geocities and Dreamweaver levels for mathematics). | 2307.02502#11 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics |
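One minimal form of the automated evaluation described above is a numerical spot-check of a stated identity over random inputs; this is only a crude sketch of "high-throughput evaluation", not a proof procedure:

```python
import math
import random

random.seed(0)  # deterministic sampling for reproducibility

def check_identity(lhs, rhs, trials=100, tol=1e-9):
    """Numerically spot-check that two expressions agree on random inputs,
    a crude stand-in for high-throughput evaluation of a body of equations."""
    for _ in range(trials):
        x = random.uniform(-3.0, 3.0)
        if abs(lhs(x) - rhs(x)) > tol:
            return False
    return True

# The Pythagorean identity passes; a deliberately wrong claim fails.
ok = check_identity(lambda x: math.sin(x) ** 2 + math.cos(x) ** 2, lambda x: 1.0)
bad = check_identity(lambda x: math.sin(2 * x), lambda x: math.sin(x))
```

Run over a large catalog of equations, such checks could flag transcription errors long before formal verification is attempted.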
2307.01848 | 12 | # 3 Approach
In this section, we first describe the construction of the multimodal instruction dataset that is leveraged to tune our TaPA task planner, and then describe the details of grounding embodied task plans to the visual scene with image collection and open-vocabulary detection.
# 3.1 Data Generation of Embodied Task Planning
Although large vision-language models (VLM) [17, 43] and large multimodal models [44, 45, 46, 47, 48] have achieved surprising performance on a wide range of complex perception tasks, embodied
task planning that is grounded to the realistic indoor scenes still remains challenging due to the lack of the large-scale multimodal dataset to train the planning agent. Considering the recent success of GPT models on high-level human-like reasoning, we leverage GPT-3.5 with the presented scene representation and designed prompt to generate the large-scale multimodal dataset for our planning agent tuning. | 2307.01848#12 | Embodied Task Planning with Large Language Models |
2307.02502 | 12 | Mathematical Embedding An embedding is the numerical vector representation of a string, particularly to translate between high-dimensional and low-dimensional spaces. An initial step in machine learning systems is converting input data (whatever form of image, sound, text, medical results) to computer-readable strings (letters and numbers). In a standard example, the entries in the MNIST database of handwritten digits of "0" and "1" are converted to represent each image as a string of 784 numbers (each digit corresponding to the light-darkness ratio of the pixel value at that location in a 28x28 grid imposed on the image). The data strings (embeddings) are then fed into the input layer of a machine learning network for computational analysis. | 2307.02502#12 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics |
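The 28x28-to-784 conversion described above can be sketched in a few lines; the "image" here is synthetic, not an actual MNIST digit:

```python
# Sketch of the embedding step described above: a 28x28 grayscale image
# becomes a flat string of 784 numbers fed to a network's input layer.
GRID = 28

# Synthetic image: pixel intensity in [0, 1] at each (row, col) position.
image = [[(row + col) % 256 / 255.0 for col in range(GRID)] for row in range(GRID)]

# Row-major flattening: pixel (r, c) lands at index r * GRID + c.
embedding = [pixel for row in image for pixel in row]
```

The resulting 784-element vector is the per-image embedding that enters the input layer.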
2307.01848 | 13 | Given an embodied 3D scene Xs, we directly utilize the class names of all objects as the representation of the scene, which is denoted as Xl. All duplicate names are removed to provide scene information for the LLM such as Xl = [table, chair, keyboard, ...]. Based on the above scene information, a simple approach used in the ALFRED benchmark [16] to generate the multimodal instruction-following dataset for embodied task plans is to artificially design a series of instructions with corresponding step-by-step actions. However, the hand-crafted design requires extremely high annotation cost to generate complex task plans that are practical for realistic service robots, such as tidying up the bathroom and making sandwiches. To efficiently generate the large-scale complex instructions Xq and executable corresponding plans Xa for the given 3D scene, we design a prompt to simulate the scenarios of embodied task planning for GPT-3.5 to automatically synthesize data based on the object name list Xl. As shown in Table 5 of the supplementary materials, our prompt describes the definition of embodied task | 2307.01848#13 | Embodied Task Planning with Large Language Models |
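The scene representation and prompt assembly described above can be sketched as follows; the actual prompt wording is given in the paper's supplementary Table 5, so the text below is only a hypothetical stand-in:

```python
def scene_representation(objects):
    """Deduplicate instance class names into the scene list X_l, preserving order."""
    seen, x_l = set(), []
    for name in objects:
        if name not in seen:
            seen.add(name)
            x_l.append(name)
    return x_l

def build_prompt(x_l):
    """Assemble a data-generation prompt around the object list (wording hypothetical)."""
    return (
        "You are a service robot in an indoor scene containing: "
        + ", ".join(x_l)
        + ". Propose a complex instruction and step-by-step executable actions, "
        "using only the listed objects."
    )

# Duplicate instance names collapse to one entry per class.
x_l = scene_representation(["table", "chair", "chair", "keyboard", "table"])
prompt = build_prompt(x_l)
```

The prompt is then sent to GPT-3.5 to synthesize instruction and action-plan pairs for that scene.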
2307.02502 | 13 | Although embedding is used routinely in machine learning, the technique has mainly targeted the traditional content types of text, images, and sound. Hence, this work introduces the idea of the mathematical embedding: an analogous mapping that instantiates mathematical symbols and equations as strings of embeddings. Standard classes of mathematical embeddings could be callable in the computational infrastructure any time mathematical formulations are needed. Embeddings are not uniform due to parameter selections (i.e. for normalization, dimensionality reduction (UMAP, PCA, t-SNE), and tokenizing (data subset size-parsing (7x7 grid; 28x28 grid))), but could be packaged in templated versions (by analogy to Creative Commons licenses).
Word embeddings are widely used in LLMs in the context of news, social media, and online corpora such as Wikipedia, and are starting to be developed for scientific content (Figure 2). Three projects provide âpossibility spaceâ embeddings of an entire academic literature as the data corpus: (2a) all papers published in 2017 by journal (Meijer et al., 2021), (2b) all arXiv
| 2307.02502#13 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics |
2307.01848 | 14 | data based on the object name list Xl. As shown in Table 5 of the supplementary materials, our prompt describes the definition of embodied task planning, the requirements and several examples of generated instructions and corresponding action plans. Specifically, the prompt designs a conversation between the service robot and humans to generate executable instructions and actions, which simulates the exploration of robots in the embodied environments and provides the requirements from humans. The generated instructions are diverse, including requests, commands and queries, where only instructions with explicitly executable actions are added to our dataset. Meanwhile, we emphasize that the target object of the generated action should be constrained within the object list Xl to mitigate the object hallucination that leads to inexecutable plans. For the object list leveraged in the prompt for dataset generation, we directly utilize the groundtruth label of existing instances in the scene. In Table 1, we show examples of the generated sample containing the object name list of the scene, the instruction and the executable action steps. In embodied task planning, the agent can only get access to the visual scene containing all interactive objects without the groundtruth | 2307.01848#14 | Embodied Task Planning with Large Language Models |
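The object-list constraint described above (rejecting generated samples whose target objects fall outside Xl) can be sketched as a simple validation pass; the action-step format here is hypothetical:

```python
# Scene object list X_l (illustrative contents).
X_L = {"table", "chair", "keyboard", "mug", "sink"}

def plan_is_executable(action_steps):
    """Keep a generated sample only if every action's target object exists in X_l,
    mitigating object hallucination that would yield inexecutable plans."""
    return all(step["object"] in X_L for step in action_steps)

good_plan = [{"action": "pick_up", "object": "mug"}, {"action": "place", "object": "sink"}]
bad_plan = [{"action": "pick_up", "object": "banana"}]  # "banana" is not in the scene
```

Samples failing the check are discarded before the dataset is used for tuning.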
2307.02502 | 14 |
paper titles (2.3 million) (Depue, 2023), and (2c) a scalable interactive visualization tool of the ACL Anthology (Association of Computational Linguistics) (85,000 papers) (Wang et al., 2023). Another project (2d) generates embeddings for disease priority by gene, from the KEGG Pathway and Human Phenotype Ontology databases as an input to CNN machine learning (Li & Gao, 2019). The interpretation of all graphs is that clustering produces a human-readable signal.
Figure 2. Examples of Embedding Visualizations: (a)-(c) Academic Papers and (d) Disease Priorities by Gene.
2a. 2017 Papers by Journal (Meijer et al., 2021); 2b. arXiv Paper Titles (subset) (Depue, 2023); 2c. Interactive ACL Literature Interface (Wang et al., 2023); 2d. Disease Priority by Gene (Li & Gao, 2019)
UMAP PROJECTION OF ARXIV ABSTRACTS | 2307.02502#14 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics |
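A projection of the kind shown in the figure can be sketched with a plain principal-component projection (numpy only; UMAP or t-SNE would be drop-in alternatives); the "title embeddings" below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic clusters of high-dimensional "paper title" embeddings.
cluster_a = rng.normal(loc=0.0, scale=0.1, size=(50, 64))
cluster_b = rng.normal(loc=1.0, scale=0.1, size=(50, 64))
embeddings = np.vstack([cluster_a, cluster_b])

# PCA via SVD: project to 2-D for plotting, as in the projections above.
centered = embeddings - embeddings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projected = centered @ vt[:2].T  # shape (100, 2)

# The clusters stay separated along the first principal component,
# which is the "human-readable signal" such plots are read for.
gap = abs(projected[:50, 0].mean() - projected[50:, 0].mean())
```

In practice the input vectors would come from a text-embedding model over titles or abstracts rather than random draws.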
2307.01848 | 15 | of the scene, the instruction and the executable action steps. In embodied task planning, the agent can only get access to the visual scene containing all interactive objects without the groundtruth object list. Therefore, we construct the multimodal dataset by defining triplets for each sample as X = (Xv, Xq, Xa). For the training stage of the task planner, we directly leverage the groundtruth object list for each scene to avoid the influence of inaccurate visual perception. For the inference phase, the extended open-vocabulary object detector predicts the list of all existing objects in the scene. | 2307.01848#15 | Embodied Task Planning with Large Language Models |
},
{
"id": "2106.09685"
},
{
"id": "2305.04160"
},
{
"id": "2006.07185"
},
{
"id": "2303.12712"
},
{
"id": "2305.03716"
},
{
"id": "2305.16103"
},
{
"id": "2212.04088"
},
{
"id": "2306.09093"
},
{
"id": "2306.00890"
}
] |
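As a minimal sketch of the TaPA triplet definition X = (Xv, Xq, Xa) described above, one training sample can be modeled as a simple record (the field names and contents are illustrative assumptions, not the released dataset's actual schema):

```python
from dataclasses import dataclass

# One dataset triplet X = (X_v, X_q, X_a); contents are illustrative.
@dataclass
class Triplet:
    scene_objects: list[str]   # X_v: groundtruth object list of the scene
    instruction: str           # X_q: human instruction
    action_plan: list[str]     # X_a: executable action steps

sample = Triplet(
    scene_objects=["sink", "sponge", "towel", "toilet", "scrub brush"],
    instruction="Can you clean the sink and the toilet, please?",
    action_plan=["Step 1: Grasp a sponge", "Step 2: Move to the sink"],
)
print(sample.instruction)
```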
2307.02502 | 15 | The current project introduces the mathematical embedding implemented with a GPT-based workflow to extract equations from published literature PDFs with Mathpix OCR and process them into LaTeX and Python embeddings. The mathematical embeddings are intended to cast mathematical equations into AI-readable format for further input to machine learning systems and the computational infrastructure. Mathematics as usable mobile units of digital abstraction could facilitate representation and evaluation of mathematical ecologies by humans and AIs.
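A minimal sketch of the equation-embedding idea: each LaTeX equation string is mapped to a high-dimensional vector and projected down to 2D for visualization, so that similar equations land near each other. A character-n-gram TF-IDF model stands in here for the GPT-based embedding service, and the example equations are illustrative:

```python
# Sketch of equation-level embedding (assumption: TF-IDF character n-grams
# stand in for the GPT-based embedding model used in the project).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import PCA

latex_equations = [
    r"\frac{dS}{dt} = -\beta S I",
    r"\frac{dI}{dt} = \beta S I - \gamma I",
    r"\frac{dR}{dt} = \gamma I",
    r"E = m c^2",
]

# High-dimensional embedding of each equation string.
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
vectors = vectorizer.fit_transform(latex_equations).toarray()

# Project to 2D for visualization; related equations tend to cluster.
coords = PCA(n_components=2).fit_transform(vectors)
for eq, (x, y) in zip(latex_equations, coords):
    print(f"({x:+.2f}, {y:+.2f})  {eq}")
```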
Math Agents
The Math Agent is conceived as a learning-agent actor for orchestrating the digital mathematical infrastructure. An AI agent more generally is an artificial intelligence-based entity (set of algorithms) tasked with learning and problem-solving in a particular context, with feedback loops from the environment, per a rewards-driven action-taking policy. The Math Agent is the idea of AI agents as the operating interface on the digital mathematical infrastructure, acting to identify, catalog, represent, analyze, integrate, write, discover, solve, theorem-prove, steward, and maintain mathematical ecologies. Math Agents could help to further populate and integrate the mathematical possibility space and solve vast classes of mathscapes towards knowledge generation. Math Agents ("mathGPT") could serve as the dialogical interface between humans and the mathematical corpus to extend its use to Geocities-Dreamweaver user classes. Math
2307.02502#15 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics
2307.01848 | 16 |
We employ the AI2-THOR simulator [49] as the embodied environment for our agent, where we split the scenes with 80 for training and 20 for evaluation. To enlarge the scale and diversity of instructions and action steps in training samples for effective task planner finetuning, we expand the original 80 training scenes to 6400 training scenes by directly modifying the groundtruth object list. For each scene type, we initially acquire the list of objects that possibly appear in this type of scene by enumerating all rooms in the same room type. Then we randomly substitute existing objects with other ones that possibly exist in the same room type and are not observed. The plausibility constraint aims to prevent generating counterintuitive objects for given scene types. We collected 15K samples for training and leverage another 60 triplets for evaluation with our multimodal data generation framework.
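The scene-expansion step can be sketched as follows (the room vocabulary and object names are illustrative assumptions, not the AI2-THOR inventory): existing objects are randomly replaced with objects that plausibly occur in the same room type but are not yet present, which preserves the plausibility constraint.

```python
import random

# Illustrative room vocabulary (assumption, not the AI2-THOR object list).
KITCHEN_VOCAB = {"Mug", "Toaster", "Kettle", "Pan", "Plate", "Knife", "Lettuce"}

def expand_scene(objects, room_vocab, n_swaps=2, seed=0):
    """Randomly substitute existing objects with plausible, unseen ones."""
    rng = random.Random(seed)
    objects = list(objects)
    # Candidates: objects plausible for this room type but not yet present.
    candidates = sorted(room_vocab - set(objects))
    for _ in range(min(n_swaps, len(candidates), len(objects))):
        idx = rng.randrange(len(objects))
        objects[idx] = candidates.pop(rng.randrange(len(candidates)))
    return objects

print(expand_scene(["Mug", "Pan", "Plate"], KITCHEN_VOCAB))
```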
2307.01848#16 | Embodied Task Planning with Large Language Models
2307.02502 | 16 |
Agent functionality is already starting to be visible in augmenting the digital mathematical infrastructure, facilitating tasks such as computer algebra system automation, algorithm discovery (classical and quantum), and automated theorem proving (Yang et al., 2023).
The ability of AI to understand mathematical structures in ways that are not as readily accessible to humans connotes the ability to possibly make unprecedented progress in elaborating the mathematics of various large-scale multi-faceted human-facing situations such as clean energy, disease pathologies, and space exploration. The risk is also substantial, highlighting the growing power differential between human and AI capabilities, for which the present solution is AI alignment via RLHF (reinforcement learning with human feedback) and HITL (human-in-the-loop) approaches to validate and censor AI output. | 2307.02502#16 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics
2307.01848 | 17 |
Instruction: Can you clean the sink and the toilet, please?
Task planning:
Step 1: Grasp a sponge
Step 2: Move to the sink
Step 3: Wet the sponge
Step 4: Scrub the sink
Step 5: Rinse the sponge
Step 6: Grasp a towel
Step 7: Dry the sink
Step 8: Move to the toilet
Step 9: Grasp a scrub brush
Step 10: Scrub the toilet bowl
Step 11: Place the scrub brush back in its place

Instruction: Can you make me a sandwich?
Task planning:
Step 1: Grasp a plate
Step 2: Grasp the knife
Step 3: Grasp a piece of bread
Step 4: Move the knife to the bread and slice it
Step 5: Grasp another piece of bread
Step 6: Move the knife to the bread and slice it
Step 7: Grasp a lettuce
Step 8: Tear the lettuce and place it on the plate
Step 9: Grasp a tomato
Step 10: Slice the tomato and place it on the plate
Step 11: Move the two slices of bread to the plate
Table 1: Examples of the generated multimodal triplet data including visual scenes, instructions and the corresponding plans.
# 3.2 Grounding Task Plans to Surrounding Scenes | 2307.01848#17 | Embodied Task Planning with Large Language Models
2307.02502 | 17 | Section 2: The Mathematical Embedding: Representation and Interpretation Methods A standard machine learning method, vector embedding, is employed with mathematical equations and genomic data as the input. Vector embedding is the algorithmic processing of data into character strings for high-dimensional analysis, which is carried out and then translated back into low-dimensional (2D) output for interpretation. In the mathematical embedding, equations are produced as vector embeddings in LaTeX (Figure 3) and SymPy (symbolic Python) (Figure 4). The result is that the whole of a mathematical ecology (set of equations) in a paper may be seen in one visualization. Further, the embedding visualization allows the comparative viewing of similar mathematical ecologies (Figure 7) as well as the math and the data together in one picture to assess correspondence between descriptive mathematics and underlying data (Figure 8). The mathematical embedding is at the equation level, but could also be executed at the symbol level, for "predict-next-symbol" in equation analysis, similar to predicting the next word, phoneme, or letter in word-based LLMs. The mathematical embedding visualization of a 476-equation AdS/CFT correspondence | 2307.02502#17 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics
2307.01848 | 18 |
In order to ground the embodied task plan to the physical world with feasibility constraints, it is necessary to accurately obtain the object list in the scene without missing instances or false positives. We generalize the open-vocabulary object detector for object list acquisition since novel objects unseen in detector training may appear in the deployment scenarios. As shown in Figure 1, the agent collects RGB images in different locations to perceive the visual scenes to discover existing objects. We design several image collection strategies to explore the surrounding 3D scenes. The location selection criteria contain traversal positions, random positions, the overall center point and block-wise center points, and the agent rotates the camera to obtain multi-view images for each location selection criterion. Therefore, we formally write the image collection strategies S in the following: | 2307.01848#18 | Embodied Task Planning with Large Language Models
2307.01848 | 19 |
S = {(x, y, θ) | (x, y) ∈ L(λ, A), θ = kθ0}   (1)
where (x, y, θ) represents the location and camera orientation. L(λ, A) means the location selection criteria with the hyperparameter λ, and all sampled locations are required to be within the achievable area A. The unit angle for camera rotation is set to θ0, and k is an integer so that the agent collects visual clues in different directions of the scene. The hyperparameter that all location selection criteria share is the grid side length, where we divide the achievable area into grids. Traversal positions choose all grid points for RGB image collection. Random positions only randomly select part of the grid points for visual information perception, and the hyperparameters also contain the ratio of sampled grid points. The overall center point stands for the center of the whole scene without any hyperparameters. The block-wise center points aim to choose the center of each division in the scene to efficiently acquire fine-grained visual information. Inspired by [50, 51], clustering methods can
2307.01848#19 | Embodied Task Planning with Large Language Models
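The location selection set S of Eq. (1) can be sketched for the traversal-positions criterion as follows. The grid side length 0.75 and unit angle 2π/3 mirror the settings reported with the method; the helper function itself is an assumption for illustration, not the authors' code:

```python
import math

# Enumerate S = {(x, y, theta) | (x, y) on the grid, theta = k * theta_0}
# for the traversal-positions criterion over a rectangular achievable area.
def collect_locations(x_max, y_max, grid=0.75, theta0=2 * math.pi / 3):
    S = []
    nx, ny = int(x_max / grid), int(y_max / grid)
    n_rot = round(2 * math.pi / theta0)       # number of camera orientations
    for i in range(nx + 1):
        for j in range(ny + 1):
            x, y = i * grid, j * grid         # traversal grid point (x, y)
            for k in range(n_rot):
                S.append((x, y, k * theta0))  # orientation theta = k * theta_0
    return S

S = collect_locations(1.5, 1.5)
print(len(S))  # 3 x 3 grid points, 3 orientations each
```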
2307.01848 | 20 |
Table 2: Comparison of different LLMs and LMMs on the task of embodied task planning. For the prompt of baseline methods, LLaMA and LLaVA both employ the same prompt as in their original finetuning phase, while GPT-3.5 adopts the same prompt as TaPA for multimodal data generation.
Method   Kit.   Living.  Bed.   Bath.  Avg.
LLaVA    14.29  42.11    33.33  0.00   22.43
GPT-3.5  28.57  73.68    66.67  50.00  54.73
LLaMA    0.00   10.52    13.33  0.00   5.96
TaPA     28.57  84.21    73.33  58.33  61.11
effectively divide the entire scene into several sub-regions to improve the performance of perception, so that the prior information of the room layout is embedded into the image collection strategy with the K-means clustering method. Meanwhile, we employ the within-cluster sum of squared errors (WCSS) principle to select the optimal number of clusters for each scene. Compared to the image collection strategy of traversal points, the block-wise center point strategy only traverses the centroids of the sub-regions to acquire sufficient visual information. | 2307.01848#20 | Embodied Task Planning with Large Language Models
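The block-wise center-point strategy with WCSS-based cluster selection can be sketched as follows. The 2D positions are synthetic stand-ins for a scene's achievable area, and the largest-relative-drop elbow heuristic is one possible reading of the WCSS principle, not necessarily the authors' exact criterion:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic achievable positions: three well-separated groups of 2D points.
rng = np.random.default_rng(0)
positions = np.vstack([
    rng.normal(loc=(0, 0), scale=0.3, size=(40, 2)),
    rng.normal(loc=(3, 3), scale=0.3, size=(40, 2)),
    rng.normal(loc=(0, 3), scale=0.3, size=(40, 2)),
])

# WCSS (inertia) for candidate cluster counts.
wcss = {k: KMeans(n_clusters=k, n_init=10, random_state=0)
             .fit(positions).inertia_ for k in range(1, 7)}

# Elbow heuristic: pick k with the largest relative WCSS drop.
best_k = max(range(2, 7), key=lambda k: (wcss[k - 1] - wcss[k]) / wcss[k - 1])

# The cluster centroids are the block-wise center points to visit.
centers = KMeans(n_clusters=best_k, n_init=10,
                 random_state=0).fit(positions).cluster_centers_
print(best_k, centers.round(2))
```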
2307.01848 | 21 | The embodied task planner requires the information of all existing objects in the scene to generate executable action steps, where we generalize the open-vocabulary object detector to the collected multi-view RGB images for the object list acquisition. The predicted object list X̂l for the scene is acquired by removing the duplicated object names in the detection results of multi-view images:
X̂l = Rd(∪i D(Ii))   (2)
where Rd is the operation of removing duplicate object names and D(Ii) represents the detected object names for the ith RGB image collected in the scene. With our inference prompt Pin shown in Table 5 of the supplementary material, the human instruction Xq and the predicted object list X̂l are considered in our TaPA to generate the executable action plans Xa:
Xa = TaPA(Pin, X̂l, Xq)   (3)
By combining the perception results of existing objects X̂l with the instruction Xq, TaPA gives the executable action sequence Xa to complete the requirements of Xq according to the realistic scene constraint. According to our empirical study, we chose the block-wise center point strategy for multi-view RGB image collection. The grid size in our location selection criteria is set to 0.75 and the unit angle for camera rotation is 2π/3.
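Eq. (2) amounts to a deduplicated union over per-view detections. A minimal sketch with illustrative detection results (the object names are assumptions, not actual detector output):

```python
# Per-view detection results D(I_i); names are illustrative.
detections_per_view = [
    ["sink", "towel", "sponge"],          # D(I_1)
    ["sink", "toilet", "scrub brush"],    # D(I_2)
    ["towel", "mirror"],                  # D(I_3)
]

def object_list(detections):
    """R_d: union over views with duplicate object names removed."""
    seen, objects = set(), []
    for view in detections:
        for name in view:
            if name not in seen:
                seen.add(name)
                objects.append(name)
    return objects

X_l = object_list(detections_per_view)
print(X_l)  # ['sink', 'towel', 'sponge', 'toilet', 'scrub brush', 'mirror']
```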
# 4 Experiment | 2307.01848#21 | Embodied Task Planning with Large Language Models
2307.02502 | 21 | # Page 6
Interpretatively, there is definite structure to each graph, suggesting embedding-identified interrelation between the 476 equations, realized slightly differently in each embedding model. The cluster concentration and use of white space indicate the grouping of similar kinds of equations, irrespective of their order of appearance in the linear progression of the paper. The units of the x and y axes (including negative values) are related to the embedding model-posted output to the graph and do not have human-interpretive value. More important is the relative clustering of the points (the embeddings are most conceptually similar to kernel learning methods). The Python code versions (Figure 4) show heightened equation clustering as compared with the LaTeX versions (Figure 3), mainly reflecting how the two representations are themselves different code-based languages of mathematics, both rendering mathematics in a usable digital format. LaTeX embedding visualizations are portrayed in this analysis, but for the next phase of equation evaluation, the Python code is implicated as being more mobile in readily joining the computational infrastructure. The OpenAI embedding model is also used as the standard in subsequent figures appearing in this analysis. Annotated views of the OpenAI model appear in Figure 5, illustrating (a) how the embedding method groups similar kinds of equations, and (b) the mouse-over view of equations annotated by equation number. | 2307.02502#21 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics | The advancement in generative AI could be boosted with more accessible
mathematics. Beyond human-AI chat, large language models (LLMs) are emerging in
programming, algorithm discovery, and theorem proving, yet their genomics
application is limited. This project introduces Math Agents and mathematical
embedding as fresh entries to the "Moore's Law of Mathematics", using a
GPT-based workflow to convert equations from literature into LaTeX and Python
formats. While many digital equation representations exist, there's a lack of
automated large-scale evaluation tools. LLMs are pivotal as linguistic user
interfaces, providing natural language access for human-AI chat and formal
languages for large-scale AI-assisted computational infrastructure. Given the
infinite formal possibility spaces, Math Agents, which interact with math,
could potentially shift us from "big data" to "big math". Math, unlike the more
flexible natural language, has properties subject to proof, enabling its use
beyond traditional applications like high-validation math-certified icons for
AI alignment aims. This project aims to use Math Agents and mathematical
embeddings to address the ageing issue in information systems biology by
applying multiscalar physics mathematics to disease models and genomic data.
Generative AI with episodic memory could help analyse causal relations in
longitudinal health records, using SIR Precision Health models. Genomic data is
suggested for addressing the unsolved Alzheimer's disease problem. | http://arxiv.org/pdf/2307.02502 | Melanie Swan, Takashi Kido, Eric Roland, Renato P. dos Santos | q-bio.OT, cs.AI, cs.CL, 68R12, I.2; J.3 | null | null | q-bio.OT | 20230704 | 20230704 | [
{
"id": "1601.00257"
},
{
"id": "2306.15626"
},
{
"id": "2306.09328"
},
{
"id": "2306.15794"
},
{
"id": "2107.05151"
}
] |
2307.01848 | 22 | # 4 Experiment
In this section, we conduct extensive experiments with our generated multimodal dataset, where the visual scenes come from the AI2-THOR simulator. We first introduce the evaluation metric of the generated action plans. Then we compare our TaPA with state-of-the-art LLMs and LMMs to show our superiority in embodied task planning. To further explore the effectiveness of different scene information embedding approaches, we evaluate various image collection strategies in our ablation study. We employ the LLaMA-7B pre-trained language model as the backbone of our task planner, which is finetuned with our generated multimodal dataset. The maximum token number of our task planner is set to 512, and we leverage the Detic open-vocabulary object detection framework to collect the information of existing objects. All experiments were accelerated by 8 RTX 3090 GPUs.
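The inference flow described above (collect multi-view images, run the open-vocabulary detector, merge the detections into a scene object list, and prompt the finetuned planner) can be sketched as follows. This is a minimal illustration only: the function names and prompt format are stand-ins, not TaPA's actual code.

```python
# Hypothetical sketch of the TaPA inference loop: per-view detections are
# merged into one scene object list, which, together with the instruction,
# forms the input prompt for the finetuned planner.

def merge_detections(per_view_detections):
    """Union the object labels detected across all collected views."""
    objects = set()
    for labels in per_view_detections:
        objects.update(labels)
    return sorted(objects)

def build_planner_prompt(instruction, scene_objects):
    """Compose the planner input from the instruction and the object list."""
    return (
        f"Objects in the scene: {', '.join(scene_objects)}\n"
        f"Instruction: {instruction}\n"
        f"Action plan:"
    )

# Example: detections from three viewpoints of a kitchen scene.
views = [["mug", "sink"], ["sink", "kettle"], ["mug", "counter"]]
prompt = build_planner_prompt("Make a cup of tea", merge_detections(views))
print(prompt)
```

In the actual system the merged list comes from Detic detections over multi-view RGB images, and the prompt is consumed by the finetuned LLaMA-7B planner.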
# 4.1 Evaluation Metrics
For the deployment of our TaPA, we feed the instructions and the predicted object list in the scene to the task planner to generate the action steps. We hired 30 researchers in large multimodal models as volunteers to vote for the success of the generated action plans, and each generated action plan is evaluated by three volunteers. The volunteers are shown the ground-truth object list of each scene, the instruction and the generated action plans, and should judge whether implementing the action steps can successfully complete the instruction. There are two types of failure
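The 2-of-3 voting protocol above reduces to a small computation. The sketch below, with made-up votes, shows how a success rate would be tallied under that rule.

```python
# A plan counts as successful when at least two of its three annotators
# judge it executable; the success rate is the fraction of successful plans.

def plan_succeeds(votes):
    """votes: three booleans, one per volunteer."""
    return sum(votes) >= 2

def success_rate(all_votes):
    successes = [plan_succeeds(v) for v in all_votes]
    return 100.0 * sum(successes) / len(successes)

# Illustrative votes for four generated plans.
votes = [
    (True, True, False),   # success (2 of 3)
    (False, False, True),  # failure (1 of 3)
    (True, True, True),    # success (3 of 3)
    (False, True, False),  # failure (1 of 3)
]
print(success_rate(votes))  # 2 of 4 plans pass the 2-of-3 threshold -> 50.0
```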
6 | 2307.01848#22 | Embodied Task Planning with Large Language Models | Equipping embodied agents with commonsense is important for robots to
successfully complete complex human instructions in general environments.
Recent large language models (LLM) can embed rich semantic knowledge for agents
in plan generation of complex tasks, while they lack the information about the
realistic world and usually yield infeasible action sequences. In this paper,
we propose a TAsk Planing Agent (TaPA) in embodied tasks for grounded planning
with physical scene constraint, where the agent generates executable plans
according to the existed objects in the scene by aligning LLMs with the visual
perception models. Specifically, we first construct a multimodal dataset
containing triplets of indoor scenes, instructions and action plans, where we
provide the designed prompts and the list of existing objects in the scene for
GPT-3.5 to generate a large number of instructions and corresponding planned
actions. The generated data is leveraged for grounded plan tuning of
pre-trained LLMs. During inference, we discover the objects in the scene by
extending open-vocabulary object detectors to multi-view RGB images collected
in different achievable locations. Experimental results show that the generated
plan from our TaPA framework can achieve higher success rate than LLaVA and
GPT-3.5 by a sizable margin, which indicates the practicality of embodied task
planning in general and complex environments. | http://arxiv.org/pdf/2307.01848 | Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, Haibin Yan | cs.CV, cs.AI, cs.RO | Project Page: https://gary3410.github.io/TaPA | null | cs.CV | 20230704 | 20230704 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1712.05474"
},
{
"id": "2302.04761"
},
{
"id": "2306.08640"
},
{
"id": "2112.12143"
},
{
"id": "2305.15964"
},
{
"id": "2304.14178"
},
{
"id": "2203.12601"
},
{
"id": "2304.08485"
},
{
"id": "2305.15072"
},
{
"id": "2304.10592"
},
{
"id": "2305.05658"
},
{
"id": "2301.12597"
},
{
"id": "2305.18279"
},
{
"id": "2106.00188"
},
{
"id": "2305.03726"
},
{
"id": "2303.16199"
},
{
"id": "2304.03277"
},
{
"id": "2106.09685"
},
{
"id": "2305.04160"
},
{
"id": "2006.07185"
},
{
"id": "2303.12712"
},
{
"id": "2305.03716"
},
{
"id": "2305.16103"
},
{
"id": "2212.04088"
},
{
"id": "2306.09093"
},
{
"id": "2306.00890"
}
] |
2307.02502 | 22 | Figure 5. Annotated Equation Clusters in Embedding Visualizations: (a) Grouping-view and (b) Equation-view. (a). The Embedding Clusters are Groupings of Similar Equations (Figure 3 OpenAI).
[Figure 5(a) image: scatter plot of equation embeddings with annotated cluster labels, including expectation value calculations (<E> =), integral operations, black hole gravity calculations (G =), action (S =), metric (ds =), equations of motion and angular momentum, operator operations (O =, L_i =), sin/cos, and wavefunction operations (psi =).]
(b). The Embedding Mouse-over View Shows Equation and Equation Number (Figure 3 OpenAI inlay). | 2307.02502#22 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics | The advancement in generative AI could be boosted with more accessible
mathematics. Beyond human-AI chat, large language models (LLMs) are emerging in
programming, algorithm discovery, and theorem proving, yet their genomics
application is limited. This project introduces Math Agents and mathematical
embedding as fresh entries to the "Moore's Law of Mathematics", using a
GPT-based workflow to convert equations from literature into LaTeX and Python
formats. While many digital equation representations exist, there's a lack of
automated large-scale evaluation tools. LLMs are pivotal as linguistic user
interfaces, providing natural language access for human-AI chat and formal
languages for large-scale AI-assisted computational infrastructure. Given the
infinite formal possibility spaces, Math Agents, which interact with math,
could potentially shift us from "big data" to "big math". Math, unlike the more
flexible natural language, has properties subject to proof, enabling its use
beyond traditional applications like high-validation math-certified icons for
AI alignment aims. This project aims to use Math Agents and mathematical
embeddings to address the ageing issue in information systems biology by
applying multiscalar physics mathematics to disease models and genomic data.
Generative AI with episodic memory could help analyse causal relations in
longitudinal health records, using SIR Precision Health models. Genomic data is
suggested for addressing the unsolved Alzheimer's disease problem. | http://arxiv.org/pdf/2307.02502 | Melanie Swan, Takashi Kido, Eric Roland, Renato P. dos Santos | q-bio.OT, cs.AI, cs.CL, 68R12, I.2; J.3 | null | null | q-bio.OT | 20230704 | 20230704 | [
{
"id": "1601.00257"
},
{
"id": "2306.15626"
},
{
"id": "2306.09328"
},
{
"id": "2306.15794"
},
{
"id": "2107.05151"
}
] |
2307.02502 | 23 | (b). The Embedding Mouse-over View Shows Equation and Equation Number (Figure 3 OpenAI inlay).
[Figure 3 OpenAI inlay image: mouse-over view of the embedding plot showing metric equations annotated with their equation numbers, e.g. dOmega^2 = dtheta^2 + sin^2(theta) dphi^2, ds^2 = dt^2 - cosh^2(t) dOmega^2 (5.7), and related ds^2 expressions (5.2, 5.18, 5.20, 5.38, ...).]
# Page 7 | 2307.02502#23 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics | The advancement in generative AI could be boosted with more accessible
mathematics. Beyond human-AI chat, large language models (LLMs) are emerging in
programming, algorithm discovery, and theorem proving, yet their genomics
application is limited. This project introduces Math Agents and mathematical
embedding as fresh entries to the "Moore's Law of Mathematics", using a
GPT-based workflow to convert equations from literature into LaTeX and Python
formats. While many digital equation representations exist, there's a lack of
automated large-scale evaluation tools. LLMs are pivotal as linguistic user
interfaces, providing natural language access for human-AI chat and formal
languages for large-scale AI-assisted computational infrastructure. Given the
infinite formal possibility spaces, Math Agents, which interact with math,
could potentially shift us from "big data" to "big math". Math, unlike the more
flexible natural language, has properties subject to proof, enabling its use
beyond traditional applications like high-validation math-certified icons for
AI alignment aims. This project aims to use Math Agents and mathematical
embeddings to address the ageing issue in information systems biology by
applying multiscalar physics mathematics to disease models and genomic data.
Generative AI with episodic memory could help analyse causal relations in
longitudinal health records, using SIR Precision Health models. Genomic data is
suggested for addressing the unsolved Alzheimer's disease problem. | http://arxiv.org/pdf/2307.02502 | Melanie Swan, Takashi Kido, Eric Roland, Renato P. dos Santos | q-bio.OT, cs.AI, cs.CL, 68R12, I.2; J.3 | null | null | q-bio.OT | 20230704 | 20230704 | [
{
"id": "1601.00257"
},
{
"id": "2306.15626"
},
{
"id": "2306.09328"
},
{
"id": "2306.15794"
},
{
"id": "2107.05151"
}
] |
2307.01848 | 24 | Strategy and Parameters Traversal Random (G=0.75) Layout Priori (G=0.75,D=60) G=0.25, D=60 G=0.25, D=120 G=0.75, D=60 G=0.75, D=120 N=1%, D=60 N=1%, D=120 N=75%, D=60 N=75%, D=120 Overall Center Partial Center #Images 782.4 391.2 80.7 40.4 6.0 3.0 63.0 31.5 6.0 23.1 Kit. 14.29 14.29 28.57 14.29 28.57 21.43 35.71 28.57 28.57 28.57 Living. 73.68 73.68 73.68 63.16 78.95 73.68 73.68 73.68 68.42 84.21 Bed. 46.67 53.33 46.67 60.00 26.67 46.67 53.33 53.33 33.33 73.33 Bath. 33.33 50.00 33.33 41.67 50.00 50.00 25.00 33.33 58.33 58.33 Avg. 41.99 47.83 45.56 44.78 46.05 47.95 46.93 47.23 47.16 61.11 | 2307.01848#24 | Embodied Task Planning with Large Language Models | Equipping embodied agents with commonsense is important for robots to
successfully complete complex human instructions in general environments.
Recent large language models (LLM) can embed rich semantic knowledge for agents
in plan generation of complex tasks, while they lack the information about the
realistic world and usually yield infeasible action sequences. In this paper,
we propose a TAsk Planing Agent (TaPA) in embodied tasks for grounded planning
with physical scene constraint, where the agent generates executable plans
according to the existed objects in the scene by aligning LLMs with the visual
perception models. Specifically, we first construct a multimodal dataset
containing triplets of indoor scenes, instructions and action plans, where we
provide the designed prompts and the list of existing objects in the scene for
GPT-3.5 to generate a large number of instructions and corresponding planned
actions. The generated data is leveraged for grounded plan tuning of
pre-trained LLMs. During inference, we discover the objects in the scene by
extending open-vocabulary object detectors to multi-view RGB images collected
in different achievable locations. Experimental results show that the generated
plan from our TaPA framework can achieve higher success rate than LLaVA and
GPT-3.5 by a sizable margin, which indicates the practicality of embodied task
planning in general and complex environments. | http://arxiv.org/pdf/2307.01848 | Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, Haibin Yan | cs.CV, cs.AI, cs.RO | Project Page: https://gary3410.github.io/TaPA | null | cs.CV | 20230704 | 20230704 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1712.05474"
},
{
"id": "2302.04761"
},
{
"id": "2306.08640"
},
{
"id": "2112.12143"
},
{
"id": "2305.15964"
},
{
"id": "2304.14178"
},
{
"id": "2203.12601"
},
{
"id": "2304.08485"
},
{
"id": "2305.15072"
},
{
"id": "2304.10592"
},
{
"id": "2305.05658"
},
{
"id": "2301.12597"
},
{
"id": "2305.18279"
},
{
"id": "2106.00188"
},
{
"id": "2305.03726"
},
{
"id": "2303.16199"
},
{
"id": "2304.03277"
},
{
"id": "2106.09685"
},
{
"id": "2305.04160"
},
{
"id": "2006.07185"
},
{
"id": "2303.12712"
},
{
"id": "2305.03716"
},
{
"id": "2305.16103"
},
{
"id": "2212.04088"
},
{
"id": "2306.09093"
},
{
"id": "2306.00890"
}
] |
2307.02502 | 24 | # Page 7
# Data Selection

This pilot project demonstrates the mathematical embedding as a step towards the realization of the AdS/Brain research program (applying mathematical physics to the biopathology of aging). A total of 10,592 embeddings were created for math and data (Figure 6). Embeddings were produced for the equations in the mathematical ecologies of 10 selected papers, and for the entirety of genomic variants (377 total, 276 unique) that have been associated with Alzheimer's disease, Parkinson's disease, and ALS (amyotrophic lateral sclerosis) in GWAS and EWAS studies (genome-wide and epigenome-wide association studies; underlying risk + expressed transcripts). An earlier project phase extracted the SNPs from research papers with GPT. Embeddings were generated for each equation and each RSID (genomic variant or SNP (e.g. rs6504163)) based on gene, effect, study size, and p-value. Eight embeddings were produced for each data element (equation or RSID), one for each embedding model (OpenAI, MultiQA, CodeSearch, MathBert) in LaTeX and SymPy (symbolic Python); OpenAI and LaTeX versions are reported in the results.
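The two record types above (LaTeX equations, and RSID entries carrying gene, effect, study size, and p-value) could be serialized into texts for an embedding model along the following lines. This is a hedged sketch: the `embed()` function is a deterministic stand-in, not the OpenAI, MultiQA, CodeSearch, or MathBert models actually used, and the gene and study values for rs6504163 are placeholders, not the study's data.

```python
# Serialize equations and SNP records into texts, then embed each text.
# embed() is a toy deterministic stand-in for a real embedding model call.

def rsid_to_text(rsid, gene, effect, study_size, p_value):
    """Flatten one SNP record into a single text for embedding."""
    return f"{rsid} gene={gene} effect={effect} n={study_size} p={p_value}"

def embed(text, dim=8):
    """Placeholder embedding: fold character codes into a fixed-size vector."""
    vec = [0.0] * dim
    for i, ch in enumerate(text):
        vec[i % dim] += ord(ch) / 1000.0
    return vec

records = [
    r"ds^2 = dt^2 - \cosh^2(t)\, d\Omega^2",  # a LaTeX equation
    rsid_to_text("rs6504163", "GENE_X", "risk", 10000, 5e-8),  # hypothetical values
]
vectors = [embed(r) for r in records]
print(len(vectors), len(vectors[0]))
```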
# Figure 6. Embeddings Inventory. | 2307.02502#24 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics | The advancement in generative AI could be boosted with more accessible
mathematics. Beyond human-AI chat, large language models (LLMs) are emerging in
programming, algorithm discovery, and theorem proving, yet their genomics
application is limited. This project introduces Math Agents and mathematical
embedding as fresh entries to the "Moore's Law of Mathematics", using a
GPT-based workflow to convert equations from literature into LaTeX and Python
formats. While many digital equation representations exist, there's a lack of
automated large-scale evaluation tools. LLMs are pivotal as linguistic user
interfaces, providing natural language access for human-AI chat and formal
languages for large-scale AI-assisted computational infrastructure. Given the
infinite formal possibility spaces, Math Agents, which interact with math,
could potentially shift us from "big data" to "big math". Math, unlike the more
flexible natural language, has properties subject to proof, enabling its use
beyond traditional applications like high-validation math-certified icons for
AI alignment aims. This project aims to use Math Agents and mathematical
embeddings to address the ageing issue in information systems biology by
applying multiscalar physics mathematics to disease models and genomic data.
Generative AI with episodic memory could help analyse causal relations in
longitudinal health records, using SIR Precision Health models. Genomic data is
suggested for addressing the unsolved Alzheimer's disease problem. | http://arxiv.org/pdf/2307.02502 | Melanie Swan, Takashi Kido, Eric Roland, Renato P. dos Santos | q-bio.OT, cs.AI, cs.CL, 68R12, I.2; J.3 | null | null | q-bio.OT | 20230704 | 20230704 | [
{
"id": "1601.00257"
},
{
"id": "2306.15626"
},
{
"id": "2306.09328"
},
{
"id": "2306.15794"
},
{
"id": "2107.05151"
}
] |
cases including counterfactuals and hallucination. Counterfactuals indicate that the plans violate the physical rules in the real world (e.g. grasping the doorknob before moving to the door), and hallucination means the action plans require the agent to interact with objects that do not exist in the scene. An exceptional case is when the interacting object is part of an object existing in the scene (e.g. trash can lid and trash can) or a synonym of an existing object (e.g. mug and cup). A generated action plan is considered successful if at least two of the three volunteers judge that its steps can be implemented to satisfy the human instruction. The volunteers were also requested to annotate the type of failure for the unsuccessful cases. We report the ratio of successful cases for different scene types and plan generation models.
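The hallucination criterion above, with its part-of and synonym exceptions, can be sketched as a simple grounding check. This is an illustration, not the annotation protocol's implementation; the synonym table is a made-up example.

```python
# Flag an action's object as hallucinated unless it is in the scene list,
# contains the name of a scene object (part-of, e.g. "trash can lid"),
# or is a listed synonym of a scene object (e.g. "mug" for "cup").

SYNONYMS = {"mug": "cup", "cup": "mug"}  # illustrative, not exhaustive

def is_grounded(obj, scene_objects):
    if obj in scene_objects:
        return True
    # part-of: the mentioned object embeds the name of a scene object
    if any(scene_obj in obj for scene_obj in scene_objects):
        return True
    # synonym lookup
    return SYNONYMS.get(obj) in scene_objects

scene = ["trash can", "cup", "sofa"]
print(is_grounded("trash can lid", scene))  # True: part of "trash can"
print(is_grounded("mug", scene))            # True: synonym of "cup"
print(is_grounded("television", scene))     # False: hallucinated object
```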
# 4.2 Experimental Results | 2307.01848#25 | Embodied Task Planning with Large Language Models | Equipping embodied agents with commonsense is important for robots to
successfully complete complex human instructions in general environments.
Recent large language models (LLM) can embed rich semantic knowledge for agents
in plan generation of complex tasks, while they lack the information about the
realistic world and usually yield infeasible action sequences. In this paper,
we propose a TAsk Planing Agent (TaPA) in embodied tasks for grounded planning
with physical scene constraint, where the agent generates executable plans
according to the existed objects in the scene by aligning LLMs with the visual
perception models. Specifically, we first construct a multimodal dataset
containing triplets of indoor scenes, instructions and action plans, where we
provide the designed prompts and the list of existing objects in the scene for
GPT-3.5 to generate a large number of instructions and corresponding planned
actions. The generated data is leveraged for grounded plan tuning of
pre-trained LLMs. During inference, we discover the objects in the scene by
extending open-vocabulary object detectors to multi-view RGB images collected
in different achievable locations. Experimental results show that the generated
plan from our TaPA framework can achieve higher success rate than LLaVA and
GPT-3.5 by a sizable margin, which indicates the practicality of embodied task
planning in general and complex environments. | http://arxiv.org/pdf/2307.01848 | Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, Haibin Yan | cs.CV, cs.AI, cs.RO | Project Page: https://gary3410.github.io/TaPA | null | cs.CV | 20230704 | 20230704 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1712.05474"
},
{
"id": "2302.04761"
},
{
"id": "2306.08640"
},
{
"id": "2112.12143"
},
{
"id": "2305.15964"
},
{
"id": "2304.14178"
},
{
"id": "2203.12601"
},
{
"id": "2304.08485"
},
{
"id": "2305.15072"
},
{
"id": "2304.10592"
},
{
"id": "2305.05658"
},
{
"id": "2301.12597"
},
{
"id": "2305.18279"
},
{
"id": "2106.00188"
},
{
"id": "2305.03726"
},
{
"id": "2303.16199"
},
{
"id": "2304.03277"
},
{
"id": "2106.09685"
},
{
"id": "2305.04160"
},
{
"id": "2006.07185"
},
{
"id": "2303.12712"
},
{
"id": "2305.03716"
},
{
"id": "2305.16103"
},
{
"id": "2212.04088"
},
{
"id": "2306.09093"
},
{
"id": "2306.00890"
}
] |
2307.02502 | 25 | # Figure 6. Embeddings Inventory.
| # | Category | Reference | Embeddings |
|---|---|---|---|
|  | Mathematics-Physics |  |  |
| 1 | AdS/CFT Correspondence | Hashimoto 2021 | 43 |
| 2 | AdS/CFT Correspondence | Guo 2016 | 68 |
| 3 | AdS/CFT Correspondence | Kaplan 2016 | 476 |
| 4 | Chern-Simons: DNA-RNA host-virus | Capozziello 2018 | 58 |
|  | Mathematics-Disease Modeling |  |  |
| 5 | Alzheimer's disease: transposon dynamics | Banuelos-Sindi 2018 | 44 |
| 6 | Alzheimer's disease: multiscalar aggregates | Kuhn-Sindi 2019 | 20 |
| 7 | Alzheimer's disease: tau phosphorylation | Hao 2016 | 20 |
| 8 | Alzheimer's disease: protein kinetics | Fornari 2020 | 86 |
| 9 | Alzheimer's disease: protein clearance | Thompson 2021 | 94 |
| 10 | SIR Compartmental Model (control) | Wyss 2023 | 38 |
|  | Total Mathematical Equations |  | 947 |
|  | Genomic Data |  |  |
| 11 | Alzheimer's, Parkinson's & ALS GWAS-EWAS | Various (15 papers) | 377 |
|  | Total SNPs (RSIDs) |  | 377 |
|  | Total Embeddings: OpenAI |  | 1,324 |
|  | Total Embeddings: OpenAI, MultiQA, CodeSearch, MathBert | (x4) | 5,296 |
|  | Total Embeddings: LaTeX and SymPy | (x2) | 10,592 |
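The inventory totals in Figure 6 are internally consistent, which a quick arithmetic check confirms: the per-paper equation counts sum to 947, adding the 377 SNP records gives 1,324 OpenAI embeddings, and four models times two formats (LaTeX, SymPy) multiply that to 10,592.

```python
# Verify the totals reported in the Figure 6 embeddings inventory.
equation_counts = [43, 68, 476, 58, 44, 20, 20, 86, 94, 38]
total_equations = sum(equation_counts)   # Total Mathematical Equations: 947
total_openai = total_equations + 377     # Total Embeddings, OpenAI: 1,324
total_all = total_openai * 4 * 2         # 4 models x 2 formats: 10,592
print(total_equations, total_openai, total_all)
```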
GWAS-EWAS (genome-wide and epigenome-wide association studies) | 2307.02502#25 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics | The advancement in generative AI could be boosted with more accessible
mathematics. Beyond human-AI chat, large language models (LLMs) are emerging in
programming, algorithm discovery, and theorem proving, yet their genomics
application is limited. This project introduces Math Agents and mathematical
embedding as fresh entries to the "Moore's Law of Mathematics", using a
GPT-based workflow to convert equations from literature into LaTeX and Python
formats. While many digital equation representations exist, there's a lack of
automated large-scale evaluation tools. LLMs are pivotal as linguistic user
interfaces, providing natural language access for human-AI chat and formal
languages for large-scale AI-assisted computational infrastructure. Given the
infinite formal possibility spaces, Math Agents, which interact with math,
could potentially shift us from "big data" to "big math". Math, unlike the more
flexible natural language, has properties subject to proof, enabling its use
beyond traditional applications like high-validation math-certified icons for
AI alignment aims. This project aims to use Math Agents and mathematical
embeddings to address the ageing issue in information systems biology by
applying multiscalar physics mathematics to disease models and genomic data.
Generative AI with episodic memory could help analyse causal relations in
longitudinal health records, using SIR Precision Health models. Genomic data is
suggested for addressing the unsolved Alzheimer's disease problem. | http://arxiv.org/pdf/2307.02502 | Melanie Swan, Takashi Kido, Eric Roland, Renato P. dos Santos | q-bio.OT, cs.AI, cs.CL, 68R12, I.2; J.3 | null | null | q-bio.OT | 20230704 | 20230704 | [
{
"id": "1601.00257"
},
{
"id": "2306.15626"
},
{
"id": "2306.09328"
},
{
"id": "2306.15794"
},
{
"id": "2107.05151"
}
] |
2307.01848 | 26 | # 4.2 Experimental Results
In this section, we compare our TaPA method with state-of-the-art LLMs including LLaMA and GPT-3.5 and LMMs including LLaVA on 60 validation samples, and the success rates of the generated action steps from the different methods are shown in Table 2. TaPA achieves optimal performance among all large models on all four scenes including kitchen, living room, bedroom and bathroom, and the average success rate of TaPA is 6.38% (61.11% vs. 54.73%) higher than GPT-3.5 on the task of embodied task planning after instruction finetuning.
successfully complete complex human instructions in general environments.
Recent large language models (LLM) can embed rich semantic knowledge for agents
in plan generation of complex tasks, while they lack the information about the
realistic world and usually yield infeasible action sequences. In this paper,
we propose a TAsk Planing Agent (TaPA) in embodied tasks for grounded planning
with physical scene constraint, where the agent generates executable plans
according to the existed objects in the scene by aligning LLMs with the visual
perception models. Specifically, we first construct a multimodal dataset
containing triplets of indoor scenes, instructions and action plans, where we
provide the designed prompts and the list of existing objects in the scene for
GPT-3.5 to generate a large number of instructions and corresponding planned
actions. The generated data is leveraged for grounded plan tuning of
pre-trained LLMs. During inference, we discover the objects in the scene by
extending open-vocabulary object detectors to multi-view RGB images collected
in different achievable locations. Experimental results show that the generated
plan from our TaPA framework can achieve higher success rate than LLaVA and
GPT-3.5 by a sizable margin, which indicates the practicality of embodied task
planning in general and complex environments. | http://arxiv.org/pdf/2307.01848 | Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, Haibin Yan | cs.CV, cs.AI, cs.RO | Project Page: https://gary3410.github.io/TaPA | null | cs.CV | 20230704 | 20230704 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1712.05474"
},
{
"id": "2302.04761"
},
{
"id": "2306.08640"
},
{
"id": "2112.12143"
},
{
"id": "2305.15964"
},
{
"id": "2304.14178"
},
{
"id": "2203.12601"
},
{
"id": "2304.08485"
},
{
"id": "2305.15072"
},
{
"id": "2304.10592"
},
{
"id": "2305.05658"
},
{
"id": "2301.12597"
},
{
"id": "2305.18279"
},
{
"id": "2106.00188"
},
{
"id": "2305.03726"
},
{
"id": "2303.16199"
},
{
"id": "2304.03277"
},
{
"id": "2106.09685"
},
{
"id": "2305.04160"
},
{
"id": "2006.07185"
},
{
"id": "2303.12712"
},
{
"id": "2305.03716"
},
{
"id": "2305.16103"
},
{
"id": "2212.04088"
},
{
"id": "2306.09093"
},
{
"id": "2306.00890"
}
] |
2307.02502 | 26 | GWAS-EWAS (genome-wide and epigenome-wide association studies)
# Mathematical Embedding Workflow

To treat mathematical equations as the data corpus, a novel AI-driven machine learning workflow was developed. Equations were identified and extracted from PDF papers with OCR into Mathpix images and then produced as LaTeX and SymPy (Symbolic Python) embeddings using mathematics-conducive standard embedding models (OpenAI, MultiQA, CodeSearch, and MathBert). GPT was used as a core computational component in the data processing workflow to validate mathematical output and embeddings. Other similar workflows for equation digitization are starting to be proposed (Eskildsen, 2023).
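The extraction-to-visualization workflow above (equation image to LaTeX, LaTeX to SymPy, text to embedding vector, vectors to a 2-D projection) can be condensed into a pipeline sketch. Every stage here is a toy stand-in under stated assumptions: the conversion and embedding steps are placeholders for the GPT-validated tooling and embedding models named above, and `reduce_2d()` stands in for the UMAP projection (in umap-learn, `umap.UMAP(n_components=2).fit_transform(...)`).

```python
# Toy end-to-end sketch: LaTeX equation -> SymPy-ish string -> embedding
# vector -> 2-D point for plotting. All stages are illustrative stand-ins.

def latex_to_sympy(latex):
    """Placeholder for a GPT-validated LaTeX -> SymPy conversion step."""
    return latex.replace("^", "**")

def embed(text, dim=4):
    """Placeholder embedding: fold character codes into a fixed-size vector."""
    vec = [0.0] * dim
    for i, ch in enumerate(text):
        vec[i % dim] += ord(ch)
    return vec

def reduce_2d(vectors):
    """Stand-in projection (the project used UMAP for this step)."""
    return [(v[0], v[1]) for v in vectors]

latex_eqs = [r"E = mc^2", r"ds^2 = dt^2 - \cosh^2(t) d\Omega^2"]
points = reduce_2d([embed(latex_to_sympy(eq)) for eq in latex_eqs])
print(len(points), len(points[0]))
```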
UMAP (uniform manifold approximation and projection) was selected as the dimensionality reduction method. This is because UMAP has more mathematical well-formedness features for local-global analysis in systems than the other options, t-SNE (t-distributed stochastic neighbor
Page 8 | 2307.02502#26 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics | The advancement in generative AI could be boosted with more accessible
mathematics. Beyond human-AI chat, large language models (LLMs) are emerging in
programming, algorithm discovery, and theorem proving, yet their genomics
application is limited. This project introduces Math Agents and mathematical
embedding as fresh entries to the "Moore's Law of Mathematics", using a
GPT-based workflow to convert equations from literature into LaTeX and Python
formats. While many digital equation representations exist, there's a lack of
automated large-scale evaluation tools. LLMs are pivotal as linguistic user
interfaces, providing natural language access for human-AI chat and formal
languages for large-scale AI-assisted computational infrastructure. Given the
infinite formal possibility spaces, Math Agents, which interact with math,
could potentially shift us from "big data" to "big math". Math, unlike the more
flexible natural language, has properties subject to proof, enabling its use
beyond traditional applications like high-validation math-certified icons for
AI alignment aims. This project aims to use Math Agents and mathematical
embeddings to address the ageing issue in information systems biology by
applying multiscalar physics mathematics to disease models and genomic data.
Generative AI with episodic memory could help analyse causal relations in
longitudinal health records, using SIR Precision Health models. Genomic data is
suggested for addressing the unsolved Alzheimer's disease problem. | http://arxiv.org/pdf/2307.02502 | Melanie Swan, Takashi Kido, Eric Roland, Renato P. dos Santos | q-bio.OT, cs.AI, cs.CL, 68R12, I.2; J.3 | null | null | q-bio.OT | 20230704 | 20230704 | [
{
"id": "1601.00257"
},
{
"id": "2306.15626"
},
{
"id": "2306.09328"
},
{
"id": "2306.15794"
},
{
"id": "2107.05151"
}
] |
Since agents in kitchen scenes usually deal with complex cooking instructions in more steps, the performance of current large models is lower than in other room types. Meanwhile, the poor performance of LLaVA reflects the fact that the overall scene information cannot be represented by a single image in the visual question answering task, and the insufficient scene information leads to a low success rate of task planning. The success rate of LLaMA is far below other methods, which cannot even succeed in completing tasks in the kitchen and bathroom scenes without instruction finetuning. Figure 2 illustrates the percentage of failure cases in embodied task planning for different large models. Counterfactuals represent that the generated actions violate the physical rules in the real world, and hallucinations mean the actions aim to interact with objects that are not in the scene. TaPA is embedded with more expert knowledge in embodied task planning after instruction finetuning, which has the lowest percentage of counterfactual occurrences. Moreover, TaPA can better understand the list of input objects, with a 26.7% (40.0% vs. 13.3%) and 5.0% (18.3% vs. 13.3%) decrease in the percentage of hallucination cases compared to LLaVA and GPT-3.5 respectively. | 2307.01848#27 | Embodied Task Planning with Large Language Models | Equipping embodied agents with commonsense is important for robots to
successfully complete complex human instructions in general environments.
Recent large language models (LLM) can embed rich semantic knowledge for agents
in plan generation of complex tasks, while they lack the information about the
realistic world and usually yield infeasible action sequences. In this paper,
we propose a TAsk Planing Agent (TaPA) in embodied tasks for grounded planning
with physical scene constraint, where the agent generates executable plans
according to the existing objects in the scene by aligning LLMs with the visual
perception models. Specifically, we first construct a multimodal dataset
containing triplets of indoor scenes, instructions and action plans, where we
provide the designed prompts and the list of existing objects in the scene for
GPT-3.5 to generate a large number of instructions and corresponding planned
actions. The generated data is leveraged for grounded plan tuning of
pre-trained LLMs. During inference, we discover the objects in the scene by
extending open-vocabulary object detectors to multi-view RGB images collected
in different achievable locations. Experimental results show that the generated
plan from our TaPA framework can achieve higher success rate than LLaVA and
GPT-3.5 by a sizable margin, which indicates the practicality of embodied task
planning in general and complex environments. | http://arxiv.org/pdf/2307.01848 | Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, Haibin Yan | cs.CV, cs.AI, cs.RO | Project Page: https://gary3410.github.io/TaPA | null | cs.CV | 20230704 | 20230704 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1712.05474"
},
{
"id": "2302.04761"
},
{
"id": "2306.08640"
},
{
"id": "2112.12143"
},
{
"id": "2305.15964"
},
{
"id": "2304.14178"
},
{
"id": "2203.12601"
},
{
"id": "2304.08485"
},
{
"id": "2305.15072"
},
{
"id": "2304.10592"
},
{
"id": "2305.05658"
},
{
"id": "2301.12597"
},
{
"id": "2305.18279"
},
{
"id": "2106.00188"
},
{
"id": "2305.03726"
},
{
"id": "2303.16199"
},
{
"id": "2304.03277"
},
{
"id": "2106.09685"
},
{
"id": "2305.04160"
},
{
"id": "2006.07185"
},
{
"id": "2303.12712"
},
{
"id": "2305.03716"
},
{
"id": "2305.16103"
},
{
"id": "2212.04088"
},
{
"id": "2306.09093"
},
{
"id": "2306.00890"
}
] |
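The chunk above quotes hallucination-rate decreases of 26.7% and 5.0% relative to LLaVA and GPT-3.5. Those figures follow from simple subtraction; the three rates (LLaVA 40.0%, GPT-3.5 18.3%, TaPA 13.3%) are taken from the text, and everything else here is arithmetic.

```python
# Hallucination rates (%) quoted in the chunk; decreases are relative to TaPA.
rates = {"LLaVA": 40.0, "GPT-3.5": 18.3, "TaPA": 13.3}

decreases = {name: round(rate - rates["TaPA"], 1)
             for name, rate in rates.items() if name != "TaPA"}
print(decreases)  # matches the 26.7% and 5.0% decreases reported in the text
```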
2307.02502 | 27 |
embedding) and PCA (principal components analysis). UMAP incorporates the global structure of the data by preserving the broad layout of points in addition to small-scale relationships. The local-global structural features of the mathematical embedding suggest a parallel embodiment to the physics mathematics for modeling holographic and information-theoretic entropy formulations in the analysis of near-far correlations in systems such as transposable element dynamics influencing insertion-deletion activity in the genome. | 2307.02502#27 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics | The advancement in generative AI could be boosted with more accessible
mathematics. Beyond human-AI chat, large language models (LLMs) are emerging in
programming, algorithm discovery, and theorem proving, yet their genomics
application is limited. This project introduces Math Agents and mathematical
embedding as fresh entries to the "Moore's Law of Mathematics", using a
GPT-based workflow to convert equations from literature into LaTeX and Python
formats. While many digital equation representations exist, there's a lack of
automated large-scale evaluation tools. LLMs are pivotal as linguistic user
interfaces, providing natural language access for human-AI chat and formal
languages for large-scale AI-assisted computational infrastructure. Given the
infinite formal possibility spaces, Math Agents, which interact with math,
could potentially shift us from "big data" to "big math". Math, unlike the more
flexible natural language, has properties subject to proof, enabling its use
beyond traditional applications like high-validation math-certified icons for
AI alignment aims. This project aims to use Math Agents and mathematical
embeddings to address the ageing issue in information systems biology by
applying multiscalar physics mathematics to disease models and genomic data.
Generative AI with episodic memory could help analyse causal relations in
longitudinal health records, using SIR Precision Health models. Genomic data is
suggested for addressing the unsolved Alzheimer's disease problem. | http://arxiv.org/pdf/2307.02502 | Melanie Swan, Takashi Kido, Eric Roland, Renato P. dos Santos | q-bio.OT, cs.AI, cs.CL, 68R12, I.2; J.3 | null | null | q-bio.OT | 20230704 | 20230704 | [
{
"id": "1601.00257"
},
{
"id": "2306.15626"
},
{
"id": "2306.09328"
},
{
"id": "2306.15794"
},
{
"id": "2107.05151"
}
] |
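The chunk above contrasts UMAP (which preserves global layout) with PCA for projecting mathematical embeddings. A minimal pure-Python sketch of PCA's core step — finding the dominant principal axis of centered vectors via power iteration on the covariance matrix. The toy 3-D "embeddings" and the function name are illustrative, not the paper's data or code.

```python
import math
import random

def first_principal_axis(vectors, iters=200, seed=0):
    """Dominant eigenvector of the sample covariance, via power iteration."""
    n, d = len(vectors), len(vectors[0])
    # Center the data.
    mean = [sum(v[j] for v in vectors) / n for j in range(d)]
    X = [[v[j] - mean[j] for j in range(d)] for v in vectors]
    # Sample covariance matrix (d x d).
    C = [[sum(X[i][a] * X[i][b] for i in range(n)) / n for b in range(d)]
         for a in range(d)]
    # Power iteration: repeatedly apply C and renormalise.
    rng = random.Random(seed)
    w = [rng.random() for _ in range(d)]
    for _ in range(iters):
        w = [sum(C[a][b] * w[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(x * x for x in w))
        w = [x / norm for x in w]
    return w

# Toy data: points spread almost entirely along one direction.
data = [(t, 0.1 * t, 0.01 * t) for t in range(-5, 6)]
axis = first_principal_axis([list(v) for v in data])
print([round(abs(x), 2) for x in axis])
```

Full PCA projects onto the top k such axes; UMAP instead optimises a neighbourhood graph, which is why it keeps both local and broad structure as the chunk notes.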
We also investigate the effectiveness of different image collection strategies that perceive the scene information by acquiring the list of existing objects. Specifically, we employ location selection criteria including random positions, traversal positions, the overall center point, and block-wise center points, with hyperparameters including the grid size and the sampling ratio for random positions; we also vary the unit angle of camera rotation. The success rate of different image collection strategies is demonstrated in Table 3. We also show the number of collected images for the various criteria to reveal the collection and computational cost. For the traversal positions, reducing the grid
Qualitative results, Living room:

User: I want to watch TV
TaPA:
Step 1. Move closer to the television set
Step 2. Grasp the TV remote control
Step 3. Turn on the TV
Step 4. Adjust the volume to a suitable level
Step 5. Select a TV show or movie
Step 6. Watch TV

User: I want to watch TV
GPT-3.5:
Step 1. Move closer to the TV
Step 2. Grasp the remote control
Step 3. Rotate the remote control to point at the TV
Step 4. Press the power button to turn on the remote control

User: I want to watch TV
LLaVA: | 2307.01848#28 | Embodied Task Planning with Large Language Models | Equipping embodied agents with commonsense is important for robots to
successfully complete complex human instructions in general environments.
Recent large language models (LLM) can embed rich semantic knowledge for agents
in plan generation of complex tasks, while they lack the information about the
realistic world and usually yield infeasible action sequences. In this paper,
we propose a TAsk Planing Agent (TaPA) in embodied tasks for grounded planning
with physical scene constraint, where the agent generates executable plans
according to the existing objects in the scene by aligning LLMs with the visual
perception models. Specifically, we first construct a multimodal dataset
containing triplets of indoor scenes, instructions and action plans, where we
provide the designed prompts and the list of existing objects in the scene for
GPT-3.5 to generate a large number of instructions and corresponding planned
actions. The generated data is leveraged for grounded plan tuning of
pre-trained LLMs. During inference, we discover the objects in the scene by
extending open-vocabulary object detectors to multi-view RGB images collected
in different achievable locations. Experimental results show that the generated
plan from our TaPA framework can achieve higher success rate than LLaVA and
GPT-3.5 by a sizable margin, which indicates the practicality of embodied task
planning in general and complex environments. | http://arxiv.org/pdf/2307.01848 | Zhenyu Wu, Ziwei Wang, Xiuwei Xu, Jiwen Lu, Haibin Yan | cs.CV, cs.AI, cs.RO | Project Page: https://gary3410.github.io/TaPA | null | cs.CV | 20230704 | 20230704 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "1712.05474"
},
{
"id": "2302.04761"
},
{
"id": "2306.08640"
},
{
"id": "2112.12143"
},
{
"id": "2305.15964"
},
{
"id": "2304.14178"
},
{
"id": "2203.12601"
},
{
"id": "2304.08485"
},
{
"id": "2305.15072"
},
{
"id": "2304.10592"
},
{
"id": "2305.05658"
},
{
"id": "2301.12597"
},
{
"id": "2305.18279"
},
{
"id": "2106.00188"
},
{
"id": "2305.03726"
},
{
"id": "2303.16199"
},
{
"id": "2304.03277"
},
{
"id": "2106.09685"
},
{
"id": "2305.04160"
},
{
"id": "2006.07185"
},
{
"id": "2303.12712"
},
{
"id": "2305.03716"
},
{
"id": "2305.16103"
},
{
"id": "2212.04088"
},
{
"id": "2306.09093"
},
{
"id": "2306.00890"
}
] |
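The chunk above lists location-selection strategies for multi-view image collection (traversal grids, block-wise center points, and a unit angle of camera rotation). A minimal sketch of those geometric ideas, assuming an axis-aligned rectangular floor plan; the function names and parameters are illustrative, not the paper's API.

```python
def traversal_positions(width, depth, grid_size):
    """All grid intersections visited when traversing the room."""
    xs = [i * grid_size for i in range(int(width // grid_size) + 1)]
    ys = [j * grid_size for j in range(int(depth // grid_size) + 1)]
    return [(x, y) for x in xs for y in ys]

def blockwise_centers(width, depth, blocks_x, blocks_y):
    """Center point of each block when the room is split into blocks."""
    bw, bd = width / blocks_x, depth / blocks_y
    return [((i + 0.5) * bw, (j + 0.5) * bd)
            for i in range(blocks_x) for j in range(blocks_y)]

def camera_yaws(unit_angle):
    """Yaw angles (degrees) for one full rotation at a given unit angle."""
    return [k * unit_angle for k in range(int(360 // unit_angle))]

print(len(traversal_positions(4.0, 4.0, 1.0)))  # 5 x 5 grid of positions
print(blockwise_centers(4.0, 4.0, 2, 2))
print(camera_yaws(60))
```

Shrinking the grid size or the unit angle multiplies the number of collected images, which is the collection-vs-cost trade-off the chunk's Table 3 reportedly quantifies.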
Results. In the project, 10,592 total embeddings were produced from the mathscapes in 10 papers and 276 unique genomic variants implicated as being associated with Alzheimer's Disease. The data corpus is 4 multiscalar mathematics physics papers (3 AdS/CFT correspondence papers and 1 Chern-Simons theory paper), 5 Alzheimer's disease mathematics papers, and one control example, with SIR compartmental model mathematics. The 3 AdS/CFT papers were selected out of a population of 30 identified AdS/CFT mathematics papers as those elaborating a specific "Applied AdS/CFT" use case. The Chern-Simons paper applies Chern-Simons theory to DNA-RNA host-virus interaction, which is relevant as host-virus interaction is likewise implicated in triggering transposable element movement in Alzheimer's genomics. The 5 Alzheimer's disease mathematics papers selected are all of those with a substantial identifiable mathematical model.
Figure 7. Equation Clusters in Mathematical Embeddings (a) AdS/CFT and Chern-Simons (b) Alzheimer's Disease.
(a) AdS/CFT and Chern-Simons (yellow) (b) Alzheimer's Disease Mathematics [scatter-plot panels not reproducible in text] | 2307.02502#28 | Math Agents: Computational Infrastructure, Mathematical Embedding, and Genomics | The advancement in generative AI could be boosted with more accessible
mathematics. Beyond human-AI chat, large language models (LLMs) are emerging in
programming, algorithm discovery, and theorem proving, yet their genomics
application is limited. This project introduces Math Agents and mathematical
embedding as fresh entries to the "Moore's Law of Mathematics", using a
GPT-based workflow to convert equations from literature into LaTeX and Python
formats. While many digital equation representations exist, there's a lack of
automated large-scale evaluation tools. LLMs are pivotal as linguistic user
interfaces, providing natural language access for human-AI chat and formal
languages for large-scale AI-assisted computational infrastructure. Given the
infinite formal possibility spaces, Math Agents, which interact with math,
could potentially shift us from "big data" to "big math". Math, unlike the more
flexible natural language, has properties subject to proof, enabling its use
beyond traditional applications like high-validation math-certified icons for
AI alignment aims. This project aims to use Math Agents and mathematical
embeddings to address the ageing issue in information systems biology by
applying multiscalar physics mathematics to disease models and genomic data.
Generative AI with episodic memory could help analyse causal relations in
longitudinal health records, using SIR Precision Health models. Genomic data is
suggested for addressing the unsolved Alzheimer's disease problem. | http://arxiv.org/pdf/2307.02502 | Melanie Swan, Takashi Kido, Eric Roland, Renato P. dos Santos | q-bio.OT, cs.AI, cs.CL, 68R12, I.2; J.3 | null | null | q-bio.OT | 20230704 | 20230704 | [
{
"id": "1601.00257"
},
{
"id": "2306.15626"
},
{
"id": "2306.09328"
},
{
"id": "2306.15794"
},
{
"id": "2107.05151"
}
] |
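Figure 7 above shows equation clusters in the embedding space. A minimal sketch of one standard way such clusters can be found — plain k-means on toy 2-D "embedding" vectors. The data, the number of clusters, and the initial centers are illustrative assumptions, not the paper's method or data.

```python
import math

def kmeans(points, centers, iters=20):
    """Plain Lloyd's algorithm; returns final centers and point groups."""
    groups = [[] for _ in centers]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        groups = [[] for _ in centers]
        for p in points:
            j = min(range(len(centers)),
                    key=lambda c: math.dist(p, centers[c]))
            groups[j].append(p)
        # Update step: move each non-empty center to the mean of its group.
        centers = [tuple(sum(q[d] for q in g) / len(g) for d in range(2))
                   if g else c
                   for g, c in zip(groups, centers)]
    return centers, groups

pts = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1),   # cluster near the origin
       (5.0, 5.0), (5.1, 4.9), (4.9, 5.1)]   # cluster near (5, 5)
centers, groups = kmeans(pts, centers=[(0.0, 0.0), (1.0, 1.0)])
print(sorted(len(g) for g in groups))
```

On well-separated toy data like this, k-means recovers the two visible groups; on real equation embeddings, cluster structure like that in Figure 7 is usually inspected after a UMAP or PCA projection.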