doi stringlengths (10-10) | chunk-id int64 (0-936) | chunk stringlengths (401-2.02k) | id stringlengths (12-14) | title stringlengths (8-162) | summary stringlengths (228-1.92k) | source stringlengths (31-31) | authors stringlengths (7-6.97k) | categories stringlengths (5-107) | comment stringlengths (4-398) ⌀ | journal_ref stringlengths (8-194) ⌀ | primary_category stringlengths (5-17) | published stringlengths (8-8) | updated stringlengths (8-8) | references list |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
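The rows that follow are text chunks of arXiv papers together with their metadata and the arXiv IDs they cite. As a minimal, hedged sketch of how a table with this schema might be inspected, assuming pandas and a hypothetical parquet file name (neither is specified by the dataset itself):

```python
import pandas as pd

# Hypothetical file name; the columns mirror the schema in the header above.
df = pd.read_parquet("arxiv_paper_chunks.parquet")

print(df.columns.tolist())
# ['doi', 'chunk-id', 'chunk', 'id', 'title', 'summary', 'source', 'authors',
#  'categories', 'comment', 'journal_ref', 'primary_category', 'published',
#  'updated', 'references']

# Each row is one text chunk of a paper; reassemble a paper by sorting its chunks.
paper = df[df["doi"] == "2308.13149"].sort_values("chunk-id")
full_text = "\n".join(paper["chunk"])

# 'references' holds a list of cited arXiv IDs per chunk; count them per paper.
refs_per_paper = df.groupby("doi")["references"].first().apply(len)
print(refs_per_paper)
```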
2308.13149 | 49 | 62.16 92.94 84.57 85.11 80.45 66.36 64.80 54.52 56.78 54.41 48.49 39.40 37.08 21.15 30.24 52.45 24.04 10.91 38.41 53.89 79.82 31.74 46.11 49.71 28.87 60.42 52.97 68.79 52.86 55.84 51.42 42.16 42.59 33.01 39.19 37.23 33.60 31.63 17.11 17.53 92.65 87.50 89.22 85.05 73.53 71.08 57.60 61.76 62.74 55.39 42.40 41.18 17.89 93.10 82.76 93.10 82.76 65.52 62.07 65.51 62.07 82.76 79.31 68.96 58.62 41.38 53.70 37.66 40.44 38.62 32.76 34.49 19.60 31.22 31.03 28.63 26.51 9.89 13.16 | 2308.13149#49 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | Recently, there has been growing interest in using Large Language Models
(LLMs) for scientific research. Numerous benchmarks have been proposed to
evaluate the ability of LLMs for scientific research. However, current
benchmarks are mostly based on pre-collected objective questions. This design
suffers from data leakage problem and lacks the evaluation of subjective Q/A
ability. In this paper, we propose SciEval, a comprehensive and
multi-disciplinary evaluation benchmark to address these issues. Based on
Bloom's taxonomy, SciEval covers four dimensions to systematically evaluate
scientific research ability. In particular, we design a "dynamic" subset based
on scientific principles to prevent evaluation from potential data leakage.
Both objective and subjective questions are included in SciEval. These
characteristics make SciEval a more effective benchmark for scientific research
ability evaluation of LLMs. Comprehensive experiments on most advanced LLMs
show that, although GPT-4 achieves SOTA performance compared to other LLMs,
there is still substantial room for improvement, especially for dynamic
questions. The data and codes are now publicly available. | http://arxiv.org/pdf/2308.13149 | Liangtai Sun, Yang Han, Zihan Zhao, Da Ma, Zhennan Shen, Baocai Chen, Lu Chen, Kai Yu | cs.CL | 12 pages, 17 figures, 12 tables. Under Review | null | cs.CL | 20230825 | 20230825 | [
{
"id": "2307.03109"
},
{
"id": "2302.13971"
},
{
"id": "2306.05685"
},
{
"id": "2305.18365"
},
{
"id": "2304.06364"
},
{
"id": "2206.04615"
},
{
"id": "2103.03874"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2303.08774"
},
{
"id": "2009.03300"
},
{
"id": "1909.06146"
},
{
"id": "2307.10635"
}
] |
2308.13149 | 51 | Model BK Biology KA SC RA BK Chemistry KA SC BK Physics KA SC 93.57â GPT-4 89.52â GPT-3.5-turbo 61.05â Galactica-30B 79.15â Vicuna-13B Galactica-6.7B 53.59â ChatGLM2-6B 64.99â 55.39â ChatGLM-6B 53.53â Alpaca-7B 50.47â MOSS-16B 41.86â LLaMa-13B 28.42â LLaMa-7B 78.95â 65.18â 38.22â 44.29â 30.77â 34.90â 31.26â 32.87â 29.88â 20.89â 15.38â 88.39â 81.65â 51.31â 65.54â 47.19â 53.93â 43.82â 44.57â 40.82â 34.08â 24.72â 66.63â 58.04â 67.08â 56.58â 69.53â 57.92â 51.67â 60.16â 60.82â 70.31â 64.51â | 2308.13149#51 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | Recently, there has been growing interest in using Large Language Models
(LLMs) for scientific research. Numerous benchmarks have been proposed to
evaluate the ability of LLMs for scientific research. However, current
benchmarks are mostly based on pre-collected objective questions. This design
suffers from data leakage problem and lacks the evaluation of subjective Q/A
ability. In this paper, we propose SciEval, a comprehensive and
multi-disciplinary evaluation benchmark to address these issues. Based on
Bloom's taxonomy, SciEval covers four dimensions to systematically evaluate
scientific research ability. In particular, we design a "dynamic" subset based
on scientific principles to prevent evaluation from potential data leakage.
Both objective and subjective questions are included in SciEval. These
characteristics make SciEval a more effective benchmark for scientific research
ability evaluation of LLMs. Comprehensive experiments on most advanced LLMs
show that, although GPT-4 achieves SOTA performance compared to other LLMs,
there is still substantial room for improvement, especially for dynamic
questions. The data and codes are now publicly available. | http://arxiv.org/pdf/2308.13149 | Liangtai Sun, Yang Han, Zihan Zhao, Da Ma, Zhennan Shen, Baocai Chen, Lu Chen, Kai Yu | cs.CL | 12 pages, 17 figures, 12 tables. Under Review | null | cs.CL | 20230825 | 20230825 | [
{
"id": "2307.03109"
},
{
"id": "2302.13971"
},
{
"id": "2306.05685"
},
{
"id": "2305.18365"
},
{
"id": "2304.06364"
},
{
"id": "2206.04615"
},
{
"id": "2103.03874"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2303.08774"
},
{
"id": "2009.03300"
},
{
"id": "1909.06146"
},
{
"id": "2307.10635"
}
] |
2308.13149 | 52 | 67.08â 56.58â 69.53â 57.92â 51.67â 60.16â 60.82â 70.31â 64.51â 92.52â 83.54â 46.77â 64.03â 44.10â 53.46â 44.67â 44.48â 39.56â 33.07â 23.82â 54.08â 24.76â 32.27â 35.27â 22.86â 36.51â 26.84â 33.38â 12.67â 2.03â 18.88â 77.46â 66.99â 27.05â 42.13â 23.98â 39.02â 32.58â 32.61â 31.96â 20.77â 18.81â 92.65â¼ 93.10â¼ 71.18â 60.33â 93.10â 84.56â 22.48â 65.52â 54.17â 46.01â 72.41â 75.00â 13.21â 58.62â 46.08â 36.02â 65.52â 58.33â 28.63â 65.52â 51.22â 27.66â | 2308.13149#52 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | Recently, there has been growing interest in using Large Language Models
(LLMs) for scientific research. Numerous benchmarks have been proposed to
evaluate the ability of LLMs for scientific research. However, current
benchmarks are mostly based on pre-collected objective questions. This design
suffers from data leakage problem and lacks the evaluation of subjective Q/A
ability. In this paper, we propose SciEval, a comprehensive and
multi-disciplinary evaluation benchmark to address these issues. Based on
Bloom's taxonomy, SciEval covers four dimensions to systematically evaluate
scientific research ability. In particular, we design a "dynamic" subset based
on scientific principles to prevent evaluation from potential data leakage.
Both objective and subjective questions are included in SciEval. These
characteristics make SciEval a more effective benchmark for scientific research
ability evaluation of LLMs. Comprehensive experiments on most advanced LLMs
show that, although GPT-4 achieves SOTA performance compared to other LLMs,
there is still substantial room for improvement, especially for dynamic
questions. The data and codes are now publicly available. | http://arxiv.org/pdf/2308.13149 | Liangtai Sun, Yang Han, Zihan Zhao, Da Ma, Zhennan Shen, Baocai Chen, Lu Chen, Kai Yu | cs.CL | 12 pages, 17 figures, 12 tables. Under Review | null | cs.CL | 20230825 | 20230825 | [
{
"id": "2307.03109"
},
{
"id": "2302.13971"
},
{
"id": "2306.05685"
},
{
"id": "2305.18365"
},
{
"id": "2304.06364"
},
{
"id": "2206.04615"
},
{
"id": "2103.03874"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2303.08774"
},
{
"id": "2009.03300"
},
{
"id": "1909.06146"
},
{
"id": "2307.10635"
}
] |
2308.13149 | 54 | Table 10: Detailed Model Performances of Chain-of-Thought setting across different knowledge domains on Static Data. ↑ means the performance is slightly better than that under Answer-Only setting, ↓ means the performance is worse, and ∼ means the performance is nearly the same.
A population of trout lives in a small lake. Some of the trout have a mutation that makes them more colorful. What are some reasons this population is not at Hardy-Weinberg equilibrium? A. No sexual dimorphism, constant population size, no selection, non-overlapping generations B. No sexual reproduction, equal allele frequencies, non-diploid organisms, no migration C. Infinitely large population, no overlapping generations, no mutations, random mating D. Not infinitely large population, overlapping generations, mutations present, non-random mating Answer: D
The bones of a prehistoric man found in the desert of New Mexico contain approximately 5% of the original amount of carbon 14. If the half-life of carbon 14 is 5600 years, approximately how long ago did the man die?
A. 7523 years B. 10412 years
C. 9350 years D. 8678.5 years
Answer: D
Figure 8: A biology example of Scientific Calculation domain in Static Data.
Figure 7: A biology example of Knowledge Application domain in Static Data. | 2308.13149#54 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | Recently, there has been growing interest in using Large Language Models
(LLMs) for scientific research. Numerous benchmarks have been proposed to
evaluate the ability of LLMs for scientific research. However, current
benchmarks are mostly based on pre-collected objective questions. This design
suffers from data leakage problem and lacks the evaluation of subjective Q/A
ability. In this paper, we propose SciEval, a comprehensive and
multi-disciplinary evaluation benchmark to address these issues. Based on
Bloom's taxonomy, SciEval covers four dimensions to systematically evaluate
scientific research ability. In particular, we design a "dynamic" subset based
on scientific principles to prevent evaluation from potential data leakage.
Both objective and subjective questions are included in SciEval. These
characteristics make SciEval a more effective benchmark for scientific research
ability evaluation of LLMs. Comprehensive experiments on most advanced LLMs
show that, although GPT-4 achieves SOTA performance compared to other LLMs,
there is still substantial room for improvement, especially for dynamic
questions. The data and codes are now publicly available. | http://arxiv.org/pdf/2308.13149 | Liangtai Sun, Yang Han, Zihan Zhao, Da Ma, Zhennan Shen, Baocai Chen, Lu Chen, Kai Yu | cs.CL | 12 pages, 17 figures, 12 tables. Under Review | null | cs.CL | 20230825 | 20230825 | [
{
"id": "2307.03109"
},
{
"id": "2302.13971"
},
{
"id": "2306.05685"
},
{
"id": "2305.18365"
},
{
"id": "2304.06364"
},
{
"id": "2206.04615"
},
{
"id": "2103.03874"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2303.08774"
},
{
"id": "2009.03300"
},
{
"id": "1909.06146"
},
{
"id": "2307.10635"
}
] |
2308.13149 | 55 | Model BK Biology KA SC RA BK Chemistry KA SC BK Physics KA SC 94.97â GPT-4 90.82â GPT-3.5-turbo 76.45â Galactica-30B 79.41â Vicuna-13B Galactica-6.7B 64.83â ChatGLM2-6B 72.10â 61.51â ChatGLM-6B 65.82â Alpaca-7B 54.20â MOSS-16B 64.00â LLaMa-13B 37.14â LLaMa-7B 81.62â 62.19â 41.30â 44.37â 33.60â 36.03â 32.23â 35.71â 29.80â 32.39â 29.15â 91.01â 80.52â 66.67â 67.04â 51.31â 57.68â 56.55â 57.30â 43.07â 48.69â 34.46â 78.01â 61.72â 84.11â 55.36â 70.98â 65.29â 53.68â 70.76â 60.60â 35.16â 49.44â 93.16â 84.84â 67.05â 64.64â | 2308.13149#55 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | Recently, there has been growing interest in using Large Language Models
(LLMs) for scientific research. Numerous benchmarks have been proposed to
evaluate the ability of LLMs for scientific research. However, current
benchmarks are mostly based on pre-collected objective questions. This design
suffers from data leakage problem and lacks the evaluation of subjective Q/A
ability. In this paper, we propose SciEval, a comprehensive and
multi-disciplinary evaluation benchmark to address these issues. Based on
Bloom's taxonomy, SciEval covers four dimensions to systematically evaluate
scientific research ability. In particular, we design a "dynamic" subset based
on scientific principles to prevent evaluation from potential data leakage.
Both objective and subjective questions are included in SciEval. These
characteristics make SciEval a more effective benchmark for scientific research
ability evaluation of LLMs. Comprehensive experiments on most advanced LLMs
show that, although GPT-4 achieves SOTA performance compared to other LLMs,
there is still substantial room for improvement, especially for dynamic
questions. The data and codes are now publicly available. | http://arxiv.org/pdf/2308.13149 | Liangtai Sun, Yang Han, Zihan Zhao, Da Ma, Zhennan Shen, Baocai Chen, Lu Chen, Kai Yu | cs.CL | 12 pages, 17 figures, 12 tables. Under Review | null | cs.CL | 20230825 | 20230825 | [
{
"id": "2307.03109"
},
{
"id": "2302.13971"
},
{
"id": "2306.05685"
},
{
"id": "2305.18365"
},
{
"id": "2304.06364"
},
{
"id": "2206.04615"
},
{
"id": "2103.03874"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2303.08774"
},
{
"id": "2009.03300"
},
{
"id": "1909.06146"
},
{
"id": "2307.10635"
}
] |
2308.13149 | 56 | 70.76â 60.60â 35.16â 49.44â 93.16â 84.84â 67.05â 64.64â 53.34â 58.15â 51.97â 47.46â 41.62â 40.93â 33.68â 66.23â 69.24â 31.29â 9.93â 67.08â 18.62â 53.49â 60.48â 58.52â 61.53â 58.13â 71.18â 52.57â 40.14â 45.36â 32.68â 39.12â 34.80â 33.40â 30.49â 31.01â 26.46â 93.14â 88.24â 69.36â 70.59â 59.31â 64.70â 64.22â 56.37â 42.65â 47.55â 30.64â | 2308.13149#56 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | Recently, there has been growing interest in using Large Language Models
(LLMs) for scientific research. Numerous benchmarks have been proposed to
evaluate the ability of LLMs for scientific research. However, current
benchmarks are mostly based on pre-collected objective questions. This design
suffers from data leakage problem and lacks the evaluation of subjective Q/A
ability. In this paper, we propose SciEval, a comprehensive and
multi-disciplinary evaluation benchmark to address these issues. Based on
Bloom's taxonomy, SciEval covers four dimensions to systematically evaluate
scientific research ability. In particular, we design a "dynamic" subset based
on scientific principles to prevent evaluation from potential data leakage.
Both objective and subjective questions are included in SciEval. These
characteristics make SciEval a more effective benchmark for scientific research
ability evaluation of LLMs. Comprehensive experiments on most advanced LLMs
show that, although GPT-4 achieves SOTA performance compared to other LLMs,
there is still substantial room for improvement, especially for dynamic
questions. The data and codes are now publicly available. | http://arxiv.org/pdf/2308.13149 | Liangtai Sun, Yang Han, Zihan Zhao, Da Ma, Zhennan Shen, Baocai Chen, Lu Chen, Kai Yu | cs.CL | 12 pages, 17 figures, 12 tables. Under Review | null | cs.CL | 20230825 | 20230825 | [
{
"id": "2307.03109"
},
{
"id": "2302.13971"
},
{
"id": "2306.05685"
},
{
"id": "2305.18365"
},
{
"id": "2304.06364"
},
{
"id": "2206.04615"
},
{
"id": "2103.03874"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2303.08774"
},
{
"id": "2009.03300"
},
{
"id": "1909.06146"
},
{
"id": "2307.10635"
}
] |
2308.13149 | 57 | Table 11: Detailed Model Performances of 3-Shot setting across different knowledge domains on Static Data.
1 Biology 2 3 4 1 Chemistry 3 2 4 1 2 Physics 3 4 Avg 95 90 90 97.5 90 0 92 90 84 82 76 60 100 90 85 95 85 20 100 100 97.5 95 98.33 72 96.25 90.62 81.25 93.75 15 15 88 88 88 95 66 60 72.5 80 80 70 50 30 95 90 88 90 65 36 99 99 92 97 78 32 97.14 95.71 93.57 94.28 61.43 25.71 98.57 87.14 90.71 87.14 0 28.57 86.25 58.75 58.75 53.33 48.75 25 93.31 88.27 85.73 87.5 61.12 33.69
Table 12: Detailed scores model reached in each experiment. GPT-series models and Claude-series models achieve a good performance. | 2308.13149#57 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | Recently, there has been growing interest in using Large Language Models
(LLMs) for scientific research. Numerous benchmarks have been proposed to
evaluate the ability of LLMs for scientific research. However, current
benchmarks are mostly based on pre-collected objective questions. This design
suffers from data leakage problem and lacks the evaluation of subjective Q/A
ability. In this paper, we propose SciEval, a comprehensive and
multi-disciplinary evaluation benchmark to address these issues. Based on
Bloom's taxonomy, SciEval covers four dimensions to systematically evaluate
scientific research ability. In particular, we design a "dynamic" subset based
on scientific principles to prevent evaluation from potential data leakage.
Both objective and subjective questions are included in SciEval. These
characteristics make SciEval a more effective benchmark for scientific research
ability evaluation of LLMs. Comprehensive experiments on most advanced LLMs
show that, although GPT-4 achieves SOTA performance compared to other LLMs,
there is still substantial room for improvement, especially for dynamic
questions. The data and codes are now publicly available. | http://arxiv.org/pdf/2308.13149 | Liangtai Sun, Yang Han, Zihan Zhao, Da Ma, Zhennan Shen, Baocai Chen, Lu Chen, Kai Yu | cs.CL | 12 pages, 17 figures, 12 tables. Under Review | null | cs.CL | 20230825 | 20230825 | [
{
"id": "2307.03109"
},
{
"id": "2302.13971"
},
{
"id": "2306.05685"
},
{
"id": "2305.18365"
},
{
"id": "2304.06364"
},
{
"id": "2206.04615"
},
{
"id": "2103.03874"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2303.08774"
},
{
"id": "2009.03300"
},
{
"id": "1909.06146"
},
{
"id": "2307.10635"
}
] |
2308.13149 | 58 | Table 12: Detailed scores model reached in each experiment. GPT-series models and Claude-series models achieve a good performance.
To investigate the role of human T-lymphotrophic virus type I (HTLV-I) infection in four patients who developed slowly progressive myelopathy with abnormal MRI lesions in the cervical cord levels.
Clinical and neuroradiologic examinations were performed, and the odds that an HTLV-I-infected individual of specified genotype, age, and provirus load had HTLV-I-associated myelopathy (HAM)/tropical spastic paraparesis (TSP) were calculated.
What is the difference between an alkane, an alkene, and an alkyne?
A. Alkane: double bond; Alkene: single bond; Alkyne: triple bond B. Alkane: single bond; Alkene: double bond; Alkyne: triple bond C. Alkane: triple bond; Alkene: double bond; Alkyne: single bond D. Alkane: single bond; Alkene: triple bond; Alkyne: double bond
# Answer: B | 2308.13149#58 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | Recently, there has been growing interest in using Large Language Models
(LLMs) for scientific research. Numerous benchmarks have been proposed to
evaluate the ability of LLMs for scientific research. However, current
benchmarks are mostly based on pre-collected objective questions. This design
suffers from data leakage problem and lacks the evaluation of subjective Q/A
ability. In this paper, we propose SciEval, a comprehensive and
multi-disciplinary evaluation benchmark to address these issues. Based on
Bloom's taxonomy, SciEval covers four dimensions to systematically evaluate
scientific research ability. In particular, we design a "dynamic" subset based
on scientific principles to prevent evaluation from potential data leakage.
Both objective and subjective questions are included in SciEval. These
characteristics make SciEval a more effective benchmark for scientific research
ability evaluation of LLMs. Comprehensive experiments on most advanced LLMs
show that, although GPT-4 achieves SOTA performance compared to other LLMs,
there is still substantial room for improvement, especially for dynamic
questions. The data and codes are now publicly available. | http://arxiv.org/pdf/2308.13149 | Liangtai Sun, Yang Han, Zihan Zhao, Da Ma, Zhennan Shen, Baocai Chen, Lu Chen, Kai Yu | cs.CL | 12 pages, 17 figures, 12 tables. Under Review | null | cs.CL | 20230825 | 20230825 | [
{
"id": "2307.03109"
},
{
"id": "2302.13971"
},
{
"id": "2306.05685"
},
{
"id": "2305.18365"
},
{
"id": "2304.06364"
},
{
"id": "2206.04615"
},
{
"id": "2103.03874"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2303.08774"
},
{
"id": "2009.03300"
},
{
"id": "1909.06146"
},
{
"id": "2307.10635"
}
] |
2308.13149 | 59 | # Answer: B
Anti-HTLV-I antibodies were positive in both the serum and the CSF in all of the patients. Biopsied sample from spinal cord lesions showed inflammatory changes in Patient 1. Patient 2 had a demyelinating type of sensorimotor polyneuropathy. Two of the three patients examined showed high risk of developing HAM/TSP in virologic and immunologic aspects.
Figure 10: A chemistry example of Basic Knowledge domain in Static Data.
Chronic progressive cervical myelopathy with HTLV-I infection: Variant form of HAM/TSP?
Answer: yes
How would you separate a mixture of alcohol and water?
Figure 9: A biology example of Research Ability domain in Static Data.
A. Freeze the mixture, remove solid water, then melt remaining alcohol.
B. Shake the mixture, let it settle, then remove separated layers.
C. Heat the mixture, collect evaporated alcohol, then collect evaporated water.
D. Filter the mixture through a membrane, then evaporate collected water.
Answer: C
Figure 11: A chemistry example of Knowledge Application domain in Static Data. | 2308.13149#59 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | Recently, there has been growing interest in using Large Language Models
(LLMs) for scientific research. Numerous benchmarks have been proposed to
evaluate the ability of LLMs for scientific research. However, current
benchmarks are mostly based on pre-collected objective questions. This design
suffers from data leakage problem and lacks the evaluation of subjective Q/A
ability. In this paper, we propose SciEval, a comprehensive and
multi-disciplinary evaluation benchmark to address these issues. Based on
Bloom's taxonomy, SciEval covers four dimensions to systematically evaluate
scientific research ability. In particular, we design a "dynamic" subset based
on scientific principles to prevent evaluation from potential data leakage.
Both objective and subjective questions are included in SciEval. These
characteristics make SciEval a more effective benchmark for scientific research
ability evaluation of LLMs. Comprehensive experiments on most advanced LLMs
show that, although GPT-4 achieves SOTA performance compared to other LLMs,
there is still substantial room for improvement, especially for dynamic
questions. The data and codes are now publicly available. | http://arxiv.org/pdf/2308.13149 | Liangtai Sun, Yang Han, Zihan Zhao, Da Ma, Zhennan Shen, Baocai Chen, Lu Chen, Kai Yu | cs.CL | 12 pages, 17 figures, 12 tables. Under Review | null | cs.CL | 20230825 | 20230825 | [
{
"id": "2307.03109"
},
{
"id": "2302.13971"
},
{
"id": "2306.05685"
},
{
"id": "2305.18365"
},
{
"id": "2304.06364"
},
{
"id": "2206.04615"
},
{
"id": "2103.03874"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2303.08774"
},
{
"id": "2009.03300"
},
{
"id": "1909.06146"
},
{
"id": "2307.10635"
}
] |
2308.13149 | 60 | D. Filter the mixture through a membrane, then evaporate collected water.
Answer: C
Figure 11: A chemistry example of Knowledge Application domain in Static Data.
Na3PO4 dissolves in water to produce an electrolyte solution. What is the osmolarity of a 2.0 * 10^(-3) M Na3PO4 solution? A. 8.0 * 10^(-3) osmol L^(-1) B. 6.0 * 10^(-3) osmol L^(-1) C. 12.0 * 10^(-3) osmol L^(-1) D. 2.0 * 10^(-3) osmol L^(-1) Answer: A
Figure 12: A chemistry example of Scientific Calculation domain in Static Data.
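To make the arithmetic behind the Figure 12 example explicit, here is a minimal sketch in Python, assuming ideal, complete dissociation of Na3PO4 into 3 Na+ and 1 PO4^3- (four particles per formula unit):

```python
# Osmolarity = (particles per formula unit) x (molar concentration), assuming ideal behavior.
molarity = 2.0e-3          # mol/L of Na3PO4
particles_per_unit = 4     # 3 Na+ + 1 PO4^3- on complete dissociation
osmolarity = particles_per_unit * molarity
print(f"{osmolarity:.1e} osmol/L")  # 8.0e-03 osmol/L -> option A
```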
How can momentum be decreased? A. Decrease mass or velocity, or transfer momentum through collision. B. Keep mass and velocity constant, avoid collisions. C. Increase mass and velocity, avoid collisions. D. Increase mass, decrease velocity, and avoid collisions. Answer: A
Figure 13: A physics example of Basic Knowledge domain in Static Data. | 2308.13149#60 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | Recently, there has been growing interest in using Large Language Models
(LLMs) for scientific research. Numerous benchmarks have been proposed to
evaluate the ability of LLMs for scientific research. However, current
benchmarks are mostly based on pre-collected objective questions. This design
suffers from data leakage problem and lacks the evaluation of subjective Q/A
ability. In this paper, we propose SciEval, a comprehensive and
multi-disciplinary evaluation benchmark to address these issues. Based on
Bloom's taxonomy, SciEval covers four dimensions to systematically evaluate
scientific research ability. In particular, we design a "dynamic" subset based
on scientific principles to prevent evaluation from potential data leakage.
Both objective and subjective questions are included in SciEval. These
characteristics make SciEval a more effective benchmark for scientific research
ability evaluation of LLMs. Comprehensive experiments on most advanced LLMs
show that, although GPT-4 achieves SOTA performance compared to other LLMs,
there is still substantial room for improvement, especially for dynamic
questions. The data and codes are now publicly available. | http://arxiv.org/pdf/2308.13149 | Liangtai Sun, Yang Han, Zihan Zhao, Da Ma, Zhennan Shen, Baocai Chen, Lu Chen, Kai Yu | cs.CL | 12 pages, 17 figures, 12 tables. Under Review | null | cs.CL | 20230825 | 20230825 | [
{
"id": "2307.03109"
},
{
"id": "2302.13971"
},
{
"id": "2306.05685"
},
{
"id": "2305.18365"
},
{
"id": "2304.06364"
},
{
"id": "2206.04615"
},
{
"id": "2103.03874"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2303.08774"
},
{
"id": "2009.03300"
},
{
"id": "1909.06146"
},
{
"id": "2307.10635"
}
] |
2308.13149 | 61 | Figure 13: A physics example of Basic Knowledge domain in Static Data.
If I run down some stairs and stop, what happens to my kinetic energy and my initial gravitational potential energy? A. Kinetic energy increases; potential energy decreases. B. Kinetic energy becomes zero; potential energy increases. C. Kinetic energy decreases; potential energy becomes zero. D. Kinetic energy becomes zero; potential energy decreases. Answer: D
Figure 14: A physics example of Knowledge Application domain in Static Data.
An object with a mass of 8 kg is traveling in a circular path of a radius of 12 m. If the object's angular velocity changes from 5 Hz to 7 Hz in 6 s, what torque was applied to the object? A. 4825.4 Nm B. 3620.05 Nm C. 2412.7 Nm D. 1206.35 Nm Answer: C
Figure 15: A physics example of Scientific Calculation domain in Static Data.
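The torque value in the Figure 15 example can be reproduced with tau = I * alpha, under the assumption (left implicit by the question) that the object is treated as a point mass on the circular path:

```python
import math

m, r = 8.0, 12.0              # mass (kg), radius of the circular path (m)
f1, f2, dt = 5.0, 7.0, 6.0    # angular velocity before/after (Hz), elapsed time (s)

I = m * r**2                          # moment of inertia of a point mass
alpha = 2 * math.pi * (f2 - f1) / dt  # angular acceleration (rad/s^2)
torque = I * alpha
print(round(torque, 1))               # 2412.7 N*m -> option C
```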
What is the molecular formula of (2R,5S)-5-ethyl-2-methylnonanal? Answer: C12H24O
What is the molecular weight of (3E,6E)-5,5-dimethylocta-1,3,6-triene?
Answer (numerical number): 136.23
Figure 16: Two chemistry examples in Dynamic Data. | 2308.13149#61 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | Recently, there has been growing interest in using Large Language Models
(LLMs) for scientific research. Numerous benchmarks have been proposed to
evaluate the ability of LLMs for scientific research. However, current
benchmarks are mostly based on pre-collected objective questions. This design
suffers from data leakage problem and lacks the evaluation of subjective Q/A
ability. In this paper, we propose SciEval, a comprehensive and
multi-disciplinary evaluation benchmark to address these issues. Based on
Bloom's taxonomy, SciEval covers four dimensions to systematically evaluate
scientific research ability. In particular, we design a "dynamic" subset based
on scientific principles to prevent evaluation from potential data leakage.
Both objective and subjective questions are included in SciEval. These
characteristics make SciEval a more effective benchmark for scientific research
ability evaluation of LLMs. Comprehensive experiments on most advanced LLMs
show that, although GPT-4 achieves SOTA performance compared to other LLMs,
there is still substantial room for improvement, especially for dynamic
questions. The data and codes are now publicly available. | http://arxiv.org/pdf/2308.13149 | Liangtai Sun, Yang Han, Zihan Zhao, Da Ma, Zhennan Shen, Baocai Chen, Lu Chen, Kai Yu | cs.CL | 12 pages, 17 figures, 12 tables. Under Review | null | cs.CL | 20230825 | 20230825 | [
{
"id": "2307.03109"
},
{
"id": "2302.13971"
},
{
"id": "2306.05685"
},
{
"id": "2305.18365"
},
{
"id": "2304.06364"
},
{
"id": "2206.04615"
},
{
"id": "2103.03874"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2303.08774"
},
{
"id": "2009.03300"
},
{
"id": "1909.06146"
},
{
"id": "2307.10635"
}
] |
2308.13149 | 62 | Answer (numerical number): 136.23
Figure 16: Two chemistry examples in Dynamic Data.
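The two chemistry answers referenced by Figure 16 (molecular formula C12H24O and molecular weight 136.23) follow from counting atoms in the IUPAC names. A minimal sketch; the compositions C12H24O and C10H16 are inferred from the names, not stated in the dataset:

```python
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "O": 15.999}

def molecular_weight(counts):
    """Sum atomic weights for a composition dict such as {"C": 10, "H": 16}."""
    return sum(ATOMIC_WEIGHTS[el] * n for el, n in counts.items())

# (2R,5S)-5-ethyl-2-methylnonanal: 9-carbon aldehyde + ethyl + methyl -> C12H24O
# (3E,6E)-5,5-dimethylocta-1,3,6-triene: 8-carbon chain + 2 methyls, 3 C=C -> C10H16
print(round(molecular_weight({"C": 12, "H": 24, "O": 1}), 2))  # 184.32 g/mol for C12H24O
print(round(molecular_weight({"C": 10, "H": 16}), 2))          # 136.24, in line with the 136.23 answer
```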
Calculate the total energy released in the accretion disk of a black hole with a mass of 9 solar masses, a radius of 68 kilometers, and an accretion rate of 0.2 solar masses per year. Assume the disk is made of gas that is 10% hydrogen and 90% helium and has a temperature of 10 million Kelvin. Please note that the following constants might be used in the calculations: gravitational constant: G = 6.674e-11 N-m^2/kg^2; solar mass = 1.989e30 kg; velocity of light: c = 3e8 m/s. Additionally, please select the option from the given choices that you believe is closest to the correct answer! A. 5.13e+38 J B. 6.83e+38 J C. 5.81e+38 J
D. 2.49e+38 J
Answer: D
Figure 17: A physics example in Dynamic Data. | 2308.13149#62 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | Recently, there has been growing interest in using Large Language Models
(LLMs) for scientific research. Numerous benchmarks have been proposed to
evaluate the ability of LLMs for scientific research. However, current
benchmarks are mostly based on pre-collected objective questions. This design
suffers from data leakage problem and lacks the evaluation of subjective Q/A
ability. In this paper, we propose SciEval, a comprehensive and
multi-disciplinary evaluation benchmark to address these issues. Based on
Bloom's taxonomy, SciEval covers four dimensions to systematically evaluate
scientific research ability. In particular, we design a "dynamic" subset based
on scientific principles to prevent evaluation from potential data leakage.
Both objective and subjective questions are included in SciEval. These
characteristics make SciEval a more effective benchmark for scientific research
ability evaluation of LLMs. Comprehensive experiments on most advanced LLMs
show that, although GPT-4 achieves SOTA performance compared to other LLMs,
there is still substantial room for improvement, especially for dynamic
questions. The data and codes are now publicly available. | http://arxiv.org/pdf/2308.13149 | Liangtai Sun, Yang Han, Zihan Zhao, Da Ma, Zhennan Shen, Baocai Chen, Lu Chen, Kai Yu | cs.CL | 12 pages, 17 figures, 12 tables. Under Review | null | cs.CL | 20230825 | 20230825 | [
{
"id": "2307.03109"
},
{
"id": "2302.13971"
},
{
"id": "2306.05685"
},
{
"id": "2305.18365"
},
{
"id": "2304.06364"
},
{
"id": "2206.04615"
},
{
"id": "2103.03874"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2303.08774"
},
{
"id": "2009.03300"
},
{
"id": "1909.06146"
},
{
"id": "2307.10635"
}
] |
2308.12503 | 0 | arXiv:2308.12503v2 [cs.AI] 28 Aug 2023
# CGMI: Configurable General Multi-Agent Interaction Framework
Jinxin Shi1, Jiabao Zhao1*, Yilei Wang1, Xingjiao Wu2, Jiawen Li1, Liang He1 1School of Computer Science and Technology, East China Normal University, Shanghai, China 2School of Computer Science, Fudan University, Shanghai, China [email protected], [email protected], [email protected], xjwu [email protected], [email protected], [email protected]
# Abstract | 2308.12503#0 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12950 | 0 | arXiv:2308.12950v3 [cs.CL] 31 Jan 2024
# Code Llama: Open Foundation Models for Code
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve
# Abstract | 2308.12950#0 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12503 | 1 | # Abstract
Benefiting from the powerful capabilities of large language models (LLMs), agents based on LLMs have shown the potential to address domain-specific tasks and emulate human behaviors. However, the content generated by these agents remains somewhat superficial, owing to their limited domain expertise and the absence of an effective cognitive architecture. To address this, we present the Configurable General Multi-Agent Interaction (CGMI) framework, designed to replicate human interactions in real-world scenarios. Specifically, we propose a tree-structured methodology for the assignment, detection, and maintenance of agent personality. Additionally, we designed a cognitive architecture equipped with a skill library based on the ACT* model, which contains memory, reflection, and planning modules. We have also integrated general agents to augment the virtual environment's realism. Using the CGMI framework, we simulated numerous classroom interactions between teacher and students. The experiments indicate that aspects such as the teaching methodology, curriculum, and student performance closely mirror real classroom settings. We will open source our work. | 2308.12503#1 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12519 | 1 | Large language models (LLMs) have demonstrated remarkable advancements and have attracted significant efforts to develop LLMs into agents capable of execut- ing intricate multi-step decision-making tasks beyond traditional NLP applica- tions. Existing approaches to LLM-based decision-making predominantly build upon the manually-designed external performance metrics to guide the decision- making process. However, reliance on the external performance metrics as prior is problematic in real-world scenarios, where such prior may be unavailable, flawed, or even erroneous. For genuine autonomous decision making, it is imperative for the agent to develop its rationality from its posterior experiences to judge deci- sions independently. Central to the development of rationality is the construction of an internalized utility judgment, capable of assigning numerical utilities to each decision. This paper proposes RADAGENT (Rational Decision-Making Agent), which fosters the development of its rationality through an iterative framework involving Experience Exploration and Utility Learning. Within this framework, Elo-based Utility Construction is devised to assign Elo scores to individual deci- sion steps to judge their utilities via pairwise comparisons. | 2308.12519#1 | Rational Decision-Making Agent with Internalized Utility Judgment | Large language models (LLMs) have demonstrated remarkable advancements and
have attracted significant efforts to develop LLMs into agents capable of
executing intricate multi-step decision-making tasks beyond traditional NLP
applications. Existing approaches to LLM-based decision-making predominantly
build upon the manually-designed external performance metrics to guide the
decision-making process. However, reliance on the external performance metrics
as prior is problematic in real-world scenarios, where such prior may be
unavailable, flawed, or even erroneous. For genuine autonomous decision making,
it is imperative for the agent to develop its rationality from its posterior
experiences to judge decisions independently. Central to the development of
rationality is the construction of an internalized utility judgment, capable of
assigning numerical utilities to each decision. This paper proposes RadAgent
(Rational Decision-Making Agent), which fosters the development of its
rationality through an iterative framework involving Experience Exploration and
Utility Learning. Within this framework, Elo-based Utility Construction is
devised to assign Elo scores to individual decision steps to judge their
utilities via pairwise comparisons. Consequently, these Elo scores guide the
decision-making process to derive optimal outcomes. Experimental results on the
ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving
over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality
solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness
and efficiency. | http://arxiv.org/pdf/2308.12519 | Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun | cs.CL | Received 8,6,6,6 scores on ICLR 2024 | null | cs.CL | 20230824 | 20240117 | [
{
"id": "2305.14318"
},
{
"id": "2306.06624"
},
{
"id": "2305.17926"
},
{
"id": "2305.10601"
},
{
"id": "2307.16789"
},
{
"id": "2305.06849"
},
{
"id": "2304.08354"
},
{
"id": "2308.09687"
},
{
"id": "2306.11489"
},
{
"id": "2306.17563"
},
{
"id": "2305.14992"
},
{
"id": "2305.01937"
},
{
"id": "2308.10379"
},
{
"id": "2305.11554"
}
] |
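The RADAGENT abstract in the row above judges decision steps by Elo scores learned from pairwise comparisons. The snippet below is only a generic Elo update (K = 32) to illustrate the bookkeeping such a scheme relies on; it is not the paper's exact procedure:

```python
def elo_update(r_a, r_b, outcome_a, k=32.0):
    """Update two Elo ratings after one pairwise comparison.

    outcome_a is 1.0 if candidate A is judged better, 0.0 if B is, 0.5 for a tie.
    """
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    r_a_new = r_a + k * (outcome_a - expected_a)
    r_b_new = r_b + k * ((1.0 - outcome_a) - (1.0 - expected_a))
    return r_a_new, r_b_new

# Two candidate decision steps start at 1000; A wins the judged comparison.
print(elo_update(1000.0, 1000.0, 1.0))  # (1016.0, 984.0)
```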
2308.12682 | 1 | # Abstract
Large Language Models (LLMs) have demonstrated impressive planning abilities due to their vast "world knowledge". Yet, obtaining plans that are both feasible (grounded in affordances) and cost-effective (in plan length), remains a challenge, despite recent progress. This contrasts with heuristic planning methods that employ domain knowledge (formalized in action models such as PDDL) and heuristic search to generate feasible, optimal plans. Inspired by this, we propose to combine the power of LLMs and heuristic planning by leveraging the world knowledge of LLMs and the principles of heuristic search. Our approach, SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain knowledge, that evaluates actions' feasibility (Can) and long-term reward/payoff (Pay), and heuristic search to select the best sequence of actions. Our contributions are (1) a novel framing of the LLM planning problem in the context of heuristic planning, (2) integrating grounding and cost-effective elements into the generated plans, and (3) using heuristic search over actions. Our extensive evaluations show that our model surpasses other LLM planning approaches.
# Introduction | 2308.12682#1 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
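The SayCanPay abstract in the row above scores candidate actions by three estimates (Say, Can, Pay) and then searches over them. The sketch below shows one plausible way to combine such estimates for ranking; summing log-scores, the candidate actions, and the numbers are all assumptions for illustration, not the paper's implementation:

```python
import math

def combined_score(say_logprob, can_prob, pay_value):
    """Rank an action by its Say (LLM log-prob), Can (feasibility), and Pay (payoff) estimates."""
    return say_logprob + math.log(can_prob + 1e-9) + math.log(pay_value + 1e-9)

# Hypothetical candidate next actions proposed by the LLM: (action, say, can, pay).
candidates = [
    ("pick up the key", -1.2, 0.9, 0.8),
    ("open the door",   -0.8, 0.1, 0.9),  # fluent, but infeasible while the door is locked
]
best = max(candidates, key=lambda c: combined_score(*c[1:]))
print(best[0])  # "pick up the key"
```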
2308.12950 | 1 | # Abstract
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B, and 70B parameters each. These models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. The 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.1
# 1 Introduction | 2308.12950#1 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 1 | # Alibaba Group Code & Demo & Models: https://github.com/QwenLM/Qwen-VL
# Abstract
In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. Starting from the Qwen-LM as a foundation, we endow it with visual capacity by the meticulously designed (i) visual receptor, (ii) input-output interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal cleaned corpus. Beyond the conventional image description and question-answering, we implement the grounding and text-reading ability of Qwen-VLs by aligning image-caption-box tuples. The resulting models, including Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar model scales on a broad range of visual-centric benchmarks (e.g., image captioning, question answering, visual grounding) and different settings (e.g., zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to existing vision-language chatbots. All models are public to facilitate future research.
# voav2dev | 2308.12966#1 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 2 | Introduction Agent-based social simulation (ABSS) simulates social interactions in a virtual environment. By observing agent behavior, we can explore complex social phenomena and verify the effects of different social strategies in a controlled setting (Davidsson and Paul 2002). However, improving simulation accuracy and designing complex agents remain key challenges (Aher, Arriaga, and Kalai 2023). With the capabilities of large language models (LLMs) such as GPT4 (OpenAI 2023), we can construct more complex environments and create more realistic agents to simulate social phenomena. However, when using LLMs to complete ABSS tasks, the following issues need to be addressed: (1) How to trigger the capabilities of LLMs to solve complex problems? (2) How to ensure that agents have a stable role and behavior output based on LLMs without forgetting? (3) How to design a communication mechanism for LLMs-based agents to truly simulate interactions?
Existing LLMs-based agents are mainly divided into action agents (Yao et al. 2023; Press et al. 2023) and plan-and-execute agents (Wang et al. 2023a). Action agents make decisions based on previous outputs and are suitable for small tasks. Plan-and-execute agents formulate and execute action | 2308.12503#2 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12519 | 2 | framework, Elo-based Utility Construction is devised to assign Elo scores to individual deci- sion steps to judge their utilities via pairwise comparisons. Consequently, these Elo scores guide the decision-making process to derive optimal outcomes. Exper- imental results on the ToolBench dataset demonstrate RADAGENTâs superiority over baselines, achieving over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality solutions and reduces costs (ChatGPT API calls), highlight- ing its effectiveness and efficiency. | 2308.12519#2 | Rational Decision-Making Agent with Internalized Utility Judgment | Large language models (LLMs) have demonstrated remarkable advancements and
have attracted significant efforts to develop LLMs into agents capable of
executing intricate multi-step decision-making tasks beyond traditional NLP
applications. Existing approaches to LLM-based decision-making predominantly
build upon the manually-designed external performance metrics to guide the
decision-making process. However, reliance on the external performance metrics
as prior is problematic in real-world scenarios, where such prior may be
unavailable, flawed, or even erroneous. For genuine autonomous decision making,
it is imperative for the agent to develop its rationality from its posterior
experiences to judge decisions independently. Central to the development of
rationality is the construction of an internalized utility judgment, capable of
assigning numerical utilities to each decision. This paper proposes RadAgent
(Rational Decision-Making Agent), which fosters the development of its
rationality through an iterative framework involving Experience Exploration and
Utility Learning. Within this framework, Elo-based Utility Construction is
devised to assign Elo scores to individual decision steps to judge their
utilities via pairwise comparisons. Consequently, these Elo scores guide the
decision-making process to derive optimal outcomes. Experimental results on the
ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving
over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality
solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness
and efficiency. | http://arxiv.org/pdf/2308.12519 | Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun | cs.CL | Received 8,6,6,6 scores on ICLR 2024 | null | cs.CL | 20230824 | 20240117 | [
{
"id": "2305.14318"
},
{
"id": "2306.06624"
},
{
"id": "2305.17926"
},
{
"id": "2305.10601"
},
{
"id": "2307.16789"
},
{
"id": "2305.06849"
},
{
"id": "2304.08354"
},
{
"id": "2308.09687"
},
{
"id": "2306.11489"
},
{
"id": "2306.17563"
},
{
"id": "2305.14992"
},
{
"id": "2305.01937"
},
{
"id": "2308.10379"
},
{
"id": "2305.11554"
}
] |
2308.12682 | 2 | With the rise of Large Language Models (LLMs), there has been a growing interest in leveraging their generative capabilities for planning tasks (Huang et al. 2022a; Valmeekam et al. 2022; Silver et al. 2022; Liu et al. 2023). These models have the ability to generate long-horizon plans, capitalizing on their extensive "world knowledge" gained from training on vast amounts of data (e.g. eggs are typically stored in the refrigerator, and placing an apple in the fridge will cool it). Such expansive knowledge can be exploited to plan in an open-world context (Ding et al. 2023). Moreover, planning in the natural language space offers significant flexibility, especially with the advent of multimodal foundation models (Lakhotia et al. 2021; Du et al. 2022; Brohan et al. 2023). Such models have made it easier to represent various modalities such as vision, speech, and even actions in the form of natural language, thus bypassing the need to have domain-specific knowledge (e.g. PDDL) that traditional planning approaches require. However, LLM-based planning often faces challenges, particularly in generating feasible plans. It can fail to | 2308.12682#2 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 2 | # 1 Introduction
Large language models (LLMs) power a rapidly increasing number of applications, having reached a proficiency in natural language that allows them to be commanded and prompted to perform a variety of tasks (OpenAI, 2023; Touvron et al., 2023b). By utilizing large, in-domain datasets, their efficacy can be greatly improved for applications that require a combination of both natural and domain-specific language and understanding of specialized terminology. By training on domain-specific datasets, they have proved effective more broadly on applications that require advanced natural language understanding. A prominent use-case is the formal interaction with computer systems, such as program synthesis from natural language specifications, code completion, debugging, and generating documentation (for a survey, see Xu & Zhu, 2022, also see Section 5). In this work, we present Code Llama, a family of LLMs for code generation and infilling derived from Llama 2 (Touvron et al., 2023b) and released under the same custom permissive license. We provide inference code for both completion and infilling models in the accompanying repository.1 Our approach is based on gradually specializing and increasing the capabilities of Llama 2 models by applying a cascade of training and fine-tuning steps (Figure 2): | 2308.12950#2 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12503 | 3 | plans, suitable for long-term goal tasks. However, in complex scenarios, LLMs-based agents may produce mechanical and superficial content or not execute according to the plan. Inspired by the Adaptive Control of Thought (ACT*) model (Anderson and R 1983), we designed a cognitive architecture equipped with a skill library for agents. Specifically, we employ the Chain of Thought (CoT) and Chain of Action (CoA) methods to extract declarative and procedural memories from the agent's working memory. During the reflection and planning processes, content is retrieved from the skill library, ensuring deeper and more specialized insights. Assigning each intelligent agent a unique identity, personality, and capability (Wang et al. 2023c) can offer a more humanized and emotional interactive experience, and also enhance the realism of simulating complex social scenarios (Argyle et al. 2023). Although LLMs like GPT4 possess strong role-playing capabilities, we found that LLMs tend to forget the original character settings in multi-turn dialogues and make decisions that are inconsistent with the character's design. Additionally, due to the limitations | 2308.12503#3 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
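A minimal sketch of the kind of cognitive architecture the chunk above describes: working memory is distilled into declarative and procedural stores, and a skill library is queried during reflection and planning. The class and method names, and the keyword-based retrieval, are illustrative assumptions, not CGMI's actual interface.

```python
from dataclasses import dataclass, field

@dataclass
class CognitiveModule:
    declarative: list = field(default_factory=list)    # facts distilled via chain-of-thought
    procedural: list = field(default_factory=list)     # action recipes distilled via chain-of-action
    skill_library: dict = field(default_factory=dict)  # domain skills, keyed by topic

    def memorize(self, working_memory: list) -> None:
        # Route items from working memory into the two long-term stores.
        for item in working_memory:
            (self.procedural if item.startswith("do:") else self.declarative).append(item)

    def retrieve_skills(self, query: str) -> list:
        # Naive keyword lookup as a stand-in for embedding-based retrieval.
        return [v for k, v in self.skill_library.items() if k in query.lower()]

mod = CognitiveModule(skill_library={"fractions": "step-by-step fraction addition routine"})
mod.memorize(["student asked about 1/2 + 1/3", "do: give a worked example"])
print(mod.retrieve_skills("how to teach fractions"))
```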
2308.12519 | 3 | # INTRODUCTION
Agent (Searle, 1969; Wooldridge & Jennings, 1995; Maes, 1994; Hendler, 1999), as the long-standing pursuit of artificial intelligence (AI), is expected to possess the ability to plan, make decisions, and take actions to accomplish complex tasks autonomously. As large language models (LLMs) have undergone rapid development, showcasing remarkable capabilities (OpenAI, 2022; 2023), many efforts have been devoted to developing LLM-based agents (Richards, 2023; Nakajima, 2023; age, 2023) to accomplish intricate multi-step decision-making tasks (Yao et al., 2022; Hao et al., 2023a; Yao et al., 2023; Qin et al., 2023c) beyond traditional natural language processing (NLP) applications. Even with these strides, existing LLM-based agents require a manually-designed external performance measure to guide the decision-making process. For instance, in Game of 24, which uses four numbers and basic arithmetic operations to obtain 24, a value prompt (Yao et al., 2023) is heuristically designed to assess the potential of each decision to reach 24, and correct decisions are then chosen accordingly. The reliance on external performance metrics as a prior restricts adaptability in real-world scenarios, as such a prior may be unavailable, flawed, or even erroneous. | 2308.12519#3 | Rational Decision-Making Agent with Internalized Utility Judgment | Large language models (LLMs) have demonstrated remarkable advancements and
have attracted significant efforts to develop LLMs into agents capable of
executing intricate multi-step decision-making tasks beyond traditional NLP
applications. Existing approaches to LLM-based decision-making predominantly
build upon the manually-designed external performance metrics to guide the
decision-making process. However, reliance on the external performance metrics
as prior is problematic in real-world scenarios, where such prior may be
unavailable, flawed, or even erroneous. For genuine autonomous decision making,
it is imperative for the agent to develop its rationality from its posterior
experiences to judge decisions independently. Central to the development of
rationality is the construction of an internalized utility judgment, capable of
assigning numerical utilities to each decision. This paper proposes RadAgent
(Rational Decision-Making Agent), which fosters the development of its
rationality through an iterative framework involving Experience Exploration and
Utility Learning. Within this framework, Elo-based Utility Construction is
devised to assign Elo scores to individual decision steps to judge their
utilities via pairwise comparisons. Consequently, these Elo scores guide the
decision-making process to derive optimal outcomes. Experimental results on the
ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving
over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality
solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness
and efficiency. | http://arxiv.org/pdf/2308.12519 | Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun | cs.CL | Received 8,6,6,6 scores on ICLR 2024 | null | cs.CL | 20230824 | 20240117 | [
{
"id": "2305.14318"
},
{
"id": "2306.06624"
},
{
"id": "2305.17926"
},
{
"id": "2305.10601"
},
{
"id": "2307.16789"
},
{
"id": "2305.06849"
},
{
"id": "2304.08354"
},
{
"id": "2308.09687"
},
{
"id": "2306.11489"
},
{
"id": "2306.17563"
},
{
"id": "2305.14992"
},
{
"id": "2305.01937"
},
{
"id": "2308.10379"
},
{
"id": "2305.11554"
}
] |
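To make the "manually-designed external performance measure" above concrete for Game of 24, the sketch below hand-codes a judge that checks whether the remaining numbers can still reach 24 and uses it to filter candidate decisions. It is a brute-force stand-in for the value prompt cited in the chunk, not the prompt itself.

```python
from itertools import permutations

def reachable_24(nums, target=24.0, eps=1e-6):
    """External metric: can the remaining numbers still be combined into 24?"""
    if len(nums) == 1:
        return abs(nums[0] - target) < eps
    for a, b, *rest in permutations(nums):
        outcomes = [a + b, a - b, a * b] + ([a / b] if abs(b) > eps else [])
        if any(reachable_24([o] + rest, target, eps) for o in outcomes):
            return True
    return False

# Guiding a decision step with the external metric: keep only promising intermediate states.
candidates = [[6.0, 4.0], [7.0, 5.0], [3.0, 8.0]]      # hypothetical partial states
print([c for c in candidates if reachable_24(c)])      # -> [[6.0, 4.0], [3.0, 8.0]]
```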
2308.12682 | 3 | PDDL) that traditional planning approaches require. However, LLM-based planning often faces challenges, particularly in generating feasible plans. It can fail to model action affordances (or pre-conditions)1 due to difficulty in modeling the state of the world (e.g. grab milk from the fridge even if the door is closed) or having a pretrained world model that is not aligned with the current environment (e.g. using a controller to regulate the heater where only a knob exists), leading to infeasible plans. Moreover, such models focus greedily on the next actionable step without considering its relevance to the ultimate goal, resulting in longer, cost-inefficient plans (Valmeekam et al. 2023). Recent works like SayCan (Ahn et al. 2022) have sought to address the affordance problem by using pretrained skills to evaluate the action's executability: can the action be executed in the current state? However, the plan cost remains a concern. | 2308.12682#3 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 3 | • Code-training from foundation models. While most LLMs for code generation such as AlphaCode (Li et al., 2022), InCoder (Fried et al., 2023) or StarCoder (Li et al., 2023) are trained on code only, Codex (Chen et al., 2021) was fine-tuned from a general language model. We also start from a foundation model (Llama 2, Touvron et al., 2023b) pretrained on general-purpose text and code data. Our comparison (Section 3.4.1) shows that initializing our model with Llama 2 outperforms the same architecture trained on code only for a given budget.
1https://github.com/facebookresearch/codellama †: Core contributors. ‡: Meta AI, CERMICS École des Ponts ParisTech. §: Meta AI & Hebrew University of Jerusalem
Prompt: In Bash, how do I list all text files in the current directory (excluding subdirectories) that have been modified in the last month
Response: You can use the 'find' command in Bash to list all text files in the current directory (excluding subdirectories) that have been modified in the last month. The command would be:
find . -maxdepth 1 -type f -mtime -30 -name "*.txt" | 2308.12950#3 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 3 | [OCR residue of the Qwen-VL first-page figure: qualitative Qwen-VL-Chat dialogue examples, including comparing the city skylines of Chongqing and Beijing across two images, reading a hospital sign to answer which floor the surgery (3F) and otolaryngology (4F) departments are on, and fixing a bug in a find-the-minimum function.] ABSTRACT In this work, we introduce the Qwen-VL series, a set of large-scale vision-language models (LVLMs) designed to perceive and understand both texts and images. Starting from the Qwen-LM as a | 2308.12966#3 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 4 | to forget the original character settings in multi-turn dialogues and make decisions that are inconsistent with the character's design. Additionally, due to the limitations of the context window, it's challenging to set roles comprehensively and in fine detail. To address these issues, this paper introduces a tree-structured persona model for character assignment, detection, and maintenance, which is beneficial for agent interaction performance. | 2308.12503#4 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
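A minimal sketch of a tree-structured persona in the spirit of the chunk above: traits are grouped under nodes so that only the relevant branch needs to sit in the context window, together with a trivial consistency check for maintenance. The fields and the check are illustrative assumptions, not the paper's exact scheme.

```python
from dataclasses import dataclass, field

@dataclass
class PersonaNode:
    name: str
    traits: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

    def find(self, name: str):
        if self.name == name:
            return self
        for child in self.children:
            hit = child.find(name)
            if hit:
                return hit
        return None

persona = PersonaNode("student", children=[
    PersonaNode("personality", traits={"openness": "high", "patience": "low"}),
    PersonaNode("capability", traits={"math_level": "grade 7"}),
])

# Detection/maintenance stub: flag a generated utterance that contradicts an assigned trait.
def consistent(utterance: str, node: PersonaNode) -> bool:
    return not ("I am very patient" in utterance and node.find("personality").traits.get("patience") == "low")

print(consistent("I am very patient, take your time.", persona))  # -> False
```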
2308.12519 | 4 | When making decisions, humans not only draw upon external measures but also resort to individual rationality formed in practice from posterior experience. This rationality is modeled as an internal utility judgment ability with two principal properties (Kahneman & Tversky,
2000; Arrow, 1959; Plott, 1973): (1) Completeness: Given any two choices A and B, an individual must strictly prefer one of them (A ≥ B or B ≥ A). (2) Transitivity: If an individual prefers A to B (A ≥ B), and prefers B to C (B ≥ C), then the individual must prefer A to C (A ≥ B ≥ C). Based on these two properties of the utility judgment, given a set of choices, humans can judge their utilities and choose the one with the highest utility to achieve the best outcome. | 2308.12519#4 | Rational Decision-Making Agent with Internalized Utility Judgment | Large language models (LLMs) have demonstrated remarkable advancements and
have attracted significant efforts to develop LLMs into agents capable of
executing intricate multi-step decision-making tasks beyond traditional NLP
applications. Existing approaches to LLM-based decision-making predominantly
build upon the manually-designed external performance metrics to guide the
decision-making process. However, reliance on the external performance metrics
as prior is problematic in real-world scenarios, where such prior may be
unavailable, flawed, or even erroneous. For genuine autonomous decision making,
it is imperative for the agent to develop its rationality from its posterior
experiences to judge decisions independently. Central to the development of
rationality is the construction of an internalized utility judgment, capable of
assigning numerical utilities to each decision. This paper proposes RadAgent
(Rational Decision-Making Agent), which fosters the development of its
rationality through an iterative framework involving Experience Exploration and
Utility Learning. Within this framework, Elo-based Utility Construction is
devised to assign Elo scores to individual decision steps to judge their
utilities via pairwise comparisons. Consequently, these Elo scores guide the
decision-making process to derive optimal outcomes. Experimental results on the
ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving
over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality
solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness
and efficiency. | http://arxiv.org/pdf/2308.12519 | Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun | cs.CL | Received 8,6,6,6 scores on ICLR 2024 | null | cs.CL | 20230824 | 20240117 | [
{
"id": "2305.14318"
},
{
"id": "2306.06624"
},
{
"id": "2305.17926"
},
{
"id": "2305.10601"
},
{
"id": "2307.16789"
},
{
"id": "2305.06849"
},
{
"id": "2304.08354"
},
{
"id": "2308.09687"
},
{
"id": "2306.11489"
},
{
"id": "2306.17563"
},
{
"id": "2305.14992"
},
{
"id": "2305.01937"
},
{
"id": "2308.10379"
},
{
"id": "2305.11554"
}
] |
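As a small illustration of why an internalized numeric utility gives the two properties above (completeness and transitivity) for free, consider the following sketch; the utility values are arbitrary placeholders, not scores from the paper.

```python
def prefers(utility, a, b):
    """Weak preference induced by numeric utilities: A is preferred to B iff utility(A) >= utility(B)."""
    return utility[a] >= utility[b]

utility = {"A": 3.2, "B": 1.7, "C": 0.4}  # illustrative scores

# Completeness: for any two choices, at least one direction of preference holds.
assert prefers(utility, "A", "B") or prefers(utility, "B", "A")
# Transitivity: A >= B and B >= C imply A >= C.
assert prefers(utility, "A", "B") and prefers(utility, "B", "C") and prefers(utility, "A", "C")
# The rational choice is simply the option with the highest utility.
print(max(utility, key=utility.get))  # -> "A"
```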
2308.12682 | 4 | In contrast, traditional planning provides an established approach to developing a sequence of actions to transition from an initial state to a goal state. It uses a domain file (with action models defined in PDDL specifying pre- and post-conditions) and heuristic search planners like Fast Downward (Helmert 2006) to ensure feasibility through grounding in preconditions, and to generate cost-effective plans by employing search trees to select the best (or shortest) sequence of actions. However, obtaining a domain file for complex real-world environments is difficult, and its use restricts planning to a closed-world setting. These methods also struggle to handle partial observations, although approximate planning (Kaelbling, Littman, and Cassandra 1998) can alleviate it.
Integrating LLMs with classical planning offers a promising research path, merging the generative abilities and (open) world knowledge of LLMs with the methodological rigor of planning algorithms. To this end, we extend the following contributions. (1) We propose to frame language model planning in the context of heuristic planning, which to
1In robotics, affordances refer to possible actions that can be executed, which is conceptually similar to inferring preconditions in planning: what actions are feasible in a certain situation. | 2308.12682#4 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
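The heuristic-search side referenced in the chunk above (search trees that select the best or shortest action sequence under a cost-so-far plus heuristic estimate) can be summarized as generic best-first search. The skeleton below is an A*-style sketch under that framing, not the implementation of Fast Downward or of SayCanPay itself.

```python
import heapq
import itertools

def best_first_plan(start, goal_test, successors, heuristic):
    """Generic A*-style search; successors(state) yields (action, next_state, step_cost)."""
    tie = itertools.count()              # tie-breaker so states never need to be comparable
    frontier = [(heuristic(start), 0.0, next(tie), start, [])]
    seen = set()
    while frontier:
        f, g, _, state, plan = heapq.heappop(frontier)
        if goal_test(state):
            return plan                  # cheapest (e.g. shortest) plan found first
        if state in seen:
            continue
        seen.add(state)
        for action, nxt, cost in successors(state):
            if nxt not in seen:
                heapq.heappush(frontier, (g + cost + heuristic(nxt), g + cost, next(tie), nxt, plan + [action]))
    return None                          # no feasible plan
```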
2308.12950 | 4 | find . -maxdepth 1 -type f -mtime -30 -name "*.txt"
Here's a breakdown of the command: * '.' specifies the current directory. * '-maxdepth 1' tells 'find' to only search the current directory and not any subdirectories. * '-type f' tells 'find' to only search for files (not directories or other types of files). * '-mtime -30' tells 'find' to only search for files that have been modified in the last 30 days. * '-name "*.txt"' tells 'find' to only search for files with the '.txt' extension. Note that the '-mtime' option uses the number of days since the file was last modified, so '-mtime -30' means "modified in the last 30 days". If you want to search for files modified in the last month, you can use the '-mtime -30' option.
Figure 1: Example of response of Code Llama - Instruct (34B) when queried for a specific shell command. | 2308.12950#4 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 4 | [OCR residue interleaving a Qwen-VL first-page figure example (a code-debugging dialogue: "The solution in the image is to find the minimum value in an array, and there is a bug in the function. Fix the bug.", with a C snippet) and the paper's abstract. Recoverable abstract text:] ...models (LVLMs) designed to perceive and understand both texts and images. Starting from the Qwen-LM as a foundation, we assign it visual capacity by meticulously designed (i) visual receptor, (ii) input-output interface, (iii) 3-stage training [...]. Beyond the conventional description and question-answering, we inject the grounding ability into Qwen-VLs by importing fine-grained image-caption-box pairs. The resulting models, including Qwen-VL and Qwen-VL-Chat, set new records on a broad range of visual-centric benchmarks (e.g., image captioning, question answering, visual grounding) under different settings (e.g., zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our instruction-tuned Qwen-VL-Chat also demonstrates conspicuous superiority compared to existing | 2308.12966#4 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 5 | Influenced by issues such as the assistant repeating instructions, infinite loops of messages, and conversation termination conditions, it remains challenging for chat agents to automatically collaborate to accomplish tasks in specific scenarios (Li et al. 2023). Setting up scenario-adapted general agents to solve scenario-specific tasks for role agents can help role agents avoid the aforementioned problems and enhance the realism of virtual scenes. For this purpose, this paper explores a Configurable General Multi-Agent Interaction Framework (CGMI) that can simulate real-life scenarios by binding general agents with role agents.
In this work, we take the "classroom teaching scenario" as an example, employing the CGMI framework to simulate the teaching process between "teacher" and "students", including a teacher agent, student agents, assistant agents and supervisory agents. The experimental results indicate that the interactions in the virtual classroom align with actual teaching. It helps to assist teacher instruction, evaluate teaching competencies, and validate teaching hypotheses.
In summary, the major contributions of this paper are
threefold: • The introduction of a cognitive structure equipped with a skill library, combining human cognition and skill-library retrieval, enabling agents to engage in deep reflection and planning.
• Designed a tree-structured approach for assigning, detecting, and maintaining the personal traits of agents, which reduces memory pressure on agents and improves stability. | 2308.12503#5 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12519 | 5 | To this end, we propose RADAGENT (Rational Decision-Making Agent), which internalizes the utility judgment ability to achieve rationality for the agent. In RADAGENT, the internalized utility judgment is constructed through an iterative framework: (1) Experience Exploration: Due to the complexity of real-world tasks, the solution space may be infinite and it is challenging to find the optimal solution efficiently. The agent should explore potential decisions to find as many better solutions as possible for utility learning. (2) Utility Learning: Given a series of solutions, the agent should compare them to judge their utilities. To learn a quantitative utility, we further design Elo-based Utility Construction, which assigns each decision an Elo score to represent its utility as a quantitative judgment through a series of pairwise comparisons between any two solutions. After multiple comparisons, the Elo score converges to an accurate value that represents its actual utility for achieving the task. Through this iterative utility judgment construction, RADAGENT can judge the best solution with the best outcome. | 2308.12519#5 | Rational Decision-Making Agent with Internalized Utility Judgment | Large language models (LLMs) have demonstrated remarkable advancements and
have attracted significant efforts to develop LLMs into agents capable of
executing intricate multi-step decision-making tasks beyond traditional NLP
applications. Existing approaches to LLM-based decision-making predominantly
build upon the manually-designed external performance metrics to guide the
decision-making process. However, reliance on the external performance metrics
as prior is problematic in real-world scenarios, where such prior may be
unavailable, flawed, or even erroneous. For genuine autonomous decision making,
it is imperative for the agent to develop its rationality from its posterior
experiences to judge decisions independently. Central to the development of
rationality is the construction of an internalized utility judgment, capable of
assigning numerical utilities to each decision. This paper proposes RadAgent
(Rational Decision-Making Agent), which fosters the development of its
rationality through an iterative framework involving Experience Exploration and
Utility Learning. Within this framework, Elo-based Utility Construction is
devised to assign Elo scores to individual decision steps to judge their
utilities via pairwise comparisons. Consequently, these Elo scores guide the
decision-making process to derive optimal outcomes. Experimental results on the
ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving
over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality
solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness
and efficiency. | http://arxiv.org/pdf/2308.12519 | Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun | cs.CL | Received 8,6,6,6 scores on ICLR 2024 | null | cs.CL | 20230824 | 20240117 | [
{
"id": "2305.14318"
},
{
"id": "2306.06624"
},
{
"id": "2305.17926"
},
{
"id": "2305.10601"
},
{
"id": "2307.16789"
},
{
"id": "2305.06849"
},
{
"id": "2304.08354"
},
{
"id": "2308.09687"
},
{
"id": "2306.11489"
},
{
"id": "2306.17563"
},
{
"id": "2305.14992"
},
{
"id": "2305.01937"
},
{
"id": "2308.10379"
},
{
"id": "2305.11554"
}
] |
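The Elo-based Utility Construction described in the record above can be sketched in a few lines: every candidate solution gets a rating, pairwise comparisons update the ratings with the standard Elo rule, and the highest-rated solution is kept. The judge, K-factor, and comparison schedule below are illustrative stand-ins, not RadAgent's actual configuration.

```python
import itertools
import random
from collections import defaultdict

def elo_update(r_a, r_b, outcome_a, k=32.0):
    """Standard Elo update; outcome_a is 1.0 if A wins, 0.0 if B wins, 0.5 for a tie."""
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    return r_a + k * (outcome_a - expected_a), r_b + k * ((1.0 - outcome_a) - (1.0 - expected_a))

def learn_utilities(solutions, compare, rounds=10, init=1000.0):
    """Assign each candidate solution an Elo score via repeated pairwise comparisons.
    `compare(a, b)` returns 1.0 / 0.5 / 0.0 and stands in for an LLM judge."""
    ratings = defaultdict(lambda: init)
    pairs = list(itertools.combinations(range(len(solutions)), 2))
    for _ in range(rounds):
        random.shuffle(pairs)
        for i, j in pairs:
            ratings[i], ratings[j] = elo_update(ratings[i], ratings[j], compare(solutions[i], solutions[j]))
    best = max(range(len(solutions)), key=lambda i: ratings[i])
    return dict(ratings), solutions[best]

# Toy usage: the stub judge simply prefers longer solutions.
stub_judge = lambda a, b: 1.0 if len(a) > len(b) else (0.0 if len(a) < len(b) else 0.5)
ratings, best = learn_utilities(["a", "abc", "ab"], stub_judge)
print(best)  # -> "abc"
```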
2308.12682 | 5 | 1In robotics, affordances refer to possible actions that can be executed, which is conceptually similar to inferring preconditions in planning: what actions are feasible in a certain situation.
[Figure 1, OCR residue: a BabyAI-style example with goal "pick up the box". Initial state: Room 1 has the agent, a red key, and a green ball; Room 2 has a purple box; the door connecting the rooms is locked and the green ball blocks it. Candidate next actions (pick up green ball, drop ball in void, pick up red key, toggle red door, drop key in void, pick up purple box, done task) are scored, and the plans produced by Say, SayCan, and SayCanPay scoring are compared: Say alone yields infeasible actions, SayCan yields feasible but sub-optimal actions, and SayCanPay yields a feasible and cost-effective plan.] | 2308.12682#5 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 5 | Figure 1: Example of response of Code Llama - Instruct (34B) when queried for a specific shell command.
⢠Infilling. Autoregressive training and fine-tuning of LLMs is suitable for prompt completion, but does not provide the capability to fill a missing portion of text while taking the full surrounding context into account. Our code-training for 7B, 13B and 70B Code Llama models features a multitask objective (Fried et al., 2023) consisting of both autoregressive and causal infilling prediction, enabling applications such as real-time completion in source code editors or docstring generation. | 2308.12950#5 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
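A rough sketch of how an infilling training example can be built from an ordinary code file, in the prefix/suffix/middle spirit of the multitask objective mentioned in the chunk above. The sentinel strings below are placeholders and not the special tokens Code Llama actually uses.

```python
import random

# Placeholder sentinels; the actual special tokens used by Code Llama are not assumed here.
PREFIX, SUFFIX, MIDDLE = "<prefix>", "<suffix>", "<middle>"

def make_infilling_example(document: str, rng: random.Random) -> str:
    """Cut a random span out of `document` and ask the model to regenerate it last."""
    i, j = sorted(rng.sample(range(len(document) + 1), 2))
    prefix, middle, suffix = document[:i], document[i:j], document[j:]
    # The model sees prefix and suffix first, then learns to produce the missing middle.
    return f"{PREFIX}{prefix}{SUFFIX}{suffix}{MIDDLE}{middle}"

example = make_infilling_example("def add(a, b):\n    return a + b\n", random.Random(0))
print(example)
```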
2308.12503 | 6 | • Designed a tree-structured approach for assigning, detecting, and maintaining the personal traits of agents, which reduces memory pressure on agents and improves stability.
• The construction of a Configurable General Multi-agent Interaction framework (CGMI), supporting social experimental research in specific scenarios.
Related Work In this section, we will review agent research for solving domain problems, as well as agent research for simulating real human interaction processes. | 2308.12503#6 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12519 | 6 | To validate the effectiveness of our proposed approach, we implement RADAGENT with ChatGPT (OpenAI, 2022) and conduct extensive experiments on the ToolBench dataset (Qin et al., 2023c), which contains intricate multi-step decision tasks involving diverse scenarios. Experimental results unequivocally demonstrate the superiority of our approach against several baselines by achieving over 10% improvements in Pass Rate to accomplish complex tasks. Moreover, extensive analyses show that our approach not only delivers superior solutions with higher quality but also achieves greater efficiency by reducing the number of ChatGPT API calls.
Our contributions are threefold:
⢠We propose RADAGENT, a rational decision-making agent that can construct its internal ratio- nality to accomplish diverse real-world tasks, not relying on external performance measure.
⢠We devise Elo-based Utility Construction which can internalize the utility judgment for the agent by learning Elo scores for each decision, leading to the optimal solution.
⢠Extensive experiments on the ToolBench dataset demonstrate the effectiveness and efficiency of our proposed method against representative methods, marking a significant step toward unleash- ing the autonomous decision-making capability of LLMs.
# 2 PRELIMINARY | 2308.12519#6 | Rational Decision-Making Agent with Internalized Utility Judgment | Large language models (LLMs) have demonstrated remarkable advancements and
have attracted significant efforts to develop LLMs into agents capable of
executing intricate multi-step decision-making tasks beyond traditional NLP
applications. Existing approaches to LLM-based decision-making predominantly
build upon the manually-designed external performance metrics to guide the
decision-making process. However, reliance on the external performance metrics
as prior is problematic in real-world scenarios, where such prior may be
unavailable, flawed, or even erroneous. For genuine autonomous decision making,
it is imperative for the agent to develop its rationality from its posterior
experiences to judge decisions independently. Central to the development of
rationality is the construction of an internalized utility judgment, capable of
assigning numerical utilities to each decision. This paper proposes RadAgent
(Rational Decision-Making Agent), which fosters the development of its
rationality through an iterative framework involving Experience Exploration and
Utility Learning. Within this framework, Elo-based Utility Construction is
devised to assign Elo scores to individual decision steps to judge their
utilities via pairwise comparisons. Consequently, these Elo scores guide the
decision-making process to derive optimal outcomes. Experimental results on the
ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving
over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality
solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness
and efficiency. | http://arxiv.org/pdf/2308.12519 | Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun | cs.CL | Received 8,6,6,6 scores on ICLR 2024 | null | cs.CL | 20230824 | 20240117 | [
{
"id": "2305.14318"
},
{
"id": "2306.06624"
},
{
"id": "2305.17926"
},
{
"id": "2305.10601"
},
{
"id": "2307.16789"
},
{
"id": "2305.06849"
},
{
"id": "2304.08354"
},
{
"id": "2308.09687"
},
{
"id": "2306.11489"
},
{
"id": "2306.17563"
},
{
"id": "2305.14992"
},
{
"id": "2305.01937"
},
{
"id": "2308.10379"
},
{
"id": "2305.11554"
}
] |
2308.12682 | 6 | Figure 1: Figure illustrates how SayCanPay scores the next action in BabyAI environment (Chevalier-Boisvert et al. 2019). Given inputs: goal g and initial observation o0, the Say model generates candidate actions with associated probabilities. These are then scored for feasibility by the Can model and for payoff by the Pay model. Here, the Can model deems both pick up red key and pick up green ball equally probable (i.e. both preconditions are satisfied). However, the Pay model ensures a better payoff for pick up green ball. We compare plans generated by Say, SayCan, and SayCanPay scoring. Say scoring can lead to infeasible plans and SayCan to feasible but longer plans. The displayed grid is purely illustrative, with no visual inputs used. | 2308.12682#6 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 6 | Long input contexts. Unlocking repository-level reasoning for completion or synthesis, as opposed to function-level or file-level, requires prompting the model with much longer context than the 4,096 tokens supported by Llama 2. We propose an additional fine-tuning stage that extends the maximum context length from 4,096 tokens to 100,000 tokens by modifying the parameters of the RoPE positional embeddings (Su et al., 2021) used in Llama 2. Our experiments show Code Llama operating on very large contexts with a moderate impact on performance on standard coding benchmarks (Section 3.3). • Instruction fine-tuning. For end-users, the utility of LLMs is significantly improved by instruction fine-tuning (Ouyang et al., 2022; Wei et al., 2022; OpenAI, 2023; Touvron et al., 2023b), which also helps prevent unsafe, toxic or biased generations. Code Llama - Instruct variants are further fine-tuned on a mix of proprietary instruction data for improved safety and helpfulness, and a new machine-generated self-instruct dataset created by prompting Llama 2 for coding problems and Code Llama to generate associated unit tests and solutions. Our results show that Code Llama - Instruct significantly improves performance on various truthfulness, toxicity and bias benchmarks at moderate cost in terms of code generation performance (Section 4). | 2308.12950#6 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
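To make the phrase "modifying the parameters of the RoPE positional embeddings" in the chunk above concrete, the sketch below shows the standard RoPE frequency schedule and how raising its base period stretches the usable position range. This is a minimal sketch under the usual RoPE parameterization; the specific base value used for Code Llama is not stated in this excerpt, so the numbers are purely illustrative.

```python
import numpy as np

def rope_frequencies(dim: int, base: float = 10_000.0) -> np.ndarray:
    # Standard RoPE schedule: theta_i = base ** (-2i / dim) for each feature pair.
    return base ** (-np.arange(0, dim, 2) / dim)

def rope_angles(positions: np.ndarray, dim: int, base: float) -> np.ndarray:
    # Rotation angle for every (position, frequency) pair; these angles define
    # the rotations applied to query/key vectors.
    return np.outer(positions, rope_frequencies(dim, base))

# Raising the base slows the rotations, so positions far beyond the original
# 4,096-token window remain distinguishable (values below are illustrative).
angles_short = rope_angles(np.arange(4_096), dim=128, base=10_000.0)
angles_long = rope_angles(np.arange(100_000), dim=128, base=1_000_000.0)
```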
2308.12503 | 7 | Related Work In this section, we will review agent research for solving domain problems, as well as agent research for simulating real human interaction processes.
Agents for Solving Domain Problems Recent studies in LLMs have explored the utilization of agent systems for domain-specific tasks across various sectors. In healthcare, (Nair et al. 2023) introduced a multi-agent system that enhances treatment recommendations via communication feedback. (Qian et al. 2023) presented CHATDEV: a simulated development team where agents oversee design, coding, testing, and documentation, thereby ensuring effective game development coordination. (Alexandru et al. 2015) designed a multi-agent e-learning environment tailored for education, providing customized support for instructional decisions. ChemCrow, highlighted in (Bran et al. 2023), formulated a framework that grants agents access to external knowledge repositories, consequently amplifying their efficacy in areas like organic synthesis, drug discovery, and materials design. (Wang et al. 2023b) unveiled the DEPS interactive planning technique, addressing long-term planning challenges within the Minecraft game. Collectively, these investigations illuminate agent applications tailored to particular domains and hurdles. | 2308.12503#7 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12519 | 7 | # 2 PRELIMINARY
Elo Rating System The Elo rating system (Elo, 1967), commonly used in competitive contexts, offers a numerical estimate of the skill levels of players. It represents the skill level of each player by an Elo score and assesses these scores through a series of one-to-one competitions. It assumes that each player's performance follows a Gaussian distribution ($x \sim N(\mu, \sigma)$) and that each comparison of two players is effectively a comparison between two samples from their Gaussian distributions. Through multiple comparisons, we can approximate their true skill levels by estimating their Elo scores.
Given two players x and y, their Elo scores are denoted as $v_x$ and $v_y$, respectively. The expected superiority of x against y is calculated as:
$E_{x>y} = \frac{1}{1 + e^{-\frac{v_x - v_y}{r}}}$ (1)
where r is the Elo coefficient.
Next, we run a competition between them to find the actual winner. We denote the competition result as $R_{x>y}$:
$R_{x>y} = \begin{cases} 1, & \text{if } x \text{ wins,} \\ 0, & \text{if } y \text{ wins,} \\ 0.5, & \text{otherwise} \end{cases}$ (2)
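As a concrete illustration, here is a minimal Python sketch of equations (1) and (2); the Elo coefficient r and the step size K used in the update defined next in equation (3) are assumed hyperparameters (conventional chess values are shown), not values taken from the paper.

```python
import math

def expected_superiority(v_x: float, v_y: float, r: float = 400.0) -> float:
    # Equation (1): expected superiority of player x over player y.
    return 1.0 / (1.0 + math.exp(-(v_x - v_y) / r))

def competition_result(winner: str) -> float:
    # Equation (2): R_{x>y} is 1 if x wins, 0 if y wins, 0.5 otherwise (draw).
    return {"x": 1.0, "y": 0.0, "draw": 0.5}[winner]

def elo_update(v_x: float, v_y: float, winner: str,
               k: float = 32.0, r: float = 400.0) -> tuple[float, float]:
    # Equation (3), given next in the text: nudge both scores toward the
    # observed result by the step size K.
    e_xy = expected_superiority(v_x, v_y, r)
    r_xy = competition_result(winner)
    return (v_x + k * (r_xy - e_xy),
            v_y + k * ((1.0 - r_xy) - (1.0 - e_xy)))
```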
We then update their Elo score accordingly:
$v_x = v_x + K \cdot (R_{x>y} - E_{x>y}), \quad v_y = v_y + K \cdot (R_{y>x} - E_{y>x})$ (3) | 2308.12519#7 | Rational Decision-Making Agent with Internalized Utility Judgment | Large language models (LLMs) have demonstrated remarkable advancements and
have attracted significant efforts to develop LLMs into agents capable of
executing intricate multi-step decision-making tasks beyond traditional NLP
applications. Existing approaches to LLM-based decision-making predominantly
build upon the manually-designed external performance metrics to guide the
decision-making process. However, reliance on the external performance metrics
as prior is problematic in real-world scenarios, where such prior may be
unavailable, flawed, or even erroneous. For genuine autonomous decision making,
it is imperative for the agent to develop its rationality from its posterior
experiences to judge decisions independently. Central to the development of
rationality is the construction of an internalized utility judgment, capable of
assigning numerical utilities to each decision. This paper proposes RadAgent
(Rational Decision-Making Agent), which fosters the development of its
rationality through an iterative framework involving Experience Exploration and
Utility Learning. Within this framework, Elo-based Utility Construction is
devised to assign Elo scores to individual decision steps to judge their
utilities via pairwise comparisons. Consequently, these Elo scores guide the
decision-making process to derive optimal outcomes. Experimental results on the
ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving
over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality
solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness
and efficiency. | http://arxiv.org/pdf/2308.12519 | Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun | cs.CL | Received 8,6,6,6 scores on ICLR 2024 | null | cs.CL | 20230824 | 20240117 | [
{
"id": "2305.14318"
},
{
"id": "2306.06624"
},
{
"id": "2305.17926"
},
{
"id": "2305.10601"
},
{
"id": "2307.16789"
},
{
"id": "2305.06849"
},
{
"id": "2304.08354"
},
{
"id": "2308.09687"
},
{
"id": "2306.11489"
},
{
"id": "2306.17563"
},
{
"id": "2305.14992"
},
{
"id": "2305.01937"
},
{
"id": "2308.10379"
},
{
"id": "2305.11554"
}
] |
2308.12682 | 7 | our knowledge, is the first of its kind (§ 4). (2) We incorporate feasibility and cost-effective elements into the generated plans using a joint scoring named SayCanPay. As shown in Figure 1, it guides the planning through three key steps: (i) Say: Given a goal and an initial observation, the LLM generates likely candidate actions at each step; (ii) Can: An affordance model scores these actions' feasibility, mirroring the evaluation of preconditions; (iii) Pay: Another model scores the actions according to their estimated payoff, akin to heuristic estimators (§ 5). The Can and Pay models undergo domain-specific training to align the plans with the current environment (§ 6). (3) Using this combined score as a heuristic, we search for the most feasible and cost-effective plan (§ 5.2). We demonstrate how our proposed joint scoring and heuristic search improve over the current LLM planning frameworks (§ 7.3).
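A minimal sketch of how such a joint Say/Can/Pay score could be computed and used to rank candidate actions. The multiplicative combination and the `can_model`/`pay_model` interfaces are illustrative assumptions rather than the paper's exact formulation, and the full method searches over whole action sequences rather than picking one action greedily.

```python
def saycanpay_score(action, say_prob, can_model, pay_model, history, goal):
    # Joint heuristic for one candidate action. `can_model` and `pay_model`
    # are assumed to return scores in [0, 1]; multiplying the three terms is
    # one simple combination, not necessarily the paper's exact one.
    can = can_model(history, action)        # feasibility, i.e. preconditions
    pay = pay_model(history, action, goal)  # estimated long-term payoff
    return say_prob * can * pay

def select_next_action(candidates, can_model, pay_model, history, goal):
    # Greedy variant for illustration: pick the best-scoring candidate.
    # SayCanPay itself searches over sequences of actions (e.g. beam search)
    # using this kind of score as the heuristic.
    best_action, _ = max(
        candidates,  # iterable of (action, say_probability) pairs from the LLM
        key=lambda pair: saycanpay_score(pair[0], pair[1], can_model,
                                         pay_model, history, goal),
    )
    return best_action
```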
# 2 Related Work on Planning with LLMs | 2308.12682#7 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 7 | Different combinations of these approaches lead to a family of code-specialized Llama 2 models with three main variants that we release in four sizes (7B, 13B, 34B and 70B parameters):
• Code Llama: a foundational model for code generation tasks,
• Code Llama - Python: specialized for Python,
• Code Llama - Instruct: fine-tuned with human instructions and self-instruct code synthesis data.
An example of using Code Llama - Instruct is given in Figure 1. It showcases that the model interprets natural language to determine suitable options for a command-line program and provides an explanation of the solution. We provide further qualitative examples in Appendix L. We perform exhaustive evaluations of our models on major code generation benchmarks: HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021), and APPS (Hendrycks et al., 2021), as well as a multilingual version of HumanEval (MultiPL-E, Cassano et al., 2023), where our best models establish a new state of the art amongst open-source LLMs. The technical details of our training and fine-tuning procedures are provided in Section 2, followed by in-depth experiments and ablation studies, details of the safety/helpfulness evaluations and a discussion of related work.
2 | 2308.12950#7 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 7 | # 1 Introduction
Recently, Large Language Models (LLMs) (Brown et al., 2020; OpenAI, 2023; Anil et al., 2023; Gao et al., 2023; Qwen, 2023) have attracted wide attention due to their powerful capabilities in text generation and comprehension. These models can be further aligned with user intent through fine-tuning instructions, showcasing strong interactive capabilities and the potential to enhance productivity as intelligent assistants. However, native large language models only live in the pure-text world, lacking the ability to handle other common modalities (such as images, speech, and videos), resulting in great restrictions on their application scope. Motivated by this, a group of Large Vision Language Models (LVLMs) (Alayrac et al., 2022; Chen et al., 2022; Li et al., 2023c; Dai et al., 2023; Huang et al., 2023; Peng et al., 2023; Zhu et al., 2023; Liu et al., 2023; Ye et al., 2023b,a; Chen et al., 2023a; Li et al., 2023a; Zhang et al., 2023; Sun et al., 2023; OpenAI, 2023) have been developed to enhance large language models with the ability to perceive and understand visual signals. These large-scale vision-language models demonstrate promising potential in solving real-world vision-central problems. | 2308.12966#7 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 8 | Agents for Simulating Human Interactions A subsequent line of research focuses on crafting agents that emulate human social behaviors. (Park et al. 2022) fashioned a multi-agent town emulating authentic human activities, including orchestrating social parties. (Li et al. 2023) delved into an agent communication framework that facilitates varied social roles and simulates AI social patterns. Emphasizing the importance of social situational learning, (Krishna et al. 2022) developed an interactive agent capable of querying individuals online to assimilate visual knowledge. In the educational realm, (Markel et al. 2023) employed GPT and other LLMs to mimic students, thus offering tangible training avenues for educators. (Jiang et al. 2023) explored the simulation of consistent personality and gender variations using conditional language models. Cumulatively, these studies accentuate agents' capacities to assimilate or mimic human social interactions.
[Figure 1 fragment: personality-node fields "Description" and "Score", e.g. Big Five personality, Openness to Experience, score 16] | 2308.12503#8 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12682 | 8 | # 2 Related Work on Planning with LLMs
Model I/O Planner Domain Knowledge Affordances Heuristics Search Planning HSP (Bonet and Geffner 2001) LLM+P (Liu et al. 2023) Planning LM (Huang et al. 2022a) SayCan (Ahn et al. 2022) Grounded Decoding (Huang et al. 2023) Text2Motion (Lin et al. 2023) ProgPrompt (Singh et al. 2023) Plansformer (Pallagani et al. 2022) SayCanPay (Beam-Action) Symbolic Hybrid NL NL NL NL Symbolic Symbolic NL Symbolic Symbolic LLM LLM LLM LLM LLM LLM LLM â â â â â â â â â â â â â â â â â â Heuristic Heuristic Greedyâ Greedyâ Greedyâ Greedyâ Greedyâ Greedyâ Heuristic Offline Offline Offline Online Online Online Offline Offline Offline
Table 1: Table contrasts SayCanPay with existing works. I/O: input (goal/task, observation/state) / output (actions), NL: natural language. Here, Greedyâ suggests the algorithm greedily selects actions while (possibly) searching over tokens. | 2308.12682#8 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 8 | [Figure 2 pipeline diagram: Llama 2 → code training (500B tokens) → Code Llama foundation models (7B, 13B, 34B), with infilling code training, long-context fine-tuning, instruction fine-tuning (5B tokens) yielding Code Llama - Instruct, and Python code training (100B tokens) yielding Code Llama - Python]
Figure 2: The Code Llama specialization pipeline. The different stages of fine-tuning annotated with the number of tokens seen during training. Infilling-capable models are marked with the ⇄ symbol.
# 2 Code Llama: Specializing Llama 2 for code
# 2.1 The Code Llama models family | 2308.12950#8 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 8 | Nevertheless, despite the many works that have explored the limitations and potency of LVLMs, current open-source LVLMs still suffer from inadequate training and optimization and thus lag far behind proprietary models (Chen et al., 2022, 2023b; OpenAI, 2023), which hinders further exploration and application of LVLMs in the open-source community. What's more, as real-world visual scenarios are quite complicated, fine-grained visual understanding plays a crucial role in enabling LVLMs to assist people effectively and precisely. However, only a few attempts have been made in this direction (Peng et al., 2023; Chen et al., 2023a); the majority of open-source LVLMs still perceive the image in a coarse-grained manner and lack the ability to execute fine-grained perception such as object grounding or text reading.
2 | 2308.12966#8 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 9 | [Figure 1 fragment: personality-node fields "Description" and "Score", e.g. Big Five personality, Openness to Experience, score 16]
Figure 1: Tree structure of the Big Five Personality Scale. The root node has five sub-nodes, representing five coarse personalities. Their dimension values range from 5-25, and each coarse personality has five fine-grained leaf nodes, with dimension values ranging from 1-5. The larger the value, the more pronounced the characteristics of agents.
Method In this section, the tree-structured approach for personality assignment, detection and maintenance, the cognitive structure model enhanced with a skill library, and the construction process of CGMI will be introduced respectively. As shown in Figure 2, the process of reconstructing the "classroom teaching" scenario based on CGMI is displayed. | 2308.12503#9 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12519 | 9 | We formulate the decision-making process within LLMs as a Markov decision process (MDP). Given a human instruction Q, LLMs are tasked with generating a decision sequence $t = \{s_0, a_1, s_1, \cdots, s_N\}$ to accomplish Q. Here, $\{s_i\}_{i=0}^{N}$ represents the decision states, $s_0$ is the initial state, $s_N$ is the final state, which means that LLMs have obtained enough information to give the final response to humans, and $\{a_i\}_{i=1}^{T}$ denotes the actions taken by LLMs during the decision-making process. At each step in the MDP framework, LLMs decide to take an action $a_i \sim P(a_i|s_i)$ based on the current state and subsequently arrive at the next state $s_{i+1} \sim P(s_{i+1}|a_i, s_i)$. Thus, we denote a decision step as $d_{i+1} = (s_i, a_i, s_{i+1})$. To make sequential decisions toward accomplishing Q autonomously, LLMs need to identify the utility of each decision step and select the most valuable ones to further explore. In this procedure, judgment plays an important role in quantitatively assessing the value $v_{i+1} = V(d_{i+1})$ for each decision step | 2308.12519#9 | Rational Decision-Making Agent with Internalized Utility Judgment | Large language models (LLMs) have demonstrated remarkable advancements and
have attracted significant efforts to develop LLMs into agents capable of
executing intricate multi-step decision-making tasks beyond traditional NLP
applications. Existing approaches to LLM-based decision-making predominantly
build upon the manually-designed external performance metrics to guide the
decision-making process. However, reliance on the external performance metrics
as prior is problematic in real-world scenarios, where such prior may be
unavailable, flawed, or even erroneous. For genuine autonomous decision making,
it is imperative for the agent to develop its rationality from its posterior
experiences to judge decisions independently. Central to the development of
rationality is the construction of an internalized utility judgment, capable of
assigning numerical utilities to each decision. This paper proposes RadAgent
(Rational Decision-Making Agent), which fosters the development of its
rationality through an iterative framework involving Experience Exploration and
Utility Learning. Within this framework, Elo-based Utility Construction is
devised to assign Elo scores to individual decision steps to judge their
utilities via pairwise comparisons. Consequently, these Elo scores guide the
decision-making process to derive optimal outcomes. Experimental results on the
ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving
over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality
solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness
and efficiency. | http://arxiv.org/pdf/2308.12519 | Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun | cs.CL | Received 8,6,6,6 scores on ICLR 2024 | null | cs.CL | 20230824 | 20240117 | [
{
"id": "2305.14318"
},
{
"id": "2306.06624"
},
{
"id": "2305.17926"
},
{
"id": "2305.10601"
},
{
"id": "2307.16789"
},
{
"id": "2305.06849"
},
{
"id": "2304.08354"
},
{
"id": "2308.09687"
},
{
"id": "2306.11489"
},
{
"id": "2306.17563"
},
{
"id": "2305.14992"
},
{
"id": "2305.01937"
},
{
"id": "2308.10379"
},
{
"id": "2305.11554"
}
] |
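For concreteness, here is a tiny sketch of the decision-step bookkeeping formalized in the MDP chunk above. The types and the `value_fn` interface are hypothetical; in RadAgent, the value function V is later realized by Elo scores assigned to decision steps.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class DecisionStep:
    # d_{i+1} = (s_i, a_i, s_{i+1}): current state, action taken, next state.
    state: str
    action: str
    next_state: str

def pick_most_valuable(candidates: List[DecisionStep],
                       value_fn: Callable[[DecisionStep], float]) -> DecisionStep:
    # Select the candidate with the highest judged utility v_{i+1} = V(d_{i+1}).
    return max(candidates, key=value_fn)
```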
2308.12682 | 9 | Table 1 categorizes LLM planning works into two broad categories based on whether the inputs (goals, states) and output actions (I/O) are natural language (NL) or symbolic (PDDL, scripting language). The approaches in the first category (Huang et al. 2022a; Valmeekam et al. 2022) often fail to model action affordances and the state of the world, leading to the generation of infeasible plans (Valmeekam et al. 2022). To improve the groundedness, recent works have explored planning guided by learnable domain-specific models that score the actionsâ feasibility akin to preconditions (Huang et al. 2023; Lin et al. 2023). Notably, SayCan (Ahn et al. 2022) uses pretrained low-level skills to ground the LM-generated actions. Others have used online planning with environmental and human feedback (Huang et al. 2022b). A limitation of such models, however, is their short-sighted nature, as they focus greedily on the next feasible action without considering its long-term relevance to the goal. Moreover, the plans are generated in an online fashion, interleaving action generation and execution, thus simplifying state tracking. In contrast, | 2308.12682#9 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 9 | # 2 Code Llama: Specializing Llama 2 for code
# 2.1 The Code Llama models family
Code Llama. The Code Llama models constitute foundation models for code generation. They come in four model sizes: 7B, 13B, 34B and 70B parameters. The 7B, 13B and 70B models are trained using an infilling objective (Section 2.3), and are appropriate to be used in an IDE to complete code in the middle of a file, for example. The 34B model was trained without the infilling objective. All Code Llama models are initialized with Llama 2 model weights and trained on 500B tokens from a code-heavy dataset (see Section 2.2 for more details), except Code Llama 70B which was trained on 1T tokens. They are all fine-tuned to handle long contexts as detailed in Section 2.4. | 2308.12950#9 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
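The infilling objective mentioned in the Code Llama model description above can be illustrated as follows: a training document is split into a prefix, middle, and suffix, and the model learns to predict the middle given the other two. The sentinel strings and the splitting strategy in this sketch are illustrative assumptions, not the exact format used for Code Llama.

```python
import random

def make_infilling_example(document: str, rng: random.Random,
                           pre="<PRE>", suf="<SUF>", mid="<MID>"):
    # Split `document` at two random positions and emit a prefix-suffix-middle
    # training pair (hypothetical sentinel tokens shown).
    i, j = sorted(rng.sample(range(len(document)), 2))
    prefix, middle, suffix = document[:i], document[i:j], document[j:]
    model_input = f"{pre}{prefix}{suf}{suffix}{mid}"
    target = middle  # the model is trained to generate the missing middle span
    return model_input, target

example_input, example_target = make_infilling_example(
    "def add(a, b):\n    return a + b\n", random.Random(0))
```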
2308.12966 | 9 |
In this paper, we explore a way out and present the newest members of the open-sourced Qwen families: the Qwen-VL series. Qwen-VLs are a series of highly performant and versatile vision-language foundation models based on the Qwen-7B (Qwen, 2023) language model. We empower the LLM base with visual capacity by introducing a new visual receptor, including a language-aligned visual encoder and a position-aware adapter. The overall model architecture as well as the input-output interface are quite concise, and we elaborately design a 3-stage training pipeline to optimize the whole model upon a vast collection of image-text corpus.
Our pre-trained checkpoint, termed Qwen-VL, is capable of perceiving and understanding visual inputs, generating desired responses according to given prompts, and accomplishing various vision-language tasks such as image captioning, question answering, text-oriented question answering, and visual grounding. Qwen-VL-Chat is the instruction-tuned vision-language chatbot based on Qwen-VL. As shown in Fig. 2, Qwen-VL-Chat is able to interact with users and perceive the input images following the intention of users.
Specifically, the features of the Qwen-VL series models include: | 2308.12966#9 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 10 | Tree-Structured Persona Model Agent entities with unique personalities can not only complete specific tasks, but also enhance the authenticity of interactions (Qian et al. 2018; Mara Pudane and Radin 2017). In addition to setting specific personalities for agent entities, it is also necessary to set related styles according to the application scenario. For example, in teaching, teacher and students can have their own teaching and learning styles. However, if only a rough persona is set for agents, the personalized differences in its interactions are not obvious, and its stability will decrease as the complexity of roles, scenarios, and the length of the context increase (Jiang et al. 2023). | 2308.12503#10 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12519 | 10 | explore. In this procedure, judgment plays an important role in quantitatively assessing the value $v_{i+1} = V(d_{i+1})$ for each decision step $d_{i+1}$. Equipped with this value judgment, LLMs can select the decision steps with higher values, which hold the promise of yielding the most promising outcomes, ultimately leading to the derivation of the final decision sequence that fulfills the requirements of Q. | 2308.12519#10 | Rational Decision-Making Agent with Internalized Utility Judgment | Large language models (LLMs) have demonstrated remarkable advancements and
have attracted significant efforts to develop LLMs into agents capable of
executing intricate multi-step decision-making tasks beyond traditional NLP
applications. Existing approaches to LLM-based decision-making predominantly
build upon the manually-designed external performance metrics to guide the
decision-making process. However, reliance on the external performance metrics
as prior is problematic in real-world scenarios, where such prior may be
unavailable, flawed, or even erroneous. For genuine autonomous decision making,
it is imperative for the agent to develop its rationality from its posterior
experiences to judge decisions independently. Central to the development of
rationality is the construction of an internalized utility judgment, capable of
assigning numerical utilities to each decision. This paper proposes RadAgent
(Rational Decision-Making Agent), which fosters the development of its
rationality through an iterative framework involving Experience Exploration and
Utility Learning. Within this framework, Elo-based Utility Construction is
devised to assign Elo scores to individual decision steps to judge their
utilities via pairwise comparisons. Consequently, these Elo scores guide the
decision-making process to derive optimal outcomes. Experimental results on the
ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving
over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality
solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness
and efficiency. | http://arxiv.org/pdf/2308.12519 | Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun | cs.CL | Received 8,6,6,6 scores on ICLR 2024 | null | cs.CL | 20230824 | 20240117 | [
{
"id": "2305.14318"
},
{
"id": "2306.06624"
},
{
"id": "2305.17926"
},
{
"id": "2305.10601"
},
{
"id": "2307.16789"
},
{
"id": "2305.06849"
},
{
"id": "2304.08354"
},
{
"id": "2308.09687"
},
{
"id": "2306.11489"
},
{
"id": "2306.17563"
},
{
"id": "2305.14992"
},
{
"id": "2305.01937"
},
{
"id": "2308.10379"
},
{
"id": "2305.11554"
}
] |
2308.12950 | 10 | Code Llama - Python. The Code Llama - Python models are specialized for Python code generation and come in sizes of 7B, 13B, 34B and 70B parameters. They are designed to study the performance of models tailored to a single programming language, compared to general-purpose code generation models. Initialized from Llama 2 models and trained on 500B tokens from the Code Llama dataset, Code Llama - Python models are further specialized on 100B tokens using a Python-heavy dataset (Section 2.2). Code Llama - Python with 7B, 13B and 34B parameters are trained without infilling and subsequently fine-tuned to handle long contexts (Section 2.4).
Code Llama - Instruct. For the 7B, 13B and 34B sizes, Code Llama - Instruct models are based on Code Llama and fine-tuned with an additional approx. 5B tokens to better follow human instructions. More details on Code Llama - Instruct can be found in Section 2.5. | 2308.12950#10 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 10 | Specifically, the features of the Qwen-VL series models include:
⢠Leading performance: Qwen-VLs achieve top-tier accuracy on a vast of vision-centric understanding benchmarks compared to counterparts with similar scales. Besides, Qwen-VLâs stuning performance covers not only the conventional benchmarks e.g., captioning, question-answering, grounding), but also some recently introduced dialogue benchmarks.
⢠Multi-lingual: Similar to Qwen-LM, Qwen-VLs are trained upon multilingual image-text data with a considerable amount of corpus being in English and Chinese. In this way, Qwen-VLs naturally support English, Chinese, and multilingual instructions.
⢠Multi-image: In the training phase, we allow arbitrary interleaved image-text data as Qwen-VLâs inputs. This feature allows our Qwen-Chat-VL to compare, understand, and analyze the context when multiple images are given.
⢠Fine-grained visual understanding: Thanks to the higher-resolution input size and fine-grained corpus we used in training, Qwen-VLs exhibit highly competitive fine-grained visual understanding ability. Compared to existing vision-language generalists, our Qwen-VLs possess much better grounding, text-reading, text-oriented question answering, and fine-grained dialog performance.
# 2 Methodology
# 2.1 Model Architecture | 2308.12966#10 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 11 | this work proposes a tree-structured persona model for personality assignment, detection, and maintenance. We referred to the Big Five Personality Scale (John, Srivastava et al. 1999), the teaching style scale (Grigorenko and Sternberg 1993), and the learning style scale (Soloman and Felder 2005), and designed a tree structure to help agents remember and set different personas. Taking personality setting as an example, as shown in Figure 1, we built a personality scale T = {N1, N2, ..., Nn} based on the Big Five Personality Scale, where n = 26. N1 is the root node, and N2 to Nn are child nodes. Each node Ni includes a description Di and a score Si. As shown in Algorithm 1, we use depth-first traversal to set personality traits for the intelligent entity A.
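A minimal sketch of the tree-structured persona assignment described above, assuming a simple Node class holding a description Di and a score Si; how the (description, score) pairs are actually written into the agent's prompt or memory is not specified here, so they are just collected into a list.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Node:
    description: str                              # D_i: textual description of the trait
    score: float                                  # S_i: value assigned on the scale item
    children: List["Node"] = field(default_factory=list)

def assign_personality(root: Node) -> List[Tuple[str, float]]:
    """Depth-first traversal of the personality scale T (Algorithm 1 style):
    every visited node contributes its (description, score) pair to the persona memory."""
    memory, stack = [], [root]
    while stack:
        node = stack.pop()
        memory.append((node.description, node.score))
        stack.extend(reversed(node.children))     # visit children before siblings (DFS)
    return memory
```

With the Big Five scale of the paper (n = 26 nodes), this yields one persona entry per node, with coarse-grained traits visited before their fine-grained children.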
During the detection and maintenance process, this paper adopts an efficient random testing method, with the following specific steps: (1) Randomly select m coarse-grained | 2308.12503#11 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12519 | 11 | # 4 METHODOLOGY
Our RADAGENT aims to find the decision sequence with the highest utility to accomplish complex instructions autonomously. It contains two principal phases to internalize the utility judgment:
• Experience Exploration: The agent takes actions sequentially to form a decision sequence toward a feasible solution.
• Utility Learning: The agent makes judgments among decision sequences to assess the utility (i.e., Elo scores) of existing decision steps.
These two phases work in an iterative fashion, reinforcing one another's outcomes (see Figure 1). In the experience exploration phase, the agent explores more potential decision sequences, which helps judge the utility of each decision step. In the utility learning phase, the Elo score of each decision step serves as a dynamic guide, steering subsequent experience exploration toward more promising and superior solutions. By iteratively cycling through these intertwined phases, the agent progressively evolves toward an optimal decision sequence with the highest utility to address instructions.
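A compact sketch of how the two phases can alternate; explore_one_sequence and update_elo are passed in as stand-ins for the exploration and pairwise Elo-update procedures detailed in the following subsections, so this only illustrates the iteration structure, not the paper's exact algorithm.

```python
import random

def radagent_loop(instruction, explore_one_sequence, update_elo, n_iterations=20):
    """Iteratively explore decision sequences and learn their utilities (Elo scores)."""
    elo = {}              # decision step -> Elo score (the internalized utility)
    sequences = []        # decision sequences explored so far
    for _ in range(n_iterations):
        # Experience Exploration: Elo scores steer which branch to extend next.
        new_seq = explore_one_sequence(instruction, elo)
        # Utility Learning: pairwise comparison against a previously explored sequence.
        if sequences:
            update_elo(elo, new_seq, random.choice(sequences))
        sequences.append(new_seq)
    # the solution is the sequence whose final decision step has the highest utility
    return max(sequences, key=lambda seq: elo.get(seq[-1], 0.0))
```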
4.1 EXPERIENCE EXPLORATION | 2308.12519#11 | Rational Decision-Making Agent with Internalized Utility Judgment | Large language models (LLMs) have demonstrated remarkable advancements and
have attracted significant efforts to develop LLMs into agents capable of
executing intricate multi-step decision-making tasks beyond traditional NLP
applications. Existing approaches to LLM-based decision-making predominantly
build upon the manually-designed external performance metrics to guide the
decision-making process. However, reliance on the external performance metrics
as prior is problematic in real-world scenarios, where such prior may be
unavailable, flawed, or even erroneous. For genuine autonomous decision making,
it is imperative for the agent to develop its rationality from its posterior
experiences to judge decisions independently. Central to the development of
rationality is the construction of an internalized utility judgment, capable of
assigning numerical utilities to each decision. This paper proposes RadAgent
(Rational Decision-Making Agent), which fosters the development of its
rationality through an iterative framework involving Experience Exploration and
Utility Learning. Within this framework, Elo-based Utility Construction is
devised to assign Elo scores to individual decision steps to judge their
utilities via pairwise comparisons. Consequently, these Elo scores guide the
decision-making process to derive optimal outcomes. Experimental results on the
ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving
over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality
solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness
and efficiency. | http://arxiv.org/pdf/2308.12519 | Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun | cs.CL | Received 8,6,6,6 scores on ICLR 2024 | null | cs.CL | 20230824 | 20240117 | [
{
"id": "2305.14318"
},
{
"id": "2306.06624"
},
{
"id": "2305.17926"
},
{
"id": "2305.10601"
},
{
"id": "2307.16789"
},
{
"id": "2305.06849"
},
{
"id": "2304.08354"
},
{
"id": "2308.09687"
},
{
"id": "2306.11489"
},
{
"id": "2306.17563"
},
{
"id": "2305.14992"
},
{
"id": "2305.01937"
},
{
"id": "2308.10379"
},
{
"id": "2305.11554"
}
] |
2308.12682 | 11 | [Figure 2 graphic: four panels, (a) Greedy-Token, (b) Beam-Token, (c) Greedy-Action, and (d) Beam-Action, given a goal g and history h0; the legend marks best, discarded, and next-best tokens/actions.]
Figure 2: The figure outlines decoding strategies: Greedy-Token, Greedy-Action, and Beam-Action. Greedy-Token greedily selects the next best token by its probability. Greedy-Action (which is a beam search over tokens) greedily selects the next best action based on a specific decoding score. Beam-Action uses a beam search over actions, maintaining k beams and selecting the best sequence as the plan. Here, nodes represent either tokens wt or actions at. The best plan is given by the sequence of best actions and is shown in red; the second-best node is in orange, discarded ones in black. Here, for Beam-Action, m = 3 and k = 2. | 2308.12682#11 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
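The Beam-Action strategy summarized in the SayCanPay excerpt above (Figure 2) is essentially a beam search over whole actions rather than tokens. The sketch below assumes generic propose_actions and decoding_score callables in place of the paper's LLM proposal and Say/Can/Pay scoring models, and a placeholder "done" terminal action.

```python
def beam_action_search(goal, history, propose_actions, decoding_score,
                       beam_width=2, proposals_per_step=3, max_steps=20):
    """Keep `beam_width` partial plans; expand each with proposed next actions,
    score the extended plans, and retain the best ones (beam_width=1 gives Greedy-Action)."""
    beams = [([], 0.0)]                                   # (action sequence, cumulative score)
    for _ in range(max_steps):
        candidates = []
        for plan, score in beams:
            for action in propose_actions(goal, history, plan)[:proposals_per_step]:
                new_plan = plan + [action]
                candidates.append((new_plan, score + decoding_score(goal, history, new_plan)))
        if not candidates:                                # no proposals left to expand
            break
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_width]
        if all(plan and plan[-1] == "done" for plan, _ in beams):   # "done" is a placeholder terminal action
            break
    return max(beams, key=lambda c: c[1])[0]              # best action sequence as the plan
```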
2308.12950 | 11 | Code Llama 70B. Code Llama 70B was trained months after the Code Llama 7B, 13B and 34B models. It was trained using the same data as the smaller versions of Code Llama, and using roughly the same methods. Code Llama 70B was trained on twice the number of tokens: 1 trillion instead of 500 billion. It was trained with FIM, which was an often-requested capability for the 34B model. Only the base Code Llama 70B was trained with LCFT. See Appendix B for the Code Llama 70B specialization pipeline. Code Llama - Instruct 70B was trained from Code Llama - Python 70B, which outperforms Code Llama 70B on average on the languages from MultiPL-E, including Python.
# 2.2 Dataset | 2308.12950#11 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 11 | # 2 Methodology
# 2.1 Model Architecture
The overall network architecture of Qwen-VL consists of three components and the details of model parameters are shown in Table 1:
Large Language Model: Qwen-VL adopts a large language model as its foundation component. The model is initialized with pre-trained weights from Qwen-7B (Qwen, 2023).
Visual Encoder: The visual encoder of Qwen-VL uses the Vision Transformer (ViT) (Dosovitskiy et al., 2021) architecture, initialized with pre-trained weights from Openclip's ViT-bigG (Ilharco et al., 2021). During both training and inference, input images are resized to a specific resolution. The visual encoder processes images by splitting them into patches with a stride of 14, generating a set of image features. | 2308.12966#11 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 12 | During the detection and maintenance process, this paper adopts an efficient random testing method, with the following specific steps: (1) Randomly select m coarse-grained
[Figure 2 graphic: Step 1 Personalized Instructional Design (learning situation analysis, instructional design, lesson planning, intention analysis); Step 2 Customizable Role Configuration (teacher agent and student agents such as YingZheng, Ryan, and Emily, configured by personality, cognitive level, and learning style, with a supervised agent that supervises the teaching process and checks agent consistency); Step 3 Teaching Implementation (teaching activities are dynamically adjusted according to the skill library and student feedback).] | 2308.12503#12 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12519 | 12 | 4.1 EXPERIENCE EXPLORATION
In RADAGENT, each experience exploration benefits from the previous exploration history based on Elo-based Utility Construction (§ 4.2). When exploring a new decision sequence, LLMs will select a decision step with a higher Elo score to explore further. Specifically, in RADAGENT, each decision step is assigned an Elo score explicitly. A decision step with a higher Elo score is more likely to accomplish the instruction, and thus Elo scores are used to guide the decision exploration process. Given an intermediate decision step d, its subsequent decision steps are denoted as {d1, d2, · · · , dn}. Given their learned Elo scores {v1, v2, · · · , vn}, the probability of choosing decision step di to explore is as follows:
P(d_i) = \frac{\exp(v_i/\tau)}{\sum_{j=1}^{n} \exp(v_j/\tau)}, \quad d_i \in \{d_1, d_2, \cdots, d_n\} \quad (4)
where τ refers to the temperature. Note that exploring only the known decisions may lead to a locally optimal solution. Therefore, we define a rejection decision step d̂ with an initial Elo score v̂ to
Figure 1: Illustration of the iterative Experience Exploration and Utility Learning phase to derive the final optimal solution. | 2308.12519#12 | Rational Decision-Making Agent with Internalized Utility Judgment | Large language models (LLMs) have demonstrated remarkable advancements and
have attracted significant efforts to develop LLMs into agents capable of
executing intricate multi-step decision-making tasks beyond traditional NLP
applications. Existing approaches to LLM-based decision-making predominantly
build upon the manually-designed external performance metrics to guide the
decision-making process. However, reliance on the external performance metrics
as prior is problematic in real-world scenarios, where such prior may be
unavailable, flawed, or even erroneous. For genuine autonomous decision making,
it is imperative for the agent to develop its rationality from its posterior
experiences to judge decisions independently. Central to the development of
rationality is the construction of an internalized utility judgment, capable of
assigning numerical utilities to each decision. This paper proposes RadAgent
(Rational Decision-Making Agent), which fosters the development of its
rationality through an iterative framework involving Experience Exploration and
Utility Learning. Within this framework, Elo-based Utility Construction is
devised to assign Elo scores to individual decision steps to judge their
utilities via pairwise comparisons. Consequently, these Elo scores guide the
decision-making process to derive optimal outcomes. Experimental results on the
ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving
over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality
solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness
and efficiency. | http://arxiv.org/pdf/2308.12519 | Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun | cs.CL | Received 8,6,6,6 scores on ICLR 2024 | null | cs.CL | 20230824 | 20240117 | [
{
"id": "2305.14318"
},
{
"id": "2306.06624"
},
{
"id": "2305.17926"
},
{
"id": "2305.10601"
},
{
"id": "2307.16789"
},
{
"id": "2305.06849"
},
{
"id": "2304.08354"
},
{
"id": "2308.09687"
},
{
"id": "2306.11489"
},
{
"id": "2306.17563"
},
{
"id": "2305.14992"
},
{
"id": "2305.01937"
},
{
"id": "2308.10379"
},
{
"id": "2305.11554"
}
] |
2308.12682 | 12 | Another line of work employs LLMs to create offline symbolic plans, leveraging LLMs' training on open-source codebases, where actions appear as function calls (Singh et al. 2023; Liang et al. 2023). The feasibility of plans is ensured through assertion checks (assert ⟨preconditions⟩) that may trigger recovery actions. However, it relies solely on the LLM's domain knowledge, which is limited to its training data and may not be aligned with the agent's current environment (e.g., espresso machine operations vary widely). Conversely, SayCanPay uses additional models trained with domain-specific knowledge collected from the current environment. There are also efforts to fine-tune LLMs like Code-T5 (Wang et al. 2021) to generate plans in PDDL (Pallagani et al. 2022). This requires a significant amount of training data (given LLMs' minimal PDDL exposure) which is not entirely justified by their performance. | 2308.12682#12 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 12 | # 2.2 Dataset
We train Code Llama 7B, 13B and 34B on 500B tokens, and Code Llama 70B on 1T tokens during the initial phase, starting from the 7B, 13B, 34B, and 70B versions of Llama 2. As shown in Table 1, Code Llama is trained predominantly on a near-deduplicated dataset of publicly available code. We also source 8% of our samples data from natural language datasets related to code. This dataset contains many discussions about code and code snippets included in natural language questions or answers. To help the model retain natural language understanding skills, we also sample a small proportion of our batches from a natural language dataset. Data is tokenized via byte pair encoding (BPE, Sennrich et al. (2016)), employing the same tokenizer as Llama and Llama 2. Preliminary experiments suggested that adding batches sampled from our natural language dataset improves the performance of our models on MBPP.
# 2.3 Infilling
Code infilling is the task of predicting the missing part of a program given a surrounding context. Applications include code completion at the cursor's position in code IDEs, type inference and generation of in-code documentation (e.g., docstrings). | 2308.12950#12 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 12 | Position-aware Vision-Language Adapter: To alleviate the efficiency issues arising from long image feature sequences, Qwen-VL introduces a vision-language adapter that compresses the image features. This adapter comprises a single-layer cross-attention module initialized randomly. The module uses a group of trainable vectors (Embeddings) as query vectors and the image features from the visual encoder as keys for cross-attention operations. This mechanism compresses the visual feature sequence to a fixed length of 256. The ablation about the number of queries is shown in Appendix E.2. Additionally, considering the significance
of positional information for fine-grained image comprehension, 2D absolute positional encodings are incorporated into the cross-attention mechanism's query-key pairs to mitigate the potential loss of positional details during compression. The compressed image feature sequence of length 256 is subsequently fed into the large language model.
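A rough PyTorch sketch of such a position-aware resampler: a fixed set of learnable query embeddings cross-attends to the ViT patch features and compresses them to 256 vectors. The dimensions, head count, and the way the 2D absolute positional encodings enter the query-key pairs are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class PositionAwareAdapter(nn.Module):
    def __init__(self, vis_dim=1664, out_dim=4096, num_queries=256, num_heads=16):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, out_dim))   # learnable query embeddings
        self.kv_proj = nn.Linear(vis_dim, out_dim)                       # project ViT patch features
        self.cross_attn = nn.MultiheadAttention(out_dim, num_heads, batch_first=True)

    def forward(self, image_feats, query_pos, key_pos):
        # image_feats: (B, N_patches, vis_dim); query_pos/key_pos: 2D absolute positional encodings
        feats = self.kv_proj(image_feats)
        q = self.queries.unsqueeze(0).expand(image_feats.size(0), -1, -1) + query_pos
        k = feats + key_pos                      # positions are added to queries and keys only
        compressed, _ = self.cross_attn(q, k, feats)
        return compressed                        # (B, 256, out_dim), fed into the LLM
```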
# Table 1: Details of Qwen-VL model parameters.
Vision Encoder: 1.9B, VL Adapter: 0.08B, LLM: 7.7B, Total: 9.6B | 2308.12966#12 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 13 | Figure 2: Based on CGMI, a classroom teaching scenario is constructed. This scenario includes 3 general intelligent agents (teaching assistant agent, teaching process supervisor agent, consistency checker agent) and 6 role agents (teacher Mrs. Smith, student Ying Zheng, student Emily, student John, student Ryan and student Samantha). After the user inputs the course topic, the virtual classroom teaching scenario launches. The teaching assistant agent generates corresponding teaching plans and distributes them to Mrs. Smith and the teaching process supervisor agent. Mrs. Smith divides the teaching process into stages according to the plan. The teaching process supervisor agent monitors whether the current stage has ended and decides whether to enter the next stage. Before each role agent's statement, the consistency checker agent detects and maintains consistency between its personality and statement content. When Mrs. Smith asks the class questions, the consistency checker agent judges each student's willingness to answer based on personality and classroom status, simulating real hand-raising.
Algorithm 1: The process of endowing the Big Five personalities through depth-first traversal (DFS).
Input: Big Five Scale T, Agent A. Output: A = {T}
1: Define stack
2: Push root node of T into stack
3: while stack is not empty do
4:   Ni = stack.pop()
5:   A gets (Ni.Di, Ni.Si)
6:   if Ni has child nodes then
7:     push the child nodes of Ni into the stack
8:   end if
9: end while
10: return A = {T} | 2308.12503#13 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12519 | 13 | Figure 1: Illustration of the iterative Experience Exploration and Utility Learning phase to derive the final optimal solution.
represent that "the agent decides to explore a new decision". We add this rejection decision step into the subsequent decision steps as {d1, d2, · · · , dn, d̂} when selecting:
P(d_i) = \frac{\exp(v_i/\tau)}{\sum_{j} \exp(v_j/\tau)}, \quad d_i \in \{d_1, d_2, \cdots, d_n, \hat{d}\} \quad (5)
The complete experience exploration process begins from the initial state s0 and chooses the subsequent decision steps iteratively based on Equation 5 in a top-down manner. When it chooses the rejection decision step d̂, the agent will generate a new decision sequence starting from the current intermediate step d. In the iterative experience exploration process, those potential decision steps will be explored thoroughly, until finding the optimal solution.
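A small sketch of the selection rule in Equations (4)-(5): a temperature-scaled softmax over the children's Elo scores, with the rejection step d̂ appended so the agent can branch into an unexplored decision. The temperature and the initial Elo value for d̂ are assumptions.

```python
import math
import random

def choose_next_step(children, tau=100.0, rejection_elo=1000.0):
    """children: list of (decision_step, elo_score) pairs already explored under the current node.
    Returns one of the known steps, or None to signal 'explore a new decision' (the d-hat option)."""
    options = list(children) + [(None, rejection_elo)]     # append the rejection step d-hat
    weights = [math.exp(v / tau) for _, v in options]      # softmax numerators of Eq. (4)/(5)
    r = random.uniform(0.0, sum(weights))
    for (step, _), w in zip(options, weights):
        r -= w
        if r <= 0:
            return step
    return None                                            # numerical fallback: explore anew
```

Returning None corresponds to generating a fresh decision sequence from the current intermediate step rather than descending into an already-explored child.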
4.2 UTILITY LEARNING
As external performance measures may be unavailable, flawed, or even erroneous, the agent should resort to its internalized utility judgment to solve diverse tasks. To this end, we design an Elo-based Utility Construction, equipping the agent with the Elo rating system to provide a numerical utility to each decision step to guide the decision-making process. | 2308.12519#13 | Rational Decision-Making Agent with Internalized Utility Judgment | Large language models (LLMs) have demonstrated remarkable advancements and
have attracted significant efforts to develop LLMs into agents capable of
executing intricate multi-step decision-making tasks beyond traditional NLP
applications. Existing approaches to LLM-based decision-making predominantly
build upon the manually-designed external performance metrics to guide the
decision-making process. However, reliance on the external performance metrics
as prior is problematic in real-world scenarios, where such prior may be
unavailable, flawed, or even erroneous. For genuine autonomous decision making,
it is imperative for the agent to develop its rationality from its posterior
experiences to judge decisions independently. Central to the development of
rationality is the construction of an internalized utility judgment, capable of
assigning numerical utilities to each decision. This paper proposes RadAgent
(Rational Decision-Making Agent), which fosters the development of its
rationality through an iterative framework involving Experience Exploration and
Utility Learning. Within this framework, Elo-based Utility Construction is
devised to assign Elo scores to individual decision steps to judge their
utilities via pairwise comparisons. Consequently, these Elo scores guide the
decision-making process to derive optimal outcomes. Experimental results on the
ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving
over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality
solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness
and efficiency. | http://arxiv.org/pdf/2308.12519 | Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun | cs.CL | Received 8,6,6,6 scores on ICLR 2024 | null | cs.CL | 20230824 | 20240117 | [
{
"id": "2305.14318"
},
{
"id": "2306.06624"
},
{
"id": "2305.17926"
},
{
"id": "2305.10601"
},
{
"id": "2307.16789"
},
{
"id": "2305.06849"
},
{
"id": "2304.08354"
},
{
"id": "2308.09687"
},
{
"id": "2306.11489"
},
{
"id": "2306.17563"
},
{
"id": "2305.14992"
},
{
"id": "2305.01937"
},
{
"id": "2308.10379"
},
{
"id": "2305.11554"
}
] |
2308.12682 | 13 | Yet another exciting line of work explores hybrid I/O systems like LLM+P (Liu et al. 2023) wherein, given a PDDL domain file (with a predefined action model), the LLM maps the NL inputs (task description, input observation) to a PDDL problem file. A symbolic planner then generates the plan. However, its effectiveness is limited by the closed-world constraint of the domain file, the necessity for fully observable states, and the LLM's restricted capability in translating NL to PDDL (Xie et al. 2023). | 2308.12682#13 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 13 | We train infilling models following the concept of causal masking (Aghajanyan et al., 2022; Fried et al., 2023), where parts of a training sequence are moved to the end, and the reordered sequence is predicted autoregressively. We train the general-purpose 7B, 13B and 70B models with an infilling objective, following the recommendations of Bavarian et al. (2022). More precisely, we split training documents at the character level into a prefix, a middle part and a suffix with the splitting locations sampled independently from a uniform distribution over the document length. We apply this transformation with a probability of 0.9 and to documents that are not cut across multiple model contexts only. We randomly format half of the splits in the prefix-suffix-middle (PSM) format and the other half in the compatible suffix-prefix-middle (SPM) format described in Bavarian et al. (2022, App. D). We extend Llama 2's tokenizer with four special tokens that mark the beginning of the prefix, the middle part or the suffix, and the end of the infilling span. To limit the distribution shift between autoregressive and infilling training, we suppress the implicit leading space that SentencePiece tokenizers add upon encoding the middle part and the suffix (Kudo & Richardson, 2018). In SPM format, we concatenate the prefix and the middle part before encoding to tokens. Note that our model doesn't encounter split subtokens in the SPM format while it does in the PSM format. | 2308.12950#13 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
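A toy sketch of the character-level prefix/middle/suffix split and PSM reordering described in the Code Llama excerpt above; the sentinel strings are placeholders rather than the tokenizer's actual special tokens, and tokenization details are omitted.

```python
import random

def to_psm_example(document: str,
                   pre="<PRE>", suf="<SUF>", mid="<MID>", eot="<EOT>") -> str:
    """Split a document at two uniformly sampled character positions and emit it in
    prefix-suffix-middle (PSM) order, so the model learns to generate the middle part."""
    i, j = sorted(random.randrange(len(document) + 1) for _ in range(2))
    prefix, middle, suffix = document[:i], document[i:j], document[j:]
    return f"{pre}{prefix}{suf}{suffix}{mid}{middle}{eot}"
```

The SPM variant presents the suffix before the prefix; as noted above, the prefix and middle are concatenated before tokenization in that case.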
2308.12966 | 13 | # Table 1: Details of Qwen-VL model parameters.
Vision Encoder: 1.9B, VL Adapter: 0.08B, LLM: 7.7B, Total: 9.6B
[Figure 3 graphic: Stage 1 Pretraining uses low-resolution image-text pairs; Stage 2 Multi-task Pretraining uses high-resolution multi-task and interleaved VL data; Stage 3 Supervised Finetuning uses chat interleaved VL data; each stage feeds ViT features through a cross-attention module with learnable query embeddings.]
Figure 3: The training pipeline of the Qwen-VL series.
# 2.2 Inputs and Outputs
Image Input: Images are processed through the visual encoder and adapter, yielding fixed-length sequences of image features. To differentiate between image feature input and text feature input, two special tokens (<img> and </img>) are appended to the beginning and end of the image feature sequence respectively, signifying the start and end of image content. | 2308.12966#13 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 14 | A gets (Ni.Di, Ni.Si)
if Ni has child nodes then push the child nodes of Ni into the stack
end if
personalities for testing; (2) If the test is correct, select m fine-grained personalities under these m coarse-grained personalities for further testing. If the fine-grained test is also correct, it is believed that the agent's personality memory is complete; (3) If an error occurs at any stage, the real values of all selected personalities will be informed to the agent to restore its personality memory.
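A schematic sketch of the coarse-to-fine random test in steps (1)-(3), shown before the discussion that follows; ask_agent stands in for an LLM query about a trait, and the scale layout (coarse traits with their true scores and fine-grained sub-traits) is an assumed data structure.

```python
import random

def check_personality(ask_agent, scale, m=3):
    """scale: {coarse_trait: (true_score, {fine_trait: true_score})}.
    Returns [] if the agent's persona memory looks complete, otherwise the
    (trait, true_score) pairs that should be re-told to the agent to restore it."""
    to_restore = []
    for coarse in random.sample(list(scale), k=min(m, len(scale))):
        coarse_score, fine = scale[coarse]
        ok = ask_agent(coarse) == coarse_score
        if ok:  # only drill down when the coarse-grained answer is correct
            picked = random.sample(list(fine), k=min(m, len(fine)))
            ok = all(ask_agent(f) == fine[f] for f in picked)
        if not ok:  # step (3): restore the real values of all selected personalities
            to_restore.append((coarse, coarse_score))
            to_restore.extend(fine.items())
    return to_restore
```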
This random testing method is not only efficient and comprehensive but also saves context window resources. Multi-level testing can avoid the illusion of an unchanged coarse-grained personality due to changes in fine-grained personality. This method can also be applied to other related character scales, as detailed in the Appendix. | 2308.12503#14 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12519 | 14 | The utility learning process (i.e., Elo score estimation process) is conducted in a bottom-up manner. It first adjusts the Elo scores of the final decision steps of each decision sequence via pairwise comparison and then updates the Elo scores of the intermediate decision steps gradually. Once a new decision sequence is generated in the experience exploration phase, the agent will self-judge the Elo scores of existing decision steps via pairwise comparison. Given the newly generated decision sequence tn, we first assign all decision steps of tn with an initial Elo score. Then, we randomly select a decision sequence ti from existing decision sequences T = {t1, t2, · · · , tn-1} and use LLMs to compare tn with ti to judge which one has the superior performance. Since the LLM-based comparison is sensitive to the candidate order (Qin et al., 2023d; Chiang & Lee, 2023; Wang et al., 2023), we conduct comparisons twice with different orders.
R_{t_n > t_i} = \begin{cases} 1, & \text{if } t_n \text{ wins twice} \\ 0, & \text{if } t_i \text{ wins twice} \\ 0.5, & \text{otherwise} \end{cases} \quad (6)
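A sketch of the order-swapped comparison of Equation (6) together with a textbook Elo update; the paper's Equation 3 (not shown in this excerpt) defines the exact update rule, so the K-factor form below is an assumption, and judge stands in for the LLM comparison call.

```python
def pairwise_result(judge, seq_new, seq_old):
    """judge(a, b) -> True if sequence a is judged better; run twice with swapped order."""
    first = judge(seq_new, seq_old)
    second = not judge(seq_old, seq_new)
    if first and second:
        return 1.0          # t_n wins twice
    if not first and not second:
        return 0.0          # t_i wins twice
    return 0.5              # the two orderings disagree

def elo_update(v_new, v_old, result, k=32.0):
    """Standard Elo update (assumed form) applied to the final decision steps of the two sequences."""
    expected = 1.0 / (1.0 + 10.0 ** ((v_old - v_new) / 400.0))
    delta = k * (result - expected)
    return v_new + delta, v_old - delta
```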
Given the comparison result, we update the Elo scores of the final decision steps of tn and ti based on Equation 3. Next, we calculate the Elo scores of intermediate decision steps based on their
| 2308.12519#14 | Rational Decision-Making Agent with Internalized Utility Judgment | Large language models (LLMs) have demonstrated remarkable advancements and
have attracted significant efforts to develop LLMs into agents capable of
executing intricate multi-step decision-making tasks beyond traditional NLP
applications. Existing approaches to LLM-based decision-making predominantly
build upon the manually-designed external performance metrics to guide the
decision-making process. However, reliance on the external performance metrics
as prior is problematic in real-world scenarios, where such prior may be
unavailable, flawed, or even erroneous. For genuine autonomous decision making,
it is imperative for the agent to develop its rationality from its posterior
experiences to judge decisions independently. Central to the development of
rationality is the construction of an internalized utility judgment, capable of
assigning numerical utilities to each decision. This paper proposes RadAgent
(Rational Decision-Making Agent), which fosters the development of its
rationality through an iterative framework involving Experience Exploration and
Utility Learning. Within this framework, Elo-based Utility Construction is
devised to assign Elo scores to individual decision steps to judge their
utilities via pairwise comparisons. Consequently, these Elo scores guide the
decision-making process to derive optimal outcomes. Experimental results on the
ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving
over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality
solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness
and efficiency. | http://arxiv.org/pdf/2308.12519 | Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun | cs.CL | Received 8,6,6,6 scores on ICLR 2024 | null | cs.CL | 20230824 | 20240117 | [
{
"id": "2305.14318"
},
{
"id": "2306.06624"
},
{
"id": "2305.17926"
},
{
"id": "2305.10601"
},
{
"id": "2307.16789"
},
{
"id": "2305.06849"
},
{
"id": "2304.08354"
},
{
"id": "2308.09687"
},
{
"id": "2306.11489"
},
{
"id": "2306.17563"
},
{
"id": "2305.14992"
},
{
"id": "2305.01937"
},
{
"id": "2308.10379"
},
{
"id": "2305.11554"
}
] |
2308.12682 | 14 | # 3 Preliminaries
Planning Framework. We formulate our planning problem, based on approximate planning (Golowich, Moitra, and Rohatgi 2022), as a finite-horizon Partially Observable Markov Decision Process (POMDP) given by the tuple ⟨S, SG, b0, A, O, R, T⟩. Here, S is the state space, SG ⊆ S is a set of goal states, b0 is the initial belief state, A is the set of actions, O is a set of observations retrieved from states via an observation function O, R : O → R is a known reward function, T : S × A → ΔS is a known stochastic transition function, and ΔS is a distribution over states. Belief states represent the agent's knowledge of the environment at any point, given as b ∈ ΔS. Additionally, let Ht := (A × O)^(t-1) denote the set of histories at step t, namely the set of action/observation sequences (o0, a1, o1, . . . , at-1, ot-1) or (a1:t-1, o0:t-1) the agent has access to before selecting action | 2308.12682#14 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 14 | the middle part or the suffix, and the end of the infilling span. To limit the distribution shift between autoregressive and infilling training, we suppress the implicit leading space that SentencePiece tokenizers add upon encoding the middle part and the suffix (Kudo & Richardson, 2018). In SPM format, we concatenate the prefix and the middle part before encoding to tokens. Note that our model doesn't encounter split subtokens in the SPM format while it does in the PSM format. | 2308.12950#14 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 14 | Bounding Box Input and Output: To enhance the model's capacity for fine-grained visual understanding and grounding, Qwen-VL's training involves data in the form of region descriptions, questions, and detections. Differing from conventional tasks involving image-text descriptions or questions, this task necessitates the model's accurate understanding and generation of region descriptions in a designated format. For any given bounding box, a normalization process is applied (within the range [0, 1000)) and transformed into a specified string format: "(Xtopleft, Ytopleft), (Xbottomright, Ybottomright)". The string is tokenized as text and does not require an additional positional vocabulary. To distinguish between detection strings and regular text strings, two special tokens (<box> and </box>) are added at the beginning and end of the bounding box string. Additionally, to appropriately associate bounding boxes with their corresponding descriptive words or sentences, another set of special tokens (<ref> and </ref>) is introduced, marking the content referred to by the bounding box.
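A small helper illustrating the normalization and string format described above; the exact rounding and spacing conventions are assumptions.

```python
def box_to_string(box, img_w, img_h, ref_text=None):
    """box = (x0, y0, x1, y1) in pixels; coordinates are scaled into the range [0, 1000)."""
    x0, y0, x1, y1 = box
    nx0, ny0 = int(x0 * 1000 / img_w), int(y0 * 1000 / img_h)
    nx1, ny1 = int(x1 * 1000 / img_w), int(y1 * 1000 / img_h)
    s = f"<box>({nx0},{ny0}),({nx1},{ny1})</box>"
    if ref_text is not None:
        s = f"<ref>{ref_text}</ref>" + s      # tie the box to the phrase it grounds
    return s

# box_to_string((120, 40, 680, 520), 896, 896, "the dog")
# -> '<ref>the dog</ref><box>(133,44),(758,580)</box>'
```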
# 3 Training
As illustrated in Fig. 3, the training process of the Qwen-VL model consists of three stages: two stages of pre-training and a final stage of instruction fine-tuning training.
# 3.1 Pre-training | 2308.12966#14 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 15 | Cognitive architecture equipped with skill library Over time, as interactions between the agent and its environment accumulate, there's a marked increase in the volume and intricacy of the agent's memory stream (Park et al. 2023; Weng and Lilian 2023). This proliferation necessitates an advanced cognitive architecture to process the burgeoning data. However, the current cognitive architecture embedded in LLMs-based agents can only allow agents to plan and reflect in a linear fashion, reminiscent of an assembly line. To redress this shortfall, this paper introduces the cognitive architecture infused with a domain-specific skill library, rooted in the Adaptive Control of Thought (ACT*) paradigm (Anderson and R 1983). This novel architecture facilitates parallel and bidirectional planning and reflection, drawing upon the agent's memory and skill repository, thus steering agent development towards enhanced adaptive control and rational deliberation akin to human cognition.
Central to this cognitive framework are four pivotal components, as delineated in Figure 3. The foundational pil-
[Figure 3 residue removed; the figure labels its panels Declarative Memory, Procedural Memory, Working Memory, and Skill Library, with flows "Get from the outside", "Action to the outside", "Summarize by CoT", and "Summarize by CoA".]
Figure 3: The cognitive architecture with skill library. | 2308.12503#15 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12519 | 15 |
subsequent decision steps. Specifically, given an intermediate decision step di, we calculate its Elo scores as follows:
$v_i = \sum_{d_j \in \mathrm{Child}(d_i)} (\alpha_j \cdot v_j) \qquad (7)$
where $\mathrm{Child}(d_i)$ refers to the set of the subsequent decision steps of $d_i$, $\alpha_j = \frac{\exp(v_j/\tau)}{\sum_k \exp(v_k/\tau)}$ is the normalized weight and $\tau$ is from Equation 5. By repeating the comparison via randomly sampling decision sequences, the Elo score of each decision step will converge to its expected value.
When guiding the experience exploration process, the Elo score of a decision step with only a few Elo updates may not represent its real value accurately. Such a decision step cannot be fully trusted for exhaustive exploration. Hence, we adjust the temperature $\tau$ in Equation 5 based on the number of Elo updates. Let $M_d$ be the number of Elo updates of the decision step $d$. The temperature of $d$ is annealed as follows:
$\tau_d = \tau_0 \cdot \frac{1}{1 + \sqrt{\ln(M_d + 1)}} \qquad (8)$
where $\tau_0$ is the default temperature. As the number of Elo updates grows, the approximated Elo score converges to its real value. At this time, we tend to explore the most promising decision.
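A small sketch of the two update rules above (Eqs. 7-8): an intermediate step's Elo score is the softmax-weighted average of its children's scores, and the sampling temperature is annealed with the number of Elo updates. Variable names and the numeric values are ours for illustration; only the formulas come from the text.

```python
import math

def backup_elo(child_scores, tau):
    """v_i = sum_j alpha_j * v_j, with alpha_j = softmax(v_j / tau)  (Eq. 7)."""
    weights = [math.exp(v / tau) for v in child_scores]
    z = sum(weights)
    return sum((w / z) * v for w, v in zip(weights, child_scores))

def annealed_temperature(tau_0, num_updates):
    """tau_d = tau_0 / (1 + sqrt(ln(M_d + 1)))  (Eq. 8)."""
    return tau_0 / (1.0 + math.sqrt(math.log(num_updates + 1)))

tau = annealed_temperature(tau_0=400.0, num_updates=5)
print(backup_elo([1500.0, 1420.0, 1610.0], tau))
```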
4.3 DISCUSSION | 2308.12519#15 | Rational Decision-Making Agent with Internalized Utility Judgment | Large language models (LLMs) have demonstrated remarkable advancements and
have attracted significant efforts to develop LLMs into agents capable of
executing intricate multi-step decision-making tasks beyond traditional NLP
applications. Existing approaches to LLM-based decision-making predominantly
build upon the manually-designed external performance metrics to guide the
decision-making process. However, reliance on the external performance metrics
as prior is problematic in real-world scenarios, where such prior may be
unavailable, flawed, or even erroneous. For genuine autonomous decision making,
it is imperative for the agent to develop its rationality from its posterior
experiences to judge decisions independently. Central to the development of
rationality is the construction of an internalized utility judgment, capable of
assigning numerical utilities to each decision. This paper proposes RadAgent
(Rational Decision-Making Agent), which fosters the development of its
rationality through an iterative framework involving Experience Exploration and
Utility Learning. Within this framework, Elo-based Utility Construction is
devised to assign Elo scores to individual decision steps to judge their
utilities via pairwise comparisons. Consequently, these Elo scores guide the
decision-making process to derive optimal outcomes. Experimental results on the
ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving
over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality
solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness
and efficiency. | http://arxiv.org/pdf/2308.12519 | Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun | cs.CL | Received 8,6,6,6 scores on ICLR 2024 | null | cs.CL | 20230824 | 20240117 | [
{
"id": "2305.14318"
},
{
"id": "2306.06624"
},
{
"id": "2305.17926"
},
{
"id": "2305.10601"
},
{
"id": "2307.16789"
},
{
"id": "2305.06849"
},
{
"id": "2304.08354"
},
{
"id": "2308.09687"
},
{
"id": "2306.11489"
},
{
"id": "2306.17563"
},
{
"id": "2305.14992"
},
{
"id": "2305.01937"
},
{
"id": "2308.10379"
},
{
"id": "2305.11554"
}
] |
2308.12682 | 15 | o1, . . . , $a_{t-1}$, $o_{t-1}$) or ($a_{1:t-1}$, $o_{0:t-1}$) the agent has access to before selecting action $a_t$. It is assumed that the goal states are fully observable. Unlike MDPs, the optimal policy in a POMDP typically takes actions depending on not just the most recent observation but the entire history. The objective of the planning algorithm is to find the optimal sequence of actions $a_{1:T}$ (i.e. an optimal plan) from an initial belief state $b_0$ to a given goal state $g \in S_G$. Here, $T$ is the length of the horizon. | 2308.12682#15 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 15 | Results on the effect of infilling training on downstream generation tasks and the performance of our infilling models on infilling benchmarks are reported in Section 3.2.
# 2.4 Long context fine-tuning
Effective handling of long sequences is a major topic of research in transformer-based language modeling (Vaswani et al., 2017). The fundamental modeling challenges are extrapolation, i.e., operating on sequence lengths beyond those seen at training time, and the quadratic complexity of attention passes which favors training on short-to-medium length inputs. | 2308.12950#15 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 15 | # 3.1 Pre-training
In the first stage of pre-training, we mainly utilize a large-scale, weakly labeled, web-crawled set of image-text pairs. Our pre-training dataset is composed of several publicly accessible sources and some in-house data. We made an effort to clean the dataset of certain patterns. As summarized in Table 2, the original dataset contains a total of 5 billion image-text pairs, and after cleaning, 1.4 billion data remain, with 77.3% English (text) data and 22.7% Chinese (text) data.
Table 2: Details of Qwen-VL pre-training data. LAION-en and LAION-zh are the English and Chinese language subset of LAION-5B (Schuhmann et al., 2022a). LAION-COCO (Schuhmann et al., 2022b) is a synthetic dataset generated from LAION-en. DataComp (Gadre et al., 2023) and Coyo (Byeon et al., 2022) are collections of image-text pairs. CC12M (Changpinyo et al., 2021), CC3M (Sharma et al., 2018), SBU (Ordonez et al., 2011) and COCO Caption (Chen et al., 2015) are academic caption datasets. | 2308.12966#15 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 16 | Figure 3: The cognitive architecture with skill library.
lars of agent cognition are Declarative (Md) and Procedural Memory (Mp). The former embodies the agent's library of factual knowledge, encompassing data on objects, individuals, locales, occurrences and their interconnections, serving as the cornerstone for rational deduction. Procedural memory, on the other hand, comprises operational guidelines that empower the agent to pursue objectives and surmount challenges. These guidelines operate by matching with facts stored declaratively, triggering actions geared towards achieving specific objectives. Skill Library (L) is a configurable domain knowledge base that provides domain knowledge for the reflective planning of intelligent agents. It can be viewed as a compilation of the agent's abilities to leverage its knowledge in situation-specific ways. Working Memory (Mw) is an agile, self-refreshing module acting as a bridge between memory and the external milieu. It not only directs agent actions based on processed memories but also assimilates external data, subsequently refining it into declarative and procedural knowledge via the Chain of Thoughts (CoT) and Chain of Actions (CoA). | 2308.12503#16 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12519 | 16 | 4.3 DISCUSSION
After conducting an adequate experience exploration and utility learning process, the agent will construct the internalized utility judgment. As all decision steps have their utilities estimated as Elo scores, any two of them can be compared, i.e., satisfying the Completeness property. Given three decision steps A, B, C, if vA > vB and vB > vC, the Elo score of A must be larger than that of C (vA > vB > vC), i.e., satisfying the Transitivity property. Thus, rationality is internalized in the agent so that it can rationally assess all decision sequences and select the best-performing one as the final solution. To derive the best outcome, given all existing decision sequences T = {t1, t2, · · · , tn}, the one whose final decision has the largest utility is selected as the final solution.
$t = \arg\max_{t \in T} \{V(d_N)\} \qquad (9)$
where dN refers to the final decision step.
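A tiny sketch of the selection rule in Eq. 9: among all explored decision sequences, pick the one whose final decision step has the highest utility. The data layout (pairs of a sequence and its final-step utility) is an assumption for illustration.

```python
def select_best_sequence(sequences):
    """sequences: iterable of (decision_sequence, utility_of_final_step) pairs."""
    return max(sequences, key=lambda pair: pair[1])[0]

best = select_best_sequence([(["s1", "s2"], 1450.0), (["s1", "s3", "s4"], 1580.0)])
print(best)  # -> ['s1', 's3', 's4']
```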
# 5 EXPERIMENT
As the key contribution of this work is to develop a rational decision-making agent with internalized utility judgment, we aim to answer the following research questions through a series of experiments. | 2308.12519#16 | Rational Decision-Making Agent with Internalized Utility Judgment | Large language models (LLMs) have demonstrated remarkable advancements and
have attracted significant efforts to develop LLMs into agents capable of
executing intricate multi-step decision-making tasks beyond traditional NLP
applications. Existing approaches to LLM-based decision-making predominantly
build upon the manually-designed external performance metrics to guide the
decision-making process. However, reliance on the external performance metrics
as prior is problematic in real-world scenarios, where such prior may be
unavailable, flawed, or even erroneous. For genuine autonomous decision making,
it is imperative for the agent to develop its rationality from its posterior
experiences to judge decisions independently. Central to the development of
rationality is the construction of an internalized utility judgment, capable of
assigning numerical utilities to each decision. This paper proposes RadAgent
(Rational Decision-Making Agent), which fosters the development of its
rationality through an iterative framework involving Experience Exploration and
Utility Learning. Within this framework, Elo-based Utility Construction is
devised to assign Elo scores to individual decision steps to judge their
utilities via pairwise comparisons. Consequently, these Elo scores guide the
decision-making process to derive optimal outcomes. Experimental results on the
ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving
over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality
solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness
and efficiency. | http://arxiv.org/pdf/2308.12519 | Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun | cs.CL | Received 8,6,6,6 scores on ICLR 2024 | null | cs.CL | 20230824 | 20240117 | [
{
"id": "2305.14318"
},
{
"id": "2306.06624"
},
{
"id": "2305.17926"
},
{
"id": "2305.10601"
},
{
"id": "2307.16789"
},
{
"id": "2305.06849"
},
{
"id": "2304.08354"
},
{
"id": "2308.09687"
},
{
"id": "2306.11489"
},
{
"id": "2306.17563"
},
{
"id": "2305.14992"
},
{
"id": "2305.01937"
},
{
"id": "2308.10379"
},
{
"id": "2305.11554"
}
] |
2308.12682 | 16 | Heuristic Search Planning. In real-world scenarios where the state space can be exponentially large to explore exhaustively, heuristic search planning (HSP) becomes useful (Bonet and Geffner 2001). Essentially, it uses heuristic functions $f_{heur} : H_t \times S_G \to \mathbb{R}$ to guide the search process in the planning problem, by computing a cost estimate from a given history of actions and observations. An example is the Best-First Search algorithms that select the most promising (next) action(s) using a linear combination of previously accumulated cost $f_{acc}$ for history $h_{t-1}$, and the estimated cost $f_{heur}$ from updated history $h_t = (h_{t-1}, a_t)$ and goal $g$.
$f(h_t) = z_1 \cdot f_{acc}(h_{t-1}) + z_2 \cdot f_{heur}(h_t, g) \qquad (1)$
Here $z_1, z_2 \in \{0, 1\}$. The next action $a_t = \arg\min_{h_t} f(h_t)$. Special cases are the A* algorithm ($z_1 = 1$ and $z_2 = 1$) and Greedy Best-First Search ($z_1 = 0$ and $z_2 = 1$). | 2308.12682#16 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
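The SayCanPay chunk above (2308.12682#16) quotes the Best-First Search selection rule f(h_t) = z1 · f_acc(h_{t-1}) + z2 · f_heur(h_t, g), with (z1, z2) = (1, 1) giving A*-style search and (0, 1) greedy best-first search. Below is a generic sketch of that rule; the candidate-generation and heuristic callables are placeholders we supply, and the goal is assumed to be folded into the heuristic and the goal test.

```python
import heapq

def best_first_plan(h0, is_goal, expand, f_acc, f_heur, z1=1, z2=1, max_pops=10000):
    """h0: initial history tuple; expand(h) yields candidate next actions."""
    frontier = [(z2 * f_heur(h0), h0)]
    while frontier and max_pops > 0:
        max_pops -= 1
        _, hist = heapq.heappop(frontier)          # pop the lowest-cost history
        if is_goal(hist):
            return hist
        for action in expand(hist):
            new_hist = hist + (action,)
            score = z1 * f_acc(hist) + z2 * f_heur(new_hist)
            heapq.heappush(frontier, (score, new_hist))
    return None
```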
2308.12950 | 16 | For Code Llama, we propose a dedicated long context fine-tuning (LCFT) stage in which models are presented with sequences of 16,384 tokens, up from the 4,096 tokens used for Llama 2 and our initial code training stages. By limiting the training time spent on processing long sequences to a fine-tuning stage, we gain long-range capabilities without significantly increasing the cost of training our models. Our strategy is similar to the recently proposed fine-tuning by position interpolation (Chen et al., 2023b), and we confirm the importance of modifying the rotation frequencies of the rotary position embedding used in the Llama 2 foundation models (Su et al., 2021). However, instead of downscaling frequencies linearly as Chen et al. (2023b), we change the base period from which they are derived. Specifically, with rotary embeddings, the query and key vectors $x_n$ at position $n$ are subject to a linear transformation $R^d_{\Theta,n} x_n$, where $R^d_{\Theta,n}$ is a block diagonal matrix with entries of the form
$\left(R^d_{\Theta,n}\right)_i = \begin{pmatrix} \cos n\theta_i & -\sin n\theta_i \\ \sin n\theta_i & \cos n\theta_i \end{pmatrix},$ | 2308.12950#16 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 16 | Language   Dataset          Original   Cleaned   Remaining%
English    LAION-en         2B         280M      14%
English    LAION-COCO       600M       300M      50%
English    DataComp         1.4B       300M      21%
English    Coyo             700M       200M      28%
English    CC12M            12M        8M        66%
English    CC3M             3M         3M        100%
English    SBU              1M         0.8M      80%
English    COCO Caption     0.6M       0.6M      100%
Chinese    LAION-zh         108M       105M      97%
Chinese    In-house Data    220M       220M      100%
Total                       5B         1.4B      28%
We freeze the large language model and only optimize the vision encoder and VL adapter in this stage. The input images are resized to 224 × 224. The training objective is to minimize the cross-entropy of the text tokens. The maximum learning rate is 2e-4 and the training process uses a batch size of 30720 for the image-text pairs, and the entire first stage of pre-training lasts for 50,000 steps, consuming approximately 1.5 billion image-text samples. More hyperparameters are detailed in Appendix C and the convergence curve of this stage is shown in Figure 6.
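A minimal sketch of the stage-1 optimization setup just described: the language model is frozen and only the vision encoder and VL adapter receive gradients. The module names (model.llm, model.visual, model.adapter) are hypothetical and only stand in for the components named in the text.

```python
import torch

def stage1_optimizer(model, max_lr=2e-4):
    # Freeze the large language model; train only the visual side.
    for p in model.llm.parameters():
        p.requires_grad = False
    trainable = list(model.visual.parameters()) + list(model.adapter.parameters())
    return torch.optim.AdamW(trainable, lr=max_lr)
```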
# 3.2 Multi-task Pre-training | 2308.12966#16 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 17 | When starting interaction, an agent, denoted as A = {T, B} and equipped with the cognitive architecture B = {Mw, Md, Mp, L}, seamlessly activates these four components, ensuring prolonged engagements in multifaceted settings. Formally, the mechanism through which the agent gleans information from the external realm at a given time t is depicted as Fget(t).
Upon temporary storage in Mw, the agent A distills this information using thought and action chains, leading to the formation of Declarative and Procedural Memory:
Md(t) = Fsum(Pcot + Mw(Fget(t)))   (1)
Mp(t) = Fsum(Pcoa + Mw(Fget(t)))   (2)
where Pcot signifies the CoT prompt (e.g., "Summarize the class content sequentially"), while Pcoa denotes the CoA prompt (e.g., "Detail the pedagogical steps"). Fsum delineates the process of condensing information within the Working Memory. In subsequent interactions, when agent A readies its response for moment t + 1, it first taps into Md, Mp, and L, extracting reflections and strategies from the preceding moment, t, which then translates into overt actions: | 2308.12503#17 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
2308.12519 | 17 | As the key contribution of this work is to develop a rational decision-making agent with internalized utility judgment, we aim to answer the following research questions through a series of experiments.
RQ1 Can RADAGENT make decisions rationally to accomplish a diverse set of tasks?
RQ2 Beyond finding feasible solutions, can RADAGENT find better solutions?
RQ3 How efficient is RADAGENT in decision making?
RQ4 Is Elo-based Utility Construction effective in providing reliable utility assessments?
RQ5 What are the key differentiating factors of RADAGENT against other methods?
Next, we describe the experimental settings and then report results by answering the aforementioned research questions.
5.1 EXPERIMENTAL SETTINGS
Datasets We conduct extensive experiments on the ToolBench dataset (Qin et al., 2023c), comprising a diverse and intricate collection of human instructions necessitating agents to make multi-step decisions for successful task completion. In our experiments, we focused on the intra-category multi-tool instruction scenario. This subset of ToolBench has been thoughtfully curated to reflect the complexity of real-world tasks, encompassing the utilization of various tools and necessitating multi-step decision-making processes. It is a rigorous evaluation to demonstrate the robustness and generalizability of decision making across diverse tasks.
| 2308.12519#17 | Rational Decision-Making Agent with Internalized Utility Judgment | Large language models (LLMs) have demonstrated remarkable advancements and
have attracted significant efforts to develop LLMs into agents capable of
executing intricate multi-step decision-making tasks beyond traditional NLP
applications. Existing approaches to LLM-based decision-making predominantly
build upon the manually-designed external performance metrics to guide the
decision-making process. However, reliance on the external performance metrics
as prior is problematic in real-world scenarios, where such prior may be
unavailable, flawed, or even erroneous. For genuine autonomous decision making,
it is imperative for the agent to develop its rationality from its posterior
experiences to judge decisions independently. Central to the development of
rationality is the construction of an internalized utility judgment, capable of
assigning numerical utilities to each decision. This paper proposes RadAgent
(Rational Decision-Making Agent), which fosters the development of its
rationality through an iterative framework involving Experience Exploration and
Utility Learning. Within this framework, Elo-based Utility Construction is
devised to assign Elo scores to individual decision steps to judge their
utilities via pairwise comparisons. Consequently, these Elo scores guide the
decision-making process to derive optimal outcomes. Experimental results on the
ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving
over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality
solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness
and efficiency. | http://arxiv.org/pdf/2308.12519 | Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun | cs.CL | Received 8,6,6,6 scores on ICLR 2024 | null | cs.CL | 20230824 | 20240117 | [
{
"id": "2305.14318"
},
{
"id": "2306.06624"
},
{
"id": "2305.17926"
},
{
"id": "2305.10601"
},
{
"id": "2307.16789"
},
{
"id": "2305.06849"
},
{
"id": "2304.08354"
},
{
"id": "2308.09687"
},
{
"id": "2306.11489"
},
{
"id": "2306.17563"
},
{
"id": "2305.14992"
},
{
"id": "2305.01937"
},
{
"id": "2308.10379"
},
{
"id": "2305.11554"
}
] |
2308.12682 | 17 | 4 Language Model Planning Framework We keep the same POMDP formulation while updating our interpretations of the tuple. Previous works have shown that language models (LMs) trained on extensive data would internalize rich world knowledge that can be queried for downstream tasks like planning (Hao et al. 2023). This is akin to an internal transition function $T_{int}$. Similarly, LMs also maintain and update an internal belief state $b^{int}$ over tokens (or actions). An observation function maps states to NL observations, $O : S \to O$. The updated POMDP is now given as $\langle S, S_G, b^{int}_0, A, O, R, T_{int}\rangle$. In our offline planning experiments, we assume the following: (i) $O = \{o_0\}$, inducing belief state $b^{int}_0 = 1_{s_0}$, while $o_t = \emptyset$ $\forall t > 0$, due to lack of environmental feedback; (ii) sparse rewards = 1 for plan success, else 0. While our LM does not utilize the reward function, one could use it for alignment (Ziegler et al. 2020). Problem Statement: Given a NL goal $g$, history $h_0 = (o_0)$, and a LM generating actions $a_t$ with probability $p(a_t|h_{t-1}, g)$, | 2308.12682#17 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 17 | $\left(R^d_{\Theta,n}\right)_i = \begin{pmatrix} \cos n\theta_i & -\sin n\theta_i \\ \sin n\theta_i & \cos n\theta_i \end{pmatrix},$
and $d$ denotes the embedding dimension. Rotation frequencies are computed as $\theta_i = \theta^{-2i/d}$, and we increase the base period $\theta$ from 10,000 to 1,000,000 for fine-tuning. This increase allows for processing much larger sequences and reduces bias towards short-distance attention (see Appendix G.1 for further discussion). Our experiments confirm that Code Llama models are not only effective within the increased sequence length used during fine-tuning, but further show extrapolation capabilities and exhibit stable behavior on very long sequences of up to 100,000 tokens (Section 3.3).
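A small NumPy sketch of the rotary-embedding detail above: the rotation frequencies are theta_i = theta^(-2i/d), and raising the base period theta from 10,000 to 1,000,000 slows the rotations, which is what reduces the bias towards short-distance attention on long sequences. This is an illustrative re-implementation, not the library code used for Code Llama.

```python
import numpy as np

def rope_frequencies(d, base):
    i = np.arange(d // 2)
    return base ** (-2.0 * i / d)            # theta_i = base^(-2i/d)

def rotate(x, n, base=1_000_000):
    """Apply the block-diagonal rotation to a vector x at position n."""
    d = x.shape[-1]
    theta = rope_frequencies(d, base)
    cos, sin = np.cos(n * theta), np.sin(n * theta)
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = cos * x1 - sin * x2
    out[1::2] = sin * x1 + cos * x2
    return out

# Larger base -> slower rotations at the same dimension index.
print(rope_frequencies(8, base=10_000)[:2], rope_frequencies(8, base=1_000_000)[:2])
```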
# 2.5 Instruction fine-tuning
Our instruction fine-tuned models Code Llama - Instruct are based on Code Llama and trained to answer questions appropriately. They are trained on three different types of data.
| 2308.12950#17 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
2308.12966 | 17 | In the second stage of multi-task pre-training, we introduce high-quality and fine-grained VL annotation data with a larger input resolution and interleaved image-text data. As summarized in Table 3, we trained Qwen-VL on 7 tasks simultaneously. For text generation, we use the in-house collected corpus to maintain the LLM's ability. Captioning data is the same with Table 2 except for far fewer samples and excluding LAION-COCO. We use a mixture of publicly available data for the VQA task which includes GQA (Hudson and Manning, 2019), VGQA (Krishna et al., 2017), VQAv2 (Goyal et al., 2017), DVQA (Kafle et al., 2018), OCR-VQA (Mishra et al., 2019) and DocVQA (Mathew et al., 2021). We follow Kosmos-2 to use the GRIT (Peng et al., 2023) dataset for the grounding task with minor modifications. For the reference grounding and grounded captioning duality tasks, we construct training samples from GRIT (Peng et al., 2023), Visual Genome (Krishna et al., 2017), RefCOCO (Kazemzadeh et al., 2014), RefCOCO+, and RefCOCOg (Mao et al.,
| 2308.12966#17 | Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond | In this work, we introduce the Qwen-VL series, a set of large-scale
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
2308.12503 | 18 | R(t) = Fref(Md(t) + L)   (3)
P(t) = Fpla(Mp(t) + L)   (4)
ACT(t + 1) = Fact(R(t) + P(t) + Mw(Fget(t)))   (5)
where Fref and Fpla illustrate the reflection and synthesis processes for Declarative and Procedural Memory at moment t, respectively. R(t) and P(t) represent the reflective and strategic outcomes at time t, while Fact encapsulates the amalgamation of these insights, plans, and the skill repertoire to forge ACT(t + 1). | 2308.12503#18 | CGMI: Configurable General Multi-Agent Interaction Framework | Benefiting from the powerful capabilities of large language models (LLMs),
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
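The CGMI chunk above (2308.12503#18) composes reflection R(t) and planning P(t) from memory plus the skill library, and combines them with working memory to produce the next action ACT(t+1). Below is a minimal sketch of that composition under the assumption that each of Fref, Fpla, and Fact is realized as a prompted call to a language model; the prompt wording and the `llm` callable are placeholders, not the authors' implementation.

```python
def step(llm, declarative_mem, procedural_mem, skills, working_mem):
    reflection = llm(f"Reflect on: {declarative_mem}\nSkills: {skills}")          # R(t), Eq. 3
    plan = llm(f"Plan using: {procedural_mem}\nSkills: {skills}")                 # P(t), Eq. 4
    action = llm(                                                                  # ACT(t+1), Eq. 5
        f"Reflection: {reflection}\nPlan: {plan}\nContext: {working_mem}\nNext action:"
    )
    return action
```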
2308.12519 | 18 |
Given the resource-intensive nature of API calls, we conducted our experiments on a random selection of 500 samples from the total pool of 25K human instructions available in ToolBench. This sampling strategy allows us to achieve a representative evaluation while managing computational costs effectively.
Baselines We compare RADAGENT with the following decision-making methods:
⢠CoT (Wei et al., 2023; Yao et al., 2022) decomposes reasoning into explicit intermediate steps. We adapt ReACT (Yao et al., 2022) to decompose a decision step in the format âThought: ..., API Name: ..., Parameters:
⢠CoT@3 extends the CoT approach by running the decision-making process three times indepen- dently for an instruction and finally generates a total of three decision sequences.
⢠Reflexion (Shinn et al., 2023) builds upon CoT@3 and allows LLMs to engage in self-reflection on their previous decision sequences. The reflection summary is concatenated in the prompt before proceeding to the next decision. | 2308.12519#18 | Rational Decision-Making Agent with Internalized Utility Judgment | Large language models (LLMs) have demonstrated remarkable advancements and
have attracted significant efforts to develop LLMs into agents capable of
executing intricate multi-step decision-making tasks beyond traditional NLP
applications. Existing approaches to LLM-based decision-making predominantly
build upon the manually-designed external performance metrics to guide the
decision-making process. However, reliance on the external performance metrics
as prior is problematic in real-world scenarios, where such prior may be
unavailable, flawed, or even erroneous. For genuine autonomous decision making,
it is imperative for the agent to develop its rationality from its posterior
experiences to judge decisions independently. Central to the development of
rationality is the construction of an internalized utility judgment, capable of
assigning numerical utilities to each decision. This paper proposes RadAgent
(Rational Decision-Making Agent), which fosters the development of its
rationality through an iterative framework involving Experience Exploration and
Utility Learning. Within this framework, Elo-based Utility Construction is
devised to assign Elo scores to individual decision steps to judge their
utilities via pairwise comparisons. Consequently, these Elo scores guide the
decision-making process to derive optimal outcomes. Experimental results on the
ToolBench dataset demonstrate RadAgent's superiority over baselines, achieving
over 10% improvement in Pass Rate on diverse tasks. It offers higher-quality
solutions and reduces costs (ChatGPT API calls), highlighting its effectiveness
and efficiency. | http://arxiv.org/pdf/2308.12519 | Yining Ye, Xin Cong, Shizuo Tian, Yujia Qin, Chong Liu, Yankai Lin, Zhiyuan Liu, Maosong Sun | cs.CL | Received 8,6,6,6 scores on ICLR 2024 | null | cs.CL | 20230824 | 20240117 | [
{
"id": "2305.14318"
},
{
"id": "2306.06624"
},
{
"id": "2305.17926"
},
{
"id": "2305.10601"
},
{
"id": "2307.16789"
},
{
"id": "2305.06849"
},
{
"id": "2304.08354"
},
{
"id": "2308.09687"
},
{
"id": "2306.11489"
},
{
"id": "2306.17563"
},
{
"id": "2305.14992"
},
{
"id": "2305.01937"
},
{
"id": "2308.10379"
},
{
"id": "2305.11554"
}
] |
2308.12682 | 18 | 2020). Problem Statement: Given a NL goal $g$, history $h_0 = (o_0)$, and a LM generating actions $a_t$ with probability $p(a_t|h_{t-1}, g)$, generate the most likely plan $(a_{1:T})$ to go from $b^{int}_0$ We aim to maximize the plan's probability, reframing LM planning as a classical search problem, where we repeatedly expand the current plan $a_{1:t-1}$ by adding action $a_t$. Rewriting the probability $P(a_{1:T}|h_0, g)$ recursively as: | 2308.12682#18 | SayCanPay: Heuristic Planning with Large Language Models using Learnable Domain Knowledge | Large Language Models (LLMs) have demonstrated impressive planning abilities
due to their vast "world knowledge". Yet, obtaining plans that are both
feasible (grounded in affordances) and cost-effective (in plan length), remains
a challenge, despite recent progress. This contrasts with heuristic planning
methods that employ domain knowledge (formalized in action models such as PDDL)
and heuristic search to generate feasible, optimal plans. Inspired by this, we
propose to combine the power of LLMs and heuristic planning by leveraging the
world knowledge of LLMs and the principles of heuristic search. Our approach,
SayCanPay, employs LLMs to generate actions (Say) guided by learnable domain
knowledge, that evaluates actions' feasibility (Can) and long-term
reward/payoff (Pay), and heuristic search to select the best sequence of
actions. Our contributions are (1) a novel framing of the LLM planning problem
in the context of heuristic planning, (2) integrating grounding and
cost-effective elements into the generated plans, and (3) using heuristic
search over actions. Our extensive evaluations show that our model surpasses
other LLM planning approaches. | http://arxiv.org/pdf/2308.12682 | Rishi Hazra, Pedro Zuidberg Dos Martires, Luc De Raedt | cs.AI | Accepted in AAAI 2024. Website:
https://rishihazra.github.io/SayCanPay/ | null | cs.AI | 20230824 | 20240101 | [
{
"id": "2302.13971"
},
{
"id": "2208.07339"
},
{
"id": "2305.14992"
},
{
"id": "2302.05128"
},
{
"id": "2212.08681"
},
{
"id": "1807.03748"
},
{
"id": "2303.00855"
},
{
"id": "2305.10601"
},
{
"id": "2304.11477"
},
{
"id": "2210.17323"
},
{
"id": "2210.11416"
},
{
"id": "2201.04735"
},
{
"id": "2202.10936"
},
{
"id": "2209.07753"
},
{
"id": "2302.06706"
},
{
"id": "1909.08593"
},
{
"id": "2307.15818"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2305.14314"
}
] |
2308.12950 | 18 | Our instruction fine-tuned models Code Llama - Instruct are based on Code Llama and trained to answer questions appropriately. They are trained on three different types of data.
Dataset                              Sampling prop.   Epochs   Disk size
Code Llama (500B tokens)
  Code                               85%              2.03     859 GB
  Natural language related to code   8%               1.39     78 GB
  Natural language                   7%               0.01     3.5 TB
Code Llama - Python (additional 100B tokens)
  Python                             75%              3.69     79 GB
  Code                               10%              0.05     859 GB
  Natural language related to code   10%              0.35     78 GB
  Natural language                   5%               0.00     3.5 TB
Table 1: Training dataset of Code Llama and Code Llama - Python. We train Code Llama on 500B additional tokens and Code Llama - Python further on 100B tokens. | 2308.12950#18 | Code Llama: Open Foundation Models for Code | We release Code Llama, a family of large language models for code based on
Llama 2 providing state-of-the-art performance among open models, infilling
capabilities, support for large input contexts, and zero-shot instruction
following ability for programming tasks. We provide multiple flavors to cover a
wide range of applications: foundation models (Code Llama), Python
specializations (Code Llama - Python), and instruction-following models (Code
Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are
trained on sequences of 16k tokens and show improvements on inputs with up to
100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants
support infilling based on surrounding content. Code Llama reaches
state-of-the-art performance among open models on several code benchmarks, with
scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code
Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our
models outperform every other publicly available model on MultiPL-E. We release
Code Llama under a permissive license that allows for both research and
commercial use. | http://arxiv.org/pdf/2308.12950 | Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve | cs.CL | null | null | cs.CL | 20230824 | 20240131 | [] |
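The training-mix table in the chunk above lists sampling proportions, epochs, and disk sizes for the 500B-token Code Llama run. Under the common reading that epochs = (total training tokens × sampling proportion) / (tokens in the source dataset), one can back out rough dataset sizes in tokens; the sketch below does only that arithmetic. Both the assumed relationship and the derived figures are illustrative, not numbers stated by the paper.

```python
# Back-of-the-envelope arithmetic for the 500B-token Code Llama mix above,
# assuming: epochs = total_tokens * sampling_prop / dataset_tokens.
# The assumed relationship (and hence the derived sizes) is an illustration only.
TOTAL_TOKENS = 500e9  # 500B additional training tokens

mix = {
    # source: (sampling proportion, epochs from Table 1)
    "code": (0.85, 2.03),
    "natural language related to code": (0.08, 1.39),
    "natural language": (0.07, 0.01),
}

for source, (prop, epochs) in mix.items():
    dataset_tokens = TOTAL_TOKENS * prop / epochs
    print(f"{source}: ~{dataset_tokens / 1e9:.0f}B tokens")
# Prints roughly 209B, 29B, and 3500B tokens respectively.
```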
2308.12966 | 18 | 2016). To improve text-oriented tasks, we collect PDF and HTML data from Common Crawl and generate synthetic OCR data in English and Chinese with natural-scenery backgrounds, following (Kim et al., 2022). Finally, we construct interleaved image-text data simply by packing same-task data into sequences of length 2048 (a packing sketch follows this record's reference list).
Table 3: Details of Qwen-VL multi-task pre-training data.
Task                       # Samples   Dataset
Captioning                 19.7M       LAION-en & zh, DataComp, Coyo, CC12M & 3M, SBU, COCO, In-house Data
VQA                        3.6M        GQA, VGQA, VQAv2, DVQA, OCR-VQA, DocVQA, TextVQA, ChartQA, AI2D
Grounding                  3.5M        GRIT
Ref Grounding              8.7M        GRIT, Visual Genome, RefCOCO, RefCOCO+, RefCOCOg
Grounded Cap.              8.7M        GRIT, Visual Genome, RefCOCO, RefCOCO+, RefCOCOg
OCR                        24.8M       SynthDoG-en & zh, Common Crawl pdf & HTML
Pure-text Autoregression   7.8M        In-house Data
vision-language models (LVLMs) designed to perceive and understand both texts
and images. Starting from the Qwen-LM as a foundation, we endow it with visual
capacity by the meticulously designed (i) visual receptor, (ii) input-output
interface, (iii) 3-stage training pipeline, and (iv) multilingual multimodal
cleaned corpus. Beyond the conventional image description and
question-answering, we implement the grounding and text-reading ability of
Qwen-VLs by aligning image-caption-box tuples. The resulting models, including
Qwen-VL and Qwen-VL-Chat, set new records for generalist models under similar
model scales on a broad range of visual-centric benchmarks (e.g., image
captioning, question answering, visual grounding) and different settings (e.g.,
zero-shot, few-shot). Moreover, on real-world dialog benchmarks, our
instruction-tuned Qwen-VL-Chat also demonstrates superiority compared to
existing vision-language chatbots. Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL. | http://arxiv.org/pdf/2308.12966 | Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, Jingren Zhou | cs.CV, cs.CL | Code, demo and models are available at
https://github.com/QwenLM/Qwen-VL | null | cs.CV | 20230824 | 20231013 | [
{
"id": "2211.01335"
},
{
"id": "2307.02499"
},
{
"id": "2305.10403"
},
{
"id": "2308.16890"
},
{
"id": "2208.10442"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11172"
},
{
"id": "2210.08402"
},
{
"id": "2306.02858"
},
{
"id": "2209.06794"
},
{
"id": "1504.00325"
},
{
"id": "2204.13653"
},
{
"id": "2304.08485"
},
{
"id": "2302.14045"
},
{
"id": "2212.04408"
},
{
"id": "2307.05222"
},
{
"id": "2306.15195"
},
{
"id": "2111.08276"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2306.14824"
},
{
"id": "2102.05918"
},
{
"id": "2205.01917"
},
{
"id": "2111.11432"
},
{
"id": "2307.16125"
},
{
"id": "2305.03726"
},
{
"id": "2203.10244"
},
{
"id": "2206.08916"
},
{
"id": "2304.14108"
},
{
"id": "2307.08581"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2305.18565"
}
] |
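The Qwen-VL chunk above mentions packing same-task data into sequences of length 2048 during multi-task pre-training. Below is a minimal, hypothetical sketch of such greedy sequence packing over already-tokenized examples; it is not the authors' preprocessing code, and the truncation rule for over-long examples is an assumption.

```python
# Greedy packing of tokenized same-task examples into sequences of at most 2048
# tokens, as a hypothetical illustration of the packing step mentioned above.
from typing import Iterable, List

MAX_LEN = 2048  # sequence length used in the multi-task pre-training stage


def pack_same_task(examples: Iterable[List[int]], max_len: int = MAX_LEN) -> List[List[int]]:
    """Concatenate token lists until adding the next one would exceed max_len."""
    packed: List[List[int]] = []
    current: List[int] = []
    for tokens in examples:
        tokens = tokens[:max_len]  # truncate over-long single examples (assumption)
        if current and len(current) + len(tokens) > max_len:
            packed.append(current)
            current = []
        current = current + tokens
    if current:
        packed.append(current)
    return packed


if __name__ == "__main__":
    fake_examples = [[1] * 900, [2] * 700, [3] * 600, [4] * 100]
    print([len(seq) for seq in pack_same_task(fake_examples)])  # -> [1600, 700]
```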
2308.12503 | 19 | Configurable General Multi-Agent Interaction Framework. With the support of structured persona models and enhanced cognitive models with skill libraries, a single agent can play multiple roles in specific scenarios to complete complex tasks. However, using LLM-based agents to achieve preset goals in specific tasks currently often fails to present real social interactions, because simulating social phenomena requires multiple agents to interact and cooperate in a human-like manner. Therefore, this paper introduces the Configurable General Multi-Agent Interaction Framework (CGMI), which can simulate real interactions.
In the context of classroom teaching, this paper explores how CGMI promotes interaction and collaboration among multiple agents. In addition to the virtual teacher agent and virtual student agents, we have also designed assistant agents responsible for setting educational goals, planning teaching schedules, and analyzing students' willingness to speak in order to support the teacher's teaching activities. These assistant agents can adjust their functional configurations based on specific scenarios. To ensure the quality of the interaction process, we introduced a supervisory agent responsible for detecting "personality forgetting", ensuring that the "teacher agent proceeds with teaching as planned", and "determining when to end the discussion". Through the CGMI framework, each intelligent entity can engage in more in-depth personalized dialogues and task completion, collaboratively creating a realistic virtual teaching environment (a hypothetical role-configuration sketch follows this record's reference list).
agents based on LLMs have shown the potential to address domain-specific tasks
and emulate human behaviors. However, the content generated by these agents
remains somewhat superficial, owing to their limited domain expertise and the
absence of an effective cognitive architecture. To address this, we present the
Configurable General Multi-Agent Interaction (CGMI) framework, designed to
replicate human interactions in real-world scenarios. Specifically, we propose
a tree-structured methodology for the assignment, detection, and maintenance of
agent personality. Additionally, we designed a cognitive architecture equipped
with a skill library based on the ACT* model, which contains memory,
reflection, and planning modules. We have also integrated general agents to
augment the virtual environment's realism. Using the CGMI framework, we
simulated numerous classroom interactions between teacher and students. The
experiments indicate that aspects such as the teaching methodology, curriculum,
and student performance closely mirror real classroom settings. We will open
source our work. | http://arxiv.org/pdf/2308.12503 | Shi Jinxin, Zhao Jiabao, Wang Yilei, Wu Xingjiao, Li Jiawen, He Liang | cs.AI, cs.HC, cs.MA | 11 pages, 15 figures | null | cs.AI | 20230824 | 20230828 | [
{
"id": "2302.01560"
},
{
"id": "2307.05300"
},
{
"id": "2307.07924"
},
{
"id": "2210.03350"
},
{
"id": "2304.05376"
},
{
"id": "2304.03442"
},
{
"id": "2210.03629"
},
{
"id": "2305.04091"
},
{
"id": "2305.02547"
},
{
"id": "2303.17071"
},
{
"id": "2303.17760"
},
{
"id": "2303.08774"
}
] |
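The CGMI chunk above describes a classroom made of a teacher agent, student agents, assistant agents, and a supervisory agent, each with distinct responsibilities. The sketch below shows one hypothetical way to declare those roles as configuration; the class and field names are assumptions and do not reflect the framework's actual API.

```python
# Hypothetical role configuration for the classroom scenario described above.
# Names and structure are illustrative only, not CGMI's actual interface.
from dataclasses import dataclass, field
from typing import List


@dataclass
class AgentRole:
    name: str
    responsibilities: List[str] = field(default_factory=list)


CLASSROOM_ROLES = [
    AgentRole("teacher", ["deliver the lesson according to the teaching plan"]),
    AgentRole("student", ["participate in classroom discussion"]),
    AgentRole("assistant", [
        "set educational goals",
        "plan the teaching schedule",
        "analyze students' willingness to speak",
    ]),
    AgentRole("supervisor", [
        "detect personality forgetting",
        "keep the teacher agent teaching as planned",
        "decide when to end the discussion",
    ]),
]

if __name__ == "__main__":
    for role in CLASSROOM_ROLES:
        print(f"{role.name}: " + "; ".join(role.responsibilities))
```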