Dataset schema (column name, type, and observed value range; ⌀ marks nullable columns in the source viewer):

| column | type | values |
|---|---|---|
| doi | string | 10 characters |
| chunk-id | int64 | 0–936 |
| chunk | string | 401–2.02k characters |
| id | string | 12–14 characters |
| title | string | 8–162 characters |
| summary | string | 228–1.92k characters |
| source | string | 31 characters |
| authors | string | 7–6.97k characters |
| categories | string | 5–107 characters |
| comment | string (nullable) | 4–398 characters |
| journal_ref | string (nullable) | 8–194 characters |
| primary_category | string | 5–17 characters |
| published | string | 8 characters |
| updated | string | 8 characters |
| references | list | — |
2309.09013 | 120 | [83] Justin Zobel and Alistair Moffat. 2006. Inverted Files for Text Search Engines. Comput. Surveys 38, 2 (July 2006), 6–es.
A PROOF OF THEOREM 4.2. Fix two vectors $u, v \in \mathbb{R}^N$. Define $Z_{\text{Sketch}} = \langle \phi(u), \phi(v) \rangle$ as the random variable representing the inner product of sketches of size $n$, prepared using the projection $\phi(u) = Ru$, with $R \in \{-1/\sqrt{n}, 1/\sqrt{n}\}^{n \times N}$. $Z_{\text{Sketch}}$ is an unbiased estimator of $\langle u, v \rangle$. Its distribution tends to a Gaussian with variance:

$$\frac{1}{n}\Big(\lVert u \rVert_2^2 \lVert v \rVert_2^2 + \langle u, v \rangle^2 - 2\sum_i u_i^2 v_i^2\Big).$$

Proof. Consider the random variable $Z = \big(\sum_j R_j u_j\big)\big(\sum_k R_k v_k\big)$, where the $R_j$'s are Rademacher random variables. It is clear that $Z/n$ is the product of the $i$-th sketch coordinates (for any $i$): $\phi(u)_i \phi(v)_i$.
| 2309.09013#120 | Bridging Dense and Sparse Maximum Inner Product Search | Maximum inner product search (MIPS) over dense and sparse vectors has progressed independently in a bifurcated literature for decades; the latter is better known as top-$k$ retrieval in Information Retrieval. This duality exists because sparse and dense vectors serve different end goals, despite the fact that they are manifestations of the same mathematical problem. In this work, we ask whether algorithms for dense vectors can be applied effectively to sparse vectors, particularly those that violate the assumptions underlying top-$k$ retrieval methods. We study IVF-based retrieval, where vectors are partitioned into clusters and only a fraction of clusters are searched during retrieval. We conduct a comprehensive analysis of dimensionality reduction for sparse vectors, and examine standard and spherical KMeans for partitioning. Our experiments demonstrate that IVF serves as an efficient solution for sparse MIPS. As byproducts, we identify two research opportunities and demonstrate their potential. First, we cast the IVF paradigm as a dynamic pruning technique and turn that insight into a novel organization of the inverted index for approximate MIPS over general sparse vectors. Second, we offer a unified regime for MIPS over vectors that have dense and sparse subspaces, and show its robustness to query distributions. | http://arxiv.org/pdf/2309.09013 | Sebastian Bruch, Franco Maria Nardini, Amir Ingber, Edo Liberty | cs.IR | null | null | cs.IR | 20230916 | 20230916 | [
{
"id": "2104.05740"
},
{
"id": "1909.13459"
},
{
"id": "2110.11540"
},
{
"id": "2106.14807"
},
{
"id": "1903.10391"
},
{
"id": "1507.05910"
},
{
"id": "2112.02179"
},
{
"id": "2212.07551"
},
{
"id": "2112.09628"
},
{
"id": "1603.09320"
},
{
"id": "2010.06467"
},
{
"id": "1706.06064"
},
{
"id": "1903.08690"
},
{
"id": "2010.01195"
}
] |
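The IVF scheme described in the summary above, partitioning the collection into clusters and searching only a fraction of them per query, is easy to sketch. Below is a minimal illustration, not the paper's implementation: it uses scikit-learn's KMeans on toy dense data, ranks clusters by the inner product between the query and each centroid, and treats the number of probed clusters (`n_probe`) as a hypothetical knob; L2-normalizing vectors before clustering would approximate the spherical-KMeans variant the paper also examines.

```python
# Minimal IVF-style MIPS sketch -- an illustration of the paradigm described
# above, not the paper's implementation. `n_probe` and the toy dense data
# are assumptions made for the example.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.standard_normal((10_000, 64))              # toy collection
kmeans = KMeans(n_clusters=100, n_init=4, random_state=0).fit(X)
labels = kmeans.labels_

def ivf_mips(q, n_probe=10, k=5):
    """Approximate top-k MIPS: probe only the n_probe most promising clusters."""
    # Rank clusters by the inner product between the query and each centroid.
    top_clusters = np.argsort(-(kmeans.cluster_centers_ @ q))[:n_probe]
    cand = np.flatnonzero(np.isin(labels, top_clusters))
    scores = X[cand] @ q                           # exact scores inside probed clusters
    return cand[np.argsort(-scores)[:k]]

q = rng.standard_normal(64)
print(ivf_mips(q))                                 # indices of the approximate top-5
```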
2309.09013 | 121 | We can expand the expected value of $Z$ as follows:

$$\mathbb{E}[Z] = \mathbb{E}\Big[\Big(\sum_j R_j u_j\Big)\Big(\sum_k R_k v_k\Big)\Big] = \sum_j u_j v_j\,\underbrace{\mathbb{E}[R_j^2]}_{1} + \sum_{j \neq k} u_j v_k\,\underbrace{\mathbb{E}[R_j R_k]}_{0} = \langle u, v \rangle.$$

The variance of $Z$ can be expressed as follows:

$$\operatorname{Var}(Z) = \mathbb{E}[Z^2] - \mathbb{E}[Z]^2 = \mathbb{E}\Big[\Big(\sum_j R_j u_j\Big)^2\Big(\sum_k R_k v_k\Big)^2\Big] - \langle u, v \rangle^2.$$

We have the following:

$$\mathbb{E}\Big[\Big(\sum_i R_i u_i\Big)^2\Big(\sum_k R_k v_k\Big)^2\Big] = \mathbb{E}\Big[\Big(\sum_i u_i^2 + \sum_{i \neq j} R_i R_j u_i u_j\Big)\Big(\sum_k v_k^2 + \sum_{k \neq l} R_k R_l v_k v_l\Big)\Big] \quad (12)$$

$$= \lVert u \rVert_2^2 \lVert v \rVert_2^2 + \underbrace{\mathbb{E}\Big[\lVert u \rVert_2^2 \sum_{k \neq l} R_k R_l v_k v_l\Big]}_{0} + \underbrace{\mathbb{E}\Big[\lVert v \rVert_2^2 \sum_{i \neq j} R_i R_j u_i u_j\Big]}_{0} + \mathbb{E}\Big[\sum_{i \neq j} R_i R_j u_i u_j \sum_{k \neq l} R_k R_l v_k v_l\Big]. \quad (13)$$
2309.09013 | 122 | The last term can be decomposed as follows:
$$\mathbb{E}\Big[\sum_{i \neq j \neq k \neq l} R_i R_j R_k R_l\, u_i u_j v_k v_l\Big] + \mathbb{E}\Big[\sum_{i = k,\, j \neq l\ \lor\ i \neq k,\, j = l} R_i R_j R_k R_l\, u_i u_j v_k v_l\Big] + \mathbb{E}\Big[\sum_{i \neq j,\ (i = k,\, j = l)\ \lor\ (i = l,\, j = k)} R_i R_j R_k R_l\, u_i u_j v_k v_l\Big].$$

The first two terms are 0 and the last term can be rewritten as follows:

$$2\,\mathbb{E}\Big[\sum_i u_i v_i \Big(\sum_j u_j v_j - u_i v_i\Big)\Big] = 2\langle u, v \rangle^2 - 2\sum_i u_i^2 v_i^2. \quad (14)$$
We now substitute the last term in Equation (13) with Equation (14) to obtain:
$$\operatorname{Var}(Z) = \lVert u \rVert_2^2 \lVert v \rVert_2^2 + \langle u, v \rangle^2 - 2\sum_i u_i^2 v_i^2. \quad (15)$$
2309.09013 | 123 | Observe that $Z_{\text{Sketch}} = \sum_i \phi(u)_i \phi(v)_i = \frac{1}{n}\sum_i Z_i$ is the sum of independent, identically distributed random variables. Furthermore, for bounded vectors $u$ and $v$, the variance is finite. By the application of the Central Limit Theorem, we can deduce that the distribution of $Z_{\text{Sketch}}$ tends to a normal distribution with the stated expected value. Noting that $\operatorname{Var}(Z_{\text{Sketch}}) = \frac{1}{n^2}\sum_i \operatorname{Var}(Z_i) = \operatorname{Var}(Z)/n$ gives the desired variance. □
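Theorem 4.2 can be checked numerically. The following Monte Carlo sketch is illustrative only (not from the paper): it draws $R$ with i.i.d. entries in $\{-1/\sqrt{n}, 1/\sqrt{n}\}$, forms $Z_{\text{Sketch}} = \langle \phi(u), \phi(v) \rangle$, and compares the empirical mean and variance against $\langle u, v \rangle$ and Equation (15) divided by $n$; the dimensions and trial count are arbitrary choices.

```python
# Monte Carlo check of Theorem 4.2 -- illustrative only.
import numpy as np

rng = np.random.default_rng(0)
N, n, trials = 200, 64, 5_000
u, v = rng.standard_normal(N), rng.standard_normal(N)

est = np.empty(trials)
for t in range(trials):
    R = rng.choice([-1.0, 1.0], size=(n, N)) / np.sqrt(n)   # phi(x) = R x
    est[t] = (R @ u) @ (R @ v)                              # Z_Sketch

print(est.mean(), u @ v)                                    # unbiasedness
pred = (np.sum(u**2) * np.sum(v**2) + (u @ v)**2
        - 2 * np.sum(u**2 * v**2)) / n                      # Eq. (15) / n
print(est.var(), pred)                                      # variance check
```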
B PROOF OF THEOREM 4.3. Fix a query vector $q \in \mathbb{R}^N$ and let $X$ be a random vector drawn according to the following probabilistic model. Coordinate $i$, $X_i$, is non-zero with probability $p_i > 0$ and, if it is non-zero, draws its value from a distribution with mean $\mu$ and variance $\sigma^2$. $Z_{\text{Sketch}} = \langle \phi(q), \phi(X) \rangle$, with $\phi(u) = Ru$ and $R \in \{-1/\sqrt{n}, 1/\sqrt{n}\}^{n \times N}$, has expected value $\mu \sum_i p_i q_i$ and variance:
2309.09013 | 124 | $$\frac{1}{n}\Big[(\mu^2 + \sigma^2)\Big(\lVert q \rVert_2^2 \sum_i p_i - \sum_i p_i q_i^2\Big) + \mu^2\Big(\Big(\sum_i q_i p_i\Big)^2 - \sum_i (q_i p_i)^2\Big)\Big].$$
Proof. It is easy to see that:
$$\mathbb{E}[Z_{\text{Sketch}}] = \sum_i q_i\, \mathbb{E}[X_i] = \mu \sum_i p_i q_i.$$
As for variance, we start from Theorem 4.2 and arrive at the following expression:
$$\frac{1}{n}\Big(\lVert q \rVert_2^2\, \mathbb{E}[\lVert X \rVert_2^2] + \mathbb{E}[\langle q, X \rangle^2] - 2\sum_i q_i^2\, \mathbb{E}[X_i^2]\Big), \quad (16)$$

where the expectation is with respect to $X$. Let us consider the terms inside the parentheses one by one. The first term becomes:
2309.09013 | 125 | $$\lVert q \rVert_2^2\, \mathbb{E}[\lVert X \rVert_2^2] = \lVert q \rVert_2^2 \sum_i \mathbb{E}[X_i^2] = \lVert q \rVert_2^2 (\mu^2 + \sigma^2) \sum_i p_i.$$
The second term reduces to:

$$\mathbb{E}[\langle q, X \rangle^2] = \mathbb{E}[\langle q, X \rangle]^2 + \operatorname{Var}[\langle q, X \rangle] = \mu^2\Big(\sum_i q_i p_i\Big)^2 + \sum_i q_i^2 \big[(\mu^2 + \sigma^2) p_i - \mu^2 p_i^2\big] = \mu^2\Big(\Big(\sum_i q_i p_i\Big)^2 - \sum_i q_i^2 p_i^2\Big) + \sum_i q_i^2 p_i (\mu^2 + \sigma^2).$$
2309.09013 | 126 | Finally, the last term breaks down to:

$$-2\sum_i q_i^2\, \mathbb{E}[X_i^2] = -2\sum_i q_i^2 (\mu^2 + \sigma^2) p_i = -2(\mu^2 + \sigma^2)\sum_i q_i^2 p_i.$$
Putting all these terms back into Equation (16) yields the desired expression for variance. □
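As a numerical sanity check (my illustration, not the paper's), note that the stated variance is Equation (16), i.e., Theorem 4.2's sketching variance averaged over $X$; accordingly, the simulation below measures the mean squared error of $Z_{\text{Sketch}}$ around $\langle q, X \rangle$. Active values are drawn from a Gaussian $\mathcal{N}(\mu, \sigma^2)$, an assumption made for the example; the theorem only requires a mean and a variance.

```python
# Monte Carlo check of Theorem 4.3 -- illustrative only. Active values are
# drawn N(mu, sigma^2); the theorem assumes only a mean and a variance.
import numpy as np

rng = np.random.default_rng(1)
N, n, mu, sigma, trials = 100, 32, 0.5, 0.2, 20_000
q = rng.standard_normal(N)
p = rng.uniform(0.05, 0.3, size=N)                 # activation probabilities

est = np.empty(trials); sq_err = np.empty(trials)
for t in range(trials):
    X = (rng.random(N) < p) * rng.normal(mu, sigma, N)
    R = rng.choice([-1.0, 1.0], size=(n, N)) / np.sqrt(n)
    z = (R @ q) @ (R @ X)                          # Z_Sketch
    est[t] = z
    sq_err[t] = (z - q @ X) ** 2                   # sketching error given X

print(est.mean(), mu * np.sum(p * q))              # expected value matches
pred = ((mu**2 + sigma**2) * ((q @ q) * p.sum() - np.sum(p * q**2))
        + mu**2 * (np.sum(q * p)**2 - np.sum((q * p)**2))) / n
print(sq_err.mean(), pred)                         # Eq. (16), averaged over X
```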
C PROOF OF THEOREM 4.5. Let $X$ be a random vector drawn according to the following probabilistic model. Coordinate $i$, $X_i$, is non-zero with probability $p_i > 0$ and, if it is non-zero, draws its value from a distribution with PDF $\phi$ and CDF $\Phi$. Then:

$$\mathbb{P}\big[X_{\pi(i)} - X_i \le \delta\big] \approx (1 - p_i)\, e^{-\frac{(1 - \Phi(\delta))}{n}\sum_{j \neq i} p_j} + p_i \int e^{-\frac{(1 - \Phi(\alpha + \delta))}{n}\sum_{j \neq i} p_j}\, \phi(\alpha)\, d\alpha.$$
2309.09013 | 127 | Proof. Decomposing the probability of the event by conditioning on whether $X_i$ is "active" (i.e., its value is drawn from the distribution with PDF $\phi$) or "inactive" (i.e., it is 0), we arrive at:

$$\mathbb{P}\big[X_{\pi(i)} - X_i \le \delta\big] = p_i\, \mathbb{P}\big[X_{\pi(i)} - X_i \le \delta \mid X_i \text{ is active}\big] + (1 - p_i)\, \mathbb{P}\big[X_{\pi(i)} \le \delta \mid X_i \text{ is inactive}\big].$$
2309.09013 | 128 | The term conditioned on $X_i$ being active is given by Theorem 5.4 of [16]. The other event, involving an inactive $X_i$, happens when all values that collide with $X_{\pi(i)}$ are less than or equal to $\delta$. This event is equivalent to the event that every active coordinate whose value is greater than $\delta$ maps to any sketch coordinate except $\pi(i)$. Using this alternative event, we can write the conditional probability as follows:

$$\Big(1 - \frac{1}{n}\Big)^{(1 - \Phi(\delta))\sum_{j \neq i} p_j} \approx e^{-\frac{(1 - \Phi(\delta))}{n}\sum_{j \neq i} p_j},$$

where we used $e^{-1} \approx (1 - 1/n)^n$. That completes the proof. □
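The approximation can be checked by simulation. The sketch below is illustrative only: it assumes active values are Uniform(0, 1) (so $\phi \equiv 1$ and $\Phi(x) = \min(\max(x, 0), 1)$), draws a uniformly random map $\pi$ from the $N$ coordinates to the $n$ sketch cells, and uses the upper-bound sketch suggested by the proof's collision argument, where cell $c$ stores the maximum of the colliding values (0 if empty); the constants are arbitrary choices.

```python
# Monte Carlo check of Theorem 4.5 -- illustrative only. Assumes active
# values ~ Uniform(0, 1) and an upper-bound sketch: cell c stores
# max{X_j : pi(j) = c}, or 0 if the cell is empty.
import numpy as np

rng = np.random.default_rng(2)
N, n, i, delta, trials = 400, 64, 0, 0.1, 20_000
p = np.full(N, 0.1)                                # activation probabilities

hits = 0
for t in range(trials):
    X = (rng.random(N) < p) * rng.random(N)        # sparse nonnegative vector
    pi = rng.integers(0, n, size=N)                # random coordinate-to-cell map
    sketch_val = X[pi == pi[i]].max()              # X_{pi(i)}
    hits += (sketch_val - X[i]) <= delta
print(hits / trials)                               # empirical probability

S = p.sum() - p[i]                                 # sum of p_j over j != i
Phi = lambda x: np.clip(x, 0.0, 1.0)               # CDF of Uniform(0, 1)
alpha = np.linspace(0.0, 1.0, 1001)
approx = ((1 - p[i]) * np.exp(-(1 - Phi(delta)) * S / n)
          + p[i] * np.mean(np.exp(-(1 - Phi(alpha + delta)) * S / n)))
print(approx)                                      # the theorem's approximation
```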
2309.07864 | 0 | arXiv:2309.07864v3 [cs.AI] 19 Sep 2023
# The Rise and Potential of Large Language Model Based Agents: A Survey
Zhiheng Xi*†, Wenxiang Chen*, Xin Guo*, Wei He*, Yiwen Ding*, Boyang Hong*, Ming Zhang*, Junzhe Wang*, Senjie Jin*, Enyu Zhou*,
Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin,
Shihan Dou, Rongxiang Weng, Wensen Cheng,
Qi Zhang†, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang and Tao Gui†
Fudan NLP Group
# Abstract | 2309.07864#0 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 0 | arXiv:2309.07915v2 [cs.CL] 2 Oct 2023
Preprint
# MMICL: EMPOWERING VISION-LANGUAGE MODEL WITH MULTI-MODAL IN-CONTEXT LEARNING
Haozhe Zhao*1, Zefan Cai*1, Shuzheng Si*1, Xiaojian Ma2, Kaikai An1, Liang Chen1, Zixuan Liu3, Sheng Wang3, Wenjuan Han†4, Baobao Chang†1 1National Key Laboratory for Multimedia Information Processing, Peking University 2National Key Laboratory of General Artificial Intelligence, BIGAI 3Paul G. Allen School of Computer Science and Engineering, University of Washington 4Beijing Jiaotong University [email protected], [email protected] https://github.com/PKUnlp-icler/MIC
# ABSTRACT | 2309.07915#0 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 1 | For a long time, humanity has pursued artificial intelligence (AI) equivalent to or surpassing the human level, with AI agents considered a promising vehicle for this pursuit. AI agents are artificial entities that sense their environment, make decisions, and take actions. Many efforts have been made to develop intelligent agents, but they mainly focus on advancement in algorithms or training strategies to enhance specific capabilities or performance on particular tasks. Actually, what the community lacks is a general and powerful model to serve as a starting point for designing AI agents that can adapt to diverse scenarios. Due to the versatile capabilities they demonstrate, large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI), offering hope for building general AI agents. Many researchers have leveraged LLMs as the foundation to build AI agents and have achieved significant progress. In this paper, we perform a comprehensive survey on LLM-based agents. We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain,
2309.07915 | 1 | Since the resurgence of deep learning, vision-language models (VLMs) enhanced by large language models (LLMs) have grown exponentially in popularity. However, while LLMs can utilize extensive background knowledge and task information with in-context learning, most VLMs still struggle with understanding complex multi-modal prompts with multiple images, making VLMs less effective in downstream vision-language tasks. In this paper, we address the limitation above by 1) introducing MMICL, a new approach to allow the VLM to deal with multi-modal inputs efficiently; 2) proposing a novel context scheme to augment the in-context learning ability of the VLM; 3) constructing the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the VLM's ability to understand complex multi-modal prompts. Our experiments confirm that MMICL achieves new state-of-the-art zero-shot performance on a wide range of general vision-language tasks, especially for complex benchmarks, including MME and MMBench. Our analysis demonstrates that MMICL effectively tackles the challenge of complex multi-modal prompt understanding and exhibits an impressive ICL ability. Furthermore, we observe that MMICL successfully
2309.07864 | 2 | and explain why LLMs are suitable foundations for agents. Building upon this, we present a general framework for LLM-based agents, comprising three main components: brain, perception, and action, and the framework can be tailored for different applications. Subsequently, we explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation. Following this, we delve into agent societies, exploring the behavior and personality of LLM-based agents, the social phenomena that emerge from an agent society, and the insights they offer for human society. Finally, we discuss several key topics and open problems within the field. A repository for the related papers at https://github.com/WooooDyy/LLM-Agent-Paper-List.
2309.07864 | 4 | 2.3 Why is LLM suitable as the primary component of an Agent's brain? · 3.1 Brain · 3.1.1 Natural Language Interaction · 3.1.2 Knowledge · 3.1.3 Memory · 3.1.4 Reasoning and Planning · 3.1.5 Transferability and Generalization · 3.2 Perception
2309.07915 | 4 | General-purpose vision-language pre-trained models (VLMs) have made significant advancements (Li et al., 2022; 2023d;g; Zhu et al., 2023; Li et al., 2023b). Recent VLMs mostly augment a large language model (LLM) with a visual encoder and exhibit impressive zero-shot capacities in various visual tasks. However, unlike LLMs that can extract rich background knowledge and task information from the prompt with in-context learning (ICL), most VLMs still struggle to understand complex multi-modal prompts that include multiple images. Previous studies (Li et al., 2023d;b) primarily focus on handling user queries with a single image rather than multi-modal prompts with interleaved multiple images and text. Although some VLMs like Flamingo (Alayrac et al., 2022) and Kosmos-1 (Huang et al., 2023b) can handle user queries with multiple images, their pre-training data cannot provide more sophisticated multi-modal prompts than interleaved image and text crawled from the web (Awadalla et al., 2023). Hence, there is a gap between the prompts used in pre-training these VLMs and the user queries in real-world
2309.07864 | 5 | 3.2 Perception · 3.2.1 Textual Input · 3.2.2 Visual Input · 3.2.3 Auditory Input · 3.2.4 Other Input · 3.3 Action · 3.3.1 Textual Output · 3.3.2 Tool Using
2309.07864 | 6 | Output · 3.3.2 Tool Using · 3.3.3 Embodied Action · 4.1 General Ability of Single Agent · 4.1.1 Task-oriented Deployment · 4.1.2 Innovation-oriented Deployment · 4.1.3 Lifecycle-oriented Deployment · 4.2 Coordinating Potential of Multiple Agents · 4.2.1 Cooperative Interaction for
2309.07864 | 7 | Potential of Multiple Agents · 4.2.1 Cooperative Interaction for Complementarity · 4.2.2 Adversarial Interaction for Advancement · 4.3 Interactive Engagement between Human and Agent · 4.3.1 Instructor-Executor Paradigm · 4.3.2 Equal Partnership Paradigm · 5.1 Behavior and Personality of LLM-based Agents
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers is available at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 7 | [Figure 1 residue: garbled example dialogues (a)-(d) between a user and MMICL, including questions about whether a horse appears in particular images, a comparison of a sporty car on a road with an off-road jeep against mountainous terrain, and a baby crying after breaking a cup.] | 2309.07915#7 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and exhibits
impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07915 | 8 | [Figure 1 residue, continued: dialogues (d)-(f), including a question about what happens to a man after hitting a golf ball (he falls to the ground in a grassy area surrounded by trees) and a request to describe four images depicting the growth phases of a tree.] | 2309.07915#8 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and exhibits
impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 9 | [Table-of-contents fragment] 5.2 Environment for Agent Society; 5.2.1 Text-based Environment; 5.2.2 Virtual Sandbox Environment; 5.2.3 Physical Environment; 5.3 Society Simulation with LLM-based Agents; 5.3.1 Key Properties and Mechanism of Agent Society; 5.3.2 Insights from Agent Society; 5.3.3 Ethical and Social Risks in Agent Society; 6.1 Mutual Benefits between LLM Research and Agent Research | 2309.07864#9 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers is available at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 9 | Figure 1: Examples of vision-language dialogue generated by MMICL, which typically contain prompts with interleaved images and text. MMICL understands spatial (a), logical (b), and temporal (e) relationships among images. MMICL can also grasp text-to-image references, as in (c), (d), and (f).
images. For example, the user may ask a specific question about multiple images (Fig. 1.c and Fig. 1.f) or use multiple images as exemplars to ask a question about only a specific image (Fig. 1.d). However, the training data used in previous studies (Li et al., 2023d; Alayrac et al., 2022; Huang et al., 2023a) are crawled from the web and may lack explicit text-to-image references. VLMs thus might fail to handle user queries involving intricate text-to-image references.
Hard to Understand the Relationships between Multiple Images: There are often spatial, temporal, and logical relationships between multiple images, and correctly understanding them allows the model to handle user queries better. However, the pre-training data used by previous VLMs (Alayrac et al., 2022) are collected from the internet, lacking close connections among images, especially when these images are far apart on the same webpage. It hampers the ability of VLMs to understand the intricate relationships among the images and further limits their reasoning ability. | 2309.07915#9 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and exhibits
impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 10 | [Table-of-contents fragment] 6.1 Mutual Benefits between LLM Research and Agent Research; 6.2 Evaluation for LLM-based Agents; 6.3 Security, Trustworthiness and Other Potential Risks of LLM-based Agents; 6.3.1 Adversarial Robustness; 6.3.2 Trustworthiness; 6.3.3 Other Potential Risks; 6.4 Scaling Up the Number of Agents; 6.5 Open Problems | 2309.07864#10 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers is available at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 10 | Hard to Learn from In-Context Multi-Modal Demonstrations: Previous studies have shown that pretrained LLMs can benefit from a few in-context demonstrations (Brown et al., 2020; Dong et al., 2023). However, the ICL ability of current VLMs is rather limited. Specifically: 1) VLMs like BLIP-2 (Li et al., 2023d) and LLaVA (Li et al., 2023b) only support multi-modal prompts with a single image, hampering their ability to use multiple multi-modal demonstrations to enhance performance during inference; 2) although VLMs such as Flamingo (Alayrac et al., 2022) support multi-image inputs during pretraining and exhibit emergent ICL abilities, their context schemes fail to provide text-to-image references and closely related images. This prevents them from constructing sufficiently sophisticated prompts, thereby limiting the effectiveness of their ICL ability. Moreover, the lack of further supervised instruction tuning hinders their effectiveness on downstream tasks.
In this paper, to address the aforementioned limitations 1) We present MMICL, a new approach to allow VLMs to efficiently deal with multi-modal inputs, including relationships among multiple images and text-to-image references. 2) We propose a novel context scheme in which incorporating
| 2309.07915#10 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and exhibits
impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
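To make the in-context-learning setting discussed in the chunk above concrete, the sketch below assembles a few-shot multi-modal prompt from (image, question, answer) demonstrations. This is a minimal sketch only; the "[IMGi]" proxy tokens, the prompt wording, and the helper name are illustrative assumptions, not MMICL's exact format.

```python
# A minimal sketch (not the paper's implementation) of a few-shot multi-modal
# prompt. Each "[IMGi]" proxy token is assumed to be replaced by the visual
# embeddings of image i before the prompt reaches the language model.
from typing import List, Tuple

def fewshot_prompt(demos: List[Tuple[str, str, str]],
                   query_image: str, query_question: str) -> str:
    """Concatenate (image token, question, answer) demonstrations, then the
    query with its answer slot left open for the model to fill."""
    parts = [f"image: {img} Q: {q} A: {a}" for img, q, a in demos]
    parts.append(f"image: {query_image} Q: {query_question} A:")
    return "\n".join(parts)

demos = [("[IMG0]", "Is there a horse in the image?", "Yes"),
         ("[IMG1]", "Is there a horse in the image?", "No")]
print(fewshot_prompt(demos, "[IMG2]", "Is there a horse in the image?"))
```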
2309.07915 | 11 | Figure 2: Comparison of different VLM architectures: VLMs focused on a single image, VLMs with few-shot ability, and MMICL with equal treatment of image and text representation.
an extra image declaration section, along with the inclusion of image proxy tokens, enhances the ICL ability of the VLM. 3) We construct a multi-modal in-context learning dataset in accordance with the proposed scheme. The dataset is adapted from a range of existing datasets and can be used to provide support for the training of more capable VLMs. | 2309.07915#11 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and exhibits
impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
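To make the image-declaration scheme described in the chunk above concrete, the following minimal sketch shows how a prompt with explicit text-to-image references might be assembled. The "[IMGi]" proxy-token format, the declaration wording, and the helper names are illustrative assumptions rather than MMICL's exact implementation.

```python
# A minimal sketch, assuming "[IMGi]" proxy tokens stand in for the visual
# embeddings of image i; wording and helper names are illustrative only.
def declare_images(num_images: int) -> str:
    """Build an image-declaration section that names each image explicitly."""
    return " ".join(f"image {i}: [IMG{i}]" for i in range(num_images))

def build_prompt(question: str, num_images: int) -> str:
    """Interleave the declarations with the instruction so the model can
    resolve textual references such as 'image 1' back to [IMG1]."""
    return (f"Carefully analyze {declare_images(num_images)} "
            f"to answer the question. Q: {question} A:")

# Example: a two-image query with explicit text-to-image references.
print(build_prompt("Are the men and women quarreling?", num_images=2))
```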
2309.07864 | 12 | # 6 Discussion
7 Conclusion
# 1 Introduction
"If they find a parrot who could answer to everything, I would claim it to be an intelligent being without hesitation."
– Denis Diderot, 1875
Artificial Intelligence (AI) is a field dedicated to designing and developing systems that can replicate human-like intelligence and abilities [1]. As early as the 18th century, philosopher Denis Diderot introduced the idea that if a parrot could respond to every question, it could be considered intelligent [2]. While Diderot was referring to living beings, like the parrot, his notion highlights the profound concept that a highly intelligent organism could resemble human intelligence. In the 1950s, Alan Turing expanded this notion to artificial entities and proposed the renowned Turing Test [3]. This test is a cornerstone in AI and aims to explore whether machines can display intelligent behavior comparable to humans. These AI entities are often termed "agents", forming the essential building blocks of AI systems. Typically in AI, an agent refers to an artificial entity capable of perceiving its surroundings using sensors, making decisions, and then taking actions in response using actuators [1; 4]. | 2309.07864#12 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers is available at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 12 | Our experiments show that MMICL achieves new state-of-the-art performance on various vision-language benchmarks, including MME (Fu et al., 2023) and MMBench (Liu et al., 2023c). Comprehensive examinations of the three limitations we aim to address reveal that MMICL exhibits exceptional ability in understanding text-to-image references (a 13-point improvement on the vision-language compositionality benchmark Winoground (Thrush et al., 2022a)) and intricate relationships among images (a 12-point improvement on the multi-image reasoning benchmark RAVEN (Huang et al., 2023a)). Moreover, MMICL demonstrates impressive multi-modal ICL performance across various tasks. We also observe that MMICL efficiently mitigates language bias, which often causes VLMs to ignore visual content when facing extensive textual context, leading to hallucinations.
# 2 MMICL
2.1 MODEL ARCHITECTURE | 2309.07915#12 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 13 | The concept of agents originated in Philosophy, with roots tracing back to thinkers like Aristotle and Hume [5]. It describes entities possessing desires, beliefs, intentions, and the ability to take actions [5]. This idea transitioned into computer science, intending to enable computers to understand users' interests and autonomously perform actions on their behalf [6; 7; 8]. As AI advanced, the term "agent" found its place in AI research to depict entities showcasing intelligent behavior and possessing qualities like autonomy, reactivity, pro-activeness, and social ability [4; 9]. Since then, the exploration and technical advancement of agents have become focal points within the AI community [1; 10]. AI agents are now acknowledged as a pivotal stride towards achieving Artificial General Intelligence (AGI) 1, as they encompass the potential for a wide range of intelligent activities [4; 11; 12]. | 2309.07864#13 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers is available at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 13 | Most VLMs utilize Visual-Prompt Generators (VPGs) (e.g., the Resampler (Alayrac et al., 2022) or Q-former (Li et al., 2023d)) to extract visual embeddings from the image features encoded by vision backbones, and use these visual embeddings to help LLMs understand visual inputs. The model architecture shown in Fig. 2.a belongs to VLMs that focus on prompts with a single image, such as BLIP-2 (Li et al., 2023d), which always places the image at the top of the entire input and cannot handle inputs with multiple images. In Fig. 2.b, VLMs with few-shot ability, such as Flamingo (Alayrac et al., 2022), encode images into image embeddings with a fixed number of visual tokens and use cross-attention in the LLM to mix the visual and text content. Different from previous work, MMICL, shown in Fig. 2.c, treats image and text representations equally and establishes the reference between image and text via image declaration. This gives users the flexibility to input multiple images and text in any desired order, with no restrictions on the quantity or placement of images in the context. As shown in Fig. 4, each given | 2309.07915#13 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 14 | From the mid-20th century, significant strides were made in developing smart AI agents as research delved deep into their design and advancement [13; 14; 15; 16; 17; 18]. However, these efforts have predominantly focused on enhancing specific capabilities, such as symbolic reasoning, or mastering particular tasks like Go or Chess [19; 20; 21]. Achieving a broad adaptability across varied scenarios remained elusive. Moreover, previous studies have placed more emphasis on the design of algorithms and training strategies, overlooking the development of the model's inherent general abilities like knowledge memorization, long-term planning, effective generalization, and efficient interaction [22; 23]. Actually, enhancing the inherent capabilities of the model is the pivotal factor for advancing the agent further, and the domain is in need of a powerful foundational model endowed with a variety of key attributes mentioned above to serve as a starting point for agent systems. | 2309.07864#14 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers is available at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 14 | to input multiple images and text in any desired order, with no restrictions on the quantity or placement of images in the context. As shown in Fig. 4, each given image is encoded by a vision encoder (e.g., ViT (Radford et al., 2021)) to get the image representation. Then, we use the Q-former as the VPG to extract the visual embedding. We utilize a fully connected layer as the projection layer to convert each visual embedding to the same dimension as the text embedding of the LLM. Finally, we combine the visual embeddings of multiple images with text embeddings in an interleaved style and feed them into the LLM. We set the weights for mapping query and value vectors in the attention layers of the LLM as learnable to better adapt to multi-modal prompts with multiple images. More details are presented in Appendix D. | 2309.07915#14 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
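As a companion to the pipeline described in the chunk above (vision encoder, Q-former-style VPG, a fully connected projection into the LLM's embedding space, interleaved fusion, and learnable query/value attention weights), here is a minimal PyTorch-flavored sketch under stated assumptions. Module names, dimensions, the `inputs_embeds` calling convention, and the "q_proj"/"v_proj" parameter-name pattern are assumptions that depend on the chosen backbones; this is not the released MMICL code.

```python
import torch
import torch.nn as nn

# A minimal sketch of the interleaving pipeline described above; all names
# and dimensions are illustrative assumptions.
class InterleavedVLM(nn.Module):
    def __init__(self, vision_encoder, vpg, llm, vis_dim=768, txt_dim=2048):
        super().__init__()
        self.vision_encoder = vision_encoder  # e.g., a ViT producing patch features
        self.vpg = vpg                        # Q-former-style visual prompt generator
        self.proj = nn.Linear(vis_dim, txt_dim)  # project to the LLM embedding size
        self.llm = llm

    def embed_images(self, images):
        # One sequence of projected visual embeddings per image.
        return [self.proj(self.vpg(self.vision_encoder(img))) for img in images]

    def forward(self, text_embeds, images):
        # Interleave text segments and image embeddings in their given order
        # (text_0, image_0, text_1, image_1, ...) and feed the LLM with
        # embeddings directly instead of token ids.
        visual = self.embed_images(images)
        pieces = []
        for i, seg in enumerate(text_embeds):
            pieces.append(seg)
            if i < len(visual):
                pieces.append(visual[i])
        inputs_embeds = torch.cat(pieces, dim=0).unsqueeze(0)
        return self.llm(inputs_embeds=inputs_embeds)

def unfreeze_query_value(llm):
    # Keep only the attention query/value projections trainable, freezing the
    # rest, mirroring the learnable query/value weights described above; the
    # name test assumes LLaMA-style parameter names and differs per backbone.
    for name, param in llm.named_parameters():
        param.requires_grad = ("q_proj" in name) or ("v_proj" in name)
```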
2309.07864 | 15 | The development of large language models (LLMs) has brought a glimmer of hope for the further development of agents [24; 25; 26], and significant progress has been made by the community [22; 27; 28; 29]. According to the notion of World Scope (WS) [30], which encompasses five levels that depict the research progress from NLP to general AI (i.e., Corpus, Internet, Perception, Embodiment, and Social), pure LLMs are built on the second level, with internet-scale textual inputs and outputs. Despite this, LLMs have demonstrated powerful capabilities in knowledge acquisition, instruction comprehension, generalization, planning, and reasoning, while displaying effective natural language interactions with humans. These advantages have earned LLMs the designation of sparks for AGI [31], making them highly desirable for building intelligent agents to foster a world where humans and agents coexist harmoniously [22]. Starting from this, if we elevate LLMs to the status of agents and equip them with an expanded perception space and action space, they have the potential to reach the third and fourth levels of WS. Furthermore, these LLM-based agents can tackle more complex tasks through cooperation or competition, and emergent social phenomena can be observed when placing them together, potentially achieving the fifth WS level. As shown in Figure 1, we envision a harmonious society composed of AI agents in which humans can also participate. | 2309.07864#15 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers is available at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 16 | In this paper, we present a comprehensive and systematic survey focusing on LLM-based agents, attempting to investigate the existing studies and prospective avenues in this burgeoning field. To this end, we begin by delving into crucial background information (§ 2). In particular, we commence by tracing the origin of AI agents from philosophy to the AI domain, along with a brief overview of the
Footnote 1: Also known as Strong AI.
Figure 1: Scenario of an envisioned society composed of AI agents, in which humans can also participate. The above image depicts some specific scenes within society. In the kitchen, one agent orders dishes, while another agent is responsible for planning and solving the cooking task. At the concert, three agents are collaborating to perform in a band. Outdoors, two agents are discussing lantern-making, planning the required materials, and finances by selecting and using tools. Users can participate in any of these stages of this social activity. | 2309.07864#16 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers is available at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 16 | Figure 3 panel text (garbled in extraction): Original VL Task; (a) Image Declaration; (b) Multi-modal Data with Interconnected Images; (c) Unified Multi-modal In-context Format; illustrated with a visual question answering example ("Are the men and women quarreling?" answered "Yes") and an image captioning example ("An airplane flying in the sky."). | 2309.07915#16 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 17 | debate surrounding the existence of artificial agents (§ 2.1). Next, we take the lens of technological trends to provide a concise historical review of the development of AI agents (§ 2.2). Finally, we delve into an in-depth introduction of the essential characteristics of agents and elucidate why large language models are well-suited to serve as the main component of brains or controllers for AI agents (§ 2.3). | 2309.07864#17 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 18 | Inspired by the definition of the agent, we present a general conceptual framework for the LLM-based agents with three key parts: brain, perception, and action (§ 3), and the framework can be tailored to suit different applications. We first introduce the brain, which is primarily composed of a large language model (§ 3.1). Similar to humans, the brain is the core of an AI agent because it not only stores crucial memories, information, and knowledge but also undertakes essential tasks of information processing, decision-making, reasoning, and planning. It is the key determinant of whether the agent can exhibit intelligent behaviors. Next, we introduce the perception module (§ 3.2). For an agent, this module serves a role similar to that of sensory organs for humans. Its primary function is to expand the agent's perceptual space from text-only to a multimodal space that includes diverse sensory modalities like text, sound, visuals, touch, smell, and more. This expansion enables the agent to better perceive information from the external environment. Finally, we present the action module for expanding the action space of an agent (§ 3.3). Specifically, we expect the agent to be able to produce textual output, take embodied actions, and use tools so that it can better respond to environmental changes and provide feedback, and even alter and shape the environment (see the code sketch after this record). | 2309.07864#18 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
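To ground the brain-perception-action framework described in the record above (2309.07864#18), here is a minimal Python sketch of the loop it implies. All class and method names and the placeholder planning logic are illustrative assumptions, not code from the survey or from any surveyed system.

```python
# A minimal sketch of an LLM-based agent loop: perception feeds a "brain"
# (the LLM plus memory), which plans; the action module executes the plan.
# All names and the toy logic are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Agent:
    memory: list = field(default_factory=list)  # the brain's stored context

    def perceive(self, observation: str) -> str:
        # Perception module: map raw (possibly multimodal) input into a
        # representation the brain can consume; here text passes through.
        return observation

    def think(self, percept: str) -> str:
        # Brain module: a real system would prompt an LLM with the percept
        # plus retrieved memories; we stand in a trivial "plan" for it.
        self.memory.append(percept)
        return f"plan for: {percept}"

    def act(self, plan: str) -> str:
        # Action module: textual output, tool use, or embodied actions.
        return f"executed {plan}"


agent = Agent()
print(agent.act(agent.think(agent.perceive("user asks for a recipe"))))
```

The point of the sketch is only the division of labor among the three modules; a real agent would replace think() with an LLM call and act() with tool or environment APIs.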
2309.07915 | 18 | Figure 3: Context scheme for MMICL, which seamlessly transforms the interleaved image-text data into training context in a unified format
2.2.1 IMAGE DECLARATION
Users may use textual descriptions to refer to particular images in their queries. Such references can provide information about the visual content mentioned in the text to the VLM, allowing it to learn the alignment between the two modalities. To precisely link text and image, we form image declaration templates for each image in mixed inputs, as shown in Fig. 3.a. Firstly, we allocate a unique image proxy ([IMGj]) to reference the visual embedding of image j, which provides a unique identifier for VLMs to index and distinguish between visual and text embeddings. Then, we utilize natural language prompts to establish references between text and image. Incorporating the explicit text-to-image reference in the image declaration assists the model in correlating the text with the appropriate image. Meanwhile, the image declaration, maintained as textual content, also preserves the flexibility to appear at any position within the prompt. Each instance $I_i$ follows the structure below, where $X_i$ denotes the set of image declarations that can be placed anywhere within the instance $I_i$, and $q_i$ and $a_i$ denote the question (with instruction) and the corresponding answer, respectively.
$I_i = (X_i, q_i, a_i)$ (1)
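As a concrete reading of Eq. (1), here is a minimal Python sketch that assembles an instance from image declarations and a question. The proxy-token format and the surrounding wording are assumptions modeled on the Figure 3 examples, not the paper's released code.

```python
# Build an instance (X_i, q_i) of Eq. (1): each image j is referenced via a
# unique proxy token [IMGj] inside a natural-language image declaration.
# Template wording is an assumption based on the Figure 3 examples.
def image_declaration(j: int) -> str:
    return f"image {j}: [IMG{j}]"

def build_instance(num_images: int, question: str) -> str:
    x_i = " ".join(image_declaration(j) for j in range(num_images))  # X_i
    return f"{x_i} {question}"                                       # plus q_i

print(build_instance(2, "Are the men and women quarreling?"))
# -> image 0: [IMG0] image 1: [IMG1] Are the men and women quarreling?
```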
2.2.2 MULTI-MODAL DATA WITH INTERCONNECTED IMAGES | 2309.07915#18 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 19 | After that, we provide a detailed and thorough introduction to the practical applications of LLM-based agents and elucidate the foundational design pursuit of "Harnessing AI for good" (§ 4). To start, we delve into the current applications of a single agent and discuss their performance in text-based tasks and simulated exploration environments, with a highlight on their capabilities in handling specific tasks, driving innovation, and exhibiting human-like survival skills and adaptability (§ 4.1). Following that, we take a retrospective look at the development history of multi-agent systems. We introduce the interactions between agents in LLM-based multi-agent system applications, where they engage in
collaboration, negotiation, or competition. Regardless of the mode of interaction, agents collectively strive toward a shared objective (§ 4.2). Lastly, considering the potential limitations of LLM-based agents in aspects such as privacy and security, ethical constraints, and data deficiencies, we discuss human-agent collaboration. We summarize the paradigms of collaboration between agents and humans: the instructor-executor paradigm and the equal partnership paradigm, along with specific applications in practice (§ 4.3). | 2309.07864#19 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 19 | $I_i = (X_i, q_i, a_i)$ (1)
2.2.2 MULTI-MODAL DATA WITH INTERCONNECTED IMAGES
To incorporate abundant multi-image information within the context scheme of MMICL, we generate interconnected multi-image data that includes spatial, logical, and temporal relationships. It aids MMICL in understanding the intricate relationships among images in user queries. Specifically, we derive frames from videos to build multi-image data. The frames extracted from a video inherently sustain close temporal and spatial relations, which infuse spatial and temporal correlation information among images into the context scheme. Besides, we build multi-image data from images depicting multiple object interactions. We detect the objects within the image and generate bounding boxes for each object. We acquire multiple sub-images of different objects by cropping the image according to the bounding boxes. We then replace the textual references to these objects with their corresponding cropped images, thus forming interleaved multi-modal data with logically and causally interconnected images, as delineated in Fig. 3.b (see the code sketch after this record). Each instance $I_i$ comprises a question-answer text pair along with K images, where $x_{i,k} \in X_i$ represents the image declaration for the k-th image.
$I_i = (\{x_1, x_2, \ldots, x_k\}, q_i, a_i)$ (2) | 2309.07915#19 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
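Here is the code sketch referenced in the record above (2309.07915#19): cropping detected objects by bounding box and swapping their textual mentions for image declarations. The detector output format, the file name, and the declaration wording are assumptions for illustration; the paper does not prescribe this exact API.

```python
# Turn a single image plus object mentions into interleaved multi-modal data:
# crop each object by its bounding box and replace its textual mention with
# an image declaration, yielding the (x_1..x_k, q_i) parts of Eq. (2).
from PIL import Image  # assumed dependency for cropping

def build_interleaved(image_path: str, objects: dict, question: str):
    """objects maps a mention string (e.g. 'the dog') to a (l, u, r, d) box."""
    image = Image.open(image_path)
    crops, text = [], question
    for k, (mention, box) in enumerate(objects.items()):
        crops.append(image.crop(box))  # sub-image for the k-th object
        text = text.replace(mention, f"image {k}: [IMG{k}]")
    return crops, text

# Example (assuming a local file "scene.jpg" exists):
# crops, text = build_interleaved(
#     "scene.jpg", {"the dog": (10, 20, 110, 160)}, "What is the dog holding?")
# print(text)  # -> What is image 0: [IMG0] holding?
```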
2309.07864 | 20 | Building upon the exploration of practical applications of LLM-based agents, we now shift our focus to the concept of the "Agent Society", examining the intricate interactions between agents and their surrounding environments (§ 5). This section begins with an investigation into whether these agents exhibit human-like behavior and possess corresponding personalities (§ 5.1). Furthermore, we introduce the social environments within which the agents operate, including the text-based environment, virtual sandboxes, and the physical world (§ 5.2). Unlike the previous section (§ 3.2), here we will focus on diverse types of environments rather than how the agents perceive them. Having established the foundation of agents and their environments, we proceed to unveil the simulated societies that they form (§ 5.3). We will discuss the construction of a simulated society, and go on to examine the social phenomena that emerge from it. Specifically, we will emphasize the lessons and potential risks inherent in simulated societies. | 2309.07864#20 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 20 | $I_i = (\{x_1, x_2, \ldots, x_k\}, q_i, a_i)$ (2)
2.2.3 UNIFIED MULTI-MODAL IN-CONTEXT FORMAT FOR DIFFERENT TASKS
We propose a design for producing multi-modal in-context learning data for different tasks to enrich the context scheme of MMICL. It aims to improve the instruction-aware ability of the VLM and expand
Figure 4 panel text (garbled in extraction): a worked example whose visible answer is "Yes, image 1 is quarrelling with image 2."; labeled components include a Vision Encoder, Projection, and Pretrained LLMs; Stage I Pretraining and Stage II Multi-Modal In-Context Tuning; a legend marking Unfreeze/Freeze, Text embedding, Visual embedding, Vision Prompt, and Image Proxy.
Figure 4: Illustration of MMICL architecture and training paradigm. The upper part shows an overview of the model architecture, and the lower part shows the pipeline of the two-stage training paradigm (a hedged code sketch of the stage II freezing setup follows this record). | 2309.07915#20 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
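Here is the hedged sketch referenced in the Figure 4 caption above: a PyTorch-style setup for stage II (multi-modal in-context tuning) in which heavy backbones stay frozen while a lightweight projection is trained. Which submodules MMICL actually freezes or unfreezes follows the figure's legend, which is garbled in this extraction, so the module split below is an assumption, not the authors' code.

```python
# Stage II sketch: freeze the vision encoder and the LLM backbone, train
# only the projection that maps visual features into the LLM embedding
# space. The choice of frozen modules here is an assumption.
import torch.nn as nn

def prepare_for_incontext_tuning(vision_encoder: nn.Module,
                                 projection: nn.Module,
                                 llm: nn.Module):
    for module in (vision_encoder, llm):
        for p in module.parameters():
            p.requires_grad = False   # kept frozen during stage II
    trainable = list(projection.parameters())
    for p in trainable:
        p.requires_grad = True        # updated on the MIC data
    return trainable                  # hand these to the optimizer
```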
2309.07864 | 21 | Finally, we discuss a range of key topics (§ 6) and open problems within the field of LLM-based agents: (1) the mutual benefits and inspirations of the LLM research and the agent research, where we demonstrate that the development of LLM-based agents has provided many opportunities for both the agent and LLM communities (§ 6.1); (2) existing evaluation efforts and some prospects for LLM-based agents from four dimensions, including utility, sociability, values, and the ability to continually evolve (§ 6.2); (3) potential risks of LLM-based agents, where we discuss the adversarial robustness and trustworthiness of LLM-based agents, along with other risks such as misuse, unemployment, and threats to the well-being of the human race (§ 6.3); (4) scaling up the number of agents, where we discuss the potential advantages and challenges of scaling up agent counts, along with the approaches of pre-determined and dynamic scaling (§ 6.4); (5) several open problems, such as the debate over whether LLM-based agents represent a potential path to AGI, challenges in moving from virtual simulated environments to physical environments, collective intelligence in AI agents, and Agent as a Service (§ 6.5). In closing, we hope this paper can provide inspiration to researchers and practitioners from relevant fields.
# 2 Background | 2309.07864#21 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 21 | Figure 4: Illustration of MMICL architecture and training paradigm. The upper part shows an overview of the model architecture, and the lower part shows the pipeline of the two-stage training paradigm.
its abilities for proficient multi-modal in-context learning. Specifically, we start by crafting diverse instructions for each task and generate different templates for the task utilizing these instructions. We then fill in a randomly selected template with the original task to assemble data equipped with instructions, as illustrated in Appendix F. Moreover, we convert the data into a multi-modal in-context format by constructing few-shot exemplars generated by sampling instances from the data. These exemplars are combined with the input instance to produce the multi-modal in-context data. In this way, we can transform all tasks into a unified multi-modal in-context format, as illustrated in Fig. 3.c (see the code sketch after this record). This method facilitates amassing an extensive amount of high-quality data from different tasks, enriching the context schema of MMICL with an abundant diversity of multi-modal in-context data teeming with diverse instructions. Ultimately, this improves the model's ability to follow instructions and its multi-modal in-context learning ability. Each instance $I_i$ comprises $N$ exemplars. | 2309.07915#21 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
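Here is the code sketch referenced in the record above (2309.07915#21): assembling a unified multi-modal in-context instance by sampling N exemplars from the same task and prepending them to the query. The Q/A template wording and the uniform sampling are assumptions for illustration.

```python
# Assemble an in-context instance: N exemplars (X_j, q_j, a_j) sampled from
# a task pool, followed by the query (X_i, q_i) awaiting its answer a_i.
import random

def format_example(declarations: str, question: str, answer: str = "") -> str:
    return f"{declarations} Q: {question} A: {answer}".strip()

def build_in_context_prompt(pool, query, n_shots: int = 2) -> str:
    exemplars = random.sample(pool, n_shots)     # uniform sampling (assumed)
    shots = "\n".join(format_example(*ex) for ex in exemplars)
    x_i, q_i = query
    return f"{shots}\n{format_example(x_i, q_i)}"

pool = [("image 0: [IMG0]", "What is shown?", "An airplane."),
        ("image 0: [IMG0]", "What is shown?", "A dog."),
        ("image 0: [IMG0]", "What is shown?", "A lantern.")]
print(build_in_context_prompt(pool, ("image 0: [IMG0]", "What is shown?")))
```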
2309.07864 | 22 | # 2 Background
In this section, we provide crucial background information to lay the groundwork for the subsequent content. We first discuss the origin of AI agents, from philosophy to the realm of AI, coupled with a discussion of the discourse regarding the existence of artificial agents (§ 2.1). Subsequently, we summarize the development of AI agents through the lens of technological trends (§ 2.2). Finally, we introduce the key characteristics of agents and demonstrate why LLMs are suitable to serve as the main part of the brains of AI agents (§ 2.3).
# 2.1 Origin of AI Agent
"Agent" is a concept with a long history that has been explored and interpreted in many fields. Here, we first explore its origins in philosophy, discuss whether artificial products can possess agency in a philosophical sense, and examine how related concepts have been introduced into the field of AI. | 2309.07864#22 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 22 | $I_i = (\{P_1, \cdots, P_N\}, X_i, q_i, a_i)$. Each exemplar $P_j = (X_j, q_j, a_j)$, where $X_j$ denotes the image declaration of the j-th exemplar, and $q_j$ and $a_j$ denote the question and answer for the j-th exemplar, respectively.
2.3 MULTIMODALITY IN-CONTEXT LEARNING (MIC) DATASET CONSTRUCTION
To help VLMs understand complex prompts, we construct the MIC dataset by gathering data from public data resources and converting them based on the context scheme. It has three key aspects: 1) image declaration, 2) multi-modal data with closely related images, and 3) multi-modal in-context data for different tasks. The training set of MIC comes from 16 datasets across 8 categories, while the test set comes from 18 datasets across 10 categories. Additional details can be found in Appendix B and Appendix C.
Algorithm 1 Image Declaration. Require: Interleaved multi-modal input $X$, containing visual embeddings $V = \{v_1, v_2, \ldots\}$ and text embeddings $H = \{h_1, h_2, \ldots\}$, where $v_i$ represents the image embedding and $h_i$ represents the span between the image embeddings. | 2309.07915#22 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 23 | Agent in philosophy. The core idea of an agent has a historical background in philosophical discussions, with its roots traceable to influential thinkers such as Aristotle and Hume, among others [5]. In a general sense, an "agent" is an entity with the capacity to act, and the term "agency" denotes the exercise or manifestation of this capacity [5]. In a narrower sense, "agency" is usually used to refer to the performance of intentional actions; correspondingly, the term "agent" denotes entities that possess desires, beliefs, intentions, and the ability to act [32; 33; 34; 35]. Note that agents can encompass not only individual human beings but also other entities in both the physical and virtual world. Importantly, the concept of an agent involves individual autonomy, granting them the ability to exercise volition, make choices, and take actions, rather than passively reacting to external stimuli. | 2309.07864#23 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 23 | Ensure: Interleaved multi-modal input with image declaration: $\hat{X}$
1: for each interleaved multi-modal input $X$ do
2:     $n \leftarrow$ number of images in $X$
3:     Initialize image proxy tokens $[IMG_1], [IMG_2], \ldots$
4:     for each image $i$ in $X$ do
5:         $Ref_i \leftarrow$ image declaration pairing $[IMG_i]$ with $v_i$
6:     end for
7:     $R \leftarrow \{Ref_1, Ref_2, \ldots\}$
8:     Replace $v_i$ in $X$ with $Ref_i$: $\hat{X} \leftarrow [Ref_1, h_1, Ref_2, h_2, \ldots]$
9: end for
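As a cross-check on the reconstruction above, here is a minimal Python rendering of Algorithm 1. The wording of each reference $Ref_i$ is an assumption based on Section 2.2.1, and visual embeddings are stood in for by placeholder strings.

```python
# Algorithm 1 in Python: swap each visual embedding v_i for a declaration
# Ref_i built from its proxy token, keeping the text spans h_i in place:
# X_hat = [Ref_1, h_1, Ref_2, h_2, ...].
def declare_images(visuals, spans):
    """visuals: [v_1, v_2, ...]; spans: [h_1, h_2, ...] between images."""
    refs = [f"image {i}: [IMG{i}]" for i in range(len(visuals))]  # Ref_i
    x_hat = []
    for ref, span in zip(refs, spans):
        x_hat.extend([ref, span])
    return x_hat

print(declare_images(["<v0>", "<v1>"], ["Is this a cat?", ""]))
# -> ['image 0: [IMG0]', 'Is this a cat?', 'image 1: [IMG1]', '']
```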
Firstly, we create an image declaration per instance in all datasets using Algorithm 1 to generate datasets with explicit text-to-image
| 2309.07915#23 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 24 |
From the perspective of philosophy, are artificial entities capable of agency? In a general sense, if we define agents as entities with the capacity to act, AI systems do exhibit a form of agency [5]. However, the term agent is more often used to refer to entities or subjects that possess consciousness, intentionality, and the ability to act [32; 33; 34]. Within this framework, it is not immediately clear whether artificial systems can possess agency, as it remains uncertain whether they possess internal states that form the basis for attributing desires, beliefs, and intentions. Some people argue that attributing psychological states like intention to artificial agents is a form of anthropomorphism and lacks scientific rigor [5; 36]. As Barandiaran et al. [36] stated, "Being specific about the requirements for agency has told us a lot about how much is still needed for the development of artificial forms of agency." In contrast, there are also researchers who believe that, in certain circumstances, employing the intentional stance (that is, interpreting agent behavior in terms of intentions) can provide a better description, explanation, and abstraction of the actions of artificial agents, much like it is done for humans [11; 37; 38]. | 2309.07864#24 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 24 | [Table, flattened in extraction: MME benchmark results per model. Cognition columns: Comm., Num., Text., Code.; Perception columns: Existen., Count, Pos., Color, OCR, Poster, Cele., Scene, Land., Art. Models: LLaVA, MiniGPT-4, MultiModal-GPT, VisualGLM-6B, VPGTrans, LaVIN, LLaMA-Adapter-V2, mPLUG-Owl, InstructBLIP, BLIP-2, Lynx, GIT2, Otter, Cheetor, LRV-Instruction, BLIVA. Per-cell scores are scrambled and not reliably recoverable.] | 2309.07915#24 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 25 | With the advancement of language models, the potential emergence of artificial intentional agents appears more promising [24; 25; 39; 40; 41]. In a rigorous sense, language models merely function as conditional probability models, using input to predict the next token [42]. Different from this, humans incorporate social and perceptual context, and speak according to their mental states [43; 44]. Consequently, some researchers argue that the current paradigm of language modeling is not compatible with the intentional actions of an agent [30; 45]. However, there are also researchers who propose that language models can, in a narrow sense, serve as models of agents [46; 47]. They argue that during the process of context-based next-word prediction, current language models can sometimes infer approximate, partial representations of the beliefs, desires, and intentions held by the agent who generated the context. With these representations, the language models can then generate utterances like humans. To support their viewpoint, they conduct experiments to provide some empirical evidence [46; 48; 49]. | 2309.07864#25 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
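The claim in chunk 2309.07864#25 above, that language models function as conditional probability models predicting the next token, corresponds to the standard autoregressive factorization. The LaTeX block below is a sketch of that textbook formulation; the notation is ours, not the survey's.

```latex
% Autoregressive language modeling: the joint probability of a token
% sequence factorizes into next-token conditionals, and generation picks
% (or samples) each next token from the current conditional.
\[
  p_\theta(x_1, \dots, x_T) = \prod_{t=1}^{T} p_\theta(x_t \mid x_{<t}),
  \qquad
  \hat{x}_t = \arg\max_{x} \, p_\theta(x \mid x_{<t}).
\]
```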
2309.07915 | 25 | 50.00 55.00 57.50 57.50 75.00 45.00 45.00 70.00 87.50 72.50 60.00 49.00 50.00 50.00 48.82 50.00 60.50 54.00 71.75 54.41 68.33 59.50 73.82 69.75 68.00 61.67 75.25 53.24 146.25 83.75 85.00 77.25 53.53 141.75 64.75 70.00 47.35 136.75 93.50 87.25 185.00 86.18 148.50 150.25 69.75 120.00 120.00 65.00 136.05 100.29 135.50 159.25 96.25 185.00 143.33 66.67 153.33 72.50 123.81 101.18 153.00 79.75 134.25 160.00 135.00 73.33 148.33 110.00 141.84 105.59 145.25 138.00 136.50 195.00 151.67 90.00 170.00 77.50 124.83 118.24 164.50 162.00 119.50 190.00 118.33 96.67 158.33 65.00 112.59 145.88 158.50 140.50 146.25 88.33 86.67 113.33 72.50 138.78 172.65 | 2309.07915#25 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 26 | Introduction of agents into AI. It might come as a surprise that researchers within the mainstream AI community devoted relatively minimal attention to concepts related to agents until the mid to late 1980s. Nevertheless, there has been a significant surge of interest in this topic within the realms of computer science and artificial intelligence communities since then [50; 51; 52; 53]. As Wooldridge et al. [4] stated, we can define AI by saying that it is a subfield of computer science that aims to design and build computer-based agents that exhibit aspects of intelligent behavior. So we can treat "agent" as a central concept in AI. When the concept of agent is introduced into the field of AI, its meaning undergoes some changes. In the realm of Philosophy, an agent can be a human, an animal, or even a concept or entity with autonomy [5]. However, in the field of artificial intelligence, an agent is a computational entity [4; 7]. Due to the seemingly metaphysical nature of concepts like consciousness and desires for computational entities [11], and given that we can only observe the behavior of the machine, many AI researchers, including Alan Turing, suggest temporarily setting aside the question of whether an agent is | 2309.07864#26 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 26 | 158.33 65.00 112.59 145.88 158.50 140.50 146.25 88.33 86.67 113.33 72.50 138.78 172.65 158.75 137.25 129.00 195.00 180.00 96.67 80.00 116.67 100.00 147.28 164.12 156.00 145.73 113.50 165.00 111.67 86.67 165.00 110.00 139.04 112.65 147.98 160.53 101.25 180.00 138.33 81.67 180.00 87.50 155.10 140.88 151.50 89.50 133.25 50.00 50.00 55.00 50.00 55.00 43.33 75.00 41.84 55.00 58.33 68.33 57.82 50.00 48.33 55.00 65.99 84.01 85.00 63.33 73.33 88.33 63.33 75.00 107.50 79.59 50.00 48.33 75.00 125.00 99.66 50.00 50.00 55.00 50.00 57.50 82.50 42.50 77.50 MMICL 136.43 82.50 132.50 77.50 170.00 160.00 81.67 156.67 100.00 146.26 141.76 153.75 136.13 | 2309.07915#26 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 27 | given that we can only observe the behavior of the machine, many AI researchers, including Alan Turing, suggest temporarily setting aside the question of whether an agent is "actually" thinking or literally possesses a "mind" [3]. Instead, researchers employ other attributes to help describe an agent, such as properties of autonomy, reactivity, pro-activeness, and social ability [4; 9]. There are also researchers who hold that intelligence is "in the eye of the beholder"; it is not an innate, isolated property [15; 16; 54; 55]. In essence, an AI agent is not equivalent to a philosophical agent; rather, it is a concretization of the philosophical concept of an agent in the context of AI. In this paper, we treat AI agents as artificial entities that are capable of perceiving their surroundings using sensors, making decisions, and then taking actions in response using actuators [1; 4]. | 2309.07864#27 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 28 | # 2.2 Technological Trends in Agent Research
The evolution of AI agents has undergone several stages, and here we briefly review its development through the lens of technological trends.
Symbolic Agents. In the early stages of artificial intelligence research, the predominant approach was symbolic AI, characterized by its reliance on symbolic logic [56; 57]. This approach employed logical rules and symbolic representations to encapsulate knowledge and facilitate reasoning processes. Early AI agents were built based on this approach [58], and they primarily focused on two problems: the transduction problem and the representation/reasoning problem [59]. These agents were designed to emulate human thinking patterns. They possess explicit and interpretable reasoning
frameworks, and due to their symbolic nature, they exhibit a high degree of expressive capability [13; 14; 60]. A classic example of this approach is knowledge-based expert systems. However, symbolic agents faced limitations in handling uncertainty and large-scale real-world problems [19; 20]. Additionally, due to the intricacies of symbolic reasoning algorithms, it was challenging to find an efficient algorithm capable of producing meaningful results within a finite timeframe [20; 61]. | 2309.07864#28 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
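To make the knowledge-based expert systems mentioned in chunk 2309.07864#28 above concrete, here is a minimal forward-chaining rule engine in Python; the facts and rules are toy examples invented for illustration, not drawn from any cited system.

```python
# Minimal forward-chaining rule engine, illustrating the symbolic-agent
# style of explicit, interpretable reasoning. Facts and rules are toy examples.
facts = {"has_fever", "has_cough"}
rules = [
    ({"has_fever", "has_cough"}, "suspect_flu"),
    ({"suspect_flu"}, "recommend_rest"),
]

changed = True
while changed:  # keep applying rules until no new fact can be derived
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'has_fever', 'has_cough', 'suspect_flu', 'recommend_rest'}
```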
2309.07864 | 29 | Reactive agents. Different from symbolic agents, reactive agents do not use complex symbolic reasoning. Instead, they primarily focus on the interaction between the agent and its environment, emphasizing quick and real-time responses [15; 16; 20; 62; 63]. These agents are mainly based on a sense-act loop, efficiently perceiving and reacting to the environment. The design of such agents prioritizes direct input-output mappings rather than intricate reasoning and symbolic operations [52]. However, reactive agents also have limitations: although they typically require fewer computational resources and can respond quickly, they might lack complex higher-level decision-making and planning capabilities. | 2309.07864#29 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
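The sense-act loop described in chunk 2309.07864#29 above amounts to a direct percept-to-action mapping. The Python sketch below assumes a hypothetical `env` object with `sense()` and `act()` methods and a hand-written policy table; it illustrates the pattern, not any particular system.

```python
# Reactive agent as a direct percept-to-action mapping: no internal model,
# no deliberation, just a lookup applied inside a sense-act loop.
POLICY = {  # hypothetical percept -> action table
    "obstacle_ahead": "turn_left",
    "clear": "move_forward",
}

def run_reactive_agent(env, steps=100):
    for _ in range(steps):
        percept = env.sense()                 # sense
        action = POLICY.get(percept, "wait")  # map directly to an action
        env.act(action)                       # act
```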
2309.07915 | 29 | reference. We then have annotators scrutinize every dataset's samples and provide task instructions. This practice aids in gaining a comprehensive understanding of the task and helps craft high-quality templates. Next, we employ ChatGPT† to rewrite the instructions to describe the key characteristics of each task accurately. After ChatGPT generates the instructions, we conduct a manual review to guarantee the high quality of the instructions. We select ten suitable matching templates as candidates, then merge the original dataset's input into a randomly chosen template. We assemble demonstrations for each instance from the dataset by selecting a small amount of data and arranging them sequentially. These demonstrations are integrated with the input instance to generate multi-modal contextual data‡. We construct multi-image data by extracting eight frames per video from the MSRVTT (Xu et al., 2016) and MSRVTTQA (Xu et al., 2016) datasets. We also crop images from the VCR (Zellers et al., 2019) dataset using object bounding boxes to produce intertwined multi-modal data with closely related images. We convert all data into a vision-language | 2309.07915#29 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
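A rough sketch of the template-merging and demonstration-assembly step described in chunk 2309.07915#29 above. The template strings and the `[IMGi]` placeholder convention are illustrative assumptions; the actual MIC templates are the ChatGPT-refined ones the chunk describes.

```python
import random

# Assemble a multi-modal in-context example: pick an instruction template at
# random, fill it per demonstration, and concatenate demonstrations with the
# query. Placeholders like [IMG0] stand in for visual embeddings.
TEMPLATES = [  # illustrative; not the paper's exact templates
    "Image {img}: based on the image, answer: {question} Answer: {answer}",
    "{img} Question: {question} Short answer: {answer}",
]

def build_prompt(demos, query_question):
    parts, img_id = [], 0
    for question, answer in demos:
        tmpl = random.choice(TEMPLATES)
        parts.append(tmpl.format(img=f"[IMG{img_id}]", question=question, answer=answer))
        img_id += 1
    parts.append(f"[IMG{img_id}] Question: {query_question} Short answer:")
    return "\n".join(parts)

print(build_prompt([("What color is the car?", "red")], "How many dogs are there?"))
```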
2309.07864 | 30 | Reinforcement learning-based agents. With the improvement of computational capabilities and data availability, along with a growing interest in simulating interactions between intelligent agents and their environments, researchers have begun to utilize reinforcement learning methods to train agents for tackling more challenging and complex tasks [17; 18; 64; 65]. The primary concern in this field is how to enable agents to learn through interactions with their environments, enabling them to achieve maximum cumulative rewards in specific tasks [21]. Initially, reinforcement learning (RL) agents were primarily based on fundamental techniques such as policy search and value function optimization, exemplified by Q-learning [66] and SARSA [67]. With the rise of deep learning, the integration of deep neural networks and reinforcement learning, known as Deep Reinforcement Learning (DRL), has emerged [68; 69]. This allows agents to learn intricate policies from high- dimensional inputs, leading to numerous significant accomplishments like AlphaGo [70] and DQN [71]. The advantage of this approach lies in its capacity to enable agents to autonomously learn in unknown environments, without explicit human intervention. This allows for its wide application in an array of domains, from gaming to robot control and beyond. Nonetheless, reinforcement learning faces challenges including long training times, low sample efficiency, and stability concerns, particularly when applied in complex real-world environments [21]. | 2309.07864#30 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
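Chunk 2309.07864#30 above cites Q-learning as a fundamental value-based technique; its tabular update rule is sketched below. The code assumes a hypothetical discrete `env` exposing an `actions` list, `reset() -> state`, and `step(action) -> (state, reward, done)`; it is a standard textbook sketch, not code from the survey.

```python
import random
from collections import defaultdict

# Tabular Q-learning: move Q(s, a) toward the temporal-difference target
# r + gamma * max_a' Q(s', a'), with epsilon-greedy exploration.
def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    Q = defaultdict(float)  # (state, action) -> estimated return
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < eps:  # explore
                action = random.choice(env.actions)
            else:                      # exploit the current value estimates
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            target = reward if done else reward + gamma * max(
                Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (target - Q[(state, action)])
            state = next_state
    return Q
```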
2309.07915 | 30 | et al., 2019) dataset using object bounding boxes to produce intertwined multi-modal data with closely related images. We convert all data into a vision-language Q&A format to create high-quality multi-modal training data and accumulate 5.8M samples in the MIC dataset. Due to resource constraints, we use approximately 10% of MIC with the sampling strategy described in Appendix E to fine-tune MMICL. It is anticipated that a larger model trained on all of our data would yield a more promising result. | 2309.07915#30 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
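The frame-extraction and bounding-box cropping steps described in chunk 2309.07915#30 above might be implemented as follows. The OpenCV calls are standard, but uniform eight-frame sampling and the `(x1, y1, x2, y2)` pixel box format are our assumptions for illustration, not details confirmed by the paper.

```python
import cv2  # OpenCV; pip install opencv-python

def sample_frames(video_path, n_frames=8):
    """Uniformly sample n_frames frames from a video (as BGR numpy arrays)."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for idx in [int(i * total / n_frames) for i in range(n_frames)]:
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)  # seek to the chosen frame
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames

def crop_regions(image, boxes):
    """Crop object regions given (x1, y1, x2, y2) pixel boxes, VCR-style."""
    return [image[y1:y2, x1:x2] for (x1, y1, x2, y2) in boxes]
```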
2309.07864 | 31 | Agents with transfer learning and meta learning. Traditionally, training a reinforcement learning agent requires huge sample sizes and long training times, and the resulting agents often lack generalization capability [72; 73; 74; 75; 76]. Consequently, researchers have introduced transfer learning to expedite an agent's learning on new tasks [77; 78; 79]. Transfer learning reduces the burden of training on new tasks and facilitates the sharing and migration of knowledge across different tasks, thereby enhancing learning efficiency, performance, and generalization capabilities. Furthermore, meta-learning has also been introduced to AI agents [80; 81; 82; 83; 84]. Meta-learning focuses on learning how to learn, enabling an agent to swiftly infer optimal policies for new tasks from a small number of samples [85]. Such an agent, when confronted with a new task, can rapidly adjust its learning approach by leveraging acquired general knowledge and policies, consequently reducing the reliance on a large volume of samples. However, when there are significant disparities between source and target tasks, the effectiveness of transfer learning might fall short of expectations, and negative transfer may occur [86; 87]. Additionally, the substantial amount of pre-training and the large sample sizes required by meta-learning make it hard to establish a universal learning policy [81; 88]. | 2309.07864#31 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
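The "learning how to learn" idea in chunk 2309.07864#31 above is commonly realized as a two-loop procedure. The sketch below shows a first-order, MAML-style skeleton; `grad(params, batch)` and `sample_tasks()` are assumed helpers, and a real implementation would use an autodiff framework rather than these placeholders.

```python
# First-order MAML-style meta-learning skeleton: an inner loop adapts the
# shared parameters to each task with one gradient step, and an outer loop
# updates the initialization so that such adaptation works well across tasks.
def meta_train(theta, sample_tasks, grad, inner_lr=0.01, outer_lr=0.001, steps=1000):
    # `grad(params, batch)` is an assumed helper returning the loss gradient.
    for _ in range(steps):
        meta_grad = 0.0
        for task in sample_tasks():
            phi = theta - inner_lr * grad(theta, task.support)  # inner-loop adaptation
            meta_grad += grad(phi, task.query)                  # outer-loop signal
        theta = theta - outer_lr * meta_grad                    # update initialization
    return theta
```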
2309.07915 | 31 | 2.4 TRAINING PARADIGM
Stage I: Pretraining. This stage aims to assist the model in aligning the image and text embeddings. During this stage, both the vision encoder and the LLM remain frozen. The VPG (i.e., Q-Former) and projection layer are trained to learn visual embeddings that can be interpreted by the LLM.
Stage II: Multi-Modal In-Context Tuning. In this stage, we aim to address the aforementioned limitations and take our model a step further by extending it to multi-modal in-context learning. Specifically, we aim to make the model understand the intricate referential relationships between text and images and the complex relationships among multiple images, and ultimately to acquire a proficient multi-modal in-context learning ability. Therefore, we perform multi-modal in-context tuning on the MIC dataset. During stage II, we freeze the image encoder, Q-Former, and LLM while jointly training the projection layer and the query and value vectors.
3 EXPERIMENT
3.1 EXPERIMENTAL SETUP
Evaluation Setup. We aim to develop general-purpose VLMs that can generally adapt to diverse, challenging multi-modal prompts. Therefore, we evaluate our models in several vision-language benchmarks, including tasks that involve images and videos. The metrics used in these benchmarks and further details are shown in Appendix L. | 2309.07915#31 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
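A minimal sketch of the stage II freezing scheme described in chunk 2309.07915#31 above, assuming a PyTorch-style model with `vision_encoder`, `qformer`, `language_model`, and `projection` submodules; the submodule and parameter names (e.g. the query/value projection pattern) are illustrative, not MMICL's actual identifiers.

```python
# Stage II: freeze the image encoder, Q-Former, and LLM; train only the
# projection layer plus the query/value projection weights inside the LLM's
# attention blocks. `model` and its submodule names are assumed.
for p in model.vision_encoder.parameters():
    p.requires_grad = False
for p in model.qformer.parameters():
    p.requires_grad = False
for name, p in model.language_model.named_parameters():
    # re-enable only query/value projections (name pattern is illustrative)
    p.requires_grad = (".q." in name) or (".v." in name)
for p in model.projection.parameters():
    p.requires_grad = True

trainable = [p for p in model.parameters() if p.requires_grad]
```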
2309.07864 | 32 | Large language model-based agents. As large language models have demonstrated impressive emergent capabilities and have gained immense popularity [24; 25; 26; 41], researchers have started to leverage these models to construct AI agents [22; 27; 28; 89]. Specifically, they employ LLMs as the primary component of brain or controller of these agents and expand their perceptual and action space through strategies such as multimodal perception and tool utilization [90; 91; 92; 93; 94]. These LLM- based agents can exhibit reasoning and planning abilities comparable to symbolic agents through techniques like Chain-of-Thought (CoT) and problem decomposition [95; 96; 97; 98; 99; 100; 101]. They can also acquire interactive capabilities with the environment, akin to reactive agents, by learning from feedback and performing new actions [102; 103; 104]. Similarly, large language models undergo pre-training on large-scale corpora and demonstrate the capacity for few-shot and zero-shot generalization, allowing for seamless transfer between tasks without the need to update parameters [41; 105; 106; 107]. LLM-based agents have been applied to various real-world scenarios,
| 2309.07864#32 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
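The perceive-reason-act behavior attributed to LLM-based agents in chunk 2309.07864#32 above reduces to a short control loop. This Python sketch assumes a hypothetical `llm(prompt) -> str` completion function and a toy tool registry; the `TOOL:`/`FINAL:` protocol is invented for illustration and is not any specific framework's API.

```python
# Minimal LLM-agent loop (ReAct-style): the model proposes either a tool call
# or a final answer; tool observations are appended back into the prompt.
TOOLS = {"search": lambda q: f"(stub) results for {q!r}"}  # hypothetical tool

def run_agent(llm, task, max_steps=5):
    history = (f"Task: {task}\n"
               "Think step by step. Reply with TOOL:<name>:<arg> or FINAL:<answer>.\n")
    for _ in range(max_steps):
        reply = llm(history)  # assumed completion function
        history += reply + "\n"
        if reply.startswith("FINAL:"):
            return reply[len("FINAL:"):].strip()
        if reply.startswith("TOOL:"):
            _, name, arg = reply.split(":", 2)
            obs = TOOLS.get(name.strip(), lambda a: "unknown tool")(arg.strip())
            history += f"Observation: {obs}\n"  # feedback from the environment
    return None
```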
2309.07915 | 32 | † We use the gpt-3.5-turbo version of ChatGPT. ‡ Except for the video datasets, the VCR dataset, and the LLaVA dataset. More detail can be found in Appendix B.
[Figure content: a Winoground example with Caption1 "Some plants surrounding a lightbulb" and Caption2 "A lightbulb surrounding some plants", probed with four matching questions Q1-Q4 ("Does Caption i match Image j?") scored by P(Yes|Q), and a RAVEN example with answer candidates A-E.]
Figure 5: Illustration of two complex vision language reasoning tasks: Winoground (Thrush et al., 2022b) (Left) and RAVEN (Zhang et al., 2019) (Right). | 2309.07915#32 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
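The P(Yes|Q) probing shown in Figure 5 of chunk 2309.07915#32 above can be sketched as follows; `yes_probability(caption, image)` is an assumed helper that would query the VLM for the probability of answering "Yes" to a matching question.

```python
# Score each (caption, image) pair by the model's probability of answering
# "Yes" to "Does this caption match this image?", then require matched pairs
# to outscore mismatched ones on both axes (Winoground-style group score).
def winoground_correct(yes_probability, captions, images):
    s = {(c, i): yes_probability(captions[c], images[i])
         for c in range(2) for i in range(2)}
    text_ok = s[(0, 0)] > s[(1, 0)] and s[(1, 1)] > s[(0, 1)]   # right caption wins per image
    image_ok = s[(0, 0)] > s[(0, 1)] and s[(1, 1)] > s[(1, 0)]  # right image wins per caption
    return text_ok and image_ok
```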
2309.07864 | 33 |
such as software development [108; 109] and scientific research [110]. Due to their natural language comprehension and generation capabilities, they can interact with each other seamlessly, giving rise to collaboration and competition among multiple agents [108; 109; 111; 112]. Furthermore, research suggests that allowing multiple agents to coexist can lead to the emergence of social phenomena [22].
# 2.3 Why is LLM suitable as the primary component of an Agent's brain?
As mentioned before, researchers have introduced several properties to help describe and define agents in the field of AI. Here, we will delve into some key properties, elucidate their relevance to LLMs, and thereby expound on why LLMs are highly suited to serve as the main part of brains of AI agents. | 2309.07864#33 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 33 | Models and Baselines. We provide two versions of MMICL: (1) MMICL (FLAN-T5), which uses BLIP-2 (Li et al., 2023d) as the backbone, and (2) MMICL (Instruct-FLAN-T5), which uses InstructBLIP (Dai et al., 2023) as the backbone. We adopt the XL and XXL sizes of the FLAN-T5 (Chung et al., 2022) model for both versions. We compare MMICL with the following strong baselines: Flamingo (Alayrac et al., 2022), KOSMOS-1 (Huang et al., 2023a), BLIP-2-FLAN-T5, InstructBLIP-FLAN-T5, Shikra (Chen et al., 2023), Otter (Li et al., 2023a), Ying-VLM (Li et al., 2023e). The details of MMICL and the baselines are shown in Appendix G and Appendix M.
3.2 GENERAL PERFORMANCE EVALUATIONS | 2309.07915#33 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and exhibits
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 34 | Autonomy. Autonomy means that an agent operates without direct intervention from humans or others and possesses a degree of control over its actions and internal states [4; 113]. This implies that an agent should not only possess the capability to follow explicit human instructions for task completion but also exhibit the capacity to initiate and execute actions independently. LLMs can demonstrate a form of autonomy through their ability to generate human-like text, engage in conversations, and perform various tasks without detailed step-by-step instructions [114; 115]. Moreover, they can dynamically adjust their outputs based on environmental input, reflecting a degree of adaptive autonomy [23; 27; 104]. Furthermore, they can showcase autonomy through exhibiting creativity like coming up with novel ideas, stories, or solutions that haven't been explicitly programmed into them [116; 117]. This implies a certain level of self-directed exploration and decision-making. Applications like Auto-GPT [114] exemplify the significant potential of LLMs in constructing autonomous agents. Simply by providing them with a task and a set of available tools, they can autonomously formulate plans and execute them to achieve the ultimate goal. | 2309.07864#34 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 34 | 3.2 GENERAL PERFORMANCE EVALUATIONS
We evaluate the general performance of MMICL on both MME (Fu et al., 2023) and MMBench (Liu et al., 2023c) benchmarks§. MME evaluates VLMs with 14 sub-tasks that encompass cognition and perception abilities. Results in Table 1 show that MMICL can achieve the best average scores compared with current VLMs on cognition and perception tasks. MMICL also demonstrates outstanding performance and significantly surpasses other VLMs on the MMBench benchmark, which thoroughly evaluates the diverse skills of VLMs. The detailed results are presented in Table 21. See Appendix H and I for MMICL's evaluation details and comparisons with other VLMs.
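For orientation, here is a minimal sketch of how MME-style sub-task scores aggregate, based on our reading of Fu et al. (2023): each image carries two yes/no questions, a sub-task is scored as per-question accuracy plus per-image accuracy+ (both questions correct), and the benchmark score sums sub-task scores. Treat the exact weighting as an assumption rather than a verbatim reimplementation:

```python
# Hedged sketch of MME-style scoring (assumed reading of Fu et al., 2023).

def subtask_score(preds: list[tuple[bool, bool]]) -> float:
    """preds: per image, whether each of its two yes/no answers was correct."""
    n_images = len(preds)
    acc = 100.0 * sum(a + b for a, b in preds) / (2 * n_images)   # per question
    acc_plus = 100.0 * sum(a and b for a, b in preds) / n_images  # both correct
    return acc + acc_plus                                          # at most 200

def benchmark_score(subtasks: dict[str, list[tuple[bool, bool]]]) -> float:
    """Sum sub-task scores, e.g. over the perception or cognition sub-tasks."""
    return sum(subtask_score(p) for p in subtasks.values())
```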
3.3 PERFORMANCE PROBING
3.3.1 UNDERSTANDING TEXT-TO-IMAGE REFERENCE
Table 2: Results on Winoground across text, image and group score metrics. | 2309.07915#34 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and exhibits
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 35 | Reactivity. Reactivity in an agent refers to its ability to respond rapidly to immediate changes and stimuli in its environment [9]. This implies that the agent can perceive alterations in its surroundings and promptly take appropriate actions. Traditionally, the perceptual space of language models has been confined to textual inputs, while the action space has been limited to textual outputs. However, researchers have demonstrated the potential to expand the perceptual space of LLMs using multimodal fusion techniques, enabling them to rapidly process visual and auditory information from the environment [25; 118; 119]. Similarly, it's also feasible to expand the action space of LLMs through embodiment techniques [120; 121] and tool usage [92; 94]. These advancements enable LLMs to effectively interact with the real-world physical environment and carry out tasks within it. One major challenge is that LLM-based agents, when performing non-textual actions, require an intermediate step of generating thoughts or formulating tool usage in textual form before eventually translating them into concrete actions. This intermediary process consumes time and reduces the response speed. However, this aligns closely with human behavioral patterns, where the principle of "think before you act" is observed [122; 123].
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 35 | 3.3.1 UNDERSTANDING TEXT-TO-IMAGE REFERENCE
Table 2: Results on Winoground across text, image and group score metrics.
The Winoground (Thrush et al., 2022b) proposes a task of correctly matching two given images and captions, as depicted in the left of Fig. 5. The challenge lies in the fact that both captions consist of the exact same words, albeit in a different order. VLMs must compare both images and texts to discern their subtle differences and capture the implicit reference between them. Therefore, we select the Winoground to evaluate whether VLMs understand the text-to-image reference. Results in Table 2 demonstrate that MMICL captures the referential relationship between image and text, surpassing previous baselines. | 2309.07915#35 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and exhibits
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 36 | Pro-activeness. Pro-activeness denotes that agents don't merely react to their environments; they possess the capacity to display goal-oriented actions by proactively taking the initiative [9]. This property emphasizes that agents can reason, make plans, and take proactive measures in their actions to achieve specific goals or adapt to environmental changes. Although intuitively the paradigm of next token prediction in LLMs may not possess intention or desire, research has shown that they can implicitly generate representations of these states and guide the model's inference process [46; 48; 49]. LLMs have demonstrated a strong capacity for generalized reasoning and planning. By prompting large language models with instructions like "let's think step by step", we can elicit their reasoning abilities, such as logical and mathematical reasoning [95; 96; 97]. Similarly, large language models have shown the emergent ability of planning in forms of goal reformulation [99; 124], task decomposition [98; 125], and adjusting plans in response to environmental changes [100; 126]. | 2309.07864#36 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 36 |
| Model | Text | Image | Group |
| MTurk Human | 89.50 | 88.50 | 85.50 |
| Random Chance | 25.00 | 25.00 | 16.67 |
CLIP-based Model
| VQ2 (Yarom et al., 2023) | 47.00 | 42.20 | 30.50 |
Vision-language Model
| PALI (Chen et al., 2022) | 46.50 | 38.00 | 28.75 |
| Blip-2 (Li et al., 2023d) | 44.00 | 26.00 | 23.50 |
| MMICL (FLAN-T5-XXL) | 45.00 | 44.99 | 43.00 |
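For reference, the three metrics in the table above follow the Winoground protocol of Thrush et al. (2022): with two captions and two images per example, the text score requires the matching caption to win for each image, the image score requires the matching image to win for each caption, and the group score requires both. A minimal sketch, where `score(caption, image)` is an assumed stand-in for the VLM's image-text match score:

```python
# Winoground-style scoring sketch; `score` is an assumed image-text match function.

def example_flags(score, c0, i0, c1, i1):
    text_ok = score(c0, i0) > score(c1, i0) and score(c1, i1) > score(c0, i1)
    image_ok = score(c0, i0) > score(c0, i1) and score(c1, i1) > score(c1, i0)
    return text_ok, image_ok, text_ok and image_ok

def winoground_scores(score, examples):
    """examples: iterable of (caption0, image0, caption1, image1) tuples."""
    flags = [example_flags(score, *ex) for ex in examples]
    n = len(flags)
    text = 100.0 * sum(f[0] for f in flags) / n
    image = 100.0 * sum(f[1] for f in flags) / n
    group = 100.0 * sum(f[2] for f in flags) / n
    return text, image, group
```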
3.3.2 UNDERSTANDING COMPLEX IMAGE-TO-IMAGE RELATIONSHIP
RAVEN (Zhang et al., 2019; Huang et al., 2023a) test is widely used to evaluate the nonverbal reasoning ability of VLMs. It requires visual and logical skills to understand the relationships among images.
§All the reported performance for the baseline methods is from the leaderboard of MME (Fu et al., 2023) and MMBench (Liu et al., 2023c). We report the result of MMICL with the FLANT5-XXL backbone.
| 2309.07915#36 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and exhibits
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 37 | Social ability. Social ability refers to an agent's capacity to interact with other agents, including humans, through some kind of agent-communication language [8]. Large language models exhibit strong natural language interaction abilities like understanding and generation [23; 127; 128]. Compared to structured languages or other communication protocols, such capability enables them to interact with other models or humans in an interpretable manner. This forms the cornerstone of social ability for LLM-based agents [22; 108]. Many researchers have demonstrated that LLM-based
agents can enhance task performance through social behaviors such as collaboration and competition [108; 111; 129; 130]. By inputting specific prompts, LLMs can also play different roles, thereby simulating the social division of labor in the real world [109]. Furthermore, when we place multiple agents with distinct identities into a society, emergent social phenomena can be observed [22].
# 3 The Birth of An Agent: Construction of LLM-based Agents
[Figure 2 content: a user asks "Look at the sky, do you think it will rain tomorrow? If so, give the umbrella to me."; the agent reasons from the current weather conditions and the weather reports on the internet that it is likely to rain tomorrow, replies "Here is your umbrella", and acts by calling an API; labeled modules include Knowledge, Summary, Recall, Learn, Decision Making, Planning / Reasoning, Generalize / Transfer, and Embodiment.] | 2309.07864#37 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 37 |
| Model | Flickr 30K | WebSRC | VQAv2 | Hateful Memes | VizWiz |
| Flamingo-3B (Alayrac et al., 2022) (Zero-Shot) | 60.60 | - | 49.20 | 53.70 | 28.90 |
| Flamingo-3B (Alayrac et al., 2022) (4-Shot) | 72.00 | - | 53.20 | 53.60 | 34.00 |
| Flamingo-9B (Alayrac et al., 2022) (Zero-Shot) | 61.50 | - | 51.80 | 57.00 | 28.80 |
| Flamingo-9B (Alayrac et al., 2022) (4-Shot) | 72.60 | - | 56.30 | 62.70 | 34.90 |
| KOSMOS-1 (Huang et al., 2023b) (Zero-Shot) | 67.10 | 3.80 | 51.00 | 63.90 | 29.20 |
| KOSMOS-1 (Huang et al., 2023b) (4-Shot) | 75.30 | - | 51.80 | - | 35.30 |
Zero-Shot Evaluation
| BLIP-2 (Li et al., 2023d) (FLANT5-XL) | 64.51 | 12.25 | 58.79 | 60.00 | 25.52 |
| BLIP-2 (Li et al., 2023d) (FLANT5-XXL) | 60.74 | 10.10 | 60.91 | 62.25 | … | | 2309.07915#37 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and exhibits
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 38 | Figure 2: Conceptual framework of LLM-based agent with three components: brain, perception, and action. Serving as the controller, the brain module undertakes basic tasks like memorizing, thinking, and decision-making. The perception module perceives and processes multimodal information from the external environment, and the action module carries out the execution using tools and influences the surroundings. Here we give an example to illustrate the workflow: When a human asks whether it will rain, the perception module converts the instruction into an understandable representation for LLMs. Then the brain module begins to reason according to the current weather and the weather reports on the internet. Finally, the action module responds and hands the umbrella to the human. By repeating the above process, an agent can continuously get feedback and interact with the environment. | 2309.07864#38 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 38 |
| Model | Flickr 30K | WebSRC | VQAv2 | Hateful Memes | VizWiz |
| BLIP-2 (Li et al., 2023d) (FLANT5-XL) | 64.51 | 12.25 | 58.79 | 60.00 | 25.52 |
| BLIP-2 (Li et al., 2023d) (FLANT5-XXL) | 60.74 | 10.10 | 60.91 | 62.25 | 22.50 |
| InstructBLIP (Dai et al., 2023) (FLANT5-XL) | 77.16 | 10.80 | 36.77 | 58.54 | 32.08 |
| InstructBLIP (Dai et al., 2023) (FLANT5-XXL) | 73.13 | 11.50 | 63.69 | 61.70 | 15.11 |
Zero-Shot Evaluation
| MMICL (FLAN-T5-XL) | 60.56 | 12.55 | 62.17 | 60.28 | 25.04 |
| MMICL (FLAN-T5-XXL) | 78.64 | 18.85 | 69.99 | 60.32 | 29.34 |
| MMICL (Instruct-FLAN-T5-XL) | 78.89 | 14.75 | 69.13 | 61.12 | 29.92 |
| MMICL (Instruct-FLAN-T5-XXL) | 44.29 | 17.05 | 70.30 | 62.23 | 24.45 |
Few-Shot (4-Shot) Evaluation
| MMICL (FLAN-T5-XL) | 71.95 | … | | 2309.07915#38 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and exhibits
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 39 | "Survival of the Fittest" [131] shows that if an individual wants to survive in the external environment, he must adapt to the surroundings efficiently. This requires him to be cognitive, able to perceive and respond to changes in the outside world, which is consistent with the definition of "agent" mentioned in §2.1. Inspired by this, we present a general conceptual framework of an LLM-based agent composed of three key parts: brain, perception, and action (see Figure 2). We first describe the structure and working mechanism of the brain, which is primarily composed of a large language model (§ 3.1). The brain is the core of an AI agent because it not only stores knowledge and memories but also undertakes indispensable functions like information processing and decision-making. It can present the process of reasoning and planning, and cope well with unseen tasks, exhibiting the intelligence of an agent. Next, we introduce the perception module (§ 3.2). Its core purpose is to broaden the agent's perception space from a text-only domain to a multimodal sphere that includes textual, auditory, and visual modalities. This extension equips the agent to grasp and utilize information from its surroundings more | 2309.07864#39 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 40 | a multimodal sphere that includes textual, auditory, and visual modalities. This extension equips the agent to grasp and utilize information from its surroundings more effectively. Finally, we present the action module designed to expand the action space of an agent (§ 3.3). Specifically, we empower the agent with embodied action ability and tool-handling skills, enabling it to adeptly adapt to environmental changes, provide feedback, and even influence and mold the environment. | 2309.07864#40 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 40 | Table 4: Main results of multi-modal in-context learning ability of MMICL across vision-language tasks. All evaluation metrics used in the evaluation are introduced in Table 24.
# Table 3: Zero-shot generalization on Raven IQ test.
We conduct zero-shot experiments on the Raven test to evaluate the VLM's ability to understand image-to-image relationships. Each instance has 3 or 8 images as inputs and 6 candidate images with a unique answer, and the goal is to predict the right image, as shown in the right of Fig. 5. The result in Table 3 shows that MMICL achieves a 12-point improvement compared to KOSMOS-1. This indicates that MMICL is able to capture complex image-to-image relationships and conduct nonverbal visual reasoning tasks.
| Model | Accuracy |
| Random Choice | 17% |
| KOSMOS-1 (Huang et al., 2023a) | 22% |
| MMICL (FLAN-T5-XXL) | 34% |
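A minimal sketch of the selection protocol implied above, assuming the VLM exposes some scalar `completion_score(context_images, candidate)` (e.g., a likelihood that the candidate completes the visual pattern); the predicted answer is simply the top-scoring candidate:

```python
# Sketch of Raven-style candidate selection; `completion_score` is an assumption.

def predict(completion_score, context_images, candidates):
    """context_images: the 3 or 8 given panels; candidates: the 6 option images."""
    return max(range(len(candidates)),
               key=lambda k: completion_score(context_images, candidates[k]))

def accuracy(completion_score, dataset) -> float:
    """dataset: iterable of (context_images, candidates, answer_index)."""
    items = list(dataset)
    hits = sum(predict(completion_score, ctx, cands) == ans
               for ctx, cands, ans in items)
    return 100.0 * hits / len(items)
```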
3.4 LEARNING FROM IN-CONTEXT MULTI-MODAL DEMONSTRATIONS | 2309.07915#40 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and exhibits
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 41 | The framework can be tailored for different application scenarios, i.e., not every specific component will be used in all studies. In general, agents operate in the following workflow: First, the perception module, corresponding to human sensory systems such as the eyes and ears, perceives changes in the external environment and then converts multimodal information into an understandable representation for the agent. Subsequently, the brain module, serving as the control center, engages in information processing activities such as thinking, decision-making, and operations with storage including memory and knowledge. Finally, the action module, corresponding to human limbs, carries out the execution with the assistance of tools and leaves an impact on the surroundings. By repeating the above process, an agent can continuously get feedback and interact with the environment.
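The workflow reads naturally as a loop; a minimal sketch follows, in which every function is a hypothetical placeholder rather than an interface from any surveyed system:

```python
# Perceive -> think -> act loop; all functions are illustrative placeholders.

def perceive(raw_observation) -> str:
    """Perception module: turn multimodal input into a representation."""
    return f"[obs] {raw_observation}"

def think(representation: str, memory: list[str]) -> dict:
    """Brain module: store the experience and decide what to do next."""
    memory.append(representation)
    return {"action": "respond", "args": {"text": f"handling {representation}"}}

def act(decision: dict) -> str:
    """Action module: execute via text, tool use, or embodiment."""
    return f"did {decision['action']} with {decision['args']}"

def agent_loop(observations) -> list[str]:
    memory: list[str] = []
    feedback = []
    for obs in observations:            # repeated perception-brain-action cycle
        decision = think(perceive(obs), memory)
        feedback.append(act(decision))  # environment feedback closes the loop
    return feedback
```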
# 3.1 Brain
| 2309.07864#41 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 41 | 3.4 LEARNING FROM IN-CONTEXT MULTI-MODAL DEMONSTRATIONS
As shown in Table 4, we evaluate the multi-modal in-context learning ability of MMICL across various vision-language tasks. MMICL outperforms other VLMs on both the held-in and held-out datasets and achieves state-of-the-art few-shot performance. For example, the few-shot evaluation (4-shot) of MMICL on the VizWiz benchmark outperforms the baselines Flamingo-9B (Alayrac et al., 2022) and KOSMOS-1 (Huang et al., 2023b) by 15.38 and 14.98 points, respectively. Since VizWiz was never exposed in the training data, this superiority suggests the ability of MMICL to generalize to new tasks with a few exemplars. The few-shot performance on Flickr30K decreases as examples are given, because the caption examples may introduce noise for the VLM when finishing the task (i.e., in-context exemplars generally do not provide hints for models to perform image captioning tasks).
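To make the few-shot setup concrete, here is a minimal sketch of assembling an interleaved image-text prompt with k demonstrations; the `[IMGi]` placeholder convention and `build_icl_prompt` are our illustrative assumptions, not MMICL's exact template:

```python
# Sketch of building an interleaved multi-modal in-context prompt (assumed format).

def build_icl_prompt(exemplars, query_image, question):
    """exemplars: list of (image, question, answer) demonstrations."""
    images, parts = [], []
    for img, q, a in exemplars:
        images.append(img)
        parts.append(f"Image [IMG{len(images) - 1}]: {q} Answer: {a}")
    images.append(query_image)
    parts.append(f"Image [IMG{len(images) - 1}]: {question} Answer:")
    # `images` aligns positionally with the [IMGi] markers; a VLM would encode
    # each image and splice its features into the text at those markers.
    return "\n".join(parts), images

# e.g. build_icl_prompt([(img1, "What is shown?", "a dog")], img2, "What is shown?")
```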
3.5 HALLUCINATION AND LANGUAGE BIAS OF VLMS | 2309.07915#41 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 42 | Natural Language Interaction §3.1.1: High-quality generation (Bang et al. [132], Fang et al. [133], Lin et al. [127], Lu et al. [134], etc.); Deep understanding (Buehler et al. [135], Lin et al. [128], Shapira et al. [136], etc.). Knowledge §3.1.2: Pretrain model (Hill et al. [137], Collobert et al. [138], Kaplan et al. [139], Roberts et al. [140], Tandon et al. [141], etc.); Knowledge in LLM-based agent: Linguistic knowledge (Vulic et al. [142], Hewitt et al. [143], Rau et al. [144], Yang et al. [145], Beloucif et al. [146], Zhang et al. [147], Bang et al. [132], etc.), Commonsense knowledge (Safavi et al. [148], Jiang et al. [149], Madaan [150], etc.), Actionable knowledge (Xu et al. [151], Cobbe et al. [152], Thirunavukarasu et al. [153], Lai et al. [154], Madaan et al. [150], etc.); Potential issues of knowledge: Edit wrong and outdated knowledge: AlKhamissi et al. | 2309.07864#42 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 42 | 3.5 HALLUCINATION AND LANGUAGE BIAS OF VLMS
Current VLMs exhibit significant visual hallucination (Li et al., 2023f), which prevents them from benefiting from multi-modal ICL. Especially when dealing with complex prompts containing multiple images (e.g., a multi-modal chain of thought (Zhang et al., 2023b)), VLMs often overlook the visual content when facing extensive text. This language bias reduces their effectiveness in answering questions that require both images and text. ScienceQA-IMG (Lu et al., 2022) is a challenging task that requires a model to use both modalities to answer the question. We manually split the dataset into two groups: questions that need the image to be answered and those that do not. Extensive experiments in Table 5 demonstrate that MMICL effectively mitigates language bias, as it performs equally well in both groups. In contrast, other VLMs suffer from language bias and exhibit vastly different performance across the two groups. Specifically, MMICL achieves a significant improvement in reducing language bias compared to other VLMs with a similar model structure (e.g., InstructBLIP and Ying-VLM). Comparison
| 2309.07915#42 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 43 | Lai et al. [154], Madaan et al. [150], etc.; Potential issues of knowledge: Edit wrong and outdated knowledge (AlKhamissi et al. [155], Kemker et al. [156], Cao et al. [157], Yao et al. [158], Mitchell et al. [159], etc.), Mitigate hallucination (Manakul et al. [160], Qin et al. [94], Li et al. [161], Gou et al. [162], etc.). Memory §3.1.3: Memory capability: Raising the length limit of Transformers (BART [163], Park et al. [164], LongT5 [165], CoLT5 [166], Ruoss et al. [167], etc.), Summarizing memory (Generative Agents [22], SCM [168], Reflexion [169], MemoryBank [170], ChatEval [171], etc.), Compressing memories with vectors or data structures (ChatDev [109], GITM [172], RET-LLM [173], AgentSims [174], ChatDB [175], etc.); Memory retrieval: Automated retrieval (Generative Agents [22], MemoryBank [170], AgentSims [174], etc.), Interactive retrieval (Memory Sandbox [176], ChatDB [175], etc.). Reasoning: CoT | 2309.07864#43 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 43 |
Model                              Average Performance  Don't Require Visual Information  Require Visual Information  Performance Gap
Random Guess                       35.50                35.80                             34.90                       -
Ying-VLM (Li et al., 2023e)        55.70                66.60                             44.90                       21.70
InstructBLIP (Dai et al., 2023)    71.30                82.00                             60.70                       21.30
Otter (Li et al., 2023a)           63.10                70.90                             55.70                       15.20
Shikra (Chen et al., 2023)         45.80                52.90                             39.30                       13.60
MMICL                              82.10                82.60                             81.70                       0.90
Table 5: Zero-shot performance of different VLMs on the ScienceQA-IMG dataset under the two splits. MMICL outperforms other VLMs by successfully alleviating language bias. (A sketch of this split evaluation follows this record.) | 2309.07915#43 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
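Relating to the ScienceQA-IMG split and Table 5 above: the split-and-compare evaluation can be sketched as follows. This is an assumed re-implementation for illustration, not the paper's released script; the record fields and function names are our own.

```python
# Minimal sketch of the two-group split behind Table 5: accuracy is
# computed separately for questions that need the image and for those
# answerable from text alone; the gap between the groups measures
# language bias (a small gap means little bias).
def group_accuracy(records):
    """records: dicts with keys 'needs_image' (bool) and 'correct' (bool)."""
    def acc(subset):
        return 100.0 * sum(r["correct"] for r in subset) / max(len(subset), 1)
    text_only = [r for r in records if not r["needs_image"]]
    needs_img = [r for r in records if r["needs_image"]]
    gap = acc(text_only) - acc(needs_img)
    return acc(text_only), acc(needs_img), gap

# e.g., MMICL's row in Table 5 corresponds to (82.60, 81.70, 0.90).
records = [{"needs_image": True, "correct": True},
           {"needs_image": False, "correct": True},
           {"needs_image": True, "correct": False}]
print(group_accuracy(records))
```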
2309.07864 | 44 | MemoryBank [170], AgentSims [174], etc.; Memory retrieval: Interactive retrieval (Memory Sandbox [176], ChatDB [175], etc.). Reasoning & Planning §3.1.4: Reasoning (CoT [95], Zero-shot-CoT [96], Self-Consistency [97], Self-Polish [99], Selection-Inference [177], Self-Refine [178], etc.); Planning: Plan formulation (Least-to-Most [98], SayCan [179], HuggingGPT [180], ToT [181], PET [182], DEPS [183], RAP [184], SwiftSage [185], LLM+P [125], MRKL [186], etc.), Plan reflection (LLM-Planner [101], Inner Monologue [187], ReAct [91], ChatCoT [188], AI Chains [189], Voyager [190], Zhao et al. [191], SelfCheck [192], etc.). Transferability & Generalization §3.1.5: Unseen task generalization (T0 [106], FLAN [105], InstructGPT [24], Chung et al. [107], etc.); In-context learning (GPT-3 [41], Wang et al. [193], Wang et al. [194], Dong et al. [195], etc.) | 2309.07864#44 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 44 |
Model                                          VSR    IconQA-text  VisDial  IconQA-img  Bongard-HOI
Stage I (BLIP-2-FLAN-T5-XL)                    61.62  45.44        35.43    48.42       52.75
Stage I (BLIP-2-FLAN-T5-XXL)                   63.18  50.08        36.48    48.42       59.20
Stage I (InstructBLIP-FLAN-T5-XL)              61.54  47.53        35.36    50.11       53.15
Stage I (InstructBLIP-FLAN-T5-XXL)             65.06  51.39        36.09    45.10       63.35
Stage I + Stage II (BLIP-2-FLAN-T5-XL)         62.85  47.23        35.76    51.24       56.95
Stage I + Stage II (BLIP-2-FLAN-T5-XXL)        64.73  50.55        37.00    34.93       68.05
Stage I + Stage II (InstructBLIP-FLAN-T5-XL)   70.54  52.55        36.87    47.27       74.20
Stage I + Stage II (InstructBLIP-FLAN-T5-XXL)  66.45  52.00        37.98    60.85       67.20
| 2309.07915#44 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07915 | 45 | Table 6: Ablation study on Training Paradigm across five datasets: VSR (Liu et al., 2022), IconQA-text (Lu et al., 2021), VisDial (Das et al., 2017), IconQA-img, and Bongard-HOI (Jiang et al., 2022).
with Otter shows that a lack of understanding of text-to-image references and multi-image relationships can result in significant language bias for Otter, even with multi-modal in-context instruction tuning. Shikra¶ mitigates language bias by including spatial coordinate inputs and achieves the lowest performance gap except for MMICL. We also examine object hallucination in MMICL in Appendix K, where it shows impressive performance.
3.6 ABLATION STUDY ON TRAINING PARADIGM | 2309.07915#45 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 46 | Figure 3: Typology of the brain module.
The human brain is a sophisticated structure composed of a vast number of interconnected neurons, capable of processing diverse information, generating varied thoughts, controlling different behaviors, and even creating art and culture [199]. Much as in humans, the brain serves as the central nucleus of an AI agent, here primarily composed of a large language model.
Operating mechanism. To ensure effective communication, the ability to engage in natural language interaction (§3.1.1) is paramount. After receiving the information processed by the perception module, the brain module first turns to storage, retrieving knowledge (§3.1.2) and recalling memories (§3.1.3). These outcomes aid the agent in devising plans, reasoning, and making informed decisions (§3.1.4). Additionally, the brain module may memorize the agent's past observations, thoughts, and actions in the form of summaries, vectors, or other data structures. Meanwhile, it can also update knowledge, such as common sense and domain knowledge, for future use. The LLM-based agent may also adapt to unfamiliar scenarios through its inherent generalization and transfer ability (§3.1.5). In the subsequent sections, we delve into a detailed exploration of these extraordinary facets of the brain module as depicted in Figure 3.
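A minimal sketch of this operating loop (perceive -> retrieve knowledge -> recall memory -> decide -> write back), assuming generic llm, knowledge, and memory interfaces; all class and function names here are illustrative stand-ins, not from the survey.

```python
# Stand-in components for the brain module's storage (assumed interfaces).
class KB:
    def retrieve(self, obs):
        return ["relevant domain fact"]      # knowledge lookup (§3.1.2)

class Mem:
    def __init__(self):
        self.log = []
    def recall(self, obs, top_k=5):
        return self.log[-top_k:]             # memory recall (§3.1.3)
    def store(self, obs, act):
        self.log.append((obs, act))          # summarize/write back for later use

class LLMAgent:
    def __init__(self, llm, knowledge, memory):
        self.llm = llm                       # callable: prompt -> text
        self.knowledge = knowledge
        self.memory = memory
    def step(self, observation: str) -> str:
        facts = self.knowledge.retrieve(observation)
        recalled = self.memory.recall(observation)
        prompt = (f"Observation: {observation}\nKnowledge: {facts}\n"
                  f"Memory: {recalled}\nThink step by step, then output one action.")
        decision = self.llm(prompt)          # reasoning/planning/decision (§3.1.4)
        self.memory.store(observation, decision)
        return decision

agent = LLMAgent(llm=lambda p: "ACTION: answer the user",
                 knowledge=KB(), memory=Mem())
print(agent.step("user asks about the boiling point of water"))
```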
# 3.1.1 Natural Language Interaction | 2309.07864#46 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 46 | 3.6 ABLATION STUDY ON TRAINING PARADIGM
We conduct an ablation study on various tasks to evaluate the effect of multi-modal in-context tuning. Table 6 shows a significant enhancement of MMICL's performance due to the multi-modal in-context tuning. Significant improvements can be observed across all types and sizes of models, especially on tasks that involve multiple images. Specifically, MMICL (Stage I + Stage II) gained 15.75 and 21.05 points of improvement on IconQA-img and Bongard-HOI, respectively, compared to the Stage I-only model. This indicates that, with the help of Stage II, MMICL can handle complex multi-modal prompts and accomplish challenging tasks with multiple images. Results in Appendix J also confirm this point, with MMICL performing strongly across various video datasets.
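A quick arithmetic check of the quoted gains; the Stage I / Stage I + Stage II pairings below are our reading of Table 6, chosen because their differences match the quoted deltas, and should be treated as an assumption.

```python
# Worked check of the Stage II deltas quoted above: delta = (Stage I + II) - Stage I.
stage1   = {"IconQA-img": 45.10, "Bongard-HOI": 53.15}  # Stage I scores (assumed pairing)
stage1_2 = {"IconQA-img": 60.85, "Bongard-HOI": 74.20}  # Stage I + Stage II scores
for task, base in stage1.items():
    print(task, round(stage1_2[task] - base, 2))
# IconQA-img 15.75
# Bongard-HOI 21.05
```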
4 RELATED WORK | 2309.07915#46 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 47 | # 3.1.1 Natural Language Interaction
As a medium for communication, language contains a wealth of information. In addition to the intuitively expressed content, the speaker's beliefs, desires, and intentions may also be hidden behind it [200]. Thanks to the powerful natural language understanding and generation capabilities inherent in LLMs [25; 201; 202; 203], agents can not only proficiently engage in basic interactive conversations [204; 205; 206] in multiple languages [132; 202] but also exhibit in-depth comprehension abilities, which allow humans to easily understand and interact with agents [207; 208]. Besides, LLM-based agents that communicate in natural language can earn more trust and cooperate more effectively with humans [130]. | 2309.07864#47 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 47 | 4 RELATED WORK
Vision-Language Pretraining: Recent VLMs (Zhu et al., 2023; Liu et al., 2023b; Li et al., 2022; Alayrac et al., 2022; Dai et al., 2023) have been proven effective for aligning visual inputs with frozen LLMs to obtain cross-modal generalization ability. However, previous works overlooked multi-image VLMs, mainly focusing on handling single-image prompts. Tsimpoukelli et al. (2021) supports multi-image inputs using self-attention over images but performs poorly on downstream tasks. Although Flamingo (Alayrac et al., 2022) supports few-shot learning in VLMs and uses cross-attention to capture text-image relationships, it still struggles to make exact references to specific images.
Multi-Modal Instruction Tuning: Instruction tuning (Kung & Peng, 2023; Wei et al., 2022) achieves great success in cross-task generalization for LLMs. However, multi-modal instruction tuning still requires further exploration. Multiinstruct (Xu et al., 2023) introduces instruction tuning to enhance the instruction-following ability of VLMs. Due to the architectural design,
¶We use 0708 version of Shikra, which performs better for multi-choice questions to ensure fair competition.
| 2309.07915#47 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 48 | Multi-turn interactive conversation. The capability of multi-turn conversation is the foundation of effective and consistent communication. As the core of the brain module, LLMs, such as the GPT series [40; 41; 201], LLaMA series [201; 209], and T5 series [107; 210], can understand natural language and generate coherent, contextually relevant responses, which helps agents comprehend better and handle various problems [211]. However, even humans find it hard to communicate without confusion in one sitting, so multiple rounds of dialogue are necessary. Compared with traditional text-only reading comprehension tasks like SQuAD [212], multi-turn conversations (1) are interactive, involving multiple speakers, and lack continuity; and (2) may involve multiple topics, and the information in the dialogue may also be redundant, making the text structure more complex [147]. In general, a multi-turn conversation mainly involves three steps: (1) understanding the history of the natural language dialogue, (2) deciding what action to take, and (3) generating a natural language response (see the dialogue-loop sketch after this record). LLM-based agents are capable of continuously refining their outputs using existing information to conduct multi-turn conversations and effectively achieve the ultimate goal [132; 147]. | 2309.07864#48 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
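Relating to the three multi-turn conversation steps described in the chunk above: a minimal dialogue-loop sketch, assuming a generic `llm` callable; the prompts and names are illustrative, not the survey's implementation.

```python
def dialogue_turn(llm, history, user_msg):
    history.append(("user", user_msg))
    transcript = "\n".join(f"{role}: {text}" for role, text in history)   # (1) read history
    plan = llm(f"{transcript}\nDecide the next action (answer/clarify):") # (2) decide
    reply = llm(f"{transcript}\nAction: {plan}\nassistant:")              # (3) generate
    history.append(("assistant", reply))
    return reply

history = []
fake_llm = lambda prompt: "answer" if "Decide" in prompt else "Sure, here it is."
print(dialogue_turn(fake_llm, history, "Can you summarize the paper?"))
print(dialogue_turn(fake_llm, history, "And its limitations?"))  # history carries over
```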
2309.07915 | 48 | ¶We use 0708 version of Shikra, which performs better for multi-choice questions to ensure fair competition.
Multiinstruct still struggles with complex contexts containing multiple images. Otter (Li et al., 2023a) fine-tunes OpenFlamingo (Awadalla et al., 2023) to augment its instruction comprehension capabilities. However, Otter's dataset lacks text-to-image references and interconnected image-to-image data. This limitation hinders its ability to handle complex contexts that involve visual-textual relationships.
# 5 CONCLUSION
In this paper, we highlight the limitations of VLMs in handling complex multi-modal prompts with multiple images, which make VLMs less effective in downstream vision-language tasks. We introduce MMICL to address the aforementioned limitations and take our model a step further by extending it to multi-modal in-context learning. This enables VLMs to better understand complex multi-modal prompts. Furthermore, MMICL sets new state-of-the-art performance on both general VLM benchmarks and complex multi-modal reasoning benchmarks.
# REFERENCES
Aishwarya Agrawal, Jiasen Lu, Stanislaw Antol, Margaret Mitchell, C. Lawrence Zitnick, Dhruv Batra, and Devi Parikh. Vqa: Visual question answering, 2016. | 2309.07915#48 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 49 | High-quality natural language generation. Recent LLMs show exceptional natural language generation capabilities, consistently producing high-quality text in multiple languages [132; 213]. The coherency [214] and grammatical accuracy [133] of LLM-generated content have shown steady enhancement, evolving progressively from GPT-3 [41] to InstructGPT [24], and culminating in GPT-4 [25]. See et al. [214] empirically affirm that these language models can "adapt to the style and content of the conditioning text" [215]. The results of Fang et al. [133] suggest that ChatGPT excels at grammar error detection, underscoring its powerful language capabilities. In conversational contexts, LLMs also perform well on key metrics of dialogue quality, including content, relevance, and appropriateness [127]. Importantly, they do not merely copy training data but display a certain degree of creativity, generating diverse texts that are equally novel or even more novel than benchmarks crafted by humans [216]. Meanwhile, human oversight remains effective through the use of controllable prompts, ensuring precise control over the content generated by these language models [134]. | 2309.07864#49 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07864 | 50 | Intention and implication understanding. Although models trained on large-scale corpora are already intelligent enough to understand instructions, most are still incapable of emulating human dialogues or fully leveraging the information conveyed in language [217]. Understanding the implied meanings is essential for effective communication and cooperation with other intelligent agents [135],
and enables one to interpret others' feedback. The emergence of LLMs highlights the potential of foundation models to understand human intentions, but vague instructions or other implications still pose a significant challenge for agents [94; 136]. For humans, grasping the implied meanings of a conversation comes naturally, whereas agents must formalize implied meanings into a reward function that allows them to choose the option in line with the speaker's preferences in unseen contexts [128]. One of the main approaches to reward modeling is inferring rewards from feedback, which is primarily presented in the form of comparisons [218] (possibly supplemented with reasons [219]) and unconstrained natural language [220]. Another approach involves recovering rewards from descriptions, using the action space as a bridge [128]. Jeon et al. [221] suggest that human behavior can be mapped to a choice from an implicit set of options, which helps to interpret all the information in a single unifying formalism. By utilizing their understanding of context, agents can take highly personalized and accurate actions, tailored to specific requirements.
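One standard way to realize the comparison-based reward modeling mentioned above is a Bradley-Terry style fit; the sketch below is an illustrative formulation under that assumption, not necessarily the exact method of the cited works [218; 219].

```python
# Fit a scalar reward per option from pairwise comparison feedback by
# maximizing the likelihood that the preferred option has higher reward:
# P(w preferred over l) = sigmoid(r_w - r_l).
import math

def fit_rewards(options, comparisons, lr=0.1, steps=500):
    """comparisons: list of (winner, loser) pairs over `options`."""
    r = {o: 0.0 for o in options}
    for _ in range(steps):
        for w, l in comparisons:
            p = 1.0 / (1.0 + math.exp(-(r[w] - r[l])))
            g = 1.0 - p      # gradient of the log-likelihood w.r.t. r_w
            r[w] += lr * g
            r[l] -= lr * g
    return r

prefs = [("concise", "verbose"), ("concise", "rude"), ("verbose", "rude")]
print(fit_rewards({"concise", "verbose", "rude"}, prefs))
# "concise" receives the highest reward, matching the implied preference.
```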
# 3.1.2 Knowledge | 2309.07864#50 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 50 | Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob L Menick, Sebastian Borgeaud, Andy Brock, Aida Nematzadeh, Sahand Sharifzadeh, Mikołaj Bińkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, and Karén Simonyan. Flamingo: a visual language model for few-shot learning. In S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (eds.), Advances in Neural Information Processing Systems, volume 35, pp. 23716–23736. Curran Associates, Inc., 2022. URL https://proceedings.neurips.cc/paper_files/paper/2022/file/960a172bc7fbf0177ccccbb411a7d800-Paper-Conference.pdf. | 2309.07915#50 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 51 | # 3.1.2 Knowledge
Due to the diversity of the real world, many NLP researchers attempt to utilize data at a larger scale. This data is usually unstructured and unlabeled [137; 138], yet it contains enormous knowledge that language models can learn. In theory, language models can learn more knowledge as they have more parameters [139], and it is possible for language models to learn and comprehend everything in natural language. Research [140] shows that language models trained on a large-scale dataset can encode a wide range of knowledge into their parameters and respond correctly to various types of queries. Furthermore, this knowledge can assist LLM-based agents in making informed decisions [222]. All of this knowledge can be roughly categorized into the following types:
• Linguistic knowledge. Linguistic knowledge [142; 143; 144] is represented as a system of constraints, a grammar, which defines all and only the possible sentences of the language. It includes morphology, syntax, semantics [145; 146], and pragmatics. Only the agents that acquire linguistic knowledge can comprehend sentences and engage in multi-turn conversations [147]. Moreover, these agents can acquire multilingual knowledge [132] by training on datasets that contain multiple languages, eliminating the need for extra translation models. | 2309.07864#51 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |
2309.07915 | 51 | Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe, Yonatan Bitton, Samir Yitzhak Gadre, Shiori Sagawa, Jenia Jitsev, Simon Kornblith, Pang Wei Koh, Gabriel Ilharco, Mitchell Wortsman, and Ludwig Schmidt. Openflamingo: An open-source framework for training large autoregressive vision-language models. ArXiv, abs/2308.01390, 2023. URL https://api.semanticscholar.org/CorpusID:261043320.
Jeffrey P Bigham, Chandrika Jayant, Hanjie Ji, Greg Little, Andrew Miller, Robert C Miller, Robin Miller, Aubrey Tatarowicz, Brandyn White, Samual White, et al. Vizwiz: nearly real-time answers to visual questions. In Proceedings of the 23rd annual ACM symposium on User interface software and technology, pp. 333–342, 2010. | 2309.07915#51 | MMICL: Empowering Vision-language Model with Multi-Modal In-Context Learning | Since the resurgence of deep learning, vision-language models (VLMs) enhanced
by large language models (LLMs) have grown exponentially in popularity.
However, while LLMs can utilize extensive background knowledge and task
information with in-context learning, most VLMs still struggle with
understanding complex multi-modal prompts with multiple images, making VLMs
less effective in downstream vision-language tasks. In this paper, we address
the limitation above by 1) introducing MMICL, a new approach to allow the VLM
to deal with multi-modal inputs efficiently; 2) proposing a novel context
scheme to augment the in-context learning ability of the VLM; 3) constructing
the Multi-modal In-Context Learning (MIC) dataset, designed to enhance the
VLM's ability to understand complex multi-modal prompts. Our experiments
confirm that MMICL achieves new state-of-the-art zero-shot performance on a
wide range of general vision-language tasks, especially for complex benchmarks,
including MME and MMBench. Our analysis demonstrates that MMICL effectively
tackles the challenge of complex multi-modal prompt understanding and emerges
the impressive ICL ability. Furthermore, we observe that MMICL successfully
alleviates language bias in VLMs, a common issue for VLMs that often leads to
hallucination when faced with extensive textual context. | http://arxiv.org/pdf/2309.07915 | Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng Wang, Wenjuan Han, Baobao Chang | cs.CL, cs.AI, cs.CV | Code, dataset, checkpoints, and demos are available at
https://github.com/PKUnlp-icler/MIC | null | cs.CL | 20230914 | 20231002 | [
{
"id": "2305.15023"
},
{
"id": "1505.00855"
},
{
"id": "2306.14565"
},
{
"id": "2101.09465"
},
{
"id": "2306.13394"
},
{
"id": "2304.14178"
},
{
"id": "2305.11383"
},
{
"id": "2302.14794"
},
{
"id": "2209.06794"
},
{
"id": "2110.15943"
},
{
"id": "2305.04790"
},
{
"id": "2110.13214"
},
{
"id": "2210.11416"
},
{
"id": "2205.00363"
},
{
"id": "2302.14045"
},
{
"id": "2205.14100"
},
{
"id": "2305.10400"
},
{
"id": "2012.15723"
},
{
"id": "2103.10360"
},
{
"id": "2308.09936"
},
{
"id": "1811.00491"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2307.02469"
},
{
"id": "2308.04152"
},
{
"id": "2210.14896"
},
{
"id": "2111.02114"
},
{
"id": "2304.15010"
},
{
"id": "2305.06500"
},
{
"id": "2306.00890"
}
] |
2309.07864 | 52 | • Commonsense knowledge. Commonsense knowledge [148; 149; 150] refers to general world facts that are typically taught to most individuals at an early age. For example, people commonly know that medicine is used for curing diseases, and umbrellas are used to protect against rain. Such information is usually not explicitly mentioned in the context. Therefore, models lacking the corresponding commonsense knowledge may fail to grasp the intended meaning or misinterpret it [141]. Similarly, agents without commonsense knowledge may make incorrect decisions, such as not bringing an umbrella when it rains heavily.
• Professional domain knowledge. Professional domain knowledge refers to the knowledge associated with a specific domain like programming [151; 154; 150], mathematics [152], medicine [153], etc. It is essential for models to effectively solve problems within a particular domain [223]. For example, models designed to perform programming tasks need to possess programming knowledge, such as code format. Similarly, models intended for diagnostic purposes should possess medical knowledge like the names of specific diseases and prescription drugs. | 2309.07864#52 | The Rise and Potential of Large Language Model Based Agents: A Survey | For a long time, humanity has pursued artificial intelligence (AI) equivalent
to or surpassing the human level, with AI agents considered a promising vehicle
for this pursuit. AI agents are artificial entities that sense their
environment, make decisions, and take actions. Many efforts have been made to
develop intelligent agents, but they mainly focus on advancement in algorithms
or training strategies to enhance specific capabilities or performance on
particular tasks. Actually, what the community lacks is a general and powerful
model to serve as a starting point for designing AI agents that can adapt to
diverse scenarios. Due to the versatile capabilities they demonstrate, large
language models (LLMs) are regarded as potential sparks for Artificial General
Intelligence (AGI), offering hope for building general AI agents. Many
researchers have leveraged LLMs as the foundation to build AI agents and have
achieved significant progress. In this paper, we perform a comprehensive survey
on LLM-based agents. We start by tracing the concept of agents from its
philosophical origins to its development in AI, and explain why LLMs are
suitable foundations for agents. Building upon this, we present a general
framework for LLM-based agents, comprising three main components: brain,
perception, and action, and the framework can be tailored for different
applications. Subsequently, we explore the extensive applications of LLM-based
agents in three aspects: single-agent scenarios, multi-agent scenarios, and
human-agent cooperation. Following this, we delve into agent societies,
exploring the behavior and personality of LLM-based agents, the social
phenomena that emerge from an agent society, and the insights they offer for
human society. Finally, we discuss several key topics and open problems within
the field. A repository for the related papers at
https://github.com/WooooDyy/LLM-Agent-Paper-List. | http://arxiv.org/pdf/2309.07864 | Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, Rui Zheng, Xiaoran Fan, Xiao Wang, Limao Xiong, Yuhao Zhou, Weiran Wang, Changhao Jiang, Yicheng Zou, Xiangyang Liu, Zhangyue Yin, Shihan Dou, Rongxiang Weng, Wensen Cheng, Qi Zhang, Wenjuan Qin, Yongyan Zheng, Xipeng Qiu, Xuanjing Huang, Tao Gui | cs.AI, cs.CL | 86 pages, 12 figures | null | cs.AI | 20230914 | 20230919 | [
{
"id": "2305.08982"
},
{
"id": "1910.00125"
},
{
"id": "1511.06342"
},
{
"id": "2301.13688"
},
{
"id": "2011.00583"
},
{
"id": "1907.12108"
},
{
"id": "1701.07274"
},
{
"id": "2304.10592"
},
{
"id": "2112.00639"
},
{
"id": "2303.11366"
},
{
"id": "2302.02676"
},
{
"id": "1810.03548"
},
{
"id": "2304.06027"
},
{
"id": "1806.10729"
},
{
"id": "2212.10560"
},
{
"id": "2210.13431"
}
] |