doi stringlengths 10–10 | chunk-id int64 0–936 | chunk stringlengths 401–2.02k | id stringlengths 12–14 | title stringlengths 8–162 | summary stringlengths 228–1.92k | source stringlengths 31–31 | authors stringlengths 7–6.97k | categories stringlengths 5–107 | comment stringlengths 4–398 ⌀ | journal_ref stringlengths 8–194 ⌀ | primary_category stringlengths 5–17 | published stringlengths 8–8 | updated stringlengths 8–8 | references list |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2308.05960 | 8 | However, these approaches neglect to incorporate valuable feedback, such as environment rewards, to enhance the agent's behaviors, resulting in performance that relies solely on the quality of the pre-trained Large Language Model (LLM). Self-refine (Madaan et al., 2023a) tackles this limitation by employing a single LLM as a generator, refiner, and provider of feedback, enabling iterative refinement of outputs. However, it is not specifically tailored for real-world task-based interaction with the environment. On the other hand, REX (Murthy et al., 2023) and RAP (Hao et al., 2023) repurpose the LLM to function as both a comprehensive world model and a reasoning agent. They incorporate Monte Carlo Tree Search for strategic exploration within the vast realm of reasoning with environment rewards. This approach facilitates effective navigation and decision-making in intricate domains. Shinn et al. (2023) presents Reflexion, a framework that equips agents with dynamic memory and self-reflection capabilities, enhancing their reasoning skills. Self-reflection plays a pivotal role, allowing autonomous agents to | 2308.05960#8 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06391 | 8 | Embodied Agents with LLMs In a parallel direction, recent works such as ReAct (Yao et al., 2023), Reflexion (Shinn et al., 2023), AutoGPT (Significant-Gravitas, 2023), and Voyager (Wang et al., 2023a), take an agent-based approach and augment the reasoning process through a closed "while" loop that feeds environment observations back to the LLM. ReAct (Yao et al., 2023) allows the LLM agent to either take an action or a "thinking" step. This allows the LLM to augment its context with its reasoning, which can be seen as | 2308.06391#8 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
¹Our code is available at github.com/itl-ed/llm-dp
agent-driven Chain-of-Thought prompting. Voyager (Wang et al., 2023a) incrementally builds an agent's capabilities from its interactions with the environment and an accessible memory component (skill library). While many of these works show promising results in building general executable agents in embodied environments (Wang et al., 2023a), they still require many expensive calls to the LLMs, are limited by the LLM's context window, and do not guarantee optimal plans.
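As an illustration of this closed-loop pattern (a sketch only, not code from either paper), a ReAct-style agent can be written as a short loop in which the model either emits a thinking step or an executable action; `llm_complete` and `env` below are hypothetical stand-ins.

```python
# Hedged sketch of a ReAct-style closed loop: the LLM either emits a "think: ..."
# step, which only grows its own context, or an action that is executed in the
# environment, whose observation is fed back into the context.
def react_loop(llm_complete, env, task, max_steps=30):
    context = f"Task: {task}\n"
    for _ in range(max_steps):
        step = llm_complete(context)       # e.g. "think: the mug is likely on the desk" or "go to desk 1"
        context += step + "\n"
        if step.startswith("think:"):      # thinking steps are not sent to the environment
            continue
        obs, done = env.step(step)         # hypothetical env interface returning (observation, done)
        context += f"Observation: {obs}\n"
        if done:
            return True
    return False
```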
# 3 Alfworld | 2308.06391#8 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
2308.06394 | 8 | hallucination rates by 41%.
3. We show that our reward models trained on this dataset can reduce hallucination rates by 55% in InstructBLIP with best-of-64 rejection sampling. The reward model generalizes to other LVLMs, reducing hallucination rates in LLaVA and mPLUG-OWL by 15% and 57% respectively with best-of-16 sampling.
4. We show that our reward model is an effective evaluator of hallucination rates, giving scores aligned with human ratings. | 2308.06394#8 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 9 | agents with dynamic memory and self-reflection capabilities, enhancing their reasoning skills. Self-reflection plays a pivotal role, allowing autonomous agents to iteratively refine past actions, make improvements, and prevent repetitive errors. Recently, Yao et al. (2023b) proposes a framework, namely Retroformer, which leverages policy gradient optimization to align the agent's behaviors with environment-specific rewards by learning a plug-in retrospective language model. | 2308.05960#9 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06391 | 9 | # 3 Alfworld
Alfworld (Shridhar et al., 2020) is a text-only home environment where an agent is tasked with seven possible tasks, such as interacting with one or more objects and placing them in a specific receptacle. At the start of each episode, the goal is given in natural language, and the initial observation does not include the location of any objects. Therefore an agent must navigate the environment to search for the relevant objects and perform the correct actions. The possible locations of the environment are known, and the agent can navigate to any receptacle by using a "go to" action. However, since none of the objects' locations are initially observed, the agent must be able to plan around uncertainty, estimate where objects are likely to be observed and adjust accordingly.
# 4 LLM-DP
To tackle an embodied environment like Alfworld, we introduce the Large Language Model Dynamic Planner (LLM-DP), which operates as a closed-loop agent. LLM-DP uses a combination of language understanding and symbolic reasoning to plan and solve tasks in the simulated environment. The model tracks a World State W and beliefs B about predicates in the environment, uses an LLM to translate the task description into an executable goal state and samples its beliefs to generate plausible world states. We describe the working of the LLM-DP agent as pseudo-code in Algorithm 1. | 2308.06391#9 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
2308.06394 | 9 | 4. We show that our reward model is an effective evaluator of hallucination rates, giving scores aligned with human ratings.
Related Work Large Vision Language Models (LVLMs) have seen performative advancements in tasks such as generating text from images (Li 2023) and multi-modal in-context learning (Alayrac et al. 2022). Recent work has focused on utilizing instruction tuning techniques to enhance the zero-shot performance of instruction-aware LVLMs across different vision-language tasks (Liu et al. 2023b; Dai et al. 2023). These approaches utilize GPT-4 to generate multi-modal instruction tuning datasets (Liu et al. 2023b) where the image context is provided to GPT-4 through symbolic representations of the image such as captions and object bounding boxes. Others combine datasets across various multi-modal tasks (Dai et al. 2023) with hand-crafted instructions, a method that has found success in training traditional LLMs (Wei et al. 2021). This achieves state-of-the-art performance in a variety of multi-modal tasks, such as visual and video question answering, image captioning, and image classification. | 2308.06394#9 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 10 | # 2.2 WEB AGENT
Web navigation is the foundation for humans to collect information and communicate. Before the boom of LLMs, previous endeavours (Liu et al., 2018; Shi et al., 2017) already explored how to train web agents in simulated web environments. Very recently, a series of works have been devoted to developing LAAs to tackle complex web navigation tasks. Though the action space of web navigation is almost infinite due to the numerous elements available online, these actions can be divided into a few operation types, such as click, type, and select. MIND2Web (Deng et al., 2023) collects web-browsing data to fine-tune an LLM to generate executable actions, which functions as a Web LAA. WebAgent (Gur et al., 2023) is able to decompose a task instruction into sub-tasks and directly generates executable Python programs for web navigation. WebArena (Zhou et al., 2023) supports realistic task simulation for designing Web LAAs. LangChain and ChatGPT both provide convenient web plugins such that the LLM behaves as a Web LAA. We believe that web navigation is the next fundamental task for LAAs to demonstrate their superiority.
2.3 TOOL AGENT | 2308.05960#10 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06391 | 10 | # 4.1 Assumptions
We make several simplifying assumptions when applying the LLM-DP framework to Alfworld:
1. Known action-descriptions and predicates: Our input to the planner and the LLM requires the PDDL domain file, which describes what actions can be taken, their pre- and post-conditions, and what predicates exist.
Algorithm 1 LLM-DP Pseudo-code
Require: LLM, PG, AS, Domain, task, obs_0
goal ← LLM(Domain, task)
W, B ← observe(goal, obs_0)
while goal not reached do
    plans ← ∅
    for i in N do
        w_belief ← LLM(B, W)
        plans ← plans ∪ PG(w_belief ∪ W)
    end for
    action ← AS(plans)
    obs ← Env(action)
    W, B ← observe(action, obs)
end while
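A minimal Python sketch of this loop follows; the `llm`, `planner`, and `env` objects and their methods are hypothetical stand-ins for the paper's LLM, Plan Generator (PG), Action Selector (AS), and Alfworld interface, not actual APIs from the released code.

```python
# Hedged sketch of the Algorithm 1 control loop; all interfaces are assumptions.
def llm_dp_episode(llm, planner, env, domain, task, obs0, n_samples=3, max_steps=50):
    goal = llm.generate_goal(domain, task)          # LLM: task text -> set of PDDL goal facts
    world, beliefs = env.observe(obs0)              # W: known facts (set of strings), B: open predicates
    for _ in range(max_steps):
        if goal <= world:                           # goal reached when all goal facts are known true
            return True
        plans = []
        for _ in range(n_samples):                  # sample N plausible completions of the world
            w_belief = llm.sample_belief(beliefs, world)
            plan = planner.solve(domain, world | w_belief, goal)   # symbolic planner, e.g. BFS(f)
            if plan:
                plans.append(plan)
        action = min(plans, key=len)[0] if plans else "look"       # simple Action Selector: shortest plan
        obs = env.step(action)
        world, beliefs = env.update(world, beliefs, action, obs)
    return False
```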
2. Perfect observations: The Alfworld environment provides a perfect textual description of the current location. This observation also contains the intrinsic attributes of observed objects and receptacles, such as whether or not a given receptacle can be opened.
3. Causal Environment: changes in the environment are entirely caused by the agent.
4. Valid actions always succeed | 2308.06391#10 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
2308.06394 | 10 | Nevertheless, a significant challenge associated with LVLMs has emerged: preventing hallucinations when gen- erating textual output. It is essential to address and mitigate these hallucinations to enhance the reliability and accuracy of LVLMs in production usecases.
Hallucination Analysis in LVLMs In (Li et al. 2023), the evaluation metric "POPE" is proposed to evaluate hallucinations in LVLMs by polling questions about generated text. They observed that current state-of-the-art LVLM (InstructBLIP) has the lowest object hallucination rates among recent LVLMs. Another relevant contribution by Liu et al. (Liu et al.
2023a) is the introduction of the LRV dataset. This dataset contains positive and negative instructions specifically designed to enhance the robustness of LVLMs against hallucination and inconsistent text generation. Furthermore, they proposed a method called GAVIE, which leverages GPT-4 to assist in evaluating preferred answer generations.
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 11 | 2.3 TOOL AGENT
The evolution of LLMs and their interactions with various tools has been a focal point of recent research. The concept of a "Tool Agent" encapsulates the idea of LLMs leveraging external tools to enhance their capabilities and solve complex tasks. One of the pioneering works in this domain is the introduction of "Gorilla" (Patil et al., 2023). This model is adept at writing API calls and exhibits the ability to adapt to test-time document changes. Another noteworthy work is the "ToolLLM" framework (Qin et al., 2023). This open-source framework incorporates LLMs to efficiently engage with a myriad of tools, particularly APIs, to execute intricate tasks. The framework encompasses ToolBench, an instruction-tuning dataset tailored for tool utilization. More recently, a paradigm shift in teaching LLMs to use new tools has been discussed in (Hsieh et al., 2023), which champions the use of tool documentation. The authors present empirical evidence suggesting that tool documentation offers detailed descriptions of tool usage, which is a more effective and scalable approach. Notably, their research indicates that zero-shot prompts, which are exclusively based on tool documentation, can rival the performance of few-shot prompts.
# 3 AGENT ARCHITECTURES | 2308.05960#11 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
3. Causal Environment: changes in the environment are entirely caused by the agent.
4. Valid actions always succeed
# 4.2 Generating the Goal State
LLM-DP uses an LLM to generate a PDDL goal, given the natural language instruction (task) and the valid predicates defined by the PDDL domain file. Figure 1 shows an example task converted to a valid PDDL goal. For each episode, we use a set of three in-context examples that are fixed for the entire evaluation duration. We use the OpenAI gpt-3.5-turbo-0613 model with a temperature of 0 in all our LLM-DP experiments.
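To make the mechanism concrete, here is a hedged sketch of such a goal-generation call; `chat` is any text-in/text-out callable (in the paper this would be gpt-3.5-turbo-0613 at temperature 0), and the example task/goal pair and predicate names are illustrative assumptions rather than the paper's exact prompt.

```python
# Hedged sketch of goal generation: the task plus a few fixed in-context examples are
# sent to a chat LLM and the reply is treated as a PDDL (:goal ...) expression.
# The example pair and predicate names below are made up for illustration.
IN_CONTEXT_EXAMPLES = [
    ("put a clean plate on the countertop",
     "(:goal (exists (?o - plateType ?r - countertopType) (and (isClean ?o) (inReceptacle ?o ?r))))"),
    # ...the paper uses three fixed examples per episode
]

def generate_pddl_goal(chat, domain_predicates, task):
    prompt = "Translate the task into a PDDL goal using only these predicates:\n"
    prompt += ", ".join(domain_predicates) + "\n\n"
    for nl_task, pddl_goal in IN_CONTEXT_EXAMPLES:
        prompt += f"Task: {nl_task}\nGoal: {pddl_goal}\n\n"
    prompt += f"Task: {task}\nGoal:"
    return chat(prompt).strip()    # expected to return a single (:goal ...) s-expression
```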
# 4.3 Sampling Beliefs
We parse the initial scene description into a structured representation of the environment W and a set of beliefs B. The internal representation of the world W contains all known information, for instance, all receptacles (possible locations) in the scene from the initial observation and their intrinsic attributes are known (i.e. a fridge holds the isFridge predicate). Whereas the set of beliefs B are a set of possible valid predicates that can be true or false and which the model does not have enough information to disambiguate. In Alfworld, the objects' locations are unknown; therefore, the set of possible predicates for each object includes all possible locations. | 2308.06391#11 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
These studies collectively contribute to the understanding and mitigation of hallucination-related challenges in LVLMs, by providing evaluation metrics, datasets, and evaluation methods that enhance the reliability and consistency of text generation in multi-modal models. Our work extends the scope of the previous works by not only considering hallucinations on the presence of objects, but also on descriptions of objects such as relative positioning or attributes. We also consider hallucinations on complex object reasoning.
Aligning to Human Preferences Despite having strong zero-shot performance on classical language benchmark datasets, pre-trained LLMs still struggle to produce detailed generations on par with those written by real humans. Supervised fine-tuning on demonstration data written by humans is not enough, where recent works have focused on using Reinforcement Learning with Human Feedback (RLHF) to address this problem (Stiennon et al. 2020; Touvron et al. 2023; Ouyang et al. 2022; OpenAI 2023). | 2308.06394#11 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 12 | # 3 AGENT ARCHITECTURES
In this section, we compare various LAA architectures. We first present how to design different solo LAAs based on the intuition of existing work. We then present our orchestration design for multiple LAAs, i.e. BOLAA.
Figure 1: The LAA architectures for Zeroshot-LAA (ZS-LAA), ZeroshotThink LAA (ZST-LAA) and ReAct LAA. ZS-LAA generates actions from the LLM with a zeroshot prompt. ZST-LAA extends ZS-LAA with self-think. ReAct LAA advances ZST-LAA with a fewshot prompt. They all resolve a given task by interacting with the environment via actions to collect observations. Best viewed in color.
3.1 SOLO AGENTS | 2308.05960#12 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06391 | 12 | Average Accuracy (%)
Model | clean | cool | examine | heat | put | puttwo | overall (↑) | LLM Tokens (↓)
---|---|---|---|---|---|---|---|---
LLM-DP | 0.94 | 1.00 | 1.00 | 0.87 | 1.00 | 0.94 | 0.96 | 633k
LLM-DP-random | 0.94 | 1.00 | 1.00 | 0.87 | 0.96 | 1.00 | 0.96 | 67k
ReAct (Yao et al., 2023) | 0.61 | 0.81 | 0.89 | 0.30 | 0.79 | 0.47 | 0.64 | —*
ReAct (ours) | 0.35 | 0.90 | 0.33 | 0.65 | 0.71 | 0.29 | 0.54 | 9.16M

(a) The average accuracy and number of LLM Tokens processed (context + generation) for each model. *Not reported.

Average Episode Length
Model | clean | cool | examine | heat | put | puttwo | overall (↓)
---|---|---|---|---|---|---|---
LLM-DP | 12.00 | 13.67 | 12.06 | 12.30 | 12.75 | 17.59 | 13.16
LLM-DP-random | 15.06 | 17.14 | 10.56 | 14.04 | 14.62 | 18.94 | 15.02
ReAct (ours) | 25.10 | 9.86 | 21.67 | 14.70 | 15.33 | 24.94 | 18.69
| 2308.06391#12 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
RLHF typically uses Proximal Policy Optimization (Schulman et al. 2017) to optimize a policy model with rewards from a reward model. This reward model is typically trained on preference pairs of same-prompt generations, often sourced from the base policy model. This preference is usually given by humans, though attempts have been made to use more traditional metrics such as BLEU (Papineni et al. 2002) and ROUGE (Ganesan 2018) as proxies. Using human preferences is more effective in aligning LLMs to human preferences (Stiennon et al. 2020), though sees mixed results in hallucination prevention. Ouyang et al. (Ouyang et al. 2022) found that RLHF helps smaller (6B) language models reduce their hallucination rate, while having the opposite effect on larger models (175B). In this paper, we will focus on relatively smaller multi-modal models (7B) that can be more accessible to end users.
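For concreteness, a minimal sketch of the standard pairwise objective used to train such a reward model on preference pairs is shown below (an illustrative PyTorch snippet, not the paper's training code).

```python
# Hedged sketch of the pairwise (Bradley-Terry style) preference loss for a reward model:
# given scalar rewards for a preferred and a rejected generation of the same prompt,
# maximize the log-probability that the preferred generation scores higher.
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # reward_chosen, reward_rejected: shape (batch,) scores from the reward model head
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()
```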
DPO has emerged recently as a viable alternative to RLHF | 2308.06394#12 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 13 | 3.1 SOLO AGENTS
Hereafter, we present 5 different LAAs. Each type of LAA is able to interact with the environment with its own interaction strategy.
Zeroshot LAA (ZS-LAA) directly extends the LLM to be an action executor. Specifically, the prompt for LLMs to function as the action executor consists of detailed descriptions for those actions. For example, if we prompt the LAA to understand the click action with "click: using this action to click observed [button], the clickable buttons are in [].", it may behave as a web navigation agent. We present the architecture of ZS-LAA in Figure 1(a). The working flow is as follows:
⢠Initial step: firstly, the ZS-LAA receives the task instruction and constructs the zeroshot prompt. Then, the LLM layer generates a possible response, which is parsed to output a feasible action. After that, the observation from environment is appended into the agent memory.
⢠Working teps: the agent checks whether the task is finished. If not, ZS-LAA retrieves the previous actions and observations from memory, and constructs the prompts for LLM to generate the next executable actions. ZS-LAA continues the working stage until reaching the maximum steps or completing the task. | 2308.05960#13 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
(b) The average episode length for each model, where the length of an episode denotes how many actions the agent has taken or attempted to take to complete a task. We do not count the "thinking" action of ReAct as an action in this metric.
Table 1: Summary of model performance on the Alfword test set. LLM-DP and LLM-DP-random differ in the sampling strategy of the belief. LLM-DP uses an LLM to generate n = 3 plausible world states, while LLM-DP-random randomly samples n = 3 plausible world states.
LLM-DP uses stored observations W, beliefs B and an LLM to construct different planning problem files in PDDL. A PDDL problem file includes the objects observed (:objects), a representation of the current state (:init) of the world and the object attributes, and the goal to be achieved (:goal). The goal is derived from the LLM (Section 4.2), while the objects and their attributes are obtained from W (observations) and the beliefs B about the objects.
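As an illustrative sketch (assumed structure, with a made-up domain name and predicate spellings), such a problem file could be assembled like this:

```python
# Hedged sketch of assembling a PDDL problem file from observed objects, the current
# facts (W plus sampled beliefs), and the LLM-generated goal; names are illustrative.
def build_pddl_problem(name, objects, init_facts, goal):
    # objects: dict like {"plate-1": "plateType", "countertop-1": "countertopType"}
    # init_facts: iterable of ground predicates, e.g. "(inReceptacle plate-1 countertop-1)"
    obj_lines = "\n    ".join(f"{o} - {t}" for o, t in objects.items())
    init_lines = "\n    ".join(init_facts)
    return (
        f"(define (problem {name}) (:domain alfworld)\n"
        f"  (:objects\n    {obj_lines})\n"
        f"  (:init\n    {init_lines})\n"
        f"  {goal})\n"
    )
```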
# 4.4 Plan Generator | 2308.06391#13 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
2308.06394 | 13 | DPO has emerged recently as a viable alternative to RLHF
for preference alignment, optimizing the policy model directly without needing to train a reward model and sample rewards through reinforcement learning (Rafailov et al. 2023). It has shown comparable performance with RLHF in summarization and chatbot use cases on language models, and maintains strong performance in higher temperature sampling. At the same time, it avoids the unstable and brittle process of training models with RL (Engstrom et al. 2020).
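For reference, a minimal sketch of the DPO objective as described in Rafailov et al. (2023) follows; this PyTorch snippet is illustrative, not the paper's implementation.

```python
# Hedged sketch of the DPO loss: the policy is trained directly on preference pairs
# using log-probabilities under the policy and a frozen reference model, with no
# explicit reward model or RL rollout.
import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    # each argument: shape (batch,) summed log-probs of a full generation
    chosen_ratio = logp_chosen - ref_logp_chosen
    rejected_ratio = logp_rejected - ref_logp_rejected
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```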
Fine-grained Preferences A limitation of both RLHF and DPO is their lack of fine-grained interpretability regarding what makes one generation more preferred than the other. Recent research has made significant progress in leveraging fine-grained user preferences to improve the performance and interpretability of reward models. For example, Wu et al. (Wu et al. 2023) utilize fine-grained human feedback to train multiple reward models at different density levels. These reward models covered passage level preferences as in the traditional RLHF setting, but also sentence level and sub-sentence level preferences in the form of error identification. (Lightman et al. 2023) employs process supervision, providing human feedback on individual steps for more robust rewards. | 2308.06394#13 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
ZS-LAA is a minimal LAA architecture. It enables the action-generation ability of the LLM via the zeroshot prompt layer, which is easy to generalize to new environments and requires no examples.
ZeroshotThink LAA (ZST-LAA) is an extended version of ZS-LAA. Different from ZS-LAA, ZST-LAA has an additional self-think flow. The architecture of ZST-LAA is presented in Figure 1(b), where we denote the self-think flow with pink arrow lines. Self-think runs in the intermediate steps of the action-generation flow, which enables the Chain-of-Thought (CoT) reasoning ability.
• Self-think Step: before generating the next action, ZST-LAA collects observations and previous actions to construct the think prompt. Then, the thought is stored into memory.
The self-think step is generally useful for reasoning tasks. Note that the think prompt is also in a zero-shot format, such as "think: using this action to plan your actions and reasoning". | 2308.05960#14 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06391 | 14 | # 4.4 Plan Generator
Upon constructing the different PDDL problems, the agent uses a Plan Generator (PG) to solve each problem and obtain a plan. We use the BFS(f) solver (Lipovetzky et al., 2014) implemented as an executable by LAPKT (Ramirez et al., 2015). A generated plan is a sequence of actions, where each action is represented in a symbolic form, which, if executed, would lead to the goal state from the initial state.
Since B includes possible predicates which are unknown, we sample from B using an LLM to obtain w_belief. For instance, our belief could be that (inReceptacle tomato ?x) where ?x can be countertop, cabinet, fridge, etc. Since we want to condition the sampling of where the tomato can appear, we pass the known world state W along with the predicate (in this case inReceptacle) and its options to the LLM. This sampling leverages the LLM to complete a world state and is extendable to any unknown predicate from which a set of beliefs can be deduced. We also compare LLM sampling with random sampling (llmdp-random).
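A hedged sketch of this step is shown below; `chat` is a hypothetical text-in/text-out callable, and the random branch corresponds to the llmdp-random baseline.

```python
# Hedged sketch of belief sampling: for an unknown predicate such as
# (inReceptacle tomato ?x), ask the LLM (or a random baseline) for a plausible
# grounding given the known world state.
import random

def sample_belief(chat, world_facts, obj, candidate_receptacles, use_llm=True):
    if not use_llm:                                   # llmdp-random baseline
        return random.choice(candidate_receptacles)
    prompt = (
        "Known facts:\n" + "\n".join(sorted(world_facts)) + "\n"
        f"Where is the {obj} most likely to be? Options: "
        + ", ".join(candidate_receptacles) + "\nAnswer with exactly one option."
    )
    answer = chat(prompt).strip()
    # fall back to a random choice if the LLM answer is not a valid option
    return answer if answer in candidate_receptacles else random.choice(candidate_receptacles)
```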
# 4.5 Action Selector | 2308.06391#14 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
2308.06394 | 14 | To extend this fine-grained feedback mechanism into the multi-modal domain, we introduce a new dataset for multi-modal hallucination detection. Our dataset comprises 4,000 images with 4 detailed descriptions each, for a total of 16,000 image-description pairs, annotated at the sub-sentence level to indicate the accuracy of the generated descriptions. Similarly to (Wu et al. 2023), we train sub-sentence and sentence level reward models on this dataset. We also modify the DPO loss to utilize fine-grained annotations.
M-HalDetect: Multi-Modal Hallucination Detection Dataset Dataset Description In this section, we introduce the M-HalDetect dataset that incorporates fine-grained annotations for identifying hallucinations in detailed image descriptions generated by LVLMs. The dataset comprises image-description pairs sampled from 4,000 images taken from the val2014 split of the Common Objects in Context (COCO) dataset (Lin et al. 2014). The dataset is divided into a training set with 3,200 images and a development set with 800 images.
We choose to utilize the validation set of COCO to avoid potential training data regurgitation from LVLMs trained on the COCO training set. This is roughly 10% of the original COCO validation set, leaving enough data untouched to not impact further validation too heavily. | 2308.06394#14 | Detecting and Preventing Hallucinations in Large Vision Language Models |
2308.05960 | 15 | ReAct LAA additionally advances ZS-LAA in the prompt layer, where fewshot examples are provided. The architecture of ReAct LAA is illustrated in Figure 1(c). ReAct LAA is able to leverage successful running examples to improve the action generation ability of the LLM and enhance the environment interaction of the LAA, because those fewshot examples endow the LLM with in-context learning ability. However, the drawback of ReAct LAA is that, due to the limited context length, fewer tokens remain available for other prompt components once the fewshot examples occupy part of the prompt.
PlanAct LAA is designed to facilitate the planning ability of LAA. PlanAct LAA differs from ZS-LAA in two parts: 1) the planning flow and 2) the fewshot prompt. The architecture is depicted
Figure 2: The LAA architectures for PlanAct LAA and PlanReAct LAA.
in Figure 2. The planning flow is executed before the initial action generation step, and uses an additional plan prompt to construct the input for the core LLM. | 2308.05960#15 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents |
2308.06391 | 15 | # 4.5 Action Selector
The Action Selector (AS) module decides the agent's immediate next action. It takes the planner's output, a set of plans, and selects an action from them. In our Alfworld experiments, the Action Selector simply selects the shortest plan returned. If no valid plans are returned, either all sampled states already satisfy the goal, there is a mistake with the constructed domain/problem files, or the planner has failed to find a path to the goal. In the first case, we re-sample random world states and re-run the planners once.
We describe our likely world state as the union of a sampled set of beliefs and the known world state, w_belief ∪ W. By sampling i = 1, ..., N different sets of beliefs during the planning loop, we obtain N likely world states. Finally, we convert each likely world state to a list of predicates to interface with the PDDL planner.
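A sketch of the planning loop and shortest-plan selection described above, assuming a hypothetical solve_pddl wrapper around the external BFS(f) executable; the real PDDL serialization and planner interface differ in detail:

```python
def make_problem(predicates):
    # Toy serialization of a likely world state into a PDDL-style :init block.
    return "(define (problem p) (:init " + " ".join(sorted(predicates)) + "))"

def solve_pddl(domain, problem):
    # Hypothetical wrapper around the BFS(f) planner executable; returns a list
    # of symbolic actions, or None when no plan is found.
    return ["(gotoLocation agent countertop)", "(pickupObject agent tomato)"]

def select_action(domain, known_state, belief_sets):
    plans = []
    for beliefs in belief_sets:                   # i = 1, ..., N sampled belief sets
        likely_state = set(known_state) | set(beliefs)
        plan = solve_pddl(domain, make_problem(likely_state))
        if plan:
            plans.append(plan)
    if not plans:                                 # caller may re-sample random world states
        return None
    return min(plans, key=len)[0]                 # first step of the shortest valid plan

print(select_action("alfworld-domain",
                    {"(atLocation agent kitchen)"},
                    [{"(inReceptacle tomato countertop)"}, {"(inReceptacle tomato fridge)"}]))
```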
We also propose exploring different strategies when valid plans cannot be found. For instance, similarly to self-reflection (Shinn et al., 2023), the Action Selector could prompt an update in the agent's belief about the world state if none of the generated problem descriptions are solvable. The Action Selector could also interact with a human teacher
or oracle to adjust its understanding of the environment (problem) or its logic (domain). | 2308.06391#15 | Dynamic Planning with a LLM |
2308.06394 | 15 | To generate responses, we prompt InstructBLIP (Dai et al. 2023) with each image and a randomly selected question from a pool of instructions for describing an image. We initially reuse instructions from InstructBLIP's detailed image description training data, which were sourced from the LLaVA-150k (Liu et al. 2023b) dataset. During initial analysis, we observed that doing so led to less diverse responses, potentially due to the influence of this dataset during training. To address this, we added our own prompts to improve generation diversity. An exhaustive list of question prompts is provided in the Appendix.
We sample four responses per image from InstructBLIP using nucleus sampling with a temperature of 1.0. This creates 16k image-prompt-response triplets, split between 12,800 samples in the train split and 3,200 samples in the val split. | 2308.06394#15 | Detecting and Preventing Hallucinations in Large Vision Language Models |
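A schematic version of this collection step; generate_description is a stand-in for the actual InstructBLIP sampling call, and the prompt pool and bookkeeping here are illustrative only:

```python
import random

PROMPTS = [
    "Describe this image in detail.",
    "What is happening in this picture?",
]

def generate_description(image_path, prompt, temperature=1.0, top_p=0.9):
    # Placeholder for an InstructBLIP forward pass with nucleus sampling.
    return f"A sampled description of {image_path} for prompt: {prompt!r}"

def build_triplets(image_paths, n_samples=4):
    triplets = []
    for image_path in image_paths:
        prompt = random.choice(PROMPTS)            # one randomly selected question per image
        for _ in range(n_samples):                 # four sampled responses per image
            response = generate_description(image_path, prompt, temperature=1.0)
            triplets.append({"image": image_path, "prompt": prompt, "response": response})
    return triplets

print(len(build_triplets(["img_0001.jpg", "img_0002.jpg"])))  # 8 triplets for 2 images
```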
2308.05960 | 16 | in Figure 2. The planning flow is executed before the initial action generation step, and uses an additional plan prompt to construct the input for the core LLM.
• Planning Step: PlanAct LAA generates a plan for a given task before interacting with environments. The plan is memorized and will be retrieved to construct prompts.
It is worth noting that the plan prompt in this paper is in a fewshot format, which allows the LAA to generate plans based on previous successful plans.
PlanReAct LAA extends PlanAct LAA with an additional self-think flow, which also enables the CoT ability. The architecture of PlanReAct LAA is presented in Figure 2. Intuitively, since the planning flow is executed before the LAA observes the environment, the self-think flow alleviates the hallucination incurred by incorrect plans.
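A high-level sketch of how the plan and self-think steps compose in the PlanAct and PlanReAct variants; call_llm and the environment interface are placeholders rather than the released implementation:

```python
def call_llm(prompt: str) -> str:
    # Placeholder for the core LLM; returns a plan, a thought, or an action string.
    return "search[travel monopod camera tripod]"

def run_episode(task, env, use_think=False, max_steps=10):
    """PlanAct when use_think=False; PlanReAct adds a self-think step before each action."""
    history = []
    # Planning step: executed once, before any interaction with the environment.
    plan = call_llm(f"Plan (fewshot plan examples omitted) for task: {task}")
    for _ in range(max_steps):
        context = f"Task: {task}\nPlan: {plan}\nHistory: {history}"
        if use_think:
            thought = call_llm(context + "\nthink:")
            history.append(("think", thought))
            context += f"\nThought: {thought}"
        action = call_llm(context + "\nAction:")
        observation, reward, done = env.step(action)
        history.append((action, observation))
        if done:
            return reward
    return 0.0
```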
Next, we introduce our multi-agent orchestrating architecture, i.e. BOLAA.
3.2 BOLAA: ORCHESTRATING MULTIPLE AGENTS.
Figure 3: The BOLAA architecture, which employs a controller to orchestrate multiple LAAs. | 2308.05960#16 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents |
2308.06391 | 16 | or oracle to adjust its understanding of the environment (problem) or its logic (domain).
# 4.6 Observation Processing
LLM-DP uses the result of each action to update its internal state representation. It uses the symbolic effects of the action to infer changes in the state of the objects and receptacles. Then it integrates the information from the new observation, which might reveal additional details not directly inferred from the action itself. For instance, opening an unseen drawer might reveal new objects inside. Observing also updates the beliefs: if an object is observed at a location, it cannot be elsewhere, but if an object is not observed at a location, it cannot be there. Observations incorporate beliefs into W.
If the agent detects new information from the scene - such as discovering new objects - it triggers a re-planning process. The agent then generates a new set of possible PDDL problems using the updated state representation, and corresponding plans using the Plan Generator. This approach is similar to some Task and Motion Planning (TAMP) methods (Garrett et al., 2018; Chen et al., 2023), enabling the agent to adapt to environmental changes and unexpected outcomes of actions.
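A toy sketch of this observation-processing step, with facts encoded as (predicate, object, value) triples; the real LLM-DP state representation and belief update are richer than this:

```python
def contradicted(belief, world_state):
    # A location belief (pred, obj, loc) is contradicted once the object has
    # been observed somewhere else.
    pred, obj, loc = belief
    return any(p == pred and o == obj and l != loc for (p, o, l) in world_state)

def process_observation(world_state, beliefs, action_effects, observed_facts):
    """Update the symbolic world state and beliefs after executing one action."""
    world_state = set(world_state) | set(action_effects)   # symbolic effects of the action
    new_facts = set(observed_facts) - world_state           # e.g. objects revealed in a drawer
    world_state |= set(observed_facts)
    beliefs = {b for b in beliefs if not contradicted(b, world_state)}
    replan_needed = bool(new_facts)                          # new information triggers re-planning
    return world_state, beliefs, replan_needed

state, beliefs, replan = process_observation(
    {("atLocation", "agent", "drawer1")},
    {("inReceptacle", "tomato", "fridge")},
    action_effects={("opened", "drawer1", "true")},
    observed_facts={("inReceptacle", "tomato", "drawer1")},
)
print(replan)  # True: the tomato was discovered, so new PDDL problems are generated
```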
# 5 Results | 2308.06391#16 | Dynamic Planning with a LLM |
2308.06394 | 16 | Dataset Categories The annotation process involves categorizing different segments of each response into three categories: (i) Accurate, (ii) Inaccurate, and (iii) Analysis. We also include an Unsure category for ambiguous cases. We define the classes as follows:
• Accurate: Objects exist in the image, their descriptions are accurate according to the image, and any described relationships can be accurately inferred from the image.
• Inaccurate: Objects do not exist in the image or their descriptions are inaccurate. Furthermore, if the analysis about the image is not plausible, it is also marked as Inaccurate.
• Analysis: Scene or object analysis including complex reasoning or interpretations about the image. These are portions of the data that are more subjective and not grounded visually within the image.
• Unsure: This category is reserved as a last resort if annotators cannot make a judgment about the sentence segment into one of the above three categories.
We provide fine-grained annotations for these 3 categories on the detailed descriptions of images generated by the LVLM. The annotations are provided at the sub-sentence level, i.e. one sentence can consist of multiple segments from different classes, as seen in Figure 1. | 2308.06394#16 | Detecting and Preventing Hallucinations in Large Vision Language Models |
2308.05960 | 17 | Figure 3: The BOLAA architecture, which employs a controller to orchestrate multiple LAAs.
Despite the success of existing LLMs in completing various language understanding tasks, plenty of issues are still under-explored, such as context length constraints, in-context learning and generalization ability, etc. Hence, it is challenging to employ a solo LAA to complete all tasks, especially when tasks are of high complexity. Therefore, we propose a new agent architecture for orchestrating multiple LAAs, which is illustrated in Figure 3. BOLAA has two main modules, the labor agents pool and the controller. The labor agents pool manages multiple LAAs. Each LAA may only focus on generating one type of action. For example, in the web navigation environment, we could establish a click LAA and a search LAA. In this way, the former only generates the next button to click, while the latter only outputs search queries, which divides a complex task into feasible sub-tasks. The controller is devised to selectively call LAAs from the agents pool. The controller has the agents selection
| 2308.05960#17 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents |
2308.06391 | 17 | # 5 Results
We contrast the LLM-DP approach with ReAct (an LLM-only baseline) from the original implementation by Yao et al. (2023). Since we use a different backbone LLM model (gpt-3.5-turbo rather than text-davinci-002) than the ReAct baseline for cost purposes, we also reproduce their results using gpt-3.5-turbo and adapt the ReAct prompts to a chat format.
As shown in Table 1, LLM-DP solves Alfworld almost perfectly (96%) compared to our baseline reproduction of ReAct (53%). LLM-DP can translate the task description into an executable PDDL goal 97% of the time, but sampling reduces the accuracy further when it fails to select a valid set of possible world states, for instance, by sampling states where the goal is already satisfied.
We note that the ReAct baseline makes different assumptions about the problem; while it does not require a domain file containing the action-descriptions and object predicates, it uses two separate human-annotated episodes per example to bootstrap its in-context logic. ReAct also switches out which examples to use in-context based on | 2308.06391#17 | Dynamic Planning with a LLM |
2308.06394 | 17 | To make the annotation process user-friendly, we give annotators some leeway to miss a few words in the annotations if there are too many segments in a sentence to be annotated. The unmarked words in a sentence are by default considered "Accurate". In our analysis, we noticed that sometimes annotators skip annotating punctuation, connector words, or introductory sub-sentences such as "The image features" (illustrated in Figure 1).
Dataset Collection To collect the annotations, we employed Scale AI's RAPID (sca 2023) labeling tool and involved 10 randomly selected human annotators. These annotators had to qualify by passing a training course with a minimum accuracy of 85% on the example tasks to be selected for the final tagging task. The annotators are presented with an image and four responses about the image generated by InstructBLIP. Their task is to annotate segments of each sentence into one of the categories. An example annotation task is illustrated in Figure 1. Further details on dataset generation, diverse prompts, and examples can be found in the Appendix.
# Method | 2308.06394#17 | Detecting and Preventing Hallucinations in Large Vision Language Models |
2308.05960 | 18 |
layer for choosing the most relevant LAA to call. Then, the controller constructs the message for the selected LAA and handles the communication. After obtaining the response from the labor LAA, the controller parses it into an executable action and then interacts with the environment. Note that we can also design those labor LAAs to be think/plan agents. In this way, the self-think and plan workflows are also retained.
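A simplified sketch of this controller loop; the agent-selection heuristic, message format, and parsing here are illustrative and not the released BOLAA code:

```python
class LaborAgent:
    def __init__(self, name, action_type, llm):
        self.name, self.action_type, self.llm = name, action_type, llm

    def act(self, message):
        # Each labor LAA only emits its own action type, e.g. click[...] or search[...].
        return self.llm(f"[{self.action_type} agent] {message}")

class Controller:
    def __init__(self, agents):
        self.agents = {a.action_type: a for a in agents}

    def select_agent(self, task, observation):
        # Agents-selection layer; a trivial heuristic stands in for the LLM-based selector.
        return self.agents["search" if "search" in observation.lower() else "click"]

    def step(self, task, observation):
        agent = self.select_agent(task, observation)
        message = f"Task: {task}\nObservation: {observation}"   # controller builds the message
        response = agent.act(message)
        return parse_action(response)                            # parse into an executable action

def parse_action(response):
    return response.strip().lower()

dummy_llm = lambda prompt: "click[buy now]"                      # stand-in LLM backbone
controller = Controller([LaborAgent("clicker", "click", dummy_llm),
                         LaborAgent("searcher", "search", dummy_llm)])
print(controller.step("buy a tripod", "Page shows a Buy Now button"))
```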
4 EXPERIMENT
4.1 ENVIRONMENT BENCHMARK
We construct the evaluation benchmarks from two environments, i.e., the WebShop (Yao et al., preprint) and HotPotQA (Yang et al., 2018) with Wikipedia API usage (Yao et al., 2023a). | 2308.05960#18 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents |
2308.06391 | 18 | the type of task, such that two examples of the same type of task being solved are always shown. We also find that our reproduction of ReAct is worse than the original and attribute this to the gpt-3.5-turbo model being more conversational than text-davinci-002, and thus less likely to output valid actions as it favours fluency over following the templated action language.
We also measure the length of each successful episode and find that LLM-DP reaches the goal state faster on average (13.16 actions) versus ReAct (18.69 actions) and a random search strategy (15.02 actions). The Average Episode Length measures the number of actions taken in the environment, and thus how efficient the agent is.
# 6 Conclusion
The LLM-DP agent effectively integrates language understanding, symbolic planning, and state tracking in a dynamic environment. It uses the language model to understand tasks and scenes expressed in natural language, constructs and solves planning problems to decide on a course of action, and keeps track of the world state to adapt to changes and make informed decisions. This workflow enables the agent to perform complex tasks in the Alfworld environment, making it a promising approach for embodied tasks that involve language understanding, reasoning, and decision-making. | 2308.06391#18 | Dynamic Planning with a LLM |
2308.06394 | 18 | # Method
Multi-Modal Reward Model We implement a multi-modal reward model for detecting the presence of hallucinations generated by LVLMs. Specifically, we reuse the InstructBLIP weights and architecture, swapping the final embedding layer with a classification head. We do this as initializing the reward model from the generative model weights improves training robustness and reward
Figure 2: Label density histogram for the Inaccurate class. The x-axis represents the percentage of a sentence that is annotated as Inaccurate and the y-axis represents the frequency of such sentences in the dataset.
generalization in later RL (Zheng et al. 2023). InstructBLIP consists of an image encoder that extracts image features and a linear mapping layer that projects these features. These image features are passed to an instruction-aware attention layer, the QFormer, that attends instructions over the projected image features. The QFormer outputs are passed to a frozen pretrained decoder as soft prompts, prefixed to the instruction. For this paper, we choose to use Vicuna (vic 2023) as the frozen decoder, following the original InstructBLIP. | 2308.06394#18 | Detecting and Preventing Hallucinations in Large Vision Language Models |
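A schematic PyTorch module showing the idea of swapping the language-model head for a per-token classification head; the real reward model wraps InstructBLIP's vision encoder, QFormer, and frozen Vicuna decoder, which are stubbed here with a tiny transformer:

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Schematic reward head: per-token hidden states -> segment class logits."""
    def __init__(self, decoder, hidden_size, num_classes=2):
        super().__init__()
        self.decoder = decoder                               # stands in for the fine-tuned decoder
        self.head = nn.Linear(hidden_size, num_classes)      # replaces the final embedding/LM head

    def forward(self, token_embeddings):
        hidden = self.decoder(token_embeddings)              # (batch, seq_len, hidden)
        return self.head(hidden)                             # (batch, seq_len, num_classes)

# Toy stand-in decoder and random "token embeddings" to show the shapes involved.
decoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=1)
model = RewardModel(decoder, hidden_size=64, num_classes=2)
logits = model(torch.randn(2, 16, 64))
print(logits.shape)  # torch.Size([2, 16, 2])
```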
2308.05960 | 19 | WebShop is a recently proposed online shopping website environment with 1.18M real-world products and human instructions. Each instruction is associated with one ground-truth product, and contains attribute requirements, e.g. "I'm looking for a travel monopod camera tripod with quick release and easy to carry, and price lower than 130.00 dollars." This instruction includes 3 attribute requirements, i.e. the "quick release", "camera tripod" and "easy carry" attributes. We define the complexity of an instruction by the number of attribute requirements; thus, the instruction example above is of complexity 3. We sample 150 instructions for each complexity level. Since we have fewer than 150 instructions for complexity larger than 6, we only include instructions with complexity in {1, 2, . . . , 6}, which sums up to 900 tasks for benchmark evaluation in the WebShop environment. In the WebShop environment, an agent operates either SEARCH[QUERY] or CLICK[ELEMENT] actions to interact with the environment, which evaluates the interactive decision-making ability of the LAA. The observation from WebShop is a simplified web browser view, which includes the clickable buttons and associated page content. The LAA interacts with the WebShop environment as a web navigation agent. | 2308.05960#19 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents |
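A small sketch of the complexity measure and the two WebShop action formats described above; the helper names are illustrative only:

```python
def instruction_complexity(required_attributes):
    """Complexity of a WebShop instruction = number of attribute requirements."""
    return len(required_attributes)

def make_action(kind, argument):
    """Format the two WebShop action types used by the LAA."""
    assert kind in ("search", "click")
    return f"{kind}[{argument}]"

attrs = ["quick release", "camera tripod", "easy carry"]
print(instruction_complexity(attrs))                           # 3
print(make_action("search", "travel monopod camera tripod"))   # search[...]
print(make_action("click", "buy now"))                         # click[...]
```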
2308.06391 | 19 | LLM-DP offers a cost and efficiency trade-off between a wholly symbolic solution and an LLM-only model. The LLM's semantic knowledge of the world is leveraged to translate the problem into PDDL while guiding the search process through belief instantiation. We find that not only is LLM-DP cheaper, on a per-token comparison, but it is also faster and more successful at long-term planning in an embodied environment. LLM-DP validates the need for LLM research to incorporate specialised tools, such as PDDL solvers, in embodied agents to promote valid
Despite these promising results, numerous topics and unresolved issues remain open for future investigation. Key among these is devising strategies to encode the world model and belief, currently handled symbolically, and managing uncertain observations (particularly from an image model), along with propagating any uncertainty to the planner and Action Selector. We intentionally kept the Action Selector simple for our experiments, but future work may also explore different strategies to | 2308.06391#19 | Dynamic Planning with a LLM |
2308.06394 | 19 | We train reward models at sentence level and sub-sentence level densities. For each image-text pair, we run one forward pass similar to (Lightman et al. 2023), and set target class labels at the token concluding each segment, masking out all other indices in the segment. We optimize with cross entropy loss. We fine-tune the entire decoder and reward model head, while freezing the rest of the model. Ablations on model freezing and further hyperparameters as well as details on training can be found in the Appendix.
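A minimal sketch of this masking scheme, in which only the token concluding each annotated segment carries a class label and every other position is ignored by the cross-entropy; the class indices and segment boundaries are made up for illustration:

```python
import torch
import torch.nn.functional as F

IGNORE = -100  # positions with this target are ignored by cross_entropy

def build_targets(seq_len, segments):
    """segments: list of (end_token_index, class_label); all other tokens are masked out."""
    targets = torch.full((seq_len,), IGNORE, dtype=torch.long)
    for end_idx, label in segments:
        targets[end_idx] = label
    return targets

# One forward pass yields per-token logits; labels sit only at segment-final tokens.
seq_len, num_classes = 12, 2
logits = torch.randn(1, seq_len, num_classes)
targets = build_targets(seq_len, [(4, 0), (11, 1)]).unsqueeze(0)  # e.g. accurate, inaccurate
loss = F.cross_entropy(logits.view(-1, num_classes), targets.view(-1), ignore_index=IGNORE)
print(loss.item())
```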
# Sentence-level Reward Prediction
We condense the labeled sub-sentence segments in M-HalDetect into sentence-level segments for a more structured reward format: this makes it more straightforward to run rejection sampling and train with RL, without worrying about localizing proper segments. We identify these sentences using the Natural Language Toolkit (Bird, Klein, and Loper 2009). For each sentence, if there is any segment that is inaccurate, we label the entire sentence as inaccurate. While this may introduce some noise when converting partially inaccurate sentences, we see in Figure 2 that the frequency of such sentences is low. Furthermore, if a sentence has a segment with the "unsure" category, we merge that sentence into the inaccurate class. We experiment with two levels of label granularity with this dataset: | 2308.06394#19 | Detecting and Preventing Hallucinations in Large Vision Language Models |
2308.05960 | 20 | HotPotQA with Wikipedia API is another environment considered in this paper, which contains multi-hop question answering tasks that require reasoning over two or more Wikipedia passages. This simulation environment serves as a powerful tool for evaluating the multi-step planning and comprehension capabilities and information retrieval skills of AI models, ensuring they are proficient in sourcing reliable information from vast online resources. With its unique blend of real-world internet browsing scenarios and text analysis, HotPotQA is an invaluable asset for the advancement of augmented large language agent systems. In the HotPotQA environment, an agent has three types of actions, i.e., SEARCH[ENTITY], LOOKUP[STRING] and FINISH[ANSWER], to interact with the environment. The HotPotQA environment aims at evaluating the knowledge reasoning ability of LAAs. We randomly sample 100 questions from each of the easy, medium and hard levels, which constitutes the final 300 benchmark questions for evaluating LAAs.
4.2 EVALUATION METRICS | 2308.05960#20 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents |
2308.06391 | 20 | encourage self-reflection within the agent loop. For instance, if all plans prove invalid, beliefs may be updated, or it might indicate an incorrect domain definition. Such instances may require agents to interact with an instructor who can provide insights about action pre-conditions and effects. This direction could lead us from a static domain file towards an agent truly adaptable to new environments, fostering continual learning and adaptation.
# Acknowledgements
This work was supported in part by the UKRI Cen- tre for Doctoral Training in Natural Language Pro- cessing, funded by the UKRI (grant EP/S022481/1) at the University of Edinburgh, School of Infor- matics and School of Philosophy, Psychology & Language Sciences and by the UKRI-funded TAS Governance Node (grant number EP/V026607/1).
# References | 2308.06391#20 | Dynamic Planning with a LLM |
2308.06394 | 20 | • Binary Classification: Condense the Analysis and Accurate classes into the Accurate class. In this setting we have two classes: Accurate and Inaccurate.
• Ternary Classification: In this setting, we have three classes: Accurate, Inaccurate and Analysis.
The dataset distribution is visualized in the Appendix.
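A small sketch of this label condensation; the handling of sentences that mix Accurate and Analysis segments in the ternary setting is an assumption here, since only the Inaccurate/Unsure rule is spelled out above:

```python
def sentence_label(segment_labels, ternary=False):
    """Condense sub-sentence segment labels into one sentence-level label."""
    if "inaccurate" in segment_labels or "unsure" in segment_labels:
        return "inaccurate"          # any inaccurate (or unsure) segment flips the sentence
    if ternary and all(label == "analysis" for label in segment_labels):
        return "analysis"            # assumption: only all-analysis sentences stay Analysis
    return "accurate"                # binary setting folds Analysis into Accurate

print(sentence_label(["accurate", "inaccurate", "accurate"]))   # inaccurate
print(sentence_label(["analysis", "analysis"], ternary=True))   # analysis
print(sentence_label(["accurate", "analysis"]))                 # accurate
```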
Figure 3: Confusion Matrix comparison between Binary and Ternary Classifiers. The right plot represents the binary classifier labels derived from the ternary classifier by merging the Accurate and Analysis classes. | 2308.06394#20 | Detecting and Preventing Hallucinations in Large Vision Language Models |
2308.05960 | 21 | 4.2 EVALUATION METRICS
We mainly use the reward score in each environment to evaluate the performance of LAAs. In the WebShop environment, the reward is defined as the attribute overlap ratio between the bought item and the ground-truth item. In the HotPotQA environment, the reward is defined as the F1 score between the agent's answer and the ground-truth answer. Additionally, we report a Recall metric for the WebShop environment, defined as 1 if the ground-truth item is retrieved during a task session and 0 otherwise; Recall is averaged across all WebShop tasks.
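As a rough illustration of these metrics, a sketch is given below; the attribute sets, answer tokenization, and helper names are illustrative assumptions, not the benchmark's actual implementation.

```python
from collections import Counter

def webshop_reward(bought_attrs, target_attrs):
    # Attribute overlap ratio between the bought item and the ground-truth item.
    if not target_attrs:
        return 0.0
    return len(set(bought_attrs) & set(target_attrs)) / len(set(target_attrs))

def hotpotqa_f1(agent_answer: str, gold_answer: str) -> float:
    # Token-level F1 between the agent's answer and the ground-truth answer.
    pred, gold = agent_answer.lower().split(), gold_answer.lower().split()
    common = sum((Counter(pred) & Counter(gold)).values())
    if common == 0:
        return 0.0
    precision, recall = common / len(pred), common / len(gold)
    return 2 * precision * recall / (precision + recall)

def webshop_recall(retrieved_item_ids, target_item_id) -> float:
    # 1 if the ground-truth item was retrieved at any point in the session, else 0.
    return 1.0 if target_item_id in retrieved_item_ids else 0.0
```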
# 4.3 LLM UTILIZATION | 2308.05960#21 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06391 | 21 | # References
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
Yanda Chen, Ruiqi Zhong, Sheng Zha, George Karypis, and He He. 2022. Meta-learning via language model in-context tuning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 719–730, Dublin, Ireland. Association for Computational Linguistics. | 2308.06391#21 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
2308.06394 | 21 | Segment-level Reward Prediction We also train a finer-grained reward model that makes hallucination judgments on segments of sentences as opposed to entire sentences. This can provide a less noisy signal when training on annotations, especially with longer compound sentences and hallucinations isolated to small portions of a sentence. We train on this data in a similar fashion to the sentence-level rewards, by labeling the end token index of each span or segment of annotated text with its corresponding label. We then mask out every other index in the sequence (a minimal sketch of this labeling scheme follows this record's reference list below). As a baseline, we assume perfect localization of the annotation segments as an upper bound for the performance of this method. Future work can consider training a segment localization model in parallel with the reward model, to detect when hallucinations start and end. Since we do not do this, we cannot use this reward model for rejection sampling, and evaluate purely on classification metrics over the test set. Similar to the sentence-level reward prediction baselines, we also experiment with the binary and ternary variants of the segment-level reward prediction models. | 2308.06394#21 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
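A minimal sketch of the segment-level labeling described in the 2308.06394#21 chunk above: each annotated span contributes a class label at its end-token index and every other position is masked out. The ignore-index value, class ids, and helper names are assumptions, not the authors' released code.

```python
import torch

IGNORE_INDEX = -100  # assumed convention for positions that carry no label

def build_segment_labels(seq_len: int, segments) -> torch.Tensor:
    """segments: iterable of (end_token_index, class_id) pairs, one per annotated span."""
    labels = torch.full((seq_len,), IGNORE_INDEX, dtype=torch.long)
    for end_idx, class_id in segments:
        labels[end_idx] = class_id  # only the end token of each span is supervised
    return labels

# Example: a 12-token sequence with an Accurate span ending at token 5
# and an Inaccurate span ending at token 11 (class ids here are illustrative).
labels = build_segment_labels(12, [(5, 0), (11, 1)])
```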
2308.05960 | 22 | # 4.3 LLM UTILIZATION
The core component of LAA is the LLM backbone. We compare different LLMs with various choices of model size and context length. We report the results w.r.t. open LLM models such as fastchat-3b, vicuna-3b/13b/33b (Zheng et al., 2023), Llama-2-7b/13b/70b (Touvron et al., 2023), MPT-7b/30b (Team, 2023), xgen-8k-7b, longchat-16k-7b/13b and OpenAI API LLMs, including text-davinci-003, gpt-3.5-turbo and gpt-3.5-turbo-16k.
All Llama-2 models are the -chat-hf version.
Table 1: Average reward in the WebShop environment. Len denotes the maximum context length. Bold results denote the best results in one row, i.e. best LAA architecture w.r.t. one LLM. Underline results denote the best performance in one column, i.e. best LLM regarding one LAA architecture. | 2308.05960#22 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06391 | 22 | Yongchao Chen, Jacob Arkin, Yang Zhang, Nicholas A. Roy, and Chuchu Fan. 2023. Autotamp: Autoregressive task and motion planning with llms as translators and checkers. ArXiv, abs/2306.06531.
Richard E. Fikes and Nils J. Nilsson. 1971. Strips: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2(3):189–208.
Caelan Reed Garrett, Tomas Lozano-Perez, and Leslie Pack Kaelbling. 2018. Pddlstream: Integrating symbolic planners and blackbox samplers via optimistic adaptive planning. In International Conference on Automated Planning and Scheduling.
Shibo Hao, Yilan Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu. 2023. Reasoning with language model is planning with world model. ArXiv, abs/2305.14992.
Jörg Hoffmann and Bernhard Nebel. 2001. The FF planning system: Fast plan generation through heuristic search. Journal of Artificial Intelligence Research, 14:253–302. | 2308.06391#22 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
2308.06394 | 22 | Rejection Sampling We use the trained reward models to perform rejection sampling on the generations of InstructBLIP to promote selection of less hallucinatory responses. We do this on the passage level, computing reward scores for the whole generation at once. We calculate the reward score by averaging the non-hallucination log probabilities of each sentence. This represents the normalized log probability of the entire passage containing no hallucinations. We compute rejection sampling in a best-of-n and worst-of-n setting, for n = 16, 64, to study the ability of the reward model in selecting the best generations from InstructBLIP, and the variance in quality between generations.
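A sketch of this passage-level scoring for best-of-n selection, assuming the reward model exposes a per-sentence probability of being non-hallucinatory; function and variable names are illustrative, not the released code.

```python
import math

def passage_score(sentence_probs):
    # Average per-sentence log probability of the non-hallucination class;
    # this is the normalized log probability that the whole passage is hallucination-free.
    logps = [math.log(max(p, 1e-12)) for p in sentence_probs]
    return sum(logps) / len(logps)

def best_of_n(candidates, score_fn=passage_score):
    # candidates: list of (generation_text, per-sentence non-hallucination probabilities)
    return max(candidates, key=lambda c: score_fn(c[1]))[0]
```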
As we train two types of sentence-level reward models (binary and ternary, including the analysis class), we experiment with using both models for reward scoring. We found in our initial experiments that although the binary reward model is able to penalize hallucinations with low scores, it tends to give very high scores towards the analysis class. We theorize that it is much easier to detect non-hallucinogenic analysis | 2308.06394#22 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 23 | LLM Len. LAA Architecture fastchat-t5-3b vicuna-7b vicuna-13b vicuna-33b llama-2-7b llama-2-13b llama-2-70b mpt-7b-instruct mpt-30b-instruct xgen-8k-7b-instruct longchat-7b-16k longchat-13b-16k text-davinci-003 gpt-3.5-turbo gpt-3.5-turbo-16k 2k 2k 2k 2k 4k 4k 4k 8k 8k 8k 16k 16k 4k 4k 16k ZS 0.3971 0.0012 0.0340 0.1356 0.0042 0.0662 0.0122 0.0001 0.1664 0.0001 0.0165 0.0007 0.5292 0.5061 0.5657 ZST 0.2832 0.0002 0.0451 0.2049 0.0068 0.0420 0.0080 0.0001 0.1255 0.0015 0.0171 0.0007 0.5395 0.5057 0.5642 ReAct 0.3098 0.1033 0.1509 0.1887 0.1248 | 2308.05960#23 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06391 | 23 | Or Honovich, Uri Shaham, Samuel R. Bowman, and Omer Levy. 2022. Instruction induction: From few examples to natural language task descriptions. ArXiv, abs/2205.10782.
Chenxu Hu, Jie Fu, Chenzhuang Du, Simian Luo, Junbo Jake Zhao, and Hang Zhao. 2023. Chatdb: Augmenting llms with databases as their symbolic memory. ArXiv, abs/2306.03901.
Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Wenliang Dai, Andrea Madotto, and Pascale Fung. 2022. Survey of hallucination in natural language generation. ACM Computing Surveys, 55:1–38.
Nir Lipovetzky, Miquel Ramirez, Christian Muise, and Hector Geffner. 2014. Width and inference based planners: Siw, bfs (f), and probe. Proceedings of the 8th International Planning Competition (IPC-2014), page 43.
B. Liu, Yuqian Jiang, Xiaohan Zhang, Qian Liu, Shiqi Zhang, Joydeep Biswas, and Peter Stone. 2023. Llm+p: Empowering large language models with optimal planning proficiency. ArXiv, abs/2304.11477. | 2308.06391#23 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
2308.06394 | 23 | Reward Model Fine-grained Predictions Prompt: <image>
Describe the following image. Most Preferred Response Rating 1.041 ACCURATE 0.995 The image depicts a person riding a green motorcycle on a race track. ACCURATE 0.57 The motorcycle is leaning sharply as the rider takes a turn, making for an exciting and thrilling scene. ACCURATE 0.6123 The rider is wearing a helmet and gloves, ensuring their safety during the high-speed maneuver. INACCURATE 0.0266 In the background, there are several other motorcycles visible, some of which are closer to the viewer, while others are further away. INACCURATE 0.2382 A grassy area can also be seen in the background, providing a natural setting for the motorcycle race. ANALYSIS 0.8822 Overall, the image captures the excitement and adrenaline of a competitive motorcycle race, Least Preferred Response Rating 2.771 ACCURATE 0.9946 The image features a person riding a green motorcycle on a race track. INACCURATE 0.4622 The motorcycle is leaning to the left, and the person is wearing a helmet, gloves, and a backpack. ACCURATE | 2308.06394#23 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 24 | 0.0007 0.5395 0.5057 0.5642 ReAct 0.3098 0.1033 0.1509 0.1887 0.1248 0.2568 0.4426 0.0573 0.3119 0.0685 0.069 0.2373 0.5474 0.5383 0.4898 PlanAct 0.3837 0.0555 0.3120 0.3692 0.3156 0.4892 0.2979 0.0656 0.3060 0.1574 0.0917 0.3978 0.4751 0.4667 0.4565 PlanReAct BOLAA 0.5169 0.0604 0.5350 0.5612 0.4648 0.3716 0.5040 0.0632 0.4381 0.3697 0.1964 0.3205 0.6341 0.6567 0.6541 0.1507 0.0674 0.4127 0.3125 0.2761 0.4091 0.3770 0.1574 0.3198 0.1004 0.1322 0.4019 0.4912 0.5483 0.5607 | 2308.05960#24 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06391 | 24 | Drew McDermott. 2000. The 1998 ai planning systems competition. AI Magazine, 21(2):35–55.
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work? In Conference on Empirical Methods in Natural Language Processing.
OpenAI. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774. Computation and Language (cs.CL); Artificial Intelligence (cs.AI).
Shishir G. Patil, Tianjun Zhang, Xin Wang, and Joseph E. Gonzalez. 2023. Gorilla: Large language model connected with massive apis. arXiv preprint arXiv:2305.15334.
Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Lidén, Zhou Yu, Weizhu Chen, and Jianfeng Gao. 2023. Check your facts and try again: Improving large language models with external knowledge and automated feedback. ArXiv, abs/2302.12813. | 2308.06391#24 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
2308.06394 | 24 | INACCURATE 0.4622 The motorcycle is leaning to the left, and the person is wearing a helmet, gloves, and a backpack. ACCURATE 0.517 The motorcycle is positioned towards the right side of the image, and the person appears to be in the middle of a turn. INACCURATE 0.0143 There are two other motorcycles visible in the scene, one closer to the left side and the other closer to the right side of the image. INACCURATE 0.00735 These additional motorcycles add to the excitement of the race. INACCURATE 0.00241 In addition to the motorcycles, there are several chairs scattered throughout the scene, possibly belonging to spectators or crew members. | 2308.06394#24 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 25 | 4.4 DECISION-MAKING SIMULATION
In this section, we present and compare the decision-making performances of LAAs in the WebShop environment. The performance in terms of average reward is reported in Table 1. The agent prompts are constructed based on the maximum context length of the different LLM models. Regarding BOLAA, we devise one search LAA and one click LAA to generate search queries and click elements, respectively. We have the following observations:
• BOLAA performs the best compared with the other LAA architectures, especially when built on the high-performing LLMs. BOLAA is able to actively select the appropriate LAA and yield qualitative communication, which stabilizes the action generation. We observe that BOLAA, when paired with a 3b fastchat-t5 LLM, performs comparably to other LAA architectures with more powerful LLMs. The superiority of BOLAA indicates that orchestrating multiple smaller-sized LAAs is a better choice if the computing resources are limited. This further exemplifies the potential for fine-tuning multiple smaller-sized specialised LAAs rather than fine-tuning one large generalized LAA. | 2308.05960#25 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06391 | 25 | Miquel Ramirez, Nir Lipovetzky, and Christian Muise. 2015. Lightweight Automated Planning ToolKiT. http://lapkt.org/. Accessed: 2020.
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to use tools. ArXiv, abs/2302.04761.
Noah Shinn, Beck Labash, and Ashwin Gopinath. 2023. Reflexion: An autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366.
Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Côté, Yonatan Bisk, Adam Trischler, and Matthew J. Hausknecht. 2020. Alfworld: Aligning text and embodied environments for interactive learning. CoRR, abs/2010.03768.
Significant-Gravitas. 2023. An experimental open-source attempt to make gpt-4 fully autonomous. https://github.com/significant-gravitas/auto-gpt. Accessed: 2023-06-09. | 2308.06391#25 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
2308.06394 | 25 | Figure 4: Rejection sampling examples using the ternary reward model. Scores for sampled responses are computed as the average negative logprob per sentence of a hallucination.
over factual descriptions, and as a result the binary reward model scores are biased towards generations that contain more subjective analysis rather than objective descriptions. This is less of a problem with the ternary reward model, as analysis has been split into its own class. As we will discuss in the results, the ternary model's functionality is a superset of the binary model. For these reasons, we choose to use the ternary reward model for rejection sampling moving forward.
To study the robustness of our reward model and our dataset, we conduct rejection sampling on generations from other LVLMs, namely LLaVA and mPLUG-OWL. For these experiments, we reuse the reward model initialized from InstructBLIP.
$\mathcal{L}_{\mathrm{FDPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\mathbb{E}_{(x,y,c)\sim\mathcal{D}}\left[\log \sigma(\beta k)\right], \quad k = \begin{cases} -r & c = 0 \\ r & c = 1 \\ \infty & c > 1 \end{cases}, \quad r = \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)}$ | 2308.06394#25 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 26 | • Pairing the LLM with the optimal LAA architecture is crucial. For example, Llama-2-13b performs best under the PlanAct LAA arch while Llama-2-70b performs best under the BOLAA arch. Also, Longchat-13b-16K performs best when using PlanAct and PlanReAct, which may indicate the extraordinary planning ability of longchat-13b-16k models.
• Increasing the context length alone may not necessarily improve the LAA performances. For example, when comparing longchat-13b-16k with llama-2-13b models, the latter yields better performances though with less context length. By checking the running log of those LAAs, we observe more occurrences of hallucinated generation when the LAA runs for more steps, which in the end degrades the benefits of longer context. | 2308.05960#26 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06391 | 26 | Tom Silver, Varun Hariprasad, Reece S Shuttleworth, Nishanth Kumar, Tomás Lozano-Pérez, and Leslie Pack Kaelbling. 2022. Pddl planning with pretrained large language models. In NeurIPS 2022 Foundation Models for Decision Making Workshop.
Dídac Surís, Sachit Menon, and Carl Vondrick. 2023. Vipergpt: Visual inference via python execution for reasoning. ArXiv, abs/2303.08128.
Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi (Jim) Fan, and Anima Anandkumar. 2023a. Voyager: An open-ended embodied agent with large language models. ArXiv, abs/2305.16291.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Huai hsin Chi, and Denny Zhou. 2023b. Self-consistency improves chain of thought reasoning in language models. In International Conference on Learning Representations (ICLR). | 2308.06391#26 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
2308.06394 | 26 | with sample segments x, y, c being drawn from the dataset. Here, x is the entire input up until the start of the current segment, y is the generated segment, and c is the class of the current segment, with c = 1 being the preferred class, c = 0 being the dispreferred class, and all other classes being ignored. Since segments are non-overlapping, we can run a single forward pass for each sample to calculate the loss of all segments within the sample all at once.
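A minimal sketch of this per-segment FDPO objective (the class convention follows the chunk above: 0 = dispreferred, 1 = preferred, anything else = neutral). The function signature and the use of summed per-segment log-likelihoods are assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def fdpo_loss(policy_logps, ref_logps, classes, beta=0.5):
    # policy_logps / ref_logps: log-likelihood of each segment y given its prefix x
    # under the trainable policy and the frozen reference model (one value per segment).
    r = policy_logps - ref_logps                # r = log pi_theta(y|x) - log pi_ref(y|x)
    k = torch.where(classes == 1, r, -r)        # push preferred segments up, dispreferred down
    loss = -F.logsigmoid(beta * k)
    mask = classes <= 1                         # neutral segments (c > 1) are ignored
    return loss[mask].mean()
```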
Fine-grained Direct Preference Optimization While we train a reward model to show the potential of optimizing against hallucinations with RL, we also directly optimize InstructBLIP using FDPO to reduce hallucinations. Since M-HalDetect does not contain the traditional preference pairs used in DPO and RLHF, we explicitly segment each generation into sequences of preferred, dispreferred, and neutral chunks. We then reuse the DPO loss in increasing the likelihoods of preferred chunks while decreasing the likelihood of dispreferred chunks, each regularized by the original likelihood from the base model for the corresponding chunk, while neutral chunks are ignored. Similar to (Wu et al. 2023), this should give stronger signal during training in reducing hallucinatory generations as compared to using pairs of likelihoods over entire generations. | 2308.06394#26 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 27 | • A powerful LLM is able to generalize under the zeroshot LAA arch. The best performances of OpenAI API-based models are actually under the ZS and ZST arch. This indicates the great potential of developing a generic LAA with a powerful LLM. Actually, this is currently what open-source projects are working towards, directly calling the OpenAI API and tuning the zeroshot agent prompt instead. Our benchmark results quantitatively justify that using only a ZS LAA can already achieve comparable or even better performances than LAA arch with additional Plan or Self-think flow. However, for other less powerful LLMs, fewshot prompts are necessary for LAAs.
• Plan flow generally improves the performances when the agent is built on open-source LLMs. By comparing the performances of ReAct, PlanAct and PlanReAct, we observe a performance gain
Table 2: Average recall in the WebShop environment. Len denotes the maximum context length. Bold results denote the best results in one row, i.e. best LAA architecture w.r.t. one LLM. Underline results denote the best performance in one column, i.e. best LLM regarding one LAA architecture. | 2308.05960#27 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06391 | 27 | Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In NeurIPS.
Zhun Yang, Adam Ishay, and Joohyung Lee. 2023. Coupling large language models with logic programming for robust and general reasoning from text. In Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, pages 5186–5219. Association for Computational Linguistics.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. 2023. ReAct: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR).
Håkan LS Younes and Michael L Littman. 2004. PPDDL1.0: An extension to PDDL for expressing planning domains with probabilistic effects. Techn. Rep. CMU-CS-04-162, 2:99. | 2308.06391#27 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
2308.06394 | 27 | Recall the loss used in DPO, with πref as the reference model, πθ as the policy model, x being the input, yw being the preferred generation, and yl being the dispreferred generation.
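For reference, the standard DPO objective in this notation (the equation itself is not reproduced in this chunk) is:

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}})
  = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}\left[
      \log \sigma\!\left(
        \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
        - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
      \right)
    \right]
```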
This formulation allows us to categorize each class into positive, negative, or neutral signal, the latter of which will be ignored during training. We run ablations on including the analysis class as either a negative or neutral class when optimizing InstructBLIP with FDPO. We fine-tune only the QFormer and language head, keeping the rest of the model frozen. We use β = 0.5 for all our FDPO experiments, and train for a maximum of 5 epochs with lr = 10^-6, warmup ratio of 0.03, and a cosine scheduler. Ablations on model freezing can be found in the Appendix.
Evaluation Recent works in multi-modal LLMs (Liu et al. 2023b,a) sometimes use GPT-4 as a human proxy to qualitatively evaluate LM outputs. Specifically, GPT-4 is prompted to give a preference score to an LM generation, either as a stand-alone or compared against GPT-4's own generation. This metric enables automatic evaluation without depending on human evaluators. | 2308.06394#27 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 28 | LLM Len. LAA Architecture fastchat-t5-3b vicuna-7b vicuna-13b vicuna-33b llama-2-7b llama-2-13b llama-2-70b mpt-7b-instruct mpt-30b-instruct xgen-8k-7b-instruct longchat-7b-16k longchat-13b-16k text-davinci-003 gpt-3.5-turbo gpt-3.5-turbo-16k-0613 2k 2k 2k 2k 4k 4k 4k 8k 8k 8k 16k 16k 4k 4k 16k ZS 0.3533 0.0833 0.0867 0.3600 0.0678 0.2856 0.3344 0.0144 0.2973 0.0667 0.1344 0.0756 0.3800 0.3889 0.3856 ZST 0.3122 0.0500 0.0644 0.3411 0.0311 0.2211 0.3244 0.0322 0.3372 0.1400 0.1856 0.0867 0.3856 0.3756 0.3833 ReAct 0.3800 0.3600 0.3622 0.3822 | 2308.05960#28 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06391 | 28 | Method                    SR    EL
LLM-DP (n=3)              0.96  13.16
LLM-DP (n=3) - fallback   0.92  12.80
LLM-DP (n=5)              0.96  12.54
LLM-DP (n=5) - fallback   0.94  12.24
Table 2: We compare the average Success Rate (SR) and average Episode Length (EL) for different sampling sizes n and with or without a fallback to random sampling. The random sampling fallback affects the success rate, as the LLM sampler can more often sample n world states which are already satisfied. However, as n increases, it becomes more likely for the sampling procedure to find at least one plan, and therefore the SR increases when no fallback (- fallback) is used.
# A Prompts and Few-shot details
See Table 3 and Table 4 for LLM-DP prompts used.
# B ReAct
# B.1 Reproduction with Chat Model | 2308.06391#28 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
2308.06394 | 28 | $$\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\mathbb{E}_{(x,\, y_w,\, y_l)\sim\mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]$$
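Read literally, this loss can be computed from sequence-level log-probabilities under the policy and the frozen reference model. The minimal PyTorch sketch below is our own rendering of the formula above (with β = 0.5, the value used in the FDPO experiments), not code from the paper.

```python
# Minimal sketch of the standard DPO loss from precomputed sequence log-probs.
# Inputs: summed log-probs of the preferred (w) / dispreferred (l) responses under
# the policy and the frozen reference model. Function name is ours.
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_w, policy_logp_l, ref_logp_w, ref_logp_l, beta=0.5):
    margin = beta * ((policy_logp_w - ref_logp_w) - (policy_logp_l - ref_logp_l))
    return -F.logsigmoid(margin).mean()

# toy example with batch size 3
lw, ll = torch.randn(3), torch.randn(3)
rw, rl = torch.randn(3), torch.randn(3)
print(dpo_loss(lw, ll, rw, rl))
```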
Since we don't have preferences over pairs of generations, but spans of fine-grained preferences throughout each generation, our FDPO loss can be modeled as
However, this is plagued with systematic bias such as sensitivity to the ordering of responses (Wang et al. 2023). Furthermore, GPT-4's public API does not yet support image inputs. Recent multi-modal works instead pass image context in the form of captions and object bounding boxes. In several cases, this symbolic input cannot represent the image robustly and leads to incorrect evaluations. We performed a qualitative analysis of GPT-4's performance on LLaVA-150k's detail subset and noted that GPT-4 gave frequent inaccurate scores | 2308.06394#28 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 29 | 0.1856 0.0867 0.3856 0.3756 0.3833 ReAct 0.3800 0.3600 0.3622 0.3822 0.3744 0.3844 0.3789 0.3644 0.3333 0.3711 0.3644 0.3678 0.3767 0.3933 0.4011 PlanAct 0.3700 0.3233 0.3444 0.3733 0.3400 0.3278 0.3400 0.3200 0.3575 0.3400 0.3622 0.3467 0.3711 0.3789 0.3756 PlanReAct BOLAA 0.3867 0.3522 0.3700 0.3956 0.3856 0.4078 0.4011 0.3600 0.3900 0.3800 0.3811 0.3789 0.3956 0.3929 0.3933 0.3722 0.3278 0.2367 0.3567 0.3578 0.3500 0.3600 0.3400 0.3412 0.3278 0.3622 0.3471 0.3889 0.3867 0.3811 | 2308.05960#29 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06391 | 29 | # A Prompts and Few-shot details
See Table 3 and Table 4 for LLM-DP prompts used.
# B ReAct
# B.1 Reproduction with Chat Model
We slightly modify the 'system' prompt of the original ReAct (see Table 5) to guide the model away from its conversational tendencies. gpt-3.5-turbo apologises significantly more than the text-davinci-002 model, and we found that it would often get stuck in loops of apologising. We also modify the code so that we replace all generated instances of 'in' and 'on' with 'in/on' if the model did not generate it correctly, since Alfworld expects 'in/on' but gpt-3.5-turbo tends to generate only the correct preposition. Without these changes, ReAct would be significantly worse than our reported metric.
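For illustration, this post-processing amounts to a small string substitution; the helper below is a hypothetical sketch of that fix (our own function name and regex), not the authors' actual code.

```python
# Hedged sketch: normalise the preposition in a generated Alfworld action,
# turning "in"/"on" into the "in/on" form the environment expects.
import re

def normalise_preposition(action: str) -> str:
    if "in/on" in action:          # already in the expected form
        return action
    return re.sub(r"\b(in|on)\b", "in/on", action)

assert normalise_preposition("put mug 1 in coffeemachine 1") == "put mug 1 in/on coffeemachine 1"
```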
# C LLM-DP
# C.1 Generated Goal Examples
See Table 6 for examples of generated goals, both valid and invalid.
# C.2 Varying n
See Table 6 for results when varying n and the fallback setting. Fallback means that, when no plans are sampled successfully through the LLM, LLM-DP re-samples n plans randomly.
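A schematic rendering of this sampling-with-fallback loop is sketched below; the helper names (sample_world_state_with_llm, sample_world_state_randomly, plan) are placeholders standing in for LLM-DP's components, not real APIs.

```python
# Hedged sketch of "sample n candidates, fall back to random sampling if no plan is found".
from typing import Callable, List, Optional

def plan_with_fallback(n: int,
                       sample_world_state_with_llm: Callable[[], dict],
                       sample_world_state_randomly: Callable[[], dict],
                       plan: Callable[[dict], Optional[List[str]]],
                       use_fallback: bool = True) -> List[List[str]]:
    """Sample n candidate world states with the LLM and keep every plan the symbolic
    planner finds; if none is found, optionally re-sample n states at random."""
    candidates = [sample_world_state_with_llm() for _ in range(n)]
    plans = [p for p in (plan(ws) for ws in candidates) if p]
    if not plans and use_fallback:
        candidates = [sample_world_state_randomly() for _ in range(n)]
        plans = [p for p in (plan(ws) for ws in candidates) if p]
    return plans
```

Increasing n raises the chance that at least one sampled world state admits a plan, which matches the trend reported in Table 2.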
(define (domain alfred) (:predicates | 2308.06391#29 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
2308.06394 | 29 | Model         Type      Method                       RM Score   Human Score
InstructBLIP  Baseline  Baseline (T=0)               0.97       0.71
InstructBLIP  DPO       IA Finetune Qformer (T=0)    0.48       0.83
InstructBLIP  DPO       IA Finetune Qformer (T=1)    0.72       0.75
InstructBLIP  DPO       DA Finetune Qformer (T=0)    0.85       0.70
InstructBLIP  DPO       DA Finetune Qformer (T=1)    1.03       0.58
InstructBLIP  RS        Best of 64                   0.26       0.87
InstructBLIP  RS        Worst of 64                  1.76       0.53
InstructBLIP  RS        Best of 16                   0.36       0.82
LLaVA         Baseline  Baseline (T=0)               0.383      0.805
LLaVA         RS        Best of 16                   0.159      0.834
mPLUG-OWL     Baseline  Baseline (T=0)               1.26       0.476
mPLUG-OWL     RS        Best of 16                   0.595      0.707
Table 1: Results of reward model and human evaluation scores. The RM score is the average negative log probability of the passage not containing hallucinations, while the human evaluation score is the percentage of content that was truthful. A perfect RM score would be 0, and a perfect human evaluation score would be 1. | 2308.06394#29 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 30 | on most LLM cases when using plan flow. However, planning and thinking require the LLM to be able to reason in steps, which may be challenging for small-size LLMs. For example, fastchat-t5-3b performs above average under the ZS LAA arch, but its performance degrades by a large margin under the PlanReAct arch.
We also report the intermediate Recall performances for all LAAs, which are illustrated in Table 2. Recall is mainly related to the search action: a high recall indicates that the LAA is capable of generating a precise search query. High recall usually leads to better rewards, but the two are not tightly related. For example, Llama-2-70b has a recall of nearly 0.3344 with the ZS LAA, which is comparable to the best LAA; however, the reward of ZS LAA Llama-2-70b in Table 1 is only 0.0122. The reason is that generating the search query requires a different LLM ability from generating the correct click action, where the latter is more challenging. Another observation is that our proposed BOLAA generally performs the best on all LLMs, which indicates that separating the search agent from the click agent improves the accuracy of the search action, leading to a higher recall value. | 2308.05960#30 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06391 | 30 | (define (domain alfred) (:predicates
(isReceptacle ?o - object) ; true if the object is a receptacle
(atReceptacleLocation ?r - object) ; true if the robot is at the receptacle location
(inReceptacle ?o - object ?r - object) ; true if object ?o is in receptacle ?r
(openable ?r - object) ; true if a receptacle is openable
(opened ?r - object) ; true if a receptacle is opened
(isLight ?o - object) ; true if an object is light source
(examined ?o - object ?l - object) ; whether the object has been looked at with light
(holds ?o - object) ; object ?o is held by robot
(isClean ?o - object) ; true if the object has been cleaned in sink
(isHot ?o - object) ; true if the object has been heated up
(isCool ?o - object) ; true if the object has been cooled
(isSink ?o - object) ; true if the object is a sink
(isMicrowave ?o - object) ; true if the object is a microwave
(isFridge ?o - object) ; true if the object is a fridge
))
Table 3: System Prompt used by gpt-3.5-turbo for generating the :goal in LLM-DP | 2308.06391#30 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
2308.06394 | 30 | and explanations, failing to detect hallucinations while incorrectly penalizing correct generations. For this reason, we do not use GPT-4 for automatic evaluation of generation quality. To combat these limitations, we use human evaluation to evaluate the hallucination rates of our rejection sampling and DPO generations. Following the same labeling instructions as the M-HalDetect dataset, we annotate the generations into accurate, inaccurate, and analysis spans. For generations from our DPO model, we use temperature=1 and nucleus sampling. We apply this across 50 different images sourced from COCO's validation set, separate from the ones used in M-HalDetect, though we reuse instructions from the dataset.
A common trade-off when reducing hallucinations is a reduction in helpfulness. Consider, for example, a model that outputs nothing - it does not hallucinate, yet it is not helpful either. To avoid this potential bias in our evaluation, we measure the hallucination rate as the number of inaccurate words divided by the number of total words, excluding analysis segments, to calculate what percentage of the descriptive objective content contains hallucinations.
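A minimal sketch of this metric is shown below, assuming annotations come as (text, label) spans with labels in {accurate, inaccurate, analysis}; the span format and function name are our assumptions, not the released evaluation code.

```python
# Sketch of the hallucination-rate metric described above: inaccurate words divided by
# total words, with analysis spans excluded from both counts.
from typing import List, Tuple

def hallucination_rate(spans: List[Tuple[str, str]]) -> float:
    inaccurate = sum(len(text.split()) for text, label in spans if label == "inaccurate")
    total = sum(len(text.split()) for text, label in spans if label != "analysis")
    return inaccurate / total if total else 0.0

example = [("A man rides a motorcycle", "accurate"),
           ("next to three parked cars", "inaccurate"),
           ("suggesting a busy street", "analysis")]
print(hallucination_rate(example))  # 5 inaccurate words / 10 counted words = 0.5
```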
level and segment-level) using the development split of the M-HalDetect Dataset. We report Accuracy and F-1 Score for each of the training strategies. All models are initialized with pre-trained InstructBLIP weights, and the results are reported in Table 2. | 2308.06394#30 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 31 | LAA performance w.r.t. Complexity. After comparing the overall performances of those LAAs and LLMs, we investigate performance w.r.t. task complexity in more detail. Due to space limitations, we only report the performance of text-davinci-003 and llama-2-70b. The reward performance is illustrated in Figure 4. The BOLAA model consistently performs better on all complexity levels. We also observe degraded performance as task complexity increases, which follows intuition. Surprisingly, we find that increasing the task complexity beyond level 4 does not degrade performance further. The reason is that recall increases for higher-complexity tasks, as demonstrated in Figure 5: a high-complexity task instruction provides additional context information for the LAA, so the search action can be more specific and accurate at high complexity levels.
4.5 KNOWLEDGE REASONING SIMULATION | 2308.05960#31 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06391 | 31 | ))
Table 3: System Prompt used by gpt-3.5-turbo for generating the :goal in LLM-DP
Your task is to: put a clean plate in microwave.
(:goal (exists (?t - plate ?r - microwave)
  (and (inReceptacle ?t ?r) (isClean ?t))))
Your task is to: examine an alarmclock with the desklamp
(:goal (exists (?t - alarmclock ?l - desklamp)
  (and (examined ?t ?l) (holds ?t))))
Your task is to: put two cellphone in bed
(:goal (exists (?t1 - cellphone ?t2 - cellphone ?r - bed)
  (and (inReceptacle ?t1 ?r) (inReceptacle ?t2 ?r) (not (= ?t1 ?t2)))))
Table 4: Fixed Few-shot examples used by gpt-3.5-turbo for generating the :goal in LLM-DP
Interact with a household to solve a task. Only reply with > followed by the action to take or 'think'. Do not apologize. Follow the format of the two examples below.
Table 5: System Prompt used by gpt-3.5-turbo in our reproduction of ReAct | 2308.06391#31 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
2308.06394 | 31 | Although the binary version has higher accuracy and F1 than the ternary in both sentence- and segment-level applications, we see in Figure 3 that the ternary reward model actually performs about the same as the binary reward model if we reduce the ternary setting to a binary one. The ternary model additionally learns to separate the Accurate and Analysis classes, and we use it for the rejection sampling and reward scoring experiments moving forward.
Human Evaluation Figure 4 illustrates an example of rejection sampling using fine-grained feedback from the reward model. The reward model is able to accurately flag hallucinatory sentences which incorrectly claim the presence of other motorcycles and chairs. Furthermore, it is also able to flag sentences that generate analysis about non-existent objects.
# Results
# Reward Model Classification Metrics
Type     Density         Accuracy  F1 Score
Binary   Sentence Level  79.2      78.37
Ternary  Sentence Level  71.4      70.8
Binary   Segment Level   83.92     83.22
Ternary  Segment Level   77.2      76.93
# Table 2: Baseline Reward Model Results | 2308.06394#31 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 32 | 4.5 KNOWLEDGE REASONING SIMULATION
We benchmark on the HotPotQA environment to evaluate the multi-step reasoning ability of LAAs. Since the available search, lookup and finish operations are all related to knowledge reasoning in this environment and hard to separate, we leave the BOLAA arch for future work and only compare the performance of the other agent archs. The results are in Table 3. In general, the ReAct agent arch achieves the best performances, which can be interpreted in two ways. Firstly, a few-shot prompt is necessary to enable the action generation and reasoning ability of an LAA, especially when
Figure 4: The reward w.r.t. task complexity in WebShop ((a) text-davinci-003, (b) Llama-2-70b). Each bar represents one LAA.
Figure 5: The recall w.r.t. task complexity in WebShop ((a) text-davinci-003, (b) Llama-2-70b). Each bar represents one LAA. | 2308.05960#32 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06391 | 32 | Table 5: System Prompt used by gpt-3.5-turbo in our reproduction of ReAct
task: put some peppershaker on drawer.
Generated: (:goal (exists (?t - peppershaker ?r - drawer)
  (inReceptacle ?t ?r)))
VALID

task: put a clean mug in coffeemachine.
Generated: (:goal (exists (?t - mug ?r - coffeemachine)
  (and (inReceptacle ?t ?r) (isClean ?t))))
VALID

task: put two cd in safe.
Generated: (:goal (exists (?t1 - cd ?t2 - cd ?r - safe)
  (and (inReceptacle ?t1 ?r) (inReceptacle ?t2 ?r) (not (= ?t1 ?t2)))))
VALID

task: heat some mug and put it in coffeemachine.
Generated: (:goal (exists (?m - mug ?c - coffeemachine)
  (and (isReceptacle ?m) (isHot ?m) (inReceptacle ?m ?c))))
INVALID | 2308.06391#32 | Dynamic Planning with a LLM | While Large Language Models (LLMs) can solve many NLP tasks in zero-shot
settings, applications involving embodied agents remain problematic. In
particular, complex plans that require multi-step reasoning become difficult
and too costly as the context window grows. Planning requires understanding the
likely effects of one's actions and identifying whether the current environment
satisfies the goal state. While symbolic planners find optimal solutions
quickly, they require a complete and accurate representation of the planning
problem, severely limiting their use in practical scenarios. In contrast,
modern LLMs cope with noisy observations and high levels of uncertainty when
reasoning about a task. Our work presents LLM Dynamic Planner (LLM-DP): a
neuro-symbolic framework where an LLM works hand-in-hand with a traditional
planner to solve an embodied task. Given action-descriptions, LLM-DP solves
Alfworld faster and more efficiently than a naive LLM ReAct baseline. | http://arxiv.org/pdf/2308.06391 | Gautier Dagan, Frank Keller, Alex Lascarides | cs.CL, cs.RO | null | null | cs.CL | 20230811 | 20230811 | [
{
"id": "2303.11366"
},
{
"id": "2303.08774"
},
{
"id": "2305.15334"
}
] |
2308.06394 | 32 | # Table 2: Baseline Reward Model Results
We evaluate the multi-modal reward models (sentence-
We observe in Table 1 that rejection sampling significantly improves the factual rate of InstructBLIP's outputs. On the other hand, the worst generations of InstructBLIP can be extremely poor, with an almost 50% hallucination rate! We can see from both the human eval results and our reward model scores in Figure 6 that we get exponentially diminishing returns as the sample size increases.
Rejection Sampling We also see that rejection sampling with InstructBLIP manages to reduce hallucination rates for LLaVA and significantly for mPLUG-OWL. This shows that although M-HalDetect's image descriptions are sourced from InstructBLIP, they can still be used successfully in evaluating and improving other LVLMs. It is interesting to see LLaVA's baseline model performing so strongly - we suspect this is because LLaVA is trained specifically for generating
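Best-of-n rejection sampling as used here can be summarized by the sketch below: draw n candidate descriptions and keep the one with the lowest reward-model score (the average negative log-probability of being hallucination-free, so lower is better). The generate and rm_score callables are placeholders, not the actual InstructBLIP or reward-model interfaces.

```python
# Schematic best-of-n rejection sampling with a fine-grained reward model.
from typing import Callable, List

def best_of_n(generate: Callable[[], str],
              rm_score: Callable[[str], float],
              n: int = 16) -> str:
    """Draw n candidates and return the one the reward model considers least
    likely to contain hallucinations (lowest score, 0 being perfect)."""
    candidates: List[str] = [generate() for _ in range(n)]
    return min(candidates, key=rm_score)
```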
Figure 5: Human evaluation scores against reward scores for all human evaluated results.
[Figure 6 axes: Reward Model Score vs. N = number of generations per sample] | 2308.06394#32 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 33 | Figure 5: The recall w.r.t. task complexity in WebShop. Each bar represents one LAA.
experimenting with those small-size language models. Secondly, comparing ReAct, PlanAct, and PlanReAct, we conclude that the planning flow of an LAA hinders performance in the knowledge reasoning environment and tasks. The reason is that knowledge reasoning tasks require contextualized information to conduct reasoning, whereas the planning flow is executed ahead of interactions; the generated plans therefore tend to lead to more hallucination by the LAA. Thirdly, for this knowledge reasoning task, model size matters much more than context length: larger models have better reasoning abilities and thus perform better. Additionally, the superior reasoning ability of the OpenAI gpt-3.5 models is again verified. We also observe that Llama-2-70b performs best among all open-source LLMs, which suggests that future fine-tuning could be applied to Llama-2 models. | 2308.05960#33 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06394 | 33 | Figure 5: Human evaluation scores against reward scores for all human evaluated results.
Figure 6: Reward model score means and variances as n increases in best-of-n rejection sampling. We see diminishing returns as we increase n.
detailed descriptions, whereas InstructBLIP and mPLUG-OWL are more general models with a wide range of task applicability.
Additionally, we study the correlation between reward model and human evaluation scores. In Figure 5, we see that across all human evaluated results, there is a clear and strong correlation between our reward model scores and human accuracy scores. Although this is by no means a robust replacement for human annotations, it shows the potential of training models as specific evaluators for hallucinations. Despite the noisiness, such a model could be used for early hyper-parameter selection, being much more cost-effective than human evaluation. | 2308.06394#33 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 34 | LAA performance w.r.t. Complexity. Since we have easy-, medium-, and hard-level tasks, we compare the performance of Llama-2-70b and text-davinci-003 across the different levels of complexity, as illustrated in Figure 6. We observe degrading performance as task complexity increases. In HotPotQA, hardness is defined by the number of question-answer hops; hard questions therefore require more context understanding and reasoning ability from the LAA. Though the OpenAI text-davinci-003 model consistently outperforms Llama-2-70b on all levels of complexity, the margin is smaller on hard questions. Since hard questions require more reasoning effort, we can conclude that Llama-2-70b possesses reasoning ability comparable to text-davinci-003.
Table 3: Average reward in the HotPotQA environment. Len denotes the maximum context length. Bold results denote the best results in one row, i.e. best LAA architecture w.r.t. one LLM. Underline results denote the best performance in one column, i.e. best LLM regarding one LAA architecture. | 2308.05960#34 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06394 | 34 | Fine-Grained DPO We evaluate two variations of FDPO across the three classes - one that ignores analysis (IA), and one that disprefers analysis (DA), merging it with the inaccurate class. We see in Table 1 that marking analysis as a negative class does not impact hallucination rates in a significant way when training with FDPO, and may actually worsen rates at higher temperatures. We suspect that this may be because InstructBLIP's generations often have the last sentence
being subjective analysis of the image, followed by an end-of-sequence token. Pushing down the likelihood of generating this sentence makes it more likely that the generation is lengthened, potentially inducing additional hallucinations as the model runs out of accurate content to describe. | 2308.06394#34 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 35 | LLM Len. LAA Architecture fastchat-t5-3b vicuna-7b vicuna-13b vicuna-33b llama-2-7b llama-2-13b llama-2-70b mpt-7b-instruct mpt-30b-instruct xgen-8k-7b-instruct longchat-7b-16k longchat-13b-16k text-davinci-003 gpt-3.5-turbo gpt-3.5-turbo-16k-0613 2k 2k 2k 2k 4k 4k 4k 8k 8k 8k 16k 16k 4k 4k 16k ZS 0.0252 0.1339 0.1541 0.2180 0.0395 0.1731 0.2809 0.0982 0.1562 0.1502 0.0791 0.1083 0.3430 0.3340 0.3027 ZST 0.0067 0.0797 0.0910 0.2223 0.0207 0.2313 0.3207 0.0483 0.2141 0.1244 0.0672 0.0562 0.3304 0.3254 0.2264 ReAct 0.0692 0.0318 0.2637 0.2602 | 2308.05960#35 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06394 | 35 | On the other hand, we see that ignoring analysis in FDPO training almost cuts hallucination rates in half. Even when sampling at high temperature, generations on average still contain fewer hallucinations than the baseline InstructBLIP model sampled at temperature 0, where it has the least propensity to hallucinate. This is slightly better than best-of-16 rejection sampling, and almost as good as best-of-64 rejection sampling. This performance gap is to be expected, as rejection sampling can generalize over the entire set of possible model generations, whereas FDPO is limited to optimizing over the data it sees during training. There is a trade-off, though: best-of-n rejection sampling is slower at inference by a factor of n. (An illustrative best-of-n sketch follows this record.) | 2308.06394#35 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
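The inference-cost trade-off noted in the record above follows directly from the structure of best-of-n rejection sampling: draw n candidate responses from the generator, score each with the reward model, and keep the highest-scoring one, so both generation and scoring cost scale linearly with n. The sketch below is a generic illustration of that procedure; generate_caption and score_reward are assumed placeholder callables standing in for an LVLM sampler and a trained fine-grained reward model, not APIs from the paper.

```python
# Minimal sketch of best-of-n rejection sampling against a reward model.
# `generate_caption` and `score_reward` are placeholder callables (assumptions),
# standing in for an LVLM sampler and a trained reward model respectively.

from typing import Callable, List

def best_of_n(
    image: object,
    prompt: str,
    n: int,
    generate_caption: Callable[[object, str, float], str],
    score_reward: Callable[[object, str], float],
    temperature: float = 1.0,
) -> str:
    """Sample n candidates and return the one the reward model scores highest.

    Inference cost grows linearly with n (n generator passes plus n reward
    scores per image), which is the slowdown factor discussed above.
    """
    candidates: List[str] = [
        generate_caption(image, prompt, temperature) for _ in range(n)
    ]
    scores: List[float] = [score_reward(image, c) for c in candidates]
    best_index = max(range(n), key=lambda i: scores[i])
    return candidates[best_index]
```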
2308.06394 | 36 | Conclusion We introduce M-HalDetect, a novel multi-modal fine-grained hallucination detection dataset for benchmarking and training LVLMs to produce more truthful generations. We train fine-grained multi-modal reward models to perform rejection sampling against InstructBLIP. We introduce FDPO to optimize InstructBLIP directly on M-HalDetect, avoiding the need for preference pairs. Both methods significantly reduce InstructBLIP's hallucination rate, extending their effectiveness to the multi-modal domain and demonstrating the usefulness of M-HalDetect in catching and reducing hallucinations. We show this dataset generalizes across multiple LVLMs, successfully reducing the hallucination rates of LLaVA and mPLUG-OWL.
While we show strong performance with rejection sampling, it is prohibitively slow for inference in real-world use cases. The next step would be to optimize a generative model, perhaps InstructBLIP, using reinforcement learning with our trained reward models to create a higher-quality LVLM for instruction-aware VQA. | 2308.06394#36 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 37 | [Figure 6: The reward w.r.t. complexity level in HotPotQA. Each bar represents one LAA. Panels: (a) text-davinci-003, (b) Llama-2-70b.]
# 5 CONCLUSION AND FUTURE WORK
In this paper, we systematically investigate the performance of various LAA architectures paired with different LLM backbones. We also provide a novel method for orchestrating multiple agents, i.e. BOLAA. The benchmarking results provide experimental justification for the LAA investigation and verify the potential benefits of the BOLAA architecture. During the investigation, we also identify the challenge of designing the BOLAA architecture for environments with compounding actions. In the future, we will explore whether we can harness LLMs in the controller such that selection of and communication with labor agents are also fully autonomous. We will continue developing more LAA architectures and include more LLMs and environments for evaluation. (An illustrative controller/labor-agent sketch follows this record.)
# REFERENCES
# Harrison Chase. Langchain. https://github.com/hwchase17/langchain, 2023. | 2308.05960#37 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
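The conclusion in the record above refers to BOLAA's division of labor: a controller selects and communicates with multiple labor agents, each of which emits only one type of action. The sketch below is an illustrative skeleton of that orchestration pattern under assumed names; it is not the released BOLAA implementation (https://github.com/salesforce/BOLAA).

```python
# Illustrative skeleton of a controller orchestrating specialized "labor" agents,
# in the spirit of BOLAA as described above. All class, method, and parameter
# names are assumptions for illustration, not the released implementation.

from typing import Callable, Dict

class LaborAgent:
    """Wraps one LLM prompt specialized for a single action type (e.g. 'search' or 'click')."""

    def __init__(self, action_type: str, llm: Callable[[str], str]):
        self.action_type = action_type
        self.llm = llm

    def act(self, context: str) -> str:
        # The labor agent only ever proposes actions of its own type.
        return self.llm(f"Action type: {self.action_type}\nContext: {context}\nNext action:")

class Controller:
    """Selects a labor agent for the current step and relays its action to the environment."""

    def __init__(self, agents: Dict[str, LaborAgent]):
        self.agents = agents

    def step(self, action_type: str, context: str) -> str:
        if action_type not in self.agents:
            raise KeyError(f"no labor agent registered for action type: {action_type}")
        return self.agents[action_type].act(context)
```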
2308.06394 | 37 | A limitation of modern-day approaches to training large models with fine-grained feedback is that training typically takes place over multiple iterations of model training and feedback collection. This ensures the final model is more robustly aligned with the high-level training objective. In this paper, we only perform one cycle of collecting response feedback and training. Indeed, when analyzing some of the responses, we can see hints of overfitting to our training objective - image descriptions are slightly more generic than before, and the precision of descriptions may have gone down. Future work can extend our dataset and methods to also account for descriptiveness and informativeness, training multiple reward models to optimize a more robust final model. | 2308.06394#37 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 38 |
# REFERENCES
# Harrison Chase. Langchain. https://github.com/hwchase17/langchain, 2023.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, and Yu Su. Mind2web: Towards a generalist agent for the web. arXiv preprint arXiv:2306.06070, 2023.
Significant Gravitas. Autogpt. https://github.com/Significant-Gravitas/Auto-GPT, 2023.
Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, and Aleksandra Faust. A real-world webagent with planning, long context understanding, and program synthesis. arXiv preprint arXiv:2307.12856, 2023. | 2308.05960#38 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06394 | 38 | References 2023. Scale AI Rapid Portal. https://scale.com/docs/how-rapid-works. Accessed: 2023-07-23. 2023. Vicuna. https://github.com/lm-sys/FastChat. Accessed: 2023-07-23. Alayrac, J.-B.; Donahue, J.; Luc, P.; Miech, A.; Barr, I.; Hasson, Y.; Lenc, K.; Mensch, A.; Millican, K.; Reynolds, M.; et al. 2022. Flamingo: a visual language model for few-shot learning. Advances in Neural Information Processing Systems, 35: 23716–23736. Bai, Y.; Jones, A.; Ndousse, K.; Askell, A.; Chen, A.; DasSarma, N.; Drain, D.; Fort, S.; Ganguli, D.; Henighan, T.; et al. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862. Bang, Y.; Cahyawijaya, S.; Lee, N.; Dai, W.; Su, D.; Wilie, B.; Lovenia, H.; Ji, | 2308.06394#38 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 39 | Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu. Reasoning with language model is planning with world model. arXiv preprint arXiv:2305.14992, 2023.
Cheng-Yu Hsieh, Si-An Chen, Chun-Liang Li, Yasuhisa Fujii, Alexander Ratner, Chen-Yu Lee, Ranjay Krishna, and Tomas Pfister. Tool documentation enables zero-shot tool-usage with large language models. arXiv preprint arXiv:2308.00675, 2023.
Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In International Conference on Machine Learning, pp. 9118–9147. PMLR, 2022.
Geunwoo Kim, Pierre Baldi, and Stephen McAleer. Language models can solve computer tasks. arXiv preprint arXiv:2303.17491, 2023.
Evan Zheran Liu, Kelvin Guu, Panupong Pasupat, Tianlin Shi, and Percy Liang. Reinforcement learning on web interfaces using workflow-guided exploration. arXiv preprint arXiv:1802.08802, 2018. | 2308.05960#39 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06394 | 39 | Bang, Y.; Cahyawijaya, S.; Lee, N.; Dai, W.; Su, D.; Wilie, B.; Lovenia, H.; Ji, Z.; Yu, T.; Chung, W.; Do, Q. V.; Xu, Y.; and Fung, P. 2023. A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity. arXiv:2302.04023. Bird, S.; Klein, E.; and Loper, E. 2009. Natural language processing with Python: analyzing text with the natural language toolkit. O'Reilly Media, Inc. Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, | 2308.06394#39 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 40 | Xiao Liu, Hao Yu, Hanchen Zhang, Yifan Xu, Xuanyu Lei, Hanyu Lai, Yu Gu, Hangliang Ding, Kaiwen Men, Kejuan Yang, Shudan Zhang, Xiang Deng, Aohan Zeng, Zhengxiao Du, Chenhui Zhang, Sheng Shen, Tianjun Zhang, Yu Su, Huan Sun, Minlie Huang, Yuxiao Dong, and Jie Tang. Agentbench: Evaluating llms as agents, 2023.
Aman Madaan, Alexander Shypula, Uri Alon, Milad Hashemi, Parthasarathy Ranganathan, Yiming Yang, Graham Neubig, and Amir Yazdanbakhsh. Learning performance-improving code edits. arXiv preprint arXiv:2302.07867, 2023a.
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651, 2023b. | 2308.05960#40 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06394 | 40 | D.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; and Amodei, D. 2020. Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems, 33: 1877–1901. Dai, W.; Li, J.; Li, D.; Tiong, A. M. H.; Zhao, J.; Wang, W.; Li, B.; Fung, P.; and Hoi, S. 2023. InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning. arXiv:2305.06500. Engstrom, L.; Ilyas, A.; Santurkar, S.; Tsipras, D.; Janoos, F.; Rudolph, L.; and Madry, A. 2020. Implementation Matters in Deep Policy Gradients: A Case Study on PPO and TRPO. CoRR, abs/2005.12729. Ganesan, K. 2018. ROUGE 2.0: Updated and Improved Measures for Evaluation of Summarization | 2308.06394#40 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 41 | Rithesh Murthy, Shelby Heinecke, Juan Carlos Niebles, Zhiwei Liu, Le Xue, Weiran Yao, Yihao Feng, Zeyuan Chen, Akash Gokul, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, and Silvio Savarese. Rex: Rapid exploration and exploitation for ai agents, 2023.
# Yohei Nakajima. Babyagi. https://github.com/yoheinakajima/babyagi, 2023.
Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021.
OpenAI. Gpt-4 technical report. ArXiv, 2023.
Joon Sung Park, Joseph C O'Brien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442, 2023. | 2308.05960#41 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06394 | 41 | abs/2005.12729. Ganesan, K. 2018. ROUGE 2.0: Updated and Improved Measures for Evaluation of Summarization Tasks. arXiv:1803.01937. Ji, Z.; Lee, N.; Frieske, R.; Yu, T.; Su, D.; Xu, Y.; Ishii, E.; Bang, Y. J.; Madotto, A.; and Fung, P. 2023. Survey of Hallucination in Natural Language Generation. ACM Computing Surveys, 55(12): 1–38. Li, C. 2023. Large Multimodal Models: Notes on CVPR 2023 Tutorial. arXiv preprint arXiv:2306.14895. Li, Y.; Du, Y.; Zhou, K.; Wang, J.; Zhao, W. X.; and Wen, J.-R. 2023. Evaluating object hallucination in large vision-language models. arXiv preprint arXiv:2305.10355. Lightman, H.; Kosaraju, V.; Burda, Y.; Edwards, H.; Baker, B.; Lee, T.; Leike, J.; Schulman, J.; Sutskever, I.; and Cobbe, K. 2023. | 2308.06394#41 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 42 | Shishir G Patil, Tianjun Zhang, Xin Wang, and Joseph E Gonzalez. Gorilla: Large language model connected with massive apis. arXiv preprint arXiv:2305.15334, 2023.
Debjit Paul, Mete Ismayilzada, Maxime Peyrard, Beatriz Borges, Antoine Bosselut, Robert West, and Boi Faltings. Refiner: Reasoning feedback on intermediate representations. arXiv preprint arXiv:2304.01904, 2023.
Yujia Qin, Shihao Liang, Yining Ye, Kunlun Zhu, Lan Yan, Yaxi Lu, Yankai Lin, Xin Cong, Xiangru Tang, Bill Qian, et al. Toolllm: Facilitating large language models to master 16000+ real-world apis. arXiv preprint arXiv:2307.16789, 2023.
Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. arXiv preprint arXiv:2303.17580, 2023. | 2308.05960#42 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.05960 | 43 | Tianlin Shi, Andrej Karpathy, Linxi Fan, Jonathan Hernandez, and Percy Liang. World of bits: An open-domain platform for web-based agents. In International Conference on Machine Learning, pp. 3135–3144. PMLR, 2017.
Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning. arXiv preprint arXiv:2303.11366, 2023.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca, 2023.
MosaicML NLP Team. Introducing mpt-7b: A new standard for open-source, commercially usable llms, 2023. URL www.mosaicml.com/blog/mpt-7b. Accessed: 2023-05-05. | 2308.05960#43 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06394 | 43 | Lin, T.; Maire, M.; Belongie, S. J.; Bourdev, L. D.; Girshick, R. B.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft COCO: Common Objects in Context. CoRR, abs/1405.0312.
Liu, F.; Lin, K.; Li, L.; Wang, J.; Yacoob, Y.; and Wang, L. 2023a. Aligning Large Multi-Modal Model with Robust Instruction Tuning. arXiv preprint arXiv:2306.14565.
Liu, H.; Li, C.; Wu, Q.; and Lee, Y. J. 2023b. Visual instruction tuning. arXiv preprint arXiv:2304.08485. | 2308.06394#43 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.06394 | 44 | Nakano, R.; Hilton, J.; Balaji, S.; Wu, J.; Ouyang, L.; Kim, C.; Hesse, C.; Jain, S.; Kosaraju, V.; Saunders, W.; Jiang, X.; Cobbe, K.; Eloundou, T.; Krueger, G.; Button, K.; Knight, M.; Chess, B.; and Schulman, J. 2021. WebGPT: Browser-assisted question-answering with human feedback. CoRR, abs/2112.09332.
OpenAI. 2023. GPT-4 Technical Report.
Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C.; Mishkin, P.; Zhang, C.; Agarwal, S.; Slama, K.; Ray, A.; et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730–27744.
Papineni, K.; Roukos, S.; Ward, T.; and Zhu, W.-J. 2002. Bleu: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, 311–318. Association for Computational Linguistics. | 2308.06394#44 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 45 | Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models, 2023. | 2308.05960#45 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
2308.06394 | 45 | Rafailov, R.; Sharma, A.; Mitchell, E.; Ermon, S.; Manning, C. D.; and Finn, C. 2023. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290.
Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; and Klimov, O. 2017. Proximal Policy Optimization Algorithms. CoRR, abs/1707.06347.
Stiennon, N.; Ouyang, L.; Wu, J.; Ziegler, D. M.; Lowe, R.; Voss, C.; Radford, A.; Amodei, D.; and Christiano, P. F. 2020. Learning to summarize from human feedback. CoRR, abs/2009.01325. | 2308.06394#45 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |
2308.05960 | 46 | Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022.
Binfeng Xu, Zhiyuan Peng, Bowen Lei, Subhabrata Mukherjee, Yuchen Liu, and Dongkuan Xu. Rewoo: Decoupling reasoning from observations for efficient augmented language models. arXiv preprint arXiv:2305.18323, 2023.
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2018.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. ReAct: Synergizing reasoning and acting in language models. In International Conference on Learning Representations (ICLR), 2023a.
| 2308.05960#46 | BOLAA: Benchmarking and Orchestrating LLM-augmented Autonomous Agents | The massive successes of large language models (LLMs) encourage the emerging
exploration of LLM-augmented Autonomous Agents (LAAs). An LAA is able to
generate actions with its core LLM and interact with environments, which
facilitates the ability to resolve complex tasks by conditioning on past
interactions such as observations and actions. Since the investigation of LAA
is still very recent, limited explorations are available. Therefore, we provide
a comprehensive comparison of LAA in terms of both agent architectures and LLM
backbones. Additionally, we propose a new strategy to orchestrate multiple LAAs
such that each labor LAA focuses on one type of action, \textit{i.e.} BOLAA,
where a controller manages the communication among multiple agents. We conduct
simulations on both decision-making and multi-step reasoning environments,
which comprehensively justify the capacity of LAAs. Our performance results
provide quantitative suggestions for designing LAA architectures and the
optimal choice of LLMs, as well as the compatibility of both. We release our
implementation code of LAAs to the public at
\url{https://github.com/salesforce/BOLAA}. | http://arxiv.org/pdf/2308.05960 | Zhiwei Liu, Weiran Yao, Jianguo Zhang, Le Xue, Shelby Heinecke, Rithesh Murthy, Yihao Feng, Zeyuan Chen, Juan Carlos Niebles, Devansh Arpit, Ran Xu, Phil Mui, Huan Wang, Caiming Xiong, Silvio Savarese | cs.AI | Preprint | null | cs.AI | 20230811 | 20230811 | [
{
"id": "2204.02311"
},
{
"id": "2112.09332"
},
{
"id": "2307.13854"
},
{
"id": "2304.01904"
},
{
"id": "2303.17491"
},
{
"id": "2201.11903"
},
{
"id": "1802.08802"
},
{
"id": "2305.14992"
},
{
"id": "2306.06070"
},
{
"id": "2308.00675"
},
{
"id": "2302.07867"
},
{
"id": "2305.18323"
},
{
"id": "2307.12856"
},
{
"id": "2303.17580"
},
{
"id": "2305.15334"
},
{
"id": "2307.16789"
},
{
"id": "2304.03442"
},
{
"id": "2303.11366"
},
{
"id": "2303.17651"
}
] |
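The BOLAA summary in the row above describes a controller that manages multiple labor agents, each focused on one action type. Below is a minimal illustrative sketch of that dispatch pattern under the assumption of toy string-based agents; it is not the actual implementation from the linked repository.

```python
# A minimal sketch of the orchestration pattern described above: a controller
# dispatches each step to a "labor" agent responsible for one action type.
# The agents below are toy string functions, not the actual BOLAA agents.
from typing import Callable, Dict

class Controller:
    def __init__(self) -> None:
        self.labor_agents: Dict[str, Callable[[str], str]] = {}

    def register(self, action_type: str, agent: Callable[[str], str]) -> None:
        """Each labor agent handles exactly one action type."""
        self.labor_agents[action_type] = agent

    def step(self, action_type: str, observation: str) -> str:
        """Route the current observation to the matching labor agent."""
        agent = self.labor_agents.get(action_type)
        if agent is None:
            raise ValueError(f"no labor agent registered for '{action_type}'")
        return agent(observation)

if __name__ == "__main__":
    controller = Controller()
    controller.register("search", lambda obs: f"search[{obs}]")
    controller.register("click", lambda obs: f"click[{obs}]")
    print(controller.step("search", "red couch under $100"))
    print(controller.step("click", "first result"))
```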
2308.06394 | 46 | Touvron, H.; Martin, L.; Stone, K.; Albert, P.; Almahairi, A.; Babaei, Y.; Bashlykov, N.; Batra, S.; Bhargava, P.; Bhosale, S.; Bikel, D.; Blecher, L.; Ferrer, C. C.; Chen, M.; Cucurull, G.; Esiobu, D.; Fernandes, J.; Fu, J.; Fu, W.; Fuller, B.; Gao, C.; Goswami, V.; Goyal, N.; Hartshorn, A.; Hosseini, S.; Hou, R.; Inan, H.; Kardas, M.; Kerkez, V.; Khabsa, M.; Kloumann, I.; Korenev, A.; Koura, P. S.; Lachaux, M.-A.; Lavril, T.; Lee, J.; Liskovich, D.; Lu, Y.; Mao, Y.; Martinet, X.; Mihaylov, T.; Mishra, P.; Molybog, I.; Nie, Y.; Poulton, A.; Reizenstein, J.; Rungta, R.; Saladi, K.; Schelten, A.; Silva, R.; Smith, E. | 2308.06394#46 | Detecting and Preventing Hallucinations in Large Vision Language Models | Instruction tuned Large Vision Language Models (LVLMs) have significantly
advanced in generalizing across a diverse set of multi-modal tasks, especially
for Visual Question Answering (VQA). However, generating detailed responses
that are visually grounded is still a challenging task for these models. We
find that even the current state-of-the-art LVLMs (InstructBLIP) still contain
a staggering 30 percent of the hallucinatory text in the form of non-existent
objects, unfaithful descriptions, and inaccurate relationships. To address
this, we introduce M-HalDetect, a (M)ultimodal (Hal)lucination (Detect)ion
Dataset that can be used to train and benchmark models for hallucination
detection and prevention. M-HalDetect consists of 16k fine-grained annotations
on VQA examples, making it the first comprehensive multi-modal hallucination
detection dataset for detailed image descriptions. Unlike previous work that
only consider object hallucination, we additionally annotate both entity
descriptions and relationships that are unfaithful. To demonstrate the
potential of this dataset for hallucination prevention, we optimize
InstructBLIP through our novel Fine-grained Direct Preference Optimization
(FDPO). We also train fine-grained multi-modal reward models from InstructBLIP
and evaluate their effectiveness with best-of-n rejection sampling. We perform
human evaluation on both FDPO and rejection sampling, and find that they reduce
hallucination rates in InstructBLIP by 41% and 55% respectively. We also find
that our reward model generalizes to other multi-modal models, reducing
hallucinations in LLaVA and mPLUG-OWL by 15% and 57% respectively, and has
strong correlation with human evaluated accuracy scores. | http://arxiv.org/pdf/2308.06394 | Anisha Gunjal, Jihan Yin, Erhan Bas | cs.CV, cs.LG | preprint | null | cs.CV | 20230811 | 20230818 | [
{
"id": "2302.04023"
},
{
"id": "2305.17926"
},
{
"id": "2307.04964"
},
{
"id": "2305.20050"
},
{
"id": "2306.14895"
},
{
"id": "1803.01937"
},
{
"id": "2305.18290"
},
{
"id": "2204.05862"
},
{
"id": "2306.14565"
},
{
"id": "2305.06500"
},
{
"id": "2306.01693"
},
{
"id": "2304.08485"
},
{
"id": "2305.10355"
}
] |