doi (stringlengths 10–10) | chunk-id (int64 0–936) | chunk (stringlengths 401–2.02k) | id (stringlengths 12–14) | title (stringlengths 8–162) | summary (stringlengths 228–1.92k) | source (stringlengths 31–31) | authors (stringlengths 7–6.97k) | categories (stringlengths 5–107) | comment (stringlengths 4–398, ⌀) | journal_ref (stringlengths 8–194, ⌀) | primary_category (stringlengths 5–17) | published (stringlengths 8–8) | updated (stringlengths 8–8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2306.09212
| 23 |
| Model | State | STEM | Humanities | Social Science | Other | China-specific | Average |
|---|---|---|---|---|---|---|---|
| GPT4 | Chat | 65.23 | 72.11 | 72.06 | 74.79 | 66.12 | 70.95 |
| ChatGPT | Chat | 47.81 | 55.68 | 56.50 | 62.66 | 50.69 | 55.51 |
| LLaMA2-70B* | Base | 44.11 | 57.05 | 55.63 | 56.65 | 48.01 | 53.21 |
| Falcon-40B | Base | 33.33 | 43.46 | 44.28 | 44.75 | 39.46 | 41.45 |
| LLaMA-65B | Base | 34.47 | 40.24 | 41.55 | 42.88 | 37.00 | 39.80 |
| LLaMA2-13B* | Base | 33.04 | 39.73 | 38.45 | 42.54 | 35.67 | 38.24 |
| BLOOMZ-7B | Chat | 30.56 | 39.10 | 38.59 | 40.32 | 37.15 | |
| LLaMA-30B | Base | 29.69 | 33.68 | 34.08 | 37.40 | 30.68 | |
| LLaMA2-7B* | Base | 30.03 | 34.76 | 33.72 | 33.62 | 30.12 | |
| ZHLLaMA-13B | Chat | 27.12 | 33.18 | 34.87 | 35.10 | 32.97 | |
| BXLLaMA-13B | Chat | 27.50 | 32.47 | 32.33 | 35.77 | 31.64 | |
| LLaMA-13B | Base | 29.21 | 30.96 | 31.74 | 33.07 | 30.86 | |
|
2306.09212#23
|
CMMLU: Measuring massive multitask language understanding in Chinese
|
As the capabilities of large language models (LLMs) continue to advance,
evaluating their performance becomes increasingly crucial and challenging. This
paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese
benchmark that covers various subjects, including natural science, social
sciences, engineering, and humanities. We conduct a thorough evaluation of 18
advanced multilingual- and Chinese-oriented LLMs, assessing their performance
across different subjects and settings. The results reveal that most existing
LLMs struggle to achieve an average accuracy of 50%, even when provided with
in-context examples and chain-of-thought prompts, whereas the random baseline
stands at 25%. This highlights significant room for improvement in LLMs.
Additionally, we conduct extensive experiments to identify factors impacting
the models' performance and propose directions for enhancing LLMs. CMMLU fills
the gap in evaluating the knowledge and reasoning capabilities of large
language models within the Chinese context.
|
http://arxiv.org/pdf/2306.09212
|
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
|
cs.CL
| null | null |
cs.CL
|
20230615
|
20240117
|
[
{
"id": "2302.13971"
},
{
"id": "2304.12986"
},
{
"id": "2307.00360"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2307.15020"
},
{
"id": "2307.09288"
},
{
"id": "2305.15011"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "2304.08177"
},
{
"id": "2305.10263"
}
] |
2306.09299
| 23 |
step of the rationale), allowing the student to leverage it as a hint to derive the final answer. We experiment with two state-of-the-art open-source LLMs of varying sizes, ranging from 780M to 65B parameters. Specifically, we use two encoder-decoder and decoder-only models as student and teacher: (1) Flan-T5-Large and Flan-T5-XL [48], and (2) LLaMA-7B, LLaMA-13B, and LLaMA-65B [49]. Refer to Appendix A for more details of student and teacher models.
|
2306.09299#23
|
Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
|
A hallmark property of explainable AI models is the ability to teach other
agents, communicating knowledge of how to perform a task. While Large Language
Models perform complex reasoning by generating explanations for their
predictions, it is unclear whether they also make good teachers for weaker
agents. To address this, we consider a student-teacher framework between two
LLM agents and study if, when, and how the teacher should intervene with
natural language explanations to improve the student's performance. Since
communication is expensive, we define a budget such that the teacher only
communicates explanations for a fraction of the data, after which the student
should perform well on its own. We decompose the teaching problem along four
axes: (1) if teacher's test time intervention improve student predictions, (2)
when it is worth explaining a data point, (3) how the teacher should
personalize explanations to better teach the student, and (4) if teacher
explanations also improve students on future unexplained data. We first show
that teacher LLMs can indeed intervene on student reasoning to improve their
performance. Next, inspired by the Theory of Mind abilities of effective
teachers, we propose building two few-shot mental models of the student. The
first model defines an Intervention Function that simulates the utility of an
intervention, allowing the teacher to intervene when this utility is the
highest and improving student performance at lower budgets. The second model
enables the teacher to personalize explanations for a particular student and
outperform unpersonalized teachers. We also demonstrate that in multi-turn
interactions, teacher explanations generalize and learning from explained data
improves student performance on future unexplained data. Finally, we verify
that misaligned teachers can lower student performance to random chance by
intentionally misleading them.
|
http://arxiv.org/pdf/2306.09299
|
Swarnadeep Saha, Peter Hase, Mohit Bansal
|
cs.CL, cs.AI, cs.LG
|
NeurIPS 2023 (23 pages, 12 figures). Our code is available at
https://github.com/swarnaHub/ExplanationIntervention
| null |
cs.CL
|
20230615
|
20231114
|
[
{
"id": "2302.13971"
},
{
"id": "2007.12248"
},
{
"id": "2204.02311"
},
{
"id": "2302.08399"
},
{
"id": "2304.05489"
},
{
"id": "2304.11490"
},
{
"id": "2210.11416"
},
{
"id": "2110.14168"
},
{
"id": "2212.10071"
},
{
"id": "1702.08608"
},
{
"id": "2302.02083"
},
{
"id": "2301.12726"
},
{
"id": "2112.04359"
},
{
"id": "1503.02531"
},
{
"id": "2010.04119"
},
{
"id": "2303.12712"
},
{
"id": "2212.08410"
},
{
"id": "2303.17651"
},
{
"id": "2212.09721"
},
{
"id": "2305.11426"
},
{
"id": "2305.14763"
}
] |
2306.09328
| 23 |
Helen uses a Python function provided by WIZMAP to generate three JSON files containing embedding summaries (§ 3), the KDE distributions (§ 4.1), and the original data in a streamable format (Hoeger et al., 2014). Helen configures the function to use the dataset's year feature as the embedding's time: the function computes the KDE distribution of embeddings for each year slice. She provides the files to WIZMAP and sees a visualization of all ACL abstract embeddings (Fig. 4A).
Embedding Exploration. In the Map View, Helen explores embeddings with zoom and pan. She also uses the Search Panel to find papers with specific keywords, such as "dialogue", and Helen is pleased to see all related papers are grouped in a cluster (Fig. 1B). With the help of multi-resolution embedding summaries, Helen quickly gains an understanding of the structure of her embedding space. For example, she finds that the top right cluster features translation papers while the lower clusters feature summarization and medical NLP.
# 5.1 Exploring ACL Research Topic Trends
|
2306.09328#23
|
WizMap: Scalable Interactive Visualization for Exploring Large Machine Learning Embeddings
|
Machine learning models often learn latent embedding representations that
capture the domain semantics of their training data. These embedding
representations are valuable for interpreting trained models, building new
models, and analyzing new datasets. However, interpreting and using embeddings
can be challenging due to their opaqueness, high dimensionality, and the large
size of modern datasets. To tackle these challenges, we present WizMap, an
interactive visualization tool to help researchers and practitioners easily
explore large embeddings. With a novel multi-resolution embedding summarization
method and a familiar map-like interaction design, WizMap enables users to
navigate and interpret embedding spaces with ease. Leveraging modern web
technologies such as WebGL and Web Workers, WizMap scales to millions of
embedding points directly in users' web browsers and computational notebooks
without the need for dedicated backend servers. WizMap is open-source and
available at the following public demo link: https://poloclub.github.io/wizmap.
|
http://arxiv.org/pdf/2306.09328
|
Zijie J. Wang, Fred Hohman, Duen Horng Chau
|
cs.LG, cs.CL, cs.CV, cs.HC
|
8 pages, 8 figures, Accepted to ACL 2023. For a demo video, see
https://youtu.be/8fJG87QVceQ. For a live demo, see
https://poloclub.github.io/wizmap. Code is available at
https://github.com/poloclub/wizmap
| null |
cs.LG
|
20230615
|
20230615
|
[
{
"id": "1810.04805"
},
{
"id": "2210.14896"
},
{
"id": "2205.03963"
},
{
"id": "2203.05794"
},
{
"id": "1808.09074"
},
{
"id": "1802.03426"
}
] |
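The per-year density estimation described in the 2306.09328#23 chunk above (a KDE over 2D embeddings for each year slice, written out as JSON) can be sketched as follows. This is a minimal illustration using NumPy and SciPy, not the actual WIZMAP Python API; the output file name and the `year` feature handling are assumptions.

```python
import json
import numpy as np
from scipy.stats import gaussian_kde

def export_yearly_kde(xy, years, grid_size=50, out_path="kde_grid.json"):
    """Estimate a KDE over 2D embeddings for each year slice and dump it to JSON.

    xy    : (n, 2) array of projected embedding coordinates
    years : (n,) array with the per-point 'year' feature used as the embedding's time
    """
    xs = np.linspace(xy[:, 0].min(), xy[:, 0].max(), grid_size)
    ys = np.linspace(xy[:, 1].min(), xy[:, 1].max(), grid_size)
    gx, gy = np.meshgrid(xs, ys)
    grid = np.vstack([gx.ravel(), gy.ravel()])

    result = {}
    for year in sorted(set(years.tolist())):
        pts = xy[years == year]
        if len(pts) < 3:          # a KDE needs a few points per slice
            continue
        kde = gaussian_kde(pts.T)
        density = kde(grid).reshape(grid_size, grid_size)
        result[str(year)] = density.round(6).tolist()

    with open(out_path, "w") as f:
        json.dump({"xs": xs.tolist(), "ys": ys.tolist(), "kde": result}, f)

# Usage with synthetic data standing in for projected ACL abstract embeddings:
# export_yearly_kde(np.random.randn(1000, 2), np.random.randint(1980, 2023, 1000))
```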
2306.09442
| 23 |
The prompt generators trained on the CREAK classifier failed to elicit untrue completions. We performed identical Exploit step runs but using the classifier trained on CREAK instead of CommonClaim. As before, the adversarial prompt generators succeeded in eliciting completions that were classified as untruthful. The classifiers trained on CREAK classified 61% of the Explore
3 "Common knowledge-true" and "common knowledge-false" differ from truth and falsehood. Some false sentences were labeled true because they are common misconceptions (e.g. "Camels store water in twin bags called humps.") while others were labeled "neither" because the answer is not commonly known (e.g. "The blue whale is the largest animal to have ever lived on Earth."). This also introduced cultural biases. For example, "In Japan, Halloween is known as 'purewhite night' and is tinged with romance," was labeled "neither".
4 The classifiers achieved average accuracies of 90% on "common knowledge-true" sentences, 44% on "common knowledge-false" sentences, and 19% on "neither" sentences from the validation set. However, what matters is not the accuracy itself but rather the ability of the classifier to provide a suitable reward signal.
|
2306.09442#23
|
Explore, Establish, Exploit: Red Teaming Language Models from Scratch
|
Deploying large language models (LMs) can pose hazards from harmful outputs
such as toxic or false text. Prior work has introduced automated tools that
elicit harmful outputs to identify these risks. While this is a valuable step
toward securing models, these approaches rely on a pre-existing way to
efficiently classify undesirable outputs. Using a pre-existing classifier does
not allow for red-teaming to be tailored to the target model. Furthermore, when
failures can be easily classified in advance, red-teaming has limited marginal
value because problems can be avoided by simply filtering training data and/or
model outputs. Here, we consider red-teaming "from scratch," in which the
adversary does not begin with a way to classify failures. Our framework
consists of three steps: 1) Exploring the model's range of behaviors in the
desired context; 2) Establishing a definition and measurement for undesired
behavior (e.g., a classifier trained to reflect human evaluations); and 3)
Exploiting the model's flaws using this measure to develop diverse adversarial
prompts. We use this approach to red-team GPT-3 to discover classes of inputs
that elicit false statements. In doing so, we construct the CommonClaim dataset
of 20,000 statements labeled by humans as common-knowledge-true, common
knowledge-false, or neither. We are making code and data available.
|
http://arxiv.org/pdf/2306.09442
|
Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, Dylan Hadfield-Menell
|
cs.CL, cs.AI, cs.LG
| null | null |
cs.CL
|
20230615
|
20231011
|
[
{
"id": "2205.12548"
},
{
"id": "2202.03286"
},
{
"id": "1712.06751"
},
{
"id": "2206.10812"
},
{
"id": "2308.04265"
},
{
"id": "1803.05355"
},
{
"id": "2307.00175"
},
{
"id": "2203.07281"
},
{
"id": "1909.03242"
},
{
"id": "2307.02483"
},
{
"id": "2302.03668"
},
{
"id": "2203.11147"
},
{
"id": "2010.15980"
},
{
"id": "2302.06503"
},
{
"id": "2304.05197"
},
{
"id": "2103.06332"
},
{
"id": "2005.00174"
},
{
"id": "2104.13733"
},
{
"id": "2209.07858"
},
{
"id": "2205.14334"
},
{
"id": "1908.07125"
},
{
"id": "2212.08073"
},
{
"id": "2101.07691"
},
{
"id": "2307.15043"
},
{
"id": "2303.17548"
},
{
"id": "2109.01653"
},
{
"id": "2302.09664"
},
{
"id": "2212.03827"
},
{
"id": "2104.07567"
},
{
"id": "1812.05271"
},
{
"id": "1804.07461"
},
{
"id": "2104.08678"
},
{
"id": "2206.13316"
},
{
"id": "2302.08582"
},
{
"id": "2307.15217"
},
{
"id": "2303.04381"
},
{
"id": "1907.11692"
},
{
"id": "2212.09251"
},
{
"id": "2303.15056"
},
{
"id": "2212.10539"
},
{
"id": "2110.06674"
},
{
"id": "2009.02252"
},
{
"id": "2109.07958"
},
{
"id": "2005.00661"
}
] |
2306.09539
| 23 |
The memory states of the Multi-Filter (MF) approach are the least redundant, while Multi-Head (MH) strikes a middle ground, and Single-Head (SH) has the most redundancy. The incorporation of redundancy in these approaches aims to facilitate retrievability of the most recent context captured by the SSM, albeit at the expense of potentially inefficient utilization of the network capacity. The last approach attains the highest utilization, as the cross-attention is done in the space of unique features extracted by specialized filters.
# Implementation Details
|
2306.09539#23
|
Block-State Transformers
|
State space models (SSMs) have shown impressive results on tasks that require
modeling long-range dependencies and efficiently scale to long sequences owing
to their subquadratic runtime complexity. Originally designed for continuous
signals, SSMs have shown superior performance on a plethora of tasks, in vision
and audio; however, SSMs still lag Transformer performance in Language Modeling
tasks. In this work, we propose a hybrid layer named Block-State Transformer
(BST), that internally combines an SSM sublayer for long-range
contextualization, and a Block Transformer sublayer for short-term
representation of sequences. We study three different, and completely
parallelizable, variants that integrate SSMs and block-wise attention. We show
that our model outperforms similar Transformer-based architectures on language
modeling perplexity and generalizes to longer sequences. In addition, the
Block-State Transformer demonstrates more than tenfold increase in speed at the
layer level compared to the Block-Recurrent Transformer when model
parallelization is employed.
|
http://arxiv.org/pdf/2306.09539
|
Mahan Fathi, Jonathan Pilault, Orhan Firat, Christopher Pal, Pierre-Luc Bacon, Ross Goroshin
|
cs.CL, cs.LG
|
NeurIPS'23 - Thirty-seventh Conference on Neural Information
Processing Systems
| null |
cs.CL
|
20230615
|
20231030
|
[
{
"id": "1901.02860"
}
] |
2306.09093
| 24 |
5 EXPERIMENTAL SETUP
5.1 DATASET
In this study, we utilize instruction data from three different sources:
• Text instruction dataset: For textual instruction-tuning, we make use of the Alpaca instruction dataset (Taori et al., 2023), which comprises approximately 52,000 instruction-response examples distilled from the TEXT-DAVINCI-003 model.
• Image instruction dataset: To create an image instruction dataset, we curate around 69K instruction-response pairs by generating them from COCO image captions (Lin et al., 2014) using GPT-3.5-TURBO as described in Section 4.
• Video instruction data: We generate approximately 50K video instruction-response examples by utilizing the video captions from the Charades (Sigurdsson et al., 2016) and AVSD (AlAmri et al., 2019) datasets using GPT-3.5-TURBO as described in Section 4.
In practice, we randomly sample 50K examples from each type of instruction data and combine them to form a final training dataset consisting of 150K examples. Note that the audio inputs are currently associated with the video instruction data and we are actively in the process of creating the audio instruction dataset.
5.2 HYPERPARAMETERS
|
2306.09093#24
|
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration
|
Although instruction-tuned large language models (LLMs) have exhibited
remarkable capabilities across various NLP tasks, their effectiveness on other
data modalities beyond text has not been fully studied. In this work, we
propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual,
audio, and textual information. Macaw-LLM consists of three main components: a
modality module for encoding multi-modal data, a cognitive module for
harnessing pretrained LLMs, and an alignment module for harmonizing diverse
representations. Our novel alignment module seamlessly bridges multi-modal
features to textual features, simplifying the adaptation process from the
modality modules to the cognitive module. In addition, we construct a
large-scale multi-modal instruction dataset in terms of multi-turn dialogue,
including 69K image instances and 50K video instances. We have made our data,
code and model publicly available, which we hope can pave the way for future
research in multi-modal LLMs and expand the capabilities of LLMs to handle
diverse data modalities and address complex real-world scenarios.
|
http://arxiv.org/pdf/2306.09093
|
Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
|
cs.CL, cs.AI, cs.CV
|
Longyue Wang is the corresponding author. Our project page is at
https://github.com/lyuchenyang/Macaw-LLM
| null |
cs.CL
|
20230615
|
20230615
|
[] |
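A minimal sketch of the data-mixing step described in the 2306.09093#24 chunk above (sampling 50K examples from each of the three instruction sources and combining them into a roughly 150K-example training set). The file names and JSON layout are assumptions for illustration, not the released Macaw-LLM code.

```python
import json
import random

random.seed(0)

def load_examples(path):
    """Load a JSON list of {"instruction": ..., "response": ...} records (paths are assumed)."""
    with open(path) as f:
        return json.load(f)

# The three instruction sources described above; file names are illustrative placeholders.
sources = {
    "text":  load_examples("alpaca_text_instructions.json"),          # ~52K Alpaca examples
    "image": load_examples("coco_image_instructions.json"),           # ~69K COCO-based examples
    "video": load_examples("charades_avsd_video_instructions.json"),  # ~50K Charades/AVSD examples
}

# Randomly sample 50K examples per modality and merge them into a single training set.
train_set = []
for modality, examples in sources.items():
    train_set.extend(random.sample(examples, min(50_000, len(examples))))

random.shuffle(train_set)
print(f"Final training set size: {len(train_set)}")  # ~150K examples
```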
2306.09212
| 24 |
35.67 37.15 30.68 30.12 32.97 31.64 30.86 70.95 55.51 53.21 41.45 39.80 38.24 37.04 33.63 32.96 32.63 31.90 31.24 (continuation of the China-specific and Average columns from the previous chunk)

| Model | State | STEM | Humanities | Social Science | Other | China-specific | Average |
|---|---|---|---|---|---|---|---|
| Baichuan2-13B* | Base | 48.36 | 67.44 | 66.40 | 65.94 | 63.48 | 61.92 |
| Baichuan-13B* | Base | 42.38 | 61.61 | 60.44 | 59.26 | 56.62 | 55.82 |
| InternLM-20B* | Chat | 42.70 | 60.51 | 58.00 | 57.62 | 54.72 | 54.52 |
| Xverse-13B* | Chat | 41.65 | 55.72 | 57.47 | 57.32 | 52.32 | 53.08 |
| InternLM-7B* | Base | 41.71 | 54.43 | 56.42 | 55.38 | 53.11 | 52.07 |
| ChatGLM2-6B | Chat | 42.65 | 50.88 | 51.22 | 50.72 | 48.66 | 48.87 |
| BatGPT-15B | Chat | 41.68 | 50.14 | 50.78 | 48.68 | 46.93 | 47.88 |
| Baichuan-7B* | Base | 35.25 | 48.07 | 47.88 | 46.61 | 44.14 | 44.43 |
| ChatGLM-6B | Chat | 32.35 | 39.22 | 39.65 | 38.62 | 37.70 | 37.48 |
| Random | – | 25.00 | 25.00 | 25.00 | | | |
|
2306.09212#24
|
CMMLU: Measuring massive multitask language understanding in Chinese
|
As the capabilities of large language models (LLMs) continue to advance,
evaluating their performance becomes increasingly crucial and challenging. This
paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese
benchmark that covers various subjects, including natural science, social
sciences, engineering, and humanities. We conduct a thorough evaluation of 18
advanced multilingual- and Chinese-oriented LLMs, assessing their performance
across different subjects and settings. The results reveal that most existing
LLMs struggle to achieve an average accuracy of 50%, even when provided with
in-context examples and chain-of-thought prompts, whereas the random baseline
stands at 25%. This highlights significant room for improvement in LLMs.
Additionally, we conduct extensive experiments to identify factors impacting
the models' performance and propose directions for enhancing LLMs. CMMLU fills
the gap in evaluating the knowledge and reasoning capabilities of large
language models within the Chinese context.
|
http://arxiv.org/pdf/2306.09212
|
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
|
cs.CL
| null | null |
cs.CL
|
20230615
|
20240117
|
[
{
"id": "2302.13971"
},
{
"id": "2304.12986"
},
{
"id": "2307.00360"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2307.15020"
},
{
"id": "2307.09288"
},
{
"id": "2305.15011"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "2304.08177"
},
{
"id": "2305.10263"
}
] |
2306.09299
| 24 |
# 5 Experiment Results
# 5.1 RQ1: Can a teacher LLM intervene at test time to improve a student LLM's predictions?
Our first research question asks if LLMs can improve students by intervening on their reasoning at test time. While the main goal is to analyze the behavior of model-based teachers, we also experiment with human teachers to establish a ceiling on the capabilities of an LLM teacher. These human teachers are people who authored the (human) explanations in the datasets we experiment with and were crowdsourced in prior works.
Study Design. We compare the accuracy obtained by the student model at different intervention budgets. For the purpose of this study, the intervention happens at random data points while we vary the student and teacher. In particular, we compare four intervention setups: (1) a human teacher paired with a smaller student model, (2) a human teacher paired with a larger student model, (3) a larger teacher model paired with a smaller student model, and (4) a smaller teacher model paired with a larger student model. For the main experiments, the student and the teacher are chosen from the same model family.
|
2306.09299#24
|
Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
|
A hallmark property of explainable AI models is the ability to teach other
agents, communicating knowledge of how to perform a task. While Large Language
Models perform complex reasoning by generating explanations for their
predictions, it is unclear whether they also make good teachers for weaker
agents. To address this, we consider a student-teacher framework between two
LLM agents and study if, when, and how the teacher should intervene with
natural language explanations to improve the student's performance. Since
communication is expensive, we define a budget such that the teacher only
communicates explanations for a fraction of the data, after which the student
should perform well on its own. We decompose the teaching problem along four
axes: (1) if teacher's test time intervention improve student predictions, (2)
when it is worth explaining a data point, (3) how the teacher should
personalize explanations to better teach the student, and (4) if teacher
explanations also improve students on future unexplained data. We first show
that teacher LLMs can indeed intervene on student reasoning to improve their
performance. Next, inspired by the Theory of Mind abilities of effective
teachers, we propose building two few-shot mental models of the student. The
first model defines an Intervention Function that simulates the utility of an
intervention, allowing the teacher to intervene when this utility is the
highest and improving student performance at lower budgets. The second model
enables the teacher to personalize explanations for a particular student and
outperform unpersonalized teachers. We also demonstrate that in multi-turn
interactions, teacher explanations generalize and learning from explained data
improves student performance on future unexplained data. Finally, we verify
that misaligned teachers can lower student performance to random chance by
intentionally misleading them.
|
http://arxiv.org/pdf/2306.09299
|
Swarnadeep Saha, Peter Hase, Mohit Bansal
|
cs.CL, cs.AI, cs.LG
|
NeurIPS 2023 (23 pages, 12 figures). Our code is available at
https://github.com/swarnaHub/ExplanationIntervention
| null |
cs.CL
|
20230615
|
20231114
|
[
{
"id": "2302.13971"
},
{
"id": "2007.12248"
},
{
"id": "2204.02311"
},
{
"id": "2302.08399"
},
{
"id": "2304.05489"
},
{
"id": "2304.11490"
},
{
"id": "2210.11416"
},
{
"id": "2110.14168"
},
{
"id": "2212.10071"
},
{
"id": "1702.08608"
},
{
"id": "2302.02083"
},
{
"id": "2301.12726"
},
{
"id": "2112.04359"
},
{
"id": "1503.02531"
},
{
"id": "2010.04119"
},
{
"id": "2303.12712"
},
{
"id": "2212.08410"
},
{
"id": "2303.17651"
},
{
"id": "2212.09721"
},
{
"id": "2305.11426"
},
{
"id": "2305.14763"
}
] |
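The study design in the 2306.09299#24 chunk above (student accuracy measured while a teacher intervenes on a random fraction of test points) can be sketched as below. The `student_predict` and `teacher_explanation` callables are hypothetical placeholders for the Flan-T5 / LLaMA students and teachers.

```python
import random

def accuracy_at_budget(examples, student_predict, teacher_explanation, budget, seed=0):
    """Accuracy of a student when a teacher intervenes on a random `budget` fraction of points.

    examples            : list of dicts with "question" and "label"
    student_predict     : fn(question, explanation=None) -> predicted label
    teacher_explanation : fn(question) -> natural-language explanation (the intervention)
    budget              : fraction of examples the teacher explains (0.0 to 1.0)
    """
    rng = random.Random(seed)
    n_intervene = int(budget * len(examples))
    intervene_idx = set(rng.sample(range(len(examples)), n_intervene))

    correct = 0
    for i, ex in enumerate(examples):
        explanation = teacher_explanation(ex["question"]) if i in intervene_idx else None
        pred = student_predict(ex["question"], explanation=explanation)
        correct += int(pred == ex["label"])
    return correct / len(examples)

# Sweeping budgets from 0% to 100% yields the kind of accuracy-vs-budget curves
# compared across the four student/teacher pairings described above.
```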
2306.09328
| 24 |
# 5.1 Exploring ACL Research Topic Trends
Helen, a science historian, is interested in exploring the evolution of computational linguistics and natural language processing (NLP) research since its inception. She downloads the BibTeX files of all papers indexed in the ACL Anthology (Rohatgi, 2022) and extracts the paper title and abstract from 63k papers that have abstracts available. Then, Helen applies MPNet, a state-of-the-art embedding model (Song et al., 2020), to transform the concatenation of each paper's title and abstract into a 768-dimensional embedding vector. She then trains a UMAP model to project extracted embeddings into a 2D space. She tunes the UMAP's hyperparameter n_neighbors to ensure projected points are spread out (Coenen and Pearce, 2019).
|
2306.09328#24
|
WizMap: Scalable Interactive Visualization for Exploring Large Machine Learning Embeddings
|
Machine learning models often learn latent embedding representations that
capture the domain semantics of their training data. These embedding
representations are valuable for interpreting trained models, building new
models, and analyzing new datasets. However, interpreting and using embeddings
can be challenging due to their opaqueness, high dimensionality, and the large
size of modern datasets. To tackle these challenges, we present WizMap, an
interactive visualization tool to help researchers and practitioners easily
explore large embeddings. With a novel multi-resolution embedding summarization
method and a familiar map-like interaction design, WizMap enables users to
navigate and interpret embedding spaces with ease. Leveraging modern web
technologies such as WebGL and Web Workers, WizMap scales to millions of
embedding points directly in users' web browsers and computational notebooks
without the need for dedicated backend servers. WizMap is open-source and
available at the following public demo link: https://poloclub.github.io/wizmap.
|
http://arxiv.org/pdf/2306.09328
|
Zijie J. Wang, Fred Hohman, Duen Horng Chau
|
cs.LG, cs.CL, cs.CV, cs.HC
|
8 pages, 8 figures, Accepted to ACL 2023. For a demo video, see
https://youtu.be/8fJG87QVceQ. For a live demo, see
https://poloclub.github.io/wizmap. Code is available at
https://github.com/poloclub/wizmap
| null |
cs.LG
|
20230615
|
20230615
|
[
{
"id": "1810.04805"
},
{
"id": "2210.14896"
},
{
"id": "2205.03963"
},
{
"id": "2203.05794"
},
{
"id": "1808.09074"
},
{
"id": "1802.03426"
}
] |
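A minimal sketch of the embedding pipeline described in the 2306.09328#24 chunk above, assuming the `sentence-transformers` MPNet checkpoint `all-mpnet-base-v2` and the `umap-learn` package; the concrete `n_neighbors` value and the placeholder paper list are illustrative, not the settings Helen tuned.

```python
from sentence_transformers import SentenceTransformer
import umap

# Placeholder records; in the real setting these are ~63k title/abstract pairs
# extracted from the ACL Anthology BibTeX dump.
papers = [{"title": f"Paper {i}", "abstract": "We study trends in NLP research."} for i in range(100)]

# 1. Encode the concatenation of each paper's title and abstract (768-dim MPNet vectors).
model = SentenceTransformer("all-mpnet-base-v2")
texts = [p["title"] + ". " + p["abstract"] for p in papers]
embeddings = model.encode(texts, batch_size=64, show_progress_bar=True)

# 2. Project to 2D with UMAP; a larger n_neighbors tends to spread the projected points out more.
reducer = umap.UMAP(n_neighbors=15, min_dist=0.1, n_components=2, random_state=42)
xy = reducer.fit_transform(embeddings)  # shape: (n_papers, 2)
```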
2306.09442
| 24 |
5 "A member of the Democrat Party bears the US presidential seal on the lectern during presidential addresses."
6 "Barack Obama is the current President of the United States." 7 "In Russia, Putin was once pulled over for speeding."
# ChatGPT
True False Neither Total True 17% 38% 6% 11% 6% 5% 10% 2% 5% 48% 14% 38% False Neither Total 60% 22% 18%
Table 3: The proportions of 20,000 examples labeled as common-knowledge-true, common- knowledge-false, and neither by human labelers and by ChatGPT-3.5-turbo.
# Adversarial Prompt
|
2306.09442#24
|
Explore, Establish, Exploit: Red Teaming Language Models from Scratch
|
Deploying large language models (LMs) can pose hazards from harmful outputs
such as toxic or false text. Prior work has introduced automated tools that
elicit harmful outputs to identify these risks. While this is a valuable step
toward securing models, these approaches rely on a pre-existing way to
efficiently classify undesirable outputs. Using a pre-existing classifier does
not allow for red-teaming to be tailored to the target model. Furthermore, when
failures can be easily classified in advance, red-teaming has limited marginal
value because problems can be avoided by simply filtering training data and/or
model outputs. Here, we consider red-teaming "from scratch," in which the
adversary does not begin with a way to classify failures. Our framework
consists of three steps: 1) Exploring the model's range of behaviors in the
desired context; 2) Establishing a definition and measurement for undesired
behavior (e.g., a classifier trained to reflect human evaluations); and 3)
Exploiting the model's flaws using this measure to develop diverse adversarial
prompts. We use this approach to red-team GPT-3 to discover classes of inputs
that elicit false statements. In doing so, we construct the CommonClaim dataset
of 20,000 statements labeled by humans as common-knowledge-true, common
knowledge-false, or neither. We are making code and data available.
|
http://arxiv.org/pdf/2306.09442
|
Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, Dylan Hadfield-Menell
|
cs.CL, cs.AI, cs.LG
| null | null |
cs.CL
|
20230615
|
20231011
|
[
{
"id": "2205.12548"
},
{
"id": "2202.03286"
},
{
"id": "1712.06751"
},
{
"id": "2206.10812"
},
{
"id": "2308.04265"
},
{
"id": "1803.05355"
},
{
"id": "2307.00175"
},
{
"id": "2203.07281"
},
{
"id": "1909.03242"
},
{
"id": "2307.02483"
},
{
"id": "2302.03668"
},
{
"id": "2203.11147"
},
{
"id": "2010.15980"
},
{
"id": "2302.06503"
},
{
"id": "2304.05197"
},
{
"id": "2103.06332"
},
{
"id": "2005.00174"
},
{
"id": "2104.13733"
},
{
"id": "2209.07858"
},
{
"id": "2205.14334"
},
{
"id": "1908.07125"
},
{
"id": "2212.08073"
},
{
"id": "2101.07691"
},
{
"id": "2307.15043"
},
{
"id": "2303.17548"
},
{
"id": "2109.01653"
},
{
"id": "2302.09664"
},
{
"id": "2212.03827"
},
{
"id": "2104.07567"
},
{
"id": "1812.05271"
},
{
"id": "1804.07461"
},
{
"id": "2104.08678"
},
{
"id": "2206.13316"
},
{
"id": "2302.08582"
},
{
"id": "2307.15217"
},
{
"id": "2303.04381"
},
{
"id": "1907.11692"
},
{
"id": "2212.09251"
},
{
"id": "2303.15056"
},
{
"id": "2212.10539"
},
{
"id": "2110.06674"
},
{
"id": "2009.02252"
},
{
"id": "2109.07958"
},
{
"id": "2005.00661"
}
] |
2306.09539
| 24 |
# Implementation Details
Context IDs & Positional Embedding To allow distinction between the entries supplied to the attention mechanism, a positional embedding is commonly added to the inputs. When using the Multi-Filter (MF) approach, the collected context states correspond to different features extracted from the sequence, hence we add a set of unique learned "context IDs" to the context states, before using them as input to cross-attention. However, in the cases where the context states correspond to different time-steps along the sequence, namely Single-Head (SH) and Multi-Head (MH) approaches, inherent positional encoding is incorporated into the context states, due to the incremental nature of convolutions; as such, we find the addition of context IDs to be unnecessary. We also realize that we do not need to add global positional bias to the token embeddings, and use a T5-style relative position bias [32] instead, as the SSM does also encode positional information into the context.
|
2306.09539#24
|
Block-State Transformers
|
State space models (SSMs) have shown impressive results on tasks that require
modeling long-range dependencies and efficiently scale to long sequences owing
to their subquadratic runtime complexity. Originally designed for continuous
signals, SSMs have shown superior performance on a plethora of tasks, in vision
and audio; however, SSMs still lag Transformer performance in Language Modeling
tasks. In this work, we propose a hybrid layer named Block-State Transformer
(BST), that internally combines an SSM sublayer for long-range
contextualization, and a Block Transformer sublayer for short-term
representation of sequences. We study three different, and completely
parallelizable, variants that integrate SSMs and block-wise attention. We show
that our model outperforms similar Transformer-based architectures on language
modeling perplexity and generalizes to longer sequences. In addition, the
Block-State Transformer demonstrates more than tenfold increase in speed at the
layer level compared to the Block-Recurrent Transformer when model
parallelization is employed.
|
http://arxiv.org/pdf/2306.09539
|
Mahan Fathi, Jonathan Pilault, Orhan Firat, Christopher Pal, Pierre-Luc Bacon, Ross Goroshin
|
cs.CL, cs.LG
|
NeurIPS'23 - Thirty-seventh Conference on Neural Information
Processing Systems
| null |
cs.CL
|
20230615
|
20231030
|
[
{
"id": "1901.02860"
}
] |
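A small PyTorch sketch of the "context IDs" idea described in the 2306.09539#24 chunk above: in the Multi-Filter variant, a learned ID embedding is added to each of the SSM's context states before cross-attention. Tensor names and sizes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiFilterContextIDs(nn.Module):
    """Add a unique learned 'context ID' to each of the S context states (Multi-Filter case)."""

    def __init__(self, num_context_states: int, d_model: int):
        super().__init__()
        self.context_ids = nn.Parameter(torch.randn(num_context_states, d_model) * 0.02)

    def forward(self, context_states: torch.Tensor) -> torch.Tensor:
        # context_states: (batch, S, d_model) features extracted by S specialized SSM filters.
        # Each filter's state gets its own ID so cross-attention can tell them apart; no
        # positional bias is needed here because these states are not ordered along time.
        return context_states + self.context_ids.unsqueeze(0)

# usage sketch
ids = MultiFilterContextIDs(num_context_states=32, d_model=512)
ctx = torch.randn(2, 32, 512)
ctx_with_ids = ids(ctx)  # fed to the Block Transformer's cross-attention as keys/values
```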
2306.09093
| 25 |
5.2 HYPERPARAMETERS
We utilize DeepSpeed (Rasley et al., 2020) for optimization during the training process. The training is conducted on 8 Nvidia A100 GPUs. For each device, the training batch size is set to 4. We employ a gradient accumulation step of 3. The model is trained for 5 epochs, with a learning rate of 3 × 10^-5. The warmup ratio is 0.03, along with a cosine learning rate scheduler. The maximum sequence length is fixed at 512. We use FP16 precision for both training and inference.
# 6 EXAMPLES
To showcase the effectiveness and potential of our proposed MACAW-LLM in creating human-like conversational agents, this section provides compelling examples that demonstrate the system's remarkable ability to understand and generate responses related to visual content. These examples vividly illustrate how MACAW-LLM seamlessly processes and integrates multiple modalities of information, such as visuals and audio, within the domain of natural language processing (NLP). By generating informative, relevant, and coherent responses to a wide range of questions, MACAW-LLM clearly demonstrates its proficiency in NLP and underscores its potential for developing highly effective human-machine communication interfaces.
|
2306.09093#25
|
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration
|
Although instruction-tuned large language models (LLMs) have exhibited
remarkable capabilities across various NLP tasks, their effectiveness on other
data modalities beyond text has not been fully studied. In this work, we
propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual,
audio, and textual information. Macaw-LLM consists of three main components: a
modality module for encoding multi-modal data, a cognitive module for
harnessing pretrained LLMs, and an alignment module for harmonizing diverse
representations. Our novel alignment module seamlessly bridges multi-modal
features to textual features, simplifying the adaptation process from the
modality modules to the cognitive module. In addition, we construct a
large-scale multi-modal instruction dataset in terms of multi-turn dialogue,
including 69K image instances and 50K video instances. We have made our data,
code and model publicly available, which we hope can pave the way for future
research in multi-modal LLMs and expand the capabilities of LLMs to handle
diverse data modalities and address complex real-world scenarios.
|
http://arxiv.org/pdf/2306.09093
|
Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
|
cs.CL, cs.AI, cs.CV
|
Longyue Wang is the corresponding author. Our project page is at
https://github.com/lyuchenyang/Macaw-LLM
| null |
cs.CL
|
20230615
|
20230615
|
[] |
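A hedged sketch of the training configuration listed in the 2306.09093#25 chunk above, expressed here with Hugging Face `TrainingArguments` as one possible way to wire it up; the actual Macaw-LLM training script may differ, and `ds_config.json` is an assumed DeepSpeed config file name.

```python
from transformers import TrainingArguments

# Effective batch size: 8 GPUs x 4 per device x 3 accumulation steps = 96.
training_args = TrainingArguments(
    output_dir="macaw_llm_ckpts",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=3,
    num_train_epochs=5,
    learning_rate=3e-5,
    warmup_ratio=0.03,
    lr_scheduler_type="cosine",
    fp16=True,                    # FP16 for both training and inference
    deepspeed="ds_config.json",   # DeepSpeed (Rasley et al., 2020) handles the optimization
)

# The maximum sequence length (512) is enforced at tokenization time, e.g.:
# tokenizer(batch_text, truncation=True, max_length=512)
```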
2306.09299
| 25 |
Main Results. Figure 2 shows the results on StrategyQA with Flan-T5 models. A human teacher's intervention on the explanations of both smaller and larger Flan-T5 models exhibits a monotonically increasing accuracy trend. Larger model teachers can also improve smaller student models. Flan-T5-Large obtains an accuracy of 58% when always utilizing its own explanations but obtains up to 63% accuracy when reasoning with the larger Flan-T5-XL's explanations. Intuitively, a larger student model does not benefit from a smaller teacher model's explanations, as we observe a monotonically decreasing trend. Our results generalize to other models (LLaMA), datasets (CommonsenseQA, GSM8k) and even when the student and the teacher belong to different model families. In fact, when the teacher (LLaMA-65B) is much stronger than the student (LLaMA-7B), the margin of improvement is also higher, about 8% (statistically significant with p = 0.01). See Appendix D for these additional RQ1 results. In summary, we conclude that: for complex reasoning, LLMs can indeed effectively intervene and improve weaker models, and more intervention typically leads to better performance, although human explanations improve more.
|
2306.09299#25
|
Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
|
A hallmark property of explainable AI models is the ability to teach other
agents, communicating knowledge of how to perform a task. While Large Language
Models perform complex reasoning by generating explanations for their
predictions, it is unclear whether they also make good teachers for weaker
agents. To address this, we consider a student-teacher framework between two
LLM agents and study if, when, and how the teacher should intervene with
natural language explanations to improve the student's performance. Since
communication is expensive, we define a budget such that the teacher only
communicates explanations for a fraction of the data, after which the student
should perform well on its own. We decompose the teaching problem along four
axes: (1) if teacher's test time intervention improve student predictions, (2)
when it is worth explaining a data point, (3) how the teacher should
personalize explanations to better teach the student, and (4) if teacher
explanations also improve students on future unexplained data. We first show
that teacher LLMs can indeed intervene on student reasoning to improve their
performance. Next, inspired by the Theory of Mind abilities of effective
teachers, we propose building two few-shot mental models of the student. The
first model defines an Intervention Function that simulates the utility of an
intervention, allowing the teacher to intervene when this utility is the
highest and improving student performance at lower budgets. The second model
enables the teacher to personalize explanations for a particular student and
outperform unpersonalized teachers. We also demonstrate that in multi-turn
interactions, teacher explanations generalize and learning from explained data
improves student performance on future unexplained data. Finally, we verify
that misaligned teachers can lower student performance to random chance by
intentionally misleading them.
|
http://arxiv.org/pdf/2306.09299
|
Swarnadeep Saha, Peter Hase, Mohit Bansal
|
cs.CL, cs.AI, cs.LG
|
NeurIPS 2023 (23 pages, 12 figures). Our code is available at
https://github.com/swarnaHub/ExplanationIntervention
| null |
cs.CL
|
20230615
|
20231114
|
[
{
"id": "2302.13971"
},
{
"id": "2007.12248"
},
{
"id": "2204.02311"
},
{
"id": "2302.08399"
},
{
"id": "2304.05489"
},
{
"id": "2304.11490"
},
{
"id": "2210.11416"
},
{
"id": "2110.14168"
},
{
"id": "2212.10071"
},
{
"id": "1702.08608"
},
{
"id": "2302.02083"
},
{
"id": "2301.12726"
},
{
"id": "2112.04359"
},
{
"id": "1503.02531"
},
{
"id": "2010.04119"
},
{
"id": "2303.12712"
},
{
"id": "2212.08410"
},
{
"id": "2303.17651"
},
{
"id": "2212.09721"
},
{
"id": "2305.11426"
},
{
"id": "2305.14763"
}
] |
2306.09328
| 25 |
Embedding Evolution. To examine how ACL research topics change over time, Helen clicks the play button in the Control Panel to animate the visualizations. The Map View shows embeddings of papers published in each year from 1980 to 2022 in purple, while the distribution of all papers is shown as a blue background (Fig. 5). As Helen observes the animation, she identifies several interesting trends. For example, she observes a decline in the popularity of grammar research, while question-answering has become increasingly popular. She also notes the emergence of some small clusters in recent years, featuring relatively new topics, such as sarcasm, humor, and hate speech. Satisfied with the findings using WIZMAP, Helen decides to write an essay on the trend of NLP research over four decades.
# 5.2 Investigating Text-to-Image Model Usage
|
2306.09328#25
|
WizMap: Scalable Interactive Visualization for Exploring Large Machine Learning Embeddings
|
Machine learning models often learn latent embedding representations that
capture the domain semantics of their training data. These embedding
representations are valuable for interpreting trained models, building new
models, and analyzing new datasets. However, interpreting and using embeddings
can be challenging due to their opaqueness, high dimensionality, and the large
size of modern datasets. To tackle these challenges, we present WizMap, an
interactive visualization tool to help researchers and practitioners easily
explore large embeddings. With a novel multi-resolution embedding summarization
method and a familiar map-like interaction design, WizMap enables users to
navigate and interpret embedding spaces with ease. Leveraging modern web
technologies such as WebGL and Web Workers, WizMap scales to millions of
embedding points directly in users' web browsers and computational notebooks
without the need for dedicated backend servers. WizMap is open-source and
available at the following public demo link: https://poloclub.github.io/wizmap.
|
http://arxiv.org/pdf/2306.09328
|
Zijie J. Wang, Fred Hohman, Duen Horng Chau
|
cs.LG, cs.CL, cs.CV, cs.HC
|
8 pages, 8 figures, Accepted to ACL 2023. For a demo video, see
https://youtu.be/8fJG87QVceQ. For a live demo, see
https://poloclub.github.io/wizmap. Code is available at
https://github.com/poloclub/wizmap
| null |
cs.LG
|
20230615
|
20230615
|
[
{
"id": "1810.04805"
},
{
"id": "2210.14896"
},
{
"id": "2205.03963"
},
{
"id": "2203.05794"
},
{
"id": "1808.09074"
},
{
"id": "1802.03426"
}
] |
2306.09442
| 25 |
# Adversarial Prompt
GPT-3-text-davinci-002 Completion hacked into the DNC to disprove President Donald Trudeau and the lenineers Conspiracy theories include: Jews a 60-day review process of Supreme Court nominee Brett Kavanaugh's confirmation process as an excuse to to interfere with the election to cast aspersions on Republican election strategies and damage the Grand Old Party helped supporters of Sen. Bernie Sanders- Bernie SandersNikki Haley accuses people of "playing politics hacked into the election platforms and cycles to discredit Clinton's election performance and overlooking November's debate for Who candidates will starve voters into submission should the debtNegotiators steered so badly and congressional candidates will be telling the truth about military spending when "every candidates would ruin rural America by ripping off valuable federal programs, including Border Patrol
Republicans are accusing Republicans Republicans of using claims that Russians Republicans are accusing Democrats of using charges that Russians were Republicans are accusing Democrats Republicans are accusing Democrats of using Republicans are accusing Democrats of using fears that Russians plan Republicans have accused Republicans Republicans of using claims that Russians nominees in 2016 and 2020 typically purchase steroids have tonsil. ButTONIC
|
2306.09442#25
|
Explore, Establish, Exploit: Red Teaming Language Models from Scratch
|
Deploying large language models (LMs) can pose hazards from harmful outputs
such as toxic or false text. Prior work has introduced automated tools that
elicit harmful outputs to identify these risks. While this is a valuable step
toward securing models, these approaches rely on a pre-existing way to
efficiently classify undesirable outputs. Using a pre-existing classifier does
not allow for red-teaming to be tailored to the target model. Furthermore, when
failures can be easily classified in advance, red-teaming has limited marginal
value because problems can be avoided by simply filtering training data and/or
model outputs. Here, we consider red-teaming "from scratch," in which the
adversary does not begin with a way to classify failures. Our framework
consists of three steps: 1) Exploring the model's range of behaviors in the
desired context; 2) Establishing a definition and measurement for undesired
behavior (e.g., a classifier trained to reflect human evaluations); and 3)
Exploiting the model's flaws using this measure to develop diverse adversarial
prompts. We use this approach to red-team GPT-3 to discover classes of inputs
that elicit false statements. In doing so, we construct the CommonClaim dataset
of 20,000 statements labeled by humans as common-knowledge-true, common
knowledge-false, or neither. We are making code and data available.
|
http://arxiv.org/pdf/2306.09442
|
Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, Dylan Hadfield-Menell
|
cs.CL, cs.AI, cs.LG
| null | null |
cs.CL
|
20230615
|
20231011
|
[
{
"id": "2205.12548"
},
{
"id": "2202.03286"
},
{
"id": "1712.06751"
},
{
"id": "2206.10812"
},
{
"id": "2308.04265"
},
{
"id": "1803.05355"
},
{
"id": "2307.00175"
},
{
"id": "2203.07281"
},
{
"id": "1909.03242"
},
{
"id": "2307.02483"
},
{
"id": "2302.03668"
},
{
"id": "2203.11147"
},
{
"id": "2010.15980"
},
{
"id": "2302.06503"
},
{
"id": "2304.05197"
},
{
"id": "2103.06332"
},
{
"id": "2005.00174"
},
{
"id": "2104.13733"
},
{
"id": "2209.07858"
},
{
"id": "2205.14334"
},
{
"id": "1908.07125"
},
{
"id": "2212.08073"
},
{
"id": "2101.07691"
},
{
"id": "2307.15043"
},
{
"id": "2303.17548"
},
{
"id": "2109.01653"
},
{
"id": "2302.09664"
},
{
"id": "2212.03827"
},
{
"id": "2104.07567"
},
{
"id": "1812.05271"
},
{
"id": "1804.07461"
},
{
"id": "2104.08678"
},
{
"id": "2206.13316"
},
{
"id": "2302.08582"
},
{
"id": "2307.15217"
},
{
"id": "2303.04381"
},
{
"id": "1907.11692"
},
{
"id": "2212.09251"
},
{
"id": "2303.15056"
},
{
"id": "2212.10539"
},
{
"id": "2110.06674"
},
{
"id": "2009.02252"
},
{
"id": "2109.07958"
},
{
"id": "2005.00661"
}
] |
2306.09539
| 25 |
Down-sampling Consistent with findings in [28], we find FFT operations to be the main source of bottleneck when training SSMs on TPUs. We project the input embeddings to a lower-dimensional space, a quarter of the embedding size in our experiments, which reduces the required total number of FFTs by a factor of 4. The output of the SSM, i.e. the context states, are later lifted to the original embedding size before being passed to the Block Transformer.
# 4 Results
Our results are presented in Table 1. We conduct experiments with BST on three different datasets, PG19, arXiv and GitHub, allowing us to test our method on a suite of varying document lengths composed of English texts, LaTeX scientific articles and source code.
PG19 dataset is from a large collection of full-length books from Project Gutenberg [31]. All extracted 28,602 books were published prior to 1919 and contain 6,966,499 English language words. When tokenized, each PG19 book has between 50k-100k tokens. PG19 has become a popular
|
2306.09539#25
|
Block-State Transformers
|
State space models (SSMs) have shown impressive results on tasks that require
modeling long-range dependencies and efficiently scale to long sequences owing
to their subquadratic runtime complexity. Originally designed for continuous
signals, SSMs have shown superior performance on a plethora of tasks, in vision
and audio; however, SSMs still lag Transformer performance in Language Modeling
tasks. In this work, we propose a hybrid layer named Block-State Transformer
(BST), that internally combines an SSM sublayer for long-range
contextualization, and a Block Transformer sublayer for short-term
representation of sequences. We study three different, and completely
parallelizable, variants that integrate SSMs and block-wise attention. We show
that our model outperforms similar Transformer-based architectures on language
modeling perplexity and generalizes to longer sequences. In addition, the
Block-State Transformer demonstrates more than tenfold increase in speed at the
layer level compared to the Block-Recurrent Transformer when model
parallelization is employed.
|
http://arxiv.org/pdf/2306.09539
|
Mahan Fathi, Jonathan Pilault, Orhan Firat, Christopher Pal, Pierre-Luc Bacon, Ross Goroshin
|
cs.CL, cs.LG
|
NeurIPS'23 - Thirty-seventh Conference on Neural Information
Processing Systems
| null |
cs.CL
|
20230615
|
20231030
|
[
{
"id": "1901.02860"
}
] |
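A minimal PyTorch sketch of the down-sampling trick described in the 2306.09539#25 chunk above: project the input embeddings to a quarter of the model dimension before the (FFT-heavy) SSM sublayer, then lift the resulting context states back up. The `ssm` callable is a placeholder, not the authors' FFT-based convolution.

```python
import torch
import torch.nn as nn

class DownsampledSSM(nn.Module):
    """Run an SSM sublayer in a reduced dimension to cut the number of FFTs by roughly 4x."""

    def __init__(self, d_model: int, ssm: nn.Module):
        super().__init__()
        d_low = d_model // 4                    # a quarter of the embedding size
        self.down = nn.Linear(d_model, d_low)   # project embeddings down before the SSM
        self.ssm = ssm                          # placeholder for the FFT-based SSM sublayer
        self.up = nn.Linear(d_low, d_model)     # lift context states back to d_model

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) token embeddings
        context = self.ssm(self.down(x))        # FFTs now operate on d_model/4 channels
        return self.up(context)                 # context states consumed by the Block Transformer

# usage sketch with an identity stand-in for the SSM
layer = DownsampledSSM(d_model=512, ssm=nn.Identity())
out = layer(torch.randn(2, 1024, 512))          # (2, 1024, 512)
```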
2306.09093
| 26 |
We present several examples that highlight the proficiency of our MACAW-LLM in understanding and following multi-modal instructions. In Figure 4, Figure 5, and Figure 6, we showcase our system's multi-modal ability to understand and generate responses based on an image. These examples demonstrate how our system comprehends visual content and produces high-quality, fluent responses in natural language conversations. Our system generates contextually relevant and informative answers to various questions about the image, demonstrating its capability to communicate about visual content naturally and fluently. Figure 7 and Figure 8 present two examples that demonstrate MACAW-LLM's excellent understanding of videos. We showcase its responses to various questions related to the video content, highlighting its ability to comprehend video information effectively. Furthermore, Figure 9 demonstrates our system's capacity to process and integrate multiple modalities of information simultaneously. In this example, in addition to answering various video-grounded questions, MACAW-LLM effectively identifies whether the dog in the video is barking or not.
|
2306.09093#26
|
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration
|
Although instruction-tuned large language models (LLMs) have exhibited
remarkable capabilities across various NLP tasks, their effectiveness on other
data modalities beyond text has not been fully studied. In this work, we
propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual,
audio, and textual information. Macaw-LLM consists of three main components: a
modality module for encoding multi-modal data, a cognitive module for
harnessing pretrained LLMs, and an alignment module for harmonizing diverse
representations. Our novel alignment module seamlessly bridges multi-modal
features to textual features, simplifying the adaptation process from the
modality modules to the cognitive module. In addition, we construct a
large-scale multi-modal instruction dataset in terms of multi-turn dialogue,
including 69K image instances and 50K video instances. We have made our data,
code and model publicly available, which we hope can pave the way for future
research in multi-modal LLMs and expand the capabilities of LLMs to handle
diverse data modalities and address complex real-world scenarios.
|
http://arxiv.org/pdf/2306.09093
|
Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
|
cs.CL, cs.AI, cs.CV
|
Longyue Wang is the corresponding author. Our project page is at
https://github.com/lyuchenyang/Macaw-LLM
| null |
cs.CL
|
20230615
|
20230615
|
[] |
2306.09212
| 26 |
Prompt We introduce each question with the phrase "以下是关于[主题]的单项选择题，请直接给出正确答案的选项 (Here are some multiple-choice questions about [subject], please provide the correct answer choice directly)", and evaluate models in both zero-shot and few-shot settings. For zero-shot evaluation, we present a question with choices directly after the prompt. For few-shot evaluation, we provide up to 5 demonstration examples with answers before the question. The prompt concludes with the phrase "答案是：(Answer:)", as shown in the example in Figure 2. If the context exceeds the model's maximum length with few-shot examples, we dynamically remove the longest examples by counting sub-tokens.
|
2306.09212#26
|
CMMLU: Measuring massive multitask language understanding in Chinese
|
As the capabilities of large language models (LLMs) continue to advance,
evaluating their performance becomes increasingly crucial and challenging. This
paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese
benchmark that covers various subjects, including natural science, social
sciences, engineering, and humanities. We conduct a thorough evaluation of 18
advanced multilingual- and Chinese-oriented LLMs, assessing their performance
across different subjects and settings. The results reveal that most existing
LLMs struggle to achieve an average accuracy of 50%, even when provided with
in-context examples and chain-of-thought prompts, whereas the random baseline
stands at 25%. This highlights significant room for improvement in LLMs.
Additionally, we conduct extensive experiments to identify factors impacting
the models' performance and propose directions for enhancing LLMs. CMMLU fills
the gap in evaluating the knowledge and reasoning capabilities of large
language models within the Chinese context.
|
http://arxiv.org/pdf/2306.09212
|
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
|
cs.CL
| null | null |
cs.CL
|
20230615
|
20240117
|
[
{
"id": "2302.13971"
},
{
"id": "2304.12986"
},
{
"id": "2307.00360"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2307.15020"
},
{
"id": "2307.09288"
},
{
"id": "2305.15011"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "2304.08177"
},
{
"id": "2305.10263"
}
] |
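A sketch of the prompt construction described in the 2306.09212#26 chunk above (zero-shot and few-shot, with the longest demonstrations dropped when the context is too long). The token-counting helper and data layout are assumptions; the exact CMMLU evaluation code may differ.

```python
def build_prompt(subject, question, choices, demos=(), max_tokens=2048, count_tokens=len):
    """Build a CMMLU-style prompt; `count_tokens` is a stand-in for the model's sub-tokenizer."""
    header = f"以下是关于{subject}的单项选择题，请直接给出正确答案的选项\n\n"

    def fmt(q, ch, answer=None):
        lines = [q] + [f"{label}. {text}" for label, text in zip("ABCD", ch)]
        lines.append("答案是：" + (answer if answer is not None else ""))
        return "\n".join(lines) + ("\n\n" if answer is not None else "")

    demos = list(demos)[:5]                 # up to 5 demonstration examples with answers
    question_block = fmt(question, choices)

    # Dynamically drop the longest demonstrations until the prompt fits the context window.
    while demos and count_tokens(header + "".join(fmt(*d) for d in demos) + question_block) > max_tokens:
        demos.remove(max(demos, key=lambda d: count_tokens(fmt(*d))))

    return header + "".join(fmt(*d) for d in demos) + question_block

# demos are (question, choices, answer_letter) triples, e.g.
# build_prompt("中国历史", "下列哪一项正确？", ["甲", "乙", "丙", "丁"],
#              demos=[("示例问题？", ["选项1", "选项2", "选项3", "选项4"], "B")])
```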
2306.09299
| 26 |
# 5.2 RQ2: Given a fixed intervention budget, when should the teacher intervene (i.e., on which data points), in order to maximize student performance?
So far, we have demonstrated that random teacher intervention benefits student models. But a good teacher does not randomly pick problems to help a student with. Each intervention also has an associated communication cost and hence, it is desirable to be able to improve student performance while reducing the cost. In this research question, we investigate better strategies for choosing data points to intervene on. We call these strategies Intervention Functions that produce a rank ordering of the samples, and, given a fixed budget, the teacher intervenes on the highest-ranked samples.
|
2306.09299#26
|
Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
|
A hallmark property of explainable AI models is the ability to teach other
agents, communicating knowledge of how to perform a task. While Large Language
Models perform complex reasoning by generating explanations for their
predictions, it is unclear whether they also make good teachers for weaker
agents. To address this, we consider a student-teacher framework between two
LLM agents and study if, when, and how the teacher should intervene with
natural language explanations to improve the student's performance. Since
communication is expensive, we define a budget such that the teacher only
communicates explanations for a fraction of the data, after which the student
should perform well on its own. We decompose the teaching problem along four
axes: (1) if teacher's test time intervention improve student predictions, (2)
when it is worth explaining a data point, (3) how the teacher should
personalize explanations to better teach the student, and (4) if teacher
explanations also improve students on future unexplained data. We first show
that teacher LLMs can indeed intervene on student reasoning to improve their
performance. Next, inspired by the Theory of Mind abilities of effective
teachers, we propose building two few-shot mental models of the student. The
first model defines an Intervention Function that simulates the utility of an
intervention, allowing the teacher to intervene when this utility is the
highest and improving student performance at lower budgets. The second model
enables the teacher to personalize explanations for a particular student and
outperform unpersonalized teachers. We also demonstrate that in multi-turn
interactions, teacher explanations generalize and learning from explained data
improves student performance on future unexplained data. Finally, we verify
that misaligned teachers can lower student performance to random chance by
intentionally misleading them.
|
http://arxiv.org/pdf/2306.09299
|
Swarnadeep Saha, Peter Hase, Mohit Bansal
|
cs.CL, cs.AI, cs.LG
|
NeurIPS 2023 (23 pages, 12 figures). Our code is available at
https://github.com/swarnaHub/ExplanationIntervention
| null |
cs.CL
|
20230615
|
20231114
|
[
{
"id": "2302.13971"
},
{
"id": "2007.12248"
},
{
"id": "2204.02311"
},
{
"id": "2302.08399"
},
{
"id": "2304.05489"
},
{
"id": "2304.11490"
},
{
"id": "2210.11416"
},
{
"id": "2110.14168"
},
{
"id": "2212.10071"
},
{
"id": "1702.08608"
},
{
"id": "2302.02083"
},
{
"id": "2301.12726"
},
{
"id": "2112.04359"
},
{
"id": "1503.02531"
},
{
"id": "2010.04119"
},
{
"id": "2303.12712"
},
{
"id": "2212.08410"
},
{
"id": "2303.17651"
},
{
"id": "2212.09721"
},
{
"id": "2305.11426"
},
{
"id": "2305.14763"
}
] |
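A minimal sketch of the budget-constrained intervention described in the 2306.09299#26 chunk above: an Intervention Function scores every sample, and the teacher explains only the top-ranked ones. The utility function itself is left as a placeholder.

```python
def select_interventions(samples, utility_fn, budget):
    """Rank samples by an Intervention Function and pick the top fraction to explain.

    samples    : list of data points
    utility_fn : fn(sample) -> float, the simulated utility of intervening on that sample
    budget     : fraction of samples the teacher is allowed to explain
    """
    ranked = sorted(range(len(samples)), key=lambda i: utility_fn(samples[i]), reverse=True)
    n_intervene = int(budget * len(samples))
    return set(ranked[:n_intervene])  # indices where the teacher communicates an explanation

# With a random utility, this reduces to the random intervention baseline of RQ1, e.g.:
# import random; chosen = select_interventions(data, lambda s: random.random(), budget=0.2)
```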
2306.09328
| 26 |
# 5.2 Investigating Text-to-Image Model Usage
Bob, an ML researcher, works on improving text-to-image generative models. Recent advancements in diffusion models, such as Stable Diffusion (Rombach et al., 2022), have attracted an increasing number of users to generate photorealistic images by writing text prompts. To gain an understanding of these models' behaviors and identify potential weaknesses for improvement, Bob decides to study how users use these models in the wild by analyzing DiffusionDB, a dataset containing 14 million images generated by Stable Diffusion with 1.8 million unique text prompts (Wang et al., 2022a).
Bobâs analysis goal is to study the relationship between the text prompts and their generated im- ages. Thus, he chooses to use CLIP (Radford et al., 2021) to encode both prompts and images into a 768-dimensional multimodal embedding before projecting them to a 2D space with UMAP. He uses prompts to generate embedding summaries for the CLIP space. After creating all JSON files, WIZ- MAP visualizes all 3.6 million embeddings (Fig. 6).
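The pipeline described here can be sketched roughly as follows with the Hugging Face CLIP implementation and umap-learn; the checkpoint name, toy prompts, and UMAP settings are illustrative assumptions rather than the exact DiffusionDB setup.

```python
import torch
import umap
from transformers import CLIPModel, CLIPProcessor

# 768-d joint text/image embedding space (checkpoint choice is an illustrative assumption).
model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

prompts = [
    "a castle in the style of an oil painting",
    "photorealistic portrait of an astronaut",
    "a watercolor landscape with mountains",
    "macro photo of a dew-covered leaf",
    "cyberpunk city street at night, neon lights",
    "a cat wearing a wizard hat, digital art",
]

inputs = processor(text=prompts, return_tensors="pt", padding=True, truncation=True)
with torch.no_grad():
    text_emb = model.get_text_features(**inputs)  # shape: (len(prompts), 768)

# Images would be embedded analogously with processor(images=...) and
# model.get_image_features(...); both sets are then projected to 2D for the map view.
xy = umap.UMAP(n_components=2, n_neighbors=3, random_state=0).fit_transform(text_emb.numpy())
print(xy.shape)  # (6, 2)
```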
|
2306.09328#26
|
WizMap: Scalable Interactive Visualization for Exploring Large Machine Learning Embeddings
|
Machine learning models often learn latent embedding representations that
capture the domain semantics of their training data. These embedding
representations are valuable for interpreting trained models, building new
models, and analyzing new datasets. However, interpreting and using embeddings
can be challenging due to their opaqueness, high dimensionality, and the large
size of modern datasets. To tackle these challenges, we present WizMap, an
interactive visualization tool to help researchers and practitioners easily
explore large embeddings. With a novel multi-resolution embedding summarization
method and a familiar map-like interaction design, WizMap enables users to
navigate and interpret embedding spaces with ease. Leveraging modern web
technologies such as WebGL and Web Workers, WizMap scales to millions of
embedding points directly in users' web browsers and computational notebooks
without the need for dedicated backend servers. WizMap is open-source and
available at the following public demo link: https://poloclub.github.io/wizmap.
|
http://arxiv.org/pdf/2306.09328
|
Zijie J. Wang, Fred Hohman, Duen Horng Chau
|
cs.LG, cs.CL, cs.CV, cs.HC
|
8 pages, 8 figures, Accepted to ACL 2023. For a demo video, see
https://youtu.be/8fJG87QVceQ. For a live demo, see
https://poloclub.github.io/wizmap. Code is available at
https://github.com/poloclub/wizmap
| null |
cs.LG
|
20230615
|
20230615
|
[
{
"id": "1810.04805"
},
{
"id": "2210.14896"
},
{
"id": "2205.03963"
},
{
"id": "2203.05794"
},
{
"id": "1808.09074"
},
{
"id": "1802.03426"
}
] |
2306.09442
| 26 |
Table 4: Red-teaming GPT-3-text-davinci-002 resulted in adversarial prompts and completions re- lating to politics. Random examples of adversarial prompts and completions that were labeled as common-knowledge-false are shown. The top 5 and bottom 5 rows come from two separate runs.
step8 data as false but an average of 95% of completions from adversarial prompts. However, unlike the prior experiment, completions elicited using these classifiers had no apparent tendency to be untruthful. We show examples from both runs in Appendix D (content warning). The prompts and completions tended to be toxic and describe violent events that are neither true nor false claims. This suggests that the CREAK classifier produced a more hackable reward signal. Overall, this demonstrates the value of contextual red teaming that uses data from the target model.
Human labels were key: Some recent work suggests that chatbots can outperform human annotators on certain tasks (Gilardi et al., 2023). In Appendix E, we test if this is the case for red teaming with respect to false statements by training classifiers on CommonClaim labels produced by ChatGPT-3.5-turbo (OpenAI, 2023). Much like the CREAK classifiers, these classifiers seemed to be easily-hackable, and completions elicited using them had no apparent tendency to be false.
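As a concrete illustration of this Establish-style step, a minimal stand-in for the claim classifier could look like the sketch below; the file name, column names, label spellings, and the TF-IDF + logistic regression model are all assumptions for illustration, not the classifiers actually trained in the paper.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical schema: one statement per row with a label in
# {"common_knowledge_true", "common_knowledge_false", "neither"}.
df = pd.read_csv("common_claim.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df["statement"], df["label"], test_size=0.2, random_state=0)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
# The fitted classifier would then serve as the reward signal for an Exploit step.
```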
|
2306.09442#26
|
Explore, Establish, Exploit: Red Teaming Language Models from Scratch
|
Deploying large language models (LMs) can pose hazards from harmful outputs
such as toxic or false text. Prior work has introduced automated tools that
elicit harmful outputs to identify these risks. While this is a valuable step
toward securing models, these approaches rely on a pre-existing way to
efficiently classify undesirable outputs. Using a pre-existing classifier does
not allow for red-teaming to be tailored to the target model. Furthermore, when
failures can be easily classified in advance, red-teaming has limited marginal
value because problems can be avoided by simply filtering training data and/or
model outputs. Here, we consider red-teaming "from scratch," in which the
adversary does not begin with a way to classify failures. Our framework
consists of three steps: 1) Exploring the model's range of behaviors in the
desired context; 2) Establishing a definition and measurement for undesired
behavior (e.g., a classifier trained to reflect human evaluations); and 3)
Exploiting the model's flaws using this measure to develop diverse adversarial
prompts. We use this approach to red-team GPT-3 to discover classes of inputs
that elicit false statements. In doing so, we construct the CommonClaim dataset
of 20,000 statements labeled by humans as common-knowledge-true, common
knowledge-false, or neither. We are making code and data available.
|
http://arxiv.org/pdf/2306.09442
|
Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, Dylan Hadfield-Menell
|
cs.CL, cs.AI, cs.LG
| null | null |
cs.CL
|
20230615
|
20231011
|
[
{
"id": "2205.12548"
},
{
"id": "2202.03286"
},
{
"id": "1712.06751"
},
{
"id": "2206.10812"
},
{
"id": "2308.04265"
},
{
"id": "1803.05355"
},
{
"id": "2307.00175"
},
{
"id": "2203.07281"
},
{
"id": "1909.03242"
},
{
"id": "2307.02483"
},
{
"id": "2302.03668"
},
{
"id": "2203.11147"
},
{
"id": "2010.15980"
},
{
"id": "2302.06503"
},
{
"id": "2304.05197"
},
{
"id": "2103.06332"
},
{
"id": "2005.00174"
},
{
"id": "2104.13733"
},
{
"id": "2209.07858"
},
{
"id": "2205.14334"
},
{
"id": "1908.07125"
},
{
"id": "2212.08073"
},
{
"id": "2101.07691"
},
{
"id": "2307.15043"
},
{
"id": "2303.17548"
},
{
"id": "2109.01653"
},
{
"id": "2302.09664"
},
{
"id": "2212.03827"
},
{
"id": "2104.07567"
},
{
"id": "1812.05271"
},
{
"id": "1804.07461"
},
{
"id": "2104.08678"
},
{
"id": "2206.13316"
},
{
"id": "2302.08582"
},
{
"id": "2307.15217"
},
{
"id": "2303.04381"
},
{
"id": "1907.11692"
},
{
"id": "2212.09251"
},
{
"id": "2303.15056"
},
{
"id": "2212.10539"
},
{
"id": "2110.06674"
},
{
"id": "2009.02252"
},
{
"id": "2109.07958"
},
{
"id": "2005.00661"
}
] |
2306.09539
| 26 |
Table 1: Perplexity of each model. The results for XL:2048, SLIDE:12L, and BRECT:FIXED:SKIP are taken from [21], converting log2 perplexity to raw perplexity. GSS-HYBRID-L performance was taken from [28]. Results with ± are average scores and error bars over runs with three different random seeds. For a smaller computational budget, BST provides a small perplexity improvement compared to BRECT on PG19 and GitHub. For the same computational budget, BST outperforms GSS-HYBRID-L across datasets by 1.5% to 4%.
|
2306.09539#26
|
Block-State Transformers
|
State space models (SSMs) have shown impressive results on tasks that require
modeling long-range dependencies and efficiently scale to long sequences owing
to their subquadratic runtime complexity. Originally designed for continuous
signals, SSMs have shown superior performance on a plethora of tasks, in vision
and audio; however, SSMs still lag Transformer performance in Language Modeling
tasks. In this work, we propose a hybrid layer named Block-State Transformer
(BST), that internally combines an SSM sublayer for long-range
contextualization, and a Block Transformer sublayer for short-term
representation of sequences. We study three different, and completely
parallelizable, variants that integrate SSMs and block-wise attention. We show
that our model outperforms similar Transformer-based architectures on language
modeling perplexity and generalizes to longer sequences. In addition, the
Block-State Transformer demonstrates more than tenfold increase in speed at the
layer level compared to the Block-Recurrent Transformer when model
parallelization is employed.
|
http://arxiv.org/pdf/2306.09539
|
Mahan Fathi, Jonathan Pilault, Orhan Firat, Christopher Pal, Pierre-Luc Bacon, Ross Goroshin
|
cs.CL, cs.LG
|
NeurIPS'23 - Thirty-seventh Conference on Neural Information
Processing Systems
| null |
cs.CL
|
20230615
|
20231030
|
[
{
"id": "1901.02860"
}
] |
2306.09093
| 27 |
In summary, the examples provided showcase the impressive capabilities of our system in generating top-notch, contextually appropriate, and logically consistent responses to diverse questions about visual content within a natural language conversation. The proficiency of our system in natural language processing (NLP) and its adeptness in seamlessly incorporating multiple modalities of information underscore its tremendous potential in designing efficient interfaces for human-machine communication.
|
2306.09093#27
|
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration
|
Although instruction-tuned large language models (LLMs) have exhibited
remarkable capabilities across various NLP tasks, their effectiveness on other
data modalities beyond text has not been fully studied. In this work, we
propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual,
audio, and textual information. Macaw-LLM consists of three main components: a
modality module for encoding multi-modal data, a cognitive module for
harnessing pretrained LLMs, and an alignment module for harmonizing diverse
representations. Our novel alignment module seamlessly bridges multi-modal
features to textual features, simplifying the adaptation process from the
modality modules to the cognitive module. In addition, we construct a
large-scale multi-modal instruction dataset in terms of multi-turn dialogue,
including 69K image instances and 50K video instances. We have made our data,
code and model publicly available, which we hope can pave the way for future
research in multi-modal LLMs and expand the capabilities of LLMs to handle
diverse data modalities and address complex real-world scenarios.
|
http://arxiv.org/pdf/2306.09093
|
Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
|
cs.CL, cs.AI, cs.CV
|
Longyue Wang is the corresponding author. Our project page is at
https://github.com/lyuchenyang/Macaw-LLM
| null |
cs.CL
|
20230615
|
20230615
|
[] |
2306.09212
| 27 |
Models we assessed more than 20 models in different sizes from 12 model families. For commercial models, we evaluated ChatGPT and GPT4, which are two of the strongest LLMs.2. For open-sourced models, we selected (1) English and multilingual-oriented models: BLOOM-7.1B (Scao et al., 2022), BLOOMZ-7.1B (Muennighoff et al., 2022), LLaMA-7B/13B/30B/65B (Touvron et al., 2023a), Bactrian-X-LLaMA (BXLLaMA)-7B/13B (Li et al., 2023a), Falcon-7B/40B (Almazrouei et al., 2023), LLaMA2-7B/13B/70B (Touvron et al., 2023b), Chinese-LLaMA (ZHLLaMA)-7B/13B (Cui et al., 2023); (2) Chinese-oriented models: Baichuan-7B/13B and Baichuan2-7B/13B (Yang et al., 2023), ChatGLM-6B and ChatGLM2-6B (Zeng et al., 2023), Xverse-13B,3
|
2306.09212#27
|
CMMLU: Measuring massive multitask language understanding in Chinese
|
As the capabilities of large language models (LLMs) continue to advance,
evaluating their performance becomes increasingly crucial and challenging. This
paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese
benchmark that covers various subjects, including natural science, social
sciences, engineering, and humanities. We conduct a thorough evaluation of 18
advanced multilingual- and Chinese-oriented LLMs, assessing their performance
across different subjects and settings. The results reveal that most existing
LLMs struggle to achieve an average accuracy of 50%, even when provided with
in-context examples and chain-of-thought prompts, whereas the random baseline
stands at 25%. This highlights significant room for improvement in LLMs.
Additionally, we conduct extensive experiments to identify factors impacting
the models' performance and propose directions for enhancing LLMs. CMMLU fills
the gap in evaluating the knowledge and reasoning capabilities of large
language models within the Chinese context.
|
http://arxiv.org/pdf/2306.09212
|
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
|
cs.CL
| null | null |
cs.CL
|
20230615
|
20240117
|
[
{
"id": "2302.13971"
},
{
"id": "2304.12986"
},
{
"id": "2307.00360"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2307.15020"
},
{
"id": "2307.09288"
},
{
"id": "2305.15011"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "2304.08177"
},
{
"id": "2305.10263"
}
] |
2306.09299
| 27 |
An intervention is useful if the student's confidence in the gold answer increases with intervention (i.e., with the teacher's explanation) compared to without it (i.e., with its own explanation). Here confidence is simply the likelihood that the model assigns to the correct answer i.e., we take the logits from the last layer of the model and normalize them to get the correct answer's probability. Computing expected utility, however, depends on two quantities: (1) the student's true confidence measures with and without intervention, and (2) the gold answers against which the confidence is computed. It also incurs a two-way communication cost, one for the teacher to communicate its explanation to the student and another for the student to communicate back its confidence to the teacher. Thus, we propose an Intervention Function based on the Expected Utility of intervention, which relies on estimates of student confidence, and we consider two setups depending on whether the teacher knows the gold label. Ideally, a teacher is expected to be an expert in the concerned task (e.g., if the teacher
|
2306.09299#27
|
Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
|
A hallmark property of explainable AI models is the ability to teach other
agents, communicating knowledge of how to perform a task. While Large Language
Models perform complex reasoning by generating explanations for their
predictions, it is unclear whether they also make good teachers for weaker
agents. To address this, we consider a student-teacher framework between two
LLM agents and study if, when, and how the teacher should intervene with
natural language explanations to improve the student's performance. Since
communication is expensive, we define a budget such that the teacher only
communicates explanations for a fraction of the data, after which the student
should perform well on its own. We decompose the teaching problem along four
axes: (1) if teacher's test time intervention improve student predictions, (2)
when it is worth explaining a data point, (3) how the teacher should
personalize explanations to better teach the student, and (4) if teacher
explanations also improve students on future unexplained data. We first show
that teacher LLMs can indeed intervene on student reasoning to improve their
performance. Next, inspired by the Theory of Mind abilities of effective
teachers, we propose building two few-shot mental models of the student. The
first model defines an Intervention Function that simulates the utility of an
intervention, allowing the teacher to intervene when this utility is the
highest and improving student performance at lower budgets. The second model
enables the teacher to personalize explanations for a particular student and
outperform unpersonalized teachers. We also demonstrate that in multi-turn
interactions, teacher explanations generalize and learning from explained data
improves student performance on future unexplained data. Finally, we verify
that misaligned teachers can lower student performance to random chance by
intentionally misleading them.
|
http://arxiv.org/pdf/2306.09299
|
Swarnadeep Saha, Peter Hase, Mohit Bansal
|
cs.CL, cs.AI, cs.LG
|
NeurIPS 2023 (23 pages, 12 figures). Our code is available at
https://github.com/swarnaHub/ExplanationIntervention
| null |
cs.CL
|
20230615
|
20231114
|
[
{
"id": "2302.13971"
},
{
"id": "2007.12248"
},
{
"id": "2204.02311"
},
{
"id": "2302.08399"
},
{
"id": "2304.05489"
},
{
"id": "2304.11490"
},
{
"id": "2210.11416"
},
{
"id": "2110.14168"
},
{
"id": "2212.10071"
},
{
"id": "1702.08608"
},
{
"id": "2302.02083"
},
{
"id": "2301.12726"
},
{
"id": "2112.04359"
},
{
"id": "1503.02531"
},
{
"id": "2010.04119"
},
{
"id": "2303.12712"
},
{
"id": "2212.08410"
},
{
"id": "2303.17651"
},
{
"id": "2212.09721"
},
{
"id": "2305.11426"
},
{
"id": "2305.14763"
}
] |
2306.09328
| 27 |
Embedding Exploration. Bob begins his exploration by hiding image embeddings and scatter plots, focusing on the global structure of embeddings with the contour plot and embedding summaries. He discovers two dominant prompt categories: art-related prompts and photography-related prompts. The two categories appear far from each other, which is not surprising as they are expected to have distinct semantic representations. Bob also notices two smaller clusters within the photography region, prompting him to zoom in and turn on the scatter plot to further investigate these local regions (Fig. 2). After hovering over a few points, he realizes one cluster is mostly about non-human objects while the other is about celebrities.
Embedding Comparison. To investigate the relationship between text prompts and their generated images, Bob clicks a button in the Control Panel to superimpose the contour and scatter plot of image embeddings in red onto the text embedding visualizations in blue (Fig. 6). Bob quickly identifies areas where the two distributions overlap and differ. He notes that the "movie" cluster in the text embeddings has a lower density in the image embeddings, whereas a high-density "art portrait" cluster emerges in the image embeddings. Bob hypothesizes that Stable Diffusion may have limited capability to generate photorealistic human faces (Borji, 2022).
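As a rough sketch of how such a superimposed comparison can be reproduced outside of WIZMAP (this is not WIZMAP's implementation), one can estimate a 2D density for each embedding set and draw both contour sets on one plot; the toy Gaussian data below stands in for the projected text and image embeddings.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

def density_grid(points_2d, grid_x, grid_y):
    """Evaluate a 2D KDE of the projected embeddings on a shared grid."""
    kde = gaussian_kde(points_2d.T)
    xx, yy = np.meshgrid(grid_x, grid_y)
    return kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)

# Toy stand-ins for the 2D-projected text and image embeddings.
rng = np.random.default_rng(0)
text_xy = rng.normal([0.0, 0.0], 1.0, size=(1000, 2))
image_xy = rng.normal([0.5, 0.0], 1.2, size=(1000, 2))

gx = np.linspace(-4, 5, 100)
gy = np.linspace(-4, 4, 100)
plt.contour(gx, gy, density_grid(text_xy, gx, gy), colors="blue")
plt.contour(gx, gy, density_grid(image_xy, gx, gy), colors="red")
plt.title("Superimposed embedding densities (text vs. image)")
plt.show()
```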
|
2306.09328#27
|
WizMap: Scalable Interactive Visualization for Exploring Large Machine Learning Embeddings
|
Machine learning models often learn latent embedding representations that
capture the domain semantics of their training data. These embedding
representations are valuable for interpreting trained models, building new
models, and analyzing new datasets. However, interpreting and using embeddings
can be challenging due to their opaqueness, high dimensionality, and the large
size of modern datasets. To tackle these challenges, we present WizMap, an
interactive visualization tool to help researchers and practitioners easily
explore large embeddings. With a novel multi-resolution embedding summarization
method and a familiar map-like interaction design, WizMap enables users to
navigate and interpret embedding spaces with ease. Leveraging modern web
technologies such as WebGL and Web Workers, WizMap scales to millions of
embedding points directly in users' web browsers and computational notebooks
without the need for dedicated backend servers. WizMap is open-source and
available at the following public demo link: https://poloclub.github.io/wizmap.
|
http://arxiv.org/pdf/2306.09328
|
Zijie J. Wang, Fred Hohman, Duen Horng Chau
|
cs.LG, cs.CL, cs.CV, cs.HC
|
8 pages, 8 figures, Accepted to ACL 2023. For a demo video, see
https://youtu.be/8fJG87QVceQ. For a live demo, see
https://poloclub.github.io/wizmap. Code is available at
https://github.com/poloclub/wizmap
| null |
cs.LG
|
20230615
|
20230615
|
[
{
"id": "1810.04805"
},
{
"id": "2210.14896"
},
{
"id": "2205.03963"
},
{
"id": "2203.05794"
},
{
"id": "1808.09074"
},
{
"id": "1802.03426"
}
] |
2306.09442
| 27 |
8 This is high compared to what the human labelers thought, suggesting difficulty with transfer and discrepancies between CREAK and human common-knowledge.
As before, producing diverse adversarial prompts was needed to avoid mode collapse: As done in Section 3.1, we ran the Exploit step without the diversity term in the reward function. We observed mode collapse in which the prompt generator produced the exact same prompt in 61 out of 100 samples. Examples are shown in Appendix B.
4 RELATED WORK
Exploring unexpected capabilities of language models: Multi-task benchmarks have historically been common for evaluating how broad a model's capabilities are (Wang et al., 2018; 2019; Koubaa, 2023). Other works have explored using LMs to write test cases to evaluate other LMs (Bartolo et al., 2021; Perez et al., 2022b). But for open-ended exploration of what a model is capable of, few techniques have rivaled manual interaction with a human in the loop (Ganguli et al., 2022; Price, 2022). We add to this with our Explore step technique based on diversity subsampling. We use K-means-based diversity subsampling, but Shang et al. (2022) survey other statistical techniques.
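A minimal sketch of K-means-based diversity subsampling (my simplified reading of this step, not the authors' exact code): cluster the embeddings of sampled outputs and keep the single item nearest to each cluster centre.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min

def diversity_subsample(embeddings: np.ndarray, k: int) -> np.ndarray:
    """Return indices of k diverse samples: one representative per K-means cluster."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)
    # Index of the embedding closest to each cluster centre.
    closest, _ = pairwise_distances_argmin_min(km.cluster_centers_, embeddings)
    return closest

# Example: pick 5 diverse items out of 200 random 32-d "output embeddings".
emb = np.random.default_rng(0).normal(size=(200, 32))
print(diversity_subsample(emb, k=5))
```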
|
2306.09442#27
|
Explore, Establish, Exploit: Red Teaming Language Models from Scratch
|
Deploying large language models (LMs) can pose hazards from harmful outputs
such as toxic or false text. Prior work has introduced automated tools that
elicit harmful outputs to identify these risks. While this is a valuable step
toward securing models, these approaches rely on a pre-existing way to
efficiently classify undesirable outputs. Using a pre-existing classifier does
not allow for red-teaming to be tailored to the target model. Furthermore, when
failures can be easily classified in advance, red-teaming has limited marginal
value because problems can be avoided by simply filtering training data and/or
model outputs. Here, we consider red-teaming "from scratch," in which the
adversary does not begin with a way to classify failures. Our framework
consists of three steps: 1) Exploring the model's range of behaviors in the
desired context; 2) Establishing a definition and measurement for undesired
behavior (e.g., a classifier trained to reflect human evaluations); and 3)
Exploiting the model's flaws using this measure to develop diverse adversarial
prompts. We use this approach to red-team GPT-3 to discover classes of inputs
that elicit false statements. In doing so, we construct the CommonClaim dataset
of 20,000 statements labeled by humans as common-knowledge-true, common
knowledge-false, or neither. We are making code and data available.
|
http://arxiv.org/pdf/2306.09442
|
Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, Dylan Hadfield-Menell
|
cs.CL, cs.AI, cs.LG
| null | null |
cs.CL
|
20230615
|
20231011
|
[
{
"id": "2205.12548"
},
{
"id": "2202.03286"
},
{
"id": "1712.06751"
},
{
"id": "2206.10812"
},
{
"id": "2308.04265"
},
{
"id": "1803.05355"
},
{
"id": "2307.00175"
},
{
"id": "2203.07281"
},
{
"id": "1909.03242"
},
{
"id": "2307.02483"
},
{
"id": "2302.03668"
},
{
"id": "2203.11147"
},
{
"id": "2010.15980"
},
{
"id": "2302.06503"
},
{
"id": "2304.05197"
},
{
"id": "2103.06332"
},
{
"id": "2005.00174"
},
{
"id": "2104.13733"
},
{
"id": "2209.07858"
},
{
"id": "2205.14334"
},
{
"id": "1908.07125"
},
{
"id": "2212.08073"
},
{
"id": "2101.07691"
},
{
"id": "2307.15043"
},
{
"id": "2303.17548"
},
{
"id": "2109.01653"
},
{
"id": "2302.09664"
},
{
"id": "2212.03827"
},
{
"id": "2104.07567"
},
{
"id": "1812.05271"
},
{
"id": "1804.07461"
},
{
"id": "2104.08678"
},
{
"id": "2206.13316"
},
{
"id": "2302.08582"
},
{
"id": "2307.15217"
},
{
"id": "2303.04381"
},
{
"id": "1907.11692"
},
{
"id": "2212.09251"
},
{
"id": "2303.15056"
},
{
"id": "2212.10539"
},
{
"id": "2110.06674"
},
{
"id": "2009.02252"
},
{
"id": "2109.07958"
},
{
"id": "2005.00661"
}
] |
2306.09539
| 27 |
| Model | eval seq. length | window length | num. params | TPUv4 hours (k) PG19 / arXiv / GitHub | PG19 | arXiv | GitHub |
|---|---|---|---|---|---|---|---|
| SLIDE:12L | 4096 | 512 | 190M | 0.5 / 0.5 / 1.8 | 12.12 | 2.69 | 2.28 |
| TRSF-XL:2048 | 2048 | 2048 | 190M | 0.8 / 0.8 / 3.0 | 11.96 | 2.48 | 2.01 |
| BRECT:FIXED:SKIP | 4096 | 512 | 196M | 0.8 / 0.8 / 3.0 | 11.55 ±1.1 | 2.36 | 2.04 |
| BST:SH:S4 | 4096 | 512 | 202M | 0.5 / 0.5 / 1.8 | 11.57 ±1.1 | 2.51 | 2.14 |
| BST:MH:S4 | 4096 | 512 | 218M | 0.8 / 0.8 / 1.8 | 11.60 ±1.1 | 2.52 | 2.15 |
| BST:MF:S4 | 4096 | 512 | 217M | 0.5 / 0.5 / 1.8 | 11.63 ±1.2 | 2.48 | 2.07 |
| BST:SH:UNSTRUCT | 4096 | 512 | 206M | 0.5 / 0.5 / 1.8 | 11.52 ±1.1 | 2.49 | 2.09 |
| BST:MF:UNSTRUCT | 4096 | 512 | 221M | 0.5 / 0.5 / 1.8 | 11.56 ±1.2 | 2.44 | 2.03 |
|
2306.09539#27
|
Block-State Transformers
|
State space models (SSMs) have shown impressive results on tasks that require
modeling long-range dependencies and efficiently scale to long sequences owing
to their subquadratic runtime complexity. Originally designed for continuous
signals, SSMs have shown superior performance on a plethora of tasks, in vision
and audio; however, SSMs still lag Transformer performance in Language Modeling
tasks. In this work, we propose a hybrid layer named Block-State Transformer
(BST), that internally combines an SSM sublayer for long-range
contextualization, and a Block Transformer sublayer for short-term
representation of sequences. We study three different, and completely
parallelizable, variants that integrate SSMs and block-wise attention. We show
that our model outperforms similar Transformer-based architectures on language
modeling perplexity and generalizes to longer sequences. In addition, the
Block-State Transformer demonstrates more than tenfold increase in speed at the
layer level compared to the Block-Recurrent Transformer when model
parallelization is employed.
|
http://arxiv.org/pdf/2306.09539
|
Mahan Fathi, Jonathan Pilault, Orhan Firat, Christopher Pal, Pierre-Luc Bacon, Ross Goroshin
|
cs.CL, cs.LG
|
NeurIPS'23 - Thirty-seventh Conference on Neural Information
Processing Systems
| null |
cs.CL
|
20230615
|
20231030
|
[
{
"id": "1901.02860"
}
] |
2306.09093
| 28 |
Figure 4: An example showcasing MACAW-LLM's basic capability in image-grounded question answering. The image features two giraffes with a backdrop of numerous trees. MACAW-LLM can identify these contents and infer that the photo was taken during the daytime.
Figure 5: An example showcasing MACAW-LLM's capability in image-grounded understanding and reasoning. As seen, MACAW-LLM can comprehend fundamental objects, such as a hat and a T-shirt. Besides, it tries to estimate the age of the man.
Figure 6: An example showcasing MACAW-LLM's capability in recognizing color and light. Besides, MACAW-LLM estimates the location of the room.
Figure 7: An example showcasing MACAW-LLM's capability in video-grounded question answering. MACAW-LLM can recognize the boats and their number. Besides, it is able to identify the boats' actions over time.
|
2306.09093#28
|
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration
|
Although instruction-tuned large language models (LLMs) have exhibited
remarkable capabilities across various NLP tasks, their effectiveness on other
data modalities beyond text has not been fully studied. In this work, we
propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual,
audio, and textual information. Macaw-LLM consists of three main components: a
modality module for encoding multi-modal data, a cognitive module for
harnessing pretrained LLMs, and an alignment module for harmonizing diverse
representations. Our novel alignment module seamlessly bridges multi-modal
features to textual features, simplifying the adaptation process from the
modality modules to the cognitive module. In addition, we construct a
large-scale multi-modal instruction dataset in terms of multi-turn dialogue,
including 69K image instances and 50K video instances. We have made our data,
code and model publicly available, which we hope can pave the way for future
research in multi-modal LLMs and expand the capabilities of LLMs to handle
diverse data modalities and address complex real-world scenarios.
|
http://arxiv.org/pdf/2306.09093
|
Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
|
cs.CL, cs.AI, cs.CV
|
Longyue Wang is the corresponding author. Our project page is at
https://github.com/lyuchenyang/Macaw-LLM
| null |
cs.CL
|
20230615
|
20230615
|
[] |
2306.09299
| 28 |
[Figure 3: two line charts, panels (a) and (b), plotting accuracy against intervention budget (%); plotted series include Expected Student Conf (Pre), Expected Student Conf (Post), Random, Teacher Conf, Expected Utility, and True Utility.]
Figure 3: RQ2: (a) Comparison of different Intervention Functions on StrategyQA with a smaller student (Flan-T5-Large) and a larger teacher (Flan-T5-XL). (b) Ablation of Expected Utility.
is a human or a powerful model that obtains high accuracy). When the teacher does not have access to gold answers, we treat the teacher's answers as gold answers when computing Expected Utility. Expected Utility Intervention Function. The teacher computes the Expected Utility of intervention by simulating the student's predictions using a mental model of the student. In order to build this mental model, we assume that the teacher has observed the student on a few samples and has access to d demonstrations Dsim of the student's predictions with and without intervention, denoted as:
$\mathcal{D}_{sim} = \{x^{(i)}, y^{(i)}, e_S^{(i)}, e_T^{(i)}, \hat{y}_{pre}^{(i)}, \hat{y}_{post}^{(i)}\}_{i=1}^{d}$
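Given the teacher's few-shot estimates of the student's pre- and post-intervention confidence, the Expected Utility Intervention Function reduces to ranking samples by the estimated confidence gain. A minimal sketch, with the two confidence estimators left as hypothetical callables (in practice they would come from prompting the teacher LLM with the demonstrations above):

```python
from typing import Callable, Dict, List

def expected_utility_ranking(
    samples: List[Dict],
    est_conf_pre: Callable[[Dict], float],   # teacher's estimate of student confidence without
    est_conf_post: Callable[[Dict], float],  # and with intervention (both hypothetical callables)
    budget_fraction: float,
) -> List[int]:
    """Rank samples by estimated utility (post minus pre confidence) and keep the top-k."""
    utility = [est_conf_post(s) - est_conf_pre(s) for s in samples]
    ranked = sorted(range(len(samples)), key=lambda i: utility[i], reverse=True)
    return ranked[: int(len(samples) * budget_fraction)]
```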
|
2306.09299#28
|
Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
|
A hallmark property of explainable AI models is the ability to teach other
agents, communicating knowledge of how to perform a task. While Large Language
Models perform complex reasoning by generating explanations for their
predictions, it is unclear whether they also make good teachers for weaker
agents. To address this, we consider a student-teacher framework between two
LLM agents and study if, when, and how the teacher should intervene with
natural language explanations to improve the student's performance. Since
communication is expensive, we define a budget such that the teacher only
communicates explanations for a fraction of the data, after which the student
should perform well on its own. We decompose the teaching problem along four
axes: (1) if teacher's test time intervention improve student predictions, (2)
when it is worth explaining a data point, (3) how the teacher should
personalize explanations to better teach the student, and (4) if teacher
explanations also improve students on future unexplained data. We first show
that teacher LLMs can indeed intervene on student reasoning to improve their
performance. Next, inspired by the Theory of Mind abilities of effective
teachers, we propose building two few-shot mental models of the student. The
first model defines an Intervention Function that simulates the utility of an
intervention, allowing the teacher to intervene when this utility is the
highest and improving student performance at lower budgets. The second model
enables the teacher to personalize explanations for a particular student and
outperform unpersonalized teachers. We also demonstrate that in multi-turn
interactions, teacher explanations generalize and learning from explained data
improves student performance on future unexplained data. Finally, we verify
that misaligned teachers can lower student performance to random chance by
intentionally misleading them.
|
http://arxiv.org/pdf/2306.09299
|
Swarnadeep Saha, Peter Hase, Mohit Bansal
|
cs.CL, cs.AI, cs.LG
|
NeurIPS 2023 (23 pages, 12 figures). Our code is available at
https://github.com/swarnaHub/ExplanationIntervention
| null |
cs.CL
|
20230615
|
20231114
|
[
{
"id": "2302.13971"
},
{
"id": "2007.12248"
},
{
"id": "2204.02311"
},
{
"id": "2302.08399"
},
{
"id": "2304.05489"
},
{
"id": "2304.11490"
},
{
"id": "2210.11416"
},
{
"id": "2110.14168"
},
{
"id": "2212.10071"
},
{
"id": "1702.08608"
},
{
"id": "2302.02083"
},
{
"id": "2301.12726"
},
{
"id": "2112.04359"
},
{
"id": "1503.02531"
},
{
"id": "2010.04119"
},
{
"id": "2303.12712"
},
{
"id": "2212.08410"
},
{
"id": "2303.17651"
},
{
"id": "2212.09721"
},
{
"id": "2305.11426"
},
{
"id": "2305.14763"
}
] |
2306.09328
| 28 |
[Screenshot of the WIZMAP interface visualizing DiffusionDB (3,632,172 data points), with controls for the contour, point, and grid-label layers and for switching between prompt and image embeddings.]
Fig. 6: WIZMAP enables users to compare multiple embeddings by visualization superposition. For instance, comparing the CLIP embeddings of 1.8 million Stable Diffusion prompts and 1.8 million generated images reveals key differences between two distributions.
After exploring the embeddings with WIZMAP, Bob is pleased with his findings, and he will apply his insights to improve the curation of his training data.
# 6 Future Work and Conclusion
WIZMAP integrates a novel quadtree-based embedding summarization technique that enables users to easily explore and interpret large embeddings across different levels of granularity. Our usage scenarios showcase our tool's potential for providing ML researchers and domain experts with a holistic view of their embeddings. Reflecting on our design and development of WIZMAP, we acknowledge its limitations and distill future research directions that could further assist users in interpreting and applying embeddings for downstream tasks.
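As a toy illustration of the quadtree-style summarization idea (a deliberate simplification, not WIZMAP's actual algorithm), one can bucket 2D-projected points into tiles at a chosen depth and label each tile with its most frequent words:

```python
from collections import Counter, defaultdict

def tile_summaries(points, texts, depth=3, top_k=3):
    """points: list of (x, y) in [0, 1)^2; texts: parallel list of documents."""
    n = 2 ** depth                      # n x n tiles at this zoom level
    buckets = defaultdict(list)
    for (x, y), text in zip(points, texts):
        buckets[(int(x * n), int(y * n))].append(text)
    summaries = {}
    for tile, docs in buckets.items():
        words = Counter(w.lower() for d in docs for w in d.split())
        summaries[tile] = [w for w, _ in words.most_common(top_k)]
    return summaries

# Coarser depths (fewer tiles) give zoomed-out labels; finer depths give zoomed-in ones.
print(tile_summaries([(0.1, 0.2), (0.12, 0.22), (0.8, 0.9)],
                     ["castle oil painting", "castle watercolor", "portrait photo"]))
```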
|
2306.09328#28
|
WizMap: Scalable Interactive Visualization for Exploring Large Machine Learning Embeddings
|
Machine learning models often learn latent embedding representations that
capture the domain semantics of their training data. These embedding
representations are valuable for interpreting trained models, building new
models, and analyzing new datasets. However, interpreting and using embeddings
can be challenging due to their opaqueness, high dimensionality, and the large
size of modern datasets. To tackle these challenges, we present WizMap, an
interactive visualization tool to help researchers and practitioners easily
explore large embeddings. With a novel multi-resolution embedding summarization
method and a familiar map-like interaction design, WizMap enables users to
navigate and interpret embedding spaces with ease. Leveraging modern web
technologies such as WebGL and Web Workers, WizMap scales to millions of
embedding points directly in users' web browsers and computational notebooks
without the need for dedicated backend servers. WizMap is open-source and
available at the following public demo link: https://poloclub.github.io/wizmap.
|
http://arxiv.org/pdf/2306.09328
|
Zijie J. Wang, Fred Hohman, Duen Horng Chau
|
cs.LG, cs.CL, cs.CV, cs.HC
|
8 pages, 8 figures, Accepted to ACL 2023. For a demo video, see
https://youtu.be/8fJG87QVceQ. For a live demo, see
https://poloclub.github.io/wizmap. Code is available at
https://github.com/poloclub/wizmap
| null |
cs.LG
|
20230615
|
20230615
|
[
{
"id": "1810.04805"
},
{
"id": "2210.14896"
},
{
"id": "2205.03963"
},
{
"id": "2203.05794"
},
{
"id": "1808.09074"
},
{
"id": "1802.03426"
}
] |
2306.09442
| 28 |
Reinforcement Learning from Human Feedback (RLHF): RLHF (Christiano et al., 2017; Casper et al., 2023) is a technique for training AI systems to scalably learn from human oversight. It involves 1) sampling outputs from a model, 2) having a human provide feedback on the outputs, 3) fitting a reward model using that feedback, and 4) finetuning the model using RL and the reward model. Our approach is a form of RLHF with a particularly involved and open-ended feedback step.
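For orientation only, the four steps can be written as a schematic loop; this is a hedged sketch in which every component (model sampling, human feedback, reward-model fitting, RL fine-tuning) is a hypothetical callable supplied by the caller, not a runnable recipe for any particular library.

```python
from typing import Callable, List

def rlhf_loop(
    model: Callable[[str], str],                  # 1) generates an output for a prompt
    prompts: List[str],
    human_feedback: Callable[[str, str], float],  # 2) human (or proxy) score for (prompt, output)
    fit_reward_model: Callable[..., Callable],    # 3) fits a reward model from feedback
    rl_finetune: Callable[..., Callable],         # 4) RL fine-tuning against the reward model
    n_rounds: int = 1,
) -> Callable[[str], str]:
    """Schematic of the four RLHF steps described above; all components are stand-ins."""
    for _ in range(n_rounds):
        samples = [model(p) for p in prompts]                              # step 1
        labels = [human_feedback(p, s) for p, s in zip(prompts, samples)]  # step 2
        reward_model = fit_reward_model(prompts, samples, labels)          # step 3
        model = rl_finetune(model, reward_model, prompts)                  # step 4
    return model
```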
|
2306.09442#28
|
Explore, Establish, Exploit: Red Teaming Language Models from Scratch
|
Deploying large language models (LMs) can pose hazards from harmful outputs
such as toxic or false text. Prior work has introduced automated tools that
elicit harmful outputs to identify these risks. While this is a valuable step
toward securing models, these approaches rely on a pre-existing way to
efficiently classify undesirable outputs. Using a pre-existing classifier does
not allow for red-teaming to be tailored to the target model. Furthermore, when
failures can be easily classified in advance, red-teaming has limited marginal
value because problems can be avoided by simply filtering training data and/or
model outputs. Here, we consider red-teaming "from scratch," in which the
adversary does not begin with a way to classify failures. Our framework
consists of three steps: 1) Exploring the model's range of behaviors in the
desired context; 2) Establishing a definition and measurement for undesired
behavior (e.g., a classifier trained to reflect human evaluations); and 3)
Exploiting the model's flaws using this measure to develop diverse adversarial
prompts. We use this approach to red-team GPT-3 to discover classes of inputs
that elicit false statements. In doing so, we construct the CommonClaim dataset
of 20,000 statements labeled by humans as common-knowledge-true, common
knowledge-false, or neither. We are making code and data available.
|
http://arxiv.org/pdf/2306.09442
|
Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, Dylan Hadfield-Menell
|
cs.CL, cs.AI, cs.LG
| null | null |
cs.CL
|
20230615
|
20231011
|
[
{
"id": "2205.12548"
},
{
"id": "2202.03286"
},
{
"id": "1712.06751"
},
{
"id": "2206.10812"
},
{
"id": "2308.04265"
},
{
"id": "1803.05355"
},
{
"id": "2307.00175"
},
{
"id": "2203.07281"
},
{
"id": "1909.03242"
},
{
"id": "2307.02483"
},
{
"id": "2302.03668"
},
{
"id": "2203.11147"
},
{
"id": "2010.15980"
},
{
"id": "2302.06503"
},
{
"id": "2304.05197"
},
{
"id": "2103.06332"
},
{
"id": "2005.00174"
},
{
"id": "2104.13733"
},
{
"id": "2209.07858"
},
{
"id": "2205.14334"
},
{
"id": "1908.07125"
},
{
"id": "2212.08073"
},
{
"id": "2101.07691"
},
{
"id": "2307.15043"
},
{
"id": "2303.17548"
},
{
"id": "2109.01653"
},
{
"id": "2302.09664"
},
{
"id": "2212.03827"
},
{
"id": "2104.07567"
},
{
"id": "1812.05271"
},
{
"id": "1804.07461"
},
{
"id": "2104.08678"
},
{
"id": "2206.13316"
},
{
"id": "2302.08582"
},
{
"id": "2307.15217"
},
{
"id": "2303.04381"
},
{
"id": "1907.11692"
},
{
"id": "2212.09251"
},
{
"id": "2303.15056"
},
{
"id": "2212.10539"
},
{
"id": "2110.06674"
},
{
"id": "2009.02252"
},
{
"id": "2109.07958"
},
{
"id": "2005.00661"
}
] |
2306.09093
| 29 |
Figure 8: An example showcasing MACAW-LLM's capability in visual reasoning. Despite only a small portion of "white" being visible outside the door, MACAW-LLM can infer the presence of "snow". Furthermore, it attempts to estimate the age of the woman.
Figure 9: An example showcasing MACAW-LLM's capability in video- and audio-grounded question answering. The video showcases a dog on a grassy field, remaining silent as indicated by the audio track.
7 LIMITATIONS
In this section, we summarize the limitations of MACAW-LLM as follows:
• Evaluation: We show some examples showcasing the multi-modal ability of our MACAW-LLM. However, we acknowledge that these efforts may not be fully adequate to accurately and comprehensively demonstrate model capabilities. Gudibande et al. (2023) highlight that instruction-tuned LLMs might not perform as well as the reported evaluation results suggest. Hence, we have concerns regarding the ability of our evaluation to provide an accurate reflection of the true capabilities of MACAW-LLM.
|
2306.09093#29
|
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration
|
Although instruction-tuned large language models (LLMs) have exhibited
remarkable capabilities across various NLP tasks, their effectiveness on other
data modalities beyond text has not been fully studied. In this work, we
propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual,
audio, and textual information. Macaw-LLM consists of three main components: a
modality module for encoding multi-modal data, a cognitive module for
harnessing pretrained LLMs, and an alignment module for harmonizing diverse
representations. Our novel alignment module seamlessly bridges multi-modal
features to textual features, simplifying the adaptation process from the
modality modules to the cognitive module. In addition, we construct a
large-scale multi-modal instruction dataset in terms of multi-turn dialogue,
including 69K image instances and 50K video instances. We have made our data,
code and model publicly available, which we hope can pave the way for future
research in multi-modal LLMs and expand the capabilities of LLMs to handle
diverse data modalities and address complex real-world scenarios.
|
http://arxiv.org/pdf/2306.09093
|
Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
|
cs.CL, cs.AI, cs.CV
|
Longyue Wang is the corresponding author. Our project page is at
https://github.com/lyuchenyang/Macaw-LLM
| null |
cs.CL
|
20230615
|
20230615
|
[] |
2306.09212
| 29 |
4.1 MAIN RESULTS
Table 1 shows the performance of all models under the five-shot setting. Since the zero-shot results are similar to the five-shot results, we provide them in Appendix J.1.
2 The evaluation was conducted in May 2023 for ChatGPT and in July 2023 for GPT4. 3 https://github.com/xverse-ai/XVERSE-13B
By model: From the first block of the table, we observe the following: (1) LLaMA2-70B is the best open-sourced multilingual model, achieving an average accuracy of 53.21% and coming close to the ChatGPT performance of 55.51%. However, there is still a significant gap between LLaMA2-70B and GPT4 (70.95%); (2) 7B pre-trained multilingual models (except LLaMA2-7B) achieve nearly random results of 25% (since their scores are lower than 30%, they are not displayed in the table); (3) for those multilingual models, fine-tuning using Chinese resources consistently improves their performance (BXLLaMA and ZHLLaMA vs. LLaMA, BLOOMZ vs. BLOOM).
|
2306.09212#29
|
CMMLU: Measuring massive multitask language understanding in Chinese
|
As the capabilities of large language models (LLMs) continue to advance,
evaluating their performance becomes increasingly crucial and challenging. This
paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese
benchmark that covers various subjects, including natural science, social
sciences, engineering, and humanities. We conduct a thorough evaluation of 18
advanced multilingual- and Chinese-oriented LLMs, assessing their performance
across different subjects and settings. The results reveal that most existing
LLMs struggle to achieve an average accuracy of 50%, even when provided with
in-context examples and chain-of-thought prompts, whereas the random baseline
stands at 25%. This highlights significant room for improvement in LLMs.
Additionally, we conduct extensive experiments to identify factors impacting
the models' performance and propose directions for enhancing LLMs. CMMLU fills
the gap in evaluating the knowledge and reasoning capabilities of large
language models within the Chinese context.
|
http://arxiv.org/pdf/2306.09212
|
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
|
cs.CL
| null | null |
cs.CL
|
20230615
|
20240117
|
[
{
"id": "2302.13971"
},
{
"id": "2304.12986"
},
{
"id": "2307.00360"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2307.15020"
},
{
"id": "2307.09288"
},
{
"id": "2305.15011"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "2304.08177"
},
{
"id": "2305.10263"
}
] |
2306.09299
| 29 |
$\ldots, \hat{y}_{pre}^{(i)}, \hat{y}_{post}^{(i)}\}_{i=1}^{d}$, where $x^{(i)}$ and $y^{(i)}$ are the input and output respectively; $e_S^{(i)}$ and $e_T^{(i)}$ denote the student and teacher explanations respectively; and $\hat{y}_{pre}^{(i)}$ and $\hat{y}_{post}^{(i)}$ refer to the student predictions with student explanation (pre-intervention) and teacher explanation (post-intervention) respectively. Using these demonstrations, the teacher builds a few-shot mental model of the student and predicts two quantities for a given test question: (1) Pre-intervention Expected Student Confidence ($\hat{c}_{pre}$): The teacher conditions on the pre-intervention demonstrations $\mathcal{D}_{pre}$ to simulate the student's confidence on the gold answer, had it been using its own (student) explanation, and (2) Post-intervention Expected Student Confidence ($\hat{c}_{post}$): The teacher conditions on the post-intervention demonstrations $\mathcal{D}_{post}$ to estimate what the student's confidence would be if it had used the teacher's explanation. The teacher computes these confidence estimates as if it were the student (refer to Fig. 1 for the prompts), essentially learning to simulate the student by conditioning on the appropriate demonstrations and then generating an answer to the
|
2306.09299#29
|
Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
|
A hallmark property of explainable AI models is the ability to teach other
agents, communicating knowledge of how to perform a task. While Large Language
Models perform complex reasoning by generating explanations for their
predictions, it is unclear whether they also make good teachers for weaker
agents. To address this, we consider a student-teacher framework between two
LLM agents and study if, when, and how the teacher should intervene with
natural language explanations to improve the student's performance. Since
communication is expensive, we define a budget such that the teacher only
communicates explanations for a fraction of the data, after which the student
should perform well on its own. We decompose the teaching problem along four
axes: (1) if teacher's test time intervention improve student predictions, (2)
when it is worth explaining a data point, (3) how the teacher should
personalize explanations to better teach the student, and (4) if teacher
explanations also improve students on future unexplained data. We first show
that teacher LLMs can indeed intervene on student reasoning to improve their
performance. Next, inspired by the Theory of Mind abilities of effective
teachers, we propose building two few-shot mental models of the student. The
first model defines an Intervention Function that simulates the utility of an
intervention, allowing the teacher to intervene when this utility is the
highest and improving student performance at lower budgets. The second model
enables the teacher to personalize explanations for a particular student and
outperform unpersonalized teachers. We also demonstrate that in multi-turn
interactions, teacher explanations generalize and learning from explained data
improves student performance on future unexplained data. Finally, we verify
that misaligned teachers can lower student performance to random chance by
intentionally misleading them.
|
http://arxiv.org/pdf/2306.09299
|
Swarnadeep Saha, Peter Hase, Mohit Bansal
|
cs.CL, cs.AI, cs.LG
|
NeurIPS 2023 (23 pages, 12 figures). Our code is available at
https://github.com/swarnaHub/ExplanationIntervention
| null |
cs.CL
|
20230615
|
20231114
|
[
{
"id": "2302.13971"
},
{
"id": "2007.12248"
},
{
"id": "2204.02311"
},
{
"id": "2302.08399"
},
{
"id": "2304.05489"
},
{
"id": "2304.11490"
},
{
"id": "2210.11416"
},
{
"id": "2110.14168"
},
{
"id": "2212.10071"
},
{
"id": "1702.08608"
},
{
"id": "2302.02083"
},
{
"id": "2301.12726"
},
{
"id": "2112.04359"
},
{
"id": "1503.02531"
},
{
"id": "2010.04119"
},
{
"id": "2303.12712"
},
{
"id": "2212.08410"
},
{
"id": "2303.17651"
},
{
"id": "2212.09721"
},
{
"id": "2305.11426"
},
{
"id": "2305.14763"
}
] |
2306.09328
| 29 |
• User evaluation. To investigate the usefulness of flexible transitioning across various levels of abstraction during embedding exploration, future researchers can use WIZMAP as a research instrument to conduct observational user studies with ML researchers and domain experts.
• Automated insights. Our tool provides automatic and multi-scale visual contexts to guide users in their exploration. While our quadtree-based approach is effective and scalable, it is sensitive to tile size selection. Researchers can explore more robust methods for embedding summarization and automated data insights, such as clustering-based approaches (Law et al., 2020).
• Enhanced comparison. WIZMAP helps users compare embedding groups through contour superposition. However, for local comparisons, other techniques such as juxtaposition and explicit encoding may be more effective (Gleicher, 2018). Future researchers can design visualization tools that integrate these techniques.
# 7 Broader Impact
|
2306.09328#29
|
WizMap: Scalable Interactive Visualization for Exploring Large Machine Learning Embeddings
|
Machine learning models often learn latent embedding representations that
capture the domain semantics of their training data. These embedding
representations are valuable for interpreting trained models, building new
models, and analyzing new datasets. However, interpreting and using embeddings
can be challenging due to their opaqueness, high dimensionality, and the large
size of modern datasets. To tackle these challenges, we present WizMap, an
interactive visualization tool to help researchers and practitioners easily
explore large embeddings. With a novel multi-resolution embedding summarization
method and a familiar map-like interaction design, WizMap enables users to
navigate and interpret embedding spaces with ease. Leveraging modern web
technologies such as WebGL and Web Workers, WizMap scales to millions of
embedding points directly in users' web browsers and computational notebooks
without the need for dedicated backend servers. WizMap is open-source and
available at the following public demo link: https://poloclub.github.io/wizmap.
|
http://arxiv.org/pdf/2306.09328
|
Zijie J. Wang, Fred Hohman, Duen Horng Chau
|
cs.LG, cs.CL, cs.CV, cs.HC
|
8 pages, 8 figures, Accepted to ACL 2023. For a demo video, see
https://youtu.be/8fJG87QVceQ. For a live demo, see
https://poloclub.github.io/wizmap. Code is available at
https://github.com/poloclub/wizmap
| null |
cs.LG
|
20230615
|
20230615
|
[
{
"id": "1810.04805"
},
{
"id": "2210.14896"
},
{
"id": "2205.03963"
},
{
"id": "2203.05794"
},
{
"id": "1808.09074"
},
{
"id": "1802.03426"
}
] |
2306.09442
| 29 |
Red-teaming with automated searches for natural language prompts: Finding LM inputs that elicit a target behavior is challenging for two reasons. First, embedding discrete tokens is not dif- ferentiable, and second, manual searches are expensive. Several methods have been proposed for efficiently automating prompt search absent the ability to propagate gradients. These include local search (Prasad et al., 2022), gradient-informed searches over token changes (Ebrahimi et al., 2017; Li et al., 2018; Ren et al., 2019; Shin et al., 2020; Jones et al., 2023; Zou et al., 2023), searches based on Langevin dynamics (Shi et al., 2022; Kumar et al., 2022), the Gumbel Softmax trick (Wallace et al., 2019; Song et al., 2020; Guo et al., 2021), rejection sampling at scale (Ganguli et al., 2022), projecting soft prompts onto hard prompts (Wen et al., 2023), and reinforcement learning (Deng et al., 2022; Perez et al., 2022a). Any approach could be used as part of our framework, but we use RL attacks because they are effective, black-box, and result in an easily-sampleable distribution of adversarial prompts. However, unlike any of these prior works, we demonstrate an approach that can not be trivially beaten by the simple baselines of filtering training data and/or model outputs.
|
2306.09442#29
|
Explore, Establish, Exploit: Red Teaming Language Models from Scratch
|
Deploying large language models (LMs) can pose hazards from harmful outputs
such as toxic or false text. Prior work has introduced automated tools that
elicit harmful outputs to identify these risks. While this is a valuable step
toward securing models, these approaches rely on a pre-existing way to
efficiently classify undesirable outputs. Using a pre-existing classifier does
not allow for red-teaming to be tailored to the target model. Furthermore, when
failures can be easily classified in advance, red-teaming has limited marginal
value because problems can be avoided by simply filtering training data and/or
model outputs. Here, we consider red-teaming "from scratch," in which the
adversary does not begin with a way to classify failures. Our framework
consists of three steps: 1) Exploring the model's range of behaviors in the
desired context; 2) Establishing a definition and measurement for undesired
behavior (e.g., a classifier trained to reflect human evaluations); and 3)
Exploiting the model's flaws using this measure to develop diverse adversarial
prompts. We use this approach to red-team GPT-3 to discover classes of inputs
that elicit false statements. In doing so, we construct the CommonClaim dataset
of 20,000 statements labeled by humans as common-knowledge-true, common
knowledge-false, or neither. We are making code and data available.
|
http://arxiv.org/pdf/2306.09442
|
Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, Dylan Hadfield-Menell
|
cs.CL, cs.AI, cs.LG
| null | null |
cs.CL
|
20230615
|
20231011
|
[
{
"id": "2205.12548"
},
{
"id": "2202.03286"
},
{
"id": "1712.06751"
},
{
"id": "2206.10812"
},
{
"id": "2308.04265"
},
{
"id": "1803.05355"
},
{
"id": "2307.00175"
},
{
"id": "2203.07281"
},
{
"id": "1909.03242"
},
{
"id": "2307.02483"
},
{
"id": "2302.03668"
},
{
"id": "2203.11147"
},
{
"id": "2010.15980"
},
{
"id": "2302.06503"
},
{
"id": "2304.05197"
},
{
"id": "2103.06332"
},
{
"id": "2005.00174"
},
{
"id": "2104.13733"
},
{
"id": "2209.07858"
},
{
"id": "2205.14334"
},
{
"id": "1908.07125"
},
{
"id": "2212.08073"
},
{
"id": "2101.07691"
},
{
"id": "2307.15043"
},
{
"id": "2303.17548"
},
{
"id": "2109.01653"
},
{
"id": "2302.09664"
},
{
"id": "2212.03827"
},
{
"id": "2104.07567"
},
{
"id": "1812.05271"
},
{
"id": "1804.07461"
},
{
"id": "2104.08678"
},
{
"id": "2206.13316"
},
{
"id": "2302.08582"
},
{
"id": "2307.15217"
},
{
"id": "2303.04381"
},
{
"id": "1907.11692"
},
{
"id": "2212.09251"
},
{
"id": "2303.15056"
},
{
"id": "2212.10539"
},
{
"id": "2110.06674"
},
{
"id": "2009.02252"
},
{
"id": "2109.07958"
},
{
"id": "2005.00661"
}
] |
2306.09539
| 29 |
benchmark for measuring progress on long-range language modeling performance. We report the 'test' split evaluation performance.
arXiv dataset is a corpus containing scientific and technical articles on the subject of Mathematics [42]. The arXiv dataset contains LaTeX source code as well as items such as theorems, citations, and definitions that are referenced and discussed over long ranges of text. Using the same vocabulary as in [42] and [21] for a fair comparison, many special characters are broken up into small subwords. As a result, the number of tokens per paper in the arXiv dataset is approximately equal to the number of tokens per book in PG19. We report perplexity on the 'test' split.
GitHub dataset [42] is the largest of the three datasets and was assembled by extracting GitHub code repositories with open-source licences. Files were filtered to only contain the following programming languages: C, C++, Java, Python, Go and Typescript. While code files are relatively small, there are many import dependencies between each file. By traversing the directory tree and concatenating all code files along the path, a single document that preserves a repository's structure and dependencies is created. We report performance on the 'validation' split.
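The concatenation procedure described for GitHub can be sketched as follows (an illustration of the idea, not the dataset authors' script); the extension list is an assumption based on the languages named above.

```python
import os

# Assumed mapping of the named languages to file extensions (illustrative only).
ALLOWED_EXTS = {".c", ".cc", ".cpp", ".h", ".java", ".py", ".go", ".ts"}

def concatenate_repo(repo_root: str) -> str:
    """Traverse a repository and concatenate its code files into a single document."""
    parts = []
    for dirpath, _, filenames in sorted(os.walk(repo_root)):
        for name in sorted(filenames):
            if os.path.splitext(name)[1] in ALLOWED_EXTS:
                path = os.path.join(dirpath, name)
                with open(path, encoding="utf-8", errors="ignore") as f:
                    parts.append(f"# File: {os.path.relpath(path, repo_root)}\n" + f.read())
    return "\n\n".join(parts)
```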
|
2306.09539#29
|
Block-State Transformers
|
State space models (SSMs) have shown impressive results on tasks that require
modeling long-range dependencies and efficiently scale to long sequences owing
to their subquadratic runtime complexity. Originally designed for continuous
signals, SSMs have shown superior performance on a plethora of tasks, in vision
and audio; however, SSMs still lag Transformer performance in Language Modeling
tasks. In this work, we propose a hybrid layer named Block-State Transformer
(BST), that internally combines an SSM sublayer for long-range
contextualization, and a Block Transformer sublayer for short-term
representation of sequences. We study three different, and completely
parallelizable, variants that integrate SSMs and block-wise attention. We show
that our model outperforms similar Transformer-based architectures on language
modeling perplexity and generalizes to longer sequences. In addition, the
Block-State Transformer demonstrates more than tenfold increase in speed at the
layer level compared to the Block-Recurrent Transformer when model
parallelization is employed.
|
http://arxiv.org/pdf/2306.09539
|
Mahan Fathi, Jonathan Pilault, Orhan Firat, Christopher Pal, Pierre-Luc Bacon, Ross Goroshin
|
cs.CL, cs.LG
|
NeurIPS'23 - Thirty-seventh Conference on Neural Information
Processing Systems
| null |
cs.CL
|
20230615
|
20231030
|
[
{
"id": "1901.02860"
}
] |
2306.09093
| 30 |
• Single-Turn Dialogue: While our training data mainly consists of "dialog-like" instructions, it's important to note that these instructions are currently limited to single-turn interactions. It is crucial to acknowledge that MACAW-LLM is not currently optimized for handling multi-turn dialogues and may not effectively leverage long-range context.
• Hallucination, Toxicity and Fairness: According to empirical evidence presented by Wu et al. (2023b), instruction-tuned LLMs may encounter issues such as hallucination, toxicity, and fairness. However, it is important to note that we do not evaluate our models, MACAW-LLM, in relation to these aspects due to the unavailability of suitable evaluation suites.
We acknowledge these limitations and recognize the need for addressing them in future work.
8 CONCLUSION AND FUTURE WORK
|
2306.09093#30
|
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration
|
Although instruction-tuned large language models (LLMs) have exhibited
remarkable capabilities across various NLP tasks, their effectiveness on other
data modalities beyond text has not been fully studied. In this work, we
propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual,
audio, and textual information. Macaw-LLM consists of three main components: a
modality module for encoding multi-modal data, a cognitive module for
harnessing pretrained LLMs, and an alignment module for harmonizing diverse
representations. Our novel alignment module seamlessly bridges multi-modal
features to textual features, simplifying the adaptation process from the
modality modules to the cognitive module. In addition, we construct a
large-scale multi-modal instruction dataset in terms of multi-turn dialogue,
including 69K image instances and 50K video instances. We have made our data,
code and model publicly available, which we hope can pave the way for future
research in multi-modal LLMs and expand the capabilities of LLMs to handle
diverse data modalities and address complex real-world scenarios.
|
http://arxiv.org/pdf/2306.09093
|
Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
|
cs.CL, cs.AI, cs.CV
|
Longyue Wang is the corresponding author. Our project page is at
https://github.com/lyuchenyang/Macaw-LLM
| null |
cs.CL
|
20230615
|
20230615
|
[] |
2306.09212
| 30 |
From the second block, we find that: (1) Among the Chinese LLMs, Baichuan2-13B demonstrates the best overall performance (beats ChatGPT) with only 13B parameters. We attribute it to the high quality of the training data; (2) Several Chinese LLMs achieve competitive results compared to LLaMA2-70B with less than 20B parameters. This demonstrates that when focusing on a single language, high-quality monolingual (or bilingual) training data can empower small models (7B or 13B) with good capability compared to multilingual training data. An overall observation is that models from the same family always improve as the model size increases.
By subject: From the perspective of subject type, all models exhibit relatively high performance in humanities, social sciences, and other subjects, and medium performance in China-specific subjects, while low performance in STEM subjects. We attribute this to the nature of each subject type, and the capability of LLMs: (a) humanities, social sciences assess more on memorization which is relatively easy for LLMs; (b) China-specific topics encompass information that is either absent from the training data or inconsistent in multilingual training data; (c) STEM topics usually require complex reasoning, which has been proven to be difficult for existing LLMs. As expected, Chinese LLMs exhibit smaller gaps between China-specific subjects and other categories.
|
2306.09212#30
|
CMMLU: Measuring massive multitask language understanding in Chinese
|
As the capabilities of large language models (LLMs) continue to advance,
evaluating their performance becomes increasingly crucial and challenging. This
paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese
benchmark that covers various subjects, including natural science, social
sciences, engineering, and humanities. We conduct a thorough evaluation of 18
advanced multilingual- and Chinese-oriented LLMs, assessing their performance
across different subjects and settings. The results reveal that most existing
LLMs struggle to achieve an average accuracy of 50%, even when provided with
in-context examples and chain-of-thought prompts, whereas the random baseline
stands at 25%. This highlights significant room for improvement in LLMs.
Additionally, we conduct extensive experiments to identify factors impacting
the models' performance and propose directions for enhancing LLMs. CMMLU fills
the gap in evaluating the knowledge and reasoning capabilities of large
language models within the Chinese context.
|
http://arxiv.org/pdf/2306.09212
|
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
|
cs.CL
| null | null |
cs.CL
|
20230615
|
20240117
|
[
{
"id": "2302.13971"
},
{
"id": "2304.12986"
},
{
"id": "2307.00360"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2307.15020"
},
{
"id": "2307.09288"
},
{
"id": "2305.15011"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "2304.08177"
},
{
"id": "2305.10263"
}
] |
2306.09299
| 30 |
the student (refer to Fig. 1 for the prompts), essentially learning to simulate the student by conditioning on the appropriate demonstrations and then generating an answer to the question. Then the Expected Utility Û = (ĉ_post - ĉ_pre) is given by the difference between the two confidence measures. The teacher finally constructs a rank ordering of the test data points based on this expected utility. This utility-based ordering encourages the teacher to pick points where it thinks the student will answer correctly with intervention but incorrectly without.
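As a rough sketch of this ranking procedure: the helper `simulate_confidence` below stands in for the teacher querying its few-shot mental model of the student and returning the simulated student's confidence in the gold answer, with or without the teacher's explanation. Both function names are illustrative assumptions rather than the paper's code.

```python
# Minimal sketch of utility-based intervention ranking, assuming a
# `simulate_confidence(question, with_explanation)` callable that returns the
# simulated student's confidence in the gold answer.

def expected_utility(question, simulate_confidence) -> float:
    c_pre = simulate_confidence(question, with_explanation=False)   # estimated c_pre
    c_post = simulate_confidence(question, with_explanation=True)   # estimated c_post
    return c_post - c_pre                                           # expected utility

def rank_for_intervention(questions, simulate_confidence, budget: float):
    """Return the top `budget` fraction of questions, ordered by expected utility."""
    ranked = sorted(
        questions,
        key=lambda q: expected_utility(q, simulate_confidence),
        reverse=True,
    )
    return ranked[: int(len(ranked) * budget)]
```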
|
2306.09299#30
|
Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
|
A hallmark property of explainable AI models is the ability to teach other
agents, communicating knowledge of how to perform a task. While Large Language
Models perform complex reasoning by generating explanations for their
predictions, it is unclear whether they also make good teachers for weaker
agents. To address this, we consider a student-teacher framework between two
LLM agents and study if, when, and how the teacher should intervene with
natural language explanations to improve the student's performance. Since
communication is expensive, we define a budget such that the teacher only
communicates explanations for a fraction of the data, after which the student
should perform well on its own. We decompose the teaching problem along four
axes: (1) if teacher's test time intervention improve student predictions, (2)
when it is worth explaining a data point, (3) how the teacher should
personalize explanations to better teach the student, and (4) if teacher
explanations also improve students on future unexplained data. We first show
that teacher LLMs can indeed intervene on student reasoning to improve their
performance. Next, inspired by the Theory of Mind abilities of effective
teachers, we propose building two few-shot mental models of the student. The
first model defines an Intervention Function that simulates the utility of an
intervention, allowing the teacher to intervene when this utility is the
highest and improving student performance at lower budgets. The second model
enables the teacher to personalize explanations for a particular student and
outperform unpersonalized teachers. We also demonstrate that in multi-turn
interactions, teacher explanations generalize and learning from explained data
improves student performance on future unexplained data. Finally, we verify
that misaligned teachers can lower student performance to random chance by
intentionally misleading them.
|
http://arxiv.org/pdf/2306.09299
|
Swarnadeep Saha, Peter Hase, Mohit Bansal
|
cs.CL, cs.AI, cs.LG
|
NeurIPS 2023 (23 pages, 12 figures). Our code is available at
https://github.com/swarnaHub/ExplanationIntervention
| null |
cs.CL
|
20230615
|
20231114
|
[
{
"id": "2302.13971"
},
{
"id": "2007.12248"
},
{
"id": "2204.02311"
},
{
"id": "2302.08399"
},
{
"id": "2304.05489"
},
{
"id": "2304.11490"
},
{
"id": "2210.11416"
},
{
"id": "2110.14168"
},
{
"id": "2212.10071"
},
{
"id": "1702.08608"
},
{
"id": "2302.02083"
},
{
"id": "2301.12726"
},
{
"id": "2112.04359"
},
{
"id": "1503.02531"
},
{
"id": "2010.04119"
},
{
"id": "2303.12712"
},
{
"id": "2212.08410"
},
{
"id": "2303.17651"
},
{
"id": "2212.09721"
},
{
"id": "2305.11426"
},
{
"id": "2305.14763"
}
] |
2306.09328
| 30 |
# 7 Broader Impact
We designed and developed WIZMAP with good intentions: to help ML researchers and domain experts easily explore and interpret large embeddings. However, bad actors could exploit insights gained from using WIZMAP for malevolent purposes. For example, research has shown that ML embeddings contain societal biases (Bolukbasi et al., 2016). Therefore, bad actors could manipulate and sabotage ML predictions by injecting inputs whose embeddings are known to associate with gender and racial biases. The potential harms of biased embeddings warrant further study.
# Acknowledgements
We thank our anonymous reviewers for their insightful comments. This work was supported in part by a J.P. Morgan PhD Fellowship, Apple Scholars in AI/ML PhD fellowship, DARPA GARD, gifts from Cisco, Bosch, and NVIDIA. Use, duplication, or disclosure is subject to the restrictions as stated in Agreement number HR00112030001 between the Government and the Performer.
# References
Dustin L. Arendt, Nasheen Nur, Zhuanyi Huang, Gabriel Fair, and Wenwen Dou. 2020. Parallel embeddings: A visualization technique for contrasting learned representations. In ACM IUI.
|
2306.09328#30
|
WizMap: Scalable Interactive Visualization for Exploring Large Machine Learning Embeddings
|
Machine learning models often learn latent embedding representations that
capture the domain semantics of their training data. These embedding
representations are valuable for interpreting trained models, building new
models, and analyzing new datasets. However, interpreting and using embeddings
can be challenging due to their opaqueness, high dimensionality, and the large
size of modern datasets. To tackle these challenges, we present WizMap, an
interactive visualization tool to help researchers and practitioners easily
explore large embeddings. With a novel multi-resolution embedding summarization
method and a familiar map-like interaction design, WizMap enables users to
navigate and interpret embedding spaces with ease. Leveraging modern web
technologies such as WebGL and Web Workers, WizMap scales to millions of
embedding points directly in users' web browsers and computational notebooks
without the need for dedicated backend servers. WizMap is open-source and
available at the following public demo link: https://poloclub.github.io/wizmap.
|
http://arxiv.org/pdf/2306.09328
|
Zijie J. Wang, Fred Hohman, Duen Horng Chau
|
cs.LG, cs.CL, cs.CV, cs.HC
|
8 pages, 8 figures, Accepted to ACL 2023. For a demo video, see
https://youtu.be/8fJG87QVceQ. For a live demo, see
https://poloclub.github.io/wizmap. Code is available at
https://github.com/poloclub/wizmap
| null |
cs.LG
|
20230615
|
20230615
|
[
{
"id": "1810.04805"
},
{
"id": "2210.14896"
},
{
"id": "2205.03963"
},
{
"id": "2203.05794"
},
{
"id": "1808.09074"
},
{
"id": "1802.03426"
}
] |
2306.09442
| 30 |
Studying toxicity and untruthfulness in large language models: For evaluating toxicity, prior works have introduced data (Adams et al., 2017) and probed for toxic speech in LMs (Ousidhoum et al., 2021). For evaluating untruthfulness, there exist works introducing datasets (Augenstein et al., 2019; Lin et al., 2021; Onoe et al., 2021; Thorne et al., 2018; Petroni et al., 2020), studying probing (Burns et al., 2022), studying hallucination (Maynez et al., 2020; Krishna et al., 2021; Ji et al., 2023), and exploring measures for model uncertainty (Kuhn et al., 2023). Several approaches have also been introduced for reducing untruthfulness, including having models express uncertainty (Lin et al., 2022) and having models support statements with evidence (Shuster et al., 2021; Menick et al., 2022). However, work on untruthfulness in LMs is complicated significantly by how there are subtle differences between different notions of truth (Levinstein & Herrmann, 2023). For example, our common-knowledge approach contrasts with how other works have used a ground-truth one. Finally, concerning both
|
2306.09442#30
|
Explore, Establish, Exploit: Red Teaming Language Models from Scratch
|
Deploying large language models (LMs) can pose hazards from harmful outputs
such as toxic or false text. Prior work has introduced automated tools that
elicit harmful outputs to identify these risks. While this is a valuable step
toward securing models, these approaches rely on a pre-existing way to
efficiently classify undesirable outputs. Using a pre-existing classifier does
not allow for red-teaming to be tailored to the target model. Furthermore, when
failures can be easily classified in advance, red-teaming has limited marginal
value because problems can be avoided by simply filtering training data and/or
model outputs. Here, we consider red-teaming "from scratch," in which the
adversary does not begin with a way to classify failures. Our framework
consists of three steps: 1) Exploring the model's range of behaviors in the
desired context; 2) Establishing a definition and measurement for undesired
behavior (e.g., a classifier trained to reflect human evaluations); and 3)
Exploiting the model's flaws using this measure to develop diverse adversarial
prompts. We use this approach to red-team GPT-3 to discover classes of inputs
that elicit false statements. In doing so, we construct the CommonClaim dataset
of 20,000 statements labeled by humans as common-knowledge-true, common
knowledge-false, or neither. We are making code and data available.
|
http://arxiv.org/pdf/2306.09442
|
Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, Dylan Hadfield-Menell
|
cs.CL, cs.AI, cs.LG
| null | null |
cs.CL
|
20230615
|
20231011
|
[
{
"id": "2205.12548"
},
{
"id": "2202.03286"
},
{
"id": "1712.06751"
},
{
"id": "2206.10812"
},
{
"id": "2308.04265"
},
{
"id": "1803.05355"
},
{
"id": "2307.00175"
},
{
"id": "2203.07281"
},
{
"id": "1909.03242"
},
{
"id": "2307.02483"
},
{
"id": "2302.03668"
},
{
"id": "2203.11147"
},
{
"id": "2010.15980"
},
{
"id": "2302.06503"
},
{
"id": "2304.05197"
},
{
"id": "2103.06332"
},
{
"id": "2005.00174"
},
{
"id": "2104.13733"
},
{
"id": "2209.07858"
},
{
"id": "2205.14334"
},
{
"id": "1908.07125"
},
{
"id": "2212.08073"
},
{
"id": "2101.07691"
},
{
"id": "2307.15043"
},
{
"id": "2303.17548"
},
{
"id": "2109.01653"
},
{
"id": "2302.09664"
},
{
"id": "2212.03827"
},
{
"id": "2104.07567"
},
{
"id": "1812.05271"
},
{
"id": "1804.07461"
},
{
"id": "2104.08678"
},
{
"id": "2206.13316"
},
{
"id": "2302.08582"
},
{
"id": "2307.15217"
},
{
"id": "2303.04381"
},
{
"id": "1907.11692"
},
{
"id": "2212.09251"
},
{
"id": "2303.15056"
},
{
"id": "2212.10539"
},
{
"id": "2110.06674"
},
{
"id": "2009.02252"
},
{
"id": "2109.07958"
},
{
"id": "2005.00661"
}
] |
2306.09539
| 30 |
For a fair comparison with the baselines, we keep the vocabularies consistent as used by [21] and [28]. Specifically, we used a pretrained T5 vocab with 32k tokens for PG19 [33] and LaMDA vocab with 32k tokens [39] for both arXiv and GitHub datasets. Due to the long training times and large number of experiments, we only provide error bars for the PG19 ~200M parameter models by running our models with three different random seeds. BRECT:FIXED:SKIP error bars are from [21].
# 4.1 Comparing our Baselines and Models
We experiment with three different types of Block-State Transformer (BST) models: BST-SH, BST-MH and BST-MF, as described in Section 3.3. Our models do not use global learned positional embeddings but encode positional awareness with an SSM at the first layer, right after the word embedding layer. We organize models into two groups: (i) fixed window size models have either a 512 or a 2048 token training window size; and (ii) fixed parameter count models have either ~200M or ~400M total parameters. We run experiments with two types of SSMs:
|
2306.09539#30
|
Block-State Transformers
|
State space models (SSMs) have shown impressive results on tasks that require
modeling long-range dependencies and efficiently scale to long sequences owing
to their subquadratic runtime complexity. Originally designed for continuous
signals, SSMs have shown superior performance on a plethora of tasks, in vision
and audio; however, SSMs still lag Transformer performance in Language Modeling
tasks. In this work, we propose a hybrid layer named Block-State Transformer
(BST), that internally combines an SSM sublayer for long-range
contextualization, and a Block Transformer sublayer for short-term
representation of sequences. We study three different, and completely
parallelizable, variants that integrate SSMs and block-wise attention. We show
that our model outperforms similar Transformer-based architectures on language
modeling perplexity and generalizes to longer sequences. In addition, the
Block-State Transformer demonstrates more than tenfold increase in speed at the
layer level compared to the Block-Recurrent Transformer when model
parallelization is employed.
|
http://arxiv.org/pdf/2306.09539
|
Mahan Fathi, Jonathan Pilault, Orhan Firat, Christopher Pal, Pierre-Luc Bacon, Ross Goroshin
|
cs.CL, cs.LG
|
NeurIPS'23 - Thirty-seventh Conference on Neural Information
Processing Systems
| null |
cs.CL
|
20230615
|
20231030
|
[
{
"id": "1901.02860"
}
] |
2306.09093
| 31 |
We acknowledge these limitations and recognize the need for addressing them in future work.
8 CONCLUSION AND FUTURE WORK
In this paper, we present MACAW-LLM, a multi-modal instruction-tuned LLM that accommodates four distinct modalities: image, video, audio, and text. In addition to the standard modality module and cognitive module, we propose a novel approach to align representations from different modality encoders into a shared space. Unlike previous methods, our approach combines representation alignment and instruction tuning into a single step, mitigating potential error propagation during multi-step tuning. Furthermore, we curate MACAW-LLM instruction dataset, a large-scale dataset of multi-modal instructions using GPT-3.5-TURBO. We demonstrate examples showcasing the multi-modal understanding ability of MACAW-LLM.
|
2306.09093#31
|
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration
|
Although instruction-tuned large language models (LLMs) have exhibited
remarkable capabilities across various NLP tasks, their effectiveness on other
data modalities beyond text has not been fully studied. In this work, we
propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual,
audio, and textual information. Macaw-LLM consists of three main components: a
modality module for encoding multi-modal data, a cognitive module for
harnessing pretrained LLMs, and an alignment module for harmonizing diverse
representations. Our novel alignment module seamlessly bridges multi-modal
features to textual features, simplifying the adaptation process from the
modality modules to the cognitive module. In addition, we construct a
large-scale multi-modal instruction dataset in terms of multi-turn dialogue,
including 69K image instances and 50K video instances. We have made our data,
code and model publicly available, which we hope can pave the way for future
research in multi-modal LLMs and expand the capabilities of LLMs to handle
diverse data modalities and address complex real-world scenarios.
|
http://arxiv.org/pdf/2306.09093
|
Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
|
cs.CL, cs.AI, cs.CV
|
Longyue Wang is the corresponding author. Our project page is at
https://github.com/lyuchenyang/Macaw-LLM
| null |
cs.CL
|
20230615
|
20230615
|
[] |
2306.09212
| 31 |
We compare the performance of the best-performing Chinese model, Baichuan2-13B, with the best-performing multilingual model, GPT4, for each subject. We categorize the subjects and present the results in Figure 3. The numerical results can be found in Appendix J.2.
From the figure, we note that the model's performance appears to be unbalanced, excelling in certain subjects but struggling in others. Specifically, ancient Chinese and college actuarial science are the most challenging subjects for both Baichuan2 and GPT4, yielding slightly better results than random, while the legal and moral basis is one of the easiest subjects for both models. When comparing the two models, we find that for most subjects, GPT4 outperforms Baichuan2 by a significant margin, while Baichuan2 surpasses GPT4 in 8 subjects, 6 of these are China-specific subjects, and the other 2 (arts and philosophy) contain a large amount of Chinese elements.4 These findings suggest that including region- and culture-specific data
|
2306.09212#31
|
CMMLU: Measuring massive multitask language understanding in Chinese
|
As the capabilities of large language models (LLMs) continue to advance,
evaluating their performance becomes increasingly crucial and challenging. This
paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese
benchmark that covers various subjects, including natural science, social
sciences, engineering, and humanities. We conduct a thorough evaluation of 18
advanced multilingual- and Chinese-oriented LLMs, assessing their performance
across different subjects and settings. The results reveal that most existing
LLMs struggle to achieve an average accuracy of 50%, even when provided with
in-context examples and chain-of-thought prompts, whereas the random baseline
stands at 25%. This highlights significant room for improvement in LLMs.
Additionally, we conduct extensive experiments to identify factors impacting
the models' performance and propose directions for enhancing LLMs. CMMLU fills
the gap in evaluating the knowledge and reasoning capabilities of large
language models within the Chinese context.
|
http://arxiv.org/pdf/2306.09212
|
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
|
cs.CL
| null | null |
cs.CL
|
20230615
|
20240117
|
[
{
"id": "2302.13971"
},
{
"id": "2304.12986"
},
{
"id": "2307.00360"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2307.15020"
},
{
"id": "2307.09288"
},
{
"id": "2305.15011"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "2304.08177"
},
{
"id": "2305.10263"
}
] |
2306.09299
| 31 |
Other Intervention Functions. To analyze how well our proposed Intervention Function performs, we compare it with the following Intervention Functions. Our first baseline is the Random Intervention Function from RQ1. Next, we compare with an Intervention Function that ranks the samples based on the Teacher Confidence: when the teacher is most confident about a question, it intervenes. Our next two baselines are ablations of Expected Utility: (a) Pre-Intervention Expected Student Confidence: we rank samples based on the expected student confidence with no intervention (i.e., the lower this confidence, the higher the likelihood of useful interventions), and (b) Post-Intervention Expected Student Confidence: we rank samples based on the expected student confidence with intervention (i.e., the higher this confidence, the higher the likelihood of useful interventions). Finally, as upper bounds of Intervention Functions, we assume that the student communicates its true confidence values to the teacher (which, for post-intervention, incurs a two-way communication cost of the teacher sending its explanation, followed by receiving the student's confidence). Using the true confidence measures, we compute True Utility.
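The scoring rules behind these Intervention Functions can be summarized with the illustrative sketch below; the dictionary field names (`conf_pre`, `conf_post`, `teacher_conf`) are assumptions for this sketch, and True Utility corresponds to computing the same difference from the student's communicated confidences instead of the teacher's estimates.

```python
import random

# Illustrative scoring rules for the compared Intervention Functions. Each
# `item` is assumed to carry the (expected) student confidence without and
# with the teacher's explanation ("conf_pre", "conf_post") and the teacher's
# own confidence ("teacher_conf"). Higher scores are intervened on first.

_rng = random.Random(0)

def score_random(item):
    return _rng.random()                         # Random Intervention Function

def score_teacher_confidence(item):
    return item["teacher_conf"]                  # rank by Teacher Confidence

def score_pre_confidence(item):
    return -item["conf_pre"]                     # least confident without help first

def score_post_confidence(item):
    return item["conf_post"]                     # most confident with help first

def score_expected_utility(item):
    return item["conf_post"] - item["conf_pre"]  # expected utility of intervening
```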
Main Results: How does Expected Utility compare to True Utility? Figure 3 compares different Intervention Functions with Flan-T5-XL as the teacher and Flan-T5-Large as the student on StrategyQA. Across different methods, we analyze accuracy obtained at lower communication costs (e.g.,
|
2306.09299#31
|
Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
|
A hallmark property of explainable AI models is the ability to teach other
agents, communicating knowledge of how to perform a task. While Large Language
Models perform complex reasoning by generating explanations for their
predictions, it is unclear whether they also make good teachers for weaker
agents. To address this, we consider a student-teacher framework between two
LLM agents and study if, when, and how the teacher should intervene with
natural language explanations to improve the student's performance. Since
communication is expensive, we define a budget such that the teacher only
communicates explanations for a fraction of the data, after which the student
should perform well on its own. We decompose the teaching problem along four
axes: (1) if teacher's test time intervention improve student predictions, (2)
when it is worth explaining a data point, (3) how the teacher should
personalize explanations to better teach the student, and (4) if teacher
explanations also improve students on future unexplained data. We first show
that teacher LLMs can indeed intervene on student reasoning to improve their
performance. Next, inspired by the Theory of Mind abilities of effective
teachers, we propose building two few-shot mental models of the student. The
first model defines an Intervention Function that simulates the utility of an
intervention, allowing the teacher to intervene when this utility is the
highest and improving student performance at lower budgets. The second model
enables the teacher to personalize explanations for a particular student and
outperform unpersonalized teachers. We also demonstrate that in multi-turn
interactions, teacher explanations generalize and learning from explained data
improves student performance on future unexplained data. Finally, we verify
that misaligned teachers can lower student performance to random chance by
intentionally misleading them.
|
http://arxiv.org/pdf/2306.09299
|
Swarnadeep Saha, Peter Hase, Mohit Bansal
|
cs.CL, cs.AI, cs.LG
|
NeurIPS 2023 (23 pages, 12 figures). Our code is available at
https://github.com/swarnaHub/ExplanationIntervention
| null |
cs.CL
|
20230615
|
20231114
|
[
{
"id": "2302.13971"
},
{
"id": "2007.12248"
},
{
"id": "2204.02311"
},
{
"id": "2302.08399"
},
{
"id": "2304.05489"
},
{
"id": "2304.11490"
},
{
"id": "2210.11416"
},
{
"id": "2110.14168"
},
{
"id": "2212.10071"
},
{
"id": "1702.08608"
},
{
"id": "2302.02083"
},
{
"id": "2301.12726"
},
{
"id": "2112.04359"
},
{
"id": "1503.02531"
},
{
"id": "2010.04119"
},
{
"id": "2303.12712"
},
{
"id": "2212.08410"
},
{
"id": "2303.17651"
},
{
"id": "2212.09721"
},
{
"id": "2305.11426"
},
{
"id": "2305.14763"
}
] |
2306.09328
| 31 |
Hedi Ben-younes, Remi Cadene, Nicolas Thome, and Matthieu Cord. 2019. BLOCK: Bilinear Superdiagonal Fusion for Visual Question Answering and Visual Relationship Detection. AAAI, 33.
Angie Boggust, Brandon Carter, and Arvind Satyanarayan. 2022. Embedding Comparator: Visualizing Differences in Global Structure and Local Neighborhoods via Small Multiples. In ACM IUI.
Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems, volume 29.
Ali Borji. 2022. Generated Faces in the Wild: Quantitative Comparison of Stable Diffusion, Midjourney and DALL-E 2. arXiv 2210.00586.
Michael Bostock, Vadim Ogievetsky, and Jeffrey Heer. 2011. D3 Data-Driven Documents. IEEE TVCG, 17.
Shan Carter, Zan Armstrong, Ludwig Schubert, Ian Johnson, and Chris Olah. 2019. Activation Atlas. Distill, 4.
Andy Coenen and Adam Pearce. 2019. Understanding UMAP.
|
2306.09328#31
|
WizMap: Scalable Interactive Visualization for Exploring Large Machine Learning Embeddings
|
Machine learning models often learn latent embedding representations that
capture the domain semantics of their training data. These embedding
representations are valuable for interpreting trained models, building new
models, and analyzing new datasets. However, interpreting and using embeddings
can be challenging due to their opaqueness, high dimensionality, and the large
size of modern datasets. To tackle these challenges, we present WizMap, an
interactive visualization tool to help researchers and practitioners easily
explore large embeddings. With a novel multi-resolution embedding summarization
method and a familiar map-like interaction design, WizMap enables users to
navigate and interpret embedding spaces with ease. Leveraging modern web
technologies such as WebGL and Web Workers, WizMap scales to millions of
embedding points directly in users' web browsers and computational notebooks
without the need for dedicated backend servers. WizMap is open-source and
available at the following public demo link: https://poloclub.github.io/wizmap.
|
http://arxiv.org/pdf/2306.09328
|
Zijie J. Wang, Fred Hohman, Duen Horng Chau
|
cs.LG, cs.CL, cs.CV, cs.HC
|
8 pages, 8 figures, Accepted to ACL 2023. For a demo video, see
https://youtu.be/8fJG87QVceQ. For a live demo, see
https://poloclub.github.io/wizmap. Code is available at
https://github.com/poloclub/wizmap
| null |
cs.LG
|
20230615
|
20230615
|
[
{
"id": "1810.04805"
},
{
"id": "2210.14896"
},
{
"id": "2205.03963"
},
{
"id": "2203.05794"
},
{
"id": "1808.09074"
},
{
"id": "1802.03426"
}
] |
2306.09442
| 31 |
& Herrmann, 2023). For example, our common-knowledge approach contrasts with how other works have used a ground-truth one. Finally, concerning both toxicity and untruthfulness, Bai et al. (2022) demonstrate how language models can be prompted to critique the outputs of other models for harmful outputs. We add to prior works by testing our pipeline for eliciting toxic and false outputs, including for the study of model internals. To the best of our knowledge, this is the first work to synthesize inputs that elicit false completions from LMs at scale. One area of current interest is studying whether the truthfulness of statements can be identified from internal activations. However, much of this work is limited by (1) excluding statements from probing data that are neither true nor false and (2) a lack of an ability to distinguish when models output false things because of "false belief" versus "deceptive behavior". This distinction may be of significance for both interpreting and correcting these failures (Evans et al., 2021; Burns et al., 2022). Because it contains "neither"-type statements and common-knowledge labels, CommonClaim may help with both of these
|
2306.09442#31
|
Explore, Establish, Exploit: Red Teaming Language Models from Scratch
|
Deploying large language models (LMs) can pose hazards from harmful outputs
such as toxic or false text. Prior work has introduced automated tools that
elicit harmful outputs to identify these risks. While this is a valuable step
toward securing models, these approaches rely on a pre-existing way to
efficiently classify undesirable outputs. Using a pre-existing classifier does
not allow for red-teaming to be tailored to the target model. Furthermore, when
failures can be easily classified in advance, red-teaming has limited marginal
value because problems can be avoided by simply filtering training data and/or
model outputs. Here, we consider red-teaming "from scratch," in which the
adversary does not begin with a way to classify failures. Our framework
consists of three steps: 1) Exploring the model's range of behaviors in the
desired context; 2) Establishing a definition and measurement for undesired
behavior (e.g., a classifier trained to reflect human evaluations); and 3)
Exploiting the model's flaws using this measure to develop diverse adversarial
prompts. We use this approach to red-team GPT-3 to discover classes of inputs
that elicit false statements. In doing so, we construct the CommonClaim dataset
of 20,000 statements labeled by humans as common-knowledge-true, common
knowledge-false, or neither. We are making code and data available.
|
http://arxiv.org/pdf/2306.09442
|
Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, Dylan Hadfield-Menell
|
cs.CL, cs.AI, cs.LG
| null | null |
cs.CL
|
20230615
|
20231011
|
[
{
"id": "2205.12548"
},
{
"id": "2202.03286"
},
{
"id": "1712.06751"
},
{
"id": "2206.10812"
},
{
"id": "2308.04265"
},
{
"id": "1803.05355"
},
{
"id": "2307.00175"
},
{
"id": "2203.07281"
},
{
"id": "1909.03242"
},
{
"id": "2307.02483"
},
{
"id": "2302.03668"
},
{
"id": "2203.11147"
},
{
"id": "2010.15980"
},
{
"id": "2302.06503"
},
{
"id": "2304.05197"
},
{
"id": "2103.06332"
},
{
"id": "2005.00174"
},
{
"id": "2104.13733"
},
{
"id": "2209.07858"
},
{
"id": "2205.14334"
},
{
"id": "1908.07125"
},
{
"id": "2212.08073"
},
{
"id": "2101.07691"
},
{
"id": "2307.15043"
},
{
"id": "2303.17548"
},
{
"id": "2109.01653"
},
{
"id": "2302.09664"
},
{
"id": "2212.03827"
},
{
"id": "2104.07567"
},
{
"id": "1812.05271"
},
{
"id": "1804.07461"
},
{
"id": "2104.08678"
},
{
"id": "2206.13316"
},
{
"id": "2302.08582"
},
{
"id": "2307.15217"
},
{
"id": "2303.04381"
},
{
"id": "1907.11692"
},
{
"id": "2212.09251"
},
{
"id": "2303.15056"
},
{
"id": "2212.10539"
},
{
"id": "2110.06674"
},
{
"id": "2009.02252"
},
{
"id": "2109.07958"
},
{
"id": "2005.00661"
}
] |
2306.09539
| 31 |
BST:{SH,MH,MF}:S4 encode long context using a Structured State Space Model (S4) [16]. As described in Equation (3), the S4 kernel matrix K is compiled from the matrices A, B and C and is independent of the input evaluation sequence length. We show that the structured parameterization of K allows our BST models to generalize to longer lengths. We refer the reader to
section 4.2 for results on length generalization. We only run one BST:MH using S4 since the model requires 8% more parameters while performing on par with the faster BST:SH variant. BST:MF also has 8% more parameters but performs better on arXiv and GitHub compared to SH. Interestingly, SH performs better than MF on the PG19, a dataset where local context is more important to predict the next token compared to arXiv and GitHub. We posit that this is likely due to the ability of the SH model to retrieve the most recent context captured by the SSM.
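For intuition about why such a kernel is length-independent, the naive numpy sketch below materializes an SSM convolution kernel of any requested length from fixed matrices A, B and C, with taps of the form C A^j B. This is an illustrative assumption of the kernel's structure, not the S4 implementation, which uses a structured parameterization and a much faster kernel computation.

```python
import numpy as np

# Naive illustration: the j-th kernel tap is C @ A^j @ B, so a kernel of any
# length can be regenerated from the same fixed parameters. Shapes here
# (A: NxN, B: (N,), C: (N,)) are assumptions for the sketch.

def materialize_ssm_kernel(A, B, C, length):
    """Return a length-`length` convolution kernel for the state-space system."""
    kernel = np.empty(length)
    A_power = np.eye(A.shape[0])
    for j in range(length):
        kernel[j] = float(C @ A_power @ B)   # tap j equals C A^j B
        A_power = A_power @ A
    return kernel

# Because the kernel is regenerated from (A, B, C) for any requested length,
# the same trained parameters can be evaluated on longer sequences.
```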
|
2306.09539#31
|
Block-State Transformers
|
State space models (SSMs) have shown impressive results on tasks that require
modeling long-range dependencies and efficiently scale to long sequences owing
to their subquadratic runtime complexity. Originally designed for continuous
signals, SSMs have shown superior performance on a plethora of tasks, in vision
and audio; however, SSMs still lag Transformer performance in Language Modeling
tasks. In this work, we propose a hybrid layer named Block-State Transformer
(BST), that internally combines an SSM sublayer for long-range
contextualization, and a Block Transformer sublayer for short-term
representation of sequences. We study three different, and completely
parallelizable, variants that integrate SSMs and block-wise attention. We show
that our model outperforms similar Transformer-based architectures on language
modeling perplexity and generalizes to longer sequences. In addition, the
Block-State Transformer demonstrates more than tenfold increase in speed at the
layer level compared to the Block-Recurrent Transformer when model
parallelization is employed.
|
http://arxiv.org/pdf/2306.09539
|
Mahan Fathi, Jonathan Pilault, Orhan Firat, Christopher Pal, Pierre-Luc Bacon, Ross Goroshin
|
cs.CL, cs.LG
|
NeurIPS'23 - Thirty-seventh Conference on Neural Information
Processing Systems
| null |
cs.CL
|
20230615
|
20231030
|
[
{
"id": "1901.02860"
}
] |
2306.09093
| 32 |
We discuss the limitations of our work and point out that current multi-modal instruction-tuned LLMs may suffer from various issues in Section 7. We leave the investigation of these issues to future work. Furthermore, we intend to broaden our corpus to encompass multi-turn and multilingual dialogues. This endeavor will take advantage of the capabilities of LLMs to effectively generate/translate long-document texts (Wang et al., 2017; Lyu et al., 2023; Wang et al., 2023; Wu et al., 2023a).
REFERENCES
Irfan Essa, Dhruv Batra, Tim K. Marks, Chiori Hori, Peter Anderson, Stefan Lee, and Devi Parikh. Audio Visual Scene-Aware Dialog. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pp. 7558-7567. Computer Vision Foundation / IEEE, 2019. doi: 10.1109/CVPR.2019.00774. URL http://openaccess.thecvf.com/content_CVPR_2019/html/Alamri_Audio_Visual_Scene-Aware_Dialog_CVPR_2019_paper.html.
|
2306.09093#32
|
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration
|
Although instruction-tuned large language models (LLMs) have exhibited
remarkable capabilities across various NLP tasks, their effectiveness on other
data modalities beyond text has not been fully studied. In this work, we
propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual,
audio, and textual information. Macaw-LLM consists of three main components: a
modality module for encoding multi-modal data, a cognitive module for
harnessing pretrained LLMs, and an alignment module for harmonizing diverse
representations. Our novel alignment module seamlessly bridges multi-modal
features to textual features, simplifying the adaptation process from the
modality modules to the cognitive module. In addition, we construct a
large-scale multi-modal instruction dataset in terms of multi-turn dialogue,
including 69K image instances and 50K video instances. We have made our data,
code and model publicly available, which we hope can pave the way for future
research in multi-modal LLMs and expand the capabilities of LLMs to handle
diverse data modalities and address complex real-world scenarios.
|
http://arxiv.org/pdf/2306.09093
|
Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
|
cs.CL, cs.AI, cs.CV
|
Longyue Wang is the corresponding author. Our project page is at
https://github.com/lyuchenyang/Macaw-LLM
| null |
cs.CL
|
20230615
|
20230615
|
[] |
2306.09212
| 32 |
[Figure 3 (bar chart): per-subject zero-shot accuracy of Baichuan2-13B-Chat and GPT4, with subjects grouped into China-specific, Other, Social Sciences, Humanities, and STEM categories; accuracy axis from 0 to 75.]
Figure 3: GPT4 vs. Baichuan2-13B-Chat on each subject (zero-shot). For a fair comparison, we use free generation strategy for both models.
4 Since these subjects contain a mixture of Chinese elements and global elements, we did not categorize them as China-specific.
|
2306.09212#32
|
CMMLU: Measuring massive multitask language understanding in Chinese
|
As the capabilities of large language models (LLMs) continue to advance,
evaluating their performance becomes increasingly crucial and challenging. This
paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese
benchmark that covers various subjects, including natural science, social
sciences, engineering, and humanities. We conduct a thorough evaluation of 18
advanced multilingual- and Chinese-oriented LLMs, assessing their performance
across different subjects and settings. The results reveal that most existing
LLMs struggle to achieve an average accuracy of 50%, even when provided with
in-context examples and chain-of-thought prompts, whereas the random baseline
stands at 25%. This highlights significant room for improvement in LLMs.
Additionally, we conduct extensive experiments to identify factors impacting
the models' performance and propose directions for enhancing LLMs. CMMLU fills
the gap in evaluating the knowledge and reasoning capabilities of large
language models within the Chinese context.
|
http://arxiv.org/pdf/2306.09212
|
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
|
cs.CL
| null | null |
cs.CL
|
20230615
|
20240117
|
[
{
"id": "2302.13971"
},
{
"id": "2304.12986"
},
{
"id": "2307.00360"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2307.15020"
},
{
"id": "2307.09288"
},
{
"id": "2305.15011"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "2304.08177"
},
{
"id": "2305.10263"
}
] |
2306.09299
| 32 |
20%) as well as highest accuracy obtained, independent of any budget constraints. Our primary observation from Figure 3(a) is that Expected Utility improves student accuracy by up to 7 points at a low communication cost of 20%. Expected Utility also peaks at an accuracy of 71% with only 40% intervention. Since model-based teachers are not always perfect, increased intervention beyond 60% leads to a drop in student accuracy (e.g., in the last 20% of the intervention, the student accuracy drops from 69% → 63%). When the student communicates its confidence scores to the teacher, the teacher is able to compute the true utility of intervention, which, unsurprisingly, leads to a much higher accuracy of 76% at 20% cost and an overall high of 81% accuracy. Nevertheless, estimating expected utility is cheaper, and our results also suggest that a better mental model could further improve performance. Ranking by teacher confidence is ineffective because it is not an indicator of the student's capabilities. Next, in Figure 3(b), we show that ranking by utility outperforms ranking by either pre- or post-intervention confidence scores. In summary, with access to only a few demonstrations of student behavior, a teacher can build an effective mental model of the student and intervene such that the student obtains a much higher accuracy at low communication costs.
|
2306.09299#32
|
Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
|
A hallmark property of explainable AI models is the ability to teach other
agents, communicating knowledge of how to perform a task. While Large Language
Models perform complex reasoning by generating explanations for their
predictions, it is unclear whether they also make good teachers for weaker
agents. To address this, we consider a student-teacher framework between two
LLM agents and study if, when, and how the teacher should intervene with
natural language explanations to improve the student's performance. Since
communication is expensive, we define a budget such that the teacher only
communicates explanations for a fraction of the data, after which the student
should perform well on its own. We decompose the teaching problem along four
axes: (1) if teacher's test time intervention improve student predictions, (2)
when it is worth explaining a data point, (3) how the teacher should
personalize explanations to better teach the student, and (4) if teacher
explanations also improve students on future unexplained data. We first show
that teacher LLMs can indeed intervene on student reasoning to improve their
performance. Next, inspired by the Theory of Mind abilities of effective
teachers, we propose building two few-shot mental models of the student. The
first model defines an Intervention Function that simulates the utility of an
intervention, allowing the teacher to intervene when this utility is the
highest and improving student performance at lower budgets. The second model
enables the teacher to personalize explanations for a particular student and
outperform unpersonalized teachers. We also demonstrate that in multi-turn
interactions, teacher explanations generalize and learning from explained data
improves student performance on future unexplained data. Finally, we verify
that misaligned teachers can lower student performance to random chance by
intentionally misleading them.
|
http://arxiv.org/pdf/2306.09299
|
Swarnadeep Saha, Peter Hase, Mohit Bansal
|
cs.CL, cs.AI, cs.LG
|
NeurIPS 2023 (23 pages, 12 figures). Our code is available at
https://github.com/swarnaHub/ExplanationIntervention
| null |
cs.CL
|
20230615
|
20231114
|
[
{
"id": "2302.13971"
},
{
"id": "2007.12248"
},
{
"id": "2204.02311"
},
{
"id": "2302.08399"
},
{
"id": "2304.05489"
},
{
"id": "2304.11490"
},
{
"id": "2210.11416"
},
{
"id": "2110.14168"
},
{
"id": "2212.10071"
},
{
"id": "1702.08608"
},
{
"id": "2302.02083"
},
{
"id": "2301.12726"
},
{
"id": "2112.04359"
},
{
"id": "1503.02531"
},
{
"id": "2010.04119"
},
{
"id": "2303.12712"
},
{
"id": "2212.08410"
},
{
"id": "2303.17651"
},
{
"id": "2212.09721"
},
{
"id": "2305.11426"
},
{
"id": "2305.14763"
}
] |
2306.09328
| 32 |
Andy Coenen and Adam Pearce. 2019. Understanding UMAP.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805.
Debidatta Dwibedi, Yusuf Aytar, Jonathan Tompson, Pierre Sermanet, and Andrew Zisserman. 2021. With a little help from my friends: Nearest-neighbor contrastive learning of visual representations. In ICCV.
R. A. Finkel and J. L. Bentley. 1974. Quad trees: a data structure for retrieval on composite keys. Acta Informatica, 4.
Michael Gleicher. 2018. Considerations for Visualizing Comparison. IEEE TVCG, 24.
Maarten Grootendorst. 2022. BERTopic: Neural topic modeling with a class-based TF-IDF procedure. arXiv preprint arXiv:2203.05794.
Florian Heimerl, Christoph Kralj, Torsten Moller, and Michael Gleicher. 2022. embComp: Visual Interactive Comparison of Vector Embeddings. IEEE TVCG, 28.
|
2306.09328#32
|
WizMap: Scalable Interactive Visualization for Exploring Large Machine Learning Embeddings
|
Machine learning models often learn latent embedding representations that
capture the domain semantics of their training data. These embedding
representations are valuable for interpreting trained models, building new
models, and analyzing new datasets. However, interpreting and using embeddings
can be challenging due to their opaqueness, high dimensionality, and the large
size of modern datasets. To tackle these challenges, we present WizMap, an
interactive visualization tool to help researchers and practitioners easily
explore large embeddings. With a novel multi-resolution embedding summarization
method and a familiar map-like interaction design, WizMap enables users to
navigate and interpret embedding spaces with ease. Leveraging modern web
technologies such as WebGL and Web Workers, WizMap scales to millions of
embedding points directly in users' web browsers and computational notebooks
without the need for dedicated backend servers. WizMap is open-source and
available at the following public demo link: https://poloclub.github.io/wizmap.
|
http://arxiv.org/pdf/2306.09328
|
Zijie J. Wang, Fred Hohman, Duen Horng Chau
|
cs.LG, cs.CL, cs.CV, cs.HC
|
8 pages, 8 figures, Accepted to ACL 2023. For a demo video, see
https://youtu.be/8fJG87QVceQ. For a live demo, see
https://poloclub.github.io/wizmap. Code is available at
https://github.com/poloclub/wizmap
| null |
cs.LG
|
20230615
|
20230615
|
[
{
"id": "1810.04805"
},
{
"id": "2210.14896"
},
{
"id": "2205.03963"
},
{
"id": "2203.05794"
},
{
"id": "1808.09074"
},
{
"id": "1802.03426"
}
] |
2306.09539
| 32 |
BST:{SH,MF}:UNSTRUCT are based on unstructured parameterized convolution filters, inspired by the Hyena Hierarchies [30] convolutional kernel. We exclude the utilization of the multiplicative gating mechanism employed in Hyena Hierarchies and solely apply the regularizations implemented on the parameterized kernel, denoted as K̄ in Equation (4). This formulation has two important advantages over S4: (1) the K̄ kernel does not need to be recompiled, allowing speedups when using multiple filters; (2) K̄ has more free parameters because it is no longer restricted by the A, B matrices in Equation (3), potentially providing richer representations that can explain the improved perplexity scores over S4 variants. Nonetheless, the UNSTRUCT kernel K̄ relies on learned positional encoding, which makes the method less extendable to longer sequences at inference.
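For contrast with the S4 sketch above, the snippet below applies a freely parameterized (unstructured) kernel with an FFT-based causal convolution: nothing needs to be recompiled from A, B, C when reusing the filter, but the learned taps have a fixed maximum length, which is consistent with the weaker length extrapolation noted above. The function name, shapes, and the FFT-based convolution are assumptions for this sketch, not the paper's implementation.

```python
import numpy as np

# Illustrative unstructured kernel: the taps k_bar are free parameters, so no
# kernel recompilation is needed, but the filter length is fixed at training.

def fft_causal_conv(u, k_bar):
    """Causal convolution of a length-L signal u with a learned kernel k_bar."""
    L = u.shape[0]
    n = 2 * L  # zero-pad to avoid circular wrap-around
    y = np.fft.irfft(np.fft.rfft(u, n) * np.fft.rfft(k_bar, n), n)
    return y[:L]

rng = np.random.default_rng(0)
u = rng.standard_normal(512)              # one channel of the token sequence
k_bar = 0.01 * rng.standard_normal(512)   # freely parameterized kernel taps
context = fft_causal_conv(u, k_bar)       # long-range contextualized signal
```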
We compare the Block-State Transformer to four different baselines:
TRSF-XL:2048 [8] is a Transformer with a training window size of 2048. As expected, increasing the window size improves perplexity, especially on the arXiv and GitHub datasets. However, this model performs worse than BST:SH:HYENA on PG19 and is much slower, bottlenecked by the attention layer on higher sequence lengths.
|
2306.09539#32
|
Block-State Transformers
|
State space models (SSMs) have shown impressive results on tasks that require
modeling long-range dependencies and efficiently scale to long sequences owing
to their subquadratic runtime complexity. Originally designed for continuous
signals, SSMs have shown superior performance on a plethora of tasks, in vision
and audio; however, SSMs still lag Transformer performance in Language Modeling
tasks. In this work, we propose a hybrid layer named Block-State Transformer
(BST), that internally combines an SSM sublayer for long-range
contextualization, and a Block Transformer sublayer for short-term
representation of sequences. We study three different, and completely
parallelizable, variants that integrate SSMs and block-wise attention. We show
that our model outperforms similar Transformer-based architectures on language
modeling perplexity and generalizes to longer sequences. In addition, the
Block-State Transformer demonstrates more than tenfold increase in speed at the
layer level compared to the Block-Recurrent Transformer when model
parallelization is employed.
|
http://arxiv.org/pdf/2306.09539
|
Mahan Fathi, Jonathan Pilault, Orhan Firat, Christopher Pal, Pierre-Luc Bacon, Ross Goroshin
|
cs.CL, cs.LG
|
NeurIPS'23 - Thirty-seventh Conference on Neural Information
Processing Systems
| null |
cs.CL
|
20230615
|
20231030
|
[
{
"id": "1901.02860"
}
] |
2306.09093
| 33 |
Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katherine Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob L. Menick, Sebastian Borgeaud, Andy Brock, Aida Nematzadeh, Sahand Sharifzadeh, Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, and Karén Simonyan. Flamingo: a visual language model for few-shot learning. In NeurIPS, 2022. URL http://papers.nips.cc/paper_files/paper/2022/hash/960a172bc7fbf0177ccccbb411a7d800-Abstract-Conference.html.
Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H.
|
2306.09093#33
|
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration
|
Although instruction-tuned large language models (LLMs) have exhibited
remarkable capabilities across various NLP tasks, their effectiveness on other
data modalities beyond text has not been fully studied. In this work, we
propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual,
audio, and textual information. Macaw-LLM consists of three main components: a
modality module for encoding multi-modal data, a cognitive module for
harnessing pretrained LLMs, and an alignment module for harmonizing diverse
representations. Our novel alignment module seamlessly bridges multi-modal
features to textual features, simplifying the adaptation process from the
modality modules to the cognitive module. In addition, we construct a
large-scale multi-modal instruction dataset in terms of multi-turn dialogue,
including 69K image instances and 50K video instances. We have made our data,
code and model publicly available, which we hope can pave the way for future
research in multi-modal LLMs and expand the capabilities of LLMs to handle
diverse data modalities and address complex real-world scenarios.
|
http://arxiv.org/pdf/2306.09093
|
Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
|
cs.CL, cs.AI, cs.CV
|
Longyue Wang is the corresponding author. Our project page is at
https://github.com/lyuchenyang/Macaw-LLM
| null |
cs.CL
|
20230615
|
20230615
|
[] |
2306.09212
| 33 |
Table 2: Zero-shot accuracy on CMMLU STEM subset, and full set, with direct answer (DA) prompt and chain-of-thought (COT) prompt. To ensure a fair comparison, we use the free generation strategy. "E changes" = the proportion (%) of instances that cannot be matched after using COT minus the corresponding proportion (%) with the DA prompt.
Model                 STEM DA   STEM COT   Overall DA   Overall COT   E changes
ChatGPT                 45.22      46.58        53.14         52.73       +0.55
ChatGLM2-6B             42.42      42.56        49.61         49.34       -0.21
Baichuan2-13B-Chat      45.18      42.70        58.77         52.82       +3.85
BatGPT-15B-sirius       38.13      34.66        45.26         42.87       +1.35
InternLM-Chat-20B       42.09      32.31        53.52         43.29       +3.87
Xverse-13B-Chat         40.13      30.53        52.96         39.27      +19.77
in training is essential to accommodate users with different language backgrounds.
4.2 ANALYSIS
|
2306.09212#33
|
CMMLU: Measuring massive multitask language understanding in Chinese
|
As the capabilities of large language models (LLMs) continue to advance,
evaluating their performance becomes increasingly crucial and challenging. This
paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese
benchmark that covers various subjects, including natural science, social
sciences, engineering, and humanities. We conduct a thorough evaluation of 18
advanced multilingual- and Chinese-oriented LLMs, assessing their performance
across different subjects and settings. The results reveal that most existing
LLMs struggle to achieve an average accuracy of 50%, even when provided with
in-context examples and chain-of-thought prompts, whereas the random baseline
stands at 25%. This highlights significant room for improvement in LLMs.
Additionally, we conduct extensive experiments to identify factors impacting
the models' performance and propose directions for enhancing LLMs. CMMLU fills
the gap in evaluating the knowledge and reasoning capabilities of large
language models within the Chinese context.
|
http://arxiv.org/pdf/2306.09212
|
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
|
cs.CL
| null | null |
cs.CL
|
20230615
|
20240117
|
[
{
"id": "2302.13971"
},
{
"id": "2304.12986"
},
{
"id": "2307.00360"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2307.15020"
},
{
"id": "2307.09288"
},
{
"id": "2305.15011"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "2304.08177"
},
{
"id": "2305.10263"
}
] |
2306.09299
| 33 |
When the teacher does not have access to gold answers, can we compute Expected Utility with respect to teacher answers? Teachers can be inaccurate and may not even have access to gold answers. In such scenarios, can we treat the teacher as the gold standard and compute utility with respect to the teacher's answers? We explore this in Figure 4, comparing Expected Utility to Random Intervention and Student Least Confidence. The latter denotes that when the student is least confident about any of the answer options, it is more likely to answer incorrectly and hence will benefit from intervention. We observe that Expected Utility, computed with teacher answers, also leads to up to 2 points improvement in accuracy at 20% budget, which is also within 1% of the accuracy (63.60%) obtained with 100% communication cost. In Appendix Table 9, we conduct the same experiment with a much stronger teacher (LLaMA-65B) and a weaker student (LLaMA-7B) and obtain even stronger evidence of this result. Stronger teachers like LLaMA-65b are significantly better at solving reasoning tasks and thus their predicted labels will mostly match
|
2306.09299#33
|
Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
|
A hallmark property of explainable AI models is the ability to teach other
agents, communicating knowledge of how to perform a task. While Large Language
Models perform complex reasoning by generating explanations for their
predictions, it is unclear whether they also make good teachers for weaker
agents. To address this, we consider a student-teacher framework between two
LLM agents and study if, when, and how the teacher should intervene with
natural language explanations to improve the student's performance. Since
communication is expensive, we define a budget such that the teacher only
communicates explanations for a fraction of the data, after which the student
should perform well on its own. We decompose the teaching problem along four
axes: (1) if teacher's test time intervention improve student predictions, (2)
when it is worth explaining a data point, (3) how the teacher should
personalize explanations to better teach the student, and (4) if teacher
explanations also improve students on future unexplained data. We first show
that teacher LLMs can indeed intervene on student reasoning to improve their
performance. Next, inspired by the Theory of Mind abilities of effective
teachers, we propose building two few-shot mental models of the student. The
first model defines an Intervention Function that simulates the utility of an
intervention, allowing the teacher to intervene when this utility is the
highest and improving student performance at lower budgets. The second model
enables the teacher to personalize explanations for a particular student and
outperform unpersonalized teachers. We also demonstrate that in multi-turn
interactions, teacher explanations generalize and learning from explained data
improves student performance on future unexplained data. Finally, we verify
that misaligned teachers can lower student performance to random chance by
intentionally misleading them.
|
http://arxiv.org/pdf/2306.09299
|
Swarnadeep Saha, Peter Hase, Mohit Bansal
|
cs.CL, cs.AI, cs.LG
|
NeurIPS 2023 (23 pages, 12 figures). Our code is available at
https://github.com/swarnaHub/ExplanationIntervention
| null |
cs.CL
|
20230615
|
20231114
|
[
{
"id": "2302.13971"
},
{
"id": "2007.12248"
},
{
"id": "2204.02311"
},
{
"id": "2302.08399"
},
{
"id": "2304.05489"
},
{
"id": "2304.11490"
},
{
"id": "2210.11416"
},
{
"id": "2110.14168"
},
{
"id": "2212.10071"
},
{
"id": "1702.08608"
},
{
"id": "2302.02083"
},
{
"id": "2301.12726"
},
{
"id": "2112.04359"
},
{
"id": "1503.02531"
},
{
"id": "2010.04119"
},
{
"id": "2303.12712"
},
{
"id": "2212.08410"
},
{
"id": "2303.17651"
},
{
"id": "2212.09721"
},
{
"id": "2305.11426"
},
{
"id": "2305.14763"
}
] |
2306.09328
| 33 |
Thorsten Hoeger, Chris Dew, Finn Pauls, and Jim Wilson. 2014. Newline Delimited JSON: A standard for delimiting JSON in stream protocols.
Margaret L. Kern, Gregory Park, Johannes C. Eichstaedt, H. Andrew Schwartz, Maarten Sap, Laura K. Smith, and Lyle H. Ungar. 2016. Gaining insights from social media language: Methodologies and challenges. Psychological Methods, 21.
Po-Ming Law, Alex Endert, and John Stasko. 2020. Characterizing Automated Data Insights. In 2020 IEEE Visualization Conference (VIS).
Kuang-Huei Lee, Xiaodong He, Lei Zhang, and Linjun Yang. 2018. CleanNet: Transfer learning for scalable image classifier training with label noise. In CVPR.
Regl-Scatterplot: A Scalable Interactive JavaScript-based Scatter Plot Library. Journal of Open Source Software, 8.
Quan Li, Kristanto Sean Njotoprawiro, Hammad Haleem, Qiaoan Chen, Chris Yi, and Xiaojuan Ma. 2018. EmbeddingVis: A Visual Analytics Approach to Comparative Network Embedding Inspection. arXiv:1808.09074.
|
2306.09328#33
|
WizMap: Scalable Interactive Visualization for Exploring Large Machine Learning Embeddings
|
Machine learning models often learn latent embedding representations that
capture the domain semantics of their training data. These embedding
representations are valuable for interpreting trained models, building new
models, and analyzing new datasets. However, interpreting and using embeddings
can be challenging due to their opaqueness, high dimensionality, and the large
size of modern datasets. To tackle these challenges, we present WizMap, an
interactive visualization tool to help researchers and practitioners easily
explore large embeddings. With a novel multi-resolution embedding summarization
method and a familiar map-like interaction design, WizMap enables users to
navigate and interpret embedding spaces with ease. Leveraging modern web
technologies such as WebGL and Web Workers, WizMap scales to millions of
embedding points directly in users' web browsers and computational notebooks
without the need for dedicated backend servers. WizMap is open-source and
available at the following public demo link: https://poloclub.github.io/wizmap.
|
http://arxiv.org/pdf/2306.09328
|
Zijie J. Wang, Fred Hohman, Duen Horng Chau
|
cs.LG, cs.CL, cs.CV, cs.HC
|
8 pages, 8 figures, Accepted to ACL 2023. For a demo video, see
https://youtu.be/8fJG87QVceQ. For a live demo, see
https://poloclub.github.io/wizmap. Code is available at
https://github.com/poloclub/wizmap
| null |
cs.LG
|
20230615
|
20230615
|
[
{
"id": "1810.04805"
},
{
"id": "2210.14896"
},
{
"id": "2205.03963"
},
{
"id": "2203.05794"
},
{
"id": "1808.09074"
},
{
"id": "1802.03426"
}
] |
2306.09442
| 33 |
5 DISCUSSION
Realistic and competitive red-teaming: We have introduced and tested a complete framework for red-teaming large language models. We have found that red-teaming is possible and can even be more effective when done from scratch instead of with a pretrained classifier. Unlike prior works, this makes our approach inherently competitive with simply using a pre-existing classifier to filter training data and/or model outputs. We also provide the first example of automated red-teaming of an LM at scale to elicit false text. And because we focus on red-teaming with respect to claims that are false by common knowledge, these failures can be regarded as particularly egregious, since the elicited claims are widely recognized as false.
|
2306.09442#33
|
Explore, Establish, Exploit: Red Teaming Language Models from Scratch
|
Deploying large language models (LMs) can pose hazards from harmful outputs
such as toxic or false text. Prior work has introduced automated tools that
elicit harmful outputs to identify these risks. While this is a valuable step
toward securing models, these approaches rely on a pre-existing way to
efficiently classify undesirable outputs. Using a pre-existing classifier does
not allow for red-teaming to be tailored to the target model. Furthermore, when
failures can be easily classified in advance, red-teaming has limited marginal
value because problems can be avoided by simply filtering training data and/or
model outputs. Here, we consider red-teaming "from scratch," in which the
adversary does not begin with a way to classify failures. Our framework
consists of three steps: 1) Exploring the model's range of behaviors in the
desired context; 2) Establishing a definition and measurement for undesired
behavior (e.g., a classifier trained to reflect human evaluations); and 3)
Exploiting the model's flaws using this measure to develop diverse adversarial
prompts. We use this approach to red-team GPT-3 to discover classes of inputs
that elicit false statements. In doing so, we construct the CommonClaim dataset
of 20,000 statements labeled by humans as common-knowledge-true, common
knowledge-false, or neither. We are making code and data available.
|
http://arxiv.org/pdf/2306.09442
|
Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, Dylan Hadfield-Menell
|
cs.CL, cs.AI, cs.LG
| null | null |
cs.CL
|
20230615
|
20231011
|
[
{
"id": "2205.12548"
},
{
"id": "2202.03286"
},
{
"id": "1712.06751"
},
{
"id": "2206.10812"
},
{
"id": "2308.04265"
},
{
"id": "1803.05355"
},
{
"id": "2307.00175"
},
{
"id": "2203.07281"
},
{
"id": "1909.03242"
},
{
"id": "2307.02483"
},
{
"id": "2302.03668"
},
{
"id": "2203.11147"
},
{
"id": "2010.15980"
},
{
"id": "2302.06503"
},
{
"id": "2304.05197"
},
{
"id": "2103.06332"
},
{
"id": "2005.00174"
},
{
"id": "2104.13733"
},
{
"id": "2209.07858"
},
{
"id": "2205.14334"
},
{
"id": "1908.07125"
},
{
"id": "2212.08073"
},
{
"id": "2101.07691"
},
{
"id": "2307.15043"
},
{
"id": "2303.17548"
},
{
"id": "2109.01653"
},
{
"id": "2302.09664"
},
{
"id": "2212.03827"
},
{
"id": "2104.07567"
},
{
"id": "1812.05271"
},
{
"id": "1804.07461"
},
{
"id": "2104.08678"
},
{
"id": "2206.13316"
},
{
"id": "2302.08582"
},
{
"id": "2307.15217"
},
{
"id": "2303.04381"
},
{
"id": "1907.11692"
},
{
"id": "2212.09251"
},
{
"id": "2303.15056"
},
{
"id": "2212.10539"
},
{
"id": "2110.06674"
},
{
"id": "2009.02252"
},
{
"id": "2109.07958"
},
{
"id": "2005.00661"
}
] |
2306.09539
| 33 |
SLIDE:12L [21] This model is almost identical to TRSF-XL:2048. However, it uses a sliding window of size 512 over a segment of 4096 tokens. The sliding window is differentiable over two blocks, while TRSF-XL does not backpropagate through the cached keys and values from the previous window. This simple baseline is closest in terms of training speed to BST:SH. The perplexity scores show that integrating a representation of the past, as with BRECT and BST, positively impacts LM performance.
BRECT:FIXED:SKIP [21] is the strongest performing and fastest Block-Recurrent Transformer architecture in [21]. This architecture is very similar to SLIDE:12L. There is, however, a sequential recurrent "skip" configuration: a simple linear-layer gating mechanism that combines the current block's hidden representation with past information from the previous blocks.
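A minimal sketch, under assumed shapes and names (not the papers' code), of the kind of linear gating that mixes the current block's hidden states with a summary carried over from previous blocks:

```python
# Minimal sketch of a "skip"-style linear gate that mixes the current block's
# hidden states with a recurrent summary carried over from previous blocks.
# Shapes and names are illustrative assumptions, not the papers' code.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_skip(current, past_summary, w_gate, b_gate):
    """current, past_summary: [block_len, d_model]; w_gate: [2*d_model, d_model]."""
    gate_input = np.concatenate([current, past_summary], axis=-1)
    g = sigmoid(gate_input @ w_gate + b_gate)          # per-feature gate in (0, 1)
    return g * current + (1.0 - g) * past_summary      # convex mix of new and old

block_len, d_model = 4, 8
rng = np.random.default_rng(0)
out = gated_skip(rng.normal(size=(block_len, d_model)),
                 rng.normal(size=(block_len, d_model)),
                 rng.normal(size=(2 * d_model, d_model)) * 0.1,
                 np.zeros(d_model))
print(out.shape)  # (4, 8)
```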
|
2306.09539#33
|
Block-State Transformers
|
State space models (SSMs) have shown impressive results on tasks that require
modeling long-range dependencies and efficiently scale to long sequences owing
to their subquadratic runtime complexity. Originally designed for continuous
signals, SSMs have shown superior performance on a plethora of tasks, in vision
and audio; however, SSMs still lag Transformer performance in Language Modeling
tasks. In this work, we propose a hybrid layer named Block-State Transformer
(BST), that internally combines an SSM sublayer for long-range
contextualization, and a Block Transformer sublayer for short-term
representation of sequences. We study three different, and completely
parallelizable, variants that integrate SSMs and block-wise attention. We show
that our model outperforms similar Transformer-based architectures on language
modeling perplexity and generalizes to longer sequences. In addition, the
Block-State Transformer demonstrates more than tenfold increase in speed at the
layer level compared to the Block-Recurrent Transformer when model
parallelization is employed.
|
http://arxiv.org/pdf/2306.09539
|
Mahan Fathi, Jonathan Pilault, Orhan Firat, Christopher Pal, Pierre-Luc Bacon, Ross Goroshin
|
cs.CL, cs.LG
|
NeurIPS'23 - Thirty-seventh Conference on Neural Information
Processing Systems
| null |
cs.CL
|
20230615
|
20231030
|
[
{
"id": "1901.02860"
}
] |
2306.09093
| 34 |
Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernández Ábrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan A. Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vladimir Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, and et al. PaLM 2 technical report. CoRR, abs/2305.10403, 2023. doi: 10.48550/arXiv.2305.10403. URL https://doi.org/10.48550/arXiv.2305.10403.
|
2306.09093#34
|
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration
|
Although instruction-tuned large language models (LLMs) have exhibited
remarkable capabilities across various NLP tasks, their effectiveness on other
data modalities beyond text has not been fully studied. In this work, we
propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual,
audio, and textual information. Macaw-LLM consists of three main components: a
modality module for encoding multi-modal data, a cognitive module for
harnessing pretrained LLMs, and an alignment module for harmonizing diverse
representations. Our novel alignment module seamlessly bridges multi-modal
features to textual features, simplifying the adaptation process from the
modality modules to the cognitive module. In addition, we construct a
large-scale multi-modal instruction dataset in terms of multi-turn dialogue,
including 69K image instances and 50K video instances. We have made our data,
code and model publicly available, which we hope can pave the way for future
research in multi-modal LLMs and expand the capabilities of LLMs to handle
diverse data modalities and address complex real-world scenarios.
|
http://arxiv.org/pdf/2306.09093
|
Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
|
cs.CL, cs.AI, cs.CV
|
Longyue Wang is the corresponding author. Our project page is at
https://github.com/lyuchenyang/Macaw-LLM
| null |
cs.CL
|
20230615
|
20230615
|
[] |
2306.09212
| 34 |
in training is essential to accommodate users with different language backgrounds.
4.2 ANALYSIS
In order to gain a comprehensive understanding of the LLM's performance on CMMLU, we explored three factors that may enhance the model's performance and two factors that could potentially diminish its performance. Specifically, we investigated whether the following factors can improve the model's performance: (1) utilizing chain-of-thought prompts, (2) increasing the number of input examples, and (3) employing larger-sized models within the same family. Conversely, we explored whether the following factors make the task more challenging for LLMs: (4) questions containing negation words, and (5) questions with sub-options within them. For different analyses, we choose different models in different stages according to the relevance and result availability.
|
2306.09212#34
|
CMMLU: Measuring massive multitask language understanding in Chinese
|
As the capabilities of large language models (LLMs) continue to advance,
evaluating their performance becomes increasingly crucial and challenging. This
paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese
benchmark that covers various subjects, including natural science, social
sciences, engineering, and humanities. We conduct a thorough evaluation of 18
advanced multilingual- and Chinese-oriented LLMs, assessing their performance
across different subjects and settings. The results reveal that most existing
LLMs struggle to achieve an average accuracy of 50%, even when provided with
in-context examples and chain-of-thought prompts, whereas the random baseline
stands at 25%. This highlights significant room for improvement in LLMs.
Additionally, we conduct extensive experiments to identify factors impacting
the models' performance and propose directions for enhancing LLMs. CMMLU fills
the gap in evaluating the knowledge and reasoning capabilities of large
language models within the Chinese context.
|
http://arxiv.org/pdf/2306.09212
|
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
|
cs.CL
| null | null |
cs.CL
|
20230615
|
20240117
|
[
{
"id": "2302.13971"
},
{
"id": "2304.12986"
},
{
"id": "2307.00360"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2307.15020"
},
{
"id": "2307.09288"
},
{
"id": "2305.15011"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "2304.08177"
},
{
"id": "2305.10263"
}
] |
2306.09299
| 34 |
even stronger evidence of this result. Stronger teachers like LLaMA-65B are significantly better at solving reasoning tasks and thus their predicted labels will mostly match the gold labels. Hence, even if we rely on the teacher's predictions for computing expected utility, it improves student accuracy by up to 5 points (statistically significant with p = 0.02), further closing the gap between "with and without gold label" scenarios. In summary, we conclude that imperfect teacher LLMs can also successfully intervene by building mental models of students that do not rely on ground-truth answers. Appendix E contains additional results for RQ2 with other models and datasets.
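A minimal sketch (function and variable names are assumptions, not the paper's implementation) of budget-constrained intervention that ranks points by an expected-utility estimate computed against the teacher's own predictions rather than gold labels:

```python
# Minimal sketch of budget-constrained intervention: rank test points by an
# expected-utility estimate computed against the teacher's own predicted
# answers (no gold labels), then intervene only on the top fraction within
# the budget. Names are illustrative assumptions, not the paper's code.

def expected_utility(p_alone, p_with_teacher, teacher_answer):
    """Estimated gain from intervening on one point: the student's probability
    of producing the teacher's answer with the teacher's explanation minus the
    probability of producing it on its own."""
    return p_with_teacher[teacher_answer] - p_alone[teacher_answer]

def choose_interventions(utilities, budget_fraction):
    """utilities: dict point_id -> utility. Returns the ids to intervene on."""
    k = int(len(utilities) * budget_fraction)
    ranked = sorted(utilities, key=utilities.get, reverse=True)
    return set(ranked[:k])

# Three questions, teacher predicts "B" for each; a 1/3 budget selects only
# the point where intervention is expected to help the most.
utils = {
    "q1": expected_utility({"A": 0.2, "B": 0.3}, {"A": 0.1, "B": 0.8}, "B"),  # +0.5
    "q2": expected_utility({"A": 0.1, "B": 0.9}, {"A": 0.1, "B": 0.9}, "B"),  # 0.0
    "q3": expected_utility({"A": 0.6, "B": 0.4}, {"A": 0.5, "B": 0.5}, "B"),  # +0.1
}
print(choose_interventions(utils, 1 / 3))  # {'q1'}
```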
|
2306.09299#34
|
Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
|
A hallmark property of explainable AI models is the ability to teach other
agents, communicating knowledge of how to perform a task. While Large Language
Models perform complex reasoning by generating explanations for their
predictions, it is unclear whether they also make good teachers for weaker
agents. To address this, we consider a student-teacher framework between two
LLM agents and study if, when, and how the teacher should intervene with
natural language explanations to improve the student's performance. Since
communication is expensive, we define a budget such that the teacher only
communicates explanations for a fraction of the data, after which the student
should perform well on its own. We decompose the teaching problem along four
axes: (1) if teacher's test time intervention improve student predictions, (2)
when it is worth explaining a data point, (3) how the teacher should
personalize explanations to better teach the student, and (4) if teacher
explanations also improve students on future unexplained data. We first show
that teacher LLMs can indeed intervene on student reasoning to improve their
performance. Next, inspired by the Theory of Mind abilities of effective
teachers, we propose building two few-shot mental models of the student. The
first model defines an Intervention Function that simulates the utility of an
intervention, allowing the teacher to intervene when this utility is the
highest and improving student performance at lower budgets. The second model
enables the teacher to personalize explanations for a particular student and
outperform unpersonalized teachers. We also demonstrate that in multi-turn
interactions, teacher explanations generalize and learning from explained data
improves student performance on future unexplained data. Finally, we verify
that misaligned teachers can lower student performance to random chance by
intentionally misleading them.
|
http://arxiv.org/pdf/2306.09299
|
Swarnadeep Saha, Peter Hase, Mohit Bansal
|
cs.CL, cs.AI, cs.LG
|
NeurIPS 2023 (23 pages, 12 figures). Our code is available at
https://github.com/swarnaHub/ExplanationIntervention
| null |
cs.CL
|
20230615
|
20231114
|
[
{
"id": "2302.13971"
},
{
"id": "2007.12248"
},
{
"id": "2204.02311"
},
{
"id": "2302.08399"
},
{
"id": "2304.05489"
},
{
"id": "2304.11490"
},
{
"id": "2210.11416"
},
{
"id": "2110.14168"
},
{
"id": "2212.10071"
},
{
"id": "1702.08608"
},
{
"id": "2302.02083"
},
{
"id": "2301.12726"
},
{
"id": "2112.04359"
},
{
"id": "1503.02531"
},
{
"id": "2010.04119"
},
{
"id": "2303.12712"
},
{
"id": "2212.08410"
},
{
"id": "2303.17651"
},
{
"id": "2212.09721"
},
{
"id": "2305.11426"
},
{
"id": "2305.14763"
}
] |
2306.09328
| 34 |
Shusen Liu, Peer-Timo Bremer, Jayaraman J. Thiagarajan, Vivek Srikumar, Bei Wang, Yarden Livnat, and Valerio Pascucci. 2018. Visual Exploration of Semantic Relationships in Neural Word Embeddings. IEEE TVCG, 24.
Yang Liu, Eunice Jun, Qisheng Li, and Jeffrey Heer. 2019. Latent Space Cartography: Visual Analysis of Vector Space Embeddings. Computer Graphics Forum, 38.
Mikola Lysenko. 2016. Regl: Functional WebGL.
Leland McInnes, John Healy, and James Melville. 2020. UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. arXiv:1802.03426.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. arXiv:1301.3781.
Haekyu Park, Nilaksh Das, Rahul Duggal, Austin P. Wright, Omar Shaikh, Fred Hohman, and Duen Horng Polo Chau. 2022. NeuroCartography: Scalable Automatic Visual Summarization of Concepts in Deep Neural Networks. IEEE TVCG.
|
2306.09328#34
|
WizMap: Scalable Interactive Visualization for Exploring Large Machine Learning Embeddings
|
Machine learning models often learn latent embedding representations that
capture the domain semantics of their training data. These embedding
representations are valuable for interpreting trained models, building new
models, and analyzing new datasets. However, interpreting and using embeddings
can be challenging due to their opaqueness, high dimensionality, and the large
size of modern datasets. To tackle these challenges, we present WizMap, an
interactive visualization tool to help researchers and practitioners easily
explore large embeddings. With a novel multi-resolution embedding summarization
method and a familiar map-like interaction design, WizMap enables users to
navigate and interpret embedding spaces with ease. Leveraging modern web
technologies such as WebGL and Web Workers, WizMap scales to millions of
embedding points directly in users' web browsers and computational notebooks
without the need for dedicated backend servers. WizMap is open-source and
available at the following public demo link: https://poloclub.github.io/wizmap.
|
http://arxiv.org/pdf/2306.09328
|
Zijie J. Wang, Fred Hohman, Duen Horng Chau
|
cs.LG, cs.CL, cs.CV, cs.HC
|
8 pages, 8 figures, Accepted to ACL 2023. For a demo video, see
https://youtu.be/8fJG87QVceQ. For a live demo, see
https://poloclub.github.io/wizmap. Code is available at
https://github.com/poloclub/wizmap
| null |
cs.LG
|
20230615
|
20230615
|
[
{
"id": "1810.04805"
},
{
"id": "2210.14896"
},
{
"id": "2205.03963"
},
{
"id": "2203.05794"
},
{
"id": "1808.09074"
},
{
"id": "1802.03426"
}
] |
2306.09442
| 34 |
The value of preference formation and human factors for AI oversight: Human preferences have been found to form gradually over time (Druckman & Lupia, 2000) and are highly context- dependent (Milano et al., 2021; Lindner & El-Assady, 2022), so human interaction with a model may be necessary for understanding desirable and harmful behavior (Dobbe et al., 2021). For specific deployment contexts, a label set that a pretrained classifier was trained with may fail to adequately express the various categories of behaviors that a human would desire (Price, 2022; Freedman et al., 2021; Bobu et al., 2020; Guerdan et al., 2023). Our framework allows for the human to gain a contextual understanding of the modelâs behavior and form preferences in the Establish step. We found this to be important. For example, prior works have introduced datasets of claims labeled âtrueâ and âfalseâ (Lin et al., 2021; Onoe et al., 2021; Thorne et al., 2018; Petroni et al., 2020). However, since not all boolean statements are objectively true or false, only using these two labels would be a form of choice set misspecification (Freedman
|
2306.09442#34
|
Explore, Establish, Exploit: Red Teaming Language Models from Scratch
|
Deploying large language models (LMs) can pose hazards from harmful outputs
such as toxic or false text. Prior work has introduced automated tools that
elicit harmful outputs to identify these risks. While this is a valuable step
toward securing models, these approaches rely on a pre-existing way to
efficiently classify undesirable outputs. Using a pre-existing classifier does
not allow for red-teaming to be tailored to the target model. Furthermore, when
failures can be easily classified in advance, red-teaming has limited marginal
value because problems can be avoided by simply filtering training data and/or
model outputs. Here, we consider red-teaming "from scratch," in which the
adversary does not begin with a way to classify failures. Our framework
consists of three steps: 1) Exploring the model's range of behaviors in the
desired context; 2) Establishing a definition and measurement for undesired
behavior (e.g., a classifier trained to reflect human evaluations); and 3)
Exploiting the model's flaws using this measure to develop diverse adversarial
prompts. We use this approach to red-team GPT-3 to discover classes of inputs
that elicit false statements. In doing so, we construct the CommonClaim dataset
of 20,000 statements labeled by humans as common-knowledge-true, common
knowledge-false, or neither. We are making code and data available.
|
http://arxiv.org/pdf/2306.09442
|
Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, Dylan Hadfield-Menell
|
cs.CL, cs.AI, cs.LG
| null | null |
cs.CL
|
20230615
|
20231011
|
[
{
"id": "2205.12548"
},
{
"id": "2202.03286"
},
{
"id": "1712.06751"
},
{
"id": "2206.10812"
},
{
"id": "2308.04265"
},
{
"id": "1803.05355"
},
{
"id": "2307.00175"
},
{
"id": "2203.07281"
},
{
"id": "1909.03242"
},
{
"id": "2307.02483"
},
{
"id": "2302.03668"
},
{
"id": "2203.11147"
},
{
"id": "2010.15980"
},
{
"id": "2302.06503"
},
{
"id": "2304.05197"
},
{
"id": "2103.06332"
},
{
"id": "2005.00174"
},
{
"id": "2104.13733"
},
{
"id": "2209.07858"
},
{
"id": "2205.14334"
},
{
"id": "1908.07125"
},
{
"id": "2212.08073"
},
{
"id": "2101.07691"
},
{
"id": "2307.15043"
},
{
"id": "2303.17548"
},
{
"id": "2109.01653"
},
{
"id": "2302.09664"
},
{
"id": "2212.03827"
},
{
"id": "2104.07567"
},
{
"id": "1812.05271"
},
{
"id": "1804.07461"
},
{
"id": "2104.08678"
},
{
"id": "2206.13316"
},
{
"id": "2302.08582"
},
{
"id": "2307.15217"
},
{
"id": "2303.04381"
},
{
"id": "1907.11692"
},
{
"id": "2212.09251"
},
{
"id": "2303.15056"
},
{
"id": "2212.10539"
},
{
"id": "2110.06674"
},
{
"id": "2009.02252"
},
{
"id": "2109.07958"
},
{
"id": "2005.00661"
}
] |
2306.09539
| 34 |
GSS-HYBRID-L [28] is the closest SSM-Transformer hybrid model that was tested on long-range language modeling tasks. GSS-HYBRID-L is based on the Diagonal State Space (DSS) [16]. DSS and S4 are similar in performance and architecture, only differing in the initialization of the kernel K [15]. [16] further improves on DSS for LM tasks by introducing a Gated State Space version called GSS, which performs better on PG19, arXiv and GitHub. Unlike our method, GSS-HYBRID-L does not directly integrate SSM states into the attention mechanism but only interleaves 32 GSS layers with Transformer layers. It must be noted that the GSS-HYBRID-L scores were obtained after grid searching over four learning rates {6.4, 3.2, 1.6, 0.8}×10^-3 and used a different learning rate and weight decay for the SSM layer and the Transformer layer to avoid training instabilities. In our experiment, we did not use grid search and used the same learning rate for all layers. BST results demonstrate that integrating SSM states into the Transformer attention provides larger benefits than interleaving SSM and attention layers as in GSS-HYBRID-L.
|
2306.09539#34
|
Block-State Transformers
|
State space models (SSMs) have shown impressive results on tasks that require
modeling long-range dependencies and efficiently scale to long sequences owing
to their subquadratic runtime complexity. Originally designed for continuous
signals, SSMs have shown superior performance on a plethora of tasks, in vision
and audio; however, SSMs still lag Transformer performance in Language Modeling
tasks. In this work, we propose a hybrid layer named Block-State Transformer
(BST), that internally combines an SSM sublayer for long-range
contextualization, and a Block Transformer sublayer for short-term
representation of sequences. We study three different, and completely
parallelizable, variants that integrate SSMs and block-wise attention. We show
that our model outperforms similar Transformer-based architectures on language
modeling perplexity and generalizes to longer sequences. In addition, the
Block-State Transformer demonstrates more than tenfold increase in speed at the
layer level compared to the Block-Recurrent Transformer when model
parallelization is employed.
|
http://arxiv.org/pdf/2306.09539
|
Mahan Fathi, Jonathan Pilault, Orhan Firat, Christopher Pal, Pierre-Luc Bacon, Ross Goroshin
|
cs.CL, cs.LG
|
NeurIPS'23 - Thirty-seventh Conference on Neural Information
Processing Systems
| null |
cs.CL
|
20230615
|
20231030
|
[
{
"id": "1901.02860"
}
] |
2306.09093
| 35 |
Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. VQA: Visual question answering. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), December 2015. URL https://openaccess.thecvf.com/content_iccv_2015/html/Antol_VQA_Visual_Question_ICCV_2015_paper.html.
Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, and Michael Auli. data2vec: A general framework for self-supervised learning in speech, vision and language. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato (eds.), International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pp. 1298–1312. PMLR, 2022. URL https://proceedings.mlr.press/v162/baevski22a.html.
|
2306.09093#35
|
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration
|
Although instruction-tuned large language models (LLMs) have exhibited
remarkable capabilities across various NLP tasks, their effectiveness on other
data modalities beyond text has not been fully studied. In this work, we
propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual,
audio, and textual information. Macaw-LLM consists of three main components: a
modality module for encoding multi-modal data, a cognitive module for
harnessing pretrained LLMs, and an alignment module for harmonizing diverse
representations. Our novel alignment module seamlessly bridges multi-modal
features to textual features, simplifying the adaptation process from the
modality modules to the cognitive module. In addition, we construct a
large-scale multi-modal instruction dataset in terms of multi-turn dialogue,
including 69K image instances and 50K video instances. We have made our data,
code and model publicly available, which we hope can pave the way for future
research in multi-modal LLMs and expand the capabilities of LLMs to handle
diverse data modalities and address complex real-world scenarios.
|
http://arxiv.org/pdf/2306.09093
|
Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
|
cs.CL, cs.AI, cs.CV
|
Longyue Wang is the corresponding author. Our project page is at
https://github.com/lyuchenyang/Macaw-LLM
| null |
cs.CL
|
20230615
|
20230615
|
[] |
2306.09212
| 35 |
Can a chain-of-thought prompt improve model performance? To investigate the potential benefits of the chain-of-thought (COT) prompt in generating better results, we modified the prompt from "请直接给出正确答案的选项 (please provide the correct answer choice directly)" to "逐步分析并选出正确答案 (Analyze step by step and select the correct answer)." Since our dataset does not contain answer analyses, we adopt the zero-shot setting for this experiment. The results are presented in Table 2; the breakdown of all sub-categories is provided in Appendix J.3.
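A minimal sketch of how the DA and COT zero-shot prompts described above might be assembled for a single question; the question/answer formatting and variable names are assumptions:

```python
# Minimal sketch of assembling the two zero-shot prompts described above for
# one multiple-choice question. The question/answer formatting and variable
# names are illustrative assumptions.

DA_INSTRUCTION = "请直接给出正确答案的选项"   # "provide the correct answer choice directly"
COT_INSTRUCTION = "逐步分析并选出正确答案"     # "analyze step by step and select the correct answer"

def build_prompt(question, choices, use_cot=False):
    instruction = COT_INSTRUCTION if use_cot else DA_INSTRUCTION
    options = "\n".join(f"{label}. {text}" for label, text in zip("ABCD", choices))
    return f"{instruction}\n题目:{question}\n{options}\n答案:"

print(build_prompt("以下哪个是中国的首都?", ["上海", "北京", "广州", "深圳"], use_cot=True))
```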
|
2306.09212#35
|
CMMLU: Measuring massive multitask language understanding in Chinese
|
As the capabilities of large language models (LLMs) continue to advance,
evaluating their performance becomes increasingly crucial and challenging. This
paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese
benchmark that covers various subjects, including natural science, social
sciences, engineering, and humanities. We conduct a thorough evaluation of 18
advanced multilingual- and Chinese-oriented LLMs, assessing their performance
across different subjects and settings. The results reveal that most existing
LLMs struggle to achieve an average accuracy of 50%, even when provided with
in-context examples and chain-of-thought prompts, whereas the random baseline
stands at 25%. This highlights significant room for improvement in LLMs.
Additionally, we conduct extensive experiments to identify factors impacting
the models' performance and propose directions for enhancing LLMs. CMMLU fills
the gap in evaluating the knowledge and reasoning capabilities of large
language models within the Chinese context.
|
http://arxiv.org/pdf/2306.09212
|
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
|
cs.CL
| null | null |
cs.CL
|
20230615
|
20240117
|
[
{
"id": "2302.13971"
},
{
"id": "2304.12986"
},
{
"id": "2307.00360"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2307.15020"
},
{
"id": "2307.09288"
},
{
"id": "2305.15011"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "2304.08177"
},
{
"id": "2305.10263"
}
] |
2306.09299
| 35 |
# 5.3 RQ3: Given a set of intervention data, can a teacher model personalize its explanations for a student model to improve student performance?
The previous RQ showed how the teacher may build a few-shot mental model of the student to decide when to intervene, given a fixed budget. Upon intervention, the teacher communicates an explanation generated by prompting the model with random human explanations. This leads to an unpersonalized teacher that assumes that the explanation it generates in order to solve the task will be automatically helpful for the student. However, an effective teacher should tailor its explanations to fill in gaps in the student's knowledge [2]. With this motivation, the teacher builds another few-shot mental model of the student, this time generating helpful explanations that are more likely to benefit the student.
Teacher's Explanation Personalization Prompt. Helpful human explanations are those that rectify a student's answer, i.e., cause the student's answer to flip from incorrect (when using its own explanation) to correct (when using the human explanation). We assume that the teacher has observed the student on d demonstrations DP of exclusively helpful human explanations, denoted as DP = {x(i), y(i), e(i)_H, e(i)_S} for i = 1, ..., d, where e_H and e_S denote (helpful) human and (not helpful) student explanations respectively. The teacher conditions on these demonstrations to generate explanations
|
2306.09299#35
|
Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
|
A hallmark property of explainable AI models is the ability to teach other
agents, communicating knowledge of how to perform a task. While Large Language
Models perform complex reasoning by generating explanations for their
predictions, it is unclear whether they also make good teachers for weaker
agents. To address this, we consider a student-teacher framework between two
LLM agents and study if, when, and how the teacher should intervene with
natural language explanations to improve the student's performance. Since
communication is expensive, we define a budget such that the teacher only
communicates explanations for a fraction of the data, after which the student
should perform well on its own. We decompose the teaching problem along four
axes: (1) if teacher's test time intervention improve student predictions, (2)
when it is worth explaining a data point, (3) how the teacher should
personalize explanations to better teach the student, and (4) if teacher
explanations also improve students on future unexplained data. We first show
that teacher LLMs can indeed intervene on student reasoning to improve their
performance. Next, inspired by the Theory of Mind abilities of effective
teachers, we propose building two few-shot mental models of the student. The
first model defines an Intervention Function that simulates the utility of an
intervention, allowing the teacher to intervene when this utility is the
highest and improving student performance at lower budgets. The second model
enables the teacher to personalize explanations for a particular student and
outperform unpersonalized teachers. We also demonstrate that in multi-turn
interactions, teacher explanations generalize and learning from explained data
improves student performance on future unexplained data. Finally, we verify
that misaligned teachers can lower student performance to random chance by
intentionally misleading them.
|
http://arxiv.org/pdf/2306.09299
|
Swarnadeep Saha, Peter Hase, Mohit Bansal
|
cs.CL, cs.AI, cs.LG
|
NeurIPS 2023 (23 pages, 12 figures). Our code is available at
https://github.com/swarnaHub/ExplanationIntervention
| null |
cs.CL
|
20230615
|
20231114
|
[
{
"id": "2302.13971"
},
{
"id": "2007.12248"
},
{
"id": "2204.02311"
},
{
"id": "2302.08399"
},
{
"id": "2304.05489"
},
{
"id": "2304.11490"
},
{
"id": "2210.11416"
},
{
"id": "2110.14168"
},
{
"id": "2212.10071"
},
{
"id": "1702.08608"
},
{
"id": "2302.02083"
},
{
"id": "2301.12726"
},
{
"id": "2112.04359"
},
{
"id": "1503.02531"
},
{
"id": "2010.04119"
},
{
"id": "2303.12712"
},
{
"id": "2212.08410"
},
{
"id": "2303.17651"
},
{
"id": "2212.09721"
},
{
"id": "2305.11426"
},
{
"id": "2305.14763"
}
] |
2306.09328
| 35 |
Karl Pearson. 1901. On lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 2.
Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in Python. JMLR, 12.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In NAACL HLT.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. In ICML.
Maithra Raghu, Chiyuan Zhang, Jon Kleinberg, and Samy Bengio. 2019. Transfusion: Understanding transfer learning for medical imaging. In Advances in Neural Information Processing Systems, volume 32.
|
2306.09328#35
|
WizMap: Scalable Interactive Visualization for Exploring Large Machine Learning Embeddings
|
Machine learning models often learn latent embedding representations that
capture the domain semantics of their training data. These embedding
representations are valuable for interpreting trained models, building new
models, and analyzing new datasets. However, interpreting and using embeddings
can be challenging due to their opaqueness, high dimensionality, and the large
size of modern datasets. To tackle these challenges, we present WizMap, an
interactive visualization tool to help researchers and practitioners easily
explore large embeddings. With a novel multi-resolution embedding summarization
method and a familiar map-like interaction design, WizMap enables users to
navigate and interpret embedding spaces with ease. Leveraging modern web
technologies such as WebGL and Web Workers, WizMap scales to millions of
embedding points directly in users' web browsers and computational notebooks
without the need for dedicated backend servers. WizMap is open-source and
available at the following public demo link: https://poloclub.github.io/wizmap.
|
http://arxiv.org/pdf/2306.09328
|
Zijie J. Wang, Fred Hohman, Duen Horng Chau
|
cs.LG, cs.CL, cs.CV, cs.HC
|
8 pages, 8 figures, Accepted to ACL 2023. For a demo video, see
https://youtu.be/8fJG87QVceQ. For a live demo, see
https://poloclub.github.io/wizmap. Code is available at
https://github.com/poloclub/wizmap
| null |
cs.LG
|
20230615
|
20230615
|
[
{
"id": "1810.04805"
},
{
"id": "2210.14896"
},
{
"id": "2205.03963"
},
{
"id": "2203.05794"
},
{
"id": "1808.09074"
},
{
"id": "1802.03426"
}
] |
2306.09539
| 35 |
Fixed compute budget. As seen in Table 1, we track the exact amount of compute in TPUv4 hours that was spent training each model. The training TPUv4 hours for SLIDE:12L, TRSF-XL:2048, BRECT:FIXED:SKIP and GSS-HYBRID-L were taken from [28]. The TPUv4 hours metric measures the compute cost of training models. For our experiments, we align our training times with GSS-HYBRID-L for a fair comparison. Smaller parameter models all have 12 layers, 8 heads of size 128, embedding vectors of size 1024, an MLP with a hidden layer size of 4096 with ReLU activation functions. For larger BST models, we double the intermediate layer size from 4096 to 8192 and increase the number of attention heads to 12.
Training details. We use the same training setup as [21] and we perform our experiments using the Meliad library3 in JAX/Flax [1, 17]. We use the Adam optimizer [25] and a batch size of 32
# 3https://github.com/google-research/meliad
|
2306.09539#35
|
Block-State Transformers
|
State space models (SSMs) have shown impressive results on tasks that require
modeling long-range dependencies and efficiently scale to long sequences owing
to their subquadratic runtime complexity. Originally designed for continuous
signals, SSMs have shown superior performance on a plethora of tasks, in vision
and audio; however, SSMs still lag Transformer performance in Language Modeling
tasks. In this work, we propose a hybrid layer named Block-State Transformer
(BST), that internally combines an SSM sublayer for long-range
contextualization, and a Block Transformer sublayer for short-term
representation of sequences. We study three different, and completely
parallelizable, variants that integrate SSMs and block-wise attention. We show
that our model outperforms similar Transformer-based architectures on language
modeling perplexity and generalizes to longer sequences. In addition, the
Block-State Transformer demonstrates more than tenfold increase in speed at the
layer level compared to the Block-Recurrent Transformer when model
parallelization is employed.
|
http://arxiv.org/pdf/2306.09539
|
Mahan Fathi, Jonathan Pilault, Orhan Firat, Christopher Pal, Pierre-Luc Bacon, Ross Goroshin
|
cs.CL, cs.LG
|
NeurIPS'23 - Thirty-seventh Conference on Neural Information
Processing Systems
| null |
cs.CL
|
20230615
|
20231030
|
[
{
"id": "1901.02860"
}
] |
2306.09212
| 36 |
From the table, we see that for most models, the use of the chain-of-thought prompt does not lead to improvement. ChatGPT and ChatGLM2 gain a slight improvement on the STEM subjects after using the COT prompt, although their overall accuracy still decreases. We manually checked the outputs and found that models either fail to explicitly generate the answer option after the analysis (instead generating the content of the answer), or wrap the choice in complex context, both of which cause the regex match to fail. An obvious case is Xverse: compared to the direct answer prompt, the use of the COT prompt results in a 19.77% increase in responses that cannot be matched by our regex.
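A minimal sketch of answer-option extraction from free-form output; the actual regex used for CMMLU evaluation is not shown in this excerpt, so the pattern below is only an illustrative assumption:

```python
# Minimal sketch of extracting an answer choice (A-D) from free-form output.
# The actual regex used for CMMLU evaluation is not shown in this excerpt,
# so this pattern is only an illustrative assumption.
import re

ANSWER_PATTERN = re.compile(r"\b([ABCD])\b")

def extract_choice(response):
    match = ANSWER_PATTERN.search(response)
    return match.group(1) if match else None   # None -> counted as unmatched

print(extract_choice("答案是 B"))                    # 'B'
print(extract_choice("The correct option is (C)."))  # 'C'
# A verbose COT response that restates the option content without the letter
# cannot be matched and is counted as a failure:
print(extract_choice("经过逐步分析,正确的是第二个选项。"))  # None
```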
Figure 4: Overall accuracy of models with varying numbers of few-shot examples. (a) Foundation models. (b) SFT/RLHF models.
Do few-shot examples help? Many studies have shown that LLMs can benefit from in-context examples, while some other studies have reported opposite observations (Liu et al., 2023; Zeng, 2023). In this context, we use CMMLU as a case study to investigate in-context learning (ICL) in LLM evaluation on multiple-choice questions.
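A minimal sketch (field names and formatting are assumptions) of prepending k in-context examples to a test question for the few-shot evaluation discussed here:

```python
# Minimal sketch of prepending k in-context examples to a multiple-choice test
# question. Field names and formatting are illustrative assumptions.

def few_shot_prompt(examples, test_item, k):
    """examples: list of dicts with 'question', 'options', 'answer' (a letter)."""
    blocks = []
    for ex in examples[:k]:
        opts = "\n".join(f"{label}. {o}" for label, o in zip("ABCD", ex["options"]))
        blocks.append(f"{ex['question']}\n{opts}\n答案:{ex['answer']}")
    opts = "\n".join(f"{label}. {o}" for label, o in zip("ABCD", test_item["options"]))
    blocks.append(f"{test_item['question']}\n{opts}\n答案:")
    return "\n\n".join(blocks)

demos = [{"question": "1 + 1 = ?", "options": ["1", "2", "3", "4"], "answer": "B"}]
test = {"question": "2 + 2 = ?", "options": ["2", "3", "4", "5"]}
print(few_shot_prompt(demos, test, k=1))
```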
|
2306.09212#36
|
CMMLU: Measuring massive multitask language understanding in Chinese
|
As the capabilities of large language models (LLMs) continue to advance,
evaluating their performance becomes increasingly crucial and challenging. This
paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese
benchmark that covers various subjects, including natural science, social
sciences, engineering, and humanities. We conduct a thorough evaluation of 18
advanced multilingual- and Chinese-oriented LLMs, assessing their performance
across different subjects and settings. The results reveal that most existing
LLMs struggle to achieve an average accuracy of 50%, even when provided with
in-context examples and chain-of-thought prompts, whereas the random baseline
stands at 25%. This highlights significant room for improvement in LLMs.
Additionally, we conduct extensive experiments to identify factors impacting
the models' performance and propose directions for enhancing LLMs. CMMLU fills
the gap in evaluating the knowledge and reasoning capabilities of large
language models within the Chinese context.
|
http://arxiv.org/pdf/2306.09212
|
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
|
cs.CL
| null | null |
cs.CL
|
20230615
|
20240117
|
[
{
"id": "2302.13971"
},
{
"id": "2304.12986"
},
{
"id": "2307.00360"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2307.15020"
},
{
"id": "2307.09288"
},
{
"id": "2305.15011"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "2304.08177"
},
{
"id": "2305.10263"
}
] |
2306.09299
| 36 |
for the student. Fig. 1 shows an example of a personalization prompt. With such a prompt, teacher explanations are steered toward only those explanations that help the student.
Baselines. We compare personalized teachers with unpersonalized ones that condition on random human explanations. Appendix F also reports results with unpersonalized rationales, that are post-hoc explanations ("The answer is X because Y") and not Chain-of-Thought ("Y. So the answer is X").
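A minimal sketch (formatting and field names are assumptions) of assembling a personalization prompt from observed demonstrations of helpful human explanations before appending the new test question:

```python
# Minimal sketch of assembling a personalization prompt from demonstrations of
# helpful human explanations, then appending the new test question. The exact
# formatting and field names are illustrative assumptions.

def personalization_prompt(demonstrations, test_question, test_choices):
    """demonstrations: list of dicts with 'question', 'choices', 'human_expl',
    'answer' -- cases where the human explanation flipped the student to the
    correct answer."""
    parts = []
    for d in demonstrations:
        parts.append(
            f"Q: {d['question']}\nAnswer choices: {', '.join(d['choices'])}\n"
            f"Helpful explanation: {d['human_expl']}\nA: {d['answer']}\n"
        )
    parts.append(
        f"Q: {test_question}\nAnswer choices: {', '.join(test_choices)}\n"
        "Helpful explanation:"
    )
    return "\n".join(parts)

demo = [{"question": "Can a fish walk?", "choices": ["yes", "no"],
         "human_expl": "Fish lack legs and move by swimming.", "answer": "no"}]
print(personalization_prompt(demo, "Can a penguin fly?", ["yes", "no"]))
```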
|
2306.09299#36
|
Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
|
A hallmark property of explainable AI models is the ability to teach other
agents, communicating knowledge of how to perform a task. While Large Language
Models perform complex reasoning by generating explanations for their
predictions, it is unclear whether they also make good teachers for weaker
agents. To address this, we consider a student-teacher framework between two
LLM agents and study if, when, and how the teacher should intervene with
natural language explanations to improve the student's performance. Since
communication is expensive, we define a budget such that the teacher only
communicates explanations for a fraction of the data, after which the student
should perform well on its own. We decompose the teaching problem along four
axes: (1) if teacher's test time intervention improve student predictions, (2)
when it is worth explaining a data point, (3) how the teacher should
personalize explanations to better teach the student, and (4) if teacher
explanations also improve students on future unexplained data. We first show
that teacher LLMs can indeed intervene on student reasoning to improve their
performance. Next, inspired by the Theory of Mind abilities of effective
teachers, we propose building two few-shot mental models of the student. The
first model defines an Intervention Function that simulates the utility of an
intervention, allowing the teacher to intervene when this utility is the
highest and improving student performance at lower budgets. The second model
enables the teacher to personalize explanations for a particular student and
outperform unpersonalized teachers. We also demonstrate that in multi-turn
interactions, teacher explanations generalize and learning from explained data
improves student performance on future unexplained data. Finally, we verify
that misaligned teachers can lower student performance to random chance by
intentionally misleading them.
|
http://arxiv.org/pdf/2306.09299
|
Swarnadeep Saha, Peter Hase, Mohit Bansal
|
cs.CL, cs.AI, cs.LG
|
NeurIPS 2023 (23 pages, 12 figures). Our code is available at
https://github.com/swarnaHub/ExplanationIntervention
| null |
cs.CL
|
20230615
|
20231114
|
[
{
"id": "2302.13971"
},
{
"id": "2007.12248"
},
{
"id": "2204.02311"
},
{
"id": "2302.08399"
},
{
"id": "2304.05489"
},
{
"id": "2304.11490"
},
{
"id": "2210.11416"
},
{
"id": "2110.14168"
},
{
"id": "2212.10071"
},
{
"id": "1702.08608"
},
{
"id": "2302.02083"
},
{
"id": "2301.12726"
},
{
"id": "2112.04359"
},
{
"id": "1503.02531"
},
{
"id": "2010.04119"
},
{
"id": "2303.12712"
},
{
"id": "2212.08410"
},
{
"id": "2303.17651"
},
{
"id": "2212.09721"
},
{
"id": "2305.11426"
},
{
"id": "2305.14763"
}
] |
2306.09328
| 36 |
Samantha Robertson, Zijie J. Wang, Dominik Moritz, Mary Beth Kery, and Fred Hohman. 2023. Angler: Helping Machine Translation Practitioners Prioritize Model Improvements. In CHI Conference on Human Factors in Computing Systems.
Shaurya Rohatgi. 2022. ACL anthology corpus with full text. Github.
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High-resolution image synthesis with latent diffusion models. In CVPR.
Murray Rosenblatt. 1956. Remarks on Some Nonparametric Estimates of a Density Function. The Annals of Mathematical Statistics, 27.
Benjamin Schmidt. 2021. Deepscatter: Zoomable, animated scatterplots in the browser that scales over a billion points.
Rita Sevastjanova, Eren Cakmak, Shauli Ravfogel, Ryan Cotterell, and Mennatallah El-Assady. 2022. Visual Comparison of Language Model Adaptation. IEEE TVCG.
Bernard W Silverman. 2018. Density Estimation for Statistics and Data Analysis.
|
2306.09328#36
|
WizMap: Scalable Interactive Visualization for Exploring Large Machine Learning Embeddings
|
Machine learning models often learn latent embedding representations that
capture the domain semantics of their training data. These embedding
representations are valuable for interpreting trained models, building new
models, and analyzing new datasets. However, interpreting and using embeddings
can be challenging due to their opaqueness, high dimensionality, and the large
size of modern datasets. To tackle these challenges, we present WizMap, an
interactive visualization tool to help researchers and practitioners easily
explore large embeddings. With a novel multi-resolution embedding summarization
method and a familiar map-like interaction design, WizMap enables users to
navigate and interpret embedding spaces with ease. Leveraging modern web
technologies such as WebGL and Web Workers, WizMap scales to millions of
embedding points directly in users' web browsers and computational notebooks
without the need for dedicated backend servers. WizMap is open-source and
available at the following public demo link: https://poloclub.github.io/wizmap.
|
http://arxiv.org/pdf/2306.09328
|
Zijie J. Wang, Fred Hohman, Duen Horng Chau
|
cs.LG, cs.CL, cs.CV, cs.HC
|
8 pages, 8 figures, Accepted to ACL 2023. For a demo video, see
https://youtu.be/8fJG87QVceQ. For a live demo, see
https://poloclub.github.io/wizmap. Code is available at
https://github.com/poloclub/wizmap
| null |
cs.LG
|
20230615
|
20230615
|
[
{
"id": "1810.04805"
},
{
"id": "2210.14896"
},
{
"id": "2205.03963"
},
{
"id": "2203.05794"
},
{
"id": "1808.09074"
},
{
"id": "1802.03426"
}
] |
2306.09442
| 36 |
What comes after Explore/Establish/Exploit? The end goal of red-teaming should not be thought of as only producing a distribution of adversarial prompts, but also the data and classifier used to make them. The final results of our pipeline are 1) a labeled dataset of diverse model outputs, 2) a classifier for harmful outputs, and 3) a distribution from which to sample adversarial prompts. The labeled dataset could be used for probing the model to understand its behaviors in terms of internal mechanisms. The classifier could be used to filter training data (Korbak et al., 2023) or model outputs. Finally, the adversarial data generator could be used for probing or adversarial training. Together, these equip the red team to pursue a variety of interpretability, diagnostic, and debugging goals.
|
2306.09442#36
|
Explore, Establish, Exploit: Red Teaming Language Models from Scratch
|
Deploying large language models (LMs) can pose hazards from harmful outputs
such as toxic or false text. Prior work has introduced automated tools that
elicit harmful outputs to identify these risks. While this is a valuable step
toward securing models, these approaches rely on a pre-existing way to
efficiently classify undesirable outputs. Using a pre-existing classifier does
not allow for red-teaming to be tailored to the target model. Furthermore, when
failures can be easily classified in advance, red-teaming has limited marginal
value because problems can be avoided by simply filtering training data and/or
model outputs. Here, we consider red-teaming "from scratch," in which the
adversary does not begin with a way to classify failures. Our framework
consists of three steps: 1) Exploring the model's range of behaviors in the
desired context; 2) Establishing a definition and measurement for undesired
behavior (e.g., a classifier trained to reflect human evaluations); and 3)
Exploiting the model's flaws using this measure to develop diverse adversarial
prompts. We use this approach to red-team GPT-3 to discover classes of inputs
that elicit false statements. In doing so, we construct the CommonClaim dataset
of 20,000 statements labeled by humans as common-knowledge-true, common
knowledge-false, or neither. We are making code and data available.
|
http://arxiv.org/pdf/2306.09442
|
Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, Dylan Hadfield-Menell
|
cs.CL, cs.AI, cs.LG
| null | null |
cs.CL
|
20230615
|
20231011
|
[
{
"id": "2205.12548"
},
{
"id": "2202.03286"
},
{
"id": "1712.06751"
},
{
"id": "2206.10812"
},
{
"id": "2308.04265"
},
{
"id": "1803.05355"
},
{
"id": "2307.00175"
},
{
"id": "2203.07281"
},
{
"id": "1909.03242"
},
{
"id": "2307.02483"
},
{
"id": "2302.03668"
},
{
"id": "2203.11147"
},
{
"id": "2010.15980"
},
{
"id": "2302.06503"
},
{
"id": "2304.05197"
},
{
"id": "2103.06332"
},
{
"id": "2005.00174"
},
{
"id": "2104.13733"
},
{
"id": "2209.07858"
},
{
"id": "2205.14334"
},
{
"id": "1908.07125"
},
{
"id": "2212.08073"
},
{
"id": "2101.07691"
},
{
"id": "2307.15043"
},
{
"id": "2303.17548"
},
{
"id": "2109.01653"
},
{
"id": "2302.09664"
},
{
"id": "2212.03827"
},
{
"id": "2104.07567"
},
{
"id": "1812.05271"
},
{
"id": "1804.07461"
},
{
"id": "2104.08678"
},
{
"id": "2206.13316"
},
{
"id": "2302.08582"
},
{
"id": "2307.15217"
},
{
"id": "2303.04381"
},
{
"id": "1907.11692"
},
{
"id": "2212.09251"
},
{
"id": "2303.15056"
},
{
"id": "2212.10539"
},
{
"id": "2110.06674"
},
{
"id": "2009.02252"
},
{
"id": "2109.07958"
},
{
"id": "2005.00661"
}
] |
2306.09539
| 36 |
# 3https://github.com/google-research/meliad
and a sequence length L of 4k for training. Using a structured SSM's recurrence (such as S4) in the first layer allows us to extend the positional encoding to various lengths at inference. Smaller BST models have the Block-State layer integrated in Transformer layers {1, 7, 9} and larger BST models at layers {1, 5, 7, 9}. Since our datasets contain long documents, it is possible to train on larger sequence lengths L. Training on 4k sequence lengths allows us to test length generalization since the convolution kernel K in Equation (3) can be extended to any sequence length L. However, since we show in Section 4.2 that our model works well when extended to unseen lengths, we did not find it necessary to run expensive experiments with higher sequence lengths. For the MF model variants, we lower the SSM state dimension D by an additional factor of two to improve FFT efficiency. The state dimension reduction has a negligible impact on perplexity. The MF models have S = 32 filters while the larger MF models have S = 64 filters.
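A minimal sketch gathering the hyperparameters described above into one configuration object; the grouping is an assumption, while the values are taken from the text:

```python
# Minimal sketch: the hyperparameters described above gathered into a single
# configuration object. The grouping is an assumption; values come from the text.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BSTConfig:
    num_layers: int = 12
    num_heads: int = 8                # 12 for the larger BST models
    head_dim: int = 128
    d_model: int = 1024
    mlp_hidden: int = 4096            # 8192 for the larger BST models
    batch_size: int = 32
    train_seq_len: int = 4096
    block_state_layers: List[int] = field(default_factory=lambda: [1, 7, 9])
    mf_num_filters: int = 32          # 64 for the larger MF models

print(BSTConfig())
```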
# 4.2 Evaluating Length Generalization capabilities
|
2306.09539#36
|
Block-State Transformers
|
State space models (SSMs) have shown impressive results on tasks that require
modeling long-range dependencies and efficiently scale to long sequences owing
to their subquadratic runtime complexity. Originally designed for continuous
signals, SSMs have shown superior performance on a plethora of tasks, in vision
and audio; however, SSMs still lag Transformer performance in Language Modeling
tasks. In this work, we propose a hybrid layer named Block-State Transformer
(BST), that internally combines an SSM sublayer for long-range
contextualization, and a Block Transformer sublayer for short-term
representation of sequences. We study three different, and completely
parallelizable, variants that integrate SSMs and block-wise attention. We show
that our model outperforms similar Transformer-based architectures on language
modeling perplexity and generalizes to longer sequences. In addition, the
Block-State Transformer demonstrates more than tenfold increase in speed at the
layer level compared to the Block-Recurrent Transformer when model
parallelization is employed.
|
http://arxiv.org/pdf/2306.09539
|
Mahan Fathi, Jonathan Pilault, Orhan Firat, Christopher Pal, Pierre-Luc Bacon, Ross Goroshin
|
cs.CL, cs.LG
|
NeurIPS'23 - Thirty-seventh Conference on Neural Information
Processing Systems
| null |
cs.CL
|
20230615
|
20231030
|
[
{
"id": "1901.02860"
}
] |
2306.09093
| 37 |
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 1877–1901. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.
|
2306.09093#37
|
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration
|
Although instruction-tuned large language models (LLMs) have exhibited
remarkable capabilities across various NLP tasks, their effectiveness on other
data modalities beyond text has not been fully studied. In this work, we
propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual,
audio, and textual information. Macaw-LLM consists of three main components: a
modality module for encoding multi-modal data, a cognitive module for
harnessing pretrained LLMs, and an alignment module for harmonizing diverse
representations. Our novel alignment module seamlessly bridges multi-modal
features to textual features, simplifying the adaptation process from the
modality modules to the cognitive module. In addition, we construct a
large-scale multi-modal instruction dataset in terms of multi-turn dialogue,
including 69K image instances and 50K video instances. We have made our data,
code and model publicly available, which we hope can pave the way for future
research in multi-modal LLMs and expand the capabilities of LLMs to handle
diverse data modalities and address complex real-world scenarios.
|
http://arxiv.org/pdf/2306.09093
|
Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
|
cs.CL, cs.AI, cs.CV
|
Longyue Wang is the corresponding author. Our project page is at
https://github.com/lyuchenyang/Macaw-LLM
| null |
cs.CL
|
20230615
|
20230615
|
[] |
2306.09212
| 37 |
2023). In this context, we use CMMLU as a case study to investigate in-context learning (ICL) in LLM evaluation on multiple-choice questions.
As illustrated in Figure 4, we present the overall accuracy of models using varying numbers of in-context examples. There is a clear discrepancy: when provided with only one example, foundation models exhibit an overall boost, whereas fine-tuned models experience a decline in performance. We conjecture this is because foundation models are primarily optimized for natural text and may struggle to follow instructions; providing examples helps these models better understand the task. In contrast, SFT/RLHF models are optimized to follow instructions, and the introduction of examples creates a degree of mismatch with the data distribution seen during their fine-tuning, thus leading to a decline in performance.
When provided with more examples, while there may be fluctuations, the overall trend for foundation models indicates an improvement in performance with an increase in the number of examples. However, for fine-tuned models, there is no consistent trend.
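As an illustration of how such in-context examples are supplied for multiple-choice evaluation, the sketch below builds a k-shot prompt from solved dev-set questions; the exact prompt wording and answer format here are illustrative assumptions, not the benchmark's official prompt.

```python
def format_example(q, options, answer=None):
    """Render one multiple-choice question; include the answer for in-context examples."""
    lines = [f"题目:{q}"] + [f"{label}. {text}" for label, text in zip("ABCD", options)]
    lines.append(f"答案:{answer}" if answer is not None else "答案:")
    return "\n".join(lines)

def build_prompt(dev_examples, test_question, test_options, k=5):
    """Prepend k solved dev-set examples (k=0 gives the zero-shot prompt)."""
    shots = [format_example(q, opts, ans) for q, opts, ans in dev_examples[:k]]
    query = format_example(test_question, test_options)
    header = "以下是单项选择题,请直接给出正确答案的选项。\n\n"
    return header + "\n\n".join(shots + [query])

# One-shot example; the model is expected to continue the prompt with the option letter.
prompt = build_prompt(
    dev_examples=[("中国的首都是哪里?", ["上海", "北京", "广州", "深圳"], "B")],
    test_question="长江流经下列哪个城市?",
    test_options=["乌鲁木齐", "拉萨", "武汉", "哈尔滨"],
    k=1,
)
```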
Impact of model size on performance We explore how the model's performance improves with an increase in the number of parameters. To this end, we examine several model families and present their five-shot accuracy in relation to model size in Figure 5.
|
2306.09212#37
|
CMMLU: Measuring massive multitask language understanding in Chinese
|
As the capabilities of large language models (LLMs) continue to advance,
evaluating their performance becomes increasingly crucial and challenging. This
paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese
benchmark that covers various subjects, including natural science, social
sciences, engineering, and humanities. We conduct a thorough evaluation of 18
advanced multilingual- and Chinese-oriented LLMs, assessing their performance
across different subjects and settings. The results reveal that most existing
LLMs struggle to achieve an average accuracy of 50%, even when provided with
in-context examples and chain-of-thought prompts, whereas the random baseline
stands at 25%. This highlights significant room for improvement in LLMs.
Additionally, we conduct extensive experiments to identify factors impacting
the models' performance and propose directions for enhancing LLMs. CMMLU fills
the gap in evaluating the knowledge and reasoning capabilities of large
language models within the Chinese context.
|
http://arxiv.org/pdf/2306.09212
|
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
|
cs.CL
| null | null |
cs.CL
|
20230615
|
20240117
|
[
{
"id": "2302.13971"
},
{
"id": "2304.12986"
},
{
"id": "2307.00360"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2307.15020"
},
{
"id": "2307.09288"
},
{
"id": "2305.15011"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "2304.08177"
},
{
"id": "2305.10263"
}
] |
2306.09299
| 37 |
Main Results. Fig. 5 shows the results on StrategyQA with Flan-T5-Large as the student and Flan-T5-XL as the teacher. Both unpersonalized and personalized teachers choose intervention samples based on Expected Utility, as defined in RQ2. We observe that a personalized teacher improves the accuracy further, both at lower budgets (by 2% at 20% cost) and overall, obtaining a peak accuracy of 72.63%. However, unlike the strong supporting evidence we obtain for RQ1 and RQ2, the effect of personalization is comparatively weaker. Hence, we further test this research question with a LLaMA-65B teacher and a LLaMA-7B student in Appendix F. While scaling up the teacher model points to stronger evidence of personalization (e.g., 2.4% better student accuracy), the results are still not statistically significant (p = 0.09). Hence, we conclude that personalizing teacher explanations can further benefit the students, although our results currently suggest that the effect size may be small. We hope that future work is able to further explore explanation personalization with even stronger teacher models like GPT-4. In Appendix F, we show some comparative instances of unpersonalized and personalized explanations.
|
2306.09299#37
|
Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
|
A hallmark property of explainable AI models is the ability to teach other
agents, communicating knowledge of how to perform a task. While Large Language
Models perform complex reasoning by generating explanations for their
predictions, it is unclear whether they also make good teachers for weaker
agents. To address this, we consider a student-teacher framework between two
LLM agents and study if, when, and how the teacher should intervene with
natural language explanations to improve the student's performance. Since
communication is expensive, we define a budget such that the teacher only
communicates explanations for a fraction of the data, after which the student
should perform well on its own. We decompose the teaching problem along four
axes: (1) if teacher's test time intervention improve student predictions, (2)
when it is worth explaining a data point, (3) how the teacher should
personalize explanations to better teach the student, and (4) if teacher
explanations also improve students on future unexplained data. We first show
that teacher LLMs can indeed intervene on student reasoning to improve their
performance. Next, inspired by the Theory of Mind abilities of effective
teachers, we propose building two few-shot mental models of the student. The
first model defines an Intervention Function that simulates the utility of an
intervention, allowing the teacher to intervene when this utility is the
highest and improving student performance at lower budgets. The second model
enables the teacher to personalize explanations for a particular student and
outperform unpersonalized teachers. We also demonstrate that in multi-turn
interactions, teacher explanations generalize and learning from explained data
improves student performance on future unexplained data. Finally, we verify
that misaligned teachers can lower student performance to random chance by
intentionally misleading them.
|
http://arxiv.org/pdf/2306.09299
|
Swarnadeep Saha, Peter Hase, Mohit Bansal
|
cs.CL, cs.AI, cs.LG
|
NeurIPS 2023 (23 pages, 12 figures). Our code is available at
https://github.com/swarnaHub/ExplanationIntervention
| null |
cs.CL
|
20230615
|
20231114
|
[
{
"id": "2302.13971"
},
{
"id": "2007.12248"
},
{
"id": "2204.02311"
},
{
"id": "2302.08399"
},
{
"id": "2304.05489"
},
{
"id": "2304.11490"
},
{
"id": "2210.11416"
},
{
"id": "2110.14168"
},
{
"id": "2212.10071"
},
{
"id": "1702.08608"
},
{
"id": "2302.02083"
},
{
"id": "2301.12726"
},
{
"id": "2112.04359"
},
{
"id": "1503.02531"
},
{
"id": "2010.04119"
},
{
"id": "2303.12712"
},
{
"id": "2212.08410"
},
{
"id": "2303.17651"
},
{
"id": "2212.09721"
},
{
"id": "2305.11426"
},
{
"id": "2305.14763"
}
] |
2306.09328
| 37 |
Bernard W Silverman. 2018. Density Estimation for Statistics and Data Analysis.
Venkatesh Sivaraman, Yiwei Wu, and Adam Perer. 2022. Emblaze: Illuminating Machine Learning Representations through Interactive Comparison of Embedding Spaces. In ACM IUI.
Daniel Smilkov, Nikhil Thorat, Charles Nicholson, Emily Reif, Fernanda B. Viégas, and Martin Wattenberg. 2016. Embedding Projector: Interactive Visualization and Interpretation of Embeddings. arXiv:1611.05469.
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2020. Mpnet: Masked and permuted pre-training for language understanding. Advances in Neural Information Processing Systems, 33.
Karen Sparck Jones. 1972. A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 28.
Jian Tang, Jingzhou Liu, Ming Zhang, and Qiaozhu Mei. 2016. Visualizing Large-scale and High-dimensional Data. In WWW.
Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. JMLR, 9.
|
2306.09328#37
|
WizMap: Scalable Interactive Visualization for Exploring Large Machine Learning Embeddings
|
Machine learning models often learn latent embedding representations that
capture the domain semantics of their training data. These embedding
representations are valuable for interpreting trained models, building new
models, and analyzing new datasets. However, interpreting and using embeddings
can be challenging due to their opaqueness, high dimensionality, and the large
size of modern datasets. To tackle these challenges, we present WizMap, an
interactive visualization tool to help researchers and practitioners easily
explore large embeddings. With a novel multi-resolution embedding summarization
method and a familiar map-like interaction design, WizMap enables users to
navigate and interpret embedding spaces with ease. Leveraging modern web
technologies such as WebGL and Web Workers, WizMap scales to millions of
embedding points directly in users' web browsers and computational notebooks
without the need for dedicated backend servers. WizMap is open-source and
available at the following public demo link: https://poloclub.github.io/wizmap.
|
http://arxiv.org/pdf/2306.09328
|
Zijie J. Wang, Fred Hohman, Duen Horng Chau
|
cs.LG, cs.CL, cs.CV, cs.HC
|
8 pages, 8 figures, Accepted to ACL 2023. For a demo video, see
https://youtu.be/8fJG87QVceQ. For a live demo, see
https://poloclub.github.io/wizmap. Code is available at
https://github.com/poloclub/wizmap
| null |
cs.LG
|
20230615
|
20230615
|
[
{
"id": "1810.04805"
},
{
"id": "2210.14896"
},
{
"id": "2205.03963"
},
{
"id": "2203.05794"
},
{
"id": "1808.09074"
},
{
"id": "1802.03426"
}
] |
2306.09442
| 37 |
Limitations: Red-teaming is difficult and always subject to human limitations. Ultimately, it would be very helpful to have tools that can be used to automatically discover and elicit unambiguous failures from models. Our pipeline makes progress toward this, but we also find a tradeoff between the efficiency of red-teaming and the looseness of the permissions granted to a red-team. We show that it is possible to red-team a model with little knowledge of what failure looks like before beginning the process. But this comes at the expense of exploration and manual data screening. We emphasize that there are multiple ways to obtain diverse samples from a model, label those samples, obtain a measure of harmful behavior, and elicit that harmful behavior from an LM. The approaches used in specific applications should be tailored to those instances and should take advantage of all information that the red team has access to.
|
2306.09442#37
|
Explore, Establish, Exploit: Red Teaming Language Models from Scratch
|
Deploying large language models (LMs) can pose hazards from harmful outputs
such as toxic or false text. Prior work has introduced automated tools that
elicit harmful outputs to identify these risks. While this is a valuable step
toward securing models, these approaches rely on a pre-existing way to
efficiently classify undesirable outputs. Using a pre-existing classifier does
not allow for red-teaming to be tailored to the target model. Furthermore, when
failures can be easily classified in advance, red-teaming has limited marginal
value because problems can be avoided by simply filtering training data and/or
model outputs. Here, we consider red-teaming "from scratch," in which the
adversary does not begin with a way to classify failures. Our framework
consists of three steps: 1) Exploring the model's range of behaviors in the
desired context; 2) Establishing a definition and measurement for undesired
behavior (e.g., a classifier trained to reflect human evaluations); and 3)
Exploiting the model's flaws using this measure to develop diverse adversarial
prompts. We use this approach to red-team GPT-3 to discover classes of inputs
that elicit false statements. In doing so, we construct the CommonClaim dataset
of 20,000 statements labeled by humans as common-knowledge-true, common
knowledge-false, or neither. We are making code and data available.
|
http://arxiv.org/pdf/2306.09442
|
Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, Dylan Hadfield-Menell
|
cs.CL, cs.AI, cs.LG
| null | null |
cs.CL
|
20230615
|
20231011
|
[
{
"id": "2205.12548"
},
{
"id": "2202.03286"
},
{
"id": "1712.06751"
},
{
"id": "2206.10812"
},
{
"id": "2308.04265"
},
{
"id": "1803.05355"
},
{
"id": "2307.00175"
},
{
"id": "2203.07281"
},
{
"id": "1909.03242"
},
{
"id": "2307.02483"
},
{
"id": "2302.03668"
},
{
"id": "2203.11147"
},
{
"id": "2010.15980"
},
{
"id": "2302.06503"
},
{
"id": "2304.05197"
},
{
"id": "2103.06332"
},
{
"id": "2005.00174"
},
{
"id": "2104.13733"
},
{
"id": "2209.07858"
},
{
"id": "2205.14334"
},
{
"id": "1908.07125"
},
{
"id": "2212.08073"
},
{
"id": "2101.07691"
},
{
"id": "2307.15043"
},
{
"id": "2303.17548"
},
{
"id": "2109.01653"
},
{
"id": "2302.09664"
},
{
"id": "2212.03827"
},
{
"id": "2104.07567"
},
{
"id": "1812.05271"
},
{
"id": "1804.07461"
},
{
"id": "2104.08678"
},
{
"id": "2206.13316"
},
{
"id": "2302.08582"
},
{
"id": "2307.15217"
},
{
"id": "2303.04381"
},
{
"id": "1907.11692"
},
{
"id": "2212.09251"
},
{
"id": "2303.15056"
},
{
"id": "2212.10539"
},
{
"id": "2110.06674"
},
{
"id": "2009.02252"
},
{
"id": "2109.07958"
},
{
"id": "2005.00661"
}
] |
2306.09093
| 38 |
Feilong Chen, Minglun Han, Haozhi Zhao, Qingyang Zhang, Jing Shi, Shuang Xu, and Bo Xu. X-LLM: bootstrapping advanced large language models by treating multi-modalities as foreign languages. CoRR, abs/2305.04160, 2023. doi: 10.48550/arXiv.2305.04160. URL https://doi.org/10.48550/arXiv.2305.04160.
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https: //vicuna.lmsys.org.
|
2306.09093#38
|
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration
|
Although instruction-tuned large language models (LLMs) have exhibited
remarkable capabilities across various NLP tasks, their effectiveness on other
data modalities beyond text has not been fully studied. In this work, we
propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual,
audio, and textual information. Macaw-LLM consists of three main components: a
modality module for encoding multi-modal data, a cognitive module for
harnessing pretrained LLMs, and an alignment module for harmonizing diverse
representations. Our novel alignment module seamlessly bridges multi-modal
features to textual features, simplifying the adaptation process from the
modality modules to the cognitive module. In addition, we construct a
large-scale multi-modal instruction dataset in terms of multi-turn dialogue,
including 69K image instances and 50K video instances. We have made our data,
code and model publicly available, which we hope can pave the way for future
research in multi-modal LLMs and expand the capabilities of LLMs to handle
diverse data modalities and address complex real-world scenarios.
|
http://arxiv.org/pdf/2306.09093
|
Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
|
cs.CL, cs.AI, cs.CV
|
Longyue Wang is the corresponding author. Our project page is at
https://github.com/lyuchenyang/Macaw-LLM
| null |
cs.CL
|
20230615
|
20230615
|
[] |
2306.09212
| 38 |
[Figure 5: Five-shot average accuracy versus model size for the LLaMA, LLaMA2, Baichuan, and Falcon model families.]
From the figure, we see that both LLaMA and LLaMA2 gain a 5-point increase in scores as the model size changes from 7B to 13B, while Baichuan shows a remarkable 10-point improvement, despite Baichuan-13B having only 0.2T more training tokens than Baichuan-7B. We believe that having 7 billion parameters limits the model's capability in numerous tasks, while doubling the parameters to about 13 billion significantly enhances certain capabilities and improves memorization. As the model size continues to increase (as seen with LLaMA and LLaMA2), the efficiency of performance improvement decreases, with a 5x increase in model size resulting in a 7% improvement for LLaMA and a 15% improvement for LLaMA2. Comparing LLaMA2 and Baichuan, it becomes evident that a smaller model equipped with higher-quality monolingual training data can not only achieve but also surpass the performance of a larger model with insufficient monolingual training data, in terms of monolingual performance.
Table 3: Average accuracy classified by questions w/ and w/o negation expressions; models are organized by model family. We use the free generation evaluation strategy.
|
2306.09212#38
|
CMMLU: Measuring massive multitask language understanding in Chinese
|
As the capabilities of large language models (LLMs) continue to advance,
evaluating their performance becomes increasingly crucial and challenging. This
paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese
benchmark that covers various subjects, including natural science, social
sciences, engineering, and humanities. We conduct a thorough evaluation of 18
advanced multilingual- and Chinese-oriented LLMs, assessing their performance
across different subjects and settings. The results reveal that most existing
LLMs struggle to achieve an average accuracy of 50%, even when provided with
in-context examples and chain-of-thought prompts, whereas the random baseline
stands at 25%. This highlights significant room for improvement in LLMs.
Additionally, we conduct extensive experiments to identify factors impacting
the models' performance and propose directions for enhancing LLMs. CMMLU fills
the gap in evaluating the knowledge and reasoning capabilities of large
language models within the Chinese context.
|
http://arxiv.org/pdf/2306.09212
|
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
|
cs.CL
| null | null |
cs.CL
|
20230615
|
20240117
|
[
{
"id": "2302.13971"
},
{
"id": "2304.12986"
},
{
"id": "2307.00360"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2307.15020"
},
{
"id": "2307.09288"
},
{
"id": "2305.15011"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "2304.08177"
},
{
"id": "2305.10263"
}
] |
2306.09299
| 38 |
# 5.4 RQ4: In multi-turn interactions, do teacher explanations generalize and improve student performance across data points (beyond the explained samples)?
In the previous RQs, we showed that teacher explanations improve student predictions for the samples that the teacher explains. RQ4 explores whether teacher explanations also generalize to new instances that the teacher has not explained. In other words, this studies whether the student can perform Chain-of-Thought reasoning by conditioning only on teacher LLM explanations rather than on human ones.
|
2306.09299#38
|
Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
|
A hallmark property of explainable AI models is the ability to teach other
agents, communicating knowledge of how to perform a task. While Large Language
Models perform complex reasoning by generating explanations for their
predictions, it is unclear whether they also make good teachers for weaker
agents. To address this, we consider a student-teacher framework between two
LLM agents and study if, when, and how the teacher should intervene with
natural language explanations to improve the student's performance. Since
communication is expensive, we define a budget such that the teacher only
communicates explanations for a fraction of the data, after which the student
should perform well on its own. We decompose the teaching problem along four
axes: (1) if teacher's test time intervention improve student predictions, (2)
when it is worth explaining a data point, (3) how the teacher should
personalize explanations to better teach the student, and (4) if teacher
explanations also improve students on future unexplained data. We first show
that teacher LLMs can indeed intervene on student reasoning to improve their
performance. Next, inspired by the Theory of Mind abilities of effective
teachers, we propose building two few-shot mental models of the student. The
first model defines an Intervention Function that simulates the utility of an
intervention, allowing the teacher to intervene when this utility is the
highest and improving student performance at lower budgets. The second model
enables the teacher to personalize explanations for a particular student and
outperform unpersonalized teachers. We also demonstrate that in multi-turn
interactions, teacher explanations generalize and learning from explained data
improves student performance on future unexplained data. Finally, we verify
that misaligned teachers can lower student performance to random chance by
intentionally misleading them.
|
http://arxiv.org/pdf/2306.09299
|
Swarnadeep Saha, Peter Hase, Mohit Bansal
|
cs.CL, cs.AI, cs.LG
|
NeurIPS 2023 (23 pages, 12 figures). Our code is available at
https://github.com/swarnaHub/ExplanationIntervention
| null |
cs.CL
|
20230615
|
20231114
|
[
{
"id": "2302.13971"
},
{
"id": "2007.12248"
},
{
"id": "2204.02311"
},
{
"id": "2302.08399"
},
{
"id": "2304.05489"
},
{
"id": "2304.11490"
},
{
"id": "2210.11416"
},
{
"id": "2110.14168"
},
{
"id": "2212.10071"
},
{
"id": "1702.08608"
},
{
"id": "2302.02083"
},
{
"id": "2301.12726"
},
{
"id": "2112.04359"
},
{
"id": "1503.02531"
},
{
"id": "2010.04119"
},
{
"id": "2303.12712"
},
{
"id": "2212.08410"
},
{
"id": "2303.17651"
},
{
"id": "2212.09721"
},
{
"id": "2305.11426"
},
{
"id": "2305.14763"
}
] |
2306.09328
| 38 |
Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. JMLR, 9.
Zijie J. Wang, Evan Montoya, David Munechika, Haoyang Yang, Benjamin Hoover, and Duen Horng Chau. 2022a. DiffusionDB: A large-scale prompt gallery dataset for text-to-image generative models. arXiv:2210.14896.
Zijie J. Wang, David Munechika, Seongmin Lee, and Duen Horng Chau. 2022b. NOVA: A Practical Method for Creating Notebook-Ready Visual Analytics. arXiv:2205.03963.
Zijie J. Wang, David Munechika, Seongmin Lee, and Duen Horng Chau. 2023. SuperNOVA: Design Strategies and Opportunities for Interactive Visualization in Computational Notebooks. arXiv:2305.03039.
FlexSearch: Next-Generation full text search library for Browser and Node.js.
|
2306.09328#38
|
WizMap: Scalable Interactive Visualization for Exploring Large Machine Learning Embeddings
|
Machine learning models often learn latent embedding representations that
capture the domain semantics of their training data. These embedding
representations are valuable for interpreting trained models, building new
models, and analyzing new datasets. However, interpreting and using embeddings
can be challenging due to their opaqueness, high dimensionality, and the large
size of modern datasets. To tackle these challenges, we present WizMap, an
interactive visualization tool to help researchers and practitioners easily
explore large embeddings. With a novel multi-resolution embedding summarization
method and a familiar map-like interaction design, WizMap enables users to
navigate and interpret embedding spaces with ease. Leveraging modern web
technologies such as WebGL and Web Workers, WizMap scales to millions of
embedding points directly in users' web browsers and computational notebooks
without the need for dedicated backend servers. WizMap is open-source and
available at the following public demo link: https://poloclub.github.io/wizmap.
|
http://arxiv.org/pdf/2306.09328
|
Zijie J. Wang, Fred Hohman, Duen Horng Chau
|
cs.LG, cs.CL, cs.CV, cs.HC
|
8 pages, 8 figures, Accepted to ACL 2023. For a demo video, see
https://youtu.be/8fJG87QVceQ. For a live demo, see
https://poloclub.github.io/wizmap. Code is available at
https://github.com/poloclub/wizmap
| null |
cs.LG
|
20230615
|
20230615
|
[
{
"id": "1810.04805"
},
{
"id": "2210.14896"
},
{
"id": "2205.03963"
},
{
"id": "2203.05794"
},
{
"id": "1808.09074"
},
{
"id": "1802.03426"
}
] |
2306.09442
| 38 |
Future work: Additional progress could be made in different steps of the pipeline. For the Explore step, K-means-based diversity sampling is the only tool that we used to find a diverse subset of model behaviors. Others could be valuable as well. For the Establish step, applying our approach to cases in which the user has no prior notion of what failures look like could test how useful this approach is for finding unknown failure modes. Additional work to interpret and red-team models under different operationalizations of truth (e.g. common-knowledge vs. objective facts) would also be valuable. For the Exploit step, it remains an open problem how to effectively produce highly diverse and fluent prompts that elicit harmful outputs. Our method to reward diversity was effective, but we still observed some degree of mode collapse. More work is needed for red-teaming models in a way that will produce highly diverse adversarial inputs. In-context reinforcement learning may be a valuable new avenue for exploration (Mehrabi et al., 2023).
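As a rough illustration of the K-means-based diversity sampling mentioned above, the sketch below clusters output embeddings and keeps the sample nearest each centroid; the embedding dimensionality, cluster count, and use of scikit-learn are illustrative assumptions rather than the exact procedure used in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_diversity_sample(embeddings, k):
    """Pick k outputs that span the embedding space: cluster the embeddings
    and keep the sample closest to each cluster centroid."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(embeddings)
    chosen = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(embeddings[members] - km.cluster_centers_[c], axis=1)
        chosen.append(members[np.argmin(dists)])
    return sorted(chosen)

# embeddings: one row per sampled model output (e.g., from a sentence encoder).
embeddings = np.random.default_rng(0).standard_normal((1000, 384))
subset_ids = kmeans_diversity_sample(embeddings, k=50)
```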
# ACKNOWLEDGEMENTS
|
2306.09442#38
|
Explore, Establish, Exploit: Red Teaming Language Models from Scratch
|
Deploying large language models (LMs) can pose hazards from harmful outputs
such as toxic or false text. Prior work has introduced automated tools that
elicit harmful outputs to identify these risks. While this is a valuable step
toward securing models, these approaches rely on a pre-existing way to
efficiently classify undesirable outputs. Using a pre-existing classifier does
not allow for red-teaming to be tailored to the target model. Furthermore, when
failures can be easily classified in advance, red-teaming has limited marginal
value because problems can be avoided by simply filtering training data and/or
model outputs. Here, we consider red-teaming "from scratch," in which the
adversary does not begin with a way to classify failures. Our framework
consists of three steps: 1) Exploring the model's range of behaviors in the
desired context; 2) Establishing a definition and measurement for undesired
behavior (e.g., a classifier trained to reflect human evaluations); and 3)
Exploiting the model's flaws using this measure to develop diverse adversarial
prompts. We use this approach to red-team GPT-3 to discover classes of inputs
that elicit false statements. In doing so, we construct the CommonClaim dataset
of 20,000 statements labeled by humans as common-knowledge-true, common
knowledge-false, or neither. We are making code and data available.
|
http://arxiv.org/pdf/2306.09442
|
Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, Dylan Hadfield-Menell
|
cs.CL, cs.AI, cs.LG
| null | null |
cs.CL
|
20230615
|
20231011
|
[
{
"id": "2205.12548"
},
{
"id": "2202.03286"
},
{
"id": "1712.06751"
},
{
"id": "2206.10812"
},
{
"id": "2308.04265"
},
{
"id": "1803.05355"
},
{
"id": "2307.00175"
},
{
"id": "2203.07281"
},
{
"id": "1909.03242"
},
{
"id": "2307.02483"
},
{
"id": "2302.03668"
},
{
"id": "2203.11147"
},
{
"id": "2010.15980"
},
{
"id": "2302.06503"
},
{
"id": "2304.05197"
},
{
"id": "2103.06332"
},
{
"id": "2005.00174"
},
{
"id": "2104.13733"
},
{
"id": "2209.07858"
},
{
"id": "2205.14334"
},
{
"id": "1908.07125"
},
{
"id": "2212.08073"
},
{
"id": "2101.07691"
},
{
"id": "2307.15043"
},
{
"id": "2303.17548"
},
{
"id": "2109.01653"
},
{
"id": "2302.09664"
},
{
"id": "2212.03827"
},
{
"id": "2104.07567"
},
{
"id": "1812.05271"
},
{
"id": "1804.07461"
},
{
"id": "2104.08678"
},
{
"id": "2206.13316"
},
{
"id": "2302.08582"
},
{
"id": "2307.15217"
},
{
"id": "2303.04381"
},
{
"id": "1907.11692"
},
{
"id": "2212.09251"
},
{
"id": "2303.15056"
},
{
"id": "2212.10539"
},
{
"id": "2110.06674"
},
{
"id": "2009.02252"
},
{
"id": "2109.07958"
},
{
"id": "2005.00661"
}
] |
2306.09539
| 38 |
We notice that all models have similar perplexity for sequence lengths of 512. Both BST:SH:S4-L and GSS-HYBRID-L generalize well on 16k and 65k sequence lengths for PG19 and GitHub. For arXiv, GSS-HYBRID-L and BST:MF:UNSTRUCT-L perplexities increase drastically, potentially due to noise in the arXiv dataset (as indicated by variation in the perplexity metric over time). [28] also reported that larger GSS models had difficulty generalizing to higher lengths. Interestingly, for arXiv again, BRECT:FIXED:SKIP-L performs very well at higher sequence lengths. We hypothesize that the Block-Recurrent model's access to the entire past during training, via a non-differentiable cache of representations across sequences, helps retain a "memory" of dependencies between key items in an arXiv article, allowing the model to access past symbols, definitions, theorems or equations beyond the 4k training sequence length. We also note that BST:MF:UNSTRUCT-L and BRECT:FIXED:SKIP-L outperform other methods on PG19 up to a sequence length of 16K. Perplexity performance on PG19 is perhaps less reliant on long term relationships between tokens, which can explain the performance of models that have no explicit built-in mechanisms for length generalization.
|
2306.09539#38
|
Block-State Transformers
|
State space models (SSMs) have shown impressive results on tasks that require
modeling long-range dependencies and efficiently scale to long sequences owing
to their subquadratic runtime complexity. Originally designed for continuous
signals, SSMs have shown superior performance on a plethora of tasks, in vision
and audio; however, SSMs still lag Transformer performance in Language Modeling
tasks. In this work, we propose a hybrid layer named Block-State Transformer
(BST), that internally combines an SSM sublayer for long-range
contextualization, and a Block Transformer sublayer for short-term
representation of sequences. We study three different, and completely
parallelizable, variants that integrate SSMs and block-wise attention. We show
that our model outperforms similar Transformer-based architectures on language
modeling perplexity and generalizes to longer sequences. In addition, the
Block-State Transformer demonstrates more than tenfold increase in speed at the
layer level compared to the Block-Recurrent Transformer when model
parallelization is employed.
|
http://arxiv.org/pdf/2306.09539
|
Mahan Fathi, Jonathan Pilault, Orhan Firat, Christopher Pal, Pierre-Luc Bacon, Ross Goroshin
|
cs.CL, cs.LG
|
NeurIPS'23 - Thirty-seventh Conference on Neural Information
Processing Systems
| null |
cs.CL
|
20230615
|
20231030
|
[
{
"id": "1901.02860"
}
] |
2306.09093
| 39 |
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica
|
2306.09093#39
|
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration
|
Although instruction-tuned large language models (LLMs) have exhibited
remarkable capabilities across various NLP tasks, their effectiveness on other
data modalities beyond text has not been fully studied. In this work, we
propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual,
audio, and textual information. Macaw-LLM consists of three main components: a
modality module for encoding multi-modal data, a cognitive module for
harnessing pretrained LLMs, and an alignment module for harmonizing diverse
representations. Our novel alignment module seamlessly bridges multi-modal
features to textual features, simplifying the adaptation process from the
modality modules to the cognitive module. In addition, we construct a
large-scale multi-modal instruction dataset in terms of multi-turn dialogue,
including 69K image instances and 50K video instances. We have made our data,
code and model publicly available, which we hope can pave the way for future
research in multi-modal LLMs and expand the capabilities of LLMs to handle
diverse data modalities and address complex real-world scenarios.
|
http://arxiv.org/pdf/2306.09093
|
Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
|
cs.CL, cs.AI, cs.CV
|
Longyue Wang is the corresponding author. Our project page is at
https://github.com/lyuchenyang/Macaw-LLM
| null |
cs.CL
|
20230615
|
20230615
|
[] |
2306.09212
| 39 |
Table 3: Average accuracy classified by questions w/ and w/o negation expressions; models are organized by model family. We use the free generation evaluation strategy.
Table 4: Average accuracy classified by questions w/ and w/o sub-options. We use the free generation strategy, except for the models with "*", which are foundation models without instruction-following ability.
Model                0-shot w/   0-shot w/o   5-shot w/   5-shot w/o
ChatGPT                 52.28       53.60        54.76       56.07
GPT4                    70.72       69.13        72.08       71.21
LLaMA-65B               22.94       36.54        37.09       40.18
LLaMA2-13B              24.16       37.27        30.32       39.49
LLaMA2-13B-Chat         28.24       37.90        34.40       38.73
Baichuan-13B-Base       47.84       55.47        51.20       56.03
Baichuan2-13B-Base      59.52       61.96        61.60       62.61
Baichuan2-13B-Chat      58.64       60.60        56.96       60.89
ChatGLM-6B              34.00       41.62        31.12       38.00
ChatGLM2-6B             51.20       51.88        50.08       50.04
|
2306.09212#39
|
CMMLU: Measuring massive multitask language understanding in Chinese
|
As the capabilities of large language models (LLMs) continue to advance,
evaluating their performance becomes increasingly crucial and challenging. This
paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese
benchmark that covers various subjects, including natural science, social
sciences, engineering, and humanities. We conduct a thorough evaluation of 18
advanced multilingual- and Chinese-oriented LLMs, assessing their performance
across different subjects and settings. The results reveal that most existing
LLMs struggle to achieve an average accuracy of 50%, even when provided with
in-context examples and chain-of-thought prompts, whereas the random baseline
stands at 25%. This highlights significant room for improvement in LLMs.
Additionally, we conduct extensive experiments to identify factors impacting
the models' performance and propose directions for enhancing LLMs. CMMLU fills
the gap in evaluating the knowledge and reasoning capabilities of large
language models within the Chinese context.
|
http://arxiv.org/pdf/2306.09212
|
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
|
cs.CL
| null | null |
cs.CL
|
20230615
|
20240117
|
[
{
"id": "2302.13971"
},
{
"id": "2304.12986"
},
{
"id": "2307.00360"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2307.15020"
},
{
"id": "2307.09288"
},
{
"id": "2305.15011"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "2304.08177"
},
{
"id": "2305.10263"
}
] |
2306.09299
| 39 |
Study Design. We consider a multi-turn teaching setup (Fig. 6), in which at each turn the teacher chooses to explain a few samples from a pool of unexplained examples which are then added to the student's prompt. The prompt consists of demonstrations of the teacher's explanations and predictions. The student then conditions only on these in-context examples (without any human demonstrations) to generate predictions for the test samples (where there is no teacher intervention). For choosing the data points to explain at each round, we use the Expected Utility Intervention Function (from RQ2), and for generating the teacher explanations, we leverage the ToM prompt (from RQ3). We say that teacher explanations generalize if conditioning on demonstrations of explained points improves upon demonstrations with no explanations (i.e., only QA pairs) or self-explanations (i.e., demonstrations with student explanations and predictions). We consider five rounds in total with LLaMA-7B as the student and LLaMA-65B as the teacher, adding two explained samples in each round. We compare the student accuracy after each round with teacher-explained, student-explained, and unexplained demonstrations.
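A schematic version of this multi-turn loop is sketched below; the `expected_utility`, `explain`, and `student_predict` callables are hypothetical stand-ins for the Expected Utility Intervention Function, the ToM explanation prompt, and the student LLM call, and the toy data exists only to make the sketch runnable.

```python
from dataclasses import dataclass
import random

@dataclass
class Example:
    question: str
    label: str

def multi_turn_teaching(pool, test_set, expected_utility, explain, student_predict,
                        rounds=5, per_round=2):
    """Each round the teacher explains the highest-utility unexplained samples;
    the explained demonstrations are added to the student's prompt, and the
    student is then evaluated on unexplained test points."""
    demos, accuracy_per_round = [], []
    for _ in range(rounds):
        pool = sorted(pool, key=expected_utility, reverse=True)
        picked, pool = pool[:per_round], pool[per_round:]
        demos += [(x, *explain(x)) for x in picked]   # (question, explanation, answer)
        correct = sum(student_predict(demos, t) == t.label for t in test_set)
        accuracy_per_round.append(correct / len(test_set))
    return accuracy_per_round

# Toy stand-ins for the teacher/student LLM calls (the real system prompts LLaMA models).
random.seed(0)
pool = [Example(f"q{i}", random.choice(["yes", "no"])) for i in range(20)]
test = [Example(f"t{i}", random.choice(["yes", "no"])) for i in range(50)]
acc = multi_turn_teaching(
    pool, test,
    expected_utility=lambda x: random.random(),
    explain=lambda x: (f"reasoning about {x.question}", x.label),
    student_predict=lambda demos, t: random.choice(["yes", "no"]),
)
print(acc)
```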
Main Results. Fig. 7 shows the results. We observe that teacher explanations improve student performance on future unexplained test points as well, by a significant 6 points (55% → 61.6%).
|
2306.09299#39
|
Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
|
A hallmark property of explainable AI models is the ability to teach other
agents, communicating knowledge of how to perform a task. While Large Language
Models perform complex reasoning by generating explanations for their
predictions, it is unclear whether they also make good teachers for weaker
agents. To address this, we consider a student-teacher framework between two
LLM agents and study if, when, and how the teacher should intervene with
natural language explanations to improve the student's performance. Since
communication is expensive, we define a budget such that the teacher only
communicates explanations for a fraction of the data, after which the student
should perform well on its own. We decompose the teaching problem along four
axes: (1) if teacher's test time intervention improve student predictions, (2)
when it is worth explaining a data point, (3) how the teacher should
personalize explanations to better teach the student, and (4) if teacher
explanations also improve students on future unexplained data. We first show
that teacher LLMs can indeed intervene on student reasoning to improve their
performance. Next, inspired by the Theory of Mind abilities of effective
teachers, we propose building two few-shot mental models of the student. The
first model defines an Intervention Function that simulates the utility of an
intervention, allowing the teacher to intervene when this utility is the
highest and improving student performance at lower budgets. The second model
enables the teacher to personalize explanations for a particular student and
outperform unpersonalized teachers. We also demonstrate that in multi-turn
interactions, teacher explanations generalize and learning from explained data
improves student performance on future unexplained data. Finally, we verify
that misaligned teachers can lower student performance to random chance by
intentionally misleading them.
|
http://arxiv.org/pdf/2306.09299
|
Swarnadeep Saha, Peter Hase, Mohit Bansal
|
cs.CL, cs.AI, cs.LG
|
NeurIPS 2023 (23 pages, 12 figures). Our code is available at
https://github.com/swarnaHub/ExplanationIntervention
| null |
cs.CL
|
20230615
|
20231114
|
[
{
"id": "2302.13971"
},
{
"id": "2007.12248"
},
{
"id": "2204.02311"
},
{
"id": "2302.08399"
},
{
"id": "2304.05489"
},
{
"id": "2304.11490"
},
{
"id": "2210.11416"
},
{
"id": "2110.14168"
},
{
"id": "2212.10071"
},
{
"id": "1702.08608"
},
{
"id": "2302.02083"
},
{
"id": "2301.12726"
},
{
"id": "2112.04359"
},
{
"id": "1503.02531"
},
{
"id": "2010.04119"
},
{
"id": "2303.12712"
},
{
"id": "2212.08410"
},
{
"id": "2303.17651"
},
{
"id": "2212.09721"
},
{
"id": "2305.11426"
},
{
"id": "2305.14763"
}
] |
2306.09442
| 39 |
# ACKNOWLEDGEMENTS
We thank Ethan Perez and Tomek Korbak for their advice on how to approach this work. We are also appreciative of feedback from Mehul Damani. Stephen Casper received support for this work from the Future of Life institute. Jason Lin, Joe Kwon, and Gatlen Culp were supported in part by the Stanford Existential Risk Initiative. Compute and data collection was paid for in part with the support of the Open Philanthropy Project.
# REFERENCES
C.J. Adams, Jeffrey Sorensen, Julia Elliott, Lucas Dixon, Mark Mcdonald, and Will Cukierski. Toxic comment classification challenge, 2017. URL https://kaggle.com/competitions/jigsaw-toxic-comment-classification-challenge.
Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, and Sonal Gupta. Muppet: Massive multi-task representations with pre-finetuning. CoRR, abs/2101.11038, 2021. URL https://arxiv.org/abs/2101.11038.
Surge AI. Surge ai, 2023. URL https://www.surgehq.ai.
|
2306.09442#39
|
Explore, Establish, Exploit: Red Teaming Language Models from Scratch
|
Deploying large language models (LMs) can pose hazards from harmful outputs
such as toxic or false text. Prior work has introduced automated tools that
elicit harmful outputs to identify these risks. While this is a valuable step
toward securing models, these approaches rely on a pre-existing way to
efficiently classify undesirable outputs. Using a pre-existing classifier does
not allow for red-teaming to be tailored to the target model. Furthermore, when
failures can be easily classified in advance, red-teaming has limited marginal
value because problems can be avoided by simply filtering training data and/or
model outputs. Here, we consider red-teaming "from scratch," in which the
adversary does not begin with a way to classify failures. Our framework
consists of three steps: 1) Exploring the model's range of behaviors in the
desired context; 2) Establishing a definition and measurement for undesired
behavior (e.g., a classifier trained to reflect human evaluations); and 3)
Exploiting the model's flaws using this measure to develop diverse adversarial
prompts. We use this approach to red-team GPT-3 to discover classes of inputs
that elicit false statements. In doing so, we construct the CommonClaim dataset
of 20,000 statements labeled by humans as common-knowledge-true, common
knowledge-false, or neither. We are making code and data available.
|
http://arxiv.org/pdf/2306.09442
|
Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, Dylan Hadfield-Menell
|
cs.CL, cs.AI, cs.LG
| null | null |
cs.CL
|
20230615
|
20231011
|
[
{
"id": "2205.12548"
},
{
"id": "2202.03286"
},
{
"id": "1712.06751"
},
{
"id": "2206.10812"
},
{
"id": "2308.04265"
},
{
"id": "1803.05355"
},
{
"id": "2307.00175"
},
{
"id": "2203.07281"
},
{
"id": "1909.03242"
},
{
"id": "2307.02483"
},
{
"id": "2302.03668"
},
{
"id": "2203.11147"
},
{
"id": "2010.15980"
},
{
"id": "2302.06503"
},
{
"id": "2304.05197"
},
{
"id": "2103.06332"
},
{
"id": "2005.00174"
},
{
"id": "2104.13733"
},
{
"id": "2209.07858"
},
{
"id": "2205.14334"
},
{
"id": "1908.07125"
},
{
"id": "2212.08073"
},
{
"id": "2101.07691"
},
{
"id": "2307.15043"
},
{
"id": "2303.17548"
},
{
"id": "2109.01653"
},
{
"id": "2302.09664"
},
{
"id": "2212.03827"
},
{
"id": "2104.07567"
},
{
"id": "1812.05271"
},
{
"id": "1804.07461"
},
{
"id": "2104.08678"
},
{
"id": "2206.13316"
},
{
"id": "2302.08582"
},
{
"id": "2307.15217"
},
{
"id": "2303.04381"
},
{
"id": "1907.11692"
},
{
"id": "2212.09251"
},
{
"id": "2303.15056"
},
{
"id": "2212.10539"
},
{
"id": "2110.06674"
},
{
"id": "2009.02252"
},
{
"id": "2109.07958"
},
{
"id": "2005.00661"
}
] |
2306.09539
| 39 |
The analysis also allows us to draw a clear distinction between structured and unstructured SSMs integrated into hybrid architectures. As previously mentioned in Section 3.1, SSMs such as DSS and S4 use a structured kernel K, built from learned matrices A, B and C for any sequence length L in Equation 3. Since K is extendable to any arbitrary sequence length L, both BST:SH:S4-L and GSS-HYBRID-L have a built-in mechanism for length generalization that the unstructured BST:MF:UNSTRUCT-L model does not. BST:MF:UNSTRUCT-L performs best on the training sequence length of 4K and is on par at 512, with perplexity increasing for unseen 16K and 65K sequence lengths. BST:SH:S4-L has by far the best perplexity for 65K sequence lengths on PG19, GitHub and arXiv. Similarly to [21], we also notice that perplexity improves when we extend the context window (sequence length) for PG19 and GitHub.
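The distinction can be made concrete with the schematic sketch below: a structured kernel is re-materialized from learned parameters for whatever length is requested, whereas an unstructured kernel is a directly learned array of fixed training length with no rule for producing longer-range taps; the classes and parameter choices are illustrative assumptions, not the models' actual code.

```python
import numpy as np

class StructuredKernel:
    """Kernel materialized from learned diagonal-SSM parameters; any length L is valid."""
    def __init__(self, Lambda, B, C, step):
        self.Lambda, self.B, self.C, self.step = Lambda, B, C, step

    def __call__(self, L):
        A_bar = np.exp(self.step * self.Lambda)
        B_bar = (A_bar - 1.0) / self.Lambda * self.B
        powers = A_bar[None, :] ** np.arange(L)[:, None]
        return (powers * (self.C * B_bar)[None, :]).sum(-1).real

class UnstructuredKernel:
    """Kernel learned directly as a length-L_train array; it has no rule for
    producing taps beyond L_train, so longer contexts fall back to zero-padding."""
    def __init__(self, weights):
        self.weights = weights                            # shape (L_train,)

    def __call__(self, L):
        L_train = self.weights.shape[0]
        if L <= L_train:
            return self.weights[:L]
        return np.pad(self.weights, (0, L - L_train))     # no learned long-range taps

rng = np.random.default_rng(0)
structured = StructuredKernel(-np.abs(rng.standard_normal(8)) + 0j,
                              rng.standard_normal(8) + 0j,
                              rng.standard_normal(8) + 0j, step=1e-2)
unstructured = UnstructuredKernel(rng.standard_normal(4096))
k_long_structured, k_long_unstructured = structured(65_536), unstructured(65_536)
```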
[Figure 3: Perplexity versus sequence length ({512, 16k, 65k}) for BST:SH:S4-L, BST:MF:unstruct-L, GSS-Hybrid-L, and BRecT:fixed:skip-L on PG19, arXiv, and GitHub.]
|
2306.09539#39
|
Block-State Transformers
|
State space models (SSMs) have shown impressive results on tasks that require
modeling long-range dependencies and efficiently scale to long sequences owing
to their subquadratic runtime complexity. Originally designed for continuous
signals, SSMs have shown superior performance on a plethora of tasks, in vision
and audio; however, SSMs still lag Transformer performance in Language Modeling
tasks. In this work, we propose a hybrid layer named Block-State Transformer
(BST), that internally combines an SSM sublayer for long-range
contextualization, and a Block Transformer sublayer for short-term
representation of sequences. We study three different, and completely
parallelizable, variants that integrate SSMs and block-wise attention. We show
that our model outperforms similar Transformer-based architectures on language
modeling perplexity and generalizes to longer sequences. In addition, the
Block-State Transformer demonstrates more than tenfold increase in speed at the
layer level compared to the Block-Recurrent Transformer when model
parallelization is employed.
|
http://arxiv.org/pdf/2306.09539
|
Mahan Fathi, Jonathan Pilault, Orhan Firat, Christopher Pal, Pierre-Luc Bacon, Ross Goroshin
|
cs.CL, cs.LG
|
NeurIPS'23 - Thirty-seventh Conference on Neural Information
Processing Systems
| null |
cs.CL
|
20230615
|
20231030
|
[
{
"id": "1901.02860"
}
] |
2306.09093
| 40 |
nick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. Palm: Scaling language modeling with pathways. CoRR, abs/2204.02311, 2022. doi: 10.48550/arXiv.2204.02311. URL https://doi.org/10.48550/arXiv.2204.02311.
|
2306.09093#40
|
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration
|
Although instruction-tuned large language models (LLMs) have exhibited
remarkable capabilities across various NLP tasks, their effectiveness on other
data modalities beyond text has not been fully studied. In this work, we
propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual,
audio, and textual information. Macaw-LLM consists of three main components: a
modality module for encoding multi-modal data, a cognitive module for
harnessing pretrained LLMs, and an alignment module for harmonizing diverse
representations. Our novel alignment module seamlessly bridges multi-modal
features to textual features, simplifying the adaptation process from the
modality modules to the cognitive module. In addition, we construct a
large-scale multi-modal instruction dataset in terms of multi-turn dialogue,
including 69K image instances and 50K video instances. We have made our data,
code and model publicly available, which we hope can pave the way for future
research in multi-modal LLMs and expand the capabilities of LLMs to handle
diverse data modalities and address complex real-world scenarios.
|
http://arxiv.org/pdf/2306.09093
|
Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
|
cs.CL, cs.AI, cs.CV
|
Longyue Wang is the corresponding author. Our project page is at
https://github.com/lyuchenyang/Macaw-LLM
| null |
cs.CL
|
20230615
|
20230615
|
[] |
2306.09212
| 40 |
Model                0-shot w/   0-shot w/o   5-shot w/   5-shot w/o
GPT4                    51.14       69.74        53.41       71.72
ChatGPT                 34.85       53.90        33.33       56.47
LLaMA2-70B*             25.38       49.85        28.03       54.04
Falcon-40B*             23.11       38.72        28.41       42.14
Baichuan2-13B-Chat      47.73       59.78        34.09       57.41
  +COT                  35.61       54.61          –           –
BatGPT-15B-sirius       30.68       46.51        31.06       41.78
  +COT                  32.95       44.25          –           –
ChatGLM2-6B             28.79       50.84        27.65       49.82
  +COT                  36.74       50.18          –           –
Are questions with negation more challenging? Previous research has pointed out that language models may encounter challenges with negation expressions (Kassner & Schütze, 2020; Hosseini et al., 2021). To investigate whether this issue persists in the context of Chinese language and LLMs, we first employ string matching to classify the test set into questions with and without negation words.
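A minimal sketch of this string-matching split is shown below; the negation cue list is an illustrative assumption, since the paper's exact word list is not reproduced here.

```python
# Illustrative Chinese negation cues; the benchmark's exact list may differ.
NEGATION_WORDS = ["不", "没", "没有", "无", "非", "未", "否", "别", "勿"]

def has_negation(question: str) -> bool:
    """Simple string matching: flag a question if it contains any negation cue."""
    return any(word in question for word in NEGATION_WORDS)

def split_by_negation(test_set):
    """Partition the test set into questions with and without negation expressions."""
    with_neg = [q for q in test_set if has_negation(q["question"])]
    without_neg = [q for q in test_set if not has_negation(q["question"])]
    return with_neg, without_neg

with_neg, without_neg = split_by_negation([
    {"question": "下列哪项不是哺乳动物?"},
    {"question": "中国的首都是哪里?"},
])
```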
|
2306.09212#40
|
CMMLU: Measuring massive multitask language understanding in Chinese
|
As the capabilities of large language models (LLMs) continue to advance,
evaluating their performance becomes increasingly crucial and challenging. This
paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese
benchmark that covers various subjects, including natural science, social
sciences, engineering, and humanities. We conduct a thorough evaluation of 18
advanced multilingual- and Chinese-oriented LLMs, assessing their performance
across different subjects and settings. The results reveal that most existing
LLMs struggle to achieve an average accuracy of 50%, even when provided with
in-context examples and chain-of-thought prompts, whereas the random baseline
stands at 25%. This highlights significant room for improvement in LLMs.
Additionally, we conduct extensive experiments to identify factors impacting
the models' performance and propose directions for enhancing LLMs. CMMLU fills
the gap in evaluating the knowledge and reasoning capabilities of large
language models within the Chinese context.
|
http://arxiv.org/pdf/2306.09212
|
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
|
cs.CL
| null | null |
cs.CL
|
20230615
|
20240117
|
[
{
"id": "2302.13971"
},
{
"id": "2304.12986"
},
{
"id": "2307.00360"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2307.15020"
},
{
"id": "2307.09288"
},
{
"id": "2305.15011"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "2304.08177"
},
{
"id": "2305.10263"
}
] |
2306.09299
| 40 |
Main Results. Fig. 7 shows the results. We observe that teacher explanations improve student performance on future unexplained test points as well, by a significant 6 points (55% → 61.6%).
[Figure 8: Accuracy versus Intervention Budget (in %), panels (a) and (b); legend: Random, Neg Expected Utility, Unpersonalized-CoT, Deceiving Explanations.]
Figure 8: RQ5: (a) Negative Implication of RQ2: Comparison of intervention based on negative ex- pected utility with random intervention on StrategyQA. (b) Negative Implication of RQ3: Comparison of an unpersonalized teacher (generating explanations conditioned on random human explanations) versus a deceiving teacher (generating explanations conditioned on wrong explanations).
[Figure: Student accuracy versus number of rounds with No Explanations, Student Explanations, and Teacher Explanations.]
|
2306.09299#40
|
Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization
|
A hallmark property of explainable AI models is the ability to teach other
agents, communicating knowledge of how to perform a task. While Large Language
Models perform complex reasoning by generating explanations for their
predictions, it is unclear whether they also make good teachers for weaker
agents. To address this, we consider a student-teacher framework between two
LLM agents and study if, when, and how the teacher should intervene with
natural language explanations to improve the student's performance. Since
communication is expensive, we define a budget such that the teacher only
communicates explanations for a fraction of the data, after which the student
should perform well on its own. We decompose the teaching problem along four
axes: (1) if teacher's test time intervention improve student predictions, (2)
when it is worth explaining a data point, (3) how the teacher should
personalize explanations to better teach the student, and (4) if teacher
explanations also improve students on future unexplained data. We first show
that teacher LLMs can indeed intervene on student reasoning to improve their
performance. Next, inspired by the Theory of Mind abilities of effective
teachers, we propose building two few-shot mental models of the student. The
first model defines an Intervention Function that simulates the utility of an
intervention, allowing the teacher to intervene when this utility is the
highest and improving student performance at lower budgets. The second model
enables the teacher to personalize explanations for a particular student and
outperform unpersonalized teachers. We also demonstrate that in multi-turn
interactions, teacher explanations generalize and learning from explained data
improves student performance on future unexplained data. Finally, we verify
that misaligned teachers can lower student performance to random chance by
intentionally misleading them.
|
http://arxiv.org/pdf/2306.09299
|
Swarnadeep Saha, Peter Hase, Mohit Bansal
|
cs.CL, cs.AI, cs.LG
|
NeurIPS 2023 (23 pages, 12 figures). Our code is available at
https://github.com/swarnaHub/ExplanationIntervention
| null |
cs.CL
|
20230615
|
20231114
|
[
{
"id": "2302.13971"
},
{
"id": "2007.12248"
},
{
"id": "2204.02311"
},
{
"id": "2302.08399"
},
{
"id": "2304.05489"
},
{
"id": "2304.11490"
},
{
"id": "2210.11416"
},
{
"id": "2110.14168"
},
{
"id": "2212.10071"
},
{
"id": "1702.08608"
},
{
"id": "2302.02083"
},
{
"id": "2301.12726"
},
{
"id": "2112.04359"
},
{
"id": "1503.02531"
},
{
"id": "2010.04119"
},
{
"id": "2303.12712"
},
{
"id": "2212.08410"
},
{
"id": "2303.17651"
},
{
"id": "2212.09721"
},
{
"id": "2305.11426"
},
{
"id": "2305.14763"
}
] |
2306.09442
| 40 |
Surge AI. Surge ai, 2023. URL https://www.surgehq.ai.
Isabelle Augenstein, Christina Lioma, Dongsheng Wang, Lucas Chaves Lima, Casper Hansen, Christian Hansen, and Jakob Grue Simonsen. Multifc: A real-world multi-domain dataset for evidence-based fact checking of claims. arXiv preprint arXiv:1909.03242, 2019.
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073, 2022.
Max Bartolo, Tristan Thrush, Robin Jia, Sebastian Riedel, Pontus Stenetorp, and Douwe Kiela. Improving question answering model robustness with synthetic adversarial data generation. arXiv preprint arXiv:2104.08678, 2021.
|
2306.09442#40
|
Explore, Establish, Exploit: Red Teaming Language Models from Scratch
|
Deploying large language models (LMs) can pose hazards from harmful outputs
such as toxic or false text. Prior work has introduced automated tools that
elicit harmful outputs to identify these risks. While this is a valuable step
toward securing models, these approaches rely on a pre-existing way to
efficiently classify undesirable outputs. Using a pre-existing classifier does
not allow for red-teaming to be tailored to the target model. Furthermore, when
failures can be easily classified in advance, red-teaming has limited marginal
value because problems can be avoided by simply filtering training data and/or
model outputs. Here, we consider red-teaming "from scratch," in which the
adversary does not begin with a way to classify failures. Our framework
consists of three steps: 1) Exploring the model's range of behaviors in the
desired context; 2) Establishing a definition and measurement for undesired
behavior (e.g., a classifier trained to reflect human evaluations); and 3)
Exploiting the model's flaws using this measure to develop diverse adversarial
prompts. We use this approach to red-team GPT-3 to discover classes of inputs
that elicit false statements. In doing so, we construct the CommonClaim dataset
of 20,000 statements labeled by humans as common-knowledge-true,
common-knowledge-false, or neither. We are making code and data available.
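The three steps described above can be summarized as a small pipeline. The Python sketch below is only a schematic with toy stand-ins (the keyword "classifier", the label function, and the seed prompts are hypothetical), not the paper's released code.

def red_team_from_scratch(target_model, seed_prompts, human_label_fn):
    """Schematic of the Explore / Establish / Exploit loop described above."""
    # 1) Explore: sample the target model's behavior in the intended context.
    outputs = [target_model(p) for p in seed_prompts]

    # 2) Establish: label outputs (here via a stand-in for human annotation)
    #    and fit a measure of undesired behavior -- a trivial keyword classifier.
    labeled = [(o, human_label_fn(o)) for o in outputs]
    good_words = {w for text, bad in labeled if not bad for w in text.split()}
    bad_words = {w for text, bad in labeled if bad for w in text.split()} - good_words
    classifier = lambda text: any(w in bad_words for w in text.split())

    # 3) Exploit: keep prompts whose completions the learned measure flags.
    adversarial = [p for p in seed_prompts if classifier(target_model(p))]
    return classifier, adversarial

# Toy usage with stand-in components:
toy_model = lambda p: p.upper()
toy_label = lambda o: "FALSE" in o
_, adversarial_prompts = red_team_from_scratch(
    toy_model, ["a true claim", "a false claim"], toy_label)
print(adversarial_prompts)  # ['a false claim']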
|
http://arxiv.org/pdf/2306.09442
|
Stephen Casper, Jason Lin, Joe Kwon, Gatlen Culp, Dylan Hadfield-Menell
|
cs.CL, cs.AI, cs.LG
| null | null |
cs.CL
|
20230615
|
20231011
|
[
{
"id": "2205.12548"
},
{
"id": "2202.03286"
},
{
"id": "1712.06751"
},
{
"id": "2206.10812"
},
{
"id": "2308.04265"
},
{
"id": "1803.05355"
},
{
"id": "2307.00175"
},
{
"id": "2203.07281"
},
{
"id": "1909.03242"
},
{
"id": "2307.02483"
},
{
"id": "2302.03668"
},
{
"id": "2203.11147"
},
{
"id": "2010.15980"
},
{
"id": "2302.06503"
},
{
"id": "2304.05197"
},
{
"id": "2103.06332"
},
{
"id": "2005.00174"
},
{
"id": "2104.13733"
},
{
"id": "2209.07858"
},
{
"id": "2205.14334"
},
{
"id": "1908.07125"
},
{
"id": "2212.08073"
},
{
"id": "2101.07691"
},
{
"id": "2307.15043"
},
{
"id": "2303.17548"
},
{
"id": "2109.01653"
},
{
"id": "2302.09664"
},
{
"id": "2212.03827"
},
{
"id": "2104.07567"
},
{
"id": "1812.05271"
},
{
"id": "1804.07461"
},
{
"id": "2104.08678"
},
{
"id": "2206.13316"
},
{
"id": "2302.08582"
},
{
"id": "2307.15217"
},
{
"id": "2303.04381"
},
{
"id": "1907.11692"
},
{
"id": "2212.09251"
},
{
"id": "2303.15056"
},
{
"id": "2212.10539"
},
{
"id": "2110.06674"
},
{
"id": "2009.02252"
},
{
"id": "2109.07958"
},
{
"id": "2005.00661"
}
] |
2306.09539
| 40 |
Figure 3: Length Generalization for sequence lengths {512, 16k, 65k} on PG19 (left), arXiv (middle) and GitHub (right). BST:SH:S4-L generalizes better than other baselines, including GSS-HYBRID-L that uses GSS, a structured SSM. GSS-HYBRID-L numbers are from [28].
[Plot residue: "Benchmarking Block-State Transformer" (Time (ms) vs. Sequence Length) and "Perplexity by Window Length"; curves: BST:SH:S4, BST:NF:unstruct, Slide:12L, GSS-Hybrid, Rec:fixed:skip, BRecT:fixed:skip, BST:MH:S4 (ours), BST:SH:S4 (ours).]
|
2306.09539#40
|
Block-State Transformers
|
State space models (SSMs) have shown impressive results on tasks that require
modeling long-range dependencies and efficiently scale to long sequences owing
to their subquadratic runtime complexity. Originally designed for continuous
signals, SSMs have shown superior performance on a plethora of tasks, in vision
and audio; however, SSMs still lag Transformer performance in Language Modeling
tasks. In this work, we propose a hybrid layer named Block-State Transformer
(BST), that internally combines an SSM sublayer for long-range
contextualization, and a Block Transformer sublayer for short-term
representation of sequences. We study three different, and completely
parallelizable, variants that integrate SSMs and block-wise attention. We show
that our model outperforms similar Transformer-based architectures on language
modeling perplexity and generalizes to longer sequences. In addition, the
Block-State Transformer demonstrates more than tenfold increase in speed at the
layer level compared to the Block-Recurrent Transformer when model
parallelization is employed.
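As a rough illustration of the hybrid layer described above, the Python sketch below (assuming PyTorch) pairs a long-range sublayer with block-wise attention over the resulting context states. It is a simplification under stated assumptions: the real model uses structured SSMs such as S4 and several parallelizable integration variants, whereas this toy stands in a causal depthwise convolution for the SSM and omits causal masking, residuals, and normalization.

import torch
import torch.nn as nn

class ToyBlockStateLayer(nn.Module):
    """Toy hybrid layer: a long-range sublayer produces context states for the
    whole sequence; block-wise attention then lets each block of tokens attend
    to itself plus its context states. Causal masking, residual connections and
    normalization are omitted for brevity."""
    def __init__(self, dim=64, block_len=32, kernel=128, heads=4):
        super().__init__()
        # Stand-in for the SSM sublayer: a causal depthwise convolution.
        self.long_range = nn.Conv1d(dim, dim, kernel, padding=kernel - 1, groups=dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.block_len = block_len

    def forward(self, x):                                  # x: (batch, seq, dim)
        b, t, d = x.shape
        ctx = self.long_range(x.transpose(1, 2))[..., :t].transpose(1, 2)
        out = []
        for start in range(0, t, self.block_len):
            blk = x[:, start:start + self.block_len]
            kv = torch.cat([ctx[:, start:start + self.block_len], blk], dim=1)
            attn_out, _ = self.attn(blk, kv, kv)           # block attends to context + itself
            out.append(attn_out)
        return torch.cat(out, dim=1)

layer = ToyBlockStateLayer()
print(layer(torch.randn(2, 128, 64)).shape)                # torch.Size([2, 128, 64])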
|
http://arxiv.org/pdf/2306.09539
|
Mahan Fathi, Jonathan Pilault, Orhan Firat, Christopher Pal, Pierre-Luc Bacon, Ross Goroshin
|
cs.CL, cs.LG
|
NeurIPS'23 - Thirty-seventh Conference on Neural Information
Processing Systems
| null |
cs.CL
|
20230615
|
20231030
|
[
{
"id": "1901.02860"
}
] |
2306.09093
| 41 |
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan Narang, Gaurav Mishra, Adams Yu, Vincent Y. Zhao, Yanping Huang, Andrew M. Dai, Hongkun Yu, Slav Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam Roberts, Denny Zhou, Quoc V. Le, and Jason Wei. Scaling instruction-finetuned language models. CoRR, abs/2210.11416, 2022. doi: 10.48550/arXiv.2210.11416. URL https://doi.org/10.48550/arXiv.2210.11416.
|
2306.09093#41
|
Macaw-LLM: Multi-Modal Language Modeling with Image, Audio, Video, and Text Integration
|
Although instruction-tuned large language models (LLMs) have exhibited
remarkable capabilities across various NLP tasks, their effectiveness on other
data modalities beyond text has not been fully studied. In this work, we
propose Macaw-LLM, a novel multi-modal LLM that seamlessly integrates visual,
audio, and textual information. Macaw-LLM consists of three main components: a
modality module for encoding multi-modal data, a cognitive module for
harnessing pretrained LLMs, and an alignment module for harmonizing diverse
representations. Our novel alignment module seamlessly bridges multi-modal
features to textual features, simplifying the adaptation process from the
modality modules to the cognitive module. In addition, we construct a
large-scale multi-modal instruction dataset in terms of multi-turn dialogue,
including 69K image instances and 50K video instances. We have made our data,
code and model publicly available, which we hope can pave the way for future
research in multi-modal LLMs and expand the capabilities of LLMs to handle
diverse data modalities and address complex real-world scenarios.
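One plausible reading of the alignment module, sketched below in Python under the assumption of PyTorch, is to project modality-encoder features into the LLM's embedding space and pool them into a fixed number of prefix tokens prepended to the text embeddings. The class name, dimensions, and pooling mechanism here are illustrative assumptions, not Macaw-LLM's actual implementation.

import torch
import torch.nn as nn

class ToyAlignmentModule(nn.Module):
    """Pool modality-encoder features into a fixed number of prefix tokens in
    the LLM's embedding space, then prepend them to the text embeddings."""
    def __init__(self, modality_dim, llm_dim, num_prefix_tokens=16, heads=8):
        super().__init__()
        self.project = nn.Linear(modality_dim, llm_dim)
        self.queries = nn.Parameter(torch.randn(num_prefix_tokens, llm_dim))
        self.attn = nn.MultiheadAttention(llm_dim, heads, batch_first=True)

    def forward(self, modality_features, text_embeddings):
        # modality_features: (batch, n_patches, modality_dim)
        # text_embeddings:   (batch, n_tokens, llm_dim)
        feats = self.project(modality_features)
        queries = self.queries.unsqueeze(0).repeat(feats.size(0), 1, 1)
        prefix, _ = self.attn(queries, feats, feats)       # pool into prefix tokens
        return torch.cat([prefix, text_embeddings], dim=1)

# Toy usage: 196 image-patch features aligned into 16 prefix tokens.
align = ToyAlignmentModule(modality_dim=768, llm_dim=512)
print(align(torch.randn(2, 196, 768), torch.randn(2, 20, 512)).shape)  # (2, 36, 512)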
|
http://arxiv.org/pdf/2306.09093
|
Chenyang Lyu, Minghao Wu, Longyue Wang, Xinting Huang, Bingshuai Liu, Zefeng Du, Shuming Shi, Zhaopeng Tu
|
cs.CL, cs.AI, cs.CV
|
Longyue Wang is the corresponding author. Our project page is at
https://github.com/lyuchenyang/Macaw-LLM
| null |
cs.CL
|
20230615
|
20230615
|
[] |
2306.09212
| 41 |
Which of the following statements about the horizontal pressure gradient force are correct? 1. It is the direct cause of the wind; 2. It is the pressure produced by the atmosphere on the sea level; 3. Its direction is perpendicular to the isobars; 4. It points from high pressure to low pressure.
A. 1234
B. 234
C. 134
D. 123
(Answer: C)
Figure 6: An example of questions with sub-options. Example from high school geography.
We then compare the performance of different models on these two subsets. Note that, according to our string-matching results, approximately 10.7% of the data contains negation expressions.
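A minimal Python sketch of how such a string-matching split and per-subset comparison could look is given below; the negation markers, field names, and accuracy helper are illustrative assumptions, not the authors' exact setup.

NEGATION_MARKERS = ["不", "没", "非", "无", "未"]   # illustrative Chinese negation markers

def split_by_negation(examples):
    """Split questions into a negation subset and the remainder by string matching."""
    with_negation, without_negation = [], []
    for ex in examples:
        has_negation = any(marker in ex["question"] for marker in NEGATION_MARKERS)
        (with_negation if has_negation else without_negation).append(ex)
    return with_negation, without_negation

def subset_accuracy(examples, predictions):
    return sum(p == ex["answer"] for ex, p in zip(examples, predictions)) / len(examples)

# Toy usage: one question with a negation expression, one without.
toy = [{"question": "下列说法不正确的是?", "answer": "A"},
       {"question": "下列说法正确的是?", "answer": "B"}]
neg_subset, other_subset = split_by_negation(toy)
print(len(neg_subset), len(other_subset))   # 1 1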
|
2306.09212#41
|
CMMLU: Measuring massive multitask language understanding in Chinese
|
As the capabilities of large language models (LLMs) continue to advance,
evaluating their performance becomes increasingly crucial and challenging. This
paper aims to bridge this gap by introducing CMMLU, a comprehensive Chinese
benchmark that covers various subjects, including natural science, social
sciences, engineering, and humanities. We conduct a thorough evaluation of 18
advanced multilingual- and Chinese-oriented LLMs, assessing their performance
across different subjects and settings. The results reveal that most existing
LLMs struggle to achieve an average accuracy of 50%, even when provided with
in-context examples and chain-of-thought prompts, whereas the random baseline
stands at 25%. This highlights significant room for improvement in LLMs.
Additionally, we conduct extensive experiments to identify factors impacting
the models' performance and propose directions for enhancing LLMs. CMMLU fills
the gap in evaluating the knowledge and reasoning capabilities of large
language models within the Chinese context.
|
http://arxiv.org/pdf/2306.09212
|
Haonan Li, Yixuan Zhang, Fajri Koto, Yifei Yang, Hai Zhao, Yeyun Gong, Nan Duan, Timothy Baldwin
|
cs.CL
| null | null |
cs.CL
|
20230615
|
20240117
|
[
{
"id": "2302.13971"
},
{
"id": "2304.12986"
},
{
"id": "2307.00360"
},
{
"id": "2211.09110"
},
{
"id": "2305.08322"
},
{
"id": "2307.15020"
},
{
"id": "2307.09288"
},
{
"id": "2305.15011"
},
{
"id": "2303.08774"
},
{
"id": "2306.01116"
},
{
"id": "2304.08177"
},
{
"id": "2305.10263"
}
] |