Datasets:

Each record below is one paper; the column schema reported by the dataset viewer is:

| Column | Type | Values |
|---|---|---|
| bibtex_url | string | length 41–53 |
| proceedings | string | length 38–50 |
| bibtext | string | length 528–3.02k |
| abstract | string | length 17–2.35k |
| authors | sequence | length 1–44 |
| title | string | length 18–190 |
| id | string | length 7–19 |
| arxiv_id | string | length 0–10 |
| GitHub | sequence | length 1–1 |
| paper_page | string | 528 distinct values |
| n_linked_authors | int64 | -1 to 15 |
| upvotes | int64 | -1 to 77 |
| num_comments | int64 | -1 to 10 |
| n_authors | int64 | -1 to 52 |
| Models | sequence | length 0–100 |
| Datasets | sequence | length 0–15 |
| Spaces | sequence | length 0–46 |
| paper_page_exists_pre_conf | int64 | 0 to 1 |
| type | string | 2 distinct values |

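The records that follow use this schema, one field per line. As a minimal sketch of how a row could be modeled and loaded in Python: the file name `papers.jsonl` is a hypothetical export of this dump, and treating -1 as a "not available" sentinel is an inference from the column minimums above, not documented behavior.

```python
# Minimal sketch: model one row of this dump as a typed record.
# Field names and types mirror the column schema above.
import json
from dataclasses import dataclass
from typing import List


@dataclass
class PaperRow:
    bibtex_url: str
    proceedings: str
    bibtext: str
    abstract: str
    authors: List[str]
    title: str
    id: str
    arxiv_id: str                    # "" when no arXiv preprint is linked
    GitHub: List[str]                # may hold a single empty string
    paper_page: str                  # Hugging Face paper page URL, or ""
    n_linked_authors: int            # -1 appears to mean "not available"
    upvotes: int
    num_comments: int
    n_authors: int
    Models: List[str]
    Datasets: List[str]
    Spaces: List[str]
    paper_page_exists_pre_conf: int  # 0/1 flag
    type: str                        # e.g. "Poster"


def load_rows(path: str) -> List[PaperRow]:
    """Parse one JSON object per line into typed rows."""
    with open(path, encoding="utf-8") as fh:
        return [PaperRow(**json.loads(line)) for line in fh]


if __name__ == "__main__":
    rows = load_rows("papers.jsonl")  # hypothetical export of this dump
    pre_conf = [r for r in rows if r.paper_page_exists_pre_conf == 1]
    print(f"{len(pre_conf)} of {len(rows)} rows had a paper page before the conference")
```
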
bibtex_url: https://aclanthology.org/2023.findings-emnlp.720.bib
proceedings: https://aclanthology.org/2023.findings-emnlp.720/
bibtext:
@inproceedings{zhang-etal-2023-mind, title = "Mind the Gap Between Conversations for Improved Long-Term Dialogue Generation", author = "Zhang, Qiang and Naradowsky, Jason and Miyao, Yusuke", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.720", doi = "10.18653/v1/2023.findings-emnlp.720", pages = "10735--10762", abstract = "Knowing how to end and resume conversations over time is a natural part of communication, allowing for discussions to span weeks, months, or years. The duration of gaps between conversations dictates which topics are relevant and which questions to ask, and dialogue systems which do not explicitly model time may generate responses that are unnatural. In this work we explore the idea of making dialogue models aware of time, and present GapChat, a multi-session dialogue dataset in which the time between each session varies. While the dataset is constructed in real-time, progress on events in speakers{'} lives is simulated in order to create realistic dialogues occurring across a long timespan. We expose time information to the model and compare different representations of time and event progress. In human evaluation we show that time-aware models perform better in metrics that judge the relevance of the chosen topics and the information gained from the conversation.", }
abstract:
Knowing how to end and resume conversations over time is a natural part of communication, allowing for discussions to span weeks, months, or years. The duration of gaps between conversations dictates which topics are relevant and which questions to ask, and dialogue systems which do not explicitly model time may generate responses that are unnatural. In this work we explore the idea of making dialogue models aware of time, and present GapChat, a multi-session dialogue dataset in which the time between each session varies. While the dataset is constructed in real-time, progress on events in speakers' lives is simulated in order to create realistic dialogues occurring across a long timespan. We expose time information to the model and compare different representations of time and event progress. In human evaluation we show that time-aware models perform better in metrics that judge the relevance of the chosen topics and the information gained from the conversation.
authors: [ "Zhang, Qiang", "Naradowsky, Jason", "Miyao, Yusuke" ]
title: Mind the Gap Between Conversations for Improved Long-Term Dialogue Generation
id: findings-emnlp.720
arxiv_id: 2310.15415
GitHub: [ "https://github.com/qzx7/mindthetime" ]
paper_page: https://huggingface.co/papers/2310.15415
n_linked_authors: 0
upvotes: 0
num_comments: 0
n_authors: 3
Models: []
Datasets: []
Spaces: []
paper_page_exists_pre_conf: 1
type: Poster

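Each bibtext value above is a complete BibTeX entry, so its structured fields (title, DOI, pages, and so on) can be recovered with standard tooling. A hedged sketch, assuming the third-party bibtexparser package (v1 API) and the rows loaded by the earlier PaperRow sketch:

```python
# Recover structured fields from the "bibtext" column. bibtexparser (v1)
# lower-cases field names, so keys are "title", "doi", "pages", etc.
import bibtexparser


def parse_bibtext(bibtex_str: str) -> dict:
    """Return the first entry of a BibTeX string as a plain dict."""
    db = bibtexparser.loads(bibtex_str)
    return db.entries[0] if db.entries else {}


entry = parse_bibtext(rows[0].bibtext)  # rows from the previous sketch
print(entry.get("doi"), "|", entry.get("pages"), "|", entry.get("title"))
```
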
bibtex_url: https://aclanthology.org/2023.findings-emnlp.721.bib
proceedings: https://aclanthology.org/2023.findings-emnlp.721/
bibtext:
@inproceedings{han-etal-2023-structure, title = "A Structure-Aware Generative Adversarial Network for Bilingual Lexicon Induction", author = "Han, Bocheng and Tao, Qian and Li, Lusi and Xiong, Zhihao", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.721", doi = "10.18653/v1/2023.findings-emnlp.721", pages = "10763--10775", abstract = "Bilingual lexicon induction (BLI) is the task of inducing word translations with a learned mapping function that aligns monolingual word embedding spaces in two different languages. However, most previous methods treat word embeddings as isolated entities and fail to jointly consider both the intra-space and inter-space topological relations between words. This limitation makes it challenging to align words from embedding spaces with distinct topological structures, especially when the assumption of isomorphism may not hold. To this end, we propose a novel approach called the Structure-Aware Generative Adversarial Network (SA-GAN) model to explicitly capture multiple topological structure information to achieve accurate BLI. Our model first incorporates two lightweight graph convolutional networks (GCNs) to leverage intra-space topological correlations between words for generating source and target embeddings. We then employ a GAN model to explore inter-space topological structures by learning a global mapping function that initially maps the source embeddings to the target embedding space. To further align the coarse-grained structures, we develop a pair-wised local mapping (PLM) strategy that enables word-specific transformations in an unsupervised manner. Extensive experiments conducted on public datasets, including languages with both distant and close etymological relationships, demonstrate the effectiveness of our proposed SA-GAN model.", }
abstract:
Bilingual lexicon induction (BLI) is the task of inducing word translations with a learned mapping function that aligns monolingual word embedding spaces in two different languages. However, most previous methods treat word embeddings as isolated entities and fail to jointly consider both the intra-space and inter-space topological relations between words. This limitation makes it challenging to align words from embedding spaces with distinct topological structures, especially when the assumption of isomorphism may not hold. To this end, we propose a novel approach called the Structure-Aware Generative Adversarial Network (SA-GAN) model to explicitly capture multiple topological structure information to achieve accurate BLI. Our model first incorporates two lightweight graph convolutional networks (GCNs) to leverage intra-space topological correlations between words for generating source and target embeddings. We then employ a GAN model to explore inter-space topological structures by learning a global mapping function that initially maps the source embeddings to the target embedding space. To further align the coarse-grained structures, we develop a pair-wised local mapping (PLM) strategy that enables word-specific transformations in an unsupervised manner. Extensive experiments conducted on public datasets, including languages with both distant and close etymological relationships, demonstrate the effectiveness of our proposed SA-GAN model.
authors: [ "Han, Bocheng", "Tao, Qian", "Li, Lusi", "Xiong, Zhihao" ]
title: A Structure-Aware Generative Adversarial Network for Bilingual Lexicon Induction
id: findings-emnlp.721
arxiv_id:
GitHub: [ "" ]
paper_page:
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
paper_page_exists_pre_conf: 0
type: Poster

bibtex_url: https://aclanthology.org/2023.findings-emnlp.722.bib
proceedings: https://aclanthology.org/2023.findings-emnlp.722/
bibtext:
@inproceedings{sainz-etal-2023-nlp, title = "{NLP} Evaluation in trouble: On the Need to Measure {LLM} Data Contamination for each Benchmark", author = "Sainz, Oscar and Campos, Jon and Garc{\'\i}a-Ferrero, Iker and Etxaniz, Julen and de Lacalle, Oier Lopez and Agirre, Eneko", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.722", doi = "10.18653/v1/2023.findings-emnlp.722", pages = "10776--10787", abstract = "In this position paper we argue that the classical evaluation on Natural Language Processing (NLP) tasks using annotated benchmarks is in trouble. The worst kind of data contamination happens when a Large Language Model (LLM) is trained on the test split of a benchmark, and then evaluated in the same benchmark. The extent of the problem is unknown, as it is not straightforward to measure. Contamination causes an overestimation of the performance of a contaminated model in a target benchmark and associated task with respect to their non-contaminated counterparts. The consequences can be very harmful, with wrong scientific conclusions being published while other correct ones are discarded. This position paper defines different levels of data contamination and argues for a community effort, including the development of automatic and semi-automatic measures to detect when data from a benchmark was exposed to a model, and suggestions for flagging papers with conclusions that are compromised by data contamination.", }
abstract:
In this position paper we argue that the classical evaluation on Natural Language Processing (NLP) tasks using annotated benchmarks is in trouble. The worst kind of data contamination happens when a Large Language Model (LLM) is trained on the test split of a benchmark, and then evaluated in the same benchmark. The extent of the problem is unknown, as it is not straightforward to measure. Contamination causes an overestimation of the performance of a contaminated model in a target benchmark and associated task with respect to their non-contaminated counterparts. The consequences can be very harmful, with wrong scientific conclusions being published while other correct ones are discarded. This position paper defines different levels of data contamination and argues for a community effort, including the development of automatic and semi-automatic measures to detect when data from a benchmark was exposed to a model, and suggestions for flagging papers with conclusions that are compromised by data contamination.
authors: [ "Sainz, Oscar", "Campos, Jon", "García-Ferrero, Iker", "Etxaniz, Julen", "de Lacalle, Oier Lopez", "Agirre, Eneko" ]
title: NLP Evaluation in trouble: On the Need to Measure LLM Data Contamination for each Benchmark
id: findings-emnlp.722
arxiv_id: 2310.18018
GitHub: [ "" ]
paper_page: https://huggingface.co/papers/2310.18018
n_linked_authors: 4
upvotes: 1
num_comments: 0
n_authors: 6
Models: []
Datasets: []
Spaces: []
paper_page_exists_pre_conf: 1
type: Poster

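Note the contrast between this record and the previous one: rows with a paper_page carry real counts (here n_linked_authors: 4, upvotes: 1), while rows without one hold -1 in all four counters. A small sketch for aggregating under that apparent sentinel convention, again assuming the rows from the loader sketch above:

```python
# Average upvotes while skipping the -1 "not available" sentinel.
# The sentinel reading is inferred from the data, not documented.
valid = [r.upvotes for r in rows if r.upvotes >= 0]
if valid:
    print(f"mean upvotes over {len(valid)} rows: {sum(valid) / len(valid):.2f}")
```
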
bibtex_url: https://aclanthology.org/2023.findings-emnlp.723.bib
proceedings: https://aclanthology.org/2023.findings-emnlp.723/
bibtext:
@inproceedings{wang-etal-2023-improving-pacing, title = "Improving Pacing in Long-Form Story Planning", author = "Wang, Yichen and Yang, Kevin and Liu, Xiaoming and Klein, Dan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.723", doi = "10.18653/v1/2023.findings-emnlp.723", pages = "10788--10845", abstract = "Existing LLM-based systems for writing long-form stories or story outlines frequently suffer from unnatural pacing, whether glossing over important events or over-elaborating on insignificant details, resulting in a jarring experience for the reader. We propose a **CONC**rete **O**utline **C**on**T**rol (CONCOCT) system to improve pacing when automatically generating story outlines. We first train a *concreteness evaluator* to judge which of two events is more concrete (low-level-detailed). This evaluator can then be used to control pacing in hierarchical outline generation; in this work, we explore a *vaguest-first* expansion procedure that aims for uniform pacing. We further use the evaluator to filter new outline items based on predicted concreteness. Compared to a baseline hierarchical outline generator, humans judge CONCOCT{'}s pacing to be more consistent over 57{\%} of the time across multiple outline lengths; the gains also translate to downstream stories. All code, data, and models are open-sourced.", }
abstract:
Existing LLM-based systems for writing long-form stories or story outlines frequently suffer from unnatural pacing, whether glossing over important events or over-elaborating on insignificant details, resulting in a jarring experience for the reader. We propose a **CONC**rete **O**utline **C**on**T**rol (CONCOCT) system to improve pacing when automatically generating story outlines. We first train a *concreteness evaluator* to judge which of two events is more concrete (low-level-detailed). This evaluator can then be used to control pacing in hierarchical outline generation; in this work, we explore a *vaguest-first* expansion procedure that aims for uniform pacing. We further use the evaluator to filter new outline items based on predicted concreteness. Compared to a baseline hierarchical outline generator, humans judge CONCOCT's pacing to be more consistent over 57% of the time across multiple outline lengths; the gains also translate to downstream stories. All code, data, and models are open-sourced.
authors: [ "Wang, Yichen", "Yang, Kevin", "Liu, Xiaoming", "Klein, Dan" ]
title: Improving Pacing in Long-Form Story Planning
id: findings-emnlp.723
arxiv_id: 2311.04459
GitHub: [ "https://github.com/yichenzw/pacing" ]
paper_page:
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
paper_page_exists_pre_conf: 0
type: Poster

bibtex_url: https://aclanthology.org/2023.findings-emnlp.724.bib
proceedings: https://aclanthology.org/2023.findings-emnlp.724/
bibtext:
@inproceedings{liu-etal-2023-argument, title = "Argument mining as a multi-hop generative machine reading comprehension task", author = "Liu, Boyang and Schlegel, Viktor and Batista-Navarro, Riza and Ananiadou, Sophia", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.724", doi = "10.18653/v1/2023.findings-emnlp.724", pages = "10846--10858", abstract = "Argument mining (AM) is a natural language processing task that aims to generate an argumentative graph given an unstructured argumentative text. An argumentative graph that consists of argumentative components and argumentative relations contains completed information of an argument and exhibits the logic of an argument. As the argument structure of an argumentative text can be regarded as an answer to a {``}why{''} question, the whole argument structure is therefore similar to the {``}chain of thought{''} concept, i.e., the sequence of ideas that lead to a specific conclusion for a given argument (Wei et al., 2022). For argumentative texts in the same specific genre, the {``}chain of thought{''} of such texts is usually similar, i.e., in a student essay, there is usually a major claim supported by several claims, and then a number of premises which are related to the claims are included (Eger et al., 2017). In this paper, we propose a new perspective which transfers the argument mining task into a multi-hop reading comprehension task, allowing the model to learn the argument structure as a {``}chain of thought{''}. We perform a comprehensive evaluation of our approach on two AM benchmarks and find that we surpass SOTA results. A detailed analysis shows that specifically the {``}chain of thought{''} information is helpful for the argument mining task.", }
abstract:
Argument mining (AM) is a natural language processing task that aims to generate an argumentative graph given an unstructured argumentative text. An argumentative graph that consists of argumentative components and argumentative relations contains completed information of an argument and exhibits the logic of an argument. As the argument structure of an argumentative text can be regarded as an answer to a "why" question, the whole argument structure is therefore similar to the "chain of thought" concept, i.e., the sequence of ideas that lead to a specific conclusion for a given argument (Wei et al., 2022). For argumentative texts in the same specific genre, the "chain of thought" of such texts is usually similar, i.e., in a student essay, there is usually a major claim supported by several claims, and then a number of premises which are related to the claims are included (Eger et al., 2017). In this paper, we propose a new perspective which transfers the argument mining task into a multi-hop reading comprehension task, allowing the model to learn the argument structure as a "chain of thought". We perform a comprehensive evaluation of our approach on two AM benchmarks and find that we surpass SOTA results. A detailed analysis shows that specifically the "chain of thought" information is helpful for the argument mining task.
authors: [ "Liu, Boyang", "Schlegel, Viktor", "Batista-Navarro, Riza", "Ananiadou, Sophia" ]
title: Argument mining as a multi-hop generative machine reading comprehension task
id: findings-emnlp.724
arxiv_id:
GitHub: [ "" ]
paper_page:
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
paper_page_exists_pre_conf: 0
type: Poster

bibtex_url: https://aclanthology.org/2023.findings-emnlp.725.bib
proceedings: https://aclanthology.org/2023.findings-emnlp.725/
bibtext:
@inproceedings{zhang-etal-2023-huatuogpt, title = "{H}uatuo{GPT}, Towards Taming Language Model to Be a Doctor", author = "Zhang, Hongbo and Chen, Junying and Jiang, Feng and Yu, Fei and Chen, Zhihong and Chen, Guiming and Li, Jianquan and Wu, Xiangbo and Zhiyi, Zhang and Xiao, Qingying and Wan, Xiang and Wang, Benyou and Li, Haizhou", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.725", doi = "10.18653/v1/2023.findings-emnlp.725", pages = "10859--10885", abstract = "In this paper, we present HuatuoGPT, a Large Language Model (LLM) for medical consultation. The core recipe of HuatuoGPT is to leverage both distilled data from **ChatGPT** and real-world data from **doctors** in the supervised fine-tuning stage. This is not only because purely using **ChatGPT**-distilled data might cause {`}model collapse{'}, but also because real-world data from **doctors** would be complementary to **ChatGPT**-distilled data. The responses from ChatGPT are usually detailed, well-presented, fluent, and instruction-followed, but it cannot perform like a doctor in many aspects, e.g. for interactive diagnosis. Therefore, the extra doctors{'} data could tame a distilled language model to perform like doctors. To synergize the strengths of both data sources, we introduce RLMF (Reinforcement Learning from Mixed Feedback) where a reward model is trained to align the language model with the merits that both sources (ChatGPT and doctors) bring. Experimental results (in GPT-4 evaluation, human evaluation, and medical benchmark datasets) demonstrate that HuatuoGPT achieves state-of-the-art results in performing medical consultation among open-source LLMs. It is worth noting that by using additional real-world data and RLMF, the distilled language model (i.e., HuatuoGPT) outperforms its teacher model (i.e., ChatGPT) in most cases.", }
abstract:
In this paper, we present HuatuoGPT, a Large Language Model (LLM) for medical consultation. The core recipe of HuatuoGPT is to leverage both distilled data from **ChatGPT** and real-world data from **doctors** in the supervised fine-tuning stage. This is not only because purely using **ChatGPT**-distilled data might cause 'model collapse', but also because real-world data from **doctors** would be complementary to **ChatGPT**-distilled data. The responses from ChatGPT are usually detailed, well-presented, fluent, and instruction-followed, but it cannot perform like a doctor in many aspects, e.g. for interactive diagnosis. Therefore, the extra doctors' data could tame a distilled language model to perform like doctors. To synergize the strengths of both data sources, we introduce RLMF (Reinforcement Learning from Mixed Feedback) where a reward model is trained to align the language model with the merits that both sources (ChatGPT and doctors) bring. Experimental results (in GPT-4 evaluation, human evaluation, and medical benchmark datasets) demonstrate that HuatuoGPT achieves state-of-the-art results in performing medical consultation among open-source LLMs. It is worth noting that by using additional real-world data and RLMF, the distilled language model (i.e., HuatuoGPT) outperforms its teacher model (i.e., ChatGPT) in most cases.
authors: [ "Zhang, Hongbo", "Chen, Junying", "Jiang, Feng", "Yu, Fei", "Chen, Zhihong", "Chen, Guiming", "Li, Jianquan", "Wu, Xiangbo", "Zhiyi, Zhang", "Xiao, Qingying", "Wan, Xiang", "Wang, Benyou", "Li, Haizhou" ]
title: HuatuoGPT, Towards Taming Language Model to Be a Doctor
id: findings-emnlp.725
arxiv_id: 2305.15075
GitHub: [ "https://github.com/freedomintelligence/huatuogpt" ]
paper_page:
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
paper_page_exists_pre_conf: 0
type: Poster

bibtex_url: https://aclanthology.org/2023.findings-emnlp.726.bib
proceedings: https://aclanthology.org/2023.findings-emnlp.726/
bibtext:
@inproceedings{guo-etal-2023-debias, title = "Debias {NLU} Datasets via Training-free Perturbations", author = "Guo, Qi and Tang, Yuanhang and Ouyang, Yawen and Wu, Zhen and Dai, Xinyu", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.726", doi = "10.18653/v1/2023.findings-emnlp.726", pages = "10886--10901", abstract = "Several recent studies have shown that advanced models for natural language understanding (NLU) are prone to capture biased features that are independent of the task but spuriously correlated to labels. Such models often perform well on in-distribution (ID) datasets but fail to generalize to out-of-distribution (OOD) datasets. Existing solutions can be separated into two orthogonal approaches: model-centric methods and data-centric methods. Model-centric methods improve OOD performance at the expense of ID performance. Data-centric strategies usually boost both of them via data-level manipulations such as generative data augmentation. However, the high cost of fine-tuning a generator to produce valid samples limits the potential of such approaches. To address this issue, we propose PDD, a framework that conducts training-free Perturbations on samples containing biased features to Debias NLU Datasets. PDD works by iteratively conducting perturbations via pre-trained mask language models (MLM). PDD exhibits the advantage of low cost by adopting a training-free perturbation strategy and further improves the label consistency by utilizing label information during perturbations. Extensive experiments demonstrate that PDD shows competitive performance with previous state-of-the-art debiasing strategies. When combined with the model-centric debiasing methods, PDD establishes a new state-of-the-art.", }
abstract:
Several recent studies have shown that advanced models for natural language understanding (NLU) are prone to capture biased features that are independent of the task but spuriously correlated to labels. Such models often perform well on in-distribution (ID) datasets but fail to generalize to out-of-distribution (OOD) datasets. Existing solutions can be separated into two orthogonal approaches: model-centric methods and data-centric methods. Model-centric methods improve OOD performance at the expense of ID performance. Data-centric strategies usually boost both of them via data-level manipulations such as generative data augmentation. However, the high cost of fine-tuning a generator to produce valid samples limits the potential of such approaches. To address this issue, we propose PDD, a framework that conducts training-free Perturbations on samples containing biased features to Debias NLU Datasets. PDD works by iteratively conducting perturbations via pre-trained mask language models (MLM). PDD exhibits the advantage of low cost by adopting a training-free perturbation strategy and further improves the label consistency by utilizing label information during perturbations. Extensive experiments demonstrate that PDD shows competitive performance with previous state-of-the-art debiasing strategies. When combined with the model-centric debiasing methods, PDD establishes a new state-of-the-art.
authors: [ "Guo, Qi", "Tang, Yuanhang", "Ouyang, Yawen", "Wu, Zhen", "Dai, Xinyu" ]
title: Debias NLU Datasets via Training-free Perturbations
id: findings-emnlp.726
arxiv_id:
GitHub: [ "" ]
paper_page:
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
paper_page_exists_pre_conf: 0
type: Poster

bibtex_url: https://aclanthology.org/2023.findings-emnlp.727.bib
proceedings: https://aclanthology.org/2023.findings-emnlp.727/
bibtext:
@inproceedings{chai-etal-2023-aspect, title = "Aspect-to-Scope Oriented Multi-view Contrastive Learning for Aspect-based Sentiment Analysis", author = "Chai, Heyan and Yao, Ziyi and Tang, Siyu and Wang, Ye and Nie, Liqiang and Fang, Binxing and Liao, Qing", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.727", doi = "10.18653/v1/2023.findings-emnlp.727", pages = "10902--10913", abstract = "Aspect-based sentiment analysis (ABSA) aims to align aspects and corresponding sentiment expressions, so as to identify the sentiment polarities of specific aspects. Most existing ABSA methods focus on mining syntactic or semantic information, which still suffers from noisy interference introduced by the attention mechanism and dependency tree when multiple aspects exist in a sentence. To address these issues, in this paper, we revisit ABSA from a novel perspective by proposing a novel scope-assisted multi-view graph contrastive learning framework. It not only mitigates noisy interference for better locating aspect and its corresponding sentiment opinion with aspect-specific scope, but also captures the correlation and difference between sentiment polarities and syntactic/semantic information. Extensive experiments on five benchmark datasets show that our proposed approach substantially outperforms state-of-the-art methods and verifies the effectiveness and robustness of our model.", }
abstract:
Aspect-based sentiment analysis (ABSA) aims to align aspects and corresponding sentiment expressions, so as to identify the sentiment polarities of specific aspects. Most existing ABSA methods focus on mining syntactic or semantic information, which still suffers from noisy interference introduced by the attention mechanism and dependency tree when multiple aspects exist in a sentence. To address these issues, in this paper, we revisit ABSA from a novel perspective by proposing a novel scope-assisted multi-view graph contrastive learning framework. It not only mitigates noisy interference for better locating aspect and its corresponding sentiment opinion with aspect-specific scope, but also captures the correlation and difference between sentiment polarities and syntactic/semantic information. Extensive experiments on five benchmark datasets show that our proposed approach substantially outperforms state-of-the-art methods and verifies the effectiveness and robustness of our model.
authors: [ "Chai, Heyan", "Yao, Ziyi", "Tang, Siyu", "Wang, Ye", "Nie, Liqiang", "Fang, Binxing", "Liao, Qing" ]
title: Aspect-to-Scope Oriented Multi-view Contrastive Learning for Aspect-based Sentiment Analysis
id: findings-emnlp.727
arxiv_id:
GitHub: [ "" ]
paper_page:
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
paper_page_exists_pre_conf: 0
type: Poster

bibtex_url: https://aclanthology.org/2023.findings-emnlp.728.bib
proceedings: https://aclanthology.org/2023.findings-emnlp.728/
bibtext:
@inproceedings{goodarzi-etal-2023-robustness, title = "Robustness of Named-Entity Replacements for In-Context Learning", author = "Goodarzi, Saeed and Kagita, Nikhil and Minn, Dennis and Wang, Shufan and Dessi, Roberto and Toshniwal, Shubham and Williams, Adina and Lanchantin, Jack and Sinha, Koustuv", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.728", doi = "10.18653/v1/2023.findings-emnlp.728", pages = "10914--10931", abstract = "A key feature of modern large language models (LLMs) is their ability to perform in-context learning, a prompting technique where query- answer demonstrations are shown before the final query. This allows for generalization to novel distributions at inference time where the LLM can learn new rules without parameter updates. However, the choice of demonstrations and their relationship to a particular query can have a profound impact on model accuracy, raising concerns about the true in-context generalization capabilities (Zhao et al., 2021). In this work, we explore the robustness of the in-context learning paradigm by focusing on entities. In particular, we seek to understand the robustness of LLM in-context learning with respect to named entity replacements. We discover a significant variance in downstream performance based on the choice of the named entities, across three popular reasoning tasks and two popular LLMs. Specifically, model accuracy on the test sets can fluctuate between -2.7 to +8.0 points depending on the choice of named entity replacements. Our analysis exposes the sensitivity of LLM in-context learning with respect to named entities, and offers a simple recipe to improve test performance by hyper-parameter tuning the named entities for a given dataset. Code and datasets for reproducing the results are publicly available.", }
abstract:
A key feature of modern large language models (LLMs) is their ability to perform in-context learning, a prompting technique where query-answer demonstrations are shown before the final query. This allows for generalization to novel distributions at inference time where the LLM can learn new rules without parameter updates. However, the choice of demonstrations and their relationship to a particular query can have a profound impact on model accuracy, raising concerns about the true in-context generalization capabilities (Zhao et al., 2021). In this work, we explore the robustness of the in-context learning paradigm by focusing on entities. In particular, we seek to understand the robustness of LLM in-context learning with respect to named entity replacements. We discover a significant variance in downstream performance based on the choice of the named entities, across three popular reasoning tasks and two popular LLMs. Specifically, model accuracy on the test sets can fluctuate between -2.7 and +8.0 points depending on the choice of named entity replacements. Our analysis exposes the sensitivity of LLM in-context learning with respect to named entities, and offers a simple recipe to improve test performance by hyper-parameter tuning the named entities for a given dataset. Code and datasets for reproducing the results are publicly available.
authors: [ "Goodarzi, Saeed", "Kagita, Nikhil", "Minn, Dennis", "Wang, Shufan", "Dessi, Roberto", "Toshniwal, Shubham", "Williams, Adina", "Lanchantin, Jack", "Sinha, Koustuv" ]
title: Robustness of Named-Entity Replacements for In-Context Learning
id: findings-emnlp.728
arxiv_id:
GitHub: [ "" ]
paper_page:
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
paper_page_exists_pre_conf: 0
type: Poster

bibtex_url: https://aclanthology.org/2023.findings-emnlp.729.bib
proceedings: https://aclanthology.org/2023.findings-emnlp.729/
bibtext:
@inproceedings{kurita-etal-2023-contrastive, title = "Contrastive Learning-based Sentence Encoders Implicitly Weight Informative Words", author = "Kurita, Hiroto and Kobayashi, Goro and Yokoi, Sho and Inui, Kentaro", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.729", doi = "10.18653/v1/2023.findings-emnlp.729", pages = "10932--10947", abstract = "The performance of sentence encoders can be significantly improved through the simple practice of fine-tuning using contrastive loss. A natural question arises: what characteristics do models acquire during contrastive learning? This paper theoretically and experimentally shows that contrastive-based sentence encoders implicitly weight words based on information-theoretic quantities; that is, more informative words receive greater weight, while others receive less. The theory states that, in the lower bound of the optimal value of the contrastive learning objective, the norm of word embedding reflects the information gain associated with the distribution of surrounding words. We also conduct comprehensive experiments using various models, multiple datasets, two methods to measure the implicit weighting of models (Integrated Gradients and SHAP), and two information-theoretic quantities (information gain and self-information). The results provide empirical evidence that contrastive fine-tuning emphasizes informative words.", }
abstract:
The performance of sentence encoders can be significantly improved through the simple practice of fine-tuning using contrastive loss. A natural question arises: what characteristics do models acquire during contrastive learning? This paper theoretically and experimentally shows that contrastive-based sentence encoders implicitly weight words based on information-theoretic quantities; that is, more informative words receive greater weight, while others receive less. The theory states that, in the lower bound of the optimal value of the contrastive learning objective, the norm of word embedding reflects the information gain associated with the distribution of surrounding words. We also conduct comprehensive experiments using various models, multiple datasets, two methods to measure the implicit weighting of models (Integrated Gradients and SHAP), and two information-theoretic quantities (information gain and self-information). The results provide empirical evidence that contrastive fine-tuning emphasizes informative words.
authors: [ "Kurita, Hiroto", "Kobayashi, Goro", "Yokoi, Sho", "Inui, Kentaro" ]
title: Contrastive Learning-based Sentence Encoders Implicitly Weight Informative Words
id: findings-emnlp.729
arxiv_id: 2310.15921
GitHub: [ "https://github.com/kuriyan1204/sentence-encoder-word-weighting" ]
paper_page:
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
paper_page_exists_pre_conf: 0
type: Poster

bibtex_url: https://aclanthology.org/2023.findings-emnlp.730.bib
proceedings: https://aclanthology.org/2023.findings-emnlp.730/
bibtext:
@inproceedings{luo-etal-2023-legally, title = "Legally Enforceable Hate Speech Detection for Public Forums", author = "Luo, Chu and Bhambhoria, Rohan and Dahan, Samuel and Zhu, Xiaodan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.730", doi = "10.18653/v1/2023.findings-emnlp.730", pages = "10948--10963", abstract = "Hate speech causes widespread and deep-seated societal issues. Proper enforcement of hate speech laws is key for protecting groups of people against harmful and discriminatory language. However, determining what constitutes hate speech is a complex task that is highly open to subjective interpretations. Existing works do not align their systems with enforceable definitions of hate speech, which can make their outputs inconsistent with the goals of regulators. This research introduces a new perspective and task for enforceable hate speech detection centred around legal definitions, and a dataset annotated on violations of eleven possible definitions by legal experts. Given the challenge of identifying clear, legally enforceable instances of hate speech, we augment the dataset with expert-generated samples and an automatically mined challenge set. We experiment with grounding the model decision in these definitions using zero-shot and few-shot prompting. We then report results on several large language models (LLMs). With this task definition, automatic hate speech detection can be more closely aligned to enforceable laws, and hence assist in more rigorous enforcement of legal protections against harmful speech in public forums.", }
abstract:
Hate speech causes widespread and deep-seated societal issues. Proper enforcement of hate speech laws is key for protecting groups of people against harmful and discriminatory language. However, determining what constitutes hate speech is a complex task that is highly open to subjective interpretations. Existing works do not align their systems with enforceable definitions of hate speech, which can make their outputs inconsistent with the goals of regulators. This research introduces a new perspective and task for enforceable hate speech detection centred around legal definitions, and a dataset annotated on violations of eleven possible definitions by legal experts. Given the challenge of identifying clear, legally enforceable instances of hate speech, we augment the dataset with expert-generated samples and an automatically mined challenge set. We experiment with grounding the model decision in these definitions using zero-shot and few-shot prompting. We then report results on several large language models (LLMs). With this task definition, automatic hate speech detection can be more closely aligned to enforceable laws, and hence assist in more rigorous enforcement of legal protections against harmful speech in public forums.
authors: [ "Luo, Chu", "Bhambhoria, Rohan", "Dahan, Samuel", "Zhu, Xiaodan" ]
title: Legally Enforceable Hate Speech Detection for Public Forums
id: findings-emnlp.730
arxiv_id:
GitHub: [ "https://github.com/chufeiluo/legalhatespeech" ]
paper_page:
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
paper_page_exists_pre_conf: 0
type: Poster

bibtex_url: https://aclanthology.org/2023.findings-emnlp.731.bib
proceedings: https://aclanthology.org/2023.findings-emnlp.731/
bibtext:
@inproceedings{kim-etal-2023-conprompt, title = "{C}on{P}rompt: Pre-training a Language Model with Machine-Generated Data for Implicit Hate Speech Detection", author = "Kim, Youngwook and Park, Shinwoo and Namgoong, Youngsoo and Han, Yo-Sub", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.731", doi = "10.18653/v1/2023.findings-emnlp.731", pages = "10964--10980", abstract = "Implicit hate speech detection is a challenging task in text classification since no explicit cues (e.g., swear words) exist in the text. While some pre-trained language models have been developed for hate speech detection, they are not specialized in implicit hate speech. Recently, an implicit hate speech dataset with a massive number of samples has been proposed by controlling machine generation. We propose a pre-training approach, ConPrompt, to fully leverage such machine-generated data. Specifically, given a machine-generated statement, we use example statements of its origin prompt as positive samples for contrastive learning. Through pre-training with ConPrompt, we present ToxiGen-ConPrompt, a pre-trained language model for implicit hate speech detection. We conduct extensive experiments on several implicit hate speech datasets and show the superior generalization ability of ToxiGen-ConPrompt compared to other pre-trained models. Additionally, we empirically show that ConPrompt is effective in mitigating identity term bias, demonstrating that it not only makes a model more generalizable but also reduces unintended bias. We analyze the representation quality of ToxiGen-ConPrompt and show its ability to consider target group and toxicity, which are desirable features in terms of implicit hate speeches.", }
abstract:
Implicit hate speech detection is a challenging task in text classification since no explicit cues (e.g., swear words) exist in the text. While some pre-trained language models have been developed for hate speech detection, they are not specialized in implicit hate speech. Recently, an implicit hate speech dataset with a massive number of samples has been proposed by controlling machine generation. We propose a pre-training approach, ConPrompt, to fully leverage such machine-generated data. Specifically, given a machine-generated statement, we use example statements of its origin prompt as positive samples for contrastive learning. Through pre-training with ConPrompt, we present ToxiGen-ConPrompt, a pre-trained language model for implicit hate speech detection. We conduct extensive experiments on several implicit hate speech datasets and show the superior generalization ability of ToxiGen-ConPrompt compared to other pre-trained models. Additionally, we empirically show that ConPrompt is effective in mitigating identity term bias, demonstrating that it not only makes a model more generalizable but also reduces unintended bias. We analyze the representation quality of ToxiGen-ConPrompt and show its ability to consider target group and toxicity, which are desirable features in terms of implicit hate speeches.
authors: [ "Kim, Youngwook", "Park, Shinwoo", "Namgoong, Youngsoo", "Han, Yo-Sub" ]
title: ConPrompt: Pre-training a Language Model with Machine-Generated Data for Implicit Hate Speech Detection
id: findings-emnlp.731
arxiv_id:
GitHub: [ "" ]
paper_page:
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
paper_page_exists_pre_conf: 0
type: Poster

bibtex_url: https://aclanthology.org/2023.findings-emnlp.732.bib
proceedings: https://aclanthology.org/2023.findings-emnlp.732/
bibtext:
@inproceedings{iwamoto-etal-2023-incorporating, title = "Incorporating Syntactic Knowledge into Pre-trained Language Model using Optimization for Overcoming Catastrophic Forgetting", author = "Iwamoto, Ran and Yoshida, Issei and Kanayama, Hiroshi and Ohko, Takuya and Muraoka, Masayasu", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.732", doi = "10.18653/v1/2023.findings-emnlp.732", pages = "10981--10993", abstract = "Syntactic knowledge is invaluable information for many tasks which handle complex or long sentences, but typical pre-trained language models do not contain sufficient syntactic knowledge. Thus it results in failures in downstream tasks that require syntactic knowledge. In this paper, we explore additional training to incorporate syntactic knowledge to a language model. We designed four pre-training tasks that learn different syntactic perspectives. For adding new syntactic knowledge and keeping a good balance between the original and additional knowledge, we addressed the problem of catastrophic forgetting that prevents the model from keeping semantic information when the model learns additional syntactic knowledge. We demonstrated that additional syntactic training produced consistent performance gains while clearly avoiding catastrophic forgetting.", }
abstract:
Syntactic knowledge is invaluable information for many tasks which handle complex or long sentences, but typical pre-trained language models do not contain sufficient syntactic knowledge. Thus it results in failures in downstream tasks that require syntactic knowledge. In this paper, we explore additional training to incorporate syntactic knowledge to a language model. We designed four pre-training tasks that learn different syntactic perspectives. For adding new syntactic knowledge and keeping a good balance between the original and additional knowledge, we addressed the problem of catastrophic forgetting that prevents the model from keeping semantic information when the model learns additional syntactic knowledge. We demonstrated that additional syntactic training produced consistent performance gains while clearly avoiding catastrophic forgetting.
authors: [ "Iwamoto, Ran", "Yoshida, Issei", "Kanayama, Hiroshi", "Ohko, Takuya", "Muraoka, Masayasu" ]
title: Incorporating Syntactic Knowledge into Pre-trained Language Model using Optimization for Overcoming Catastrophic Forgetting
id: findings-emnlp.732
arxiv_id:
GitHub: [ "" ]
paper_page:
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
paper_page_exists_pre_conf: 0
type: Poster

bibtex_url: https://aclanthology.org/2023.findings-emnlp.733.bib
proceedings: https://aclanthology.org/2023.findings-emnlp.733/
bibtext:
@inproceedings{shi-etal-2023-toward, title = "Toward Human Readable Prompt Tuning: Kubrick{'}s The Shining is a good movie, and a good prompt too?", author = "Shi, Weijia and Han, Xiaochuang and Gonen, Hila and Holtzman, Ari and Tsvetkov, Yulia and Zettlemoyer, Luke", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.733", doi = "10.18653/v1/2023.findings-emnlp.733", pages = "10994--11005", abstract = "Large language models can perform downstream tasks in a zero-shot fashion, given natural language prompts that specify the desired behavior. Such prompts are typically hand engineered, but can also be learned with gradient-based methods from labeled data. However, it is underexplored what factors make the prompts effective, especially when the prompts are in natural language. In this paper, we investigate common attributes shared by effective prompts in classification problems. We first propose a human readable prompt tuning method (FluentPrompt) based on Langevin dynamics that incorporates a fluency constraint to find a distribution of effective and fluent prompts. Our analysis reveals that effective prompts are topically related to the task domain and calibrate the prior probability of output labels. Based on these findings, we also propose a method for generating prompts using only unlabeled data, outperforming strong baselines by an average of 7.0{\%} accuracy across three tasks.", }
abstract:
Large language models can perform downstream tasks in a zero-shot fashion, given natural language prompts that specify the desired behavior. Such prompts are typically hand engineered, but can also be learned with gradient-based methods from labeled data. However, it is underexplored what factors make the prompts effective, especially when the prompts are in natural language. In this paper, we investigate common attributes shared by effective prompts in classification problems. We first propose a human readable prompt tuning method (FluentPrompt) based on Langevin dynamics that incorporates a fluency constraint to find a distribution of effective and fluent prompts. Our analysis reveals that effective prompts are topically related to the task domain and calibrate the prior probability of output labels. Based on these findings, we also propose a method for generating prompts using only unlabeled data, outperforming strong baselines by an average of 7.0% accuracy across three tasks.
authors: [ "Shi, Weijia", "Han, Xiaochuang", "Gonen, Hila", "Holtzman, Ari", "Tsvetkov, Yulia", "Zettlemoyer, Luke" ]
title: Toward Human Readable Prompt Tuning: Kubrick's The Shining is a good movie, and a good prompt too?
id: findings-emnlp.733
arxiv_id: 2212.10539
GitHub: [ "" ]
paper_page:
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
paper_page_exists_pre_conf: 0
type: Poster

bibtex_url: https://aclanthology.org/2023.findings-emnlp.734.bib
proceedings: https://aclanthology.org/2023.findings-emnlp.734/
bibtext:
@inproceedings{zheng-etal-2023-chain, title = "Chain-of-Thought Reasoning in Tabular Language Models", author = "Zheng, Mingyu and Yang, Hao and Jiang, Wenbin and Lin, Zheng and Lyu, Yajuan and She, Qiaoqiao and Wang, Weiping", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.734", doi = "10.18653/v1/2023.findings-emnlp.734", pages = "11006--11019", abstract = "Tabular mathematical reasoning task requires models to perform multi-step operations including information look-up and numerical calculation, based on heterogeneous data from tables and questions. Existing solutions tend to extend chain-of-thought (CoT) reasoning into powerful large language models (LLMs) to promote multi-hop mathematical reasoning. However, such LLM-based approaches are not a viable solution in the scenario of privatization deployment or limited resources. To address this problem, we revisit small-scale tabular language models (TaLMs) and extend chain-of-thought reasoning into TaLMs for the first time. Specifically, we propose a novel framework, TaCo, which coordinates two TaLMs responsible for CoT generation and answer inference, respectively. Besides, our framework can be combined with an external calculator to enhance accurate numerical calculation. On the TABMWP dataset, TaCo outperforms the state-of-the-art ChatGPT by 9.55{\%} (82.60{\%}$\rightarrow$92.15{\%} in accuracy) with much less parameters (0.8B). The code will be released along with the paper.", }
abstract:
Tabular mathematical reasoning task requires models to perform multi-step operations including information look-up and numerical calculation, based on heterogeneous data from tables and questions. Existing solutions tend to extend chain-of-thought (CoT) reasoning into powerful large language models (LLMs) to promote multi-hop mathematical reasoning. However, such LLM-based approaches are not a viable solution in the scenario of privatization deployment or limited resources. To address this problem, we revisit small-scale tabular language models (TaLMs) and extend chain-of-thought reasoning into TaLMs for the first time. Specifically, we propose a novel framework, TaCo, which coordinates two TaLMs responsible for CoT generation and answer inference, respectively. Besides, our framework can be combined with an external calculator to enhance accurate numerical calculation. On the TABMWP dataset, TaCo outperforms the state-of-the-art ChatGPT by 9.55% (82.60% → 92.15% in accuracy) with far fewer parameters (0.8B). The code will be released along with the paper.
authors: [ "Zheng, Mingyu", "Yang, Hao", "Jiang, Wenbin", "Lin, Zheng", "Lyu, Yajuan", "She, Qiaoqiao", "Wang, Weiping" ]
title: Chain-of-Thought Reasoning in Tabular Language Models
id: findings-emnlp.734
arxiv_id:
GitHub: [ "" ]
paper_page:
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
paper_page_exists_pre_conf: 0
type: Poster

bibtex_url: https://aclanthology.org/2023.findings-emnlp.735.bib
proceedings: https://aclanthology.org/2023.findings-emnlp.735/
bibtext:
@inproceedings{huang-etal-2023-diffusion, title = "Diffusion Language Model with Query-Document Relevance for Query-Focused Summarization", author = "Huang, Shaoyao and Qin, Luozheng and Cao, Ziqiang", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.735", doi = "10.18653/v1/2023.findings-emnlp.735", pages = "11020--11030", abstract = "Query-Focused Summarization (QFS) aims to generate summaries from source documents that can answer specific queries. Although the QFS task has gained increasing attention recently, its development is constrained by the fact that mainstream QFS models are BART variants, which are autoregressive and suffer from long-term dependencies and exposure bias. To address these problems, we adopt a diffusion language model that performs well in non-autoregressive scenarios to effectively resolve issues related to autoregressive methods. However, QFS requires guidance from queries to generate adequate summaries, while diffusion language models have limited sensitivity to queries. In this paper, we propose QFS-DLM, a non-autoregressive diffusion language model that incorporates query-document fragment relevance and query-document global relevance to enhance the adaptability of QFS tasks. Firstly, we extract key fragments from documents based on queries and assign higher weights to them, thereby emphasizing crucial and continuous information within the document. Secondly, we calculate global relevance scores between queries and documents, and then integrate these scores into the model{'}s loss function, enabling the model to prefer high-quality data and distance itself from low-quality data. Overall, our method achieves state-of-the-art performance on Debatepedia and PubMedQA datasets in ROUGE scores, GPT-4, and human evaluations.", }
abstract:
Query-Focused Summarization (QFS) aims to generate summaries from source documents that can answer specific queries. Although the QFS task has gained increasing attention recently, its development is constrained by the fact that mainstream QFS models are BART variants, which are autoregressive and suffer from long-term dependencies and exposure bias. To address these problems, we adopt a diffusion language model that performs well in non-autoregressive scenarios to effectively resolve issues related to autoregressive methods. However, QFS requires guidance from queries to generate adequate summaries, while diffusion language models have limited sensitivity to queries. In this paper, we propose QFS-DLM, a non-autoregressive diffusion language model that incorporates query-document fragment relevance and query-document global relevance to enhance the adaptability of QFS tasks. Firstly, we extract key fragments from documents based on queries and assign higher weights to them, thereby emphasizing crucial and continuous information within the document. Secondly, we calculate global relevance scores between queries and documents, and then integrate these scores into the model's loss function, enabling the model to prefer high-quality data and distance itself from low-quality data. Overall, our method achieves state-of-the-art performance on Debatepedia and PubMedQA datasets in ROUGE scores, GPT-4, and human evaluations.
authors: [ "Huang, Shaoyao", "Qin, Luozheng", "Cao, Ziqiang" ]
title: Diffusion Language Model with Query-Document Relevance for Query-Focused Summarization
id: findings-emnlp.735
arxiv_id:
GitHub: [ "" ]
paper_page:
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
paper_page_exists_pre_conf: 0
type: Poster

bibtex_url: https://aclanthology.org/2023.findings-emnlp.736.bib
proceedings: https://aclanthology.org/2023.findings-emnlp.736/
bibtext:
@inproceedings{mickus-etal-2023-grounded, title = "Grounded and well-rounded: a methodological approach to the study of cross-modal and cross-lingual grounding", author = "Mickus, Timothee and Zosa, Elaine and Paperno, Denis", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.736", doi = "10.18653/v1/2023.findings-emnlp.736", pages = "11031--11042", abstract = "Grounding has been argued to be a crucial component towards the development of more complete and truly semantically competent artificial intelligence systems. Literature has divided into two camps: While some argue that grounding allows for qualitatively different generalizations, others believe it can be compensated by mono-modal data quantity. Limited empirical evidence has emerged for or against either position, which we argue is due to the methodological challenges that come with studying grounding and its effects on NLP systems. In this paper, we establish a methodological framework for studying what the effects are{---}if any{---}of providing models with richer input sources than text-only. The crux of it lies in the construction of comparable samples of populations of models trained on different input modalities, so that we can tease apart the qualitative effects of different input sources from quantifiable model performances. Experiments using this framework reveal qualitative differences in model behavior between cross-modally grounded, cross-lingually grounded, and ungrounded models, which we measure both at a global dataset level as well as for specific word representations, depending on how concrete their semantics is.", }
abstract:
Grounding has been argued to be a crucial component towards the development of more complete and truly semantically competent artificial intelligence systems. Literature has divided into two camps: While some argue that grounding allows for qualitatively different generalizations, others believe it can be compensated by mono-modal data quantity. Limited empirical evidence has emerged for or against either position, which we argue is due to the methodological challenges that come with studying grounding and its effects on NLP systems. In this paper, we establish a methodological framework for studying what the effects are, if any, of providing models with richer input sources than text-only. The crux of it lies in the construction of comparable samples of populations of models trained on different input modalities, so that we can tease apart the qualitative effects of different input sources from quantifiable model performances. Experiments using this framework reveal qualitative differences in model behavior between cross-modally grounded, cross-lingually grounded, and ungrounded models, which we measure both at a global dataset level as well as for specific word representations, depending on how concrete their semantics is.
authors: [ "Mickus, Timothee", "Zosa, Elaine", "Paperno, Denis" ]
title: Grounded and well-rounded: a methodological approach to the study of cross-modal and cross-lingual grounding
id: findings-emnlp.736
arxiv_id: 2310.11938
GitHub: [ "" ]
paper_page:
n_linked_authors: -1
upvotes: -1
num_comments: -1
n_authors: -1
Models: []
Datasets: []
Spaces: []
paper_page_exists_pre_conf: 0
type: Poster

https://aclanthology.org/2023.findings-emnlp.737.bib
https://aclanthology.org/2023.findings-emnlp.737/
@inproceedings{nguyen-etal-2023-emo, title = "{EMO}-{KNOW}: A Large Scale Dataset on Emotion-Cause", author = "Nguyen, Mia and Samaradivakara, Yasith and Sasikumar, Prasanth and Gupta, Chitralekha and Nanayakkara, Suranga", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.737", doi = "10.18653/v1/2023.findings-emnlp.737", pages = "11043--11051", abstract = "Emotion-Cause analysis has attracted the attention of researchers in recent years. However, most existing datasets are limited in size and number of emotion categories. They often focus on extracting parts of the document that contain the emotion cause and fail to provide a more abstractive, generalizable root cause. To bridge this gap, we introduce a large-scale dataset of emotion causes, derived from 9.8 million cleaned tweets over 15 years. We describe our curation process, which includes a comprehensive pipeline for data gathering, cleaning, labeling, and validation, ensuring the dataset{'}s reliability and richness. We extract emotion labels and provide abstractive summarization of the events causing emotions. The final dataset comprises over 700,000 tweets with corresponding emotion-cause pairs spanning 48 emotion classes, validated by human evaluators. The novelty of our dataset stems from its broad spectrum of emotion classes and the abstractive emotion cause that facilitates the development of an emotion-cause knowledge graph for nuanced reasoning. Our dataset will enable the design of emotion-aware systems that account for the diverse emotional responses of different people for the same event.", }
Emotion-Cause analysis has attracted the attention of researchers in recent years. However, most existing datasets are limited in size and number of emotion categories. They often focus on extracting parts of the document that contain the emotion cause and fail to provide a more abstractive, generalizable root cause. To bridge this gap, we introduce a large-scale dataset of emotion causes, derived from 9.8 million cleaned tweets over 15 years. We describe our curation process, which includes a comprehensive pipeline for data gathering, cleaning, labeling, and validation, ensuring the dataset{'}s reliability and richness. We extract emotion labels and provide abstractive summarization of the events causing emotions. The final dataset comprises over 700,000 tweets with corresponding emotion-cause pairs spanning 48 emotion classes, validated by human evaluators. The novelty of our dataset stems from its broad spectrum of emotion classes and the abstractive emotion cause that facilitates the development of an emotion-cause knowledge graph for nuanced reasoning. Our dataset will enable the design of emotion-aware systems that account for the diverse emotional responses of different people for the same event.
[ "Nguyen, Mia", "Samaradivakara, Yasith", "Sasikumar, Prasanth", "Gupta, Chitralekha", "Nanayakkara, Suranga" ]
EMO-KNOW: A Large Scale Dataset on Emotion-Cause
findings-emnlp.737
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.738.bib
https://aclanthology.org/2023.findings-emnlp.738/
@inproceedings{chen-etal-2023-boosting-inference, title = "Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models", author = "Chen, Weize and Xu, Xiaoyue and Han, Xu and Lin, Yankai and Xie, Ruobing and Liu, Zhiyuan and Sun, Maosong and Zhou, Jie", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.738", doi = "10.18653/v1/2023.findings-emnlp.738", pages = "11052--11067", abstract = "Parameter-shared pre-trained language models (PLMs) have emerged as a successful approach in resource-constrained environments, enabling substantial reductions in model storage and memory costs without significant performance compromise. However, it is important to note that parameter sharing does not alleviate computational burdens associated with inference, thus impeding its practicality in situations characterized by stringent latency requirements or limited computational resources. Building upon neural ordinary differential equations (ODEs), we introduce a straightforward technique to enhance the inference efficiency of parameter-shared PLMs. Additionally, we propose a simple pre-training technique that leads to fully or partially shared models capable of achieving even greater inference acceleration. The experimental results demonstrate the effectiveness of our methods on both autoregressive and autoencoding PLMs, providing novel insights into more efficient utilization of parameter-shared models in resource-constrained settings.", }
Parameter-shared pre-trained language models (PLMs) have emerged as a successful approach in resource-constrained environments, enabling substantial reductions in model storage and memory costs without significant performance compromise. However, it is important to note that parameter sharing does not alleviate computational burdens associated with inference, thus impeding its practicality in situations characterized by stringent latency requirements or limited computational resources. Building upon neural ordinary differential equations (ODEs), we introduce a straightforward technique to enhance the inference efficiency of parameter-shared PLMs. Additionally, we propose a simple pre-training technique that leads to fully or partially shared models capable of achieving even greater inference acceleration. The experimental results demonstrate the effectiveness of our methods on both autoregressive and autoencoding PLMs, providing novel insights into more efficient utilization of parameter-shared models in resource-constrained settings.
[ "Chen, Weize", "Xu, Xiaoyue", "Han, Xu", "Lin, Yankai", "Xie, Ruobing", "Liu, Zhiyuan", "Sun, Maosong", "Zhou, Jie" ]
Boosting Inference Efficiency: Unleashing the Power of Parameter-Shared Pre-trained Language Models
findings-emnlp.738
2310.12818
[ "" ]
https://huggingface.co/papers/2310.12818
1
0
0
8
[]
[]
[]
1
Poster
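The parameter-sharing paper above builds on the view of a shared-weight transformer as an ODE solver, where inference cost comes from the number of solver steps rather than the number of stored layers. The sketch below shows only the baseline ingredient, one layer applied repeatedly with a tunable step count; the ODE-based acceleration itself is not reproduced here, and all names are illustrative.

```python
import torch
import torch.nn as nn

class SharedLayerEncoder(nn.Module):
    """One transformer layer reused T times (ALBERT-style weight sharing).
    Storage stays constant in T; compute scales with T, which is why the
    paper targets the number of iterations for inference speedups."""
    def __init__(self, d_model=64, nhead=4):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)

    def forward(self, x, steps=12):
        for _ in range(steps):  # identical parameters at every step
            x = self.layer(x)
        return x

model = SharedLayerEncoder()
x = torch.randn(2, 10, 64)
deep = model(x, steps=12)  # training-time depth
fast = model(x, steps=4)   # cheaper inference, same parameter count
```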
https://aclanthology.org/2023.findings-emnlp.739.bib
https://aclanthology.org/2023.findings-emnlp.739/
@inproceedings{chen-etal-2023-natural, title = "Natural Response Generation for {C}hinese Reading Comprehension", author = "Chen, Nuo and Li, Hongguang and Bao, Yinan and Wang, Baoyuan and Li, Jia", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.739", doi = "10.18653/v1/2023.findings-emnlp.739", pages = "11068--11081", abstract = "Machine reading comprehension (MRC) is an important area of conversation agents and draws a lot of attention. However, there is a notable limitation to current MRC benchmarks: the labeled answers are mostly either spans extracted from the target corpus or the choices of the given candidates, ignoring the natural aspect of high-quality responses. As a result, MRC models trained on these datasets cannot generate human-like responses in real QA scenarios. To this end, we construct a new dataset called \textbf{Penguin} to promote the research of MRC, providing a training and test bed for natural response generation in real scenarios. Concretely, Penguin consists of 200k training instances with high-quality, fluent, and well-informed responses. Penguin is the first benchmark towards natural response generation in Chinese MRC on a relatively large scale. To address the challenges in Penguin, we develop two strong baselines: end-to-end and two-stage frameworks. Following that, we further design \textit{Prompt-BART}: fine-tuning pre-trained generative language models with a mixture of prefix prompts in Penguin. Extensive experiments validate the effectiveness of this design.", }
Machine reading comprehension (MRC) is an important area of conversation agents and draws a lot of attention. However, there is a notable limitation to current MRC benchmarks: the labeled answers are mostly either spans extracted from the target corpus or the choices of the given candidates, ignoring the natural aspect of high-quality responses. As a result, MRC models trained on these datasets cannot generate human-like responses in real QA scenarios. To this end, we construct a new dataset called \textbf{Penguin} to promote the research of MRC, providing a training and test bed for natural response generation in real scenarios. Concretely, Penguin consists of 200k training instances with high-quality, fluent, and well-informed responses. Penguin is the first benchmark towards natural response generation in Chinese MRC on a relatively large scale. To address the challenges in Penguin, we develop two strong baselines: end-to-end and two-stage frameworks. Following that, we further design \textit{Prompt-BART}: fine-tuning pre-trained generative language models with a mixture of prefix prompts in Penguin. Extensive experiments validate the effectiveness of this design.
[ "Chen, Nuo", "Li, Hongguang", "Bao, Yinan", "Wang, Baoyuan", "Li, Jia" ]
Natural Response Generation for Chinese Reading Comprehension
findings-emnlp.739
2302.08817
[ "https://github.com/nuochenpku/penguin" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.740.bib
https://aclanthology.org/2023.findings-emnlp.740/
@inproceedings{wang-etal-2023-treepiece, title = "Treepiece: Faster Semantic Parsing via Tree Tokenization", author = "Wang, Sid and Shrivastava, Akshat and Livshits, Aleksandr", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.740", doi = "10.18653/v1/2023.findings-emnlp.740", pages = "11082--11092", abstract = "\textit{Autoregressive} (AR) encoder-decoder neural networks have proved successful in many NLP problems, including \textit{Semantic Parsing} {--} a task that translates natural language to machine-readable \textit{parse trees}. However, the sequential prediction process of AR models can be slow. To accelerate AR for semantic parsing, we introduce a new technique called \textit{TreePiece} that tokenizes a parse tree into subtrees and generates one subtree per decoding step. On the TOPv2 benchmark, TreePiece shows 4.6 times faster decoding speed than standard AR, and comparable speed but significantly higher accuracy compared to \textit{Non-Autoregressive} (NAR).", }
\textit{Autoregressive} (AR) encoder-decoder neural networks have proved successful in many NLP problems, including \textit{Semantic Parsing} {--} a task that translates natural language to machine-readable \textit{parse trees}. However, the sequential prediction process of AR models can be slow. To accelerate AR for semantic parsing, we introduce a new technique called \textit{TreePiece} that tokenizes a parse tree into subtrees and generates one subtree per decoding step. On the TOPv2 benchmark, TreePiece shows 4.6 times faster decoding speed than standard AR, and comparable speed but significantly higher accuracy compared to \textit{Non-Autoregressive} (NAR).
[ "Wang, Sid", "Shrivastava, Akshat", "Livshits, Aleks", "r" ]
Treepiece: Faster Semantic Parsing via Tree Tokenization
findings-emnlp.740
2303.17161
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
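TreePiece's core move, per the abstract, is emitting whole subtrees as single decoding steps. A toy top-down tokenizer in that spirit is sketched below; the tree encoding, the vocabulary, and the fallback rule are assumptions, not the paper's actual algorithm.

```python
def tokenize_tree(tree, vocab):
    """Hypothetical subtree tokenization: if the current subtree is in the
    vocabulary, emit it as one token; otherwise emit the node label and
    recurse into the children. Trees are nested tuples; leaves are strings.
    """
    if tree in vocab or not isinstance(tree, tuple):
        return [tree]
    label, *children = tree
    tokens = [label]
    for child in children:
        tokens.extend(tokenize_tree(child, vocab))
    return tokens

vocab = {("SL:DATE_TIME", "9 am")}  # subtrees granted single-token status
tree = ("IN:CREATE_ALARM", ("SL:DATE_TIME", "9 am"), ("SL:NAME", "gym"))
print(tokenize_tree(tree, vocab))
# ['IN:CREATE_ALARM', ('SL:DATE_TIME', '9 am'), 'SL:NAME', 'gym']
```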
https://aclanthology.org/2023.findings-emnlp.741.bib
https://aclanthology.org/2023.findings-emnlp.741/
@inproceedings{wu-etal-2023-semantic, title = "Semantic Parsing by Large Language Models for Intricate Updating Strategies of Zero-Shot Dialogue State Tracking", author = "Wu, Yuxiang and Dong, Guanting and Xu, Weiran", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.741", doi = "10.18653/v1/2023.findings-emnlp.741", pages = "11093--11099", abstract = "Zero-shot Dialogue State Tracking (DST) addresses the challenge of acquiring and annotating task-oriented dialogues, which can be time-consuming and costly. However, DST extends beyond simple slot-filling and requires effective updating strategies for tracking dialogue state as conversations progress. In this paper, we propose ParsingDST, a new In-Context Learning (ICL) method, to introduce additional intricate updating strategies in zero-shot DST. Our approach reformulates the DST task by leveraging powerful Large Language Models (LLMs) and translating the original dialogue text to JSON through semantic parsing as an intermediate state. We also design a novel framework that includes more modules to ensure the effectiveness of updating strategies in the text-to-JSON process. Experimental results demonstrate that our approach outperforms existing zero-shot DST methods on MultiWOZ, exhibiting significant improvements in Joint Goal Accuracy (JGA) and slot accuracy compared to existing ICL methods.", }
Zero-shot Dialogue State Tracking (DST) addresses the challenge of acquiring and annotating task-oriented dialogues, which can be time-consuming and costly. However, DST extends beyond simple slot-filling and requires effective updating strategies for tracking dialogue state as conversations progress. In this paper, we propose ParsingDST, a new In-Context Learning (ICL) method, to introduce additional intricate updating strategies in zero-shot DST. Our approach reformulates the DST task by leveraging powerful Large Language Models (LLMs) and translating the original dialogue text to JSON through semantic parsing as an intermediate state. We also design a novel framework that includes more modules to ensure the effectiveness of updating strategies in the text-to-JSON process. Experimental results demonstrate that our approach outperforms existing zero-shot DST methods on MultiWOZ, exhibiting significant improvements in Joint Goal Accuracy (JGA) and slot accuracy compared to existing ICL methods.
[ "Wu, Yuxiang", "Dong, Guanting", "Xu, Weiran" ]
Semantic Parsing by Large Language Models for Intricate Updating Strategies of Zero-Shot Dialogue State Tracking
findings-emnlp.741
2310.10520
[ "https://github.com/ToLightUpTheSky/ParsingDST" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
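ParsingDST's intermediate representation is a JSON dialogue state that the LLM updates turn by turn. The sketch below shows one plausible merge step for such updates, including slot deletion; the update schema and the delete marker are assumptions, not the paper's exact format.

```python
import json

def apply_state_update(prev_state, llm_json, delete_marker="[DELETE]"):
    """Merge a model-emitted JSON update into the running dialogue state:
    slots mapped to the delete marker are removed, others are overwritten."""
    update = json.loads(llm_json)
    state = {domain: dict(slots) for domain, slots in prev_state.items()}
    for domain, slots in update.items():
        state.setdefault(domain, {})
        for slot, value in slots.items():
            if value == delete_marker:
                state[domain].pop(slot, None)
            else:
                state[domain][slot] = value
    return state

prev = {"hotel": {"area": "north", "stars": "4"}}
turn_update = '{"hotel": {"stars": "[DELETE]", "parking": "yes"}}'
print(apply_state_update(prev, turn_update))
# {'hotel': {'area': 'north', 'parking': 'yes'}}
```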
https://aclanthology.org/2023.findings-emnlp.742.bib
https://aclanthology.org/2023.findings-emnlp.742/
@inproceedings{bang-etal-2023-mitigating, title = "Mitigating Framing Bias with Polarity Minimization Loss", author = "Bang, Yejin and Lee, Nayeon and Fung, Pascale", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.742", doi = "10.18653/v1/2023.findings-emnlp.742", pages = "11100--11110", abstract = "Framing bias plays a significant role in exacerbating political polarization by distorting the perception of actual events. Media outlets with divergent political stances often use polarized language in their reporting of the same event. We propose a new loss function that encourages the model to minimize the polarity difference between the polarized input articles to reduce framing bias. Specifically, our loss is designed to jointly optimize the model to map polarity ends bidirectionally. Our experimental results demonstrate that incorporating the proposed polarity minimization loss leads to a substantial reduction in framing bias when compared to a BART-based multi-document summarization model. Notably, we find that the effectiveness of this approach is most pronounced when the model is trained to minimize the polarity loss associated with informational framing bias (i.e., skewed selection of information to report).", }
Framing bias plays a significant role in exacerbating political polarization by distorting the perception of actual events. Media outlets with divergent political stances often use polarized language in their reporting of the same event. We propose a new loss function that encourages the model to minimize the polarity difference between the polarized input articles to reduce framing bias. Specifically, our loss is designed to jointly optimize the model to map polarity ends bidirectionally. Our experimental results demonstrate that incorporating the proposed polarity minimization loss leads to a substantial reduction in framing bias when compared to a BART-based multi-document summarization model. Notably, we find that the effectiveness of this approach is most pronounced when the model is trained to minimize the polarity loss associated with informational framing bias (i.e., skewed selection of information to report).
[ "Bang, Yejin", "Lee, Nayeon", "Fung, Pascale" ]
Mitigating Framing Bias with Polarity Minimization Loss
findings-emnlp.742
2311.01817
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
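The polarity minimization loss above penalizes the polarity gap between generations conditioned on opposing source articles. A toy rendering of that composite objective follows; the polarity scores would come from some differentiable scorer, and the weighting is an assumption, not the paper's code.

```python
import torch

def polarity_minimization_loss(nll_summary, polarity_left, polarity_right, lam=0.1):
    """Toy composite objective: generation NLL plus a penalty on the polarity
    gap between summaries produced from left- vs right-leaning inputs."""
    polarity_gap = (polarity_left - polarity_right).abs().mean()
    return nll_summary + lam * polarity_gap

nll = torch.tensor(2.3)                                    # stand-in NLL
p_left, p_right = torch.tensor([0.7]), torch.tensor([-0.4])
print(polarity_minimization_loss(nll, p_left, p_right))    # tensor(2.4100)
```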
https://aclanthology.org/2023.findings-emnlp.743.bib
https://aclanthology.org/2023.findings-emnlp.743/
@inproceedings{gao-etal-2023-chatgpt, title = "Is {C}hat{GPT} a Good Causal Reasoner? A Comprehensive Evaluation", author = "Gao, Jinglong and Ding, Xiao and Qin, Bing and Liu, Ting", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.743", doi = "10.18653/v1/2023.findings-emnlp.743", pages = "11111--11126", abstract = "Causal reasoning ability is crucial for numerous NLP applications. Despite the impressive emergent abilities of ChatGPT in various NLP tasks, it is unclear how well ChatGPT performs in causal reasoning. In this paper, we conduct the first comprehensive evaluation of ChatGPT{'}s causal reasoning capabilities. Experiments show that ChatGPT is not a good causal reasoner, but a good causal interpreter. Moreover, ChatGPT exhibits serious hallucination in causal reasoning, possibly due to the reporting biases between causal and non-causal relationships in natural language, as well as ChatGPT{'}s upgrading processes, such as RLHF. The In-Context Learning (ICL) and Chain-of-Thought (COT) techniques can further exacerbate such causal hallucination. Additionally, the causal reasoning ability of ChatGPT is sensitive to the words used to express the causal concept in prompts, and closed-ended prompts perform better than open-ended prompts. For events in sentences, ChatGPT excels at capturing explicit causality rather than implicit causality, and performs better in sentences with lower event density and smaller lexical distance between events.", }
Causal reasoning ability is crucial for numerous NLP applications. Despite the impressive emergent abilities of ChatGPT in various NLP tasks, it is unclear how well ChatGPT performs in causal reasoning. In this paper, we conduct the first comprehensive evaluation of ChatGPT{'}s causal reasoning capabilities. Experiments show that ChatGPT is not a good causal reasoner, but a good causal interpreter. Moreover, ChatGPT exhibits serious hallucination in causal reasoning, possibly due to the reporting biases between causal and non-causal relationships in natural language, as well as ChatGPT{'}s upgrading processes, such as RLHF. The In-Context Learning (ICL) and Chain-of-Thought (COT) techniques can further exacerbate such causal hallucination. Additionally, the causal reasoning ability of ChatGPT is sensitive to the words used to express the causal concept in prompts, and closed-ended prompts perform better than open-ended prompts. For events in sentences, ChatGPT excels at capturing explicit causality rather than implicit causality, and performs better in sentences with lower event density and smaller lexical distance between events.
[ "Gao, Jinglong", "Ding, Xiao", "Qin, Bing", "Liu, Ting" ]
Is ChatGPT a Good Causal Reasoner? A Comprehensive Evaluation
findings-emnlp.743
2305.07375
[ "https://github.com/ArrogantL/ChatGPT4CausalReasoning" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.744.bib
https://aclanthology.org/2023.findings-emnlp.744/
@inproceedings{alves-etal-2023-steering, title = "Steering Large Language Models for Machine Translation with Finetuning and In-Context Learning", author = "Alves, Duarte and Guerreiro, Nuno and Alves, Jo{\~a}o and Pombal, Jos{\'e} and Rei, Ricardo and de Souza, Jos{\'e} and Colombo, Pierre and Martins, Andre", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.744", doi = "10.18653/v1/2023.findings-emnlp.744", pages = "11127--11148", abstract = "Large language models (LLMs) are a promising avenue for machine translation (MT). However, current LLM-based MT systems are brittle: their effectiveness highly depends on the choice of few-shot examples and they often require extra post-processing due to overgeneration. Alternatives such as finetuning on translation instructions are computationally expensive and may weaken in-context learning capabilities, due to overspecialization. In this paper, we provide a closer look at this problem. We start by showing that adapter-based finetuning with LoRA matches the performance of traditional finetuning while reducing the number of training parameters by a factor of 50. This method also outperforms few-shot prompting and eliminates the need for post-processing or in-context examples. However, we show that finetuning generally degrades few-shot performance, hindering adaptation capabilities. Finally, to obtain the best of both worlds, we propose a simple approach that incorporates few-shot examples during finetuning. Experiments on 10 language pairs show that our proposed approach recovers the original few-shot capabilities while keeping the added benefits of finetuning.", }
Large language models (LLMs) are a promising avenue for machine translation (MT). However, current LLM-based MT systems are brittle: their effectiveness highly depends on the choice of few-shot examples and they often require extra post-processing due to overgeneration. Alternatives such as finetuning on translation instructions are computationally expensive and may weaken in-context learning capabilities, due to overspecialization. In this paper, we provide a closer look at this problem. We start by showing that adapter-based finetuning with LoRA matches the performance of traditional finetuning while reducing the number of training parameters by a factor of 50. This method also outperforms few-shot prompting and eliminates the need for post-processing or in-context examples. However, we show that finetuning generally degrades few-shot performance, hindering adaptation capabilities. Finally, to obtain the best of both worlds, we propose a simple approach that incorporates few-shot examples during finetuning. Experiments on 10 language pairs show that our proposed approach recovers the original few-shot capabilities while keeping the added benefits of finetuning.
[ "Alves, Duarte", "Guerreiro, Nuno", "Alves, Jo{\\~a}o", "Pombal, Jos{\\'e}", "Rei, Ricardo", "de Souza, Jos{\\'e}", "Colombo, Pierre", "Martins, Andre" ]
Steering Large Language Models for Machine Translation with Finetuning and In-Context Learning
findings-emnlp.744
2310.13448
[ "" ]
https://huggingface.co/papers/2310.13448
2
1
0
8
[]
[]
[]
1
Poster
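Two ingredients from the abstract above are easy to illustrate: adapter-based finetuning with LoRA and mixing few-shot demonstrations into the finetuning prompt. A sketch using the `peft` library follows; the base checkpoint (gated; any causal LM works) and the prompt template are assumptions, not the paper's released recipe.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "meta-llama/Llama-2-7b-hf"  # assumed base model, illustrative only
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# adapter-based finetuning: train a small fraction of the parameters
config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, config)
model.print_trainable_parameters()

def format_example(demos, source, target):
    """Hypothetical template that mixes few-shot demos into each SFT sample,
    echoing the paper's idea of finetuning with in-context examples."""
    shots = "".join(f"English: {s}\nPortuguese: {t}\n\n" for s, t in demos)
    return f"{shots}English: {source}\nPortuguese: {target}"
```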
https://aclanthology.org/2023.findings-emnlp.745.bib
https://aclanthology.org/2023.findings-emnlp.745/
@inproceedings{chen-etal-2023-many, title = "How Many Demonstrations Do You Need for In-context Learning?", author = "Chen, Jiuhai and Chen, Lichang and Zhu, Chen and Zhou, Tianyi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.745", doi = "10.18653/v1/2023.findings-emnlp.745", pages = "11149--11159", abstract = "Large language models (LLMs) are capable of performing complex reasoning via in-context learning (ICL) when provided with a few input-output demonstrations (demos), and are more powerful when intermediate reasoning steps (chain of thought, CoT) for the demos are given. Is it necessary to use multiple demos in ICL? In this paper, we study ICL using fewer demos for each test query on the tasks in (Wei et al., 2022). Surprisingly, we do not observe significant degradation when using only one randomly chosen demo. To study this phenomenon, for each test query, we categorize demos into {``}positive demos{''} leading to the correct answer, and {``}negative demos{''} resulting in wrong answers. Our analysis reveals an inherent bias in those widely studied datasets and the redundancy of demos: most demos are positive for a majority of test queries, which explains the good performance of ICL with one random demo. Moreover, ICL (with and w/o CoT) using only one positive demo significantly outperforms the multi-demo ICL adopted by most previous works, indicating the weakness of LLMs in finding positive demo(s) for input queries, which is difficult to evaluate on the biased datasets. Furthermore, we observe a counterintuitive behavior of multi-demo ICL, i.e., its accuracy degrades (improves) when given more positive (negative) demos. This implies that ICL can be easily misguided by interference among demos and their spurious correlations. Our analyses highlight several fundamental challenges that need to be addressed in LLM training, ICL, and benchmark design.", }
Large language models (LLMs) are capable of performing complex reasoning via in-context learning (ICL) when provided with a few input-output demonstrations (demos), and are more powerful when intermediate reasoning steps (chain of thought, CoT) for the demos are given. Is it necessary to use multiple demos in ICL? In this paper, we study ICL using fewer demos for each test query on the tasks in (Wei et al., 2022). Surprisingly, we do not observe significant degradation when using only one randomly chosen demo. To study this phenomenon, for each test query, we categorize demos into {``}positive demos{''} leading to the correct answer, and {``}negative demos{''} resulting in wrong answers. Our analysis reveals an inherent bias in those widely studied datasets and the redundancy of demos: most demos are positive for a majority of test queries, which explains the good performance of ICL with one random demo. Moreover, ICL (with and w/o CoT) using only one positive demo significantly outperforms the multi-demo ICL adopted by most previous works, indicating the weakness of LLMs in finding positive demo(s) for input queries, which is difficult to evaluate on the biased datasets. Furthermore, we observe a counterintuitive behavior of multi-demo ICL, i.e., its accuracy degrades (improves) when given more positive (negative) demos. This implies that ICL can be easily misguided by interference among demos and their spurious correlations. Our analyses highlight several fundamental challenges that need to be addressed in LLM training, ICL, and benchmark design.
[ "Chen, Jiuhai", "Chen, Lichang", "Zhu, Chen", "Zhou, Tianyi" ]
How Many Demonstrations Do You Need for In-context Learning?
findings-emnlp.745
2303.08119
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
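The abstract's demo-labeling procedure is mechanical: a demo is positive for a query if one-shot ICL with that demo alone yields the gold answer. A runnable sketch with a stubbed LLM call follows; `call_llm`, the prompt format, and the toy stub are all assumptions.

```python
def categorize_demos(demos, query, gold, call_llm):
    """Label each demo positive if one-shot ICL with it alone answers this
    query correctly; `call_llm` is a stand-in for any LLM API."""
    positive, negative = [], []
    for demo in demos:
        prompt = f"{demo['question']}\n{demo['answer']}\n\n{query}"
        answer = call_llm(prompt).strip()
        (positive if answer == gold else negative).append(demo)
    return positive, negative

# toy stub: "answers correctly" only when the matching demo leads the prompt
fake_llm = lambda p: "42" if p.startswith("What is 40 plus") else "?"
demos = [{"question": "What is 40 plus 2?", "answer": "42"},
         {"question": "Name a color.", "answer": "red"}]
pos, neg = categorize_demos(demos, "What is 40 plus 2?", "42", fake_llm)
print(len(pos), len(neg))  # 1 1
```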
https://aclanthology.org/2023.findings-emnlp.746.bib
https://aclanthology.org/2023.findings-emnlp.746/
@inproceedings{yamagiwa-etal-2023-improving, title = "Improving word mover{'}s distance by leveraging self-attention matrix", author = "Yamagiwa, Hiroaki and Yokoi, Sho and Shimodaira, Hidetoshi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.746", doi = "10.18653/v1/2023.findings-emnlp.746", pages = "11160--11183", abstract = "Measuring the semantic similarity between two sentences is still an important task. The word mover{'}s distance (WMD) computes the similarity via the optimal alignment between the sets of word embeddings. However, WMD does not utilize word order, making it challenging to distinguish sentences with significant overlaps of similar words, even if they are semantically very different. Here, we attempt to improve WMD by incorporating the sentence structure represented by BERT{'}s self-attention matrix (SAM). The proposed method is based on the Fused Gromov-Wasserstein distance, which simultaneously considers the similarity of the word embedding and the SAM for calculating the optimal transport between two sentences. Experiments demonstrate the proposed method enhances WMD and its variants in paraphrase identification with near-equivalent performance in semantic textual similarity.", }
Measuring the semantic similarity between two sentences is still an important task. The word mover{'}s distance (WMD) computes the similarity via the optimal alignment between the sets of word embeddings. However, WMD does not utilize word order, making it challenging to distinguish sentences with significant overlaps of similar words, even if they are semantically very different. Here, we attempt to improve WMD by incorporating the sentence structure represented by BERT{'}s self-attention matrix (SAM). The proposed method is based on the Fused Gromov-Wasserstein distance, which simultaneously considers the similarity of the word embedding and the SAM for calculating the optimal transport between two sentences. Experiments demonstrate the proposed method enhances WMD and its variants in paraphrase identification with near-equivalent performance in semantic textual similarity.
[ "Yamagiwa, Hiroaki", "Yokoi, Sho", "Shimodaira, Hidetoshi" ]
Improving word mover's distance by leveraging self-attention matrix
findings-emnlp.746
2211.06229
[ "https://github.com/ymgw55/WSMD" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
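The proposed WMD variant solves a Fused Gromov-Wasserstein problem that mixes a word-embedding cost with sentence structure taken from self-attention matrices. The POT library exposes a solver for this; the sketch below uses random arrays in place of real word vectors and BERT SAMs, and the alpha value is an arbitrary assumption.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
n1, n2, d = 5, 7, 32
emb1, emb2 = rng.normal(size=(n1, d)), rng.normal(size=(n2, d))  # word vectors
sam1 = rng.random((n1, n1)); sam1 /= sam1.sum(1, keepdims=True)  # stand-in SAMs
sam2 = rng.random((n2, n2)); sam2 /= sam2.sum(1, keepdims=True)

M = ot.dist(emb1, emb2)          # feature cost between the two sentences
p, q = ot.unif(n1), ot.unif(n2)  # uniform word masses
# alpha trades structure (SAM) against features; alpha=0 is plain OT on M
fgw_cost = ot.gromov.fused_gromov_wasserstein2(
    M, sam1, sam2, p, q, loss_fun="square_loss", alpha=0.3)
print(float(fgw_cost))
```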
https://aclanthology.org/2023.findings-emnlp.747.bib
https://aclanthology.org/2023.findings-emnlp.747/
@inproceedings{ji-etal-2023-improving, title = "Improving Span Representation by Efficient Span-Level Attention", author = "Ji, Pengyu and Yang, Songlin and Tu, Kewei", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.747", doi = "10.18653/v1/2023.findings-emnlp.747", pages = "11184--11192", abstract = "High-quality span representations are crucial to natural language processing tasks involving span prediction and classification. Most existing methods derive a span representation by aggregation of token representations within the span. In contrast, we aim to improve span representations by considering span-span interactions as well as more comprehensive span-token interactions. Specifically, we introduce layers of span-level attention on top of a normal token-level transformer encoder. Given that attention between all span pairs results in $O(n^4)$ complexity ($n$ being the sentence length) and not all span interactions are intuitively meaningful, we restrict the range of spans that a given span could attend to, thereby reducing overall complexity to $O(n^3)$. We conduct experiments on various span-related tasks and show superior performance of our model surpassing baseline models. Our code is publicly available at \url{https://github.com/jipy0222/Span-Level-Attention}.", }
High-quality span representations are crucial to natural language processing tasks involving span prediction and classification. Most existing methods derive a span representation by aggregation of token representations within the span. In contrast, we aim to improve span representations by considering span-span interactions as well as more comprehensive span-token interactions. Specifically, we introduce layers of span-level attention on top of a normal token-level transformer encoder. Given that attention between all span pairs results in $O(n^4)$ complexity ($n$ being the sentence length) and not all span interactions are intuitively meaningful, we restrict the range of spans that a given span could attend to, thereby reducing overall complexity to $O(n^3)$. We conduct experiments on various span-related tasks and show superior performance of our model surpassing baseline models. Our code is publicly available at \url{https://github.com/jipy0222/Span-Level-Attention}.
[ "Ji, Pengyu", "Yang, Songlin", "Tu, Kewei" ]
Improving Span Representation by Efficient Span-Level Attention
findings-emnlp.747
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
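The span-attention paper restricts which spans a given span may attend to, cutting the naive O(n^4) pairing down to O(n^3). The toy sketch below uses one possible restriction (overlapping spans only) to show the masking mechanics; the paper's actual neighbourhood definition may differ.

```python
import torch

def restricted_span_attention(tokens, spans):
    """Toy span-level attention where a span attends only to spans it
    overlaps. tokens: (n, d); spans: list of inclusive (i, j) pairs."""
    reps = torch.stack([tokens[i:j + 1].mean(0) for i, j in spans])  # (S, d)
    scores = reps @ reps.T / reps.shape[-1] ** 0.5                   # (S, S)
    overlap = torch.tensor([[min(j1, j2) >= max(i1, i2)
                             for (i2, j2) in spans]
                            for (i1, j1) in spans])
    scores = scores.masked_fill(~overlap, float("-inf"))
    return torch.softmax(scores, dim=-1) @ reps  # updated span representations

tokens = torch.randn(6, 16)
spans = [(0, 1), (1, 3), (4, 5)]
out = restricted_span_attention(tokens, spans)   # (3, 16)
```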
https://aclanthology.org/2023.findings-emnlp.748.bib
https://aclanthology.org/2023.findings-emnlp.748/
@inproceedings{stepputtis-etal-2023-long, title = "Long-Horizon Dialogue Understanding for Role Identification in the Game of Avalon with Large Language Models", author = "Stepputtis, Simon and Campbell, Joseph and Xie, Yaqi and Qi, Zhengyang and Zhang, Wenxin and Wang, Ruiyi and Rangreji, Sanketh and Lewis, Charles and Sycara, Katia", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.748", doi = "10.18653/v1/2023.findings-emnlp.748", pages = "11193--11208", abstract = "Deception and persuasion play a critical role in long-horizon dialogues between multiple parties, especially when the interests, goals, and motivations of the participants are not aligned. Such complex tasks pose challenges for current Large Language Models (LLMs) as deception and persuasion can easily mislead them, especially in long-horizon multi-party dialogues. To this end, we explore the game of Avalon: The Resistance, a social deduction game in which players must determine each other{'}s hidden identities to complete their team{'}s objective. We introduce an online testbed and a dataset containing 20 carefully collected and labeled games among human players that exhibit long-horizon deception in a cooperative-competitive setting. We discuss the capabilities of LLMs to utilize deceptive long-horizon conversations between six human players to determine each player{'}s goal and motivation. In particular, we discuss the multimodal integration of the chat between the players and the game{'}s state that grounds the conversation, providing further insights into the true player identities. We find that even current state-of-the-art LLMs do not reach human performance, making our dataset a compelling benchmark to investigate the decision-making and language-processing capabilities of LLMs. Our dataset and online testbed can be found at our project website: https://sstepput.github.io/Avalon-NLU/", }
Deception and persuasion play a critical role in long-horizon dialogues between multiple parties, especially when the interests, goals, and motivations of the participants are not aligned. Such complex tasks pose challenges for current Large Language Models (LLMs) as deception and persuasion can easily mislead them, especially in long-horizon multi-party dialogues. To this end, we explore the game of Avalon: The Resistance, a social deduction game in which players must determine each other{'}s hidden identities to complete their team{'}s objective. We introduce an online testbed and a dataset containing 20 carefully collected and labeled games among human players that exhibit long-horizon deception in a cooperative-competitive setting. We discuss the capabilities of LLMs to utilize deceptive long-horizon conversations between six human players to determine each player{'}s goal and motivation. In particular, we discuss the multimodal integration of the chat between the players and the game{'}s state that grounds the conversation, providing further insights into the true player identities. We find that even current state-of-the-art LLMs do not reach human performance, making our dataset a compelling benchmark to investigate the decision-making and language-processing capabilities of LLMs. Our dataset and online testbed can be found at our project website: https://sstepput.github.io/Avalon-NLU/
[ "Stepputtis, Simon", "Campbell, Joseph", "Xie, Yaqi", "Qi, Zhengyang", "Zhang, Wenxin", "Wang, Ruiyi", "Rangreji, Sanketh", "Lewis, Charles", "Sycara, Katia" ]
Long-Horizon Dialogue Understanding for Role Identification in the Game of Avalon with Large Language Models
findings-emnlp.748
2311.05720
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.749.bib
https://aclanthology.org/2023.findings-emnlp.749/
@inproceedings{han-etal-2023-improving, title = "Improving Sequential Model Editing with Fact Retrieval", author = "Han, Xiaoqi and Li, Ru and Tan, Hongye and Yuanlong, Wang and Chai, Qinghua and Pan, Jeff", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.749", doi = "10.18653/v1/2023.findings-emnlp.749", pages = "11209--11224", abstract = "The task of sequential model editing is to fix erroneous knowledge in Pre-trained Language Models (PLMs) efficiently, precisely, and continuously. Although existing methods can deal with a small number of modifications, these methods experience a performance decline or require additional annotated data when the number of edits increases. In this paper, we propose a $\textbf{R}$etrieval $\textbf{A}$ugmented $\textbf{S}$equential Model $\textbf{E}$diting framework ($\textbf{RASE}$) that leverages factual information to enhance editing generalization and to guide the identification of edits by retrieving related facts from the fact-patch memory we constructed. Our main findings are: (i) State-of-the-art models can hardly correct massive mistakes stably and efficiently; (ii) Even if we scale up to thousands of edits, RASE can significantly enhance editing generalization and maintain consistent performance and efficiency; (iii) RASE can edit large-scale PLMs and increase the performance of different editors. Moreover, it can integrate with ChatGPT and further improve performance. Our code and data are available at: https://github.com/sev777/RASE.", }
The task of sequential model editing is to fix erroneous knowledge in Pre-trained Language Models (PLMs) efficiently, precisely, and continuously. Although existing methods can deal with a small number of modifications, these methods experience a performance decline or require additional annotated data when the number of edits increases. In this paper, we propose a $\textbf{R}$etrieval $\textbf{A}$ugmented $\textbf{S}$equential Model $\textbf{E}$diting framework ($\textbf{RASE}$) that leverages factual information to enhance editing generalization and to guide the identification of edits by retrieving related facts from the fact-patch memory we constructed. Our main findings are: (i) State-of-the-art models can hardly correct massive mistakes stably and efficiently; (ii) Even if we scale up to thousands of edits, RASE can significantly enhance editing generalization and maintain consistent performance and efficiency; (iii) RASE can edit large-scale PLMs and increase the performance of different editors. Moreover, it can integrate with ChatGPT and further improve performance. Our code and data are available at: https://github.com/sev777/RASE.
[ "Han, Xiaoqi", "Li, Ru", "Tan, Hongye", "Yuanlong, Wang", "Chai, Qinghua", "Pan, Jeff" ]
Improving Sequential Model Editing with Fact Retrieval
findings-emnlp.749
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
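RASE's fact-patch memory pairs stored facts with edits and retrieves the relevant patch for an incoming query. A minimal stand-in with a hashing bag-of-words embedder follows; the embedder, threshold, and patch format are assumptions made for runnability, not the released code.

```python
import numpy as np

def toy_embed(text, dim=64):
    """Hashing bag-of-words stand-in for a real sentence encoder."""
    v = np.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0
    return v

class FactPatchMemory:
    def __init__(self, embed):
        self.embed, self.keys, self.patches = embed, [], []

    def add(self, fact, patch):
        self.keys.append(self.embed(fact))
        self.patches.append(patch)

    def retrieve(self, query, threshold=0.5):
        """Return the patch of the most similar stored fact, if similar enough."""
        q = self.embed(query)
        sims = np.array([k @ q / (np.linalg.norm(k) * np.linalg.norm(q) + 1e-9)
                         for k in self.keys])
        best = int(sims.argmax())
        return self.patches[best] if sims[best] >= threshold else None

memory = FactPatchMemory(toy_embed)
memory.add("the capital of France is Paris", patch="edit#42")
print(memory.retrieve("what is the capital of France"))  # edit#42
```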
https://aclanthology.org/2023.findings-emnlp.750.bib
https://aclanthology.org/2023.findings-emnlp.750/
@inproceedings{sun-etal-2023-battle, title = "Battle of the Large Language Models: Dolly vs {LL}a{MA} vs Vicuna vs Guanaco vs Bard vs {C}hat{GPT} - A Text-to-{SQL} Parsing Comparison", author = "Sun, Shuo and Zhang, Yuchen and Yan, Jiahuan and Gao, Yuze and Ong, Donovan and Chen, Bin and Su, Jian", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.750", doi = "10.18653/v1/2023.findings-emnlp.750", pages = "11225--11238", abstract = "The success of ChatGPT has ignited an AI race, with researchers striving to develop new large language models (LLMs) that can match or surpass the language understanding and generation abilities of commercial ones. In recent times, a number of models have emerged, claiming performance near that of GPT-3.5 or GPT-4 through various instruction-tuning methods. As practitioners of Text-to-SQL parsing, we are grateful for their valuable contributions to open-source research. However, it is important to approach these claims with a sense of scrutiny and ascertain the actual effectiveness of these models. Therefore, we pit six popular large language models against each other, systematically evaluating their Text-to-SQL parsing capability on nine benchmark datasets with five different prompting strategies, covering both zero-shot and few-shot scenarios. Regrettably, the open-sourced models fell significantly short of the performance achieved by closed-source models like GPT-3.5, highlighting the need for further work to bridge the performance gap between these models.", }
The success of ChatGPT has ignited an AI race, with researchers striving to develop new large language models (LLMs) that can match or surpass the language understanding and generation abilities of commercial ones. In recent times, a number of models have emerged, claiming performance near that of GPT-3.5 or GPT-4 through various instruction-tuning methods. As practitioners of Text-to-SQL parsing, we are grateful for their valuable contributions to open-source research. However, it is important to approach these claims with a sense of scrutiny and ascertain the actual effectiveness of these models. Therefore, we pit six popular large language models against each other, systematically evaluating their Text-to-SQL parsing capability on nine benchmark datasets with five different prompting strategies, covering both zero-shot and few-shot scenarios. Regrettably, the open-sourced models fell significantly short of the performance achieved by closed-source models like GPT-3.5, highlighting the need for further work to bridge the performance gap between these models.
[ "Sun, Shuo", "Zhang, Yuchen", "Yan, Jiahuan", "Gao, Yuze", "Ong, Donovan", "Chen, Bin", "Su, Jian" ]
Battle of the Large Language Models: Dolly vs LLaMA vs Vicuna vs Guanaco vs Bard vs ChatGPT - A Text-to-SQL Parsing Comparison
findings-emnlp.750
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.751.bib
https://aclanthology.org/2023.findings-emnlp.751/
@inproceedings{geng-etal-2023-kbioxlm, title = "{KB}io{XLM}: A Knowledge-anchored Biomedical Multilingual Pretrained Language Model", author = "Geng, Lei and Yan, Xu and Cao, Ziqiang and Li, Juntao and Li, Wenjie and Li, Sujian and Zhou, Xinjie and Yang, Yang and Zhang, Jun", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.751", doi = "10.18653/v1/2023.findings-emnlp.751", pages = "11239--11250", abstract = "Most biomedical pretrained language models are monolingual and cannot handle the growing cross-lingual requirements. The scarcity of non-English domain corpora, not to mention parallel data, poses a significant hurdle in training multilingual biomedical models. Since knowledge forms the core of domain-specific corpora and can be translated into various languages accurately, we propose a model called KBioXLM, which transforms the multilingual pretrained model XLM-R into the biomedical domain using a knowledge-anchored approach. We construct a biomedical multilingual corpus by incorporating knowledge alignments at three granularities (entity, fact, and passage levels) into monolingual corpora. Then we design three corresponding training tasks (entity masking, relation masking, and passage relation prediction) and continue training on top of the XLM-R model to enhance its domain cross-lingual ability. To validate the effectiveness of our model, we translate the English benchmarks of multiple tasks into Chinese. Experimental results demonstrate that our model significantly outperforms monolingual and multilingual pretrained models in cross-lingual zero-shot and few-shot scenarios, achieving improvements of up to 10+ points.", }
Most biomedical pretrained language models are monolingual and cannot handle the growing cross-lingual requirements. The scarcity of non-English domain corpora, not to mention parallel data, poses a significant hurdle in training multilingual biomedical models. Since knowledge forms the core of domain-specific corpora and can be translated into various languages accurately, we propose a model called KBioXLM, which transforms the multilingual pretrained model XLM-R into the biomedical domain using a knowledge-anchored approach. We construct a biomedical multilingual corpus by incorporating knowledge alignments at three granularities (entity, fact, and passage levels) into monolingual corpora. Then we design three corresponding training tasks (entity masking, relation masking, and passage relation prediction) and continue training on top of the XLM-R model to enhance its domain cross-lingual ability. To validate the effectiveness of our model, we translate the English benchmarks of multiple tasks into Chinese. Experimental results demonstrate that our model significantly outperforms monolingual and multilingual pretrained models in cross-lingual zero-shot and few-shot scenarios, achieving improvements of up to 10+ points.
[ "Geng, Lei", "Yan, Xu", "Cao, Ziqiang", "Li, Juntao", "Li, Wenjie", "Li, Sujian", "Zhou, Xinjie", "Yang, Yang", "Zhang, Jun" ]
KBioXLM: A Knowledge-anchored Biomedical Multilingual Pretrained Language Model
findings-emnlp.751
2311.11564
[ "https://github.com/ngwlh-gl/kbioxlm" ]
https://huggingface.co/papers/2311.11564
0
1
0
9
[ "ngwlh/KBioXLM" ]
[]
[]
1
Poster
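One of KBioXLM's three alignment-driven objectives is entity masking: mask whole aligned entity mentions so the model must recover the knowledge anchoring the two languages. A toy version follows; the span format and masking rate are illustrative assumptions.

```python
import random

def entity_mask(tokens, entity_spans, mask_token="[MASK]", p=0.5):
    """Mask whole entity mentions (half-open [start, end) spans) rather than
    random subwords; a toy take on the paper's entity-masking task."""
    tokens = list(tokens)
    for start, end in entity_spans:
        if random.random() < p:
            for i in range(start, end):
                tokens[i] = mask_token
    return tokens

sentence = "aspirin inhibits cyclooxygenase enzymes".split()
print(entity_mask(sentence, entity_spans=[(0, 1), (2, 3)], p=1.0))
# ['[MASK]', 'inhibits', '[MASK]', 'enzymes']
```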
https://aclanthology.org/2023.findings-emnlp.752.bib
https://aclanthology.org/2023.findings-emnlp.752/
@inproceedings{nair-resnik-2023-words, title = "Words, Subwords, and Morphemes: What Really Matters in the Surprisal-Reading Time Relationship?", author = "Nair, Sathvik and Resnik, Philip", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.752", doi = "10.18653/v1/2023.findings-emnlp.752", pages = "11251--11260", abstract = "An important assumption that comes with using LLMs on psycholinguistic data has gone unverified. LLM-based predictions are based on subword tokenization, not decomposition of words into morphemes. Does that matter? We carefully test this by comparing surprisal estimates using orthographic, morphological, and BPE tokenization against reading time data. Our results replicate previous findings and provide evidence that *in the aggregate*, predictions using BPE tokenization do not suffer relative to morphological and orthographic segmentation. However, a finer-grained analysis points to potential issues with relying on BPE-based tokenization, as well as providing promising results involving morphologically-aware surprisal estimates and suggesting a new method for evaluating morphological prediction.", }
An important assumption that comes with using LLMs on psycholinguistic data has gone unverified. LLM-based predictions are based on subword tokenization, not decomposition of words into morphemes. Does that matter? We carefully test this by comparing surprisal estimates using orthographic, morphological, and BPE tokenization against reading time data. Our results replicate previous findings and provide evidence that *in the aggregate*, predictions using BPE tokenization do not suffer relative to morphological and orthographic segmentation. However, a finer-grained analysis points to potential issues with relying on BPE-based tokenization, as well as providing promising results involving morphologically-aware surprisal estimates and suggesting a new method for evaluating morphological prediction.
[ "Nair, Sathvik", "Resnik, Philip" ]
Words, Subwords, and Morphemes: What Really Matters in the Surprisal-Reading Time Relationship?
findings-emnlp.752
2310.17774
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
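The BPE-based surprisal estimates compared in the paper above are typically obtained by summing subword surprisals within each word. A sketch with GPT-2 via `transformers` follows; note the sentence-initial piece has no left context and goes unscored, and the word-merging rule relies on GPT-2's `Ġ` word-boundary marker.

```python
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def word_surprisals(sentence):
    """Word surprisal (bits) as the sum of its subword surprisals."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logp = torch.log_softmax(model(ids).logits[0, :-1], dim=-1)
    # sub[t] = surprisal of token t+1 given tokens <= t
    sub = [-logp[t, ids[0, t + 1]].item() / math.log(2)
           for t in range(ids.shape[1] - 1)]
    words, totals = [], []
    for t in range(1, ids.shape[1]):
        piece = tok.convert_ids_to_tokens(int(ids[0, t]))
        if piece.startswith("Ġ") or not words:  # "Ġ" marks a new word
            words.append(piece.lstrip("Ġ"))
            totals.append(0.0)
        else:
            words[-1] += piece
        totals[-1] += sub[t - 1]
    return list(zip(words, totals))

print(word_surprisals("The accompaniment was lovely"))
```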
https://aclanthology.org/2023.findings-emnlp.753.bib
https://aclanthology.org/2023.findings-emnlp.753/
@inproceedings{li-etal-2023-zero, title = "A Zero-Shot Language Agent for Computer Control with Structured Reflection", author = "Li, Tao and Li, Gang and Deng, Zhiwei and Wang, Bryan and Li, Yang", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.753", doi = "10.18653/v1/2023.findings-emnlp.753", pages = "11261--11274", abstract = "Large language models (LLMs) have shown increasing capacity for planning and executing a high-level goal in a live computer environment (e.g. MiniWoB++). To perform a task, recent works often require a model to learn from trace examples of the task via either supervised learning or few/many-shot prompting. Without these trace examples, it remains a challenge for an agent to autonomously learn and improve its control of a computer, which limits the ability of an agent to perform a new task. We approach this problem with a zero-shot agent that requires no given expert traces. Our agent plans for executable actions on a partially observed environment, and iteratively progresses a task by identifying and learning from its mistakes via self-reflection and structured thought management. On the easy tasks of MiniWoB++, we show that our zero-shot agent often outperforms recent SoTAs, with more efficient reasoning. For tasks with more complexity, our reflective agent performs on par with prior best models, even though previous works had the advantages of accessing expert traces or additional screen information.", }
Large language models (LLMs) have shown increasing capacity for planning and executing a high-level goal in a live computer environment (e.g. MiniWoB++). To perform a task, recent works often require a model to learn from trace examples of the task via either supervised learning or few/many-shot prompting. Without these trace examples, it remains a challenge for an agent to autonomously learn and improve its control of a computer, which limits the ability of an agent to perform a new task. We approach this problem with a zero-shot agent that requires no given expert traces. Our agent plans for executable actions on a partially observed environment, and iteratively progresses a task by identifying and learning from its mistakes via self-reflection and structured thought management. On the easy tasks of MiniWoB++, we show that our zero-shot agent often outperforms recent SoTAs, with more efficient reasoning. For tasks with more complexity, our reflective agent performs on par with prior best models, even though previous works had the advantages of accessing expert traces or additional screen information.
[ "Li, Tao", "Li, Gang", "Deng, Zhiwei", "Wang, Bryan", "Li, Yang" ]
A Zero-Shot Language Agent for Computer Control with Structured Reflection
findings-emnlp.753
2310.08740
[ "" ]
https://huggingface.co/papers/2310.08740
2
14
2
5
[]
[]
[]
1
Poster
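The zero-shot agent's structured reflection is a loop: act, and on failure ask the model to localize its mistake, then retry with that lesson in context. A skeleton with stub callables follows; the environment, prompts, and memory format are all assumptions, not MiniWoB++ or the paper's code.

```python
def run_with_reflection(task, env_factory, llm_plan, llm_reflect, max_attempts=3):
    """Plan, execute, and on failure store a reflection that conditions the
    next attempt; every callable is a stand-in for an LLM/API call."""
    lessons = []
    for _ in range(max_attempts):
        env = env_factory()
        actions = llm_plan(task, lessons, env.observe())
        for action in actions:
            env.step(action)
        if env.success():
            return actions
        lessons.append(llm_reflect(task, actions, env.observe()))
    return None

class ToyEnv:
    """Stub environment: succeeds iff the last action is 'click submit'."""
    def __init__(self): self.log = []
    def observe(self): return f"log={self.log}"
    def step(self, action): self.log.append(action)
    def success(self): return self.log[-1:] == ["click submit"]

fake_plan = lambda task, lessons, obs: (
    ["type name"] if not lessons else ["type name", "click submit"])
fake_reflect = lambda task, actions, obs: "mistake: never clicked submit"
print(run_with_reflection("fill form", ToyEnv, fake_plan, fake_reflect))
# ['type name', 'click submit'] -- second attempt succeeds after reflection
```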
https://aclanthology.org/2023.findings-emnlp.754.bib
https://aclanthology.org/2023.findings-emnlp.754/
@inproceedings{dong-etal-2023-steerlm, title = "{S}teer{LM}: Attribute Conditioned {SFT} as an (User-Steerable) Alternative to {RLHF}", author = "Dong, Yi and Wang, Zhilin and Sreedhar, Makesh and Wu, Xianchao and Kuchaiev, Oleksii", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.754", doi = "10.18653/v1/2023.findings-emnlp.754", pages = "11275--11288", abstract = "Model alignment with human preferences is an essential step in making Large Language Models (LLMs) helpful and consistent with human values. It typically consists of supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) stages. However, RLHF faces inherent limitations stemming from a complex training setup and its tendency to align the model with implicit values that end users cannot control at run-time. Moreover, reward models in the RLHF stage commonly rely on single-dimensional feedback as opposed to explicit, multifaceted signals that indicate attributes such as helpfulness, humor, and toxicity. To address these limitations, we propose SteerLM, a supervised fine-tuning method that empowers end-users to control responses during inference. SteerLM conditions responses to conform to an explicitly defined multi-dimensional set of attributes, thereby empowering a steerable AI capable of generating helpful and high-quality responses while maintaining customizability. Experiments show that SteerLM trained on open source datasets generates responses that are preferred by human and automatic evaluators to many state-of-the-art baselines trained with RLHF while being much easier to train. Try SteerLM at https://huggingface.co/nvidia/SteerLM-llama2-13B", }
Model alignment with human preferences is an essential step in making Large Language Models (LLMs) helpful and consistent with human values. It typically consists of supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) stages. However, RLHF faces inherent limitations stemming from a complex training setup and its tendency to align the model with implicit values that end users cannot control at run-time. Moreover, reward models in the RLHF stage commonly rely on single-dimensional feedback as opposed to explicit, multifaceted signals that indicate attributes such as helpfulness, humor, and toxicity. To address these limitations, we propose SteerLM, a supervised fine-tuning method that empowers end-users to control responses during inference. SteerLM conditions responses to conform to an explicitly defined multi-dimensional set of attributes, thereby empowering a steerable AI capable of generating helpful and high-quality responses while maintaining customizability. Experiments show that SteerLM trained on open source datasets generates responses that are preferred by human and automatic evaluators to many state-of-the-art baselines trained with RLHF while being much easier to train. Try SteerLM at https://huggingface.co/nvidia/SteerLM-llama2-13B
[ "Dong, Yi", "Wang, Zhilin", "Sreedhar, Makesh", "Wu, Xianchao", "Kuchaiev, Oleksii" ]
SteerLM: Attribute Conditioned SFT as an (User-Steerable) Alternative to RLHF
findings-emnlp.754
2310.05344
[ "" ]
https://huggingface.co/papers/2310.05344
0
1
0
5
[ "nvidia/SteerLM-llama2-13B", "nvidia/Llama2-70B-SteerLM-Chat", "nvidia/nemotron-3-8b-chat-4k-steerlm", "nvidia/Llama2-13B-SteerLM-RM", "nvidia/Llama3-70B-SteerLM-Chat" ]
[ "nvidia/HelpSteer", "orionriker/Mistral-HelpSteer" ]
[ "meval/multilingual-chatbot-arena-leaderboard", "Omnibus/InferenceClient_Chatbots", "li-qing/FIRE", "dbasu/multilingual-chatbot-arena-leaderboard", "Bofeee5675/FIRE", "evelyn-lo/evelyn", "K00B404/Teachershub" ]
1
Poster
https://aclanthology.org/2023.findings-emnlp.755.bib
https://aclanthology.org/2023.findings-emnlp.755/
@inproceedings{you-etal-2023-idealgpt, title = "{I}deal{GPT}: Iteratively Decomposing Vision and Language Reasoning via Large Language Models", author = "You, Haoxuan and Sun, Rui and Wang, Zhecan and Chen, Long and Wang, Gengyu and Ayyubi, Hammad and Chang, Kai-Wei and Chang, Shih-Fu", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.755", doi = "10.18653/v1/2023.findings-emnlp.755", pages = "11289--11303", abstract = "The field of vision-and-language (VL) understanding has made unprecedented progress with end-to-end large pre-trained VL models (VLMs). However, they still fall short in zero-shot reasoning tasks that require multi-step inferencing. To achieve this goal, previous works resort to a divide-and-conquer pipeline. In this paper, we argue that previous efforts have several inherent shortcomings: 1) They rely on domain-specific sub-question decomposing models. 2) They force models to predict the final answer even if the sub-questions or sub-answers provide insufficient information. We address these limitations via IdealGPT, a framework that iteratively decomposes VL reasoning using large language models (LLMs). Specifically, IdealGPT utilizes an LLM to generate sub-questions, a VLM to provide corresponding sub-answers, and another LLM to reason to achieve the final answer. These three modules perform the divide-and-conquer procedure iteratively until the model is confident about the final answer to the main question. We evaluate IdealGPT on multiple challenging VL reasoning tasks under a zero-shot setting. In particular, our IdealGPT outperforms the best existing GPT-4-like models by an absolute 10{\%} on VCR and 15{\%} on SNLI-VE. Code is available at https://github.com/Hxyou/IdealGPT.", }
The field of vision-and-language (VL) understanding has made unprecedented progress with end-to-end large pre-trained VL models (VLMs). However, they still fall short in zero-shot reasoning tasks that require multi-step inferencing. To achieve this goal, previous works resort to a divide-and-conquer pipeline. In this paper, we argue that previous efforts have several inherent shortcomings: 1) They rely on domain-specific sub-question decomposing models. 2) They force models to predict the final answer even if the sub-questions or sub-answers provide insufficient information. We address these limitations via IdealGPT, a framework that iteratively decomposes VL reasoning using large language models (LLMs). Specifically, IdealGPT utilizes an LLM to generate sub-questions, a VLM to provide corresponding sub-answers, and another LLM to reason to achieve the final answer. These three modules perform the divide-and-conquer procedure iteratively until the model is confident about the final answer to the main question. We evaluate IdealGPT on multiple challenging VL reasoning tasks under a zero-shot setting. In particular, our IdealGPT outperforms the best existing GPT-4-like models by an absolute 10{\%} on VCR and 15{\%} on SNLI-VE. Code is available at https://github.com/Hxyou/IdealGPT.
[ "You, Haoxuan", "Sun, Rui", "Wang, Zhecan", "Chen, Long", "Wang, Gengyu", "Ayyubi, Hammad", "Chang, Kai-Wei", "Chang, Shih-Fu" ]
IdealGPT: Iteratively Decomposing Vision and Language Reasoning via Large Language Models
findings-emnlp.755
2305.14985
[ "https://github.com/hxyou/idealgpt" ]
https://huggingface.co/papers/2305.14985
1
0
0
8
[]
[]
[]
1
Poster
https://aclanthology.org/2023.findings-emnlp.756.bib
https://aclanthology.org/2023.findings-emnlp.756/
@inproceedings{ali-etal-2023-gri, title = "{GRI}: Graph-based Relative Isomorphism of Word Embedding Spaces", author = "Ali, Muhammad and Hu, Yan and Qin, Jianbin and Wang, Di", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.756", doi = "10.18653/v1/2023.findings-emnlp.756", pages = "11304--11313", abstract = "Automated construction of bi-lingual dictionaries using monolingual embedding spaces is a core challenge in machine translation. The end performance of these dictionaries relies on the geometric similarity of individual spaces, i.e., their degree of isomorphism. Existing attempts aimed at controlling the relative isomorphism of different spaces fail to incorporate the impact of lexically different but semantically related words in the training objective. To address this, we propose GRI that combines the distributional training objectives with attentive graph convolutions to unanimously consider the impact of lexical variations of semantically similar words required to define/compute the relative isomorphism of multiple spaces. Experimental evaluation shows that GRI outperforms the existing research by improving the average P@1 by a relative score of up to 63.6{\%}.", }
Automated construction of bi-lingual dictionaries using monolingual embedding spaces is a core challenge in machine translation. The end performance of these dictionaries relies on the geometric similarity of individual spaces, i.e., their degree of isomorphism. Existing attempts aimed at controlling the relative isomorphism of different spaces fail to incorporate the impact of lexically different but semantically related words in the training objective. To address this, we propose GRI that combines the distributional training objectives with attentive graph convolutions to unanimously consider the impact of lexical variations of semantically similar words required to define/compute the relative isomorphism of multiple spaces. Experimental evaluation shows that GRI outperforms the existing research by improving the average P@1 by a relative score of up to 63.6{\%}.
[ "Ali, Muhammad", "Hu, Yan", "Qin, Jianbin", "Wang, Di" ]
GRI: Graph-based Relative Isomorphism of Word Embedding Spaces
findings-emnlp.756
2310.12360
[ "https://github.com/asif6827/gri" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.757.bib
https://aclanthology.org/2023.findings-emnlp.757/
@inproceedings{mathur-etal-2023-personalm, title = "{P}ersona{LM}: Language Model Personalization via Domain-distributed Span Aggregated K-Nearest N-gram Retrieval Augmentation", author = "Mathur, Puneet and Liu, Zhe and Li, Ke and Ma, Yingyi and Keren, Gil and Ahmed, Zeeshan and Manocha, Dinesh and Zhang, Xuedong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.757", doi = "10.18653/v1/2023.findings-emnlp.757", pages = "11314--11328", abstract = "We introduce PersonaLM - Domain-distributed Span-Aggregated K-nearest N-gram retrieval augmentation to improve language modeling for Automatic Speech Recognition (ASR) personalization. PersonaLM leverages contextually similar n-gram word frequencies for recognizing rare word patterns associated with unseen domains. It aggregates the next-word probability distribution based on the relative importance of different domains to the input query. To achieve this, we propose a Span Aggregated Group-Contrastive Neural (SCAN) retriever that learns to rank external domains/users by utilizing a group-wise contrastive span loss that pulls together span representations belonging to the same group while pushing away spans from unrelated groups in the semantic space. We propose ASAP benchmark for ASR LM personalization that consists of three user-specific speech-to-text tasks for meetings, TED talks, and financial earnings calls. Extensive experiments show that PersonaLM significantly outperforms strong baselines with a 10-16{\%} improvement in perplexity and a 5-8{\%} reduction in Word Error Rates on popular Wikitext-103, UserLibri, and our ASAP dataset. We further demonstrate the usefulness of the SCAN retriever for improving user-personalized text generation and classification by retrieving relevant context for zero-shot prompting and few-shot fine-tuning of LLMs by 7-12{\%} on the LAMP benchmark.", }
We introduce PersonaLM - Domain-distributed Span-Aggregated K-nearest N-gram retrieval augmentation to improve language modeling for Automatic Speech Recognition (ASR) personalization. PersonaLM leverages contextually similar n-gram word frequencies for recognizing rare word patterns associated with unseen domains. It aggregates the next-word probability distribution based on the relative importance of different domains to the input query. To achieve this, we propose a Span Aggregated Group-Contrastive Neural (SCAN) retriever that learns to rank external domains/users by utilizing a group-wise contrastive span loss that pulls together span representations belonging to the same group while pushing away spans from unrelated groups in the semantic space. We propose the ASAP benchmark for ASR LM personalization, which consists of three user-specific speech-to-text tasks for meetings, TED talks, and financial earnings calls. Extensive experiments show that PersonaLM significantly outperforms strong baselines with a 10-16{\%} improvement in perplexity and a 5-8{\%} reduction in Word Error Rates on popular Wikitext-103, UserLibri, and our ASAP dataset. We further demonstrate the usefulness of the SCAN retriever for improving user-personalized text generation and classification by 7-12{\%} on the LAMP benchmark, by retrieving relevant context for zero-shot prompting and few-shot fine-tuning of LLMs.
[ "Mathur, Puneet", "Liu, Zhe", "Li, Ke", "Ma, Yingyi", "Keren, Gil", "Ahmed, Zeeshan", "Manocha, Dinesh", "Zhang, Xuedong" ]
PersonaLM: Language Model Personalization via Domain-distributed Span Aggregated K-Nearest N-gram Retrieval Augmentation
findings-emnlp.757
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.758.bib
https://aclanthology.org/2023.findings-emnlp.758/
@inproceedings{shen-etal-2023-scaling, title = "Scaling Vision-Language Models with Sparse Mixture of Experts", author = "Shen, Sheng and Yao, Zhewei and Li, Chunyuan and Darrell, Trevor and Keutzer, Kurt and He, Yuxiong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.758", doi = "10.18653/v1/2023.findings-emnlp.758", pages = "11329--11344", abstract = "The field of natural language processing (NLP) has made significant strides in recent years, particularly in the development of large-scale vision-language models (VLMs). These models aim to bridge the gap between text and visual information, enabling a more comprehensive understanding of multimedia data. However, as these models become larger and more complex, they also become more challenging to train and deploy. One approach to addressing this challenge is the use of sparsely-gated mixture-of-experts (MoE) techniques, which divide the model into smaller, specialized sub-models that can jointly solve a task. In this paper, we explore the effectiveness of MoE in scaling vision-language models, demonstrating its potential to achieve state-of-the-art performance on a range of benchmarks over dense models of equivalent computational cost. Our research offers valuable insights into stabilizing the training of MoE models, understanding the impact of MoE on model interpretability, and balancing the trade-offs between compute performance when scaling VLMs. We hope our work will inspire further research into the use of MoE for scaling large-scale vision-language models and other multimodal machine learning applications.", }
The field of natural language processing (NLP) has made significant strides in recent years, particularly in the development of large-scale vision-language models (VLMs). These models aim to bridge the gap between text and visual information, enabling a more comprehensive understanding of multimedia data. However, as these models become larger and more complex, they also become more challenging to train and deploy. One approach to addressing this challenge is the use of sparsely-gated mixture-of-experts (MoE) techniques, which divide the model into smaller, specialized sub-models that can jointly solve a task. In this paper, we explore the effectiveness of MoE in scaling vision-language models, demonstrating its potential to achieve state-of-the-art performance on a range of benchmarks over dense models of equivalent computational cost. Our research offers valuable insights into stabilizing the training of MoE models, understanding the impact of MoE on model interpretability, and balancing the trade-offs between compute and performance when scaling VLMs. We hope our work will inspire further research into the use of MoE for scaling large-scale vision-language models and other multimodal machine learning applications.
[ "Shen, Sheng", "Yao, Zhewei", "Li, Chunyuan", "Darrell, Trevor", "Keutzer, Kurt", "He, Yuxiong" ]
Scaling Vision-Language Models with Sparse Mixture of Experts
findings-emnlp.758
2303.07226
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.759.bib
https://aclanthology.org/2023.findings-emnlp.759/
@inproceedings{cui-etal-2023-aspect, title = "Aspect-Category Enhanced Learning with a Neural Coherence Model for Implicit Sentiment Analysis", author = "Cui, Jin and Fukumoto, Fumiyo and Wang, Xinfeng and Suzuki, Yoshimi and Li, Jiyi and Kong, Wanzeng", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.759", doi = "10.18653/v1/2023.findings-emnlp.759", pages = "11345--11358", abstract = "Aspect-based sentiment analysis (ABSA) has been widely studied since the explosive growth of social networking services. However, the recognition of implicit sentiments that do not contain obvious opinion words remains less explored. In this paper, we propose aspect-category enhanced learning with a neural coherence model (ELCoM). It captures document-level coherence by using contrastive learning, and sentence-level by a hypergraph to mine opinions from explicit sentences to aid implicit sentiment classification. To address the issue of sentences with different sentiment polarities in the same category, we perform cross-category enhancement to offset the impact of anomalous nodes in the hypergraph and obtain sentence representations with enhanced aspect-category. Extensive experiments on benchmark datasets show that the ELCoM achieves state-of-the-art performance. Our source codes and data are released at \url{https://github.com/cuijin-23/ELCoM}.", }
Aspect-based sentiment analysis (ABSA) has been widely studied since the explosive growth of social networking services. However, the recognition of implicit sentiments that do not contain obvious opinion words remains less explored. In this paper, we propose aspect-category enhanced learning with a neural coherence model (ELCoM). It captures document-level coherence by using contrastive learning, and sentence-level coherence by a hypergraph to mine opinions from explicit sentences to aid implicit sentiment classification. To address the issue of sentences with different sentiment polarities in the same category, we perform cross-category enhancement to offset the impact of anomalous nodes in the hypergraph and obtain sentence representations with enhanced aspect-category. Extensive experiments on benchmark datasets show that the ELCoM achieves state-of-the-art performance. Our source codes and data are released at \url{https://github.com/cuijin-23/ELCoM}.
[ "Cui, Jin", "Fukumoto, Fumiyo", "Wang, Xinfeng", "Suzuki, Yoshimi", "Li, Jiyi", "Kong, Wanzeng" ]
Aspect-Category Enhanced Learning with a Neural Coherence Model for Implicit Sentiment Analysis
findings-emnlp.759
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.760.bib
https://aclanthology.org/2023.findings-emnlp.760/
@inproceedings{liu-sun-2023-end, title = "End-to-end Adversarial Sample Generation for Data Augmentation", author = "Liu, Tianyuan and Sun, Yuqing", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.760", doi = "10.18653/v1/2023.findings-emnlp.760", pages = "11359--11368", abstract = "Adversarial samples pose a significant challenge to neural inference models. In this paper, we propose a novel enhancing approach A3 for the robustness of the neural NLP models, which combines adversarial training and data augmentation. We propose an adversarial sample generator that consists of a conditioned paraphrasing model and a condition generator. The latter aims to generate conditions which guide the paraphrasing model to generate adversarial samples. A pretrained discriminator is introduced to help the adversarial sample generator adapt to the data characteristics for different tasks. We adopt a weighted loss to incorporate the generated adversarial samples with the original samples for augmented training. Compared to existing methods, our approach is much more efficient since the generation process is independent of the target model and the generated samples are reusable for different models. Experimental results on several tasks show that our approach improves the overall performance of the trained model. In particular, the enhanced model is robust against various attacking techniques.", }
Adversarial samples pose a significant challenge to neural inference models. In this paper, we propose a novel enhancing approach A3 for the robustness of the neural NLP models, which combines adversarial training and data augmentation. We propose an adversarial sample generator that consists of a conditioned paraphrasing model and a condition generator. The latter aims to generate conditions which guide the paraphrasing model to generate adversarial samples. A pretrained discriminator is introduced to help the adversarial sample generator adapt to the data characteristics for different tasks. We adopt a weighted loss to incorporate the generated adversarial samples with the original samples for augmented training. Compared to existing methods, our approach is much more efficient since the generation process is independent of the target model and the generated samples are reusable for different models. Experimental results on several tasks show that our approach improves the overall performance of the trained model. In particular, the enhanced model is robust against various attacking techniques.
[ "Liu, Tianyuan", "Sun, Yuqing" ]
End-to-end Adversarial Sample Generation for Data Augmentation
findings-emnlp.760
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.761.bib
https://aclanthology.org/2023.findings-emnlp.761/
@inproceedings{xu-etal-2023-query2triple, title = "{Q}uery2{T}riple: Unified Query Encoding for Answering Diverse Complex Queries over Knowledge Graphs", author = "Xu, Yao and He, Shizhu and Wang, Cunguang and Cai, Li and Liu, Kang and Zhao, Jun", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.761", doi = "10.18653/v1/2023.findings-emnlp.761", pages = "11369--11382", abstract = "Complex Query Answering (CQA) is a challenge task of Knowledge Graph (KG). Due to the incompleteness of KGs, query embedding (QE) methods have been proposed to encode queries and entities into the same embedding space, and treat logical operators as neural set operators to obtain answers. However, these methods train KG embeddings and neural set operators concurrently on both simple (one-hop) and complex (multi-hop and logical) queries, which causes performance degradation on simple queries and low training efficiency. In this paper, we propose Query to Triple (Q2T), a novel approach that decouples the training for simple and complex queries. Q2T divides the training into two stages: (1) Pre-training the neural link predictor on simple queries to predict tail entities based on the head entity and relation. (2) Training the query encoder on complex queries to encode diverse complex queries into a unified triple form that can be efficiently solved by the pretrained link predictor. Our proposed Q2T is not only efficient to train, but also modular, thus easily adaptable to various neural link predictors that have been studied well. Extensive experiments demonstrate that, even without explicit modeling for neural set operators, Q2T still achieves state-of-the-art performance on diverse complex queries over three public benchmarks.", }
Complex Query Answering (CQA) is a challenging task for Knowledge Graphs (KGs). Due to the incompleteness of KGs, query embedding (QE) methods have been proposed to encode queries and entities into the same embedding space, and treat logical operators as neural set operators to obtain answers. However, these methods train KG embeddings and neural set operators concurrently on both simple (one-hop) and complex (multi-hop and logical) queries, which causes performance degradation on simple queries and low training efficiency. In this paper, we propose Query to Triple (Q2T), a novel approach that decouples the training for simple and complex queries. Q2T divides the training into two stages: (1) Pre-training the neural link predictor on simple queries to predict tail entities based on the head entity and relation. (2) Training the query encoder on complex queries to encode diverse complex queries into a unified triple form that can be efficiently solved by the pretrained link predictor. Our proposed Q2T is not only efficient to train, but also modular, thus easily adaptable to various neural link predictors that have been well studied. Extensive experiments demonstrate that, even without explicit modeling for neural set operators, Q2T still achieves state-of-the-art performance on diverse complex queries over three public benchmarks.
[ "Xu, Yao", "He, Shizhu", "Wang, Cunguang", "Cai, Li", "Liu, Kang", "Zhao, Jun" ]
Query2Triple: Unified Query Encoding for Answering Diverse Complex Queries over Knowledge Graphs
findings-emnlp.761
2310.11246
[ "https://github.com/yaooxu/q2t" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.762.bib
https://aclanthology.org/2023.findings-emnlp.762/
@inproceedings{xi-etal-2023-self, title = "Self-{P}olish: Enhance Reasoning in Large Language Models via Problem Refinement", author = "Xi, Zhiheng and Jin, Senjie and Zhou, Yuhao and Zheng, Rui and Gao, Songyang and Liu, Jia and Gui, Tao and Zhang, Qi and Huang, Xuanjing", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.762", doi = "10.18653/v1/2023.findings-emnlp.762", pages = "11383--11406", abstract = "To enhance the multi-step reasoning capabilities of large language models, researchers have extensively explored prompting methods, notably the Chain-of-Thought (CoT) method which explicitly elicits human-like rationales. However, they have inadvertently overlooked the potential of enhancing model reasoning performance by formulating higher-quality problems. In this work, we start from the problem side and propose Self-Polish (SP), a novel method that facilitates the model{'}s reasoning by guiding it to progressively refine the given problems to be more comprehensible and solvable. We also explore several automatic prompting variants and propose the Self-Polish prompt bank for the community. SP is orthogonal to all other prompting methods on the answer/reasoning side like CoT, allowing for seamless integration with state-of-the-art techniques for further improvement. Thorough experiments show that the proposed method attains notable and consistent effectiveness on five reasoning benchmarks across different models. Furthermore, our method also showcases impressive performance on robustness evaluation. Codes and prompts are available at https://github.com/WooooDyy/Self-Polish.", }
To enhance the multi-step reasoning capabilities of large language models, researchers have extensively explored prompting methods, notably the Chain-of-Thought (CoT) method which explicitly elicits human-like rationales. However, they have inadvertently overlooked the potential of enhancing model reasoning performance by formulating higher-quality problems. In this work, we start from the problem side and propose Self-Polish (SP), a novel method that facilitates the model{'}s reasoning by guiding it to progressively refine the given problems to be more comprehensible and solvable. We also explore several automatic prompting variants and propose the Self-Polish prompt bank for the community. SP is orthogonal to all other prompting methods on the answer/reasoning side like CoT, allowing for seamless integration with state-of-the-art techniques for further improvement. Thorough experiments show that the proposed method attains notable and consistent effectiveness on five reasoning benchmarks across different models. Furthermore, our method also showcases impressive performance on robustness evaluation. Codes and prompts are available at https://github.com/WooooDyy/Self-Polish.
[ "Xi, Zhiheng", "Jin, Senjie", "Zhou, Yuhao", "Zheng, Rui", "Gao, Songyang", "Liu, Jia", "Gui, Tao", "Zhang, Qi", "Huang, Xuanjing" ]
Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement
findings-emnlp.762
2305.14497
[ "https://github.com/woooodyy/self-polish" ]
https://huggingface.co/papers/2305.14497
0
0
0
8
[]
[]
[]
1
Poster
https://aclanthology.org/2023.findings-emnlp.763.bib
https://aclanthology.org/2023.findings-emnlp.763/
@inproceedings{li-etal-2023-breaking, title = "Breaking through Deterministic Barriers: Randomized Pruning Mask Generation and Selection", author = "Li, Jianwei and Gao, Weizhi and Lei, Qi and Xu, Dongkuan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.763", doi = "10.18653/v1/2023.findings-emnlp.763", pages = "11407--11423", abstract = "It is widely acknowledged that large and sparse models have higher accuracy than small and dense models under the same model size constraints. This motivates us to train a large model and then remove its redundant neurons or weights by pruning. Most existing works pruned the networks in a deterministic way, the performance of which solely depends on a single pruning criterion and thus lacks variety. Instead, in this paper, we propose a model pruning strategy that first generates several pruning masks in a designed random way. Subsequently, along with an effective mask-selection rule, the optimal mask is chosen from the pool of mask candidates. To further enhance efficiency, we introduce an early mask evaluation strategy, mitigating the overhead associated with training multiple masks. Our extensive experiments demonstrate that this approach achieves state-of-the-art performance across eight datasets from GLUE, particularly excelling at high levels of sparsity.", }
It is widely acknowledged that large and sparse models have higher accuracy than small and dense models under the same model size constraints. This motivates us to train a large model and then remove its redundant neurons or weights by pruning. Most existing works pruned the networks in a deterministic way, the performance of which solely depends on a single pruning criterion and thus lacks variety. Instead, in this paper, we propose a model pruning strategy that first generates several pruning masks in a designed random way. Subsequently, along with an effective mask-selection rule, the optimal mask is chosen from the pool of mask candidates. To further enhance efficiency, we introduce an early mask evaluation strategy, mitigating the overhead associated with training multiple masks. Our extensive experiments demonstrate that this approach achieves state-of-the-art performance across eight datasets from GLUE, particularly excelling at high levels of sparsity.
[ "Li, Jianwei", "Gao, Weizhi", "Lei, Qi", "Xu, Dongkuan" ]
Breaking through Deterministic Barriers: Randomized Pruning Mask Generation and Selection
findings-emnlp.763
2310.13183
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.764.bib
https://aclanthology.org/2023.findings-emnlp.764/
@inproceedings{maharaj-etal-2023-eyes, title = "Eyes Show the Way: Modelling Gaze Behaviour for Hallucination Detection", author = "Maharaj, Kishan and Saxena, Ashita and Kumar, Raja and Mishra, Abhijit and Bhattacharyya, Pushpak", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.764", doi = "10.18653/v1/2023.findings-emnlp.764", pages = "11424--11438", abstract = "Detecting hallucinations in natural language processing (NLP) is a critical undertaking that demands a deep understanding of both the semantic and pragmatic aspects of languages. Cognitive approaches that leverage users{'} behavioural signals, such as gaze, have demonstrated effectiveness in addressing NLP tasks with similar linguistic complexities. However, their potential in the context of hallucination detection remains largely unexplored. In this paper, we propose a novel cognitive approach for hallucination detection that leverages gaze signals from humans. We first collect and introduce an eye tracking corpus (IITB-HGC: IITB-Hallucination Gaze corpus) consisting of 500 instances, annotated by five annotators for hallucination detection. Our analysis reveals that humans selectively attend to relevant parts of the text based on distributional similarity, similar to the attention bias phenomenon in psychology. We identify two attention strategies employed by humans: global attention, which focuses on the most informative sentence, and local attention, which focuses on important words within a sentence. Leveraging these insights, we propose a novel cognitive framework for hallucination detection that incorporates these attention biases. Experimental evaluations on the FactCC dataset demonstrate the efficacy of our approach, obtaining a balanced accuracy of 87.1{\%}. Our study highlights the potential of gaze-based approaches in addressing the task of hallucination detection and sheds light on the cognitive processes employed by humans in identifying inconsistencies.", }
Detecting hallucinations in natural language processing (NLP) is a critical undertaking that demands a deep understanding of both the semantic and pragmatic aspects of languages. Cognitive approaches that leverage users{'} behavioural signals, such as gaze, have demonstrated effectiveness in addressing NLP tasks with similar linguistic complexities. However, their potential in the context of hallucination detection remains largely unexplored. In this paper, we propose a novel cognitive approach for hallucination detection that leverages gaze signals from humans. We first collect and introduce an eye tracking corpus (IITB-HGC: IITB-Hallucination Gaze corpus) consisting of 500 instances, annotated by five annotators for hallucination detection. Our analysis reveals that humans selectively attend to relevant parts of the text based on distributional similarity, similar to the attention bias phenomenon in psychology. We identify two attention strategies employed by humans: global attention, which focuses on the most informative sentence, and local attention, which focuses on important words within a sentence. Leveraging these insights, we propose a novel cognitive framework for hallucination detection that incorporates these attention biases. Experimental evaluations on the FactCC dataset demonstrate the efficacy of our approach, obtaining a balanced accuracy of 87.1{\%}. Our study highlights the potential of gaze-based approaches in addressing the task of hallucination detection and sheds light on the cognitive processes employed by humans in identifying inconsistencies.
[ "Maharaj, Kishan", "Saxena, Ashita", "Kumar, Raja", "Mishra, Abhijit", "Bhattacharyya, Pushpak" ]
Eyes Show the Way: Modelling Gaze Behaviour for Hallucination Detection
findings-emnlp.764
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.765.bib
https://aclanthology.org/2023.findings-emnlp.765/
@inproceedings{zhang-etal-2023-noisy, title = "Noisy Pair Corrector for Dense Retrieval", author = "Zhang, Hang and Gong, Yeyun and He, Xingwei and Liu, Dayiheng and Guo, Daya and Lv, Jiancheng and Guo, Jian", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.765", doi = "10.18653/v1/2023.findings-emnlp.765", pages = "11439--11451", abstract = "Most dense retrieval models contain an implicit assumption: the training query-document pairs are exactly matched. Since it is expensive to annotate the corpus manually, training pairs in real-world applications are usually collected automatically, which inevitably introduces mismatched-pair noise. In this paper, we explore an interesting and challenging problem in dense retrieval, how to train an effective model with mismatched-pair noise. To solve this problem, we propose a novel approach called Noisy Pair Corrector (NPC), which consists of a detection module and a correction module. The detection module estimates noise pairs by calculating the perplexity between annotated positive and easy negative documents. The correction module utilizes an exponential moving average (EMA) model to provide a soft supervised signal, aiding in mitigating the effects of noise. We conduct experiments on text-retrieval benchmarks Natural Question and TriviaQA, code-search benchmarks StaQC and SO-DS. Experimental results show that NPC achieves excellent performance in handling both synthetic and realistic noise.", }
Most dense retrieval models contain an implicit assumption: the training query-document pairs are exactly matched. Since it is expensive to annotate the corpus manually, training pairs in real-world applications are usually collected automatically, which inevitably introduces mismatched-pair noise. In this paper, we explore an interesting and challenging problem in dense retrieval: how to train an effective model with mismatched-pair noise. To solve this problem, we propose a novel approach called Noisy Pair Corrector (NPC), which consists of a detection module and a correction module. The detection module estimates noise pairs by calculating the perplexity between annotated positive and easy negative documents. The correction module utilizes an exponential moving average (EMA) model to provide a soft supervised signal, aiding in mitigating the effects of noise. We conduct experiments on the text-retrieval benchmarks Natural Questions and TriviaQA, and the code-search benchmarks StaQC and SO-DS. Experimental results show that NPC achieves excellent performance in handling both synthetic and realistic noise.
[ "Zhang, Hang", "Gong, Yeyun", "He, Xingwei", "Liu, Dayiheng", "Guo, Daya", "Lv, Jiancheng", "Guo, Jian" ]
Noisy Pair Corrector for Dense Retrieval
findings-emnlp.765
2311.03798
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.766.bib
https://aclanthology.org/2023.findings-emnlp.766/
@inproceedings{sousa-etal-2023-enhancing, title = "Enhancing Accessible Communication: from {E}uropean {P}ortuguese to {P}ortuguese {S}ign {L}anguage", author = "Sousa, Catarina and Coheur, Luisa and Moita, Mara", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.766", doi = "10.18653/v1/2023.findings-emnlp.766", pages = "11452--11460", abstract = "Portuguese Sign Language (LGP) is the official language in deaf education in Portugal. Current approaches in developing a translation system between European Portuguese and LGP rely on hand-crafted rules. In this paper, we present a fully automatic corpora-driven rule-based machine translation system between European Portuguese and LGP glosses, and also two neural machine translation models. We also contribute with the LGP-5-Domain corpus, composed of five different text domains, built with the help of our rule-based system, and used to train the neural models. In addition, we provide a gold collection, annotated by LGP experts, that can be used for future evaluations. Compared with the only similar available translation system, PE2LGP, results are always improved with the new rule-based model, which competes for the highest scores with one of the neural models.", }
Portuguese Sign Language (LGP) is the official language in deaf education in Portugal. Current approaches to developing a translation system between European Portuguese and LGP rely on hand-crafted rules. In this paper, we present a fully automatic corpora-driven rule-based machine translation system between European Portuguese and LGP glosses, and also two neural machine translation models. We also contribute the LGP-5-Domain corpus, composed of five different text domains, built with the help of our rule-based system, and used to train the neural models. In addition, we provide a gold collection, annotated by LGP experts, that can be used for future evaluations. Compared with the only similar available translation system, PE2LGP, results are always improved with the new rule-based model, which competes for the highest scores with one of the neural models.
[ "Sousa, Catarina", "Coheur, Luisa", "Moita, Mara" ]
Enhancing Accessible Communication: from European Portuguese to Portuguese Sign Language
findings-emnlp.766
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.767.bib
https://aclanthology.org/2023.findings-emnlp.767/
@inproceedings{sung-shin-2023-diversifying, title = "Diversifying language models for lesser-studied languages and language-usage contexts: A case of second language {K}orean", author = "Sung, Hakyung and Shin, Gyu-Ho", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.767", doi = "10.18653/v1/2023.findings-emnlp.767", pages = "11461--11473", abstract = "This study investigates the extent to which currently available morpheme parsers/taggers apply to lesser-studied languages and language-usage contexts, with a focus on second language (L2) Korean. We pursue this inquiry by (1) training a neural-network model (pre-trained on first language [L1] Korean data) on varying L2 datasets and (2) measuring its morpheme parsing/POS tagging performance on L2 test sets from both the same and different sources of the L2 train sets. Results show that the L2 trained models generally excel in domain-specific tokenization and POS tagging compared to the L1 pre-trained baseline model. Interestingly, increasing the size of the L2 training data does not lead to improving model performance consistently.", }
This study investigates the extent to which currently available morpheme parsers/taggers apply to lesser-studied languages and language-usage contexts, with a focus on second language (L2) Korean. We pursue this inquiry by (1) training a neural-network model (pre-trained on first language [L1] Korean data) on varying L2 datasets and (2) measuring its morpheme parsing/POS tagging performance on L2 test sets drawn from both the same and different sources as the L2 training sets. Results show that the L2-trained models generally excel in domain-specific tokenization and POS tagging compared to the L1 pre-trained baseline model. Interestingly, increasing the size of the L2 training data does not consistently improve model performance.
[ "Sung, Hakyung", "Shin, Gyu-Ho" ]
Diversifying language models for lesser-studied languages and language-usage contexts: A case of second language Korean
findings-emnlp.767
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.768.bib
https://aclanthology.org/2023.findings-emnlp.768/
@inproceedings{falissard-etal-2023-improving, title = "Improving generalization in large langue model by learning prefix subspaces", author = "Falissard, Louis and Guigue, Vincent and Soulier, Laure", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.768", doi = "10.18653/v1/2023.findings-emnlp.768", pages = "11474--11483", abstract = "This article focuses on fine-tuning large language models (LLMs) in the scarce data regime (also known as the {``}few-shot learning setting{''}). We propose a method to increase the generalization capabilities of LLMs based on neural network subspaces. This optimization method, recently introduced in computer vision, aims to improve model generalization by identifying wider local optima through the joint optimization of an entire simplex of models in parameter space. Although this property would be highly beneficial in the context of training large language models in the {``}few-shot learning{''} setting, its adaptation to massive, pretrained transformers poses some challenges. First, their considerable number of parameters makes it difficult to train several models jointly, and second, their deterministic parameter initialisation schemes make them unfit for the subspace method as originally proposed. We show in this paper that its application to {``}Parameter Efficient Fine-Tuning{''} (PEFT) methods, however, is relatively natural, and we propose to apply it to prefix-tuning, by learning entire simplexes of continuous prefixes. We test our method on a variant of the GLUE benchmark adapted to the few-shot learning setting, and show that both our contributions (learning prefix simplexes, and non-deterministic validation metric inference) jointly lead to a gain in average performance compared to state-of-the-art methods.", }
This article focuses on fine-tuning large language models (LLMs) in the scarce data regime (also known as the {``}few-shot learning setting{''}). We propose a method to increase the generalization capabilities of LLMs based on neural network subspaces. This optimization method, recently introduced in computer vision, aims to improve model generalization by identifying wider local optima through the joint optimization of an entire simplex of models in parameter space. Although this property would be highly beneficial in the context of training large language models in the {``}few-shot learning{''} setting, its adaptation to massive, pretrained transformers poses some challenges. First, their considerable number of parameters makes it difficult to train several models jointly, and second, their deterministic parameter initialisation schemes make them unfit for the subspace method as originally proposed. We show in this paper that its application to {``}Parameter Efficient Fine-Tuning{''} (PEFT) methods, however, is relatively natural, and we propose to apply it to prefix-tuning, by learning entire simplexes of continuous prefixes. We test our method on a variant of the GLUE benchmark adapted to the few-shot learning setting, and show that both our contributions (learning prefix simplexes, and non-deterministic validation metric inference) jointly lead to a gain in average performance compared to state-of-the-art methods.
[ "Falissard, Louis", "Guigue, Vincent", "Soulier, Laure" ]
Improving generalization in large langue model by learning prefix subspaces
findings-emnlp.768
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.769.bib
https://aclanthology.org/2023.findings-emnlp.769/
@inproceedings{rostami-etal-2023-domain, title = "Domain Adaptation for Sentiment Analysis Using Robust Internal Representations", author = "Rostami, Mohammad and Bose, Digbalay and Narayanan, Shrikanth and Galstyan, Aram", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.769", doi = "10.18653/v1/2023.findings-emnlp.769", pages = "11484--11498", abstract = "Sentiment analysis is a costly yet necessary task for enterprises to study the opinions of their customers to improve their products and to determine optimal marketing strategies. Due to the existence of a wide range of domains across different products and services, cross-domain sentiment analysis methods have received significant attention. These methods mitigate the domain gap between different applications by training cross-domain generalizable classifiers which relax the need for data annotation for each domain. We develop a domain adaptation method which induces large margins between data representations that belong to different classes in an embedding space. This embedding space is trained to be domain-agnostic by matching the data distributions across the domains. Large interclass margins in the source domain help to reduce the effect of {``}domain shift{''} in the target domain. Theoretical and empirical analysis are provided to demonstrate that the proposed method is effective.", }
Sentiment analysis is a costly yet necessary task for enterprises to study the opinions of their customers to improve their products and to determine optimal marketing strategies. Due to the existence of a wide range of domains across different products and services, cross-domain sentiment analysis methods have received significant attention. These methods mitigate the domain gap between different applications by training cross-domain generalizable classifiers which relax the need for data annotation for each domain. We develop a domain adaptation method which induces large margins between data representations that belong to different classes in an embedding space. This embedding space is trained to be domain-agnostic by matching the data distributions across the domains. Large interclass margins in the source domain help to reduce the effect of {``}domain shift{''} in the target domain. Theoretical and empirical analyses are provided to demonstrate that the proposed method is effective.
[ "Rostami, Mohammad", "Bose, Digbalay", "Narayanan, Shrikanth", "Galstyan, Aram" ]
Domain Adaptation for Sentiment Analysis Using Robust Internal Representations
findings-emnlp.769
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.770.bib
https://aclanthology.org/2023.findings-emnlp.770/
@inproceedings{niu-etal-2023-kefvp, title = "{K}e{FVP}: Knowledge-enhanced Financial Volatility Prediction", author = "Niu, Hao and Xiong, Yun and Wang, Xiaosu and Yu, Wenjing and Zhang, Yao and Yang, Weizu", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.770", doi = "10.18653/v1/2023.findings-emnlp.770", pages = "11499--11513", abstract = "Financial volatility prediction is vital for indicating a company{'}s risk profile. Transcripts of companies{'} earnings calls are important unstructured data sources to be utilized to assess companies{'} performance and risk profiles. However, current works ignore the role of financial metrics knowledge (such as EBIT, EPS, and ROI) in transcripts, which is crucial for understanding companies{'} performance, and little consideration is given to integrating text and price information. In this work, we collect statistics on common financial metrics and construct a dedicated dataset based on these metrics. Then, we introduce a knowledge-enhanced financial volatility prediction method (KeFVP) to inject knowledge of financial metrics into text comprehension by knowledge-enhanced adaptive pre-training (KePt) and to effectively incorporate text and price information by introducing a conditional time series prediction module. We conduct extensive experiments on three real-world public datasets, and the results indicate that KeFVP is effective and outperforms all the state-of-the-art methods.", }
Financial volatility prediction is vital for indicating a company{'}s risk profile. Transcripts of companies{'} earnings calls are important unstructured data sources to be utilized to assess companies{'} performance and risk profiles. However, current works ignore the role of financial metrics knowledge (such as EBIT, EPS, and ROI) in transcripts, which is crucial for understanding companies{'} performance, and little consideration is given to integrating text and price information. In this work, we collect statistics on common financial metrics and construct a dedicated dataset based on these metrics. Then, we introduce a knowledge-enhanced financial volatility prediction method (KeFVP) to inject knowledge of financial metrics into text comprehension by knowledge-enhanced adaptive pre-training (KePt) and to effectively incorporate text and price information by introducing a conditional time series prediction module. We conduct extensive experiments on three real-world public datasets, and the results indicate that KeFVP is effective and outperforms all the state-of-the-art methods.
[ "Niu, Hao", "Xiong, Yun", "Wang, Xiaosu", "Yu, Wenjing", "Zhang, Yao", "Yang, Weizu" ]
KeFVP: Knowledge-enhanced Financial Volatility Prediction
findings-emnlp.770
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.771.bib
https://aclanthology.org/2023.findings-emnlp.771/
@inproceedings{huang-etal-2023-frustratingly, title = "A Frustratingly Easy Plug-and-Play Detection-and-Reasoning Module for {C}hinese Spelling Check", author = "Huang, Haojing and Ye, Jingheng and Zhou, Qingyu and Li, Yinghui and Li, Yangning and Zhou, Feng and Zheng, Hai-Tao", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.771", doi = "10.18653/v1/2023.findings-emnlp.771", pages = "11514--11525", abstract = "In recent years, Chinese Spelling Check (CSC) has been greatly improved by designing task-specific pre-training methods or introducing auxiliary tasks, which mostly solve this task in an end-to-end fashion. In this paper, we propose to decompose the CSC workflow into detection, reasoning, and searching subtasks so that the rich external knowledge about the Chinese language can be leveraged more directly and efficiently. Specifically, we design a plug-and-play detection-and-reasoning module that is compatible with existing SOTA non-autoregressive CSC models to further boost their performance. We find that the detection-and-reasoning module trained for one model can also benefit other models. We also study the primary interpretability provided by the task decomposition. Extensive experiments and detailed analyses demonstrate the effectiveness and competitiveness of the proposed module.", }
In recent years, Chinese Spelling Check (CSC) has been greatly improved by designing task-specific pre-training methods or introducing auxiliary tasks, which mostly solve this task in an end-to-end fashion. In this paper, we propose to decompose the CSC workflow into detection, reasoning, and searching subtasks so that the rich external knowledge about the Chinese language can be leveraged more directly and efficiently. Specifically, we design a plug-and-play detection-and-reasoning module that is compatible with existing SOTA non-autoregressive CSC models to further boost their performance. We find that the detection-and-reasoning module trained for one model can also benefit other models. We also study the primary interpretability provided by the task decomposition. Extensive experiments and detailed analyses demonstrate the effectiveness and competitiveness of the proposed module.
[ "Huang, Haojing", "Ye, Jingheng", "Zhou, Qingyu", "Li, Yinghui", "Li, Yangning", "Zhou, Feng", "Zheng, Hai-Tao" ]
A Frustratingly Easy Plug-and-Play Detection-and-Reasoning Module for Chinese Spelling Check
findings-emnlp.771
2310.09119
[ "https://github.com/thukelab/dr-csc" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.772.bib
https://aclanthology.org/2023.findings-emnlp.772/
@inproceedings{lee-etal-2023-asking, title = "Asking Clarification Questions to Handle Ambiguity in Open-Domain {QA}", author = "Lee, Dongryeol and Kim, Segwang and Lee, Minwoo and Lee, Hwanhee and Park, Joonsuk and Lee, Sang-Woo and Jung, Kyomin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.772", doi = "10.18653/v1/2023.findings-emnlp.772", pages = "11526--11544", abstract = "Ambiguous questions persist in open-domain question answering, because formulating a precise question with a unique answer is often challenging. Previous works have tackled this issue by asking disambiguated questions for all possible interpretations of the ambiguous question. Instead, we propose to ask a clarification question, where the user{'}s response will help identify the interpretation that best aligns with the user{'}s intention. We first present CAmbigNQ, a dataset consisting of 5,653 ambiguous questions, each with relevant passages, possible answers, and a clarification question. The clarification questions were efficiently created by generating them using InstructGPT and manually revising them as necessary. We then define a pipeline of three tasks{---}(1) ambiguity detection, (2) clarification question generation, and (3) clarification-based QA. In the process, we adopt or design appropriate evaluation metrics to facilitate sound research. Lastly, we achieve F1 of 61.3, 25.1, and 40.5 on the three tasks, demonstrating the need for further improvements while providing competitive baselines for future work.", }
Ambiguous questions persist in open-domain question answering, because formulating a precise question with a unique answer is often challenging. Previous works have tackled this issue by asking disambiguated questions for all possible interpretations of the ambiguous question. Instead, we propose to ask a clarification question, where the user{'}s response will help identify the interpretation that best aligns with the user{'}s intention. We first present CAmbigNQ, a dataset consisting of 5,653 ambiguous questions, each with relevant passages, possible answers, and a clarification question. The clarification questions were efficiently created by generating them using InstructGPT and manually revising them as necessary. We then define a pipeline of three tasks{---}(1) ambiguity detection, (2) clarification question generation, and (3) clarification-based QA. In the process, we adopt or design appropriate evaluation metrics to facilitate sound research. Lastly, we achieve F1 of 61.3, 25.1, and 40.5 on the three tasks, demonstrating the need for further improvements while providing competitive baselines for future work.
[ "Lee, Dongryeol", "Kim, Segwang", "Lee, Minwoo", "Lee, Hwanhee", "Park, Joonsuk", "Lee, Sang-Woo", "Jung, Kyomin" ]
Asking Clarification Questions to Handle Ambiguity in Open-Domain QA
findings-emnlp.772
2305.13808
[ "https://github.com/dongryeollee96/askcq" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
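The three-task pipeline defined in the Lee et al. abstract above (ambiguity detection, clarification question generation, clarification-based QA) can be sketched as a small control flow. Every component below is a hypothetical placeholder, not one of the paper's baselines.

```python
from typing import Callable, Optional

def is_ambiguous(question: str) -> bool:
    # placeholder: the paper trains a classifier for ambiguity detection
    return "president" in question.lower()

def generate_clarification(question: str) -> str:
    # placeholder for an InstructGPT-style generator with manual revision
    return f"Could you clarify which interpretation of {question!r} you mean?"

def answer(question: str, reply: Optional[str] = None) -> str:
    # placeholder reader over retrieved passages
    return "<answer to clarified question>" if reply else "<direct answer>"

def pipeline(question: str, ask_user: Callable[[str], str]) -> str:
    """Detect ambiguity; if present, ask a clarification question, then answer."""
    if not is_ambiguous(question):
        return answer(question)
    reply = ask_user(generate_clarification(question))
    return answer(question, reply)

print(pipeline("Who is the president?", ask_user=lambda cq: "of France, in 2021"))
```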
https://aclanthology.org/2023.findings-emnlp.773.bib
https://aclanthology.org/2023.findings-emnlp.773/
@inproceedings{zhuocheng-etal-2023-addressing, title = "Addressing the Length Bias Challenge in Document-Level Neural Machine Translation", author = "Zhuocheng, Zhang and Gu, Shuhao and Zhang, Min and Feng, Yang", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.773", doi = "10.18653/v1/2023.findings-emnlp.773", pages = "11545--11556", abstract = "Document-level neural machine translation (DNMT) has shown promising results by incorporating context information through increased maximum lengths of source and target sentences. However, this approach also introduces a length bias problem, whereby DNMT suffers from significant translation quality degradation when decoding sentences that are much shorter or longer than the maximum sentence length seen during training. To prevent the model from neglecting shorter sentences, we sample the training data to ensure a more uniform distribution across different sentence lengths while progressively increasing the maximum sentence length during training. Additionally, we introduce a length-normalized attention mechanism to aid the model in focusing on target information, mitigating the issue of attention divergence when processing longer sentences. Furthermore, during the decoding stage of DNMT, we propose a sliding decoding strategy that limits the length of target sentences to not exceed the maximum length encountered during training. The experimental results indicate that our method can achieve state-of-the-art results on several open datasets, and further analysis shows that our method can significantly alleviate the length bias problem.", }
Document-level neural machine translation (DNMT) has shown promising results by incorporating context information through increased maximum lengths of source and target sentences. However, this approach also introduces a length bias problem, whereby DNMT suffers from significant translation quality degradation when decoding sentences that are much shorter or longer than the maximum sentence length seen during training. To prevent the model from neglecting shorter sentences, we sample the training data to ensure a more uniform distribution across different sentence lengths while progressively increasing the maximum sentence length during training. Additionally, we introduce a length-normalized attention mechanism to aid the model in focusing on target information, mitigating the issue of attention divergence when processing longer sentences. Furthermore, during the decoding stage of DNMT, we propose a sliding decoding strategy that limits the length of target sentences to not exceed the maximum length encountered during training. The experimental results indicate that our method can achieve state-of-the-art results on several open datasets, and further analysis shows that our method can significantly alleviate the length bias problem.
[ "Zhuocheng, Zhang", "Gu, Shuhao", "Zhang, Min", "Feng, Yang" ]
Addressing the Length Bias Challenge in Document-Level Neural Machine Translation
findings-emnlp.773
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
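The training-side recipe in the Zhuocheng et al. abstract above, sampling for a more uniform length distribution while progressively raising the maximum length, can be approximated with simple bucketing. The bucket width, growth schedule, and sample counts below are illustrative assumptions, not the paper's settings.

```python
import random
from collections import defaultdict

def length_uniform_sample(pairs, max_len, n_samples, bucket_width=64, rng=random):
    """Keep pairs within max_len, then draw roughly evenly across length buckets."""
    buckets = defaultdict(list)
    for src, tgt in pairs:
        length = max(len(src), len(tgt))
        if length <= max_len:
            buckets[length // bucket_width].append((src, tgt))
    per_bucket = max(1, n_samples // max(1, len(buckets)))
    epoch = [ex for exs in buckets.values()
             for ex in rng.sample(exs, min(per_bucket, len(exs)))]
    rng.shuffle(epoch)
    return epoch

# Progressively increase the maximum sentence length across training stages.
corpus = [(["a"] * n, ["b"] * n) for n in range(1, 400)]  # toy "documents"
for max_len in (128, 256, 384):
    stage_data = length_uniform_sample(corpus, max_len, n_samples=90)
```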
https://aclanthology.org/2023.findings-emnlp.774.bib
https://aclanthology.org/2023.findings-emnlp.774/
@inproceedings{lasri-etal-2023-econberta, title = "{E}con{BERT}a: Towards Robust Extraction of Named Entities in Economics", author = "Lasri, Karim and de Castro, Pedro Vitor Quinta and Schirmer, Mona and San Martin, Luis Eduardo and Wang, Linxi and Dulka, Tom{\'a}{\v{s}} and Naushan, Haaya and Pougu{\'e}-Biyong, John and Legovini, Arianna and Fraiberger, Samuel", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.774", doi = "10.18653/v1/2023.findings-emnlp.774", pages = "11557--11577", abstract = "Adapting general-purpose language models has proven to be effective in tackling downstream tasks within specific domains. In this paper, we address the task of extracting entities from the economics literature on impact evaluation. To this end, we release EconBERTa, a large language model pretrained on scientific publications in economics, and ECON-IE, a new expert-annotated dataset of economics abstracts for Named Entity Recognition (NER). We find that EconBERTa reaches state-of-the-art performance on our downstream NER task. Additionally, we extensively analyze the model{'}s generalization capacities, finding that most errors correspond to detecting only a subspan of an entity or failure to extrapolate to longer sequences. This limitation is primarily due to an inability to detect part-of-speech sequences unseen during training, and this effect diminishes when the number of unique instances in the training set increases. Examining the generalization abilities of domain-specific language models paves the way towards improving the robustness of NER models for causal knowledge extraction.", }
Adapting general-purpose language models has proven to be effective in tackling downstream tasks within specific domains. In this paper, we address the task of extracting entities from the economics literature on impact evaluation. To this end, we release EconBERTa, a large language model pretrained on scientific publications in economics, and ECON-IE, a new expert-annotated dataset of economics abstracts for Named Entity Recognition (NER). We find that EconBERTa reaches state-of-the-art performance on our downstream NER task. Additionally, we extensively analyze the model{'}s generalization capacities, finding that most errors correspond to detecting only a subspan of an entity or failure to extrapolate to longer sequences. This limitation is primarily due to an inability to detect part-of-speech sequences unseen during training, and this effect diminishes when the number of unique instances in the training set increases. Examining the generalization abilities of domain-specific language models paves the way towards improving the robustness of NER models for causal knowledge extraction.
[ "Lasri, Karim", "de Castro, Pedro Vitor Quinta", "Schirmer, Mona", "San Martin, Luis Eduardo", "Wang, Linxi", "Dulka, Tom{\\'a}{\\v{s}}", "Naushan, Haaya", "Pougu{\\'e}-Biyong, John", "Legovini, Arianna", "Fraiberger, Samuel" ]
EconBERTa: Towards Robust Extraction of Named Entities in Economics
findings-emnlp.774
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.775.bib
https://aclanthology.org/2023.findings-emnlp.775/
@inproceedings{al-shaibani-ahmad-2023-consonant, title = "Consonant is all you need: a compact representation of {E}nglish text for efficient {NLP}", author = "Al-shaibani, Maged and Ahmad, Irfan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.775", doi = "10.18653/v1/2023.findings-emnlp.775", pages = "11578--11588", abstract = "In natural language processing (NLP), the representation of text plays a crucial role in various tasks such as language modeling, sentiment analysis, and machine translation. The standard approach is to represent text in the same way as we, as humans, read and write. In this paper, we propose a novel approach to represent text with only consonants which presents a compact representation of English text that offers improved efficiency without sacrificing performance. We exploit the fact that consonants are more discriminative than vowels and by representing text using consonants, we can significantly reduce the overall memory and compute footprint required for storing and processing textual data. We present two alternative representations: {`}consonants-only{'}, where we completely remove the vowels from the text, and {`}masked-vowels{'}, where we mask all the vowels into one special symbol. To evaluate our approaches, we conducted experiments on various NLP tasks, including text classification, part-of-speech (POS) tagging, named-entity recognition (NER), and neural machine translation (NMT), in addition to language modeling. Our results demonstrate that the proposed consonant-based representation achieves comparable performance compared to the standard text representation while requiring significantly fewer computational resources. Furthermore, we show that our representation can be seamlessly integrated with existing NLP models and frameworks, providing a practical solution for efficient text processing. Last but not least, we present a technique to retrieve the vowel information from our processed text representation, keeping in mind the need to reproduce text in human-readable form in some NLP applications.", }
In natural language processing (NLP), the representation of text plays a crucial role in various tasks such as language modeling, sentiment analysis, and machine translation. The standard approach is to represent text in the same way as we, as humans, read and write. In this paper, we propose a novel approach to represent text with only consonants which presents a compact representation of English text that offers improved efficiency without sacrificing performance. We exploit the fact that consonants are more discriminative than vowels and by representing text using consonants, we can significantly reduce the overall memory and compute footprint required for storing and processing textual data. We present two alternative representations: {`}consonants-only{'}, where we completely remove the vowels from the text, and {`}masked-vowels{'}, where we mask all the vowels into one special symbol. To evaluate our approaches, we conducted experiments on various NLP tasks, including text classification, part-of-speech (POS) tagging, named-entity recognition (NER), and neural machine translation (NMT), in addition to language modeling. Our results demonstrate that the proposed consonant-based representation achieves comparable performance compared to the standard text representation while requiring significantly fewer computational resources. Furthermore, we show that our representation can be seamlessly integrated with existing NLP models and frameworks, providing a practical solution for efficient text processing. Last but not least, we present a technique to retrieve the vowel information from our processed text representation, keeping in mind the need to reproduce text in human-readable form in some NLP applications.
[ "Al-shaibani, Maged", "Ahmad, Irfan" ]
Consonant is all you need: a compact representation of English text for efficient NLP
findings-emnlp.775
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
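The two representations proposed in the Al-shaibani and Ahmad abstract above are straightforward to reproduce for ASCII English text. The mask symbol '@' below is an arbitrary choice of ours, since the abstract only specifies "one special symbol".

```python
VOWELS = set("aeiouAEIOU")

def consonants_only(text: str) -> str:
    """The 'consonants-only' representation: drop vowels entirely."""
    return "".join(ch for ch in text if ch not in VOWELS)

def masked_vowels(text: str, mask: str = "@") -> str:
    """The 'masked-vowels' representation: collapse every vowel into one symbol."""
    return "".join(mask if ch in VOWELS else ch for ch in text)

print(consonants_only("Consonant is all you need"))  # 'Cnsnnt s ll y nd'
print(masked_vowels("Consonant is all you need"))    # 'C@ns@n@nt @s @ll y@@ n@@d'
```

Note that the consonants-only form is lossy in sequence length as well as content, while masked-vowels preserves alignment with the original text, which matters for tasks such as POS tagging and NER mentioned in the abstract.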
https://aclanthology.org/2023.findings-emnlp.776.bib
https://aclanthology.org/2023.findings-emnlp.776/
@inproceedings{oh-thorne-2023-detrimental, title = "Detrimental Contexts in Open-Domain Question Answering", author = "Oh, Philhoon and Thorne, James", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.776", doi = "10.18653/v1/2023.findings-emnlp.776", pages = "11589--11605", abstract = "For knowledge intensive NLP tasks, it has been widely accepted that accessing more information is a contributing factor to improvements in the model{'}s end-to-end performance. However, counter-intuitively, too much context can have a negative impact on the model when evaluated on common question answering (QA) datasets. In this paper, we analyze how passages can have a detrimental effect on retrieve-then-read architectures used in question answering. Our empirical evidence indicates that the current read architecture does not fully leverage the retrieved passages and significantly degrades its performance when using the whole passages compared to utilizing subsets of them. Our findings demonstrate that model accuracy can be improved by 10{\%} on two popular QA datasets by filtering out detrimental passages. Additionally, these outcomes are attained by utilizing existing retrieval methods without further training or data. We further highlight the challenges associated with identifying the detrimental passages. First, even with the correct context, the model can make an incorrect prediction, posing a challenge in determining which passages are most influential. Second, evaluation typically considers lexical matching, which is not robust to variations of correct answers. Despite these limitations, our experimental results underscore the pivotal role of identifying and removing these detrimental passages for the context-efficient retrieve-then-read pipeline.", }
For knowledge intensive NLP tasks, it has been widely accepted that accessing more information is a contributing factor to improvements in the model{'}s end-to-end performance. However, counter-intuitively, too much context can have a negative impact on the model when evaluated on common question answering (QA) datasets. In this paper, we analyze how passages can have a detrimental effect on retrieve-then-read architectures used in question answering. Our empirical evidence indicates that the current read architecture does not fully leverage the retrieved passages and significantly degrades its performance when using the whole passages compared to utilizing subsets of them. Our findings demonstrate that model accuracy can be improved by 10{\%} on two popular QA datasets by filtering out detrimental passages. Additionally, these outcomes are attained by utilizing existing retrieval methods without further training or data. We further highlight the challenges associated with identifying the detrimental passages. First, even with the correct context, the model can make an incorrect prediction, posing a challenge in determining which passages are most influential. Second, evaluation typically considers lexical matching, which is not robust to variations of correct answers. Despite these limitations, our experimental results underscore the pivotal role of identifying and removing these detrimental passages for the context-efficient retrieve-then-read pipeline.
[ "Oh, Philhoon", "Thorne, James" ]
Detrimental Contexts in Open-Domain Question Answering
findings-emnlp.776
2310.18077
[ "https://github.com/xfactlab/emnlp2023-damaging-retrieval" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
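The filtering result in the Oh and Thorne abstract above suggests a simple filter-then-read wrapper around an existing retriever and reader. The utility scorer below is a hypothetical lexical-overlap stand-in with an invented 0.2 offset; the paper's own point is that identifying truly detrimental passages is harder than such heuristics.

```python
from typing import Callable, List

def filter_passages(question: str, passages: List[str],
                    utility: Callable[[str, str], float]) -> List[str]:
    """Drop passages whose estimated utility for this question is non-positive."""
    return [p for p in passages if utility(question, p) > 0.0]

def overlap_utility(question: str, passage: str) -> float:
    # toy heuristic: lexical overlap minus an assumed baseline of 0.2
    q, p = set(question.lower().split()), set(passage.lower().split())
    return len(q & p) / max(1, len(q)) - 0.2

def filtered_read(question, passages, reader, utility=overlap_utility):
    kept = filter_passages(question, passages, utility)
    return reader(question, kept or passages)  # fall back if all were dropped

print(filtered_read("where was marie curie born",
                    ["Marie Curie was born in Warsaw.", "Bananas are berries."],
                    reader=lambda q, ps: ps))
```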
https://aclanthology.org/2023.findings-emnlp.777.bib
https://aclanthology.org/2023.findings-emnlp.777/
@inproceedings{urlana-etal-2023-pmindiasum, title = "{PMI}ndia{S}um: Multilingual and Cross-lingual Headline Summarization for Languages in {I}ndia", author = "Urlana, Ashok and Chen, Pinzhen and Zhao, Zheng and Cohen, Shay and Shrivastava, Manish and Haddow, Barry", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.777", doi = "10.18653/v1/2023.findings-emnlp.777", pages = "11606--11628", abstract = "This paper introduces PMIndiaSum, a multilingual and massively parallel summarization corpus focused on languages in India. Our corpus provides a training and testing ground for four language families and 14 languages, and it is the largest to date, with 196 language pairs. We detail our construction workflow including data acquisition, processing, and quality assurance. Furthermore, we publish benchmarks for monolingual, cross-lingual, and multilingual summarization by fine-tuning, prompting, as well as translate-and-summarize. Experimental results confirm the crucial role of our data in aiding summarization between Indian languages. Our dataset is publicly available and can be freely modified and re-distributed.", }
This paper introduces PMIndiaSum, a multilingual and massively parallel summarization corpus focused on languages in India. Our corpus provides a training and testing ground for four language families and 14 languages, and it is the largest to date, with 196 language pairs. We detail our construction workflow including data acquisition, processing, and quality assurance. Furthermore, we publish benchmarks for monolingual, cross-lingual, and multilingual summarization by fine-tuning, prompting, as well as translate-and-summarize. Experimental results confirm the crucial role of our data in aiding summarization between Indian languages. Our dataset is publicly available and can be freely modified and re-distributed.
[ "Urlana, Ashok", "Chen, Pinzhen", "Zhao, Zheng", "Cohen, Shay", "Shrivastava, Manish", "Haddow, Barry" ]
PMIndiaSum: Multilingual and Cross-lingual Headline Summarization for Languages in India
findings-emnlp.777
2305.08828
[ "https://github.com/ashokurlana/pmindiasum" ]
https://huggingface.co/papers/2305.08828
1
0
0
6
[]
[ "PMIndiaData/PMIndiaSum" ]
[]
1
Poster
https://aclanthology.org/2023.findings-emnlp.778.bib
https://aclanthology.org/2023.findings-emnlp.778/
@inproceedings{yao-etal-2023-beyond, title = "Beyond Labels: Empowering Human Annotators with Natural Language Explanations through a Novel Active-Learning Architecture", author = "Yao, Bingsheng and Jindal, Ishan and Popa, Lucian and Katsis, Yannis and Ghosh, Sayan and He, Lihong and Lu, Yuxuan and Srivastava, Shashank and Li, Yunyao and Hendler, James and Wang, Dakuo", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.778", doi = "10.18653/v1/2023.findings-emnlp.778", pages = "11629--11643", abstract = "Real-world domain experts (e.g., doctors) rarely annotate only a decision label in their day-to-day workflow without providing explanations. Yet, existing low-resource learning techniques, such as Active Learning (AL), that aim to support human annotators mostly focus on the label while neglecting the natural language explanation of a data point. This work proposes a novel AL architecture to support experts{'} real-world need for label and explanation annotations in low-resource scenarios. Our AL architecture leverages an explanation-generation model to produce explanations guided by human explanations, a prediction model that utilizes generated explanations toward prediction faithfully, and a novel data diversity-based AL sampling strategy that benefits from the explanation annotations. Automated and human evaluations demonstrate the effectiveness of incorporating explanations into AL sampling and the improved human annotation efficiency and trustworthiness with our AL architecture. Additional ablation studies illustrate the potential of our AL architecture for transfer learning, generalizability, and integration with large language models (LLMs). While LLMs exhibit exceptional explanation-generation capabilities for relatively simple tasks, their effectiveness in complex real-world tasks warrants further in-depth study.", }
Real-world domain experts (e.g., doctors) rarely annotate only a decision label in their day-to-day workflow without providing explanations. Yet, existing low-resource learning techniques, such as Active Learning (AL), that aim to support human annotators mostly focus on the label while neglecting the natural language explanation of a data point. This work proposes a novel AL architecture to support experts{'} real-world need for label and explanation annotations in low-resource scenarios. Our AL architecture leverages an explanation-generation model to produce explanations guided by human explanations, a prediction model that utilizes generated explanations toward prediction faithfully, and a novel data diversity-based AL sampling strategy that benefits from the explanation annotations. Automated and human evaluations demonstrate the effectiveness of incorporating explanations into AL sampling and the improved human annotation efficiency and trustworthiness with our AL architecture. Additional ablation studies illustrate the potential of our AL architecture for transfer learning, generalizability, and integration with large language models (LLMs). While LLMs exhibit exceptional explanation-generation capabilities for relatively simple tasks, their effectiveness in complex real-world tasks warrants further in-depth study.
[ "Yao, Bingsheng", "Jindal, Ishan", "Popa, Lucian", "Katsis, Yannis", "Ghosh, Sayan", "He, Lihong", "Lu, Yuxuan", "Srivastava, Shashank", "Li, Yunyao", "Hendler, James", "Wang, Dakuo" ]
Beyond Labels: Empowering Human Annotators with Natural Language Explanations through a Novel Active-Learning Architecture
findings-emnlp.778
2305.12710
[ "https://github.com/neuhai/explanation-enriched-active-learning" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
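The "data diversity-based AL sampling strategy" in the Yao et al. abstract above is not specified in detail here; as one common realization of diversity sampling, the sketch below uses farthest-point selection over instance embeddings. The paper's strategy additionally exploits explanation annotations, which this toy version omits.

```python
import numpy as np

def farthest_point_sample(embeddings, k, seed=0):
    """Greedily pick k mutually distant instances to send for annotation."""
    X = np.asarray(embeddings, dtype=float)
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(X)))]
    dist = np.linalg.norm(X - X[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(dist.argmax())          # farthest from everything chosen so far
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(X - X[nxt], axis=1))
    return chosen

pool = np.random.default_rng(1).normal(size=(100, 16))  # unlabeled pool embeddings
print(farthest_point_sample(pool, k=5))
```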
https://aclanthology.org/2023.findings-emnlp.779.bib
https://aclanthology.org/2023.findings-emnlp.779/
@inproceedings{goldstein-etal-2023-decoding, title = "Decoding Stumpers: Large Language Models vs. Human Problem-Solvers", author = "Goldstein, Alon and Havin, Miriam and Reichart, Roi and Goldstein, Ariel", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.779", doi = "10.18653/v1/2023.findings-emnlp.779", pages = "11644--11653", abstract = "This paper investigates the problem-solving capabilities of Large Language Models (LLMs) by evaluating their performance on stumpers, unique single-step intuition problems that pose challenges for human solvers but are easily verifiable. We compare the performance of four state-of-the-art LLMs (Davinci-2, Davinci-3, GPT-3.5-Turbo, GPT-4) to human participants. Our findings reveal that the new-generation LLMs excel in solving stumpers and surpass human performance. However, humans exhibit superior skills in verifying solutions to the same problems. This research enhances our understanding of LLMs{'} cognitive abilities and provides insights for enhancing their problem-solving potential across various domains.", }
This paper investigates the problem-solving capabilities of Large Language Models (LLMs) by evaluating their performance on stumpers, unique single-step intuition problems that pose challenges for human solvers but are easily verifiable. We compare the performance of four state-of-the-art LLMs (Davinci-2, Davinci-3, GPT-3.5-Turbo, GPT-4) to human participants. Our findings reveal that the new-generation LLMs excel in solving stumpers and surpass human performance. However, humans exhibit superior skills in verifying solutions to the same problems. This research enhances our understanding of LLMs{'} cognitive abilities and provides insights for enhancing their problem-solving potential across various domains.
[ "Goldstein, Alon", "Havin, Miriam", "Reichart, Roi", "Goldstein, Ariel" ]
Decoding Stumpers: Large Language Models vs. Human Problem-Solvers
findings-emnlp.779
2310.16411
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.780.bib
https://aclanthology.org/2023.findings-emnlp.780/
@inproceedings{xu-etal-2023-efficient, title = "Efficient Cross-Task Prompt Tuning for Few-Shot Conversational Emotion Recognition", author = "Xu, Yige and Zeng, Zhiwei and Shen, Zhiqi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.780", doi = "10.18653/v1/2023.findings-emnlp.780", pages = "11654--11666", abstract = "Emotion Recognition in Conversation (ERC) has been widely studied due to its importance in developing emotion-aware empathetic machines. The rise of pre-trained language models (PLMs) has further pushed the limit of ERC performance. However, most recent works on ERC using PLMs are heavily data-driven and require fine-tuning the entire PLM. To improve both sample and computational efficiency, we propose a derivative-free optimization method called Cross-Task Prompt Tuning (CTPT) for few-shot conversational emotion recognition. Unlike existing methods that learn independent knowledge from individual tasks, CTPT leverages sharable cross-task knowledge by exploiting external knowledge from other source tasks to improve learning performance under the few-shot setting. Moreover, CTPT only needs to optimize a vector under the low intrinsic dimensionality without gradient, which is highly parameter-efficient compared with existing approaches. Experiments on five different contextual conversation datasets demonstrate that our CTPT method has superior results on both few-shot scenarios and zero-shot transfers.", }
Emotion Recognition in Conversation (ERC) has been widely studied due to its importance in developing emotion-aware empathetic machines. The rise of pre-trained language models (PLMs) has further pushed the limit of ERC performance. However, most recent works on ERC using PLMs are heavily data-driven and require fine-tuning the entire PLM. To improve both sample and computational efficiency, we propose a derivative-free optimization method called Cross-Task Prompt Tuning (CTPT) for few-shot conversational emotion recognition. Unlike existing methods that learn independent knowledge from individual tasks, CTPT leverages sharable cross-task knowledge by exploiting external knowledge from other source tasks to improve learning performance under the few-shot setting. Moreover, CTPT only needs to optimize a vector under the low intrinsic dimensionality without gradient, which is highly parameter-efficient compared with existing approaches. Experiments on five different contextual conversation datasets demonstrate that our CTPT method has superior results on both few-shot scenarios and zero-shot transfers.
[ "Xu, Yige", "Zeng, Zhiwei", "Shen, Zhiqi" ]
Efficient Cross-Task Prompt Tuning for Few-Shot Conversational Emotion Recognition
findings-emnlp.780
2310.14614
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
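The efficiency claim in the Xu et al. abstract above rests on optimizing a small vector in a low intrinsic dimension without gradients, then projecting it up to the prompt-embedding space. The sketch below uses plain random search as a stand-in for whatever derivative-free optimizer is actually used, and a toy loss in place of the few-shot ERC objective.

```python
import numpy as np

rng = np.random.default_rng(0)
d_intrinsic, prompt_len, d_model = 16, 8, 64

# Fixed random up-projection from the intrinsic space to prompt embeddings.
A = rng.normal(size=(d_intrinsic, prompt_len * d_model)) / np.sqrt(d_intrinsic)

def prompt_from(z: np.ndarray) -> np.ndarray:
    return (z @ A).reshape(prompt_len, d_model)

def loss(prompt: np.ndarray) -> float:
    # toy stand-in: a real objective would query the frozen PLM on few-shot data
    return float(np.sum(prompt ** 2))

z_best = rng.normal(size=d_intrinsic)
best = loss(prompt_from(z_best))
for _ in range(500):  # gradient-free random search over the intrinsic vector
    z_try = z_best + 0.1 * rng.normal(size=d_intrinsic)
    trial = loss(prompt_from(z_try))
    if trial < best:
        z_best, best = z_try, trial
```

Only the 16-dimensional vector is ever updated, and no backward pass through the PLM is needed, which is where the parameter efficiency comes from.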
https://aclanthology.org/2023.findings-emnlp.781.bib
https://aclanthology.org/2023.findings-emnlp.781/
@inproceedings{kim-nakashole-2023-symptomify, title = "{SYMPTOMIFY}: Transforming Symptom Annotations with Language Model Knowledge Harvesting", author = "Kim, Bosung and Nakashole, Ndapa", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.781", doi = "10.18653/v1/2023.findings-emnlp.781", pages = "11667--11681", abstract = "Given the high-stakes nature of healthcare decision-making, we aim to improve the efficiency of human annotators rather than replacing them with fully automated solutions. We introduce a new comprehensive resource, SYMPTOMIFY, a dataset of annotated vaccine adverse reaction reports detailing individual vaccine reactions. The dataset, consisting of over 800k reports, surpasses previous datasets in size. Notably, it features reasoning-based explanations alongside background knowledge obtained via language model knowledge harvesting. We evaluate performance across various methods and learning paradigms, paving the way for future comparisons and benchmarking.", }
Given the high-stakes nature of healthcare decision-making, we aim to improve the efficiency of human annotators rather than replacing them with fully automated solutions. We introduce a new comprehensive resource, SYMPTOMIFY, a dataset of annotated vaccine adverse reaction reports detailing individual vaccine reactions. The dataset, consisting of over 800k reports, surpasses previous datasets in size. Notably, it features reasoning-based explanations alongside background knowledge obtained via language model knowledge harvesting. We evaluate performance across various methods and learning paradigms, paving the way for future comparisons and benchmarking.
[ "Kim, Bosung", "Nakashole, Ndapa" ]
SYMPTOMIFY: Transforming Symptom Annotations with Language Model Knowledge Harvesting
findings-emnlp.781
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.782.bib
https://aclanthology.org/2023.findings-emnlp.782/
@inproceedings{nagarajan-raghunathan-2023-tokendrop, title = "{T}oken{D}rop + {B}ucket{S}ampler: Towards Efficient Padding-free Fine-tuning of Language Models", author = "Nagarajan, Amrit and Raghunathan, Anand", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.782", doi = "10.18653/v1/2023.findings-emnlp.782", pages = "11682--11695", abstract = "The great success of Language Models (LMs) for various Natural Language Processing (NLP) tasks is accompanied by computational challenges during both pre-training and fine-tuning. Pre-training has attracted significant attention due to its huge computational footprint. We focus on the fine-tuning of pre-trained LMs, which is expected to be performed much more frequently as the pre-trained models are adapted to downstream tasks. During fine-tuning, the presence of variable-length input sequences necessitates the use of padding tokens when batching sequences. These padding tokens lead to ineffectual computations, adversely impacting the efficiency of fine-tuning. We also observe that LMs memorize the limited task-specific training data despite the use of known regularization methods. Based on these insights, we present TokenDrop + BucketSampler, a framework that simultaneously improves efficiency and accuracy of LM fine-tuning. BucketSampler generates batches of samples with lower variance in sequence lengths to reduce the number of padding tokens, but does so without the accompanying accuracy drop seen in previous approaches. TokenDrop is a new regularizer that prunes a random subset of insignificant tokens from each input sequence in every epoch to prevent overfitting. TokenDrop drops more tokens from the longer sequences in each batch to further reduce variance in input lengths and the need for padding. TokenDrop + BucketSampler accelerates fine-tuning on diverse downstream tasks by up to 10.61X, while also producing models that are up to 1.17{\%} more accurate compared to conventional fine-tuning. Code is available at https://github.com/amrnag/TokenDrop-BucketSampler.", }
The great success of Language Models (LMs) for various Natural Language Processing (NLP) tasks is accompanied by computational challenges during both pre-training and fine-tuning. Pre-training has attracted significant attention due to its huge computational footprint. We focus on the fine-tuning of pre-trained LMs, which is expected to be performed much more frequently as the pre-trained models are adapted to downstream tasks. During fine-tuning, the presence of variable-length input sequences necessitates the use of padding tokens when batching sequences. These padding tokens lead to ineffectual computations, adversely impacting the efficiency of fine-tuning. We also observe that LMs memorize the limited task-specific training data despite the use of known regularization methods. Based on these insights, we present TokenDrop + BucketSampler, a framework that simultaneously improves efficiency and accuracy of LM fine-tuning. BucketSampler generates batches of samples with lower variance in sequence lengths to reduce the number of padding tokens, but does so without the accompanying accuracy drop seen in previous approaches. TokenDrop is a new regularizer that prunes a random subset of insignificant tokens from each input sequence in every epoch to prevent overfitting. TokenDrop drops more tokens from the longer sequences in each batch to further reduce variance in input lengths and the need for padding. TokenDrop + BucketSampler accelerates fine-tuning on diverse downstream tasks by up to 10.61X, while also producing models that are up to 1.17{\%} more accurate compared to conventional fine-tuning. Code is available at https://github.com/amrnag/TokenDrop-BucketSampler.
[ "Nagarajan, Amrit", "Raghunathan, An", "" ]
TokenDrop + BucketSampler: Towards Efficient Padding-free Fine-tuning of Language Models
findings-emnlp.782
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
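Both components named in the Nagarajan and Raghunathan abstract above reduce padding by shrinking length variance within a batch. The sketch below is a minimal reading of the two ideas; the drop rates and batch construction details are our own illustrative assumptions, not the released implementation (see the linked repository for that).

```python
import random

def bucket_batches(seqs, batch_size, rng=random):
    """Group similar-length sequences into batches, then shuffle batch order."""
    order = sorted(range(len(seqs)), key=lambda i: len(seqs[i]))
    batches = [order[i:i + batch_size] for i in range(0, len(order), batch_size)]
    rng.shuffle(batches)
    return batches

def token_drop(batch, base_rate=0.1, rng=random):
    """Randomly prune tokens each epoch; longer sequences lose proportionally
    more, further shrinking length variance (and padding) within the batch."""
    min_len = min(len(s) for s in batch)
    pruned = []
    for seq in batch:
        rate = base_rate + 0.5 * (len(seq) - min_len) / max(1, len(seq))
        pruned.append([t for t in seq if rng.random() > rate])
    return pruned

seqs = [list(range(n)) for n in (5, 7, 8, 30, 32, 35)]
for idx_batch in bucket_batches(seqs, batch_size=3):
    batch = token_drop([seqs[i] for i in idx_batch])
```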
https://aclanthology.org/2023.findings-emnlp.783.bib
https://aclanthology.org/2023.findings-emnlp.783/
@inproceedings{zeng-bhat-2023-unified, title = "Unified Representation for Non-compositional and Compositional Expressions", author = "Zeng, Ziheng and Bhat, Suma", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.783", doi = "10.18653/v1/2023.findings-emnlp.783", pages = "11696--11710", abstract = "Accurate processing of non-compositional language relies on generating good representations for such expressions. In this work, we study the representation of language non-compositionality by proposing a language model, PIER+, that builds on BART and can create semantically meaningful and contextually appropriate representations for English potentially idiomatic expressions (PIEs). PIEs are characterized by their non-compositionality and contextual ambiguity in their literal and idiomatic interpretations. Via intrinsic evaluation on embedding quality and extrinsic evaluation on PIE processing and NLU tasks, we show that representations generated by PIER+ result in a 33{\%} higher homogeneity score for embedding clustering than BART, along with 3.12{\%} and 3.29{\%} gains in accuracy and sequence accuracy for PIE sense classification and span detection, respectively, compared to the state-of-the-art IE representation model, GIEA. These gains are achieved without sacrificing PIER+{'}s performance on NLU tasks (+/- 1{\%} accuracy) compared to BART.", }
Accurate processing of non-compositional language relies on generating good representations for such expressions. In this work, we study the representation of language non-compositionality by proposing a language model, PIER+, that builds on BART and can create semantically meaningful and contextually appropriate representations for English potentially idiomatic expressions (PIEs). PIEs are characterized by their non-compositionality and contextual ambiguity in their literal and idiomatic interpretations. Via intrinsic evaluation on embedding quality and extrinsic evaluation on PIE processing and NLU tasks, we show that representations generated by PIER+ result in a 33{\%} higher homogeneity score for embedding clustering than BART, along with 3.12{\%} and 3.29{\%} gains in accuracy and sequence accuracy for PIE sense classification and span detection, respectively, compared to the state-of-the-art IE representation model, GIEA. These gains are achieved without sacrificing PIER+{'}s performance on NLU tasks (+/- 1{\%} accuracy) compared to BART.
[ "Zeng, Ziheng", "Bhat, Suma" ]
Unified Representation for Non-compositional and Compositional Expressions
findings-emnlp.783
2310.19127
[ "https://github.com/zzeng13/pier" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.784.bib
https://aclanthology.org/2023.findings-emnlp.784/
@inproceedings{akimoto-etal-2023-context, title = "Context Quality Matters in Training Fusion-in-Decoder for Extractive Open-Domain Question Answering", author = "Akimoto, Kosuke and Takeoka, Kunihiro and Oyamada, Masafumi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.784", doi = "10.18653/v1/2023.findings-emnlp.784", pages = "11711--11729", abstract = "Retrieval-augmented generation models augment knowledge encoded in a language model by providing additional relevant external knowledge (context) during generation. Although it has been shown that the quantity and quality of context impact the performance of retrieval-augmented generation models during inference, limited research explores how these characteristics affect model training. This paper explores how context quantity and quality during model training affect the performance of Fusion-in-Decoder (FiD), the state-of-the-art retrieval-augmented generation model, in extractive open-domain question answering tasks. Experimental results suggest that FiD models overfit to context quality during training and show suboptimal performance when evaluated on different context quality. Through the experimental results, we also reveal FiD models trained with different context quality have different cross-attention distribution patterns. Specifically, as context quality during training increases, FiD models tend to attend more uniformly to each passage in context. Finally, based on these observations, we propose a method to mitigate overfitting to specific context quality by introducing bias to the cross-attention distribution, which we demonstrate to be effective in improving the performance of FiD models on different context quality.", }
Retrieval-augmented generation models augment knowledge encoded in a language model by providing additional relevant external knowledge (context) during generation. Although it has been shown that the quantity and quality of context impact the performance of retrieval-augmented generation models during inference, limited research explores how these characteristics affect model training. This paper explores how context quantity and quality during model training affect the performance of Fusion-in-Decoder (FiD), the state-of-the-art retrieval-augmented generation model, in extractive open-domain question answering tasks. Experimental results suggest that FiD models overfit to context quality during training and show suboptimal performance when evaluated on different context quality. Through the experimental results, we also reveal FiD models trained with different context quality have different cross-attention distribution patterns. Specifically, as context quality during training increases, FiD models tend to attend more uniformly to each passage in context. Finally, based on these observations, we propose a method to mitigate overfitting to specific context quality by introducing bias to the cross-attention distribution, which we demonstrate to be effective in improving the performance of FiD models on different context quality.
[ "Akimoto, Kosuke", "Takeoka, Kunihiro", "Oyamada, Masafumi" ]
Context Quality Matters in Training Fusion-in-Decoder for Extractive Open-Domain Question Answering
findings-emnlp.784
2403.14197
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
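The mitigation in the Akimoto et al. abstract above, "introducing bias to the cross-attention distribution", is described only at a high level here. One generic way to realize such a bias is to re-temper the passage-level attention so it becomes flatter or sharper, sketched below; this is an assumption-laden illustration, not the authors' exact formulation.

```python
import numpy as np

def rebias_passage_attention(weights, temperature=2.0):
    """Re-normalize per-passage attention mass with a temperature.
    temperature > 1 flattens the distribution (toward uniform attention,
    as seen when training-context quality is high); temperature < 1
    sharpens it; 1.0 leaves it unchanged."""
    logits = np.log(np.asarray(weights, dtype=float) + 1e-9) / temperature
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

print(rebias_passage_attention([0.7, 0.2, 0.1]))  # flatter than the input
```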
https://aclanthology.org/2023.findings-emnlp.785.bib
https://aclanthology.org/2023.findings-emnlp.785/
@inproceedings{chen-etal-2023-error, title = "Error Detection for Text-to-{SQL} Semantic Parsing", author = "Chen, Shijie and Chen, Ziru and Sun, Huan and Su, Yu", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.785", doi = "10.18653/v1/2023.findings-emnlp.785", pages = "11730--11743", abstract = "Despite remarkable progress in text-to-SQL semantic parsing in recent years, the performance of existing parsers is still far from perfect. Specifically, modern text-to-SQL parsers based on deep learning are often over-confident, thus casting doubt on their trustworthiness when deployed for real use. In this paper, we propose a parser-independent error detection model for text-to-SQL semantic parsing. Using a language model of code as its bedrock, we enhance our error detection model with graph neural networks that learn structural features of both natural language questions and SQL queries. We train our model on realistic parsing errors collected from a cross-domain setting, which leads to stronger generalization ability. Experiments with three strong text-to-SQL parsers featuring different decoding mechanisms show that our approach outperforms parser-dependent uncertainty metrics. Our model could also effectively improve the performance and usability of text-to-SQL semantic parsers regardless of their architectures.", }
Despite remarkable progress in text-to-SQL semantic parsing in recent years, the performance of existing parsers is still far from perfect. Specifically, modern text-to-SQL parsers based on deep learning are often over-confident, thus casting doubt on their trustworthiness when deployed for real use. In this paper, we propose a parser-independent error detection model for text-to-SQL semantic parsing. Using a language model of code as its bedrock, we enhance our error detection model with graph neural networks that learn structural features of both natural language questions and SQL queries. We train our model on realistic parsing errors collected from a cross-domain setting, which leads to stronger generalization ability. Experiments with three strong text-to-SQL parsers featuring different decoding mechanisms show that our approach outperforms parser-dependent uncertainty metrics. Our model could also effectively improve the performance and usability of text-to-SQL semantic parsers regardless of their architectures.
[ "Chen, Shijie", "Chen, Ziru", "Sun, Huan", "Su, Yu" ]
Error Detection for Text-to-SQL Semantic Parsing
findings-emnlp.785
2305.13683
[ "https://github.com/osu-nlp-group/text2sql-error-detection" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.786.bib
https://aclanthology.org/2023.findings-emnlp.786/
@inproceedings{li-etal-2023-ultra, title = "Ultra-Fine Entity Typing with Prior Knowledge about Labels: A Simple Clustering Based Strategy", author = "Li, Na and Bouraoui, Zied and Schockaert, Steven", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.786", doi = "10.18653/v1/2023.findings-emnlp.786", pages = "11744--11756", abstract = "Ultra-fine entity typing (UFET) is the task of inferring the semantic types from a large set of fine-grained candidates that apply to a given entity mention. This task is especially challenging because we only have a small number of training examples for many types, even with distant supervision strategies. State-of-the-art models, therefore, have to rely on prior knowledge about the type labels in some way. In this paper, we show that the performance of existing methods can be improved using a simple technique: we use pre-trained label embeddings to cluster the labels into semantic domains and then treat these domains as additional types. We show that this strategy consistently leads to improved results as long as high-quality label embeddings are used. Furthermore, we use the label clusters as part of a simple post-processing technique, which results in further performance gains. Both strategies treat the UFET model as a black box and can thus straightforwardly be used to improve a wide range of existing models.", }
Ultra-fine entity typing (UFET) is the task of inferring the semantic types from a large set of fine-grained candidates that apply to a given entity mention. This task is especially challenging because we only have a small number of training examples for many types, even with distant supervision strategies. State-of-the-art models, therefore, have to rely on prior knowledge about the type labels in some way. In this paper, we show that the performance of existing methods can be improved using a simple technique: we use pre-trained label embeddings to cluster the labels into semantic domains and then treat these domains as additional types. We show that this strategy consistently leads to improved results as long as high-quality label embeddings are used. Furthermore, we use the label clusters as part of a simple post-processing technique, which results in further performance gains. Both strategies treat the UFET model as a black box and can thus straightforwardly be used to improve a wide range of existing models.
[ "Li, Na", "Bouraoui, Zied", "Schockaert, Steven" ]
Ultra-Fine Entity Typing with Prior Knowledge about Labels: A Simple Clustering Based Strategy
findings-emnlp.786
2305.12802
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
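The Li et al. strategy above is deliberately simple and can be emulated in a few lines: cluster pre-trained label embeddings into semantic domains, then append each gold label's domain as an extra type. The random embeddings below are placeholders for real label embeddings (where the clusters would actually be meaningful), and scikit-learn is assumed available.

```python
import numpy as np
from sklearn.cluster import KMeans

labels = ["musician", "guitarist", "senator", "governor", "river", "lake"]
emb = np.random.default_rng(0).normal(size=(len(labels), 32))  # placeholder

# Cluster labels into coarse semantic domains.
domains = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(emb)
domain_of = {lab: f"domain_{d}" for lab, d in zip(labels, domains)}

def augment(gold_types):
    """Treat each gold label's domain as an additional training type."""
    return sorted(set(gold_types) | {domain_of[t] for t in gold_types})

print(augment(["musician", "guitarist"]))
```

Because the augmentation happens purely on the label side, the UFET model itself is untouched, which is what makes the strategy black-box as the abstract claims.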
https://aclanthology.org/2023.findings-emnlp.787.bib
https://aclanthology.org/2023.findings-emnlp.787/
@inproceedings{espana-bonet-2023-multilingual, title = "Multilingual Coarse Political Stance Classification of Media. The Editorial Line of a {C}hat{GPT} and Bard Newspaper", author = "Espa{\~n}a-Bonet, Cristina", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.787", doi = "10.18653/v1/2023.findings-emnlp.787", pages = "11757--11777", abstract = "Neutrality is difficult to achieve and, in politics, subjective. Traditional media typically adopt an editorial line that can be used by their potential readers as an indicator of the media bias. Several platforms currently rate news outlets according to their political bias. The editorial line and the ratings help readers in gathering a balanced view of news. But with the advent of instruction-following language models, tasks such as writing a newspaper article can be delegated to computers. Without imposing a biased persona, where would an AI-based news outlet lie within the bias ratings? In this work, we use the ratings of authentic news outlets to create a multilingual corpus of news with coarse stance annotations (Left and Right) along with automatically extracted topic annotations. We show that classifiers trained on this data are able to identify the editorial line of most unseen newspapers in English, German, Spanish and Catalan. We then apply the classifiers to 101 newspaper-like articles written by ChatGPT and Bard in the 4 languages at different time periods. We observe that, similarly to traditional newspapers, ChatGPT{'}s editorial line evolves with time and, being a data-driven system, the stance of the generated articles differs among languages.", }
Neutrality is difficult to achieve and, in politics, subjective. Traditional media typically adopt an editorial line that can be used by their potential readers as an indicator of the media bias. Several platforms currently rate news outlets according to their political bias. The editorial line and the ratings help readers in gathering a balanced view of news. But with the advent of instruction-following language models, tasks such as writing a newspaper article can be delegated to computers. Without imposing a biased persona, where would an AI-based news outlet lie within the bias ratings? In this work, we use the ratings of authentic news outlets to create a multilingual corpus of news with coarse stance annotations (Left and Right) along with automatically extracted topic annotations. We show that classifiers trained on this data are able to identify the editorial line of most unseen newspapers in English, German, Spanish and Catalan. We then apply the classifiers to 101 newspaper-like articles written by ChatGPT and Bard in the 4 languages at different time periods. We observe that, similarly to traditional newspapers, ChatGPT{'}s editorial line evolves with time and, being a data-driven system, the stance of the generated articles differs among languages.
[ "Espa{\\~n}a-Bonet, Cristina" ]
Multilingual Coarse Political Stance Classification of Media. The Editorial Line of a ChatGPT and Bard Newspaper
findings-emnlp.787
2310.16269
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.788.bib
https://aclanthology.org/2023.findings-emnlp.788/
@inproceedings{shan-etal-2023-english, title = "Do {``}{E}nglish{''} Named Entity Recognizers Work Well on Global Englishes?", author = "Shan, Alexander and Bauer, John and Carlson, Riley and Manning, Christopher", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.788", doi = "10.18653/v1/2023.findings-emnlp.788", pages = "11778--11791", abstract = "The vast majority of the popular English named entity recognition (NER) datasets contain American or British English data, despite the existence of many global varieties of English. As such, it is unclear whether they generalize for analyzing use of English globally. To test this, we build a newswire dataset, the Worldwide English NER Dataset, to analyze NER model performance on low-resource English variants from around the world. We test widely used NER toolkits and transformer models, including models using the pre-trained contextual models RoBERTa and ELECTRA, on three datasets: a commonly used British English newswire dataset, CoNLL 2003, a more American-focused dataset, OntoNotes, and our global dataset. All models trained on the CoNLL or OntoNotes datasets experienced significant performance drops{---}over 10 F1 in some cases{---}when tested on the Worldwide English dataset. Upon examination of region-specific errors, we observe the greatest performance drops for Oceania and Africa, while Asia and the Middle East had comparatively strong performance. Lastly, we find that a combined model trained on the Worldwide dataset and either CoNLL or OntoNotes lost only 1-2 F1 on both test sets.", }
The vast majority of the popular English named entity recognition (NER) datasets contain American or British English data, despite the existence of many global varieties of English. As such, it is unclear whether they generalize for analyzing use of English globally. To test this, we build a newswire dataset, the Worldwide English NER Dataset, to analyze NER model performance on low-resource English variants from around the world. We test widely used NER toolkits and transformer models, including models using the pre-trained contextual models RoBERTa and ELECTRA, on three datasets: a commonly used British English newswire dataset, CoNLL 2003, a more American-focused dataset, OntoNotes, and our global dataset. All models trained on the CoNLL or OntoNotes datasets experienced significant performance drops{---}over 10 F1 in some cases{---}when tested on the Worldwide English dataset. Upon examination of region-specific errors, we observe the greatest performance drops for Oceania and Africa, while Asia and the Middle East had comparatively strong performance. Lastly, we find that a combined model trained on the Worldwide dataset and either CoNLL or OntoNotes lost only 1-2 F1 on both test sets.
[ "Shan, Alex", "er", "Bauer, John", "Carlson, Riley", "Manning, Christopher" ]
Do “English” Named Entity Recognizers Work Well on Global Englishes?
findings-emnlp.788
[ "https://github.com/stanfordnlp/en-worldwide-newswire" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.789.bib
https://aclanthology.org/2023.findings-emnlp.789/
@inproceedings{huang-etal-2023-affective, title = "Affective and Dynamic Beam Search for Story Generation", author = "Huang, Tenghao and Qasemi, Ehsan and Li, Bangzheng and Wang, He and Brahman, Faeze and Chen, Muhao and Chaturvedi, Snigdha", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.789", doi = "10.18653/v1/2023.findings-emnlp.789", pages = "11792--11806", abstract = "Storytelling{'}s captivating potential makes it a fascinating research area, with implications for entertainment, education, therapy, and cognitive studies. In this paper, we propose Affective Story Generator (AffGen) for generating interesting narratives. AffGen introduces {`}intriguing twists{'} in narratives by employing two novel techniques{---}Dynamic Beam Sizing and Affective Reranking. Dynamic Beam Sizing encourages less predictable, more captivating word choices using a contextual multi-arm bandit model. Affective Reranking prioritizes sentence candidates based on affect intensity. Our empirical evaluations, both automatic and human, demonstrate AffGen{'}s superior performance over existing baselines in generating affectively charged and interesting narratives. Our ablation study and analysis provide insights into the strengths and weaknesses of AffGen.", }
Storytelling{'}s captivating potential makes it a fascinating research area, with implications for entertainment, education, therapy, and cognitive studies. In this paper, we propose Affective Story Generator (AffGen) for generating interesting narratives. AffGen introduces {`}intriguing twists{'} in narratives by employing two novel techniques{---}Dynamic Beam Sizing and Affective Reranking. Dynamic Beam Sizing encourages less predictable, more captivating word choices using a contextual multi-arm bandit model. Affective Reranking prioritizes sentence candidates based on affect intensity. Our empirical evaluations, both automatic and human, demonstrate AffGen{'}s superior performance over existing baselines in generating affectively charged and interesting narratives. Our ablation study and analysis provide insights into the strengths and weaknesses of AffGen.
[ "Huang, Tenghao", "Qasemi, Ehsan", "Li, Bangzheng", "Wang, He", "Brahman, Faeze", "Chen, Muhao", "Chaturvedi, Snigdha" ]
Affective and Dynamic Beam Search for Story Generation
findings-emnlp.789
2310.15079
[ "https://github.com/tenghaohuang/affgen" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
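To make the Affective Reranking step in the AffGen record above concrete, here is a minimal Python sketch: score each beam candidate by affect intensity and mix that score with the language-model score. The tiny lexicon and the averaging heuristic are hypothetical stand-ins for illustration only; the paper itself pairs a contextual multi-arm bandit (for dynamic beam sizing) with a learned affect-intensity measure.

# Minimal sketch of affect-based reranking of beam-search candidates.
# The tiny affect lexicon is a hypothetical stand-in for the learned
# affect-intensity model described in the AffGen abstract.

AFFECT_LEXICON = {  # word -> affect intensity in [0, 1] (toy values)
    "terrified": 0.9, "ecstatic": 0.85, "betrayed": 0.8,
    "screamed": 0.75, "walked": 0.1, "said": 0.05,
}

def affect_score(sentence: str) -> float:
    """Average affect intensity of known words; 0.0 if none match."""
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    hits = [AFFECT_LEXICON[w] for w in words if w in AFFECT_LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def affective_rerank(candidates, lm_scores, alpha=0.5):
    """Rerank beam candidates by a mix of LM score and affect intensity."""
    combined = [(alpha * lm + (1 - alpha) * affect_score(c), c)
                for c, lm in zip(candidates, lm_scores)]
    return [c for _, c in sorted(combined, reverse=True)]

if __name__ == "__main__":
    beams = ["She walked home and said goodnight.",
             "She screamed, terrified by what she saw."]
    print(affective_rerank(beams, lm_scores=[0.6, 0.5]))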
https://aclanthology.org/2023.findings-emnlp.790.bib
https://aclanthology.org/2023.findings-emnlp.790/
@inproceedings{shi-etal-2023-multiview, title = "Multiview Clickbait Detection via Jointly Modeling Subjective and Objective Preference", author = "Shi, Chongyang and Yin, Yijun and Zhang, Qi and Xiao, Liang and Naseem, Usman and Wang, Shoujin and Hu, Liang", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.790", doi = "10.18653/v1/2023.findings-emnlp.790", pages = "11807--11816", abstract = "Clickbait posts tend to spread inaccurate or misleading information to manipulate people{'}s attention and emotions, which greatly harms the credibility of social media. Existing clickbait detection models rely on analyzing the objective semantics in posts or correlating posts with article content only. However, these models fail to identify and exploit the manipulation intention of clickbait from a user{'}s subjective perspective, leading to limited capability to explore comprehensive clues of clickbait. To address such a issue, we propose a multiview clickbait detection model, named MCDM, to model subjective and objective preferences simultaneously. MCDM introduces two novel complementary modules for modeling subjective feeling and objective content relevance, respectively. The subjective feeling module adopts a user-centric approach to capture subjective features of posts, such as language patterns and emotional inclinations. The objective module explores news elements from posts and models article content correlations to capture objective clues for clickbait detection. Extensive experimental results on two real-world datasets show that our proposed MCDM outperforms state-of-the-art approaches for clickbait detection, verifying the effectiveness of integrating subjective and objective preferences for detecting clickbait.", }
Clickbait posts tend to spread inaccurate or misleading information to manipulate people{'}s attention and emotions, which greatly harms the credibility of social media. Existing clickbait detection models rely on analyzing the objective semantics in posts or correlating posts with article content only. However, these models fail to identify and exploit the manipulation intention of clickbait from a user{'}s subjective perspective, leading to limited capability to explore comprehensive clues of clickbait. To address this issue, we propose a multiview clickbait detection model, named MCDM, to model subjective and objective preferences simultaneously. MCDM introduces two novel complementary modules for modeling subjective feeling and objective content relevance, respectively. The subjective feeling module adopts a user-centric approach to capture subjective features of posts, such as language patterns and emotional inclinations. The objective module explores news elements from posts and models article content correlations to capture objective clues for clickbait detection. Extensive experimental results on two real-world datasets show that our proposed MCDM outperforms state-of-the-art approaches for clickbait detection, verifying the effectiveness of integrating subjective and objective preferences for detecting clickbait.
[ "Shi, Chongyang", "Yin, Yijun", "Zhang, Qi", "Xiao, Liang", "Naseem, Usman", "Wang, Shoujin", "Hu, Liang" ]
Multiview Clickbait Detection via Jointly Modeling Subjective and Objective Preference
findings-emnlp.790
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.791.bib
https://aclanthology.org/2023.findings-emnlp.791/
@inproceedings{wang-etal-2023-lets, title = "Let{'}s Synthesize Step by Step: Iterative Dataset Synthesis with Large Language Models by Extrapolating Errors from Small Models", author = "Wang, Ruida and Zhou, Wangchunshu and Sachan, Mrinmaya", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.791", doi = "10.18653/v1/2023.findings-emnlp.791", pages = "11817--11831", abstract = "*Data Synthesis* is a promising way to train a small model with very little labeled data. One approach for data synthesis is to leverage the rich knowledge from large language models to synthesize pseudo training examples for small models, making it possible to achieve both data and compute efficiency at the same time. However, a key challenge in data synthesis is that the synthesized dataset often suffers from a large distributional discrepancy from the *real task* data distribution. Thus, in this paper, we propose *Synthesis Step by Step* (**S3**), a data synthesis framework that shrinks this distribution gap by iteratively extrapolating the errors made by a small model trained on the synthesized dataset on a small real-world validation dataset using a large language model. Extensive experiments on multiple NLP tasks show that our approach improves the performance of a small model by reducing the gap between the synthetic dataset and the real data, resulting in significant improvement compared to several baselines: 9.48{\%} improvement compared to ZeroGen and 2.73{\%} compared to GoldGen, and at most 15.17{\%} improvement compared to the small model trained on human-annotated data.", }
*Data Synthesis* is a promising way to train a small model with very little labeled data. One approach for data synthesis is to leverage the rich knowledge from large language models to synthesize pseudo training examples for small models, making it possible to achieve both data and compute efficiency at the same time. However, a key challenge in data synthesis is that the synthesized dataset often suffers from a large distributional discrepancy from the *real task* data distribution. Thus, in this paper, we propose *Synthesis Step by Step* (**S3**), a data synthesis framework that shrinks this distribution gap by iteratively extrapolating the errors made by a small model trained on the synthesized dataset on a small real-world validation dataset using a large language model. Extensive experiments on multiple NLP tasks show that our approach improves the performance of a small model by reducing the gap between the synthetic dataset and the real data, resulting in significant improvement compared to several baselines: 9.48{\%} improvement compared to ZeroGen and 2.73{\%} compared to GoldGen, and at most 15.17{\%} improvement compared to the small model trained on human-annotated data.
[ "Wang, Ruida", "Zhou, Wangchunshu", "Sachan, Mrinmaya" ]
Let's Synthesize Step by Step: Iterative Dataset Synthesis with Large Language Models by Extrapolating Errors from Small Models
findings-emnlp.791
2310.13671
[ "https://github.com/rickyskywalker/synthesis_step-by-step_official" ]
https://huggingface.co/papers/2310.13671
2
18
1
3
[]
[]
[]
1
Poster
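The iterative procedure in the S3 record above can be sketched as a short loop: train a small model on the synthetic data, find the real validation examples it gets wrong, and ask an LLM to synthesize new examples that target those errors. Both callables below are assumed interfaces for illustration, not the authors' released code (see the GitHub link above for the actual implementation).

# Sketch of the Synthesis Step by Step (S3) outer loop. Assumed interfaces:
#   train_small_model(dataset) -> model exposing .predict(text) -> label
#   llm_synthesize(error_examples, n) -> list of (text, label) pairs
# Both are hypothetical placeholders, not the paper's actual API.

def s3_loop(seed_synthetic, val_set, train_small_model, llm_synthesize,
            rounds=3, per_round=100):
    data = list(seed_synthetic)
    for _ in range(rounds):
        model = train_small_model(data)
        # Extrapolate errors: validation examples the small model misses.
        errors = [(x, y) for x, y in val_set if model.predict(x) != y]
        if not errors:
            break  # synthetic data already matches the real distribution
        # Ask the LLM for fresh examples resembling the misclassified ones.
        data.extend(llm_synthesize(errors, per_round))
    return train_small_model(data)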
https://aclanthology.org/2023.findings-emnlp.792.bib
https://aclanthology.org/2023.findings-emnlp.792/
@inproceedings{gollapalli-etal-2023-identifying, title = "Identifying {Early Maladaptive Schemas} from Mental Health Question Texts", author = "Gollapalli, Sujatha and Ang, Beng and Ng, See-Kiong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.792", doi = "10.18653/v1/2023.findings-emnlp.792", pages = "11832--11843", abstract = "In Psychotherapy, maladaptive schemas{--} negative perceptions that an individual has of the self, others, or the world that endure despite objective reality{--}often lead to resistance to treatments and relapse of mental health issues such as depression, anxiety, panic attacks etc. Identification of early maladaptive schemas (EMS) is thus a crucial step during Schema Therapy-based counseling sessions, where patients go through a detailed and lengthy EMS questionnaire. However, such an approach is not practical in {`}offline{'} counseling scenarios, such as community QA forums which are gaining popularity for people seeking mental health support. In this paper, we investigate both LLM (Large Language Models) and non-LLM approaches for identifying EMS labels using resources from Schema Therapy. Our evaluation indicates that recent LLMs can be effective for identifying EMS but their predictions lack explainability and are too sensitive to precise {`}prompts{'}. Both LLM and non-LLM methods are unable to reliably address the null cases, i.e. cases with no EMS labels. However, we posit that the two approaches show complementary properties and together, they can be used to further devise techniques for EMS identification.", }
In Psychotherapy, maladaptive schemas{--} negative perceptions that an individual has of the self, others, or the world that endure despite objective reality{--}often lead to resistance to treatments and relapse of mental health issues such as depression, anxiety, panic attacks etc. Identification of early maladaptive schemas (EMS) is thus a crucial step during Schema Therapy-based counseling sessions, where patients go through a detailed and lengthy EMS questionnaire. However, such an approach is not practical in {`}offline{'} counseling scenarios, such as community QA forums which are gaining popularity for people seeking mental health support. In this paper, we investigate both LLM (Large Language Models) and non-LLM approaches for identifying EMS labels using resources from Schema Therapy. Our evaluation indicates that recent LLMs can be effective for identifying EMS but their predictions lack explainability and are too sensitive to precise {`}prompts{'}. Both LLM and non-LLM methods are unable to reliably address the null cases, i.e. cases with no EMS labels. However, we posit that the two approaches show complementary properties and together, they can be used to further devise techniques for EMS identification.
[ "Gollapalli, Sujatha", "Ang, Beng", "Ng, See-Kiong" ]
Identifying Early Maladaptive Schemas from Mental Health Question Texts
findings-emnlp.792
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.793.bib
https://aclanthology.org/2023.findings-emnlp.793/
@inproceedings{yang-etal-2023-vilm, title = "Re-{V}i{LM}: Retrieval-Augmented Visual Language Model for Zero and Few-Shot Image Captioning", author = "Yang, Zhuolin and Ping, Wei and Liu, Zihan and Korthikanti, Vijay and Nie, Weili and Huang, De-An and Fan, Linxi and Yu, Zhiding and Lan, Shiyi and Li, Bo and Shoeybi, Mohammad and Liu, Ming-Yu and Zhu, Yuke and Catanzaro, Bryan and Xiao, Chaowei and Anandkumar, Anima", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.793", doi = "10.18653/v1/2023.findings-emnlp.793", pages = "11844--11857", abstract = "Augmenting pretrained language models (LMs) with a vision encoder (e.g., Flamingo) has obtained state-of-the-art results in image-to-text generation. However, these models store all the knowledge within their parameters, thus often requiring enormous model parameters to model the abundant visual concepts and very rich text descriptions. Additionally, they are inefficient in incorporating new data, requiring a computational-expensive fine-tuning process. In this work, we introduce a Retrieval-augmented Visual Language Model, Re-ViLM, built upon the Flamingo, that supports retrieving the relevant knowledge from the external database for zero and in-context few-shot image-to-text generations. By storing certain knowledge explicitly in the external database, our approach reduces the number of model parameters and can easily accommodate new data during evaluation by simply updating the database. We also construct an interleaved image and text data that facilitates in-context few-shot learning capabilities.We demonstrate that Re-ViLM significantly boosts performance for image-to-text generation tasks, especially for zero-shot and few-shot generation in out-of-domain settings with 4x less parameters compared with baseline methods.", }
Augmenting pretrained language models (LMs) with a vision encoder (e.g., Flamingo) has obtained state-of-the-art results in image-to-text generation. However, these models store all the knowledge within their parameters, thus often requiring enormous model parameters to model the abundant visual concepts and very rich text descriptions. Additionally, they are inefficient in incorporating new data, requiring a computationally expensive fine-tuning process. In this work, we introduce a Retrieval-augmented Visual Language Model, Re-ViLM, built upon the Flamingo, that supports retrieving the relevant knowledge from the external database for zero and in-context few-shot image-to-text generations. By storing certain knowledge explicitly in the external database, our approach reduces the number of model parameters and can easily accommodate new data during evaluation by simply updating the database. We also construct interleaved image and text data that facilitates in-context few-shot learning capabilities. We demonstrate that Re-ViLM significantly boosts performance for image-to-text generation tasks, especially for zero-shot and few-shot generation in out-of-domain settings with 4x fewer parameters compared with baseline methods.
[ "Yang, Zhuolin", "Ping, Wei", "Liu, Zihan", "Korthikanti, Vijay", "Nie, Weili", "Huang, De-An", "Fan, Linxi", "Yu, Zhiding", "Lan, Shiyi", "Li, Bo", "Shoeybi, Mohammad", "Liu, Ming-Yu", "Zhu, Yuke", "Catanzaro, Bryan", "Xiao, Chaowei", "An", "kumar, Anima" ]
Re-ViLM: Retrieval-Augmented Visual Language Model for Zero and Few-Shot Image Captioning
findings-emnlp.793
2302.04858
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.794.bib
https://aclanthology.org/2023.findings-emnlp.794/
@inproceedings{xie-etal-2023-syntax, title = "Syntax Matters: Towards Spoken Language Understanding via Syntax-Aware Attention", author = "Xie, Yifeng and Zhu, Zhihong and Cheng, Xuxin and Huang, Zhiqi and Chen, Dongsheng", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.794", doi = "10.18653/v1/2023.findings-emnlp.794", pages = "11858--11864", abstract = "Spoken Language Understanding (SLU), a crucial component of task-oriented dialogue systems, has consistently garnered attention from both academic and industrial communities. Although incorporating syntactic information into models has the potential to enhance the comprehension of user utterances and yield impressive results, its application in SLU systems remains largely unexplored. In this paper, we propose a carefully designed model termed Syntax-aware attention (SAT) to enhance SLU, where attention scopes are constrained based on relationships within the syntactic structure. Experimental results on three datasets show that our model achieves substantial improvements and excellent performance. Moreover, SAT can be integrated into other BERT-based language models to further boost their performance.", }
Spoken Language Understanding (SLU), a crucial component of task-oriented dialogue systems, has consistently garnered attention from both academic and industrial communities. Although incorporating syntactic information into models has the potential to enhance the comprehension of user utterances and yield impressive results, its application in SLU systems remains largely unexplored. In this paper, we propose a carefully designed model termed Syntax-aware attention (SAT) to enhance SLU, where attention scopes are constrained based on relationships within the syntactic structure. Experimental results on three datasets show that our model achieves substantial improvements and excellent performance. Moreover, SAT can be integrated into other BERT-based language models to further boost their performance.
[ "Xie, Yifeng", "Zhu, Zhihong", "Cheng, Xuxin", "Huang, Zhiqi", "Chen, Dongsheng" ]
Syntax Matters: Towards Spoken Language Understanding via Syntax-Aware Attention
findings-emnlp.794
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.795.bib
https://aclanthology.org/2023.findings-emnlp.795/
@inproceedings{wang-etal-2023-chatgpt-defend, title = "Can {C}hat{GPT} Defend its Belief in Truth? Evaluating {LLM} Reasoning via Debate", author = "Wang, Boshi and Yue, Xiang and Sun, Huan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.795", doi = "10.18653/v1/2023.findings-emnlp.795", pages = "11865--11881", abstract = "Large language models (LLMs) such as ChatGPT and GPT-4 have shown impressive performance in complex reasoning tasks. However, it is difficult to know whether the models are reasoning based on deep understandings of truth and logic, or leveraging their memorized patterns in a relatively superficial way. In this work, we explore testing LLMs{'} reasoning by engaging with them in a debate-like conversation, where given a question, the LLM and the user need to discuss to make the correct decision starting from opposing arguments. Upon mitigating the Clever Hans effect, our task requires the LLM to not only achieve the correct answer on its own, but also be able to hold and defend its belief instead of blindly believing or getting misled by the user{'}s (invalid) arguments and critiques, thus testing in greater depth whether the LLM grasps the essence of the reasoning required to solve the problem. Across a range of complex reasoning benchmarks spanning math, commonsense, logic and BIG-Bench tasks, we find that despite their impressive performance as reported in existing work on generating correct step-by-step solutions in the beginning, LLMs like ChatGPT cannot maintain their beliefs in truth for a significant portion of examples when challenged by oftentimes absurdly invalid arguments. Our work points to danger zones of model alignment, and also suggests more careful treatments and interpretations of the recent findings that LLMs can improve their responses based on feedback.", }
Large language models (LLMs) such as ChatGPT and GPT-4 have shown impressive performance in complex reasoning tasks. However, it is difficult to know whether the models are reasoning based on deep understandings of truth and logic, or leveraging their memorized patterns in a relatively superficial way. In this work, we explore testing LLMs{'} reasoning by engaging with them in a debate-like conversation, where given a question, the LLM and the user need to discuss to make the correct decision starting from opposing arguments. Upon mitigating the Clever Hans effect, our task requires the LLM to not only achieve the correct answer on its own, but also be able to hold and defend its belief instead of blindly believing or getting misled by the user{'}s (invalid) arguments and critiques, thus testing in greater depth whether the LLM grasps the essence of the reasoning required to solve the problem. Across a range of complex reasoning benchmarks spanning math, commonsense, logic and BIG-Bench tasks, we find that despite their impressive performance as reported in existing work on generating correct step-by-step solutions in the beginning, LLMs like ChatGPT cannot maintain their beliefs in truth for a significant portion of examples when challenged by oftentimes absurdly invalid arguments. Our work points to danger zones of model alignment, and also suggests more careful treatments and interpretations of the recent findings that LLMs can improve their responses based on feedback.
[ "Wang, Boshi", "Yue, Xiang", "Sun, Huan" ]
Can ChatGPT Defend its Belief in Truth? Evaluating LLM Reasoning via Debate
findings-emnlp.795
2305.13160
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.796.bib
https://aclanthology.org/2023.findings-emnlp.796/
@inproceedings{meade-etal-2023-using, title = "Using In-Context Learning to Improve Dialogue Safety", author = "Meade, Nicholas and Gella, Spandana and Hazarika, Devamanyu and Gupta, Prakhar and Jin, Di and Reddy, Siva and Liu, Yang and Hakkani-Tur, Dilek", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.796", doi = "10.18653/v1/2023.findings-emnlp.796", pages = "11882--11910", abstract = "While large neural-based conversational models have become increasingly proficient dialogue agents, recent work has highlighted safety issues with these systems. For example, these systems can be goaded into generating toxic content, often perpetuating social biases or stereotypes. We investigate a retrieval-based approach for reducing bias and toxicity in responses from chatbots. It uses in-context learning to steer a model towards safer generations. Concretely, to generate a response to an unsafe dialogue context, we retrieve demonstrations of safe responses to similar dialogue contexts. We find our method performs competitively with existing approaches to dialogue safety without requiring training. We also show, using automatic and human evaluation, that reductions in toxicity obtained using our approach are not at the cost engagingness or coherency. Finally, we note our method can be used in compliment to existing dialogue safety approaches, such as RLHF.", }
While large neural-based conversational models have become increasingly proficient dialogue agents, recent work has highlighted safety issues with these systems. For example, these systems can be goaded into generating toxic content, often perpetuating social biases or stereotypes. We investigate a retrieval-based approach for reducing bias and toxicity in responses from chatbots. It uses in-context learning to steer a model towards safer generations. Concretely, to generate a response to an unsafe dialogue context, we retrieve demonstrations of safe responses to similar dialogue contexts. We find our method performs competitively with existing approaches to dialogue safety without requiring training. We also show, using automatic and human evaluation, that reductions in toxicity obtained using our approach are not at the cost of engagingness or coherency. Finally, we note our method can be used in complement to existing dialogue safety approaches, such as RLHF.
[ "Meade, Nicholas", "Gella, Sp", "ana", "Hazarika, Devamanyu", "Gupta, Prakhar", "Jin, Di", "Reddy, Siva", "Liu, Yang", "Hakkani-Tur, Dilek" ]
Using In-Context Learning to Improve Dialogue Safety
findings-emnlp.796
2302.00871
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
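A minimal sketch of the retrieval-based in-context safety idea in the record above: given an unsafe dialogue context, retrieve safe-response demonstrations for similar contexts and prepend them to the prompt. The bag-of-words cosine similarity is a toy stand-in for the retriever, and the demo-pool format is an assumption made for illustration.

from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_safe_demos(context, demo_pool, k=2):
    """demo_pool holds (unsafe_context, safe_response) pairs."""
    q = Counter(context.lower().split())
    ranked = sorted(demo_pool,
                    key=lambda d: cosine(q, Counter(d[0].lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(context, demos):
    """Prepend retrieved safe demonstrations as in-context examples."""
    shots = "\n\n".join(f"Context: {c}\nSafe response: {r}" for c, r in demos)
    return f"{shots}\n\nContext: {context}\nSafe response:"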
https://aclanthology.org/2023.findings-emnlp.797.bib
https://aclanthology.org/2023.findings-emnlp.797/
@inproceedings{yoon-etal-2023-hear, title = "{HEAR}: Hearing Enhanced Audio Response for Video-grounded Dialogue", author = "Yoon, Sunjae and Kim, Dahyun and Yoon, Eunseop and Yoon, Hee and Kim, Junyeong and Yoo, Chang", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.797", doi = "10.18653/v1/2023.findings-emnlp.797", pages = "11911--11924", abstract = "Video-grounded Dialogue (VGD) aims to answer questions regarding a given multi-modal input comprising video, audio, and dialogue history. Although there have been numerous efforts in developing VGD systems to improve the quality of their responses, existing systems are competent only to incorporate the information in the video and text and tend to struggle in extracting the necessary information from the audio when generating appropriate responses to the question. The VGD system seems to be deaf, and thus, we coin this symptom of current systems{'} ignoring audio data as a deaf response. To overcome the deaf response problem, Hearing Enhanced Audio Response (HEAR) framework is proposed to perform sensible listening by selectively attending to audio whenever the question requires it. The HEAR framework enhances the accuracy and audibility of VGD systems in a model-agnostic manner. HEAR is validated on VGD datasets (i.e., AVSD@DSTC7 and AVSD@DSTC8) and shows effectiveness with various VGD systems.", }
Video-grounded Dialogue (VGD) aims to answer questions regarding a given multi-modal input comprising video, audio, and dialogue history. Although there have been numerous efforts in developing VGD systems to improve the quality of their responses, existing systems are only competent at incorporating the information in the video and text, and tend to struggle to extract the necessary information from the audio when generating appropriate responses to the question. The VGD system seems to be deaf, and thus, we coin this symptom of current systems{'} ignoring audio data as a deaf response. To overcome the deaf response problem, the Hearing Enhanced Audio Response (HEAR) framework is proposed to perform sensible listening by selectively attending to audio whenever the question requires it. The HEAR framework enhances the accuracy and audibility of VGD systems in a model-agnostic manner. HEAR is validated on VGD datasets (i.e., AVSD@DSTC7 and AVSD@DSTC8) and shows effectiveness with various VGD systems.
[ "Yoon, Sunjae", "Kim, Dahyun", "Yoon, Eunseop", "Yoon, Hee", "Kim, Junyeong", "Yoo, Chang" ]
HEAR: Hearing Enhanced Audio Response for Video-grounded Dialogue
findings-emnlp.797
2312.09736
[ "https://github.com/dbstjswo505/HEAR" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.798.bib
https://aclanthology.org/2023.findings-emnlp.798/
@inproceedings{zeng-etal-2023-improving, title = "Improving Consistency for Text Summarization with Energy Functions", author = "Zeng, Qi and Yin, Qingyu and Li, Zheng and Gao, Yifan and Nag, Sreyashi and Wang, Zhengyang and Yin, Bing and Ji, Heng and Zhang, Chao", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.798", doi = "10.18653/v1/2023.findings-emnlp.798", pages = "11925--11931", abstract = "Current abstractive summarization models often generate inconsistent content, i.e. texts that are not directly inferable from the source document, are not consistent with respect to world knowledge, or are self-contradictory. These inconsistencies motivate a new consistency taxonomy that we define as faithfulness, factuality, and self-supportiveness. However, most recent work on reducing inconsistency in document summarization only focuses on faithfulness detection and correction while ignoring other inconsistency phenomena, which limits the model{'}s scalability. To improve the general consistency we introduce EnergySum, where we apply the Residual Energy-based Model by designing energy scorers that reflect each type of consistency. These energy scores are utilized in candidate re-ranking during the sampling process. Experiments on XSUM and CNN/DM datasets show that EnergySum mitigates the trade-off between accuracy and consistency.", }
Current abstractive summarization models often generate inconsistent content, i.e. texts that are not directly inferable from the source document, are not consistent with respect to world knowledge, or are self-contradictory. These inconsistencies motivate a new consistency taxonomy that we define as faithfulness, factuality, and self-supportiveness. However, most recent work on reducing inconsistency in document summarization only focuses on faithfulness detection and correction while ignoring other inconsistency phenomena, which limits the model{'}s scalability. To improve general consistency, we introduce EnergySum, where we apply the Residual Energy-based Model by designing energy scorers that reflect each type of consistency. These energy scores are utilized in candidate re-ranking during the sampling process. Experiments on XSUM and CNN/DM datasets show that EnergySum mitigates the trade-off between accuracy and consistency.
[ "Zeng, Qi", "Yin, Qingyu", "Li, Zheng", "Gao, Yifan", "Nag, Sreyashi", "Wang, Zhengyang", "Yin, Bing", "Ji, Heng", "Zhang, Chao" ]
Improving Consistency for Text Summarization with Energy Functions
findings-emnlp.798
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
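The candidate re-ranking step in the EnergySum record above can be illustrated with a small sketch: each consistency dimension contributes an energy score (lower is better), and sampled summary candidates are sorted by total energy. The token-overlap faithfulness proxy is a hypothetical placeholder, not the paper's residual energy-based scorers.

def faithfulness_energy(summary, source):
    """Toy energy: fraction of summary tokens absent from the source."""
    src = set(source.lower().split())
    toks = summary.lower().split()
    return sum(t not in src for t in toks) / max(len(toks), 1)

def rerank_by_energy(candidates, source, scorers=None):
    """Sort sampled candidates by total energy across all scorers."""
    scorers = scorers or [faithfulness_energy]
    def total_energy(c):
        return sum(score(c, source) for score in scorers)
    return sorted(candidates, key=total_energy)  # lowest energy first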
https://aclanthology.org/2023.findings-emnlp.799.bib
https://aclanthology.org/2023.findings-emnlp.799/
@inproceedings{li-etal-2023-defining, title = "Defining a New {NLP} Playground", author = "Li, Sha and Han, Chi and Yu, Pengfei and Edwards, Carl and Li, Manling and Wang, Xingyao and Fung, Yi and Yu, Charles and Tetreault, Joel and Hovy, Eduard and Ji, Heng", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.799", doi = "10.18653/v1/2023.findings-emnlp.799", pages = "11932--11951", abstract = "The recent explosion of performance of large language models (LLMs) has changed the field of Natural Language Processing (NLP) more abruptly and seismically than any other shift in the field{'}s 80 year history. This has resulted in concerns that the field will become homogenized and resource-intensive. This new status quo has put many academic researchers, especially PhD students, at a disadvantage. This paper aims to define a new NLP playground by proposing 20+ PhD-dissertation-worthy research directions, covering theoretical analysis, new and challenging problems, learning paradigms and interdisciplinary applications.", }
The recent explosion of performance of large language models (LLMs) has changed the field of Natural Language Processing (NLP) more abruptly and seismically than any other shift in the field{'}s 80-year history. This has resulted in concerns that the field will become homogenized and resource-intensive. This new status quo has put many academic researchers, especially PhD students, at a disadvantage. This paper aims to define a new NLP playground by proposing 20+ PhD-dissertation-worthy research directions, covering theoretical analysis, new and challenging problems, learning paradigms and interdisciplinary applications.
[ "Li, Sha", "Han, Chi", "Yu, Pengfei", "Edwards, Carl", "Li, Manling", "Wang, Xingyao", "Fung, Yi", "Yu, Charles", "Tetreault, Joel", "Hovy, Eduard", "Ji, Heng" ]
Defining a New NLP Playground
findings-emnlp.799
2310.20633
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.800.bib
https://aclanthology.org/2023.findings-emnlp.800/
@inproceedings{wang-etal-2023-upton, title = "{UPTON}: Preventing Authorship Leakage from Public Text Release via Data Poisoning", author = "Wang, Ziyao and Le, Thai and Lee, Dongwon", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.800", doi = "10.18653/v1/2023.findings-emnlp.800", pages = "11952--11965", abstract = "Consider a scenario where an author (e.g., activist, whistle-blower) with many public writings wishes to write {``}anonymously{''} when attackers may have already built an authorship attribution (AA) model based off of public writings including those of the author. To enable her wish, we ask a question {``}can one make the publicly released writings, T , unattributable so that AA models trained on T cannot attribute its authorship well?{''} Toward this question, we present a novel solution, UPTON, that exploits black-box data poisoning methods to weaken the authorship features in training samples and make released texts unlearnable. It is different from previous obfuscation works (e.g., adversarial attacks that modify test samples or backdoor works that only change the model outputs when triggering words occur). Using four authorship datasets (IMDb10, IMDb64, Enron and WJO), we present empirical validation where UPTON successfully downgrades the accuracy of AA models to the impractical level (e.g., {\textasciitilde} 35{\%}) while keeping texts still readable (e.g., {\textgreater} 0.9 in BERTScore). UPTON remains effective to AA models that are already trained on available clean writings of authors.", }
Consider a scenario where an author (e.g., activist, whistle-blower) with many public writings wishes to write {``}anonymously{''} when attackers may have already built an authorship attribution (AA) model based on public writings including those of the author. To enable her wish, we ask a question {``}can one make the publicly released writings, T , unattributable so that AA models trained on T cannot attribute its authorship well?{''} Toward this question, we present a novel solution, UPTON, that exploits black-box data poisoning methods to weaken the authorship features in training samples and make released texts unlearnable. It is different from previous obfuscation works (e.g., adversarial attacks that modify test samples or backdoor works that only change the model outputs when triggering words occur). Using four authorship datasets (IMDb10, IMDb64, Enron and WJO), we present empirical validation where UPTON successfully downgrades the accuracy of AA models to an impractical level (e.g., {\textasciitilde} 35{\%}) while keeping texts still readable (e.g., {\textgreater} 0.9 in BERTScore). UPTON remains effective against AA models that are already trained on available clean writings of authors.
[ "Wang, Ziyao", "Le, Thai", "Lee, Dongwon" ]
UPTON: Preventing Authorship Leakage from Public Text Release via Data Poisoning
findings-emnlp.800
2211.09717
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.801.bib
https://aclanthology.org/2023.findings-emnlp.801/
@inproceedings{gu-etal-2023-iaeval, title = "{IAE}val: A Comprehensive Evaluation of Instance Attribution on Natural Language Understanding", author = "Gu, Peijian and Shen, Yaozong and Wang, Lijie and Wang, Quan and Wu, Hua and Mao, Zhendong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.801", doi = "10.18653/v1/2023.findings-emnlp.801", pages = "11966--11977", abstract = "Instance attribution (IA) aims to identify the training instances leading to the prediction of a test example, helping researchers understand the dataset better and optimize data processing. While many IA methods have been proposed recently, how to evaluate them still remains open. Previous evaluations of IA only focus on one or two dimensions and are not comprehensive. In this work, we introduce IAEval for IA methods, a systematic and comprehensive evaluation scheme covering four significant requirements: sufficiency, completeness, stability and plausibility. We elaborately design novel metrics to measure these requirements for the first time. Three representative IA methods are evaluated under IAEval on four natural language understanding datasets. Extensive experiments confirmed the effectiveness of IAEval and exhibited its ability to provide comprehensive comparison among IA methods. With IAEval, researchers can choose the most suitable IA methods for applications like model debugging.", }
Instance attribution (IA) aims to identify the training instances leading to the prediction of a test example, helping researchers understand the dataset better and optimize data processing. While many IA methods have been proposed recently, how to evaluate them still remains open. Previous evaluations of IA only focus on one or two dimensions and are not comprehensive. In this work, we introduce IAEval for IA methods, a systematic and comprehensive evaluation scheme covering four significant requirements: sufficiency, completeness, stability and plausibility. We elaborately design novel metrics to measure these requirements for the first time. Three representative IA methods are evaluated under IAEval on four natural language understanding datasets. Extensive experiments confirmed the effectiveness of IAEval and exhibited its ability to provide comprehensive comparison among IA methods. With IAEval, researchers can choose the most suitable IA methods for applications like model debugging.
[ "Gu, Peijian", "Shen, Yaozong", "Wang, Lijie", "Wang, Quan", "Wu, Hua", "Mao, Zhendong" ]
IAEval: A Comprehensive Evaluation of Instance Attribution on Natural Language Understanding
findings-emnlp.801
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.802.bib
https://aclanthology.org/2023.findings-emnlp.802/
@inproceedings{wu-etal-2023-scene, title = "Scene Graph Enhanced Pseudo-Labeling for Referring Expression Comprehension", author = "Wu, Cantao and Cai, Yi and Li, Liuwu and Wang, Jiexin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.802", doi = "10.18653/v1/2023.findings-emnlp.802", pages = "11978--11990", abstract = "Referring Expression Comprehension (ReC) is a task that involves localizing objects in images based on natural language expressions. Most ReC methods typically approach the task as a supervised learning problem. However, the need for costly annotations, such as clear image-text pairs or region-text pairs, hinders the scalability of existing approaches. In this work, we propose a novel scene graph-based framework that automatically generates high-quality pseudo region-query pairs. Our method harnesses scene graphs to capture the relationships between objects in images and generate expressions enriched with relation information. To ensure accurate mapping between visual regions and text, we introduce an external module that employs a calibration algorithm to filter out ambiguous queries. Additionally, we employ a rewriter module to enhance the diversity of our generated pseudo queries through rewriting. Extensive experiments demonstrate that our method outperforms previous pseudo-labeling methods by about 10{\%}, 12{\%}, and 11{\%} on RefCOCO, RefCOCO+, and RefCOCOg, respectively. Furthermore, it surpasses the state-of-the-art unsupervised approach by more than 15{\%} on the RefCOCO dataset.", }
Referring Expression Comprehension (ReC) is a task that involves localizing objects in images based on natural language expressions. Most ReC methods typically approach the task as a supervised learning problem. However, the need for costly annotations, such as clear image-text pairs or region-text pairs, hinders the scalability of existing approaches. In this work, we propose a novel scene graph-based framework that automatically generates high-quality pseudo region-query pairs. Our method harnesses scene graphs to capture the relationships between objects in images and generate expressions enriched with relation information. To ensure accurate mapping between visual regions and text, we introduce an external module that employs a calibration algorithm to filter out ambiguous queries. Additionally, we employ a rewriter module to enhance the diversity of our generated pseudo queries through rewriting. Extensive experiments demonstrate that our method outperforms previous pseudo-labeling methods by about 10{\%}, 12{\%}, and 11{\%} on RefCOCO, RefCOCO+, and RefCOCOg, respectively. Furthermore, it surpasses the state-of-the-art unsupervised approach by more than 15{\%} on the RefCOCO dataset.
[ "Wu, Cantao", "Cai, Yi", "Li, Liuwu", "Wang, Jiexin" ]
Scene Graph Enhanced Pseudo-Labeling for Referring Expression Comprehension
findings-emnlp.802
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.803.bib
https://aclanthology.org/2023.findings-emnlp.803/
@inproceedings{jiang-etal-2023-noisy, title = "Noisy Self-Training with Synthetic Queries for Dense Retrieval", author = "Jiang, Fan and Drummond, Tom and Cohn, Trevor", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.803", doi = "10.18653/v1/2023.findings-emnlp.803", pages = "11991--12008", abstract = "Although existing neural retrieval models reveal promising results when training data is abundant and the performance keeps improving as training data increases, collecting high-quality annotated data is prohibitively costly. To this end, we introduce a novel noisy self-training framework combined with synthetic queries, showing that neural retrievers can be improved in a self-evolution manner with no reliance on any external models. Experimental results show that our method improves consistently over existing methods on both general-domain (e.g., MS-MARCO) and out-of-domain (i.e., BEIR) retrieval benchmarks. Extra analysis on low-resource settings reveals that our method is data efficient and outperforms competitive baselines, with as little as 30{\%} of labelled training data. Further extending the framework for reranker training demonstrates that the proposed method is general and yields additional gains on tasks of diverse domains.", }
Although existing neural retrieval models reveal promising results when training data is abundant and the performance keeps improving as training data increases, collecting high-quality annotated data is prohibitively costly. To this end, we introduce a novel noisy self-training framework combined with synthetic queries, showing that neural retrievers can be improved in a self-evolution manner with no reliance on any external models. Experimental results show that our method improves consistently over existing methods on both general-domain (e.g., MS-MARCO) and out-of-domain (i.e., BEIR) retrieval benchmarks. Extra analysis on low-resource settings reveals that our method is data efficient and outperforms competitive baselines, with as little as 30{\%} of labelled training data. Further extending the framework for reranker training demonstrates that the proposed method is general and yields additional gains on tasks of diverse domains.
[ "Jiang, Fan", "Drummond, Tom", "Cohn, Trevor" ]
Noisy Self-Training with Synthetic Queries for Dense Retrieval
findings-emnlp.803
2311.15563
[ "https://github.com/fantabulous-j/self-training-dpr" ]
https://huggingface.co/papers/2311.15563
0
0
0
3
[]
[]
[]
1
Poster
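One round of the noisy self-training loop described in the record above might look like the following sketch: generate synthetic queries for unlabeled documents, keep only the pairs the current retriever itself pseudo-labels as positive, and retrain on noised versions of those pairs. All three callables are assumed interfaces for illustration, not the authors' code.

def self_training_round(docs, gen_queries, pseudo_label, train_with_noise,
                        retriever):
    """One self-evolution step for a dense retriever (illustrative only)."""
    synthetic = [(q, d) for d in docs for q in gen_queries(d)]
    # Keep pairs the current retriever itself ranks as relevant.
    kept = [(q, d) for q, d in synthetic if pseudo_label(retriever, q, d)]
    # Retrain on noised inputs so the student does not merely copy itself.
    return train_with_noise(retriever, kept)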
https://aclanthology.org/2023.findings-emnlp.804.bib
https://aclanthology.org/2023.findings-emnlp.804/
@inproceedings{raunak-etal-2023-leveraging, title = "Leveraging {GPT}-4 for Automatic Translation Post-Editing", author = "Raunak, Vikas and Sharaf, Amr and Wang, Yiren and Awadalla, Hany and Menezes, Arul", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.804", doi = "10.18653/v1/2023.findings-emnlp.804", pages = "12009--12024", abstract = "While Neural Machine Translation (NMT) represents the leading approach to Machine Translation (MT), the outputs of NMT models still require translation post-editing to rectify errors and enhance quality under critical settings. In this work, we formalize the task of direct translation post-editing with Large Language Models (LLMs) and explore the use of GPT-4 to automatically post-edit NMT outputs across several language pairs. Our results demonstrate that GPT-4 is adept at translation post-editing, producing meaningful and trustworthy edits to translations that help improve its general quality as well as remove different classes of major errors in translations. In particular, human evaluations on assessing edit trustworthiness show that GPT-4 exhibits a large improvement over the prior state-of-the-art LLM. Notably, we improve upon state-of-the-art performance on WMT-22 English-Chinese, English-German, Chinese-English and German-English language pairs using GPT-4 based post-editing, as evaluated by state-of-the-art MT quality metrics. However, we also show that GPT-4 could produce hallucinated edits, thereby urging caution in its use as an expert translation post-editor.", }
While Neural Machine Translation (NMT) represents the leading approach to Machine Translation (MT), the outputs of NMT models still require translation post-editing to rectify errors and enhance quality under critical settings. In this work, we formalize the task of direct translation post-editing with Large Language Models (LLMs) and explore the use of GPT-4 to automatically post-edit NMT outputs across several language pairs. Our results demonstrate that GPT-4 is adept at translation post-editing, producing meaningful and trustworthy edits to translations that help improve their general quality as well as remove different classes of major errors in translations. In particular, human evaluations on assessing edit trustworthiness show that GPT-4 exhibits a large improvement over the prior state-of-the-art LLM. Notably, we improve upon state-of-the-art performance on WMT-22 English-Chinese, English-German, Chinese-English and German-English language pairs using GPT-4 based post-editing, as evaluated by state-of-the-art MT quality metrics. However, we also show that GPT-4 could produce hallucinated edits, thereby urging caution in its use as an expert translation post-editor.
[ "Raunak, Vikas", "Sharaf, Amr", "Wang, Yiren", "Awadalla, Hany", "Menezes, Arul" ]
Leveraging GPT-4 for Automatic Translation Post-Editing
findings-emnlp.804
2305.14878
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.805.bib
https://aclanthology.org/2023.findings-emnlp.805/
@inproceedings{imperial-madabushi-2023-uniform, title = "Uniform Complexity for Text Generation", author = "Imperial, Joseph Marvin and Madabushi, Harish Tayyar", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.805", doi = "10.18653/v1/2023.findings-emnlp.805", pages = "12025--12046", abstract = "Large language models (LLMs) have shown promising results in a wide array of generative NLP tasks, such as summarization and machine translation. In the context of narrative generation, however, existing models still do not capture factors that contribute to producing consistent text. For instance, it is logical that a piece of text or a story should be uniformly readable throughout and that this form of complexity should be controllable. As such, if the complexity of an input text prompt is rated first-grade reading level in the Flesch Reading Ease test, then the generated text continuing the plot should also be within this range of complexity. With this in mind, we introduce Uniform Complexity for Text Generation (UCTG), a new benchmark test which raises the challenge of making generative models observe uniform linguistic properties with respect to prompts. We experiment with over 150+ linguistically and cognitively motivated features for evaluating text complexity in humans and generative models. From our results, we find that models such as GPT-2 struggle to preserve the complexity of input prompts used in its generations, even if finetuned with professionally written texts.", }
Large language models (LLMs) have shown promising results in a wide array of generative NLP tasks, such as summarization and machine translation. In the context of narrative generation, however, existing models still do not capture factors that contribute to producing consistent text. For instance, it is logical that a piece of text or a story should be uniformly readable throughout and that this form of complexity should be controllable. As such, if the complexity of an input text prompt is rated at a first-grade reading level in the Flesch Reading Ease test, then the generated text continuing the plot should also be within this range of complexity. With this in mind, we introduce Uniform Complexity for Text Generation (UCTG), a new benchmark test which raises the challenge of making generative models observe uniform linguistic properties with respect to prompts. We experiment with over 150 linguistically and cognitively motivated features for evaluating text complexity in humans and generative models. From our results, we find that models such as GPT-2 struggle to preserve the complexity of input prompts used in their generations, even if finetuned with professionally written texts.
[ "Imperial, Joseph Marvin", "Madabushi, Harish Tayyar" ]
Uniform Complexity for Text Generation
findings-emnlp.805
2204.05185
[ "https://github.com/imperialite/uniform-complexity-textgen" ]
https://huggingface.co/papers/2204.05185
1
0
0
2
[]
[]
[]
1
Poster
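Since the UCTG record above is anchored on readability measures such as the Flesch Reading Ease test, a short sketch shows what "uniform complexity" means operationally: score each sentence and check how far any sentence strays from the mean. The syllable counter is a rough vowel-group heuristic, and this is only one readability feature among the many the paper evaluates.

import re

def count_syllables(word):
    """Rough heuristic: count contiguous vowel groups, at least one."""
    return max(len(re.findall(r"[aeiouy]+", word.lower())), 1)

def flesch_reading_ease(sentence):
    """Flesch Reading Ease applied to a single sentence:
    206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)."""
    words = re.findall(r"[A-Za-z']+", sentence)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * len(words) - 84.6 * (syllables / len(words))

def complexity_spread(text):
    """Max deviation from the mean score; 0 means perfectly uniform."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    scores = [flesch_reading_ease(s) for s in sentences]
    if not scores:
        return 0.0
    mean = sum(scores) / len(scores)
    return max(abs(s - mean) for s in scores)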
https://aclanthology.org/2023.findings-emnlp.806.bib
https://aclanthology.org/2023.findings-emnlp.806/
@inproceedings{wang-etal-2023-cue, title = "Cue-{C}o{T}: Chain-of-thought Prompting for Responding to In-depth Dialogue Questions with {LLM}s", author = "Wang, Hongru and Wang, Rui and Mi, Fei and Deng, Yang and Wang, Zezhong and Liang, Bin and Xu, Ruifeng and Wong, Kam-Fai", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.806", doi = "10.18653/v1/2023.findings-emnlp.806", pages = "12047--12064", abstract = "Large Language Models (LLMs), such as ChatGPT, greatly empower dialogue systems with strong language understanding and generation capabilities. However, most of the previous works prompt the LLMs to directly generate a response based on the dialogue context, overlooking the underlying linguistic cues about the user status exhibited in the context. Such in-depth dialogue scenarios are challenging for existing LLMs to figure out the user{'}s hidden needs and respond satisfactorily through a single-step inference. To this end, we propose a novel linguistic cue-based chain-of-thoughts (Cue-CoT), which enhances the LLMs inference with an intermediate reasoning step to find cues exhibited in the dialogue, aiming to provide a more personalized and engaging response. To evaluate the approach, we build a benchmark with in-depth dialogue questions, consisting of 6 datasets in both Chinese and English, targeting 3 major linguistic cues during the conversation: personality, emotion, and psychology. We conducted experiments on the proposed benchmark with 5 LLMs under both zero-shot and one-shot settings. Empirical results demonstrate our proposed Cue-CoT method outperforms standard prompting methods in terms of both helpfulness and acceptability on all datasets.", }
Large Language Models (LLMs), such as ChatGPT, greatly empower dialogue systems with strong language understanding and generation capabilities. However, most of the previous works prompt the LLMs to directly generate a response based on the dialogue context, overlooking the underlying linguistic cues about the user status exhibited in the context. In such in-depth dialogue scenarios, it is challenging for existing LLMs to figure out the user{'}s hidden needs and respond satisfactorily through single-step inference. To this end, we propose a novel linguistic cue-based chain-of-thoughts (Cue-CoT), which enhances LLM inference with an intermediate reasoning step to find cues exhibited in the dialogue, aiming to provide a more personalized and engaging response. To evaluate the approach, we build a benchmark with in-depth dialogue questions, consisting of 6 datasets in both Chinese and English, targeting 3 major linguistic cues during the conversation: personality, emotion, and psychology. We conducted experiments on the proposed benchmark with 5 LLMs under both zero-shot and one-shot settings. Empirical results demonstrate our proposed Cue-CoT method outperforms standard prompting methods in terms of both helpfulness and acceptability on all datasets.
[ "Wang, Hongru", "Wang, Rui", "Mi, Fei", "Deng, Yang", "Wang, Zezhong", "Liang, Bin", "Xu, Ruifeng", "Wong, Kam-Fai" ]
Cue-CoT: Chain-of-thought Prompting for Responding to In-depth Dialogue Questions with LLMs
findings-emnlp.806
2305.11792
[ "https://github.com/rulegreen/dialogue_cot" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
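The two-step prompting pattern behind Cue-CoT, as described in the record above, is straightforward to sketch: first elicit linguistic cues about the user from the dialogue, then condition the response on those cues. The ask_llm callable is an assumed single-turn completion interface, not a real library call.

def cue_cot_respond(dialogue, ask_llm):
    """Two-step prompting: extract user cues, then respond using them."""
    cue_prompt = (
        "Dialogue:\n" + dialogue +
        "\n\nList the linguistic cues about the user's personality, "
        "emotion, and psychological state exhibited above."
    )
    cues = ask_llm(cue_prompt)  # intermediate reasoning step
    response_prompt = (
        "Dialogue:\n" + dialogue +
        "\n\nUser cues:\n" + cues +
        "\n\nWrite a personalized, engaging next response."
    )
    return ask_llm(response_prompt)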
https://aclanthology.org/2023.findings-emnlp.807.bib
https://aclanthology.org/2023.findings-emnlp.807/
@inproceedings{mukherjee-etal-2023-contraste, title = "{CONTRASTE}: Supervised Contrastive Pre-training With Aspect-based Prompts For Aspect Sentiment Triplet Extraction", author = "Mukherjee, Rajdeep and Kannen, Nithish and Pandey, Saurabh and Goyal, Pawan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.807", doi = "10.18653/v1/2023.findings-emnlp.807", pages = "12065--12080", abstract = "Existing works on Aspect Sentiment Triplet Extraction (ASTE) explicitly focus on developing more efficient fine-tuning techniques for the task. Instead, our motivation is to come up with a generic approach that can improve the downstream performances of multiple ABSA tasks simultaneously. Towards this, we present CONTRASTE, a novel pre-training strategy using CONTRastive learning to enhance the ASTE performance. While we primarily focus on ASTE, we also demonstrate the advantage of our proposed technique on other ABSA tasks such as ACOS, TASD, and AESC. Given a sentence and its associated (aspect, opinion, sentiment) triplets, first, we design aspect-based prompts with corresponding sentiments masked. We then (pre)train an encoder-decoder model by applying contrastive learning on the decoder-generated aspect-aware sentiment representations of the masked terms. For fine-tuning the model weights thus obtained, we then propose a novel multi-task approach where the base encoder-decoder model is combined with two complementary modules, a tagging-based Opinion Term Detector, and a regression-based Triplet Count Estimator. Exhaustive experiments on four benchmark datasets and a detailed ablation study establish the importance of each of our proposed components as we achieve new state-of-the-art ASTE results.", }
Existing works on Aspect Sentiment Triplet Extraction (ASTE) explicitly focus on developing more efficient fine-tuning techniques for the task. Instead, our motivation is to come up with a generic approach that can improve the downstream performances of multiple ABSA tasks simultaneously. Towards this, we present CONTRASTE, a novel pre-training strategy using CONTRastive learning to enhance the ASTE performance. While we primarily focus on ASTE, we also demonstrate the advantage of our proposed technique on other ABSA tasks such as ACOS, TASD, and AESC. Given a sentence and its associated (aspect, opinion, sentiment) triplets, first, we design aspect-based prompts with corresponding sentiments masked. We then (pre)train an encoder-decoder model by applying contrastive learning on the decoder-generated aspect-aware sentiment representations of the masked terms. For fine-tuning the model weights thus obtained, we then propose a novel multi-task approach where the base encoder-decoder model is combined with two complementary modules, a tagging-based Opinion Term Detector, and a regression-based Triplet Count Estimator. Exhaustive experiments on four benchmark datasets and a detailed ablation study establish the importance of each of our proposed components as we achieve new state-of-the-art ASTE results.
[ "Mukherjee, Rajdeep", "Kannen, Nithish", "P", "ey, Saurabh", "Goyal, Pawan" ]
CONTRASTE: Supervised Contrastive Pre-training With Aspect-based Prompts For Aspect Sentiment Triplet Extraction
findings-emnlp.807
2310.15577
[ "https://github.com/nitkannen/contraste" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
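The pre-training objective in the CONTRASTE abstract above, contrastive learning over aspect-aware sentiment representations of masked terms, is in essence a supervised contrastive loss in which decoder states sharing a sentiment label attract each other. A minimal PyTorch sketch of such a loss; the paper's exact formulation may differ, so treat this as the generic SupCon recipe rather than the authors' implementation.

import torch
import torch.nn.functional as F

def supcon_loss(features: torch.Tensor, labels: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """Supervised contrastive loss: same-label embeddings attract, all others repel."""
    features = F.normalize(features, dim=1)               # (N, d) unit-norm embeddings
    sim = features @ features.T / temperature             # (N, N) scaled similarities
    self_mask = torch.eye(features.size(0), dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, float("-inf"))       # never contrast a sample with itself
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                                # anchors with at least one same-label partner
    per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)[valid] / pos_counts[valid]
    return per_anchor.mean()

# Toy usage: 8 sentiment embeddings over 3 polarity labels.
print(supcon_loss(torch.randn(8, 128), torch.randint(0, 3, (8,))).item())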
https://aclanthology.org/2023.findings-emnlp.808.bib
https://aclanthology.org/2023.findings-emnlp.808/
@inproceedings{jiang-etal-2023-towards, title = "Towards Anytime Fine-tuning: Continually Pre-trained Language Models with Hypernetwork Prompts", author = "Jiang, Gangwei and Jiang, Caigao and Xue, Siqiao and Zhang, James and Zhou, Jun and Lian, Defu and Wei, Ying", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.808", doi = "10.18653/v1/2023.findings-emnlp.808", pages = "12081--12095", abstract = "Continual pre-training has become essential for adapting a pre-trained model to a multitude of domains and tasks in the fast-evolving world. In practice, a continually pre-trained model is expected to demonstrate not only greater capacity when fine-tuned on pre-trained domains but also non-decreasing performance on unseen ones. In this work, we first investigate the anytime fine-tuning effectiveness of existing continual pre-training approaches, concluding that performance uniformly decreases on unseen domains. To address this, we propose a prompt-guided continual pre-training method in which we train a hypernetwork to generate domain-specific prompts using both agreement and disagreement losses. The agreement loss maximally preserves the generalization of a pre-trained model to new domains, while the disagreement loss guards the exclusiveness of the generated hidden states for each domain. Remarkably, the prompts generated by the hypernetwork alleviate the need for domain identity during fine-tuning and promote knowledge transfer across domains. Our method achieved improvements of 3.57{\%} and 3.4{\%} on two real-world datasets (including domain shift and temporal shift), respectively, demonstrating its efficacy.", }
Continual pre-training has become essential for adapting a pre-trained model to a multitude of domains and tasks in the fast-evolving world. In practice, a continually pre-trained model is expected to demonstrate not only greater capacity when fine-tuned on pre-trained domains but also non-decreasing performance on unseen ones. In this work, we first investigate the anytime fine-tuning effectiveness of existing continual pre-training approaches, concluding that performance uniformly decreases on unseen domains. To address this, we propose a prompt-guided continual pre-training method in which we train a hypernetwork to generate domain-specific prompts using both agreement and disagreement losses. The agreement loss maximally preserves the generalization of a pre-trained model to new domains, while the disagreement loss guards the exclusiveness of the generated hidden states for each domain. Remarkably, the prompts generated by the hypernetwork alleviate the need for domain identity during fine-tuning and promote knowledge transfer across domains. Our method achieved improvements of 3.57{\%} and 3.4{\%} on two real-world datasets (including domain shift and temporal shift), respectively, demonstrating its efficacy.
[ "Jiang, Gangwei", "Jiang, Caigao", "Xue, Siqiao", "Zhang, James", "Zhou, Jun", "Lian, Defu", "Wei, Ying" ]
Towards Anytime Fine-tuning: Continually Pre-trained Language Models with Hypernetwork Prompts
findings-emnlp.808
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
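One plausible reading of the hypernetwork-prompt idea above: a small network maps a learned domain embedding to a sequence of soft prompt vectors that are prepended to the frozen LM's input embeddings. The sketch below is an assumption about the architecture, not the paper's code, and it omits the agreement and disagreement losses.

import torch
import torch.nn as nn

class PromptHypernetwork(nn.Module):
    """Generates domain-specific soft prompts from a learned domain embedding."""
    def __init__(self, num_domains: int, prompt_len: int, hidden_size: int, domain_dim: int = 64):
        super().__init__()
        self.prompt_len = prompt_len
        self.hidden_size = hidden_size
        self.domain_embedding = nn.Embedding(num_domains, domain_dim)
        self.generator = nn.Sequential(
            nn.Linear(domain_dim, 256), nn.ReLU(),
            nn.Linear(256, prompt_len * hidden_size),
        )

    def forward(self, domain_ids: torch.Tensor) -> torch.Tensor:
        # (batch,) -> (batch, prompt_len, hidden_size), ready to prepend to token embeddings
        out = self.generator(self.domain_embedding(domain_ids))
        return out.view(-1, self.prompt_len, self.hidden_size)

prompts = PromptHypernetwork(num_domains=8, prompt_len=16, hidden_size=768)(torch.tensor([0, 3]))
print(prompts.shape)  # torch.Size([2, 16, 768])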
https://aclanthology.org/2023.findings-emnlp.809.bib
https://aclanthology.org/2023.findings-emnlp.809/
@inproceedings{ghosal-etal-2023-language, title = "Language Guided Visual Question Answering: Elevate Your Multimodal Language Model Using Knowledge-Enriched Prompts", author = "Ghosal, Deepanway and Majumder, Navonil and Lee, Roy and Mihalcea, Rada and Poria, Soujanya", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.809", doi = "10.18653/v1/2023.findings-emnlp.809", pages = "12096--12102", abstract = "Visual question answering (VQA) is the task of answering questions about an image. The task assumes an understanding of both the image and the question to provide a natural language answer. VQA has gained popularity in recent years due to its potential applications in a wide range of fields, including robotics, education, and healthcare. In this paper, we focus on knowledge-augmented VQA, where answering the question requires commonsense knowledge, world knowledge, and reasoning about ideas and concepts not present in the image. We propose a multimodal framework that uses language guidance (LG) in the form of rationales, image captions, scene graphs, etc., to answer questions more accurately. We benchmark our method on the multiple-choice question-answering task of the A-OKVQA, Science-QA, VSR, and IconQA datasets using CLIP and BLIP models. We show that the use of language guidance is a simple but powerful and effective strategy for visual question answering. Our language guidance improves the performance of CLIP by 7.6{\%} and BLIP-2 by 4.8{\%} on the challenging A-OKVQA dataset. We also observe consistent improvement in performance on the Science-QA, VSR, and IconQA datasets when using the proposed language guidance. The implementation of LG-VQA is publicly available at https://github.com/declare-lab/LG-VQA.", }
Visual question answering (VQA) is the task of answering questions about an image. The task assumes an understanding of both the image and the question to provide a natural language answer. VQA has gained popularity in recent years due to its potential applications in a wide range of fields, including robotics, education, and healthcare. In this paper, we focus on knowledge-augmented VQA, where answering the question requires commonsense knowledge, world knowledge, and reasoning about ideas and concepts not present in the image. We propose a multimodal framework that uses language guidance (LG) in the form of rationales, image captions, scene graphs, etc., to answer questions more accurately. We benchmark our method on the multiple-choice question-answering task of the A-OKVQA, Science-QA, VSR, and IconQA datasets using CLIP and BLIP models. We show that the use of language guidance is a simple but powerful and effective strategy for visual question answering. Our language guidance improves the performance of CLIP by 7.6{\%} and BLIP-2 by 4.8{\%} on the challenging A-OKVQA dataset. We also observe consistent improvement in performance on the Science-QA, VSR, and IconQA datasets when using the proposed language guidance. The implementation of LG-VQA is publicly available at https://github.com/declare-lab/LG-VQA.
[ "Ghosal, Deepanway", "Majumder, Navonil", "Lee, Roy", "Mihalcea, Rada", "Poria, Soujanya" ]
Language Guided Visual Question Answering: Elevate Your Multimodal Language Model Using Knowledge-Enriched Prompts
findings-emnlp.809
2310.20159
[ "https://github.com/declare-lab/lg-vqa" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
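The LG-VQA recipe above amounts to folding the guidance (rationale, caption, scene graph) into the text side of an image-text matcher and scoring each answer candidate. A sketch using the Hugging Face CLIP API; the prompt format here is a guess, and the released LG-VQA repository should be consulted for the faithful details.

from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def answer_with_guidance(image, question, candidates, guidance):
    """Score each candidate answer against the image, with guidance folded into the text."""
    texts = [f"question: {question} context: {guidance} answer: {c}" for c in candidates]
    inputs = processor(text=texts, images=image, return_tensors="pt", padding=True, truncation=True)
    logits = model(**inputs).logits_per_image  # shape (1, num_candidates)
    return candidates[logits.argmax(dim=-1).item()]

# Usage (hypothetical inputs): answer_with_guidance(PIL.Image.open("kitchen.jpg"),
#     "What is the person holding?", ["knife", "phone", "cup"],
#     "Caption: a cook chopping vegetables at a counter.")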
https://aclanthology.org/2023.findings-emnlp.810.bib
https://aclanthology.org/2023.findings-emnlp.810/
@inproceedings{algayres-etal-2023-xls, title = "{XLS}-{R} fine-tuning on noisy word boundaries for unsupervised speech segmentation into words", author = "Algayres, Robin and Diego-Simon, Pablo and Sagot, Beno{\^\i}t and Dupoux, Emmanuel", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.810", doi = "10.18653/v1/2023.findings-emnlp.810", pages = "12103--12112", abstract = "Due to the absence of explicit word boundaries in the speech stream, the task of segmenting spoken sentences into word units without text supervision is particularly challenging. In this work, we leverage the most recent self-supervised speech models, which have proven to adapt quickly to new tasks through fine-tuning, even in low-resource conditions. Taking inspiration from semi-supervised learning, we fine-tune an XLS-R model to predict word boundaries that are themselves produced by top-tier speech segmentation systems: DPDP, VG-HuBERT and DP-Parse. Once XLS-R is fine-tuned, it is used to infer new word boundary labels that are used in turn for another fine-tuning step. Our method consistently improves the performance of each system and sets a new state of the art that is, on average, 130{\%} higher than the previous one, as measured by the F1 score on correctly discovered word tokens on five corpora featuring different languages. Finally, our system can segment speech from languages unseen during fine-tuning in a zero-shot fashion.", }
Due to the absence of explicit word boundaries in the speech stream, the task of segmenting spoken sentences into word units without text supervision is particularly challenging. In this work, we leverage the most recent self-supervised speech models, which have proven to adapt quickly to new tasks through fine-tuning, even in low-resource conditions. Taking inspiration from semi-supervised learning, we fine-tune an XLS-R model to predict word boundaries that are themselves produced by top-tier speech segmentation systems: DPDP, VG-HuBERT and DP-Parse. Once XLS-R is fine-tuned, it is used to infer new word boundary labels that are used in turn for another fine-tuning step. Our method consistently improves the performance of each system and sets a new state of the art that is, on average, 130{\%} higher than the previous one, as measured by the F1 score on correctly discovered word tokens on five corpora featuring different languages. Finally, our system can segment speech from languages unseen during fine-tuning in a zero-shot fashion.
[ "Algayres, Robin", "Diego-Simon, Pablo", "Sagot, Beno{\\^\\i}t", "Dupoux, Emmanuel" ]
XLS-R fine-tuning on noisy word boundaries for unsupervised speech segmentation into words
findings-emnlp.810
2310.05235
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
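The training recipe in the XLS-R abstract above is a classic self-training loop: fine-tune on noisy boundary labels, re-infer boundaries with the fine-tuned model, and repeat. A schematic Python sketch; fine_tune and predict_boundaries are placeholders for the paper's frame-level boundary training and inference, which are not specified here.

def boundary_self_training(model, corpus, initial_boundaries, fine_tune, predict_boundaries, rounds=3):
    """Alternate fine-tuning on current word-boundary labels and re-labeling.

    initial_boundaries come from an off-the-shelf segmenter (DPDP, VG-HuBERT,
    or DP-Parse in the paper); each round replaces them with the fine-tuned
    model's own, typically cleaner, predictions.
    """
    labels = initial_boundaries
    for _ in range(rounds):
        model = fine_tune(model, corpus, labels)        # supervised step on noisy labels
        labels = predict_boundaries(model, corpus)      # pseudo-label refresh for the next round
    return model, labels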
https://aclanthology.org/2023.findings-emnlp.811.bib
https://aclanthology.org/2023.findings-emnlp.811/
@inproceedings{shum-etal-2023-automatic, title = "Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data", author = "Shum, Kashun and Diao, Shizhe and Zhang, Tong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.811", doi = "10.18653/v1/2023.findings-emnlp.811", pages = "12113--12139", abstract = "Chain-of-thought (CoT) advances the reasoning abilities of large language models (LLMs) and achieves superior performance in complex reasoning tasks. However, most CoT studies rely on carefully designed human-annotated rationale chains to prompt LLMs, posing challenges for real-world applications where labeled data is available without rationale chains. This paper proposes a new strategy, Automate-CoT (Automatic Prompt Augmentation and Selection with Chain-of-Thought), that can bypass human engineering of CoT by automatically augmenting rationale chains from a small labeled dataset, and then pruning low-quality chains to construct a candidate pool of machine-generated rationale chains based on the labels. Finally, it selects the optimal combination of several rationale chains from the pool for CoT prompting by employing a variance-reduced policy gradient strategy to estimate the significance of each example. Automate-CoT enables a quick adaptation of the CoT technique to different tasks. Experimental results demonstrate the effectiveness of our method, where competitive results are achieved on arithmetic reasoning (+2.7{\%}), commonsense reasoning (+3.4{\%}), symbolic reasoning (+3.2{\%}), and non-reasoning tasks (+2.5{\%}).", }
Chain-of-thought (CoT) advances the reasoning abilities of large language models (LLMs) and achieves superior performance in complex reasoning tasks. However, most CoT studies rely on carefully designed human-annotated rationale chains to prompt LLMs, posing challenges for real-world applications where labeled data is available without rationale chains. This paper proposes a new strategy, Automate-CoT (Automatic Prompt Augmentation and Selection with Chain-of-Thought), that can bypass human engineering of CoT by automatically augmenting rationale chains from a small labeled dataset, and then pruning low-quality chains to construct a candidate pool of machine-generated rationale chains based on the labels. Finally, it selects the optimal combination of several rationale chains from the pool for CoT prompting by employing a variance-reduced policy gradient strategy to estimate the significance of each example. Automate-CoT enables a quick adaptation of the CoT technique to different tasks. Experimental results demonstrate the effectiveness of our method, where competitive results are achieved on arithmetic reasoning (+2.7{\%}), commonsense reasoning (+3.4{\%}), symbolic reasoning (+3.2{\%}), and non-reasoning tasks (+2.5{\%}).
[ "Shum, Kashun", "Diao, Shizhe", "Zhang, Tong" ]
Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data
findings-emnlp.811
2302.12822
[ "https://github.com/shizhediao/automate-cot" ]
https://huggingface.co/papers/2302.12822
1
0
0
3
[]
[]
[]
1
Poster
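The selection step in the Automate-CoT abstract above scores exemplar combinations with a variance-reduced policy gradient. A simplified sketch: a softmax policy over the candidate pool, REINFORCE updates against a moving-average baseline (the variance-reduction term), and an approximate gradient that treats the k draws as independent; the paper's estimator is more careful than this.

import numpy as np

def select_exemplars(pool, eval_acc, k=4, iters=200, lr=0.5, seed=0):
    """Learn a softmax policy over candidate rationale chains; eval_acc(subset) -> [0, 1]."""
    rng = np.random.default_rng(seed)
    logits = np.zeros(len(pool))
    baseline = 0.0
    for t in range(iters):
        p = np.exp(logits - logits.max())
        p /= p.sum()
        idx = rng.choice(len(pool), size=k, replace=False, p=p)
        reward = eval_acc([pool[i] for i in idx])            # e.g., dev-set accuracy of the prompt
        baseline = (0.9 * baseline + 0.1 * reward) if t else reward
        grad = -k * p                                        # approximate score-function gradient
        grad[idx] += 1.0                                     # boost the sampled exemplars
        logits += lr * (reward - baseline) * grad            # baseline subtraction reduces variance
    return [pool[i] for i in np.argsort(-logits)[:k]]

# Toy usage: the reward is the fraction of "good" exemplars chosen.
good = {"chain-0", "chain-1", "chain-2", "chain-3"}
print(select_exemplars([f"chain-{i}" for i in range(12)],
                       eval_acc=lambda s: sum(c in good for c in s) / len(s)))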
https://aclanthology.org/2023.findings-emnlp.812.bib
https://aclanthology.org/2023.findings-emnlp.812/
@inproceedings{rao-etal-2023-makes, title = "What Makes it Ok to Set a Fire? Iterative Self-distillation of Contexts and Rationales for Disambiguating Defeasible Social and Moral Situations", author = "Rao, Kavel and Jiang, Liwei and Pyatkin, Valentina and Gu, Yuling and Tandon, Niket and Dziri, Nouha and Brahman, Faeze and Choi, Yejin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.812", doi = "10.18653/v1/2023.findings-emnlp.812", pages = "12140--12159", abstract = "Moral or ethical judgments rely heavily on the specific contexts in which they occur. Understanding varying shades of defeasible contextualizations (i.e., additional information that strengthens or attenuates the moral acceptability of an action) is critical to accurately represent the subtlety and intricacy of grounded human moral judgment in real-life scenarios. We introduce defeasible moral reasoning: a task to provide grounded contexts that make an action more or less morally acceptable, along with commonsense rationales that justify the reasoning. To elicit high-quality task data, we take an iterative self-distillation approach that starts from a small amount of unstructured seed knowledge from GPT-3 and then alternates between (1) self-distillation from student models; (2) targeted filtering with a critic model trained by human judgment (to boost validity) and NLI (to boost diversity); (3) self-imitation learning (to amplify the desired data quality). This process yields a student model that produces defeasible contexts with improved validity, diversity, and defeasibility. From this model we distill a high-quality dataset, $\delta$-Rules-of-Thumb, of 1.2M entries of contextualizations and rationales for 115K defeasible moral actions rated highly by human annotators 85.9{\%} to 99.8{\%} of the time. Using $\delta$-RoT we obtain a final student model that wins over all intermediate student models by a notable margin.", }
Moral or ethical judgments rely heavily on the specific contexts in which they occur. Understanding varying shades of defeasible contextualizations (i.e., additional information that strengthens or attenuates the moral acceptability of an action) is critical to accurately represent the subtlety and intricacy of grounded human moral judgment in real-life scenarios. We introduce defeasible moral reasoning: a task to provide grounded contexts that make an action more or less morally acceptable, along with commonsense rationales that justify the reasoning. To elicit high-quality task data, we take an iterative self-distillation approach that starts from a small amount of unstructured seed knowledge from GPT-3 and then alternates between (1) self-distillation from student models; (2) targeted filtering with a critic model trained by human judgment (to boost validity) and NLI (to boost diversity); (3) self-imitation learning (to amplify the desired data quality). This process yields a student model that produces defeasible contexts with improved validity, diversity, and defeasibility. From this model we distill a high-quality dataset, $\delta$-Rules-of-Thumb, of 1.2M entries of contextualizations and rationales for 115K defeasible moral actions rated highly by human annotators 85.9{\%} to 99.8{\%} of the time. Using $\delta$-RoT we obtain a final student model that wins over all intermediate student models by a notable margin.
[ "Rao, Kavel", "Jiang, Liwei", "Pyatkin, Valentina", "Gu, Yuling", "T", "on, Niket", "Dziri, Nouha", "Brahman, Faeze", "Choi, Yejin" ]
What Makes it Ok to Set a Fire? Iterative Self-distillation of Contexts and Rationales for Disambiguating Defeasible Social and Moral Situations
findings-emnlp.812
2310.15431
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
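The data-generation pipeline in the defeasible moral reasoning abstract above alternates generation, filtering, and self-imitation. A schematic sketch of one round; every callable is a placeholder for a paper component (the student generator, the human-judgment critic for validity, the NLI check for diversity), and their interfaces are assumptions.

def self_distillation_round(student, seeds, generate, critic_ok, is_novel, imitate):
    """One generate -> filter -> self-imitate round over defeasible contexts and rationales."""
    candidates = generate(student, seeds)                  # over-generate contextualizations
    kept = []
    for cand in candidates:
        if critic_ok(cand) and is_novel(cand, kept):       # validity filter, then NLI-based diversity filter
            kept.append(cand)
    student = imitate(student, kept)                       # fine-tune the student on its own filtered outputs
    return student, kept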
https://aclanthology.org/2023.findings-emnlp.813.bib
https://aclanthology.org/2023.findings-emnlp.813/
@inproceedings{tu-etal-2023-empirical, title = "An Empirical Study on Multiple Knowledge from {C}hat{GPT} for Emotion Recognition in Conversations", author = "Tu, Geng and Liang, Bin and Qin, Bing and Wong, Kam-Fai and Xu, Ruifeng", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.813", doi = "10.18653/v1/2023.findings-emnlp.813", pages = "12160--12173", abstract = "Multiple knowledge (e.g., co-reference, topics, emotional causes, etc.) has been demonstrated to be effective for emotion detection. However, exploring this knowledge in Emotion Recognition in Conversations (ERC) is currently a blank slate due to the lack of annotated data and the high cost involved in obtaining such knowledge. Fortunately, the emergence of Large Language Models (LLMs) holds promise in filling this void. Therefore, we propose a Multiple Knowledge Fusion Model (MKFM) to effectively integrate such knowledge generated by LLMs for ERC and empirically study its impact on the model. Experimental results on three public datasets have demonstrated the effectiveness of multiple knowledge for ERC. Furthermore, we conduct a detailed analysis of the contribution and complementarity of this knowledge.", }
Multiple knowledge (e.g., co-reference, topics, emotional causes, etc.) has been demonstrated to be effective for emotion detection. However, exploring this knowledge in Emotion Recognition in Conversations (ERC) is currently a blank slate due to the lack of annotated data and the high cost involved in obtaining such knowledge. Fortunately, the emergence of Large Language Models (LLMs) holds promise in filling this void. Therefore, we propose a Multiple Knowledge Fusion Model (MKFM) to effectively integrate such knowledge generated by LLMs for ERC and empirically study its impact on the model. Experimental results on three public datasets have demonstrated the effectiveness of multiple knowledge for ERC. Furthermore, we conduct a detailed analysis of the contribution and complementarity of this knowledge.
[ "Tu, Geng", "Liang, Bin", "Qin, Bing", "Wong, Kam-Fai", "Xu, Ruifeng" ]
An Empirical Study on Multiple Knowledge from ChatGPT for Emotion Recognition in Conversations
findings-emnlp.813
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.814.bib
https://aclanthology.org/2023.findings-emnlp.814/
@inproceedings{gan-etal-2023-exploiting, title = "Exploiting Contrastive Learning and Numerical Evidence for Confusing Legal Judgment Prediction", author = "Gan, Leilei and Li, Baokui and Kuang, Kun and Zhang, Yating and Wang, Lei and Luu, Anh and Yang, Yi and Wu, Fei", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.814", doi = "10.18653/v1/2023.findings-emnlp.814", pages = "12174--12185", abstract = "Given the fact description text of a legal case, legal judgment prediction (LJP) aims to predict the case{'}s charge, applicable law article, and term of penalty. A core problem of LJP is distinguishing confusing legal cases where only subtle text differences exist. Previous studies fail to distinguish different classification errors with a standard cross-entropy classification loss and ignore the numbers in the fact description for predicting the term of penalty. To tackle these issues, in this work, first, in order to exploit the numbers in legal cases for predicting the term of penalty of certain charges, we enhance the representation of the fact description with extracted crime amounts, which are encoded by a pre-trained numeracy model. Second, we propose a MoCo-based supervised contrastive learning method to learn distinguishable representations and explore the best strategy to construct positive example pairs to benefit all three subtasks of LJP simultaneously. Extensive experiments on real-world datasets show that the proposed method achieves new state-of-the-art results, particularly for confusing legal cases. Ablation studies also demonstrate the effectiveness of each component.", }
Given the fact description text of a legal case, legal judgment prediction (LJP) aims to predict the case{'}s charge, applicable law article, and term of penalty. A core problem of LJP is distinguishing confusing legal cases where only subtle text differences exist. Previous studies fail to distinguish different classification errors with a standard cross-entropy classification loss and ignore the numbers in the fact description for predicting the term of penalty. To tackle these issues, in this work, first, in order to exploit the numbers in legal cases for predicting the term of penalty of certain charges, we enhance the representation of the fact description with extracted crime amounts, which are encoded by a pre-trained numeracy model. Second, we propose a MoCo-based supervised contrastive learning method to learn distinguishable representations and explore the best strategy to construct positive example pairs to benefit all three subtasks of LJP simultaneously. Extensive experiments on real-world datasets show that the proposed method achieves new state-of-the-art results, particularly for confusing legal cases. Ablation studies also demonstrate the effectiveness of each component.
[ "Gan, Leilei", "Li, Baokui", "Kuang, Kun", "Zhang, Yating", "Wang, Lei", "Luu, Anh", "Yang, Yi", "Wu, Fei" ]
Exploiting Contrastive Learning and Numerical Evidence for Confusing Legal Judgment Prediction
findings-emnlp.814
2211.08238
[ "https://github.com/leileigan/ContrastiveLJP" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.815.bib
https://aclanthology.org/2023.findings-emnlp.815/
@inproceedings{schmidt-etal-2023-one, title = "One For All {\&} All For One: Bypassing Hyperparameter Tuning with Model Averaging for Cross-Lingual Transfer", author = "Schmidt, Fabian David and Vuli{\'c}, Ivan and Glava{\v{s}}, Goran", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.815", doi = "10.18653/v1/2023.findings-emnlp.815", pages = "12186--12193", abstract = "Multilingual language models enable zero-shot cross-lingual transfer (ZS-XLT): fine-tuned on sizable source-language task data, they perform the task in target languages without labeled instances. The effectiveness of ZS-XLT hinges on the linguistic proximity between languages and the amount of pretraining data for a language. Because of this, model selection based on source-language validation is unreliable: it picks model snapshots with suboptimal target-language performance. As a remedy, some work optimizes ZS-XLT by extensively tuning hyperparameters: the follow-up work then routinely struggles to replicate the original results. Other work searches over narrower hyperparameter grids, reporting substantially lower performance. In this work, we therefore propose an unsupervised evaluation protocol for ZS-XLT that decouples performance maximization from hyperparameter tuning. As a robust and more transparent alternative to extensive hyperparameter tuning, we propose to accumulatively average snapshots from different runs into a single model. We run broad ZS-XLT experiments on both higher-level semantic tasks (NLI, extractive QA) and a lower-level token classification task (NER) and find that conventional model selection based on source-language validation quickly plateaus to suboptimal ZS-XLT performance. On the other hand, our accumulative run-by-run averaging of models trained with different hyperparameters boosts ZS-XLT performance and closely correlates with {``}oracle{''} ZS-XLT, i.e., model selection based on target-language validation performance.", }
Multilingual language models enable zero-shot cross-lingual transfer (ZS-XLT): fine-tuned on sizable source-language task data, they perform the task in target languages without labeled instances. The effectiveness of ZS-XLT hinges on the linguistic proximity between languages and the amount of pretraining data for a language. Because of this, model selection based on source-language validation is unreliable: it picks model snapshots with suboptimal target-language performance. As a remedy, some work optimizes ZS-XLT by extensively tuning hyperparameters: the follow-up work then routinely struggles to replicate the original results. Other work searches over narrower hyperparameter grids, reporting substantially lower performance. In this work, we therefore propose an unsupervised evaluation protocol for ZS-XLT that decouples performance maximization from hyperparameter tuning. As a robust and more transparent alternative to extensive hyperparameter tuning, we propose to accumulatively average snapshots from different runs into a single model. We run broad ZS-XLT experiments on both higher-level semantic tasks (NLI, extractive QA) and a lower-level token classification task (NER) and find that conventional model selection based on source-language validation quickly plateaus to suboptimal ZS-XLT performance. On the other hand, our accumulative run-by-run averaging of models trained with different hyperparameters boosts ZS-XLT performance and closely correlates with {``}oracle{''} ZS-XLT, i.e., model selection based on target-language validation performance.
[ "Schmidt, Fabian David", "Vuli{\\'c}, Ivan", "Glava{\\v{s}}, Goran" ]
One For All & All For One: Bypassing Hyperparameter Tuning with Model Averaging for Cross-Lingual Transfer
findings-emnlp.815
2310.10532
[ "https://github.com/fdschmidt93/ofa-xlt" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
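The accumulative run-by-run averaging proposed in the ZS-XLT abstract above is cheap to implement: keep a running mean over snapshot weights. A PyTorch sketch, assuming all snapshots share one architecture; floating-point tensors are averaged, while integer buffers (e.g., step counters) are simply carried over.

import copy
import torch

def accumulate_snapshot(avg_state, new_state, n_seen):
    """Running average over model state dicts: avg <- (n * avg + new) / (n + 1)."""
    if avg_state is None:
        return copy.deepcopy(new_state), 1
    for key, value in new_state.items():
        if torch.is_floating_point(value):
            avg_state[key] = (avg_state[key] * n_seen + value) / (n_seen + 1)
        else:
            avg_state[key] = value  # integer buffers: keep the latest rather than average
    return avg_state, n_seen + 1

# Usage: fold in each run's snapshot, then load the average for ZS-XLT evaluation.
# avg, n = None, 0
# for ckpt in checkpoint_paths:
#     avg, n = accumulate_snapshot(avg, torch.load(ckpt, map_location="cpu"), n)
# model.load_state_dict(avg)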
https://aclanthology.org/2023.findings-emnlp.816.bib
https://aclanthology.org/2023.findings-emnlp.816/
@inproceedings{canute-etal-2023-dimensions, title = "Dimensions of Online Conflict: Towards Modeling Agonism", author = "Canute, Matt and Jin, Mali and Holtzclaw, Hannah and Lusoli, Alberto and Adams, Philippa and Pandya, Mugdha and Taboada, Maite and Maynard, Diana and Chun, Wendy Hui Kyong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.816", doi = "10.18653/v1/2023.findings-emnlp.816", pages = "12194--12209", abstract = "Agonism plays a vital role in democratic dialogue by fostering diverse perspectives and robust discussions. Within the realm of online conflict there is another type: hateful antagonism, which undermines constructive dialogue. Detecting conflict online is central to platform moderation and monetization. It is also vital for democratic dialogue, but only when it takes the form of agonism. To model these two types of conflict, we collected Twitter conversations related to trending controversial topics. We introduce a comprehensive annotation schema for labelling different dimensions of conflict in the conversations, such as the source of conflict, the target, and the rhetorical strategies deployed. Using this schema, we annotated approximately 4,000 conversations with multiple labels. We then train both logistic regression and transformer-based models on the dataset, incorporating context from the conversation, including the number of participants and the structure of the interactions. Results show that contextual labels are helpful in identifying conflict and make the models robust to variations in topic. Our research contributes a conceptualization of different dimensions of conflict, a richly annotated dataset, and promising results that can contribute to content moderation.", }
Agonism plays a vital role in democratic dialogue by fostering diverse perspectives and robust discussions. Within the realm of online conflict there is another type: hateful antagonism, which undermines constructive dialogue. Detecting conflict online is central to platform moderation and monetization. It is also vital for democratic dialogue, but only when it takes the form of agonism. To model these two types of conflict, we collected Twitter conversations related to trending controversial topics. We introduce a comprehensive annotation schema for labelling different dimensions of conflict in the conversations, such as the source of conflict, the target, and the rhetorical strategies deployed. Using this schema, we annotated approximately 4,000 conversations with multiple labels. We then train both logistic regression and transformer-based models on the dataset, incorporating context from the conversation, including the number of participants and the structure of the interactions. Results show that contextual labels are helpful in identifying conflict and make the models robust to variations in topic. Our research contributes a conceptualization of different dimensions of conflict, a richly annotated dataset, and promising results that can contribute to content moderation.
[ "Canute, Matt", "Jin, Mali", "Holtzclaw, Hannah", "Lusoli, Alberto", "Adams, Philippa", "P", "ya, Mugdha", "Taboada, Maite", "Maynard, Diana", "Chun, Wendy Hui Kyong" ]
Dimensions of Online Conflict: Towards Modeling Agonism
findings-emnlp.816
2311.03584
[ "https://github.com/digital-democracies-institute/dimensions-of-online-conflict" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.817.bib
https://aclanthology.org/2023.findings-emnlp.817/
@inproceedings{chauhan-etal-2023-learning, title = "Learning under Label Proportions for Text Classification", author = "Chauhan, Jatin and Wang, Xiaoxuan and Wang, Wei", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.817", doi = "10.18653/v1/2023.findings-emnlp.817", pages = "12210--12223", abstract = "We present one of the preliminary NLP works under the challenging setup of Learning from Label Proportions (LLP), where the data is provided in an aggregate form called bags, and only the proportion of samples in each class is available as the ground truth. This setup is in line with the desired characteristics of training models under privacy settings and weak supervision. By characterizing some irregularities of the most widely used baseline technique, DLLP, we propose a novel formulation that is also robust. This is accompanied by a learnability result that provides a generalization bound under LLP. Combining this formulation with a self-supervised objective, our method achieves better results than the baselines in almost 87{\%} of the experimental configurations, which include large-scale models for both long- and short-range texts, across multiple metrics.", }
We present one of the preliminary NLP works under the challenging setup of Learning from Label Proportions (LLP), where the data is provided in an aggregate form called bags, and only the proportion of samples in each class is available as the ground truth. This setup is in line with the desired characteristics of training models under privacy settings and weak supervision. By characterizing some irregularities of the most widely used baseline technique, DLLP, we propose a novel formulation that is also robust. This is accompanied by a learnability result that provides a generalization bound under LLP. Combining this formulation with a self-supervised objective, our method achieves better results than the baselines in almost 87{\%} of the experimental configurations, which include large-scale models for both long- and short-range texts, across multiple metrics.
[ "Chauhan, Jatin", "Wang, Xiaoxuan", "Wang, Wei" ]
Learning under Label Proportions for Text Classification
findings-emnlp.817
2310.11707
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
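For context on the baseline this paper characterizes: DLLP trains an instance-level classifier so that the mean predicted class distribution over a bag matches the bag's known label proportions. A minimal PyTorch sketch of that bag-level loss; the paper's own robust formulation and self-supervised objective are not reproduced here.

import torch
import torch.nn.functional as F

def dllp_loss(instance_logits: torch.Tensor, bag_proportions: torch.Tensor) -> torch.Tensor:
    """Cross-entropy between the true label proportions and the bag's mean prediction.

    instance_logits: (bag_size, num_classes); bag_proportions: (num_classes,) summing to 1.
    """
    mean_pred = F.softmax(instance_logits, dim=1).mean(dim=0).clamp_min(1e-8)
    return -(bag_proportions * mean_pred.log()).sum()

# Toy bag of 5 instances over 3 classes with proportions (0.4, 0.4, 0.2):
print(dllp_loss(torch.randn(5, 3), torch.tensor([0.4, 0.4, 0.2])).item())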
https://aclanthology.org/2023.findings-emnlp.818.bib
https://aclanthology.org/2023.findings-emnlp.818/
@inproceedings{xu-etal-2023-metarevision, title = "{M}eta{R}e{V}ision: Meta-Learning with Retrieval for Visually Grounded Compositional Concept Acquisition", author = "Xu, Guangyue and Kordjamshidi, Parisa and Chai, Joyce", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.818", doi = "10.18653/v1/2023.findings-emnlp.818", pages = "12224--12236", abstract = "Humans have the ability to learn novel compositional concepts by recalling primitive concepts acquired from past experience and generalizing these primitive concepts to novel compositions. Inspired by this human compositional learning procedure, in this paper, we propose MetaReVision, a retrieval-enhanced meta-learning model to solve the visually grounded compositional concept learning problem. The proposed MetaReVision consists of a retrieval module and a meta-learning module, which are designed to incorporate retrieved primitive concepts as a supporting set to meta-train visual-language models for grounded compositional concept recognition. Through meta-learning from episodes constructed by the retriever, MetaReVision learns a generic compositional representation that can be rapidly updated to recognize novel compositional concepts. We create CompCOCO and CompFlickr to benchmark grounded compositional concept learning. Our experimental results show that MetaReVision outperforms other competitive baselines and that the retrieval module plays an important role in this compositional learning process.", }
Humans have the ability to learn novel compositional concepts by recalling primitive concepts acquired from past experience and generalizing these primitive concepts to novel compositions. Inspired by this human compositional learning procedure, in this paper, we propose MetaReVision, a retrieval-enhanced meta-learning model to solve the visually grounded compositional concept learning problem. The proposed MetaReVision consists of a retrieval module and a meta-learning module, which are designed to incorporate retrieved primitive concepts as a supporting set to meta-train visual-language models for grounded compositional concept recognition. Through meta-learning from episodes constructed by the retriever, MetaReVision learns a generic compositional representation that can be rapidly updated to recognize novel compositional concepts. We create CompCOCO and CompFlickr to benchmark grounded compositional concept learning. Our experimental results show that MetaReVision outperforms other competitive baselines and that the retrieval module plays an important role in this compositional learning process.
[ "Xu, Guangyue", "Kordjamshidi, Parisa", "Chai, Joyce" ]
MetaReVision: Meta-Learning with Retrieval for Visually Grounded Compositional Concept Acquisition
findings-emnlp.818
2311.01580
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.819.bib
https://aclanthology.org/2023.findings-emnlp.819/
@inproceedings{kim-etal-2023-pr, title = "{PR}-{MCS}: Perturbation Robust Metric for {M}ulti{L}ingual Image Captioning", author = "Kim, Yongil and Hwang, Yerin and Yun, Hyeongu and Yoon, Seunghyun and Bui, Trung and Jung, Kyomin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.819", doi = "10.18653/v1/2023.findings-emnlp.819", pages = "12237--12258", abstract = "Vulnerability to lexical perturbation is a critical weakness of automatic evaluation metrics for image captioning. This paper proposes Perturbation Robust Multi-Lingual CLIPScore (PR-MCS), which exhibits robustness to such perturbations, as a novel reference-free image captioning metric applicable to multiple languages. To achieve perturbation robustness, we fine-tune the text encoder of CLIP with our language-agnostic method to distinguish the perturbed text from the original text. To verify the robustness of PR-MCS, we introduce a new fine-grained evaluation dataset consisting of detailed captions, critical objects, and the relationships between the objects for 3,000 images in five languages. In our experiments, PR-MCS significantly outperforms baseline metrics in capturing lexical noise across all perturbation types in all five languages, while maintaining a strong correlation with human judgments.", }
Vulnerability to lexical perturbation is a critical weakness of automatic evaluation metrics for image captioning. This paper proposes Perturbation Robust Multi-Lingual CLIPScore (PR-MCS), which exhibits robustness to such perturbations, as a novel reference-free image captioning metric applicable to multiple languages. To achieve perturbation robustness, we fine-tune the text encoder of CLIP with our language-agnostic method to distinguish the perturbed text from the original text. To verify the robustness of PR-MCS, we introduce a new fine-grained evaluation dataset consisting of detailed captions, critical objects, and the relationships between the objects for 3,000 images in five languages. In our experiments, PR-MCS significantly outperforms baseline metrics in capturing lexical noise across all perturbation types in all five languages, while maintaining a strong correlation with human judgments.
[ "Kim, Yongil", "Hwang, Yerin", "Yun, Hyeongu", "Yoon, Seunghyun", "Bui, Trung", "Jung, Kyomin" ]
PR-MCS: Perturbation Robust Metric for MultiLingual Image Captioning
findings-emnlp.819
2303.08389
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
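PR-MCS scores captions reference-free in the CLIPScore style, and its robustness comes from fine-tuning the text encoder to separate original from perturbed captions. A sketch of both pieces over precomputed embeddings; the margin-ranking objective is a stand-in, not necessarily the paper's exact training loss.

import torch
import torch.nn.functional as F

def clip_score(image_emb: torch.Tensor, caption_emb: torch.Tensor, w: float = 2.5) -> torch.Tensor:
    """Reference-free CLIPScore: w * max(cos(image, caption), 0)."""
    return w * F.cosine_similarity(image_emb, caption_emb, dim=-1).clamp_min(0.0)

def perturbation_loss(image_emb, original_emb, perturbed_emb, margin: float = 0.2):
    """Push the original caption's score above its lexically perturbed variant by a margin."""
    s_orig = F.cosine_similarity(image_emb, original_emb, dim=-1)
    s_pert = F.cosine_similarity(image_emb, perturbed_emb, dim=-1)
    return F.relu(margin - (s_orig - s_pert)).mean()

# Toy check with random 512-d embeddings standing in for CLIP outputs:
img, orig, pert = torch.randn(4, 512), torch.randn(4, 512), torch.randn(4, 512)
print(clip_score(img, orig), perturbation_loss(img, orig, pert).item())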